Dollar Words

December 12, 2019

My son came home from 5th grade with an interesting assignment for finding something called dollar words.  As I understand it, you assign the values 1-26 to your letters and then look for words whose letter values sum up to 100.  (These are exactly worth a dollar if you think of the letter values as being in cents.)

Naturally, this is the sort of problem that is really easy to solve in Raku.  In fact, I think carefully explaining what the program does is going to take me considerably longer than writing it in the first place did!  First, here’s the full program:

sub value($word) {
    [+] ${ $_.ord - "a".ord + 1 });
}

my @words = "/usr/share/dict/words".IO.lines;
for @words.grep({ value($_) == 100 }) -> $dollar-word {
    say $dollar-word;
}

Breaking it down, piece by piece:

sub value($word) {
    [+] ${ $_.ord - "a".ord + 1 });
}

This creates a subroutine called value which takes a word (in the variable $word) and determines its value under this system. Step-by-step:

  • [+] takes the sum of a list that follows it.
  • $ takes the variable $word and transforms it to lower-case letters. (The value chart doesn’t care whether a letter is upper or lower case, so we simplify the problem.)
  • .comb takes the result of that and translates it to a list of letters. (It’s named that because the full version of the command combs through text looking for things that match a pattern — but if you do not specify the pattern to match, it just returns one letter at a time.)
  • .map takes a list of items and applies an operation to each of them, returning another list of items. In this case we have a list of letters and want them to be a list of numbers.
  • { $_.ord - "a".ord + 1 } is the operation. $_ here is the input value, and .ord takes a letter and returns its Unicode code point, the unique number the computer assigns to each letter when it stores it internally. Because the letters a-z are stored in increasing order from 97-122, this is almost what we want. We subtract the code point for “a”, which means now we have a-z mapped to 0-25. Then we add one to get the value we want.
my @words = "/usr/share/dict/words".IO.lines;

This is how we get a list of words to check for dollar words. On a Mac or Linux computer, "/usr/share/dict/words" is a file with a complete list of default words for spell-checking. This code uses the lines routine to read a file and return a list where each line in the file becomes an item in the list — in this case, each word is on its own line, so it’s a list with lots of words. (On my Mac, it’s 235,886 words.) It stores that list in the variable @words.

for @words.grep({ value($_) == 100 }) -> $dollar-word {
    say $dollar-word;
}

And now we reach the home stretch. grep is sort of like map, but instead of applying an operation to each item in a list and returning a list of the results, grep applies an operation to each item in the list and returns only the items from the original list for which the operation returns True. In this case, the operation is { value($_) == 100 }, which passes the input $_ to the value subroutine we defined above and then checks whether the result equals 100. So once grep has done its job, we have a list of dollar words.

We use for to go over that list of dollar words, take each word, and (one-by-one) assign it to the variable $dollar-word, then perform the operation say $dollar-word, which prints the resulting word. So taken together this prints out the full list of words from the "/usr/share/dict/words" file which are dollar words. On my computer, this finds 2,303 dollar words.
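For readers without a Raku toolchain handy, the same pipeline is easy to sketch in Python. This is a translation of the idea, not the original script; the small sample word list is mine (on a Mac or Linux box you could instead read open("/usr/share/dict/words").read().splitlines()):

```python
def value(word):
    # a=1 .. z=26, summed; case doesn't matter, so lower-case first.
    # Skip anything that isn't a letter (apostrophes and the like).
    return sum(ord(c) - ord("a") + 1 for c in word.lower() if c.isalpha())

def dollar_words(words):
    # keep only the words whose letter values sum to exactly 100
    return [w for w in words if value(w) == 100]

sample = ["pumpkin", "cat", "excellent", "turkey"]
print(dollar_words(sample))  # ['pumpkin', 'excellent', 'turkey']
```

Like the Raku grep, dollar_words filters the list down to just the words worth exactly a dollar.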



June 6, 2017

So, I’ve been utterly terrible about updating this blog.  I’ve also been pretty darned quiet in the Perl 6 world, I fear.

That doesn’t mean I’ve abandoned Perl 6, though.  Actually, it’s been a very exciting time for me, because I’m finally actually using Perl 6 productively on a very regular basis.  Most weeks see me write two or three helpful little scripts, both for work and pleasure.  As an example, here’s yesterday’s:

my $proc-rsync = run 'rsync', '-trvL', '--delete', 'colomon@melissa:s3/Music/TunesToLearn', '/Users/colomon/sol/temp/', :out, :err;

my $proc-ls = run 'ls', '/Users/colomon/sol/temp/TunesToLearn', :out;

my @files = $proc-ls.out.slurp.lines;

for @files.pick(*) -> $file {
    say "Playing $file";
    my $proc-afplay = run 'afplay', '/Users/colomon/sol/temp/TunesToLearn/' ~ $file, :out;
    $proc-afplay.out.slurp: :close; # let the track finish playing before moving on
}

What’s this do?  The first two lines rsync the contents of my “tunes to learn” directory on my server to my laptop.  This directory is a set of MP3 files containing tunes that I would like to learn to play.

The next two lines get the list of those files and stash it in the @files array.

Finally the for loop shuffles that list and goes through it calling OS X’s afplay command to play each file.  The $proc-afplay.out.slurp is extremely important — without that, the script tried to play all the files at once!  afplay seemed fine with that, but it was quite painful to hear.

This script isn’t exactly rocket science, but it seems to get the job done quite nicely, with a minimum of programming effort.

Fun and Easy Fibonacci Trick

July 2, 2015

So, Futility Closet has a lovely trick for generating the first 100-some Fibonacci numbers. It’s not efficient or anything, but it is pretty fun.

The trick is that the decimal expansion of 1 / 999,999,999,999,999,999,999,998,999,999,999,999,999,999,999,999 generates a Fibonacci number every 24 digits. That’s an easy number to create in Perl 6:

> my $n = 1.FatRat / 999_999_999_999_999_999_999_998_999_999_999_999_999_999_999_999

The default decimal expansion doesn’t have enough digits to see this effect. But luckily, it’s easy to get a longer expansion:

> $n.base(10,2400).comb(/\d ** 24/).join("\n")

We can check that it works by comparing it to a more normal p6 way of calculating the Fibonacci series:

> [&&] $n.base(10,2400).comb(/\d ** 24/).map(+*) Z== (0, 1, *+* ... *)
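The same check can be sketched in Python with plain integer arithmetic, which sidesteps needing a rational type at all: take the first 2400 digits of the expansion by integer division, split them into 24-digit blocks, and compare against ordinary Fibonacci numbers (the block width and digit count mirror the Raku one-liners above):

```python
# the 48-digit denominator from the trick: 10**48 - 10**24 - 1
N = int("999999999999999999999998" + "9" * 24)

# first 2400 digits after the decimal point of 1/N, via integer division
digits = str(10 ** 2400 // N).zfill(2400)
blocks = [int(digits[i:i + 24]) for i in range(0, 2400, 24)]

# the ordinary Fibonacci sequence: 0, 1, 1, 2, 3, 5, ...
fibs = [0, 1]
while len(fibs) < len(blocks):
    fibs.append(fibs[-1] + fibs[-2])

print(blocks == fibs[:len(blocks)])  # True
```

The blocks stay exact up to F(99) here because every Fibonacci number that far out still fits in 24 digits; once they grow past that, carries between blocks spoil the trick.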

Role “Inheritance”

March 7, 2015

This morning we had another discussion on the ordering of roles. This is a follow-up to a discussion with Ovid back in January, which I cannot find quickly.

As I understand Ovid’s original complaint, it was that given

role A {
    method foo { ... }
}

role B does A {
    method foo { ... }
}

class C does B { ... }

C gets B::foo, but in

role B {
    method foo { ... }
}

role A does B {
    method foo { ... }
}

class C does A { ... }

C gets A::foo. As I understand it, Ovid sees this as roughly equivalent to the difference between role D; role E; class F does D does E and role D; role E; class F does E does D. To my mind they are completely different things, and if someone re-orders role “inheritance” without considering the consequences, they deserve what they get.

Personally, I think using A and B here obscures what’s going on, so let me switch to a real world example, straight (though simplified) from Rakudo’s source:

role Numeric {
    method log { ... } # very general support for log
}

role Real does Numeric {
    method log { ... } # Real-specific support for log
}

class Int does Real { ... }

When I put it like this, clearly in Real does Numeric the direction of the does is of utmost importance — every Real is Numeric (which is the general p6 role for all numbers, including Reals and things like Complex), but not every Numeric is Real. It is obvious that it makes sense for Real::log to override Numeric::log as the log that Int gets.

It has been suggested that we should have a new keyword claim for this. Using it, the example from Rakudo becomes

role Numeric {
    method log { ... } # very general support for log
}

role Real does Numeric {
    claim method log { ... } # Real-specific support for log
}

class Int does Real { ... }

It is more explicit about what is going on, and allows us to generate an error (or maybe a warning?) if you leave the claim out. I have no big objection to this idea; doing this sort of thing explicitly is a pretty good idea. However, as a solution to the problem (rather than just a nice notification to the programmer), it seems very weak to me. For my example, consider an additional (simplified) part of the Rakudo role hierarchy (but a fake method):

role Numeric {
    method log { ... } # very general support for log
}

role Real does Numeric {
    claim method log { ... } # Real-specific support for log
}

role Rational does Real {
    claim method log { $!numerator.log - $!denominator.log } # cleverly save a divide!
}

class Rat does Rational { ... }

Now we have two claims. How do we determine which one to honor? masak++ thinks it is the “most-derived” one. That’s certainly the answer I’d want, too — but now look at what has happened here. In our very first example, we had two method log, and we added claim to establish which had priority. In this most recent example, we have two claim method log, and so we’ve had to add a notion of “most derived” to distinguish between them. But of course, the notion of “most derived” was all we needed to solve the first problem!
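For what it’s worth, “most derived wins” is exactly how ordinary class inheritance already behaves in mainstream languages. Roles compose rather than inherit, so this is only an analogy, but here is the hierarchy above re-sketched as Python classes (the string return values are just markers for illustration):

```python
class Numeric:
    def log(self):
        return "Numeric.log"   # very general support for log

class Real(Numeric):
    def log(self):
        return "Real.log"      # Real-specific support for log

class Rational(Real):
    def log(self):
        return "Rational.log"  # the clever numerator/denominator version

class Rat(Rational):
    pass

print(Rat().log())  # Rational.log -- the most-derived definition wins
```

Python’s method resolution order picks Rational.log for Rat without any claim-style annotation, which is the behavior I’d want from the role chain too.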

In summary: We have to have a “most derived” notion for roles, or else we limit a lot of really useful behavior. We certainly can add claim to help eliminate confusion on the programmer’s part, but we need to understand that it will only catch the most obvious cases.

PS I don’t know if skipping the divide in log is actually worthwhile on modern computers. But the point it makes, that people might want to have a longish chain of role inheritance, is what is important here.

Rakudo To-Do List

February 2, 2015

With TimToady’s announcement today, I’m sure a lot of the p6 developers are thinking about what needs to be done before the release. So I thought I’d take a stab at writing down the things on my mind.

The one really big thing for me is the precompilation bug that’s stopped my ABC module from building on Rakudo since November last year. Thanks to Rakudobrew this isn’t a complete catastrophe — I just keep around an older version of Rakudo that can run ABC. But it is really annoying, and I think it suggests something is seriously going wrong with the precompilation process. (If I’m lucky, jnthn++’s current work might resolve this one.)

The next one is much less important to me, but seems kind of worrying. A while back I tweaked the old script to run in parallel. Alas, this script is wildly unstable, alternating between crashing, generating loads of warnings and incorrect results, and working.

One as-yet-unimplemented feature is the more unusual standard math functions. I’ve been meaning to do this for years, and actually started implementing them in Niecza at one point. I’ve considered restarting the process on Rakudo a few times recently, but the thought of implementing it on three different backends is kind of imposing. Maybe I’ll start with a module.

Huh. Seems like I started this post with one more thing on my mind, but I can’t remember what it was now. I will update it as needed.

Flattening and such

October 13, 2014

A quick blog post to supplement discussion on IRC and (presumably) at the #APW hackathon.

I just did a couple of quick acks on the ABC codebase, and this is what I found:

Wynne:ABC colomon$ ack -Q .list
187:            for $bar.list {
202:                for $_.value.ast.list {

72:                for $match<key-def><global-accidental>.list -> $ga {

The first one here I totally understand why I have to use .list. The remaining two, though, are classic examples of where my intuition says “These should return a list I can loop on” but instead I get an item. I guarantee that both times I only added the .list after minutes of head-scratching about my code failing in mysterious ways.

Wynne:ABC colomon$ ack -Q .flat
147:        make (@chords, @texts).flat;

421:    is $match<header_field>{ .<header_field_name> }), "X T S M L K", "Got the right field names";
442:        is .<header_field>{ .<header_field_name> }), "X T S M L K", "Got the right field names";

Here the first one really is flattening, but I think it is a prime example of why we need an explicit list concatenation operation.

The remaining two … I’m not sure if I really meant .flat or .list there. Let me go look at the code…

Updated: Yes, .list gives me the same results on those last two. So in practice they are probably another example where I just tacked something randomish on to get things to behave the way I thought they should have by default.

Update to the update: Actually, for those last two, it works fine without .list or .flat. So that call was necessary at some point (or I wouldn’t have added it) but is no longer.

Another update: Okay, I tried removing the second two .lists from the first batch. The .list on line 202 can be removed without breaking any tests, but removing the one on line 72 breaks loads of tests.

Also, it appears my code is using @( $match ) and similar constructions much more often than it is using .list or .flat. Again, the standard usage shows up in places where I would have expected it not to be needed.

JVM weirdness

May 8, 2014
Compiling lib/ABC/ to jar
Function 'ABC::Note' needs parens to avoid taking the block
at lib/ABC/
------>             }⏏<EOL>
Missing block (apparently taken by 'ABC::Note')
at lib/ABC/
------>             ⏏when ABC::Stem {$note.not

The code in question (from is

when ABC::Note {$note.accidental,

I believe it’s the first ABC::Note there that is causing the problem.

This appears to be legal to me, and works fine under Parrot and Moar.

Tune Reminder

May 5, 2014

We’re going to Newfoundland later this year, and as a result I have a list of tunes I’d really like to learn before I get there, and I need to brush up on the ones I already know. (The sad truth is there isn’t a lot of chance in Michigan to practice many of these tunes with other musicians.)

Years ago I wrote a Perl 6 script to help me track what I needed to practice. I meant to make it use spaced repetition, but the truth is I got so caught up using Niecza and GTK# to implement a simple GUI interface for the program I never got around to actually adding the spaced repetition part. I used it for a bit, and then it fell into disrepair.

Well, I’ve dusted off the repo it was in and added a new script which attempts to do an approximation of actual spaced repetition. And it was beautifully simple to write, barring a small bug in the Mix class that lizmat++ fixed before I could even break out the rakudo source.

The first slightly tricky bit is initializing the record of how you’re doing on each tune. I wanted it to be able to handle the case where you’re running the code for the first time and have no records yet, and the case where there have been new tunes added. Each tune gets a level between 0 and 5.

my @status;
if $status-database.IO.e {
    my $file = from-json($status-database.IO.slurp);
    @status = @$file, 0 xx ($tunes.GetNumberTunes - @$file);
} else {
    @status = 0 xx $tunes.GetNumberTunes;
}

I used the JSON::Tiny module to handle I/O of the records, and initialize tunes not seen before with 0.

The main loop of the code is very straightforward:

loop {
    my $mix = @status.kv.map(-> $tune, $status { $tune => 1 / 2 ** $status }).Mix;
    my $tune = $mix.roll;
    say $tunes.GetTuneName($tune);
    given prompt "Know / Don't Know / Quit: " {
        when "q" | "Q" { last; }
        when "k" | "K" { @status[$tune] = (@status[$tune] + 1) min 5 }
        when "d" | "D" { @status[$tune] = (@status[$tune] - 1) max 0 }
    }
    say "";
}

First I create a Mix based on each tune’s status. The idea is that the higher the status level for the tune, the less likely you are to be prodded about it. Mix.roll is used to choose a random tune with those weights taken into account. Then we print the name of the tune and use a simple combination of given and prompt to record how you did. (Note that this version doesn’t have any error checking here!)

Once you’ve hit “q” to exit the loop, the final line just records changes to the tune status file:

spurt($status-database, to-json(@status));

All-in-all, quite easy to code, and I’m already using it to help direct my practice!


May 5, 2014

Sorry for the long silence here.  My autumn was a mess, and my winter was buried in snow.  In all that time I have kept working with Perl 6, and things actually keep on getting better and better, which is quite exciting.

One really exciting thing for me is rakudobrew.  tadzik’s lovely little script makes it easy to build and rebuild multiple different versions of rakudo and switch between them.  In the past, keeping up-to-date with Rakudo was a matter of typing and executing four commands, the second of which was very fiddly to remember and type.  Now it’s just rakudobrew build moar, and everything moar-related is updated, including all your installed modules.  That may sound like a small change, but it works and it’s terrifically handy.

So, last night I tried to move the module smoke tester to using rakudobrew.  This allowed me to delete nearly half the lines in my smoke_test script, and added the potential to test all three of rakudo’s backends instead of just Parrot.  However, I ran into problems on all three backends, none of which are rakudobrew’s fault.

First and most vividly, rakudo under parrot is completely broken at the moment.  benabik++ tracked down the breaking patch two days ago.  It’s been broken since May 1st.  It really says something about the current state of rakudo use on parrot that almost nobody seems to have even noticed this.  (Hint: I’ve been using rakudo on moar for pretty much everything for a month or two now.)

rakudobrew does a beautiful job building and installing rakudo on moar.  However, the module smoke test (which tries to install every module) fails because installing File::Find breaks the module system, possibly because File::Find is used in that installation process.

==> Successfully installed File::Find
Invalid opcode executed (corrupt bytecode stream?) opcode 4352

This one has been the case for a while now.  Last night I was hoping I could get around it by just having emmentaler (the smoker) specifically skip testing File::Find, but it turns out a number of other modules depend on it.

rakudobrew built the JVM version of rakudo with no issues.  But the smoke test ran very slowly and died after making it through a few modules.  I haven’t really had time to look into this one, just wanted to mention it.

One other issue I’ve been bumping into, again not rakudobrew’s fault but making it less than perfectly awesome on OS X.  When you build rakudo on moar on the Mac, the makefile dies with the message make: write error after it has finished building and installing rakudo.  Attempts to turn on make’s debugging tools cause a seg fault.  This is pretty innocuous when you’re building by hand, but rakudobrew actually (implicitly?) checks the results of the make, and stops before going on to rebuild the modules.  As nearly as I could tell, this seems to be an actual bug in GNU make on OS X, but I have no idea how to make further progress fixing it.

This may sound like a lot of complaints, so let me close by pointing out how amazingly awesome the progress has been in the last year.  I don’t know that anyone but jnthn believed in early May last year that we’d have rakudo fully working on JVM and yet another backend by this time.  Rakudo on moar has become clearly the best Perl 6 we have.  For ages I’ve needed to keep Niecza around to handle a few things that rakudo couldn’t do, but rakudo on moar is rapidly overtaking it.  And development continues at a furious pace.  This is really a great time to be a Perl 6 user.

Rakudo performance

August 2, 2013

Last week I had an amazing adventure in Perl 6. I had a specific goal I needed to accomplish for my day job: determining which face in a STEP model was disappearing when the model was tessellated.

I could determine a face I knew was connected to the face I was looking for, so I decided to try to find all the faces that shared an edge with that face. I have a set of Perl 6 tools for parsing STEP files. It was pretty easy to modify the code I already had to create a script to read & parse the file, walk the topology tree to each edge on the specified face, and then back from those edges to the faces that use them.

This worked well enough when I fed it a small sample file. It did not work so well when I fed it the actual 13M file I needed to work with. I left it running overnight, and it had a fatal error (presumably out of memory) sometime in mid-morning the next day. I think at that point I switched from trying it on Rakudo-Parrot to Rakudo-JVM, with essentially the same results. I added a bunch of say statements, and eventually determined that the line

my %reverse = $step-data.entities.keys.categorize({ $step-data.entities{$_}.map(*.entity_instances) });

was the guilty party. This line built the reverse lookup table, so you could find all the entities that reference a given entity — once built, this lets you quickly walk from an edge to each face that used it. Unfortunately, it seemed to eat up all the memory on my 32G Linux box.
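Stripped of the Raku specifics, building that table is just inverting a reference graph. A Python sketch with invented face/edge ids (the dict shape is my own illustration, not the STEP data model):

```python
from collections import defaultdict

def build_reverse(entities):
    # entities: id -> list of ids it references
    # result:   id -> set of ids that reference it
    reverse = defaultdict(set)
    for eid, refs in entities.items():
        for ref in refs:
            reverse[ref].add(eid)
    return reverse

faces = {"face1": ["edge1", "edge2"], "face2": ["edge2", "edge3"]}
rev = build_reverse(faces)
print(sorted(rev["edge2"]))  # ['face1', 'face2']
```

Once the table exists, finding every face that shares an edge with a given face is a couple of constant-time lookups; the trouble described below was never the idea, just what the Rakudo of the day did with it at scale.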

At this point, I figured using categorize was a mistake. I wasn’t sure why it was, mind you. (Though my test program examining the question ended up leading to a separate impressive optimization from jnthn++.) But it occurred to me there was a different approach that avoided it entirely. Instead of walking back from the edges to the faces, just walk from all the faces to their edges, and then look for faces whose set of edges intersected the given face’s set of edges. Promisingly, this approach was about 33% faster on my small test file. And it didn’t use categorize.

However, when I tried this on the big model, it led to the exact same ballooning memory problem as the first script. Now, though, I had something more concrete to work with, and a few minutes later, I established that the issue was $step-data.entities.kv. On a big hash (this one had 182500 elements), calling .kv or .keys would consume all available memory on my system. You could create the hash, no problem, and if you knew the keys you could access them quite quickly. But asking it to give you the keys killed Rakudo.

With this I had a nice simple golfed test case I could give to jnthn. And before the morning was out, he gave me a patch to fix it. With the patch, the script executed correctly in 11 minutes under Rakudo-JVM. (Note that I did have to change the Java -XmX maximum memory setting in the perl6 script to get it to work.) It also executed correctly in Rakudo-Parrot — but in 7 hours, 52 minutes.

Let me emphasize that. For this real-world task of significant size, Rakudo-JVM was 40 times faster than Rakudo-Parrot.

Note that I don’t know why. But I do know that jnthn++’s fix, which was needed to make this work on both Rakudos, was in no way JVM-specific. This isn’t an example of optimized Rakudo-JVM versus non-optimized Rakudo-Parrot.

But the important thing here is that this represents a very significant improvement in Rakudo’s performance by switching to JVM. The script is pretty basic core stuff, mostly file I/O, grammar parsing, and hashes. The improvement is much smaller on a small data-set — on my small test file, Rakudo-JVM is not even twice as fast as Rakudo-Parrot. But throw a big task (well, this big task, anyway) at Rakudo, and Rakudo-JVM crushes Rakudo-Parrot.