Pythagoras Pie

(Image courtesy Tomohiro Tachi under Creative Commons license.)

Recently I came across a fun programming challenge called the Pythagoras Pie, which was described as:

At a party a pie is to be shared by 100 guests. The first guest gets 1% of the pie, the second guest gets 2% of the remaining pie, the third gets 3% of the remaining pie, the fourth gets 4% and so on.

Write a script that figures out which guest gets the largest piece of pie.

I sat down for a few minutes and wrote the obvious code. It iterates over the list of guests. For each guest, it calculates how large a piece of pie that guest will get. All the while, it stores the size of the largest piece of pie it has seen so far.

Here is a solution in Perl.

sub slice_pie {
    my $iters   = shift;
    my $pie     = 1;
    my $largest = 0;
    my $winner  = 0;
    for ( 1 .. $iters ) {
        my $iter_value = $_ * .01;
        my $portion    = ( $iter_value * $pie );
        $pie = $pie - $portion;

        if ( $portion >= $largest ) {
            $largest = $portion;
            $winner  = $_;
        }
    }
    print qq[Winner is guest # $winner with the largest portion: $largest\n];
}

slice_pie(100);

The answer, as it turns out, is that the 10th guest gets the largest piece of the pie: 0.0628156509555295, or about 6%.
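If you'd like to double-check that result without running the Perl, the same simulation is only a few lines of Python (guest k takes k% of whatever pie remains):

```python
# Simulate the pie being handed out, guest by guest.
shares = {}
pie = 1.0
for k in range(1, 101):
    portion = (k / 100) * pie   # guest k takes k% of what's left
    shares[k] = portion
    pie -= portion

winner = max(shares, key=shares.get)
print(winner, shares[winner])   # guest 10, ~0.0628 of the whole pie
```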

Just for fun, I wrote almost exactly the same code again, this time in Perl 6. Even though this is a straightforward translation using the same basic loop structure, it has a few nice improvements:

  • No need for argument unpacking (saves a line of code, and vertical compactness is good)
  • Nice type annotations mean we can call an integer an Int, which also helps the compiler
  • No need for parens around the for and if checks
sub slice-pie(Int $iters) {
    my $pie = 1;
    my $largest = 0;
    my Int $winner;

    for 1 .. $iters {
        my $iter_value = $_ * .01;
        my $portion = $iter_value * $pie;
        $pie -= $portion;

        if $portion >= $largest {
            $largest = $portion;
            $winner = $_;
        }
    }
    say qq[Winner is guest number $winner with the largest portion: $largest];
}

slice-pie(100);


A Perl one liner to generate passwords

I’ve noticed that browsers like Safari and Chrome are helpfully offering to generate secure passwords for me when I create a new login somewhere.

Sometimes this is really nice! Certainly it’s better than having to take a few minutes to compose a new password, especially since the passwords I can compose on the spot are … shall we say, of questionable quality sometimes, depending on time of day, blood sugar levels, etc.

So just for fun I decided to type out a Perl one liner to generate passwords for me in situations where I don’t necessarily have access to (or want) Safari and friends to do it for me.

I make no claims nor warranties about the “security” of the passwords generated by the following code, but I sure did enjoy writing it.  Just for fun, I did paste an output string into Kaspersky’s online password strength tester, and according to the tester it’s … actually not bad?  (Again: not an expert here)

Anyway, here’s the code.  It loops over an alphanumeric array with some special characters thrown in, grabbing one character at random for each iteration.  It also folds the case of the character when the number of the current iteration is odd (assuming the character is of the sort whose case can be folded, which some aren’t).

$ perl -E '@vals = split "", "0x1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-?!@^&*()"; $_ % 2 == 0 ? print $vals[ rand(@vals) ] : print fc $vals[ rand(@vals) ] for 0 .. 24; say;'

To give you a sense of the output of this script, here’s the password I typed into the Kaspersky checker for reference:

Vp8vJmNnN*8(CrE8*30*4@JlC
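If you want something similar but with a cryptographically strong random source behind it, here’s a rough Python sketch of the same idea using the stdlib secrets module. (Same caveat as above: I make no security claims, and the alphabet and length here just mirror the one-liner.)

```python
import secrets

# Same idea in Python: pick 25 characters at random from an
# alphanumeric-plus-specials alphabet, using the stdlib CSPRNG.
ALPHABET = "0x1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-?!@^&*()"

def gen_password(length=25):
    out = []
    for i in range(length):
        c = secrets.choice(ALPHABET)
        # Fold case on odd iterations, like the one-liner above.
        out.append(c.lower() if i % 2 else c)
    return "".join(out)

print(gen_password())
```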

Genera Notes, Part 1/N

I think I’d like to start sharing some screenshots and notes taken while playing with my local Open Genera installation.  In part for historical capture reasons, and in part because I think it’s fun.

I set up the system using the instructions in this Youtube video, which can be tl;dr’d as:

  • install an old-ass Ubuntu in a Virtualbox
  • configure NFS and other random things
  • SSH in with Putty and fire up an Open Genera instance via X Windows

The old-ass Ubuntu is allegedly necessary due to some behavior in newer X that breaks Open Genera, but I haven’t verified that yet, only read it.

I’m planning to write up the (Virtualbox on Windows) installation process shown in that video soon for my own future reference.  At that point I’ll probably write a script to automate it as well.

If you don’t use Windows, there is already this excellent tutorial: Running Open Genera 2.0 on Linux.  I’ve exchanged mail with the author of this piece, he seems like quite a nice guy in addition to being pretty knowledgeable.  Apparently Open Genera runs more robustly on a Compaq AlphaServer DS10L (or similar machine) as was originally intended, though it’s much slower than modern systems.

13 November 2018

Currently reading the Genera User’s Guide section entitled Getting Acquainted with Dynamic Windows (link is to the exact page of the PDF on Internet Archive!).  There is a list of bookmarks on the right that I’d like to revisit and finish reading.

To add a document to the Bookmarks pane in Document Examiner, either:

  • Visit it (so it’s added automatically)
  • From somewhere else in the interface, when you see a link (AKA a hoverable title for a document or section thereof), press Shift and then click with the middle mouse button (also denoted as Sh-Mouse-M by the system – you need a three-button mouse to use this system properly)

Note: the excessive (?) whitespace in the screenshot below is due to the fact that we’re running at 1920×1080, which is my laptop’s default resolution but is probably (?) larger than any physical Lisp Machine monitor that ever existed.  Based on some pictures of actual monitors I’ve seen, I wonder if this environment would profit from running on a vertically oriented monitor as well.  Something to play with.

As I read various docs, I’ve been taking notes in Zmacs.  The Zmacs buffer shown in the next screenshot is actually getting written to the Linux machine’s (virtual) disk, and can thus be backed up, edited from other text editors, etc.  It’s all happening over NFS!  And as you have probably deduced from the window borders, this Genera window is being served over the X Windows system (specifically XMing running on Windows).

Here’s the Zmacs window after being expanded using the System menu (shown, which can be accessed at any time via Sh-Mouse-R):

In addition to the Genera window, there is the “cold load” window that is also displayed via X Windows while Open Genera is running.  And lo!  As I began writing this, Genera crashed by trying to display an ellipse (an image from the ZMail documentation, specifically Conceptual Zmail Architecture), which caused it to try to reference an array out of bounds (I don’t know why, yet).  Here’s the backtrace as shown in the cold load window (the Genera window with Document Examiner just beeped and froze – when that happens it’s time to look at the cold load window):

In the cold load window’s debugger I was able to ascertain the following keys’ meanings, at least on my laptop (confusingly, the keys here do not map to the same keys as in the Genera X window):

  • Shift-E means “eval (program)”, dropping you into Lisp
  • * (asterisk) means Abort.  It popped me back up out of the cold load stream and into Genera (the Document Examiner in particular).  My document window state was nuked, but I was able to click the bookmark to return to the section I was reading.  (Now to go back and see if I can get Document Examiner to crash again with a bad array subscript by viewing that page again!).

Note: the above crash(es) happened while I was simultaneously loading CLIM in the Listener, which seems to put a lot of load on the (smallish, Virtualbox’d) system I’m running on.  So it may have something to do with that.  Here’s what some of the output of loading CLIM looks like:

Oh, and another thing: Zmail can read GNU Emacs RMAIL files according to the docs!  I’m not 100% sure what “UNIX” mail format means in this context, but perhaps it means good old mbox?

Anyway I’ve got a lot left to explore on this system.  As it is I’ve been reading the documentation and browsing around in my off hours for the last week or two, and it feels like I’m just getting started.

Announcing flycheck-pod.el, Flycheck support for Perl 5 POD files

Recently I was writing some Plain Old Documentation (aka POD) for one of my Perl modules, and I thought it would be convenient to have Flycheck support while writing docs.

Since I couldn’t find the Elisp code already written to do this, I whipped something up: flycheck-pod.

As of right now, it supports error checking in Emacs’ various POD and Perl modes using podchecker in the background. I doubt it covers all of the features of Pod::Checker though — patches are certainly welcome. See the source code for details.

(Image courtesy Kenneth Lu via Creative Commons license.)

How to use Locate from Emacs on Windows

If you are like me, you like to:

  • Live in Emacs as much as possible to avoid context-switching
  • Set up Emacs so your environment abstracts the OS as much as possible

Being able to sit down at any of my computers and type M-x locate in Emacs is a requirement for me, even if it’s running Windows underneath.

In this post I’ll describe how to set up a locate(1) command on Windows 10, and how to access it from Emacs.

Step 1. Install Locate32

Download and install locate32 on your machine. It doesn’t have an installer, it just gives you a directory full of things, including the locate.exe binary. I put mine in "C:/Users/rml/Programs/locate32/", and added that location to my Windows %PATH%.

Step 2. Tell Emacs where to find it

In Emacs, set the value of the locate-command variable to wherever you ended up putting it. Here’s where it is on my machine:

(setq locate-command "c:/Users/rml/Programs/locate32/locate.exe")

Step 3. Locate all the things

Now when you run the M-x locate command from inside Emacs, it should give you a Dired buffer of results, the same way it does on other systems. Because it’s Dired, you can hit enter on a filename to visit it or mark files in various ways and then operate on them.

Here’s what it looks like on my Windows 10 laptop if I search for the text “svn”:

How to use CockroachDB with Emacs SQL Mode

old skool

(Image courtesy Sajith T S under Creative Commons license.)

In this post I’ll describe how to hack up the Postgres config for sql.el to work with CockroachDB (aka “CRDB”).

In the future I’d like to submit code to sql.el to make CRDB a fully-supported option, but for now this is what I’ve been using.

Note that these instructions assume you are running a secure local cluster. If you are running an insecure cluster, you can avoid all of this and just M-x sql-postgres and use most of the defaults, modulo port numbers and such. It uses psql on the back end, which works because CRDB speaks the Postgres wire protocol.

However, once you get up and running for real and want to use a secure network connection, it’s easier to use the cockroach sql client. That’s what we’ll configure in this post.

(It may be possible to configure psql to use the CRDB certs, but I don’t know since I haven’t looked into it. Also, keep in mind that I have not tested this setup over the network yet – only on my local machine.)

Step 1. Modify basic config

Since the client is invoked by comint as two “words”, cockroach sql, you have to mess with the options a bit.

First set the cockroach binary:

(setq sql-postgres-program "cockroach")

Then invoke the SQL client with the first arg, and pass options in with the rest. The certs directory is where your encryption certificates are stored. Since this is an ephemeral local cluster I’m using the temp directory.

(setq sql-postgres-options
      '("sql" "--certs-dir=/tmp/certs"))

Finally the login params are pretty standard. My local clusters are not usually long-lived, so I just use the “root” user. This would not be recommended on real systems of course.

(setq sql-postgres-login-params
      '((user :default "root")
        (database :default "")
        (server :default "localhost")
        (port :default 26500)))

Step 2. Modify the Postgres “product” to work with CRDB

Sql.el calls each of its supported databases “products”, for whatever reason.

In any case, here’s how to modify the Postgres product to work for CRDB.

First we need a new function to talk to comint.el. (See the bottom of this post for the definition since it’s longer and not interesting.)

(sql-set-product-feature 'postgres
                         :sqli-comint-func #'sql-comint-cockroach)

The usual comint prompt regexp things. This one isn’t that well tested but works on my machine ™ … so far.

(sql-set-product-feature 'postgres
                         :prompt-regexp "^[a-z]+@[a-zA-Z0-9._-]+:[0-9]+/\\([a-z]+\\)?> ")

I don’t really know what this does. The CRDB prompt is not necessarily of a fixed length, so it doesn’t really apply. It seems to have no effect; I just cargo-culted it from some other DBs. Probably not needed.

(sql-set-product-feature 'postgres
                         :prompt-length 0)

Regexp to match CRDB’s little continuation marker thingy:

(sql-set-product-feature 'postgres
                         :prompt-cont-regexp "^ +-> ")

Set the “end of a SQL statement” character to the semicolon. Some of the other DBs have some pretty fancy settings here, but this seems to mostly work.

(sql-set-product-feature 'postgres
                         :terminator '(";" . ";"))

Command to show all the tables in the current database.

(sql-set-product-feature 'postgres
                         :list-all "SHOW TABLES;")

And finally, this is the comint function we need to work with CRDB:

(defun sql-comint-cockroach (product options)
  "Create comint buffer and connect to CockroachDB."
  (let ((params
         (append
          (if (not (= 0 sql-port))
              (list "--port" (number-to-string sql-port)))
          (if (not (string= "" sql-user))
              (list "--user" sql-user))
          (if (not (string= "" sql-server))
              (list "--host" sql-server))
          options
          (if (not (string= "" sql-database))
              (list sql-database)))))
    (sql-comint product params)))

It’s all about the BATNA

(Image courtesy Ismael Celis under Creative Commons license.)

It seems like there is a constant stream of articles being turned out about how we’re all going to be working in Amazon fulfillment centers and holding in our pee for 12 hours while we dry-swallow bottles of Aleve and live in fear of our slave-driving lower-level warehouse managers.

You can read a lot of these types of articles on sites like the Verge for some reason. (I am beginning to think of them – at least in part – as “nominally ‘tech’ but actually ‘tech pessimism’” sites.)

Meanwhile, there is another – perhaps-less-frequent but still influential – stream of articles about how companies “can’t find” good employees, they “can’t hire”, millennials want “too much” from their employers, Americans “won’t work hard” and “don’t have the necessary skills” for “the future” ™, and so on.

You can probably read these articles in the Wall Street Journal.

The NY Times, that bourgeois rag, will happily run both types of article. (Parts of its demographic hold both views, in some cases simultaneously, and hey, the ads pay either way.)

Unfortunately there is an important concept taken from business negotiation called BATNA that is almost never even mentioned in either type of article – even though it usually explains the behaviors chronicled in the article! I could almost forgive this if the writer had studied journalism and not economics (although not really), but if they have any economics or business background at all it’s just criminal.

What is BATNA though, really? Well you can read the wiki article for more information, but it is an acronym that means “Best Alternative To Negotiated Agreement”. In other words, it’s a way of thinking during any type of negotiation about questions of the form “What’s my next best option if this deal falls through?”

For example, if you are an employer with a lot of cash on the balance sheet you can afford to wait a few quarters (or years) until employee wages come down to a level you find more appealing, maybe. If you are a wage-earning employee, you probably cannot. (Not to mention that it’s probably cheaper for companies to have their PR people push articles in the WSJ about how hard it is to hire than it is to just raise wages until hiring picks up.)

P.S. Special thanks to Andrew Kraft, who gave a great talk on BATNA and other related topics a few years back at AppNexus. Without his talk, I might never have heard of this magical acronym.

Thoughts on Rewrites

As a user, when I hear engineers start talking about doing a rewrite of an application or an API that I depend on, I get nervous. A rewrite almost never results in something better for me.

Based on personal experience, I have some (possibly unfair) opinions:

  • Rewrites are almost always about the engineering organization
  • They are almost never about the end users
  • Inside any given organization, it’s very difficult for people to understand this because their salary depends on them not understanding it
  • Attempts at rewriting really large apps rarely get to a state of “fully done”, so the engineers may end up with a Lava Layer anyway
  • Except now users are angry because features they depended on are gone

Why am I writing this? Because I’m still mad they took away my Opera.

Until recently, I’d been using Opera for over a decade. By the time Opera 12 came out, it was amazing. It had everything I needed. It was lightweight, and could run on computers with less than a gig of RAM. With all of the keyboard shortcuts enabled, I could slice and dice my way through any website. I could browse the web for hours without removing my hands from the keyboard, popping open tabs, saving pages for later reference, downloading files. It was amazing.

Oh, and Opera also had a good email client built in. It was, like the browser part, lightweight and fast, with keyboard shortcuts for almost everything. It also read RSS feeds. Oh, and newsgroups too. It had great tagging and search, so you could really organize the information coming into your world.

Then they decided to take it all away. They didn’t want to maintain their own rendering engine anymore. They let go of most of the core rendering engine developers and decided to focus on making Yet Another Chromium Skin ™. No mail reader. Most of the keyboard shortcuts gone. Runs like shit (or not at all) on computers with 1 gig of RAM.

I realize I got exactly what I paid for. But if you are wondering why users get twitchy when engineers and PMs start talking about rewrites, wonder no longer.

After Opera stopped getting maintenance, I switched back to Firefox, and fell in love with Pentadactyl, the greatest “make my browser act like Vim” addon that ever was.

Can you guess what happened next? Yep, they decided to rewrite everything and break the addon APIs. I know they had some good reasons, but those reasons meant the end of my beloved Penta. Now I am back to using Firefox with Vimium (like an animal), and I suppose I should be grateful to have even that.

And don’t get me started on my experiences with “REST APIs”, especially in a B2B environment.


Set up Gnus on Windows

There are many “set up Gnus to read email from Emacs on Windows” posts. This one is mine. Unlike the 10,000 others on the internet, this one actually worked for me.

A nice thing is that, with a few tweaks, this setup also works on UNIX-like machines.


OVERVIEW

At a high level, the way this all works is that:

  • A mail server is out there on the interwebs somewhere
  • stunnel runs locally, and creates an encrypted “tunnel” between a port on the mail server and a port on the local machine
  • Emacs (Gnus) connects to the local port and fetches mail from there (as far as it knows)

STEP 1. INSTALL AND CONFIGURE STUNNEL

Download and install stunnel for Windows:
https://www.stunnel.org/downloads.html

I use Fastmail, so the following configuration worked for me. I put it in the file ‘C:/Users/rml/_stunnel.conf’.

# Windows stunnel config

# 1. GLOBAL OPTIONS

debug = 7
output = C:/Users/rml/Desktop/stunnel.log

# 2. SERVICE-LEVEL OPTIONS

[IMAP (rmloveland@fastmail.fm) Incoming]
client = yes
accept = 127.0.0.1:143
connect = mail.messagingengine.com:993

[SMTP (rmloveland@fastmail.fm) Outgoing]
client = yes
accept = 127.0.0.1:465
connect = mail.messagingengine.com:465

If memory serves, you will need to do some messing around with stunnel to get it to read from a config file other than the default. Luckily it puts a little icon in the notification tray that you can right-click to get it to do things such as edit the config file or view the log file. From there, you should be able to get the config in shape as shown above.

In the particular case of Fastmail, you’ll need to set up an app password via its web UI. See your email provider’s documentation for more information.

STEP 2. CONFIGURE GNUS

On the Emacs side, we need Gnus to ask the right port on the local machine for mail. Here’s what I did:

(setq send-mail-function 'smtpmail-send-it
      message-send-mail-function 'smtpmail-send-it
      smtpmail-smtp-server "localhost"
      smtpmail-smtp-service 465
      smtpmail-stream-type nil
      smtpmail-default-smtp-server "localhost")

This is the part of your Gnus config that tells it how to talk to stunnel; all of the other Gnus things are beyond the scope of this article. If you need more Gnus info, you should be able to get something going using the EmacsWiki:
https://www.emacswiki.org/emacs/CategoryGnus

A Trivial Utility: Prepend

Recently at work I needed to add a timestamp to the top of a bunch of Markdown files. There are plenty of ways to skin this particular cat. As you probably know, the way UNIX file APIs work means it’s easy to add something to the end of a file, but not as easy to add something to the beginning.
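To make that asymmetry concrete, here’s a tiny Python sketch (the file name is made up): appending is just opening in append mode, while prepending means reading the whole file back in and rewriting it with the new text in front.

```python
# A scratch file to play with.
with open("notes.md", "w") as f:
    f.write("original line\n")

# Appending is easy: open in append mode and write.
with open("notes.md", "a") as f:
    f.write("new last line\n")

# Prepending has no "prepend mode": read everything, write it back out.
with open("notes.md") as f:
    body = f.read()
with open("notes.md", "w") as f:
    f.write("<!-- preamble -->\n" + body)
```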

This is a pretty trivial task that other people have solved in lots of ways. In my case, I decided against a shell script using sh or the like because I use Windows a bunch, too, and I wanted something cross-platform. As usual for me, this meant breaking out Perl.

I decided to name the tool prepend, on the grounds that that’s what it does: it adds text content to the beginning of a file.

Since I like to design top-down, let’s look at how it’s meant to be used:

$ prepend STRING_OR_FILE FILES

There are two ways to use it:

  1. Add a string to the beginning of one or more files
  2. Add the contents of a file to the beginning of one or more files

Let’s say I wanted to add a timestamp to every Markdown file in a directory. In such a case I’d add a string like so:

$ prepend '<!-- Converted on: 1/26/2017 -->' *.md

If I had some multi-line text I wanted to add to the beginning of every Markdown file, I’d say

$ prepend /tmp/multiline-preamble.md *.md

The code is shown below. I could have written it using a lower-level function such as seek, but hey, why fiddle with details when memory is cheap and I can just read the entire file into an array using Tie::File?

#!/usr/bin/env perl

use strict;
use warnings;
use experimentals;
use autodie;
use IO::All;
use Tie::File;

my $usage = <<"EOF";
Usage:
    \$ $0 MESSAGE_OR_FILE FILE(S)
e.g.,
    \$ $0 '<!-- some text for the opening line -->' *.md
OR
    \$ $0 /tmp/message.txt *.txt
EOF
die $usage unless @ARGV >= 2;

# Grab the message (or file) argument first, so @files holds only the targets.
my $maybe_file = shift @ARGV;
my @files      = @ARGV;

my $content;

if (-f $maybe_file) {
  $content = io($maybe_file)->slurp;
}
else {
  $content = $maybe_file;
}

for my $file (@files) {
  my @lines;
  tie @lines, 'Tie::File', $file;
  unshift @lines, $content;
  untie @lines;
}