Teach your kids to curse!

“Oh, schmarkle!”

Imagine your son or daughter saying that in class.  Not too upsetting, I’m guessing.

Curse words are only upsetting because of the meaning we attach to them.  And they serve as curse words because of the meaning we attach to them.  Usually those are aligned, but what if you were to teach your children to curse using words that have no significance to anyone else?  It would sound a little silly, perhaps even humorous, but it wouldn’t sound offensive.

Then later, when friends inevitably introduce your son or daughter to “real” curse words, they will have some degree of immunization against them — the ingrained habit of using an inoffensive word.


O(1) sum_of_multiples() in Rust

I had been working mostly in Scala for a while, then took a diversion into Swift and Objective C.  I wanted to learn another language after that, and had all but decided on Clojure.  But Rust kept nagging at me — there was something about it.  Perhaps I’ll blog more about that later.

So I watched some videos, then read the book, and then started the Rust track at Exercism.io.  Nice site.  One of the exercises is about generating the sum of all multiples under some limit for a given list of factors.  So for example, if the factors are 3, 5, and 7, then the sum of the multiples under 12 is 3 + 5 + 6 + 7 + 9 + 10 = 40.

A naive solution is pretty straightforward:

pub fn naive_sum_of_multiples(limit: u32, factors: &[u32]) -> u32 {
    (1..limit)
        .filter(|&i| factors.iter().any(|&j| i % j == 0))
        .sum()
}

It just iterates through all of the eligible numbers, picking out any that are evenly divisible by any of the factors, and sums them.  Rust’s iterators make that look pretty similar to a lot of other modern languages, like Scala.

But this solution is slow.  I mean really slow — at least compared to what’s possible.  As limit grows, the execution time increases linearly; that is, this is O(n).  (With respect to the limit, that is — we’re ignoring the (very real) impact of the number of factors on execution time. Note that in Exercism’s tests, there are generally two factors, max three, so for simplicity let’s focus on the limit here.)

Well, O(n) doesn’t sound so bad, does it?  But here’s the thing: it could be O(1).

What???  O(1)?  That would mean that you aren’t doing any more work if you increase the limit by a factor of 100.  That can’t be right!  But yes, it is.

Here is the key observation.  The instant you see it, your brain will likely jump halfway to the solution.  The sum of (for example)

3 + 6 + 9 + 12 + 15 + 18 + 21

is just

3 * ( 1 + 2 + 3 + 4 + 5 + 6 + 7)

Got it yet?  Remember that the sum of the numbers from 1 to n is n * (n + 1) / 2 — actually summing the numbers is unnecessary!
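
As a sketch (the function name is mine, not from the exercise), the sum of the multiples of k up to some limit collapses to a single multiplication:

```rust
// Sum of all multiples of k that are <= limit, without iterating:
// k + 2k + ... + nk = k * (1 + 2 + ... + n) = k * n*(n+1)/2, where n = limit/k.
fn sum_of_multiples_of(k: u64, limit: u64) -> u64 {
    let n = limit / k;
    k * (n * (n + 1) / 2)
}

fn main() {
    // 3 + 6 + 9 + 12 + 15 + 18 + 21 = 3 * (1+2+3+4+5+6+7) = 84
    assert_eq!(sum_of_multiples_of(3, 21), 84);
    assert_eq!(sum_of_multiples_of(7, 100), 735); // 7 + 14 + ... + 98
    println!("ok");
}
```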

That’s not quite all there is to it, though. If asked to sum the multiples of 3 and 5 less than 1000, we can use the above technique to sum up the multiples of 3 and the multiples of 5 and then add them together, but we will have counted the multiples of 15 twice; we have to subtract those out.

And that’s a little trickier to do than it sounds. First, what you really need to subtract out are the multiples of the least common multiple (lcm) of the two numbers, not their product. So, for example, if asked to sum the multiples of 6 and 15, we need to subtract off the multiples of 30 (not 90). The lcm of two numbers is their product divided by their greatest common divisor (gcd).

Also, we need to do this for an arbitrarily long list of numbers, so consider what happens if we are asked to sum the multiples of 4, 6, and 10:

  • First sum the multiples of 4.
  • Then add in the multiples of 6, but subtract the multiples of lcm(4, 6) = 12.
  • Then add in the multiples of 10, but subtract the multiples of lcm(4, 10) = 20 and the multiples of lcm(6, 10) = 30.
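
Those lcm values are easy to verify with the classic Euclid’s-algorithm gcd (a quick standalone sketch):

```rust
// Euclid's algorithm for the greatest common divisor.
fn gcd(a: u64, b: u64) -> u64 { if b == 0 { a } else { gcd(b, a % b) } }

// lcm(a, b) = a*b / gcd(a, b)
fn lcm(a: u64, b: u64) -> u64 { a * b / gcd(a, b) }

fn main() {
    assert_eq!(lcm(6, 15), 30); // not 6 * 15 = 90
    assert_eq!(lcm(4, 6), 12);
    assert_eq!(lcm(4, 10), 20);
    assert_eq!(lcm(6, 10), 30);
    println!("ok");
}
```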

But oops, now we have gone the other way, subtracting off the multiples of 20 and 30 in common (60, 120, …) twice, and our result is too low, so we’ll have to add those back in. And if there were multiple corrections at that level (i.e. if we were given a larger list of numbers), we’d have to subtract their elements in common, and so on ad infinitum. At every step we have to take care not to add or subtract the same numbers twice.

That sounds like a pain, but using recursion it’s pretty straightforward.  In the following Rust code, I’ve changed the API a bit from the Exercism problem.  First, the integers are u64, so that we can use much bigger limits.  And secondly, in this case we’ll sum all multiples up to and including limit.  It’s an arbitrary choice anyway, and doing it this way will save us a small step for clarity.

pub fn fast_sum_of_multiples(limit: u64, factors: &[u64]) -> u64 {
  fn lcm(a: u64, b: u64) -> u64 { a*b / gcd(a,b) }
  fn gcd(a: u64, b: u64) -> u64 { if b == 0 { a } else { gcd(b, a%b) } }
  fn sum_from_ix(i: usize, limit: u64, factors: &[u64]) -> u64 {
    if i == factors.len() {  // we've processed all factors
      0
    } else {
      let factor = factors[i];
      let n = limit / factor;  // # of multiples of factor to sum
      let sum_of_multiples_of_factor = factor * (n*(n+1)/2);
      let new_factors: Vec<_> = factors[..i].iter()
        .map(|&prev_factor| lcm(prev_factor, factor))
        .filter(|&factor| factor <= limit)
        .collect();
      let sum_of_previously_seen_multiples_of_factor =
        sum_from_ix(0, limit, &new_factors[..]);
      let sum_of_multiples_of_rest_of_factors =
        sum_from_ix(i+1, limit, factors);
      sum_of_multiples_of_factor
        - sum_of_previously_seen_multiples_of_factor
        + sum_of_multiples_of_rest_of_factors
    }
  }
  sum_from_ix(0, limit, factors)
}

This is not too far from what a Scala solution would look like, although I think the Scala solution is a bit more readable.  In part this owes to Scala’s persistent List data structure, which was a little cleaner to work with here than a Rust Vec.  Also, Scala’s implicits make it unnecessary to make two calls that are explicit in the Rust version: iter() and collect().  Oh, and in Rust functions cannot close over variables; only closures can do that, but we couldn’t use a closure since in Rust closures cannot be recursive.  That forced us to explicitly include limit in the argument list of sum_from_ix().  Ampersands here and there in closures.  Semicolons.  These are little things, but collectively noticeable.

def fastSumOfMultiples(limit: Long, factors: List[Long]): Long = {
  def lcm(a: Long, b: Long) = a*b / gcd(a,b)
  def gcd(a: Long, b: Long): Long = if (b == 0) a else gcd(b, a%b)
  def sumOfMults(factors: List[Long], prevFactors: List[Long] = Nil): Long =
    factors match {
      case Nil => 0
      case factor::rest =>
        val n = limit / factor  // # of multiples of factor to sum
        val sum_of_multiples_of_factor = factor * (n*(n+1)/2)
        val sum_of_previously_seen_multiples_of_factor =
          sumOfMults(prevFactors.map(lcm(_, factor)).filter(_ <= limit))
        val sum_of_multiples_of_rest_of_factors =
          sumOfMults(rest, factor::prevFactors)
        sum_of_multiples_of_factor -
          sum_of_previously_seen_multiples_of_factor +
          sum_of_multiples_of_rest_of_factors
    }
  sumOfMults(factors)
}

But really, the difference is not so stark.  And the Rust version does not need a garbage collector — amazing!

On to the next exercise….

Thoughts on “Self-Serving Bias”

If someone were to ask you and your roommate what percent of the work around the house you each do, the answers would almost surely total to more than 100%; you each likely overestimate your contribution. This is an oft-mentioned example of “self-serving bias.”

While I have no doubt that this bias exists, there is one confounding factor that you have to be careful about in attributing such a discrepancy in assessment to self-serving bias, something I have never seen discussed.  That confounding factor is that we value things differently.

My wife, when I was married, used to iron my undershirts after they came out of the dryer.  I asked her not to do it: not only was it a waste of her time because the wrinkles would not be visible, it was also a waste of energy both to iron them and then for the AC to remove the heat from the house.  She saw things differently and continued to iron them.

So when thinking about how much work she did around the house, there were those twenty minutes of ironing she did for me.  But I counted that activity more as an annoyance than as a contribution.

I had another partner who had a curio cabinet with shelves of little glass unicorns and other trinkets on display.  We lived in Phoenix, so all those little pieces needed frequent dusting; in her mind that effort was part of the housework.  But that curio cabinet did nothing for me; it was strictly for her pleasure.  Whatever dusting she did on those unicorns was maintenance on her hobby, as far as I was concerned, comparable to me keeping the tires inflated on my bike.

There can even be discrepancies for work that is valued by both.  For my partner, keeping the kitchen tidy may involve putting on a shelf in the pantry some things that I would prefer to see left on the counter, like the cinnamon and jars of nuts.  Or imagine living with someone who wants the carpets vacuumed every week, when you are fine with once a month.

None of this is to say that humans don’t engage in self-serving bias — just that such discrepancies may also owe in part to discrepancies in what people consider the goal.

Why do we like foods that are bad for us?

How about a pepperoni and sausage pizza with extra cheese? Or perhaps a chocolate eclair?

No thanks, you say? You like those foods, but you’re trying to eat healthy? Well darn — why is it that everything that tastes good is bad for us?

OK, that’s a bit of exaggeration, but we do like fats, sugars, and salt, all to our detriment — they contribute to obesity, diabetes, heart disease, cancer, and more. It seems odd, doesn’t it? Why would we evolve to like foods that are bad for us?

Well hey, I have some great news! The reason we like those things is that they are actually good for us! Really, that’s true — but only if you take it in context.

Our distant ancestors didn’t have supermarkets with candy aisles. Even fruits available to them were hardly sweet at all compared to the fruits we eat today, which are the result of many, many generations of selective breeding for taste. Our ancestors hunted, but during most of our evolutionary history success was by no means assured — they occasionally brought home some meat. And that meat was not marbled with fat like the cows we eat today, which were not only bred for taste but also given cushy lives so that their muscles don’t get tough.

So we evolved over many, many generations during which fats, sugars, and the like were generally not available in quantity. Whatever little bits you were lucky enough to come across, it was to your benefit to eat. Those who liked such foods were more motivated to eat them when available, and they benefited from the nutritional boost, which ultimately translated into greater reproductive success — they really needed the calories. So the genes that programmed into their brains the taste for such foods were passed on with greater success.

To drive this home, picture yourself lost on some grassy plains, with trees here and there. You haven’t eaten much today, so you are motivated to find something. A few of the trees and bushes have fruits that you sort of recognize. You try one and quickly spit it out — bitter. After some experimentation you find one that is, well, not great — nothing like the fruits you are used to eating — but not disagreeable. The chemical laboratories in your tongue and nose steered you away from foods which probably weren’t going to work for your body, toward foods that might. That is what your senses of taste and smell are for, not to help you decide between broccoli and Twinkies. There were no Twinkies.

But that’s why we like sweet foods today: they were a good sign for us health-wise way back then. Our ancestors were the ones who did like sugars and fats; the ones who didn’t fared less well, and their genes became less common. That’s how evolution works. But now we are able to refine and concentrate sugars and fats to form the nutritional monstrosities that call to us like sirens from supermarket shelves, and in those concentrations they are by no means good for us.

If such unhealthy crap had existed in our evolutionary past (imagine giant Twinky trees on the savanna, laden with “fruit”), we would have had to evolve to deal with it.  Our bodies might have evolved to be a little more tolerant of sugar/fat bombs. Our brains might have evolved to enjoy a little bit of Twinky now and then, but to quickly lose interest, preferring foods with the nutrition our bodies need. But that didn’t happen, because there were no Twinky trees until recently. So we are defenseless, led by our senses — like moths to a flame — to obesity, diabetes, and heart disease.

Well, OK, we aren’t quite defenseless — we can, through our intellect and force of will, override our pleasure system. But that works much better for some than for others.

After I explained this to a friend, he asked “but aren’t we evolving to deal with much higher concentrations of sugars, fats, etc.?”

Unfortunately it’s not that simple. To understand why, consider what that evolution would look like. People whose genes build bodies which don’t handle these excesses well would have to be less successful reproducers, so that their genes would become less frequent in the gene pool. A key way such people would be less successful reproducers is by dying. Well, they do die, you complain! Yes they do, BUT — they would have to die soon enough to reduce the number of children they have (and raise successfully). These days we expect to have a couple of kids in our twenties or thirties and live into our eighties. Natural selection doesn’t much care if you die in your fifties of heart disease if you weren’t going to have more kids after that anyway.

To be fair, grandparents can be of some benefit in raising their grandchildren, although that’s probably far less a factor than it used to be, what with insurance and social programs that use tax money to help the needy, not to mention the fact that at least here in the U.S. grandparents don’t typically live with their grandchildren anymore. In fact, it may be that — from our genes’ point of view, which is always about reproductive success — it is better nowadays if grandparents die young and leave a larger inheritance to their descendants sooner!

So no, sorry — we are probably not significantly evolving toward bodies that run just fine on cheeseburgers and ice cream. And in fact that’s almost surely not what would happen even if we were dying soon enough to reduce our reproductive success; incremental changes to our brains that cause us to like sugars and fats less are far more likely than sweeping changes to our physiology to allow us to thrive on junk food.

Human beings are intelligent planners, capable of working out the means to attain goals. But an old part of your brain tries to “steer” you and your great planning ability toward reproductive success by dumping neurotransmitters into your noggin to control how you feel. And there is something you should know about that old part of your brain, this part that controls what you like: it’s dumb as a stone. Worse, it has no idea that the industrial revolution happened.

Think of this old part of your brain as the firmware in a computer. I call this part of my brain Dumbo, and the corresponding part of a woman’s brain Morona. Dumbo and Morona only get updated by evolution, a glacially slow process that — as we saw above — doesn’t necessarily lead us someplace we’d like to go, because humans want more out of life than just lots of descendants. Dumbo and Morona do not reason; they execute encoded heuristics that contribute to reproductive success. The encoding that is there is almost entirely from a time when our lives were not very much like they are today, so there is a huge gap between the environment we were programmed to live in and the one we really do live in. Dumbo thinks I am a hunter-gatherer, and that if I run across a bit of sugar or fat it’s an opportunity not to be missed. So that’s how he steers me.

Quite a lot of human unhappiness is the result of Dumbo and Morona being in serious need of an update; stay tuned for more about that.

But for now, hand me another slice of that pizza, would you?

For those especially interested in evolution…

I should be honest here and say that there are other ways (besides dying early) in which evolution could operate on our desire to eat foods that are bad for us, but they don’t change the story. I’ll go through a couple of those here.

1. You could become undesirable during mate selection and have trouble finding a mate. This kind of selection is called “sexual selection.” So if a person’s genes contribute to their eating a diet which makes them less desirable as a mate, then they will have fewer choices in the mating game.

2. You could have trouble performing the tasks required to raise a family. So if a person’s genes contribute to their eating a diet which makes them sick (diabetes leaps to mind), that could theoretically impact their reproductive success.

Eons ago, these would have been very important factors — had junk foods been available. Couples didn’t use birth control to limit themselves to a couple of kids; dying early might very well reduce the number of children you left behind. Getting a disease like diabetes was much more likely to kill you or leave you incapacitated and unable to take care of your family. Modern medicine (and the insurance that pays for it) can inform us of the danger of such a diet and significantly ameliorate its effects. For example, a diabetic might be given insulin — paid for by insurance — and still work and raise a family.

Long ago, a less desirable mate might have meant fewer children — after all, what we find attractive in a mate is largely about reproductive potential (more on this in later posts!). But modern medicine goes a long way toward allowing everyone who wants a child to have one — even if, for example, they don’t have optimal hormone levels or hips wide enough for a safe delivery. A graduated income tax and other “progressive” policies go a long way toward allowing everyone who has a child to raise it successfully. And birth control drastically reduces the number of children we have below our true reproductive potential. And frankly, a lot of those heuristics are horribly out of date anyway. So really, not getting as attractive a mate has little effect on reproductive success these days.

Eons ago, you couldn’t eat junk food — it didn’t exist. Now that it does, the effects on reproductive success are pretty limited, so evolution doesn’t have a lot to work with.

Reboot/Restart in a REST API using PUT

it is actually quite possible to do the reboot/reset in an idempotent manner using PUT

There was at one time a controversy around whether you were restricted to CRUD (Create/Read/Update/Delete) in defining REST APIs or whether it is OK to use POST for the odd “do something” request. Roy Fielding, who came up with REST in the first place, largely put this to bed by saying, in effect, “I never said you couldn’t create additional commands.”

The problem often surfaced when someone asked

I have a resource with a status attribute. What should a request to reboot (or restart) it look like?

If you were using SOAP, the answer is obvious: have a Reboot command. But this is REST; is that the right thing to do? Why not use a PUT to set the status to Rebooting?

The problem with that is that it’s not idempotent. An intermediate server between the client and the REST API is allowed to reissue the command, which means that the system could, at least in theory (I wonder whether a server could really reboot fast enough for this to be an issue in practice), get rebooted a second time as a result.

On the other hand, you can understand REST API designers’ reluctance to just invent a new command. Falling back on POST to create new commands for things you don’t know how to do idempotently is, in a sense, the API equivalent of

Just use a goto statement.

That is, the facility is too general — it’s a catch-all that has completely open-ended semantics — “do something.” It begs to be abused. In my opinion, it is better to create useful abstractions on top of such open-ended facilities and then restrict developers to those abstractions. Just as I don’t want us to run a server as root or use a programming language with a goto statement, I don’t want us to have a “do something” facility in the abstraction layers over the API.

But the main point I want to make is that it is actually quite possible to do the reboot/reset in an idempotent manner using PUT. The reason it isn’t usually thought of is that people are a priori focused on a status attribute. Imagine that you also have a last_reboot attribute that holds the time of the last reboot; to reboot the system, simply do a PUT to change that to the current time.

The result is perfectly idempotent; if an intermediate server resends the command, it will have the same last_reboot time, and such an update is treated as a no-op. And an attempt to change the last_reboot time to a time older than its current value is an error. So picture something along these lines:

  class Server {
    def reboot = put( "/last_reboot", getTime )
  }

Note that last_reboot is useful information about the system in its own right. Sure, you could instead model this as a new, non-idempotent Reboot command that has a side-effect on the last_reboot value, but — uhhh, why? You already have a perfectly good, idempotent command that will do it, and its effect on last_reboot is explicit.

I’m not saying that there will never be a case where you ought to create a command. But if you are stuck thinking that there is no idempotent way to make a certain update, perhaps you are thinking about the wrong attributes. Don’t be too quick to use a goto.
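
Sketched as server-side logic (all names here are hypothetical — this isn’t tied to any particular web framework), the idempotency rule is just a comparison against the stored timestamp:

```rust
// Possible outcomes of a PUT to /last_reboot.
#[derive(Debug, PartialEq)]
enum PutResult {
    Rebooting, // newer time: actually reboot
    NoOp,      // same time: a resent duplicate, do nothing
    Error,     // older time: reject
}

// `stored` is the current last_reboot value; `requested` is the PUT body.
// Times are seconds since the epoch, for simplicity.
fn put_last_reboot(stored: u64, requested: u64) -> PutResult {
    if requested > stored {
        PutResult::Rebooting
    } else if requested == stored {
        PutResult::NoOp // an intermediary resent the same PUT: idempotent
    } else {
        PutResult::Error
    }
}

fn main() {
    assert_eq!(put_last_reboot(1000, 2000), PutResult::Rebooting);
    assert_eq!(put_last_reboot(2000, 2000), PutResult::NoOp); // replay is harmless
    assert_eq!(put_last_reboot(2000, 1000), PutResult::Error);
    println!("ok");
}
```

Replaying the same PUT any number of times leaves the system in the same state, which is exactly the idempotency guarantee that intermediaries rely on.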

A Monte Carlo Simulation in Scala

On LinkedIn a fellow named Francois Dewaste posts links to some Scala-related articles he runs across, and I often check them out. Recently he shared an article about a Monte Carlo simulation in Scala, written by Alvin Alexander. I found the problem interesting, so I want to share my own Scala solution, which is written using a more functional style, and also a mathematical interpretation of the results.

First I’ll repeat the basic problem:

If two people who are each right on a test 80% of the time (but whose answers are uncorrelated) have the same answer for a particular question, what is the probability that their answer is right?

You’ll get much more out of this post if you spend a minute thinking about the problem. How would you model it mathematically? How would you explore the problem using software?

Apparently several statisticians he asked said that the answer was 80%. Presumably their thinking was along the lines that since their answers are uncorrelated, the fact that two (or three or …) agree means nothing. But that’s wrong.

If you’re like me, your immediate answer will be “there isn’t enough information.” Clearly you would have to know how many possible answers there are, because that will affect the probability that they have the same wrong answer. If there are only two possible answers, then if they are wrong they agree 100% of the time; if there are an infinity of possible answers, then if they are wrong they agree 0% of the time. Looking at his code, though, it was clear that he meant for there to only be two possible answers to each question.

OK — ready for the answers?

First the Simulation

I know most folks are here to look at the Scala code, so here it is. All of the simulation parameters come from the command line: the number of questions, the number of choices per question, the number of players, each player’s percent chance of being correct, and how many players must agree to count as a consensus.

object MonteCarlo extends App {

  // Get the simulation parameters from the command line.
  val Array(numQuestions, numChoices, numPlayers, pctCorrect, numMustAgree)
    = args map (_.toInt)

  // The choices will be 0 .. numChoices-1; call the last choice correct.
  val correctAnswer = numChoices - 1

  // Generates an answer with a pctCorrect chance of being correct.
  def genAnswer =
    if (util.Random.nextInt(100) < pctCorrect) correctAnswer
    else util.Random.nextInt(correctAnswer)  // pick a wrong answer from 0 to correctAnswer-1

  // For each question, generate player answers and look for consensus.
  // Where consensus is achieved, yield whether or not it is correct.
  // The result is an array, with one element for each consensus reached,
  // containing true if the answer they agreed on was correct.
  val correctnessOfConsensus =
    for { i <- 1 to numQuestions
          (answer,instances) <- Array.fill(numPlayers)(genAnswer) groupBy identity
          if instances.size >= numMustAgree
    } yield answer == correctAnswer

  // Print how often the consensus answer was correct.
  val timesAgreed = correctnessOfConsensus.size
  val timesRight  = correctnessOfConsensus count identity  // num true
  println( s"Consensus meant correctness $timesRight out of $timesAgreed times." )
  if (timesAgreed > 0) {
    println( f"That's ${ 100.0 * timesRight / timesAgreed }%.1f%%." )
  }
}

It’s interesting to note that because of Scala’s type inference there isn’t a single type specification in the entire program, in spite of the fact that there are eleven bound variables. The call to toInt() is a conversion, not a type specification, and it would be needed in Python or Ruby as well.

The for-comprehension is the heart of the program: it repeatedly (once for each question) generates the players’ answers with the specified probability of being correct, figures out how many times each answer was used, picks out the cases where the required number of people agreed, and yields a Boolean indicating whether or not the agreed-upon value was correct.  Note that there can be more than one consensus for a given question; for example, with ten players of which five must agree, consensus can be reached twice on a single question. Twelve players can reach consensus four times on a question if only three have to agree.  The for-comprehension (with its underlying flatMap) makes all of that trivial.

Now let’s run it for the parameters of the original problem, but with a million questions so we get a more accurate probability estimate:

  $ scala MonteCarlo 1000000 2 2 80 2
  Consensus meant correctness 640039 out of 679984 times.
  That's 94.1%.

And they were also interested in the probability of the answer being correct if 3 out of 3 had it:

  $ scala MonteCarlo 1000000 2 3 80 3
  Consensus meant correctness 512213 out of 520220 times.
  That's 98.5%.

Those are essentially the same answers Alvin got.

Now the Math

So what’s behind these numbers? In the above runs (where everyone must agree), there are three possibilities for each question:

  • A. They all agree and they are correct.
  • B. They all agree and they are incorrect.
  • C. They don’t all agree.

We don’t care about C; what we are interested in is the probability that the shared answer is correct given that they agree:

      p(A) / ( p(A) + p(B) )

Since their answers are uncorrelated, the probability that two players will both be right is the product of the probabilities that each is right. Since each is right 80% of the time, p(A) = square(0.8) = 0.64, and p(B) = square(0.2) = 0.04. So the expression above is 0.64 / ( 0.64 + 0.04 ) = 0.941 (roughly).

With three people, p(A) = cube(0.8) = 0.512 and p(B) = cube(0.2) = 0.008, so the probability that the consensus is correct is 0.512 / ( 0.512 + 0.008 ) = 0.9846 (roughly).
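
The same arithmetic generalizes to n players who all agree: the consensus is right with probability p^n / (p^n + q^n), where q = 1 - p. A quick sketch to check both numbers (the function name is mine):

```rust
// Probability that a unanimous answer is correct, given n independent
// players who are each right with probability p (two possible answers).
fn consensus_correct(p: f64, n: i32) -> f64 {
    let q = 1.0 - p;
    p.powi(n) / (p.powi(n) + q.powi(n))
}

fn main() {
    let two = consensus_correct(0.8, 2);
    let three = consensus_correct(0.8, 3);
    assert!((two - 0.64 / 0.68).abs() < 1e-9);     // ~0.941
    assert!((three - 0.512 / 0.520).abs() < 1e-9); // ~0.985
    println!("{:.4} {:.4}", two, three);
}
```

These closed-form values match the simulation’s 94.1% and 98.5% estimates.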

Oh, and remember the point about the number of possible answers mattering? Let’s say that instead of 2 possibilities there are a million:

  $ scala MonteCarlo 1000000 1000000 2 80 2
  Consensus meant correctness 640167 out of 640167 times.
  That's 100.0%.

With so many possible answers, the players almost never agree if they are incorrect. So, if they do agree, it’s because they are right.

There you have it! Interesting problem, Alvin — thanks!


How to Make Google Chrome Launcher Open New Window

I’ve used Google Chrome on Linux for quite a while now, mostly happily, but there is one thing I’ve always found extremely frustrating. I just fixed it, so I thought I’d share.

Here was the problem. I usually have a dozen or so Chrome windows open at once, each of which may have several tabs. These windows, spread across four desktops, represent work in progress for me, as I handle interruptions or set aside a task while waiting for a response, or whatever. OK, so when I click on the Chrome icon to open a new browser window, I don’t get one — Chrome instead finds my most recently touched browser window and adds a new tab to it. It doesn’t matter if it has been hours since I touched that window, or if the window is on a different desktop — that other window gets the new tab. So I have to pull the tab from the lucky window, iconize the old window again, move the new window to the desktop I was on, and switch back to that desktop myself. Aaargh!

I have a hard time understanding why the launcher would work this way. If I want a new tab in some Chrome window, I can simply press the new tab icon in that window. If I go to the panel and ask for a new browser, why would I want it to dig up some old window and add a new tab to that?

I had looked at the man page, but there was no switch for this behavior. But today I found an undocumented (why???) switch that does just what I want:


So I edited the properties for the Google Chrome launcher to add in the switch, and all is golden:


Dynamic Resource Types for a No-Fuss API

“this was one of those rare occasions where an idea turns out to be a lot better than intended”

In late 2009 I left a dying Sun Microsystems (right before the Oracle acquisition) to help my good friend Alok with his tiny un-funded startup called Yunteq. He was understandably having trouble finding engineers willing to work for a stake in the company rather than a salary, and since he was trying to make it as an entrepreneur and was in desperate need of help in a critical portion of the system, I went to his aid. The unsalaried period was only going to last three months — I could handle that — and he thought the company would probably be acquired in five, by a suitor they had been in talks with for a while. They were almost to the goal line, but they had a big problem; if I stepped in as a third developer to get them through it, I could own a significant share of a company about to be acquired. He had assembled a good team, and we would make decisions together, as co-founders.

It sure sounded good. In reality it was an extremely bumpy ride.  :^)

Anyway, in early 2010 I started on the task of designing a public REST API for the product. It had to present the functionality of the server — which was all about managing and organizing roomfuls of servers, and the virtual machines running on them — to the outside world. We started with a typical design in which the URL broke down the universe into a hierarchy of resources like this one:


The resources in such an API effectively form a tree, the shape of which is defined by the application. But at some point Alok mentioned to me that while the above was fine for a cloud provider, a large company would probably not want the customer part but would want some way of representing divisions, regions, groups, departments, or whatever. “We’ll probably have to fork the code base for each customer,” he said.

I nearly choked on my tea at the sound of that. “Ummmm, let me think about this,” I responded.

In a couple of days I presented him with a design that at first he had trouble understanding. There was no fixed hierarchy to the API at all — the resource tree would be whatever we made it, by creating new nodes and saying what type we wanted each to be. Not only could each customer have a different tree arrangement, we didn’t have to set it up for them — they could do it themselves. They could do it piecemeal; anytime they wanted to add to or rearrange the tree, they could do so easily from the CLI. All using a single code base.

That is, this kind of API was really not about customers and applications and VMs and Disks and NICs and so on — it was about manipulating a tree of typed nodes. If you want a top-level grouping by customer, fine — create a node of type Folder called “cust” and within that a Folder for each customer. If you want to organize things some other way, no problem — just create the appropriate nodes in the tree to represent it. It was completely analogous to the way you create directories in a filesystem to organize your data the way you want, except that you could create nodes of arbitrary types from some available palette. A Folder here, a VM there, an ISO repository over there, etc. — however you want to arrange it.
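
To make the idea concrete, here is a minimal sketch of such a tree of typed attribute-sets. Python is used purely as illustration (the product was not written in Python), and every type and attribute name below is invented:

```python
# Sketch: a dynamically typed resource tree. Every node is a named,
# typed set of attributes; the tree's shape is entirely up to the client.
class Node:
    def __init__(self, name, node_type, **attrs):
        self.name = name
        self.node_type = node_type
        self.attrs = dict(attrs)     # the typed attribute-set
        self.children = {}

    def create(self, name, node_type, **attrs):
        """The one generic grow-the-tree operation the API needs,
        whatever the resource types turn out to be."""
        child = Node(name, node_type, **attrs)
        self.children[name] = child
        return child

# Build the "cust" layout described above; another customer could build
# an entirely different layout using the same two operations.
root = Node("/", "Folder")
cust = root.create("cust", "Folder")
acme = cust.create("acme", "Folder")
vm = acme.create("vm01", "VM", ram_mb=2048, cpus=2)

print(vm.node_type, vm.attrs)   # VM {'ram_mb': 2048, 'cpus': 2}
```

Note that nothing in the model mentions VMs or customers specifically; a new resource type is just another string and attribute-set, which is why no API changes were needed when types were added.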

By mid-2010 we had such a system working; the public API allowed clients to manipulate a dynamically typed resource tree.

But this was one of those rare occasions where an idea turns out to be a lot better than intended. Yes, the interface was now infinitely more flexible, and that was great. Each customer could organize its resources in our system in whatever way made sense for that enterprise. But that was just the beginning.

The API itself treated all nodes in the tree the same — as typed sets of attributes. How much RAM a VM was to have, or CPU, or disk space — all were just attributes of different types. The API was entirely about manipulating a tree of attribute-sets; it didn’t know or care what the resources or attributes were about. Developing in this system involved writing new resource types, but you never had to go back and fiddle with the API to be able to handle them; the API could already handle them. To it, your new resource type was just another typed set of attributes.

And then our CLI got simpler. There was no longer any code specific to VMs or customers or disks or anything else — it just extended the API’s view of the world, allowing the user to manipulate a tree of attribute-sets from the command line. Not only did we no longer have to fuss with the API when we added functionality, we didn’t have to fuss with the CLI either. As we added new resource types, or added/changed attributes of existing resource types, no work at all had to be done in the API or CLI. This was way cool.

Then we added the notion of a “policy” — a kind of resource that changes the way the system deals with other resources. The fact that the customer was in control of the arrangement of resources gave us a scoping mechanism for policies: wherever you put the policy in the tree, the policy was in effect for the subtree under that point. And if you put another instance of the same type of policy below it, the deeper policy overrode the shallower one. This was a simple yet powerful scoping mechanism, made possible by the fact that we had handed control over the tree’s layout to the user.
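
That scoping rule (the deepest enclosing policy of a given type wins) is just a walk from a node up toward the root. A sketch, with invented policy names:

```python
# Sketch of policy scoping: a policy placed on a node applies to the
# subtree beneath it, and a deeper policy of the same type overrides a
# shallower one. All names are illustrative.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.policies = {}            # policy type -> setting

    def child(self, name):
        return Node(name, parent=self)

    def effective_policy(self, policy_type):
        """Walk upward; the first (deepest) policy of the requested
        type found on the path to the root is the one in effect."""
        node = self
        while node is not None:
            if policy_type in node.policies:
                return node.policies[policy_type]
            node = node.parent
        return None

root = Node("/")
root.policies["placement"] = "spread"    # site-wide default
acme = root.child("acme")
dev = acme.child("dev")
dev.policies["placement"] = "pack"       # override for the dev subtree

print(acme.child("vm1").effective_policy("placement"))  # spread
print(dev.child("vm2").effective_policy("placement"))   # pack
```

The mechanism works only because the user controls where nodes (and therefore policies) sit in the tree.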

Testing was also improved. Among other things, the fact that resource types and attributes were treated so generically meant that much testing could be done once and for all. For example, a single suite of tests can check that an attribute which is supposed to be an integer within a specified range works as it should; then if there are actually a dozen such attributes in the system there is no need to test out-of-range values or non-integer input on each.
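
For instance, a single generic check for "integer within a range" can be driven over every such attribute in one loop. A sketch with made-up attribute specs:

```python
# One generic validation routine covers every integer-range attribute
# in the system; testing it once tests them all. Specs are invented.
def validate_int_range(value, lo, hi):
    if not isinstance(value, int) or isinstance(value, bool):
        raise ValueError("not an integer")
    if not (lo <= value <= hi):
        raise ValueError("out of range")
    return value

# A dozen attributes can share one spec shape...
RANGED_ATTRS = {"cpus": (1, 64), "ram_mb": (128, 1 << 20), "vlan": (1, 4094)}

# ...so one loop tests the shared behavior for all of them.
for name, (lo, hi) in RANGED_ATTRS.items():
    assert validate_int_range(lo, lo, hi) == lo
    assert validate_int_range(hi, lo, hi) == hi
    for bad in (lo - 1, hi + 1, "abc"):
        try:
            validate_int_range(bad, lo, hi)
        except ValueError:
            pass
        else:
            raise AssertionError(f"{name}: accepted bad value {bad!r}")
print("all ranged attributes validated")
```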

Even the API documentation and help system documentation were simplified. Once you understand the basic idea — that the API just allows you to manipulate a tree of typed resources — the bulk of what you need to know is 1) what resource types are available, and 2) what their attributes are. Much of this kind of documentation can be generated automatically by the system.
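
When the resource types are themselves data, that generation is a loop over a type registry. A sketch (the registry contents here are invented, not the product's real types):

```python
# Sketch: if resource types are described as data, the reference docs
# answering "what types exist, and what are their attributes?" fall out
# of a simple loop. Registry contents are illustrative.
RESOURCE_TYPES = {
    "Folder": {},
    "VM": {"ram_mb": "integer (MB of RAM)", "cpus": "integer (virtual CPUs)"},
    "Disk": {"size_gb": "integer (capacity in GB)"},
}

def generate_docs(types):
    lines = []
    for type_name in sorted(types):
        lines.append(f"== {type_name} ==")
        for attr, desc in sorted(types[type_name].items()):
            lines.append(f"  {attr}: {desc}")
    return "\n".join(lines)

print(generate_docs(RESOURCE_TYPES))
```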

In effect we had created some good generic infrastructure for creating applications. Starting from that infrastructure, you just define resource types for your domain and you are done. Of course, those resource types are non-trivial — that’s the meat of the application. But you get a lot for free.

There’s more to this, of course — I can write about the API because it is public, but I won’t say much about the interesting mechanics inside the server. I will say, though, that I had wanted to do this in Scala and Akka, but management had never heard of those (“Forget Scala,” I was told) and got someone to set up a much more conventional stack. It worked reasonably well, but had its problems. A bit frustrated, I spent a few weeks over Christmas writing a Scala/Akka/Spray prototype and demoed it to them when they got back from their vacations. They were really impressed at how much cleaner the API code was (the Spray DSL for creating APIs is awesome), and defining resource types in this new system was much easier. To their credit, they took a serious look at Scala and Akka and decided that we should use it. I now have new server infrastructure for dynamic resource trees working in Scala/Akka/Spray; it’s a huge improvement and I am a much happier camper.

And now that same management has told the entire group to use Scala. Go figure.

Extremely Agile Testing using a Lazy Test Harness

“it is generally a lot easier to verify a result reported by the test harness than it is to figure out the right answer yourself beforehand and write the code to check for it”

I don’t particularly enjoy writing tests, but I noticed long ago that I enjoy debugging insufficiently tested systems even less. If you have a suite of old tests that you can run whenever you think you’ve gotten something new working, it can save you a ton of trouble later by showing you that you’ve broken something — right away, when you know that something in those 40 lines you just mucked with must have caused the problem.

The traditional way of writing tests looks something like this:

  1. Have the system under test do something.
  2. Capture some result that you can use to determine whether what you tried worked.
  3. Check that result in various ways, raising a flag if it’s wrong.
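
In code, the traditional shape looks something like this (a generic illustration with a stand-in function, not code from the library discussed below):

```python
import datetime

def add_days(date_str, n):
    # Stand-in for the system under test.
    d = datetime.date.fromisoformat(date_str)
    return (d + datetime.timedelta(days=n)).isoformat()

# 1. Have the system under test do something.
result = add_days("1988-12-29", 5)
# 2. Capture a result you can use to decide whether it worked.
# 3. Check it, raising a flag if it's wrong. The expected value had to
#    be worked out by hand before the test could be written.
assert result == "1989-01-03", f"unexpected result: {result}"
print("ok")
```

Step 3 is the costly part: someone has to compute the right answer up front and code the check.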

Eons ago I was writing a date/time library and realized that I would need hundreds of tests; it was worth my effort to make writing them as simple as possible. I created a simple CLI to invoke the routines in the library and a test harness that used the CLI to run a bunch of tests. Each test was just a single line that made the library do something, e.g. convert a date to Julian; the harness did all the rest.

“Wait a minute,” you complain — “how did the harness know whether the test passed? A test must have done more than just make the system do something!”

But it did not.  A test really did just look something like

  toJulian 1988-03-21

or

  addDays 1988-12-29 5

So how did the test harness know whether the test passed or failed? Well, the first time the test was run, the harness did not know — it had to ask whether the output was correct. If I said yes, it saved the output as the correct result. The tests were “lazy” inasmuch as the correct results were not established until the tests were run the first time.

This approach proved extremely convenient — I could create a new test in a few seconds. And while the regression tests usually just passed without incident, there were in fact many times when I had inadvertently broken something and the tests saved my bacon. Without those tests I would have discovered far later, perhaps even at a customer site, that something somewhere wasn’t working, and would have had to laboriously trace it back to those 40 lines. And it might take me a while to fully remember what those 40 lines were about.

The point of doing tests this lazy way is that it is generally a lot easier to verify a result reported by the test harness than it is to figure out the right answer yourself beforehand and write the code to check for it. This is especially true if the right answer is, as is often the case for me, a dump of some tree structure, dozens of lines long. I can look at such output and pretty quickly say “yes, that’s right,” but if I had to explicitly code such a tree structure into each test, you sure wouldn’t see me writing many of them!

Furthermore, if I deliberately change something that causes the tests to fail, I don’t have to go back and fix all of the affected tests manually. The harness stops at every failure, shows me a side-by-side diff of what was expected and what we actually got, and asks me whether I want to accept the new output as the “good” output. If I say yes, the harness fixes the test for me. I can even tell the harness that if further tests fail with the same diff, they should be automatically updated without asking.
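
The whole scheme fits in a short sketch. Here is the idea in Python rather than shell (the names, file layout, and accept callback are invented for illustration): run the command, diff its output against a saved golden file, and let the first run — or an accepted change — define what “correct” means.

```python
import subprocess
import tempfile
from pathlib import Path

def run_lazy_test(name, command, golden_dir, accept=lambda old, new: False):
    """Run one one-line test lazily: the first run records the output as
    correct; later runs diff against it; a changed output becomes the
    new golden result only if `accept` says so."""
    golden = Path(golden_dir) / f"{name}.out"
    out = subprocess.run(command, shell=True, capture_output=True,
                         text=True).stdout
    if not golden.exists():
        golden.write_text(out)        # lazy: first run defines "correct"
        return "recorded"
    expected = golden.read_text()
    if out == expected:
        return "pass"
    if accept(expected, out):         # real harness: show a diff, ask the user
        golden.write_text(out)
        return "updated"
    return "FAIL"

d = tempfile.mkdtemp()
print(run_lazy_test("hello", "echo hello", d))   # recorded
print(run_lazy_test("hello", "echo hello", d))   # pass
```

In the interactive harness, the accept step shows a side-by-side diff and prompts, with a mode that auto-accepts further failures with the same diff.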

In some scenarios this approach presumes the existence of a CLI. These days I write server apps with REST APIs, and I always create CLIs for them. “We don’t have a requirement for a CLI,” a manager told me recently, thinking we would save time by not bothering with one. “You’re getting one anyway,” I responded. I always port/write a shell-based CLI, giving us a very powerful way to control and script the server — very handy. Then I port/write a lazy regression test harness (a couple of hundred lines of shell, currently) against the CLI and begin writing lots of one-line tests.

And suddenly testing is not such a drag.

UPDATE:  Eric Torreborre would like to see support for lazy test development in his highly regarded “specs2” unit test framework for Scala code. That would be a fantastic feature.

UPDATE:  I discovered from Bill Venners (who wrote ScalaTest) that somebody has created a facility for doing this with unit tests, called ApprovalTests.  Unfortunately it seems tightly bound to JUnit, so interfaces to specs2 and ScalaTest are unlikely.

Attitude and Aptitude

My son Carlos just got his A.S. degree in Engineering. The road he took to get it is pretty unusual, and nobody would have guessed that he would accomplish what he has. I think there’s a good lesson in the story.

First I should explain that Carlos is not my biological son. He was born in Honduras, a poor country, to parents who were poor even by Honduran standards. We’re not talking about driving a used car instead of a new one — we’re talking about no car in the family at all, feeling excited if for Christmas you get a new shirt and pants because you won’t be getting them any other time of year, and skipping the occasional meal because there is no food. When I first saw the house his family lived in, I was a bit confused — I thought it was a storage shed.

When Carlos talks to friends from his childhood days and tells them that he is a student at an American college, they think he is pulling their legs. You see, as a kid Carlos was a terrible student, ultimately expelled from the eighth grade for causing trouble. He didn’t see the value in what was being said, and wasn’t picking up much of it. It seemed like a waste of time, so he didn’t invest himself in it — and failed miserably.

Since his family was so poor, he decided to help out by working in the banana plantations.  It was back-breaking work, and the others on his team — all much bigger than Carlos — were expecting him to fail. He surprised them all by coming back the next day, and the next — soon he knew not only that job, but other jobs on the plantation as well. The money was a huge help to his family. Carlos really wanted to help his younger brothers and sisters, especially Delma, whose life he saved when she was a toddler — they have a special bond.

But it was really hard work. One day he was taking a break, sitting on a tree stump, thinking about the men working with him. Some were in their 50s, and the work was clearly a challenge for them. “If I keep doing what I’m doing, I’ll be working on a plantation in my 50s just like them,” he thought to himself.  So when the opportunity came to join his mother in the US, he took it.

Coincidentally, within days of his arrival he went to his mother’s graduation ceremony — she had earned her GED (high school equivalency). She was given her diploma and led to a podium with a microphone to say a few words; tears streamed down her face as she thanked everyone.

Carlos had planned to work with his brother in construction, but I told him that if he wanted to get his GED too, he could, and that it would help him. “Here in the US,” I told him in Spanish, “a young woman’s parents may not want her dating a guy who didn’t finish high school. And there are lots of jobs that require a high school diploma.”

The next day he came to me and told me yes — he would like to get his GED. “Great!” I told him. “Let’s see how much you learned in Honduras. What’s 7 times 9?”

He thought for a few seconds before admitting that he had no idea. In math, at least, he remembered very little. Probably he had never learned very much in the first place; he had never had a textbook of any kind. My heart sank a little — it had been a long struggle tutoring his mother through her GED, and Carlos seemed no better prepared.

But he began taking classes, and I set up a whiteboard and helped as best I could. Pretty soon he understood fractions and basic geometry. He tells his friends now that I taught him Spanish, just to see the surprise on their faces; of course he already spoke Spanish fluently, but he needed to learn the basic concepts of Spanish grammar and improve his use of it in writing. It must have felt very strange to have a gringo explain to him how his own language works.

Algebra was a problem. I tried again and again to explain what was going on, taking different approaches, but he just didn’t get it. He couldn’t even tell me what part he didn’t understand — it just wasn’t working. I resigned myself to the idea that that would be where he stopped with math.

But Carlos surprised me. Instead of giving up, he spent day after day with the books (finally he had books!), and before I knew it he was doing basic algebra. He passed the math test easily.

He ended up with his GED in far less time than it had taken his mother. A GED is really not the equivalent of a high school education, but it was still a huge accomplishment given how little he knew when he started. And remember that this is the same kid who was ejected from school in Honduras. The neighbors, who are well-educated, cheered him on; in particular a pair of math teachers across the street were quite fond of Carlos and helped him celebrate his success.

Carlos had enjoyed his taste of algebra, once he got the hang of it, so when we ran across a thick elementary algebra textbook in a used bookstore I asked him whether he wanted it. He did. He disappeared into his room and started working through it on his own. He worked through far more problems than a teacher would have asked him to do, and he mastered the material.

Carlos had to study English for a couple of years before the community college would let him take regular college classes, but eventually he got there. He took College Algebra, then Trigonometry, then Pre-Calculus, and did well in all of them. He liked it. On he went to Calculus I, Calculus II, and Calculus III. Most people can’t handle math classes at that level — who would have guessed that this eighth-grade dropout would do well in them? Then on to Differential Equations, Statistics, and Discrete Math. Carlos has now taken all but a couple of the math classes offered by the community college (and is signed up for one of the remaining courses this summer), along with a lot of physics and chemistry classes and of course the usual requirements for English, Humanities, History, etc. At first it was a huge struggle for him to write even a small paragraph, but now he writes several papers in English every semester and gets good marks. He can now use Windows and Linux and a variety of tools that run on them — even program a little in Python. And his GPA is a hair over 3.5 — not bad for someone who spoke no English and had essentially no education eight years ago!

So what the heck happened here? How can this guy who is remembered in Honduras as the class clown and a dropout have accomplished so much here in the US, and in a different language? Clearly he was smart enough to have succeeded in Honduras; why didn’t he?

I think in large part it boils down to attitude. Oh sure, Carlos was much better supported here, and the quality of education is clearly better. But I’m pretty sure that fixing those things alone wouldn’t have done the trick. In Honduras Carlos couldn’t relate what the teacher was talking about to his life — it all seemed irrelevant. But he came to the US, watched his mother’s success in education, began interacting with educated people with good engineering and teaching jobs and an understanding of all sorts of things, and he saw the relevance. He had worked for years doing back-breaking labor that paid very little, and realized it was a dead end. When he got here he quickly realized that there was a lot he could learn, that he would understand the world around him far better. He understood that the more he learned, the more he could contribute, and the more he would be rewarded financially. He had always wanted to contribute; it just hadn’t seemed to him that school had any relevance to that. Now it did.

So when Carlos had trouble with a subject, he didn’t give up — he just kept working at it, surprising me more than once with his determination. Carlos feels tremendously fortunate to have gotten this education, and he isn’t going to give up easily when there are problems.

There was in fact one semester in which Carlos stumbled: he had discovered YouTube and Facebook and was spending too much time on them. I knew nothing of the problem until he came to me one day, handed me the laptop, and explained what had happened. He was honest enough with himself to realize that he had screwed up and to know what needed to be done — he cut himself off. It was entirely his idea; he had come too far to let an addiction to the Internet steal his future. He ultimately had to take the laptop back, of course, to do his schoolwork.  And the Internet is his connection to much of his family and to his old friends; he needs to spend some time with that. But to this day he comes to me and asks me to cut off his access to the Internet for a week (I use a MAC filter on the router) when he has a lot to do.

Carlos is smart, but he’s not a genius — the magnitude of his success owes more to attitude than aptitude. I find him an inspiration. And I look at Carlos and feel sorry for the many American youth who achieve far less in spite of having so much more support. He tells me about classmates trying to throw together a paper the night before it is due, seldom coming to class, doing the bare minimum it takes to get a passing grade. They have none of the handicaps Carlos had, but they lack his desire and determination to learn, to succeed academically, and to give back to society.

I can’t help but wonder whether they would be doing better now if, before starting high school, they had spent a year or two hauling bananas…