
6 Reasons to Eat Less Meat

Most people seem to think that not eating meat is just about preventing animal suffering, and a lot of people simply can’t (or won’t) empathize with an animal raised in a crowded cage with feces on the floor. But there are many reasons why we would be better off not eating meat:

1. The way we raise the animals — routinely giving them antibiotics — breeds antibiotic-resistant bacteria, leaving us increasingly unable to treat infections.

2. The way we raise the animals — in crowded conditions, often with multiple species in proximity — encourages zoonotic transmission, causing human pandemics.

3. The way we raise the animals — in crowded conditions, so that new hosts to infect are always close at hand — breeds hyper-virulent pathogens (pathogens that do not try to minimize harm to their hosts).

4. Raising animals for food is much harder on the environment, in a number of ways.

5. Our health is much the poorer for all the animal flesh we eat (obesity, heart disease, diabetes, and much more).

6. The animals we raise in factory farms have horrific lives, worse than the worst concentration camp.

When I look at this list, I’m sold at #1. Even if none of the rest were true, I would say “Hey, how about we all eat less meat!” Same with #2. These are HUGE problems!

Compiler Explorer and Rust

Well, I just learned something fun. There is a site, Compiler Explorer, that makes it easy to see what assembly code is generated for some code in any of dozens of languages, including C and Rust. You get to choose the compiler version and specify the options. Here I wanted to see whether at opt-level 3 the compiler would combine two calls to the same function (it doesn’t).
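To give a flavor of the experiment, here is a minimal sketch of the kind of thing you can paste in (the function and its duplicated call are my own illustration, not the original snippet):

// Will rustc at -C opt-level=3 collapse the two calls into one?
// Paste into Compiler Explorer and inspect the generated assembly.
fn expensive(x: u64) -> u64 {
    (0..x).map(|i| i.wrapping_mul(i)).sum()
}

pub fn twice(x: u64) -> u64 {
    expensive(x) + expensive(x)
}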

You can even bring up the code generated by two compiler versions, or two compiler settings, side by side for comparison. Awesome tool!

Rust in the Browser is About More than Performance

I see folks talking about compiling Rust code to WebAssembly (so that it can be run in the browser) as if it were just about performance. They might further judge that most apps are fast enough in Javascript, so Rust in the browser is going to be a very niche thing.

Of course it’s true that Javascript is the de facto language of the browser, and that isn’t going to change anytime soon. However, looking around at existing applications and saying “meh, they’re fine in Javascript” really misses the point.

First, Rust is a far, far better language than Javascript. I don’t want to start a language war, but Javascript is well known to have more warts than a frog. If you don’t like type systems you won’t like Rust, but then you won’t have any use for my blog either. I recognize the value of good type systems, and Rust has a good one. It also does concurrency well, has a good macro system, and much more.

So another reason (besides performance) that you might want to use Rust rather than Javascript for certain tasks is that you want to write higher quality, more robust code.

By the way, even if Javascript performs well enough for your app, you might appreciate not having garbage collection pauses. Rust doesn’t have any.

Nor does Rust come with a runtime. One reason that Rust is especially attractive in the browser is that what gets downloaded is little more than the WebAssembly for your code plus a memory allocator. Sure, you could use C# or Go or whatever, but if the language is garbage-collected then the downloaded WebAssembly has to include a garbage collector — and whatever else is in that language’s runtime. Rust makes for a leaner download.
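To give a sense of how small that surface is, here is a minimal sketch of exposing a Rust function to the browser (this assumes the wasm-bindgen crate and a build tool like wasm-pack; the function itself is just my own example):

use wasm_bindgen::prelude::*;

// Exported to JavaScript as `add`; the compiled .wasm module carries
// little beyond this code and the allocator.
#[wasm_bindgen]
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}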

Also remember that by virtue of being more performant, Rust uses less battery to do the same task. If your app does something CPU-intensive, then even if Javascript is fast enough to be usable, phone/tablet users will still appreciate you not sucking the battery dry.

And RAM. In a language that wants all objects on the heap and a header on every object, like Java, you end up using about 40 bytes to hold the empty string. That’s not true in Rust. And depending on the type of garbage collector, extra memory may be needed for copying objects around.
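For contrast, a quick check you can run yourself (my own snippet) shows that an empty Rust String is a 24-byte stack value on a 64-bit target, with no heap allocation at all:

fn main() {
    let s = String::new();
    assert_eq!(s.capacity(), 0); // no heap buffer was allocated
    // pointer + length + capacity = 24 bytes on a 64-bit target
    println!("{} bytes", std::mem::size_of::<String>());
}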

And finally, don’t just think about *existing* applications — having a more performant language with no GC pauses opens up new possibilities, things that people would not previously have considered suitable for a browser app.

It is still true that most applications present no compelling case for using WebAssembly — they just don’t do anything that significant outside of the GUI and talking to some server. For an app like that, with a team that is already facile with Javascript, and given the gymnastics of getting into and out of WebAssembly modules, why bother with WebAssembly?

So I agree that WebAssembly isn’t something that you will use with every app. However, when you do want a better tool than Javascript, enough to be worth dealing with WebAssembly, why would you choose anything but Rust? Much of the point was to improve performance and/or avoid GC pauses, right? Well, Java/C#/Go/etc. are going to take longer to load, perform worse, have GC pauses, use more RAM, use more battery, and be vulnerable to null pointer exceptions, data races, and other problems. If you are going to go to the trouble of using WebAssembly to improve a browser app, why not use the language that will do the best job of it?

A Thought on the College Admissions Scandal

The world of US university education experienced an earthquake recently when a federal indictment exposed wealthy parents paying to get their underperforming kids accepted into universities.  I’d like to share my personal take on this, channeling my inner Milton Friedman.

Instead of trying to ferret out and prosecute what we currently consider corruption, why not make it legal and visible?  This will surely sound weird at first, but hear me out.

Each university would produce a ranked list of applicants, each with an associated score derived from metrics indicating their likelihood of success — AND NOTHING ELSE — and then let parents sweeten the pot.  If you really want your mathematically mediocre kid in Caltech’s physics department, you can pay extra according to some published schedule to bump their score.

All above board.  By definition it’s not cheating.  The base score is STRICTLY about potential and is subject to review and challenge.  The base score is never bumped because your parents went to the school, or because of your race or religion or anything else.  The only way to bump the score is with money.

Wait — isn’t that bad?  Don’t we want to force colleges to accept the applicants we think are most deserving?

I’m strongly in favor of getting those students with “the right stuff” intellectually a good education so that they are more productive (and pay more taxes).  So given that there is not enough education to go around, how do we get more of it?

Money.  Note that by accepting that bump money from parents whose kids wouldn’t otherwise rank high enough, we are obviously getting that one kid into school who wouldn’t otherwise go, at least not there.  Sure, some of them will fail — at least one of the kids identified in the scandal didn’t even want to go to college — but the more likely the kid is to fail, the more money the parents have to give the school in order to let them try.  And you use that money to provide more/better education the next year.  This isn’t a zero-sum game; more money means that more kids can get educated.  (And even those rich kids who fail will probably learn *something* from the attempt.)

Want to give a break to a particular kid you believe deserves it?  Fine, there’s a way — the same way open to everyone else.  Want to give an entire class of people a break, like poor inner-city kids?  Fine, there’s a way.  It’s all right there in the open, and it works the same for everyone.  Get some money allocated to the project and you’re off to the races.  The costs of the program are obvious.  No crooks getting rich off of bribes.  No encouraging parents who want the best opportunities for their kids to cheat.  No more spending money to investigate that crap.  Who would bribe a crook to cook their kid’s SAT results, knowing that they could be convicted of a felony (and publicly humiliated) for doing so, when they could just pay the university directly to let their kid in, knowing that the money would be used to educate more kids?

OK, I’ll bet now you’re worried that the school will just keep all of the money instead of using it to educate more students.  But that would be tremendously stupid of the school, since they only get the money for the kids they admit and educate.  More kids admitted, more money.  More money, more slots for more kids.  Any school that decided to take only rich parents’ kids would simply be giving up the opportunity to serve the rest, along with the associated revenue.  Some other school will gladly step up and take the money.

And universities would be required to be transparent about the average score of the kids they admitted, the bump money that they accepted, etc.  If a university accepts so much bump money that it drags down the average scores, then maybe you don’t want to go there.  Every university would strike its own balance, not accepting so much bump money that it would significantly degrade the school’s reputation.  Students could see where various universities land on this spectrum and decide whether they want to pay more in order to get a more “elite” experience.

In short, let rich people pay extra to get their kids in — more kids get educated that way.

If right now you’re thinking that people shouldn’t be able to buy *preferential* access to a university education, you’re still thinking that it’s a zero-sum game, that for some reason having more money does not allow us to educate more students.

I actually think we need far more radical reforms to our education system than this, but I don’t have any problem at all with rich people buying their kids an education that they couldn’t get on intellectual promise alone.  And I would argue that you don’t mind either if you think that it’s OK that star athletes are given preference in admissions — you know that schools do that because people donate more to schools with competitive athletic programs, right?  Not to mention ticket sales.  It has nothing to do with the sport being important in any educational sense; in fact you could easily argue that deifying athletes is anti-educational.  Those athletic kids are accepted ahead of others who are intellectually more promising because of the money they bring with them.  The only difference from the current scandal is that the money is not coming from their parents.  If that difference somehow makes it OK in your mind, I can’t relate to your thought process.

So let’s turn this education scandal around, explicitly making a place for money to influence admissions rather than forcing it underground, where it would continue anyway but with the money going to unsavory folks.  Accept the money in broad daylight and use it to educate more kids.  Let’s be completely up front about it with students and parents — the more you look like a good bet academically, the less you pay, but anyone can come here to study.  Once in, you earn your grades on merit, and *that* is where we will be looking for corruption.

Teach your kids to curse!

“Oh, schmarkle!”

Imagine your son or daughter saying that in class.  Not too upsetting, I’m guessing.

Curse words are upsetting because those within earshot consider them vulgar.  And curse words are satisfyingly cathartic (and potentially even healthy) because of the utterer’s personal culture.  Usually those considerations are aligned, but what if you were to teach your children to curse using words that have no significance to anyone else?  It might sound a little silly, perhaps even humorous, but it wouldn’t sound offensive.  So schmarkle away!

Then later, when friends inevitably introduce your son or daughter to “real” curse words, they will have some degree of immunization against adopting them — the ingrained habit of using an inoffensive word.

O(1) sum_of_multiples() in Rust

I had been working mostly in Scala for a while, then took a diversion into Swift and Objective-C.  I wanted to learn another language after that, and had all but decided on Clojure.  But Rust kept nagging at me — there was something about it.  Perhaps I’ll blog more about that later.

So I watched some videos, then read the book, and then started the Rust track at Exercism.io.  Nice site.  One of the exercises is about generating the sum of all multiples under some limit for a given list of factors.  So for example, if the factors are 3, 5, and 7, then the sum of the multiples under 12 is 3 + 5 + 6 + 7 + 9 + 10 = 40.

A naive solution is pretty straightforward:

pub fn naive_sum_of_multiples(limit: u32, factors: &[u32]) -> u32 {
    (1..limit)
        .filter(|&i| factors.iter().any(|&f| i % f == 0))
        .sum()
}

It just iterates through all of the eligible numbers, picking out any that are evenly divisible by any of the factors, and sums them.  Rust’s iterators make that look pretty similar to a lot of other modern languages, like Scala.

But this solution is slow.  I mean really slow — at least compared to what’s possible.  As limit grows, the execution time increases linearly; that is, this is O(n).  (With respect to the limit, that is — we’re ignoring the (very real) impact of the number of factors on execution time. Note that in Exercism’s tests, there are generally two factors, max three, so for simplicity let’s focus on the limit here.)

Well, O(n) doesn’t sound so bad, does it?  But here’s the thing: it could be O(1).

What???  O(1)?  That would mean that you aren’t doing any more work if you increase the limit — and thus the numbers summed — by a factor of 100.  That can’t be right!  But yes, it is.

Here is the key observation.  The instant you see it, your brain will likely jump halfway to the solution.  The sum of (for example)

3 + 6 + 9 + 12 + 15 + 18 + 21

is just

3 * ( 1 + 2 + 3 + 4 + 5 + 6 + 7)

Got it yet?  Remember that the sum of the numbers from 1 to n is simply the value of n * (n + 1) / 2 — actually summing the numbers is unnecessary!
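In code, that observation looks something like this (a helper with my own naming, not part of the exercise’s API):

// Sum of all multiples of `factor` up to and including `limit`,
// using the closed form instead of iterating.
fn sum_of_multiples_of(factor: u64, limit: u64) -> u64 {
    let n = limit / factor;     // how many multiples of `factor` are <= limit
    factor * (n * (n + 1) / 2)  // factor * (1 + 2 + ... + n)
}

// e.g. for factor 3 and limit 21: n = 7, and 3 * (7 * 8 / 2) = 84,
// matching 3 + 6 + 9 + 12 + 15 + 18 + 21 = 84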

That’s not all there is to it, though. If asked to sum the multiples of 3 and 5 less than 1000, we can use the above technique to sum up the multiples of 3 and the multiples of 5 and then add them together, but we will have counted the multiples of 15 twice; we have to subtract those out.

And that’s a little trickier to do than it sounds. First, what you really need to subtract out are the multiples of the least common multiple (lcm) of the two numbers, not their product. So, for example, if asked to sum the multiples of 6 and 15, we need to subtract off the multiples of 30 (not 90). The lcm of two numbers is their product divided by their greatest common divisor (gcd).

Also, we need to do this for an arbitrarily long list of numbers, so consider what happens if we are asked to sum the multiples of 4, 6, and 10:

  • First sum the multiples of 4.
  • Then add in the multiples of 6, but subtract the multiples of lcm(4, 6) = 12.
  • Then add in the multiples of 10, but subtract the multiples of lcm(4, 10) = 20 and the multiples of lcm(6, 10) = 30.

But oops, now we have gone the other way, subtracting off the multiples of 20 and 30 in common (60, 120, …) twice, and our result is too low, so we’ll have to add those back in. And if there were multiple corrections at that level (i.e. if we were given a larger list of numbers), we’d have to subtract their elements in common, and so on ad infinitum. At every step we have to take care not to add or subtract the same numbers twice.
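As a concrete check of the bookkeeping, take the factors 4, 6, and 10 with an inclusive limit of 30. The multiples of 4 sum to 112, those of 6 to 90, and those of 10 to 60. Subtract the multiples of 12 (36), of 20 (20), and of 30 (30), then add back the multiples of lcm(4, 6, 10) = 60 (there are none up to 30): 112 + 90 + 60 - 36 - 20 - 30 = 176, which matches summing 4, 6, 8, 10, 12, 16, 18, 20, 24, 28, and 30 directly.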

That sounds like a pain, but using recursion it’s actually fairly straightforward.  In the following Rust code, I’ve changed the API a bit from the Exercism problem.  First, the integers are u64, so that we can use much bigger limits.  And secondly, in this case we’ll sum all multiples up to and including limit.  It’s an arbitrary choice anyway, and doing it this way saves us a small step and keeps the code a bit clearer.

pub fn fast_sum_of_multiples(limit: u64, factors: &[u64]) -> u64 {
  // Divide before multiplying to reduce the risk of overflow in a*b.
  fn lcm(a: u64, b: u64) -> u64 { a / gcd(a, b) * b }
  fn gcd(a: u64, b: u64) -> u64 { if b == 0 { a } else { gcd(b, a % b) } }
  fn sum_from_ix(i: usize, limit: u64, factors: &[u64]) -> u64 {
    if i == factors.len() {  // we've processed all factors
      0
    } else {
      let factor = factors[i];
      let n = limit / factor;  // # of multiples of factor to sum
      let sum_of_multiples_of_factor = factor * (n * (n + 1) / 2);
      // Overlaps with factors already counted are the multiples of
      // lcm(prev_factor, factor); any lcm beyond limit contributes nothing.
      let new_factors: Vec<_> = factors[..i].iter()
        .map(|&prev_factor| lcm(prev_factor, factor))
        .filter(|&l| l <= limit)
        .collect();
      let sum_of_previously_seen_multiples_of_factor =
        sum_from_ix(0, limit, &new_factors[..]);
      let sum_of_multiples_of_rest_of_factors =
        sum_from_ix(i + 1, limit, factors);
      sum_of_multiples_of_factor
        - sum_of_previously_seen_multiples_of_factor
        + sum_of_multiples_of_rest_of_factors
    }
  }
  sum_from_ix(0, limit, factors)
}
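A quick sanity check (my own, reusing the well-known “multiples of 3 or 5 below 1000” puzzle; since this version’s limit is inclusive, we pass 999):

fn main() {
    // Multiples of 3 or 5 up to and including 999 sum to 233168.
    assert_eq!(fast_sum_of_multiples(999, &[3, 5]), 233_168);
}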

This is not too far from what a Scala solution would look like, although I think the Scala solution is a bit more readable.  In part this owes to Scala’s persistent List data structure, which was a little cleaner to work with here than a Rust Vec.  Also, Scala’s implicits make it unnecessary to make two calls that are explicit in the Rust version: iter() and collect().  Oh, and in Rust functions cannot close over variables; only closures can do that, but we couldn’t use a closure since in Rust closures cannot be recursive.  That forced us to explicitly include limit in the argument list of sum_from_ix().  Ampersands here and there in closures.  Semicolons.  These are little things, but collectively noticeable.

def fastSumOfMultiples(limit: Long, factors: List[Long]): Long = {
  def lcm(a: Long, b: Long) = a*b / gcd(a,b)
  def gcd(a: Long, b: Long): Long = if (b == 0) a else gcd(b, a%b)
  def sumOfMults(factors: List[Long], prevFactors: List[Long] = Nil): Long =
    factors match {
      case Nil => 0
      case factor::rest =>
        val n = limit / factor  // # of multiples of factor to sum
        val sum_of_multiples_of_factor = factor * (n*(n+1)/2)
        val sum_of_previously_seen_multiples_of_factor =
          sumOfMults(prevFactors.map(lcm(_, factor)).filter(_ <= limit))
        val sum_of_multiples_of_rest_of_factors =
          sumOfMults(rest, factor::prevFactors)
        sum_of_multiples_of_factor -
           sum_of_previously_seen_multiples_of_factor +
           sum_of_multiples_of_rest_of_factors
    }
  sumOfMults(factors)
}

But really, the difference is not so stark.  And the Rust version does not need a garbage collector — amazing!

On to the next exercise….

Thoughts on “Self-Serving Bias”

If someone were to ask you and your roommate what percent of the work around the house you each do, the answers would almost surely total to more than 100%; you each likely overestimate your contribution. This is an oft-mentioned example of “self-serving bias.”

While I have no doubt that this bias exists, there is one confounding factor to be careful about before attributing such a discrepancy in assessment to self-serving bias, one I have never seen discussed.  That confounding factor is that we value things differently.

My wife, when I was married, used to iron my undershirts after they came out of the dryer.  I asked her not to do it: not only was it a waste of her time because the wrinkles would not be visible, it was also a waste of energy both to iron them and then for the AC to remove the heat from the house.  She saw things differently and continued to iron them.

So when thinking about how much work she did around the house, there were those twenty minutes of ironing she did for me.  But I counted that activity more as an annoyance than as a contribution.

I had another partner who had a curio cabinet with shelves of little glass unicorns and other trinkets on display.  We lived in Phoenix, so all those little pieces needed frequent dusting; in her mind that effort was part of the housework.  But that curio cabinet did nothing for me; it was strictly for her pleasure.  Whatever dusting she did on those unicorns was maintenance on her hobby, as far as I was concerned, comparable to me keeping the tires inflated on my bike.

There can even be discrepancies for work that is valued by both.  For my partner, keeping the kitchen tidy may involve putting on a shelf in the pantry some things that I would prefer to see left on the counter, like the cinnamon and jars of nuts.  Or imagine living with someone who wants the carpets vacuumed every week, when you are fine with once a month.

None of this is to say that humans don’t engage in self-serving bias — just that such discrepancies may also owe in part to discrepancies in what people consider the goal.

Wanted: DNA for Smart Brains

OK, smart people, you may be in a position to help science! There is important research underway to help determine the genetic factors of intelligence; our genes have a large influence on how our brains are built, and this effort will try to determine how the genes of highly intelligent people differ from those of the general population. This research is likely to be ground-breaking, and the project is looking for test subjects. If your test scores meet any of the following criteria, you qualify:

– a pre-1995 SAT score of at least 780 Math and 700 Verbal
– a post-1995 SAT score of at least 800 Math and 760 Verbal
– a GRE score of at least 800 Quantitative and 700 Verbal
– an ACT score of at least 35

Sound like you? If you still have the documentation, apply at

https://www.cog-genomics.org/

If selected, you’ll be sent a little kit for collecting saliva. You’ll be able to see the results of your own DNA analysis.

There’s a great talk on the subject, too.  Sounds interesting!

Oxytocin and relationships

Oxytocin is all in the news these days, and people seem to want a better understanding of the role the hormone plays in relationships. Here is an analogy based on my take on it.

Have you ever made bread? I had a breadmaker for a long time, a gift from a good friend. It eventually died of old age, after making literally thousands of loaves. I experimented with the ingredients all the time. Every once in a while I would forget to add an ingredient, and the results were, as a rule, disastrous.

I made whole-wheat loaves, which are a bit touchy. If I forgot to add an egg — or gluten or something else to increase the protein content — everything would still go fine in the beginning. The yeast would eat the sugars in the honey or molasses and release CO2, causing the loaf to rise beautifully.

At that point the heat was supposed to chemically change the protein in the gluten or egg, to denature it, making it firm so that the loaf would hold its shape. With too little protein, the bread would rise and then, as the yeast started to die from the heat, fall in the center. As that network of tiny holes collapsed, the moisture could no longer evaporate out — the result was a dense, moist brick of firm dough with a nearly bulletproof crust.

For an adult whose brain doesn’t manufacture much oxytocin — perhaps through some misfortune of genetic inheritance, or a lack of parental love as a child — trying to have an intimate relationship may be a bit like baking that doomed loaf of bread. In the early stages of a relationship the brain is swimming in hormones like dopamine and serotonin and adrenalin, and all goes well. Like yeast happy in their sugar-rich environment, the relationship feeds on the resulting euphoria and grows by leaps and bounds.

Eventually, though, the high wears off. Oxytocin is supposed to be in play at that point, though, keeping you firmly attached to your beloved. Ideally you feel as though your life and your partner’s life are more or less one life shared by the two of you, in a large emotional space opened up by passion and given permanence by oxytocin. You still have your own separate interests, of course, but the core of your life is not just in a space shared through some act of will but rather is the same as the core of your partner’s life.

But if your brain doesn’t produce enough oxytocin, you might still feel like a person with his/her own separate life, sharing an intimate world with somebody you enjoy and care deeply about but who is nevertheless, in some sense, ever so subtly intruding on your life.  You make compromises for the benefit of the relationship, but those compromises feel different than they would if you were really bonded to your partner. Disappointments and inconveniences accumulate because goals are not truly shared.  You begin to secretly wish that your life were your own again.

Eventually the relationship collapses into a disagreeable blob that is impossible to get out of the pan without a hacksaw.

Whole wheat, anyone?

Amazon Unofficial Errata, Kindle Thoughts

Amazon makes an attempt to be a one-stop shop for information about products. This increases the value of their site to consumers, making it one of the first places many think to go when they are looking to make a purchase.

Cool.

OK, how about this, then. Under any book they could have an

Unofficial Errata

link that takes you to something like a forum, where people give an edition number, a page number, the existing text, and what they think is wrong with it. Maybe a fixed-in-edition field. People could respond to that post saying “no, you got it wrong” or whatever. And Like/Dislike buttons.
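Just to make the shape of such an entry concrete, here’s a sketch of the record a report might carry (all of the field names are my own invention, not anything Amazon offers):

// Hypothetical shape of one unofficial erratum report.
struct Erratum {
    edition: u32,                   // edition/printing being reported against
    page: u32,                      // page number in that edition
    existing_text: String,          // the text as printed
    problem: String,                // what the reporter thinks is wrong
    fixed_in_edition: Option<u32>,  // set once a later printing corrects it
    likes: u32,
    dislikes: u32,
}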

The publisher would probably still have their own *official* errata page. But I think Amazon could easily take over *unofficial* errata. Surely the errata forum software would not be that hard to write, given that they already have other forums.

Or heck, maybe publishers would rather Amazon take care of the official errata as well. The publisher/author could log in and “approve” errata.

So now imagine that such forums exist. If you own a Kindle, perhaps you could say that you want errata to be shown inline. The old text would be crossed out, and the errata version would be shown. Perhaps the user could select “show only errata with more Likes than Dislikes” or “show only approved errata.”

Kindle Sidekick Mode?

Speaking of Kindles, Amazon’s big Kindle is way more expensive than the little one — probably a reflection of two things: lower volume, and a higher failure rate in larger display sizes. The failure rate is bound to come down, but in the meantime, how about making it possible to use a *pair* of little Kindles together in interesting ways? For example, let’s say I’m reading text that refers to a diagram; can I bring the diagram up in the other Kindle and keep reading? Or it could be a reference to a table, or a picture, or a concept in a previous chapter, or a URL, or errata, or a note I wrote, or a link *I* made to a page in another book, or whatever? Or I could be in the index or table of contents in one Kindle and whatever I selected would show up in the other Kindle? Or maybe I simply want to turn them sideways and show two synchronized pages at once, one above the other. Or maybe I want to zoom into two different spots on the same map. You could imagine a holder that looks like a book, with a Kindle on each side.

The second Kindle could be the cheaper one, at $139, or I might just be using one I borrowed from a spouse or a friend, temporarily displaying text from my book without storing it there. So for significantly less ($139 + $189 = $328) than the cost of the larger Kindle ($379) I could have a *pair* of Kindles which I could use together or separately. And when I used them together, I could do some interesting things with them, with one Kindle driving the other. If one of them broke or the battery died, well, I’d still have the other. And either separately or together they would be more portable than the big one.

I wonder whether they have thought about doing something like that…

The Kindle I Would Like

While we’re on the topic, I haven’t yet bought a Kindle. One of the main reasons for this is that I’ve heard that its support for PDF files isn’t great (as of the Kindle 3), and apparently its PDF-to-Kindle conversion facility only works well with simple documents.  Since much of what I would like the Kindle for is to read PDFs, I’m holding off for now. I’d really want the following for PDF support:

  • accurate rendering
  • good page-turn performance
  • usable navigation on the page
  • the ability to recognize two-column text and flow through it naturally
  • a usable zoom facility
  • search capability
  • the choice between portrait and landscape modes
  • the ability to add annotations (highlights and text notes).

Beyond that, I’d really prefer a Kindle with a slightly larger screen. Seven inches is common for LCD readers, and for technical books with code listings and diagrams that seems like a big advantage.

In fact, one such reader, the Ematic EB101, has no network interface at all; you add books via a memory card or USB, and for that reduced functionality the price drops to $80. For me, that’s a great trade-off: I am on computers a lot, so plugging in a reader once in a while to pick up a new document is no problem.  In fact, for some work documents, I am not allowed to upload them to Amazon, as I would be required to do to read them on the Kindle; direct transfer to the device is essentially the only option.

The reason I haven’t bought that reader is that it lacks the Kindle’s highly readable screen and its battery life.  I’m hoping that the next Kindle will get closer to providing the best of both worlds.

UPDATE

I just discovered the enTourage Pocket eDGe — that’s the idea!