6 Reasons to Eat Less Meat

Most people seem to think that not eating meat is just about preventing animal suffering, and a lot of people simply can’t (or won’t) empathize with an animal raised in a crowded cage with feces on the floor. But there are many reasons why we would be better off not eating meat:

1. The way we raise the animals — routinely giving them antibiotics — breeds antibiotic-resistant bacteria, leaving us increasingly unable to treat infections.

2. The way we raise the animals — in crowded conditions, often with multiple species in proximity — encourages zoonotic transmission, causing human pandemics.

3. The way we raise the animals — in crowded conditions, so that new hosts to infect are always close at hand — breeds hyper-virulent pathogens (pathogens that do not try to minimize harm to their hosts).

4. Raising animals for food is much harder on the environment, in a number of ways.

5. Our health is much the poorer for all the animal flesh we eat (obesity, heart disease, diabetes, and much more).

6. The animals we raise in factory farms have horrific lives, worse than the worst concentration camp.

When I look at this list, I’m sold at #1. Even if none of the rest were true, I would say “Hey, how about we all eat less meat!” Same with #2. These are HUGE problems!

Thoughts on Educational Priorities

It seems like our education system currently has the goal of trying to get the bulk of students to some level of knowledge we consider minimal.  It’s like we aim at the middle 60% or so of the curve, presenting material that they should be able to understand, at a pace that works for them. OK, some folks below that won’t get it, and some folks above that could learn much more and more quickly, but we’re doing roughly the right thing for that 60% or so.

And schools spend extra effort/money trying to educate that bottom 20%.  No child left behind! Get everyone to that minimum!

But then these people go out into the workforce and it’s largely that *top* 20% that creates the value in STEM fields.  And the vast majority of the other 80%, the ones that we organized our education system around, well, most of them can’t *really* make effective use of what we taught them.  Yes, they learned algebra well enough to pass that exam in high school, but they didn’t really GROK it.  Now they’ve forgotten most of it, and even the part they *do* sort of remember they do not grok well enough to use it to improve any process.

Of course one problem is that we are forcing everyone to learn at the same pace in the first place.  We need to find a way to let the top 20% go fast and the bottom 20% go slow.

But it also feels like we are investing the least in the people who offer the most potential for return.  Why not try hard to maximize what the really bright kids get out of school?  Instead of taking the attitude “they’re fine — those others need help,” spend more money on the bright kids.

Jonathan Wai, a psychologist studying the correlation between early cognitive ability and adult achievement, has been quoted as saying “The kids who test in the top 1% tend to become our eminent scientists and academics, our Fortune 500 CEOs and federal judges, senators and billionaires.”

And if some kids are having a really hard time with even basic algebra, why torture them?  Let them focus on something else, something that comes more naturally to them.  No matter how much we pressure them, there is very little chance that they are going to make use of the knowledge outside of the classroom, let alone be the ones who change the world through STEM advances.  If we *do* manage to get them through an algebra course, and the last they think about algebra is the immense relief that they never have to do that again, haven’t we wasted our time, wasted their time, and done major damage to their self-esteem for nothing?

Don’t get me wrong. I think that everyone should give math their best shot, and go as far as they can — with solid comprehension and high retention. Don’t give up on those who struggle! But don’t insist that they all acquire certain skills, when we all know darn well that a lot of them are not going to remember it after the test. Really, what is the point in that?

I would like to see more effort and attention going toward the Diracs and Feynmans of the next generation, not just the ones who are trying to meet a requirement by demonstrating a skill they will never use in the real world.

For example, perhaps you could arrange for the top students of all high schools in the state to get together online under the tutelage of the best math and science educators for their level.  Have them dive much deeper into the material, interacting with their equally bright peers in small groups.  Young Einsteins and Eulers become friends during class, even though they live in different cities (or even states or countries!), and talk about what they are learning over Skype. Awesome!

My claim is that the top students could learn much more, and more deeply, without being held back (and often socially regarded as freaks) by the other students.  I know this to be true in part because one grade school I know of tried an experiment: they gave all the kids in the upper grades a long math test, and then divided them up into about a dozen groups based on aptitude.  Then they taught each group at its own pace.  Unfortunately they spent no money at all on the top students — they simply put them in separate rooms, alone, with a box of lessons.  Take the next lesson from the box, read the material, and work the problems.  Nobody to talk to — at all.  It was not a good environment: it was socially isolating, and the learning materials were not designed for that purpose.  Yet the top student did 22 months’ worth of material in 9 months — in spite of feeling bored and alone, and having nobody to ask questions of.

A less visible benefit accrued to the *other* students: they were less afraid of looking stupid.  Average students are loath to venture a guess when they know that the “class brain” over there knows the answer for sure and is just holding back to give others a chance.

Imagine if you spent *more* money on the top students, not *less*!  Imagine if instead of teaching the top kids the same way you teach the average kids, you really dug in to each topic and had the kids grovel around in the concepts, soaking it all up.  What *is* algebra, really?  There are other algebras — let’s explore some!  What is a *number*, really?  Prove some interesting theorems.  Use a language like Idris or Agda to do some math. Imagine the top 20% coming out of that education system!  Wouldn’t you expect that the downstream economic benefit from having taught that top 20% much more, and with deeper understanding, would more than cover the extra cost of teaching them at their own pace rather than just shunting them off to be taught with the middle of the bell curve?

I doubt that this would ever fly, because society places value on equality.  Teach them equally, please, even if that means extra effort spent on those with the least aptitude.  Even if most will never use it.  Even if those most likely to change the world with it could have learned far more, and been much better prepared to create the future with that knowledge.

Equally is not how I would do it.

Compiler Explorer and Rust

Well, I just learned something fun. Compiler Explorer is a site that makes it easy to see what assembly code is generated for some code in any of dozens of languages, including C and Rust. You get to choose the compiler version and specify the options. Here I wanted to see whether at opt-level 3 the compiler would combine the two calls in my test snippet (it doesn’t).
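The snippet itself didn’t survive here, but a hypothetical stand-in of the same flavor (the function names are mine, not the original code) would look something like this: two identical calls, pasted in to see whether the optimizer folds them into one.

```rust
// Hypothetical stand-in: does the optimizer combine the two calls to scaled()?
fn scaled(v: &[u64], i: usize) -> u64 {
    v[i] * 2
}

pub fn twice(v: &[u64]) -> u64 {
    // Two identical calls -- candidates for the compiler to merge.
    scaled(v, 0) + scaled(v, 0)
}

fn main() {
    println!("{}", twice(&[21, 1, 2]));
}
```

Paste a function like that into the site, pick a rustc version, add a flag like -C opt-level=3, and the generated assembly appears alongside the source.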

You can even bring up the code generated by two compiler versions, or two compiler settings, side by side for comparison. Awesome tool!

Rust in the Browser is About More than Performance

I see folks talking about compiling Rust code to WebAssembly (so that it can be run in the browser) as if it were just about performance. They might further judge that most apps are fast enough in Javascript, so Rust in the browser is going to be a very niche thing.

Of course it’s true that Javascript is the de facto language of the browser, and that isn’t going to change anytime soon. However, looking around at existing applications and saying “meh, they’re fine in Javascript” really misses the point.

First, Rust is a far, far better language than Javascript. I don’t want to start a language war, but Javascript is well known to have more warts than a frog. If you don’t like type systems you won’t like Rust, but then you won’t have any use for my blog either. I recognize the value of good type systems, and Rust has a good one. It also does concurrency well, has a good macro system, and much more.

So another reason (besides performance) that you might want to use Rust rather than Javascript for certain tasks is that you want to write higher quality, more robust code.

By the way, even if Javascript performs well enough for your app, you might appreciate not having garbage collection pauses. Rust doesn’t have any.

Nor does Rust come with a runtime. One reason that Rust is especially attractive in the browser is that little more than the WebAssembly for your code, plus a small memory allocator, gets downloaded. Sure, you could use C# or Go or whatever, but if the language is garbage-collected then the downloaded WebAssembly has to include a garbage collector — and whatever else is in that language’s runtime. Rust makes for a leaner download.

Also remember that by virtue of being more performant, Rust uses less battery to do the same task. If your app does something cpu-intensive, then even if Javascript is fast enough to be usable, phone/tablet users will still appreciate you not sucking the battery dry.

And RAM. Given a language that wants all objects on the heap and a header on every object, like Java, you end up using 40 bytes to hold the empty string. That’s not true in Rust. And depending on the type of garbage collector, extra memory may be needed for copying objects around.

And finally, don’t just think about *existing* applications — having a more performant language with no GC pauses opens up new possibilities for things that people would not previously have considered a browser app suitable.

It is still true that most applications present no compelling case for using WebAssembly — they just don’t do anything that significant outside of the GUI and talking to some server. For an app like that, with a team that is already facile with Javascript, and given the gymnastics of getting into and out of WebAssembly modules, why bother with WebAssembly?

So I agree that WebAssembly isn’t something that you will use with every app. However, when you do want a better tool than Javascript, enough to be worth dealing with WebAssembly, why would you choose anything but Rust? Much of the point was to improve performance and/or avoid GC pauses, right? Well, Java/C#/Go/etc. are going to take longer to load, perform worse, have GC pauses, use more RAM, use more battery, and be vulnerable to null pointer exceptions, data races, and other problems. If you are going to go to the trouble of using WebAssembly to improve a browser app, why not use the language that will do the best job of it?

A Thought on the College Admissions Scandal

The world of US university education experienced an earthquake recently when a federal indictment exposed wealthy parents paying to get their underperforming kids accepted into universities.  I’d like to share my personal take on this, channeling my inner Milton Friedman.

Instead of trying to ferret out and prosecute what we currently consider corruption, why not make it legal and visible?  This will surely sound weird at first, but hear me out.

Each university would produce a ranked list of applicants, each with an associated score derived from metrics indicating their likelihood of success — AND NOTHING ELSE — and then let parents sweeten the pot.  If you really want your mathematically mediocre kid in Caltech’s physics department, you can pay extra according to some published schedule to bump their score.

All above board.  By definition it’s not cheating.  The base score is STRICTLY about potential and is subject to review and challenge.  The base score is never bumped because your parents went to the school, or because of your race or religion or anything else.  The only way to bump the score is with money.

Wait — isn’t that bad?  Don’t we want to force colleges to accept the applicants we think are most deserving?

I’m strongly in favor of getting those students with “the right stuff” intellectually a good education so that they are more productive (and pay more taxes).  So given that there is not enough education to go around, how do we get more of it?

Money.  Note that by accepting that bump money from parents whose kids wouldn’t otherwise rank high enough, we are obviously getting that one kid into school who wouldn’t otherwise go, at least not there.  Sure, some of them will fail — at least one of the kids identified in the scandal didn’t even want to go to college — but the more likely the kid is to fail, the more money the parents have to give the school in order to let them try.  And you use that money to provide more/better education the next year.  This isn’t a zero-sum game; more money means that more kids can get educated.  (And even those rich kids who fail will probably learn *something* from the attempt.)

Want to give a break to a particular kid you believe deserves it?  Fine, there’s a way — the same way open to everyone else.  Want to give an entire class of people a break, like poor inner-city kids?  Fine, there’s a way.  It’s all right there in the open, and it works the same for everyone.  Get some money allocated to the project and you’re off to the races.  The costs of the program are obvious.  No crooks getting rich off of bribes.  No encouraging parents who want the best opportunities for their kids to cheat.  No more spending money to investigate that crap.  Who would bribe a crook to cook their kid’s SAT results, knowing that they could be convicted of a felony (and publicly humiliated) for doing so, when they could just pay the university directly to let their kid in, knowing that the money would be used to educate more kids?

OK, I’ll bet now you’re worried that the school will just keep all of the money instead of using it to educate more students.  But that would be tremendously stupid of the school, since they only get the money for the kids they admit and educate.  More kids admitted, more money.  More money, more slots for more kids.  Any school that decided to take only rich parents’ kids would simply be giving up the opportunity to serve the rest, along with the associated revenue.  Some other school will gladly step up and take the money.

And universities would be required to be transparent about the average score of the kids they admitted, the bump money that they accepted, etc.  If a university accepts so much bump money that it drags down the average scores, then maybe you don’t want to go there.  Every university would strike its own balance, not accepting so much bump money that it would significantly degrade the school’s reputation.  Students could see where various universities land on this spectrum and decide whether they want to pay more in order to get a more “elite” experience.

In short, let rich people pay extra to get their kids in — more kids get educated that way.

If right now you’re thinking that people shouldn’t be able to buy *preferential* access to a university education, you’re still thinking that it’s a zero-sum game, that for some reason having more money does not allow us to educate more students.

I actually think we need far more radical reforms to our education system than this, but I don’t have any problem at all with rich people buying their kids an education that they couldn’t get on intellectual promise alone.  And I would argue that you don’t mind either if you think that it’s OK that star athletes are given preference in admissions — you know that schools do that because people donate more to schools with competitive athletic programs, right?  Not to mention ticket sales.  It has nothing to do with the sport being important in any educational sense; in fact you could easily argue that deifying athletes is anti-educational.  Those athletic kids are accepted ahead of others who are intellectually more promising because of the money they bring with them.  The only difference from the current scandal is that the money is not coming from their parents.  If that difference somehow makes it OK in your mind, I can’t relate to your thought process.

So let’s turn this education scandal around, explicitly making a place for money to influence admissions rather than forcing it underground, where it would continue anyway but with the money going to unsavory folks.  Accept the money in broad daylight and use it to educate more kids.  Let’s be completely up front about it with students and parents — the more you look like a good bet academically, the less you pay, but anyone can come here to study.  Once in, you earn your grades on merit, and *that* is where we will be looking for corruption.

Teach your kids to curse!

“Oh, schmarkle!”

Imagine your son or daughter saying that in class.  Not too upsetting, I’m guessing.

Curse words are upsetting because those within earshot consider them vulgar.  And curse words are satisfyingly cathartic (and potentially even healthy) because of the utterer’s personal culture.  Usually those considerations are aligned, but what if you were to teach your children to curse using words that have no significance to anyone else?  It might sound a little silly, perhaps even humorous, but it wouldn’t sound offensive.  So schmarkle away!

Then later, when friends inevitably introduce your son or daughter to “real” curse words, they will have some degree of immunization against adopting them — the ingrained habit of using an inoffensive word.

O(1) sum_of_multiples() in Rust

I had been working mostly in Scala for a while, then took a diversion into Swift and Objective-C.  I wanted to learn another language after that, and had all but decided on Clojure.  But Rust kept nagging at me — there was something about it.  Perhaps I’ll blog more about that later.

So I watched some videos, then read the book, and then started the Rust track at Exercism.io.  Nice site.  One of the exercises is about generating the sum of all multiples under some limit for a given list of factors.  So for example, if the factors are 3, 5, and 7, then the sum of the multiples under 12 is 3 + 5 + 6 + 7 + 9 + 10 = 40.

A naive solution is pretty straightforward:

pub fn naive_sum_of_multiples(limit: u32, factors: &[u32]) -> u32 {
    (1..limit)
        .filter(|&i| factors.iter().any(|&j| i % j == 0))
        .sum()
}

It just iterates through all of the eligible numbers, picking out any that are evenly divisible by any of the factors, and sums them.  Rust’s iterators make that look pretty similar to a lot of other modern languages, like Scala.

But this solution is slow.  I mean really slow — at least compared to what’s possible.  As limit grows, the execution time increases linearly; that is, this is O(n).  (With respect to the limit, that is — we’re ignoring the (very real) impact of the number of factors on execution time. Note that in Exercism’s tests, there are generally two factors, max three, so for simplicity let’s focus on the limit here.)

Well, O(n) doesn’t sound so bad, does it?  But here’s the thing: it could be O(1).

What???  O(1)?  That would mean that you aren’t doing any more work if you increase the limit — and thus the numbers summed — by a factor of 100.  That can’t be right!  But yes, it is.

Here is the key observation.  The instant you see it, your brain will likely jump halfway to the solution.  The sum of (for example)

3 + 6 + 9 + 12 + 15 + 18 + 21

is just

3 * (1 + 2 + 3 + 4 + 5 + 6 + 7)

Got it yet?  Remember that the sum of the numbers from 1 to n is simply the value of n * (n + 1) / 2 — actually summing the numbers is unnecessary!
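In code, that observation turns summing the multiples of a single factor into constant-time arithmetic. A minimal sketch (the function name is mine, not the exercise’s API):

```rust
// Sum of all multiples of `factor` up to and including `limit`,
// using 1 + 2 + ... + n = n*(n+1)/2 -- no loop required.
fn sum_of_multiples_of(factor: u64, limit: u64) -> u64 {
    let n = limit / factor;        // how many multiples there are
    factor * (n * (n + 1) / 2)     // factor * (1 + 2 + ... + n)
}

fn main() {
    // 3 + 6 + 9 + 12 + 15 + 18 + 21 = 84
    println!("{}", sum_of_multiples_of(3, 21));
}
```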

That’s not all there is to it, though. If asked to sum the multiples of 3 and 5 less than 1000, we can use the above technique to sum up the multiples of 3 and the multiples of 5 and then add them together, but we will have counted the multiples of 15 twice; we have to subtract those out.

And that’s a little trickier to do than it sounds. First, what you really need to subtract out are the multiples of the least common multiple (lcm) of the two numbers, not their product. So, for example, if asked to sum the multiples of 6 and 15, we need to subtract off the multiples of 30 (not 90). The lcm of two numbers is their product divided by their greatest common divisor (gcd).
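That relationship is a one-liner on top of Euclid’s algorithm:

```rust
// Euclid's algorithm for gcd, and lcm built from it:
// lcm(a, b) = a*b / gcd(a, b).
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd(b, a % b) }
}

fn lcm(a: u64, b: u64) -> u64 {
    a * b / gcd(a, b)
}

fn main() {
    println!("{}", lcm(6, 15)); // 30, not the product 90
}
```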

Also, we need to do this for an arbitrarily long list of numbers, so consider what happens if we are asked to sum the multiples of 4, 6, and 10:

  • First sum the multiples of 4.
  • Then add in the multiples of 6, but subtract the multiples of lcm(4, 6) = 12.
  • Then add in the multiples of 10, but subtract the multiples of lcm(4, 10) = 20 and the multiples of lcm(6, 10) = 30.

But oops, now we have gone the other way, subtracting off the multiples of 20 and 30 in common (60, 120, …) twice, and our result is too low, so we’ll have to add those back in. And if there were multiple corrections at that level (i.e. if we were given a larger list of numbers), we’d have to subtract their elements in common, and so on ad infinitum. At every step we have to take care not to add or subtract the same numbers twice.
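To see that bookkeeping work out on a concrete case, take the 4, 6, 10 example with limit 60 (inclusive) and compare the flat inclusion-exclusion arithmetic against a brute-force loop. The helper name `s` is mine:

```rust
// Sum of the multiples of `f` up to and including `limit`, in O(1).
fn s(f: u64, limit: u64) -> u64 {
    let n = limit / f;
    f * (n * (n + 1) / 2)
}

fn main() {
    let limit = 60;
    // Add each factor's multiples, subtract each pairwise lcm's multiples
    // (lcm(4,6)=12, lcm(4,10)=20, lcm(6,10)=30), then add back the
    // multiples of the three-way lcm(4,6,10) = 60, which got subtracted twice.
    let by_inclusion_exclusion =
        s(4, limit) + s(6, limit) + s(10, limit)
        - s(12, limit) - s(20, limit) - s(30, limit)
        + s(60, limit);
    let by_brute_force: u64 = (1..=limit)
        .filter(|i| i % 4 == 0 || i % 6 == 0 || i % 10 == 0)
        .sum();
    assert_eq!(by_inclusion_exclusion, by_brute_force);
    println!("{}", by_inclusion_exclusion); // 690
}
```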

That sounds like a pain, but using recursion it’s actually fairly straightforward.  In the following Rust code, I’ve changed the API a bit from the Exercism problem.  First, the integers are u64, so that we can use much bigger limits.  And secondly, in this case we’ll sum all multiples up to and including limit.  It’s an arbitrary choice anyway, and doing it this way will save us a small step for clarity.

pub fn fast_sum_of_multiples(limit: u64, factors: &[u64]) -> u64 {
  fn lcm(a: u64, b: u64) -> u64 { a*b / gcd(a,b) }
  fn gcd(a: u64, b: u64) -> u64 { if b == 0 {a} else { gcd(b, a%b) } }
  fn sum_from_ix(i: usize, limit: u64, factors: &[u64]) -> u64 {
    if i == factors.len() {  // we've processed all factors
      0
    } else {
      let factor = factors[i];
      let n = limit / factor;  // # of multiples of factor to sum
      let sum_of_multiples_of_factor = factor * (n*(n+1)/2);
      let new_factors: Vec<_> = factors[..i].iter()
        .map(|&prev_factor| lcm(prev_factor, factor))
        .filter(|&factor| factor <= limit)
        .collect();
      let sum_of_previously_seen_multiples_of_factor =
        sum_from_ix(0, limit, &new_factors[..]);
      let sum_of_multiples_of_rest_of_factors =
        sum_from_ix(i+1, limit, factors);
      sum_of_multiples_of_factor
        - sum_of_previously_seen_multiples_of_factor
        + sum_of_multiples_of_rest_of_factors
    }
  }
  sum_from_ix(0, limit, factors)
}

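As a sanity check on the algorithm above, here is a compact, self-contained restatement of it, compared against a brute-force loop over a handful of inputs (names shortened; this is not the exercise’s API):

```rust
fn gcd(a: u64, b: u64) -> u64 { if b == 0 { a } else { gcd(b, a % b) } }
fn lcm(a: u64, b: u64) -> u64 { a * b / gcd(a, b) }

// Sum of all multiples of any element of `factors`, up to and including `limit`.
fn fast(limit: u64, factors: &[u64]) -> u64 {
    fn go(i: usize, limit: u64, factors: &[u64]) -> u64 {
        if i == factors.len() {
            return 0;
        }
        let f = factors[i];
        let n = limit / f;
        // lcms of this factor with every earlier factor: their multiples
        // were already counted and must be subtracted back out.
        let lcms: Vec<u64> = factors[..i].iter()
            .map(|&p| lcm(p, f))
            .filter(|&m| m <= limit)
            .collect();
        f * (n * (n + 1) / 2) - go(0, limit, &lcms) + go(i + 1, limit, factors)
    }
    go(0, limit, factors)
}

fn naive(limit: u64, factors: &[u64]) -> u64 {
    (1..=limit).filter(|&i| factors.iter().any(|&f| i % f == 0)).sum()
}

fn main() {
    let cases: [&[u64]; 3] = [&[3, 5], &[4, 6, 10], &[7, 11, 13, 17]];
    for factors in cases {
        for limit in [0u64, 1, 100, 999] {
            assert_eq!(fast(limit, factors), naive(limit, factors));
        }
    }
    println!("{}", fast(999, &[3, 5])); // 233168
}
```

The final line is the classic “sum of the multiples of 3 or 5 below 1000” problem, which this version answers without iterating at all.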
This is not too far from what a Scala solution would look like, although I think the Scala solution is a bit more readable.  In part this owes to Scala’s persistent List data structure, which was a little cleaner to work with here than a Rust Vec.  Also, Scala’s implicits make it unnecessary to make two calls that are explicit in the Rust version: iter() and collect().  Oh, and in Rust functions cannot close over variables; only closures can do that, but we couldn’t use a closure since in Rust closures cannot be recursive.  That forced us to explicitly include limit in the argument list of sum_from_ix().  Ampersands here and there in closures.  Semicolons.  These are little things, but collectively noticeable.

def fastSumOfMultiples(limit: Long, factors: List[Long]): Long = {
  def lcm(a: Long, b: Long) = a*b / gcd(a,b)
  def gcd(a: Long, b: Long): Long = if (b == 0) a else gcd(b, a%b)
  def sumOfMults(factors: List[Long], prevFactors: List[Long] = Nil): Long =
    factors match {
      case Nil => 0
      case factor::rest =>
        val n = limit / factor  // # of multiples of factor to sum
        val sum_of_multiples_of_factor = factor * (n*(n+1)/2)
        val sum_of_previously_seen_multiples_of_factor =
          sumOfMults(prevFactors.map(lcm(_, factor)).filter(_ <= limit))
        val sum_of_multiples_of_rest_of_factors =
          sumOfMults(rest, factor::prevFactors)
        sum_of_multiples_of_factor -
          sum_of_previously_seen_multiples_of_factor +
          sum_of_multiples_of_rest_of_factors
    }
  sumOfMults(factors)
}

But really, the difference is not so stark.  And the Rust version does not need a garbage collector — amazing!

On to the next exercise….

Thoughts on “Self-Serving Bias”

If someone were to ask you and your roommate what percent of the work around the house you each do, the answers would almost surely total to more than 100%; you each likely overestimate your contribution. This is an oft-mentioned example of “self-serving bias.”

While I have no doubt that this bias exists, there is a confounding factor, one I have never seen discussed, that you have to rule out before attributing such a discrepancy in assessment to self-serving bias.  That confounding factor is that we value things differently.

My wife, when I was married, used to iron my undershirts after they came out of the dryer.  I asked her not to do it: not only was it a waste of her time because the wrinkles would not be visible, it was also a waste of energy both to iron them and then for the AC to remove the heat from the house.  She saw things differently and continued to iron them.

So when thinking about how much work she did around the house, there were those twenty minutes of ironing she did for me.  But I counted that activity more as an annoyance than as a contribution.

I had another partner who had a curio cabinet with shelves of little glass unicorns and other trinkets on display.  We lived in Phoenix, so all those little pieces needed frequent dusting; in her mind that effort was part of the housework.  But that curio cabinet did nothing for me; it was strictly for her pleasure.  Whatever dusting she did on those unicorns was maintenance on her hobby, as far as I was concerned, comparable to me keeping the tires inflated on my bike.

There can even be discrepancies for work that is valued by both.  For my partner, keeping the kitchen tidy may involve putting on a shelf in the pantry some things that I would prefer to see left on the counter, like the cinnamon and jars of nuts.  Or imagine living with someone who wants the carpets vacuumed every week, when you are fine with once a month.

None of this is to say that humans don’t engage in self-serving bias — just that such discrepancies may also owe in part to discrepancies in what people consider the goal.

Why do we like foods that are bad for us?

How about a pepperoni and sausage pizza with extra cheese? Or perhaps a chocolate eclair?

No thanks, you say? You like those foods, but you’re trying to eat healthy? Well darn — why is it that everything that tastes good is bad for us?

OK, that’s a bit of an exaggeration, but we do like fats, sugars, and salt, all to our detriment — they contribute to obesity, diabetes, heart disease, cancer, and more. It seems odd, doesn’t it? Why would we evolve to like foods that are bad for us?

Well hey, I have some great news! The reason we like those things is that they are actually good for us! Really, that’s true — but only if you take it in context.

Our distant ancestors didn’t have supermarkets with candy aisles. Even fruits available to them were hardly sweet at all compared to the fruits we eat today, which are the result of many, many generations of selective breeding for taste. Our ancestors hunted, but during most of our evolutionary history success was by no means assured — they occasionally brought home some meat. And that meat was not marbled with fat like the cows we eat today, which were not only bred for taste but also given cushy lives so that their muscles don’t get tough.

So we evolved over many, many generations during which fats, sugars, and the like were generally not available in quantity. Whatever little bits you were lucky enough to come across, it was to your benefit to eat. Those who liked such foods were more motivated to eat them when available, and they benefited from the nutritional boost, which ultimately translated into greater reproductive success — they really needed the calories. So the genes that programmed into their brains the taste for such foods were passed on with greater success.

To drive this home, picture yourself lost on some grassy plains, with trees here and there. You haven’t eaten much today, so you are motivated to find something. A few of the trees and bushes have fruits that you sort of recognize. You try one and quickly spit it out — bitter. After some experimentation you find one that is, well, not great — nothing like the fruits you are used to eating — but not disagreeable. The chemical laboratories in your tongue and nose steered you away from foods which probably weren’t going to work for your body, toward foods that might. That is what your senses of taste and smell are for, not to help you decide between broccoli and Twinkies. There were no Twinkies.

But that’s why we like sweet foods today: they were a good sign for us health-wise way back then. Our ancestors were the ones who did like sugars and fats; the ones who didn’t fared less well, and their genes became less common. That’s how evolution works. But now we are able to refine and concentrate sugars and fats to form the nutritional monstrosities that call to us like sirens from supermarket shelves, and in those concentrations they are by no means good for us.

If such unhealthy crap had existed in our evolutionary past (imagine giant Twinkie trees on the savanna, laden with “fruit”), we would have had to evolve to deal with it.  Our bodies might have evolved to be a little more tolerant of sugar/fat bombs. Our brains might have evolved to enjoy a little bit of Twinkie now and then, but to quickly lose interest, preferring foods with the nutrition our bodies need. But that didn’t happen, because there were no Twinkie trees until recently. So we are defenseless, led by our senses — like moths to a flame — to obesity, diabetes, and heart disease.

Well, OK, we aren’t quite defenseless — we can, through our intellect and force of will, override our pleasure system. But that works much better for some than for others.

After I explained this to a friend, he asked “but aren’t we evolving to deal with much higher concentrations of sugars, fats, etc.?”

Unfortunately it’s not that simple. To understand why, consider what that evolution would look like. People whose genes build bodies which don’t handle these excesses well would have to be less successful reproducers, so that their genes would become less frequent in the gene pool. A key way such people would be less successful reproducers is by dying. Well, they do die, you complain! Yes they do, BUT — they would have to die soon enough to reduce the number of children they have (and raise successfully). These days we expect to have a couple of kids in our twenties or thirties and live into our eighties. Natural selection doesn’t much care if you die in your fifties of heart disease if you weren’t going to have more kids after that anyway.

To be fair, grandparents can be of some benefit in raising their grandchildren, although that’s probably far less a factor than it used to be, what with insurance and social programs that use tax money to help the needy, not to mention the fact that at least here in the U.S. grandparents don’t typically live with their grandchildren anymore. In fact, it may be that — from our genes’ point of view, which is always about reproductive success — it is better nowadays if grandparents die young and leave a larger inheritance to their descendants sooner!

So no, sorry — we are probably not significantly evolving toward bodies that run just fine on cheeseburgers and ice cream. And in fact that’s almost surely not what would happen even if we were dying soon enough to reduce our reproductive success; incremental changes to our brains that cause us to like sugars and fats less are far more likely than sweeping changes to our physiology to allow us to thrive on junk food.

Human beings are intelligent planners, capable of working out the means to attain goals. But an old part of your brain tries to “steer” you and your great planning ability toward reproductive success by dumping neurotransmitters into your noggin to control how you feel. And there is something you should know about that old part of your brain, this part that controls what you like: it’s dumb as a stone. Worse, it has no idea that the industrial revolution happened.

Think of this old part of your brain as the firmware in a computer. I call this part of my brain Dumbo, and the corresponding part of a woman’s brain Morona. Dumbo and Morona only get updated by evolution, a glacially slow process that — as we saw above — doesn’t necessarily lead us someplace we’d like to go, because humans want more out of life than just lots of descendants. Dumbo and Morona do not reason; they execute encoded heuristics that contribute to reproductive success. The encoding that is there is almost entirely from a time when our lives were not very much like they are today, so there is a huge gap between the environment we were programmed to live in and the one we really do live in. Dumbo thinks I am a hunter-gatherer, and that if I run across a bit of sugar or fat it’s an opportunity not to be missed. So that’s how he steers me.

Quite a lot of human unhappiness is the result of Dumbo and Morona being in serious need of an update; stay tuned for more about that.

But for now, hand me another slice of that pizza, would you?

For those especially interested in evolution…

I should be honest here and say that there are other ways (besides dying early) in which evolution could operate on our desire to eat foods that are bad for us, but they don’t change the story. I’ll go through a couple of those here.

1. You could become undesirable during mate selection and have trouble finding a mate. This kind of selection is called “sexual selection.” So if a person’s genes contribute to their eating a diet which makes them less desirable as a mate, then they will have fewer choices in the mating game.

2. You could have trouble performing the tasks required to raise a family. So if a person’s genes contribute to their eating a diet which makes them sick (diabetes leaps to mind), that could theoretically impact their reproductive success.

Eons ago, these would have been very important factors — had junk foods been available. Couples didn’t use birth control to limit themselves to a couple of kids; dying early might very well reduce the number of children you left behind. Getting a disease like diabetes was much more likely to kill you or leave you incapacitated and unable to take care of your family. Modern medicine (and the insurance that pays for it) can inform us of the danger of such a diet and significantly ameliorate its effects. For example, a diabetic might be given insulin — paid for by insurance — and still work and raise a family.

Long ago, a less desirable mate might have meant fewer children — after all, what we find attractive in a mate is largely about reproductive potential (more on this in later posts!). But modern medicine goes a long way toward allowing everyone who wants a child to have one — even if, for example, they don’t have optimal hormone levels or hips wide enough for a safe delivery. A graduated income tax and other “progressive” policies go a long way toward allowing everyone who has a child to raise it successfully. And birth control drastically reduces the number of children we have relative to our true reproductive potential. And frankly, a lot of those heuristics are horribly out of date anyway. So really, not getting as attractive a mate has little effect on reproductive success these days.

Eons ago, you couldn’t eat junk food — it didn’t exist. Now that it does, the effects on reproductive success are pretty limited, so evolution doesn’t have a lot to work with.

Reboot/Restart in a REST API using PUT

It is actually quite possible to do the reboot/reset in an idempotent manner using PUT.

There was at one time a controversy around whether you were restricted to CRUD (Create/Read/Update/Delete) in defining REST APIs or whether it is OK to use POST for the odd “do something” request. Roy Fielding, who came up with REST in the first place, largely put this to bed by saying, in effect, “I never said you couldn’t create additional commands.”

The problem often surfaced when someone asked

I have a resource with a status attribute. What should a request to reboot (or restart) it look like?

If you were using SOAP, the answer would be obvious: have a Reboot command. But this is REST; is that the right thing to do? Why not use a PUT to set the status to Rebooting?

The problem with that is that it’s not idempotent. Because PUT is defined to be idempotent, an intermediate server between the client and the REST API is allowed to reissue the request, which means that the system could, at least in theory (I wonder whether a server could really reboot fast enough for this to be an issue in practice), get rebooted a second time as a result.
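To make the replay hazard concrete, here is a minimal Python sketch (a hypothetical in-memory resource, not any real framework) in which a resent PUT of status = Rebooting triggers a second reboot:

```python
class StatusServer:
    """Hypothetical resource where PUT status = "Rebooting" triggers a reboot."""

    def __init__(self):
        self.status = "Running"
        self.reboot_count = 0

    def put_status(self, value):
        # Naive handler: every PUT of "Rebooting" kicks off another reboot,
        # so the request is not idempotent in effect.
        self.status = value
        if value == "Rebooting":
            self.reboot_count += 1
            self.status = "Running"  # pretend the reboot completed

server = StatusServer()
server.put_status("Rebooting")  # the client's original request
server.put_status("Rebooting")  # an intermediate server replays it
print(server.reboot_count)      # 2: the system rebooted twice
```

The replayed request is indistinguishable from a fresh one, which is exactly the problem.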

On the other hand, you can understand REST API designers’ reluctance to just invent a new command. Falling back on POST to create new commands for things you don’t know how to do idempotently is, in a sense, the API equivalent of

Just use a goto statement.

That is, the facility is too general — it’s a catch-all that has completely open-ended semantics — “do something.” It begs to be abused. In my opinion, it is better to create useful abstractions on top of such open-ended facilities and then restrict developers to those abstractions. Just as I don’t want us to run a server as root or use a programming language with a goto statement, I don’t want us to have a “do something” facility in the abstraction layers over the API.

But the main point I want to make is that it is actually quite possible to do the reboot/reset in an idempotent manner using PUT. The reason it isn’t usually thought of is that people are a priori focused on a status attribute. Imagine that you also have a last_reboot attribute that holds the time of the last reboot; to reboot the system, simply do a PUT to change that to the current time.

The result is perfectly idempotent; if an intermediate server resends the command, the resent request carries the same last_reboot time, and such an update is treated as a no-op. And an attempt to change the last_reboot time to a time older than its current value is an error. So picture something along these lines:

  class Server {
    // Reboot by PUTting the current time to last_reboot; a resent PUT
    // carries the same time and is therefore a harmless no-op.
    def reboot = put( "/last_reboot", getTime )
  }

Note that last_reboot is useful information about the system in its own right. Sure, you could instead model this as a new, non-idempotent Reboot command that has a side effect on the last_reboot value, but — uhhh, why? You already have a perfectly good, idempotent command that will do it, and its effect on last_reboot is explicit rather than implicit.
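The server side of the last_reboot scheme can be sketched just as briefly. This is a hypothetical in-memory handler in Python, assuming the timestamp rules described above (an equal time is a no-op, an older time is an error):

```python
import time

class RebootResource:
    """Hypothetical handler for PUT /last_reboot."""

    def __init__(self):
        self.last_reboot = 0.0
        self.reboot_count = 0

    def put_last_reboot(self, when):
        if when < self.last_reboot:
            raise ValueError("last_reboot cannot move backwards")
        if when == self.last_reboot:
            return  # a replayed PUT carries the same time: no-op
        self.last_reboot = when
        self.reboot_count += 1  # stand-in for actually rebooting

res = RebootResource()
t = time.time()
res.put_last_reboot(t)   # the client's reboot request
res.put_last_reboot(t)   # an intermediate server resends it
print(res.reboot_count)  # 1: rebooted only once
```

Replaying the PUT any number of times leaves the system in the same state, which is all idempotence asks for.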

I’m not saying that there will never be a case where you ought to create a command. But if you are stuck thinking that there is no idempotent way to make a certain update, perhaps you are thinking about the wrong attributes. Don’t be too quick to use a goto.