Why do we like foods that are bad for us?

How about a pepperoni and sausage pizza with extra cheese? Or perhaps a chocolate eclair?

No thanks, you say? You like those foods, but you’re trying to eat healthy? Well darn — why is it that everything that tastes good is bad for us?

OK, that’s a bit of an exaggeration, but we do like fats, sugars, and salt, all to our detriment — they contribute to obesity, diabetes, heart disease, cancer, and more. It seems odd, doesn’t it? Why would we evolve to like foods that are bad for us?

Well hey, I have some great news! The reason we like those things is that they are actually good for us! Really, that’s true — but only if you take it in context.

Our distant ancestors didn’t have supermarkets with candy aisles. Even fruits available to them were hardly sweet at all compared to the fruits we eat today, which are the result of many, many generations of selective breeding for taste. Our ancestors hunted, but during most of our evolutionary history success was by no means assured — they occasionally brought home some meat. And that meat was not marbled with fat like the cows we eat today, which were not only bred for taste but also given cushy lives so that their muscles don’t get tough.

So we evolved over many, many generations during which fats, sugars, and the like were generally not available in quantity. Whatever little bits you were lucky enough to come across, it was to your benefit to eat. Those who liked such foods were more motivated to eat them when available, and they benefited from the nutritional boost, which ultimately translated into greater reproductive success — they really needed the calories. So the genes that programmed into their brains the taste for such foods were passed on with greater success.

That’s why we like such foods today: they were good for us way back then. Our ancestors were the ones who did like sugars and fats; the ones who didn’t fared less well, and their genes became less common. That’s how evolution works. But now we are able to refine and concentrate sugars and fats to form the nutritional monstrosities that call to us like sirens from supermarket aisles, and in those concentrations they are by no means good for us.

After I explained this to a friend, he asked “but aren’t we evolving to deal with much higher concentrations of sugars, fats, etc.?”

Unfortunately it’s not that simple. To understand why, consider what that evolution would look like. People whose genes build bodies which don’t handle these excesses well would have to be less successful reproducers, so that their genes would become less frequent in the gene pool. A key way such people would be less successful reproducers is by dying. Well, they do die, you complain! Yes they do, BUT — they would have to die soon enough to reduce the number of children they have (and raise successfully). These days we expect to have a couple of kids in our twenties or thirties and live into our eighties. Natural selection doesn’t much care if you die in your fifties of heart disease if you weren’t going to have more kids after that anyway.

To be fair, grandparents can be of some minor benefit in raising their grandchildren, although that’s probably far less a factor than it used to be, what with insurance and social programs that use tax money to help the needy, not to mention the fact that at least here in the U.S. grandparents don’t typically live with their grandchildren anymore. In fact, it may be that — from our genes’ point of view, which is always about reproductive success — it is better nowadays if grandparents die young and leave a larger inheritance to their descendants sooner!

So no, sorry — we are probably not significantly evolving toward bodies that run just fine on cheeseburgers and ice cream. And in fact that’s almost surely not what would happen even if we were dying soon enough to reduce our reproductive success; incremental changes to our brains that cause us to like sugars and fats less are far more likely than changes to our physiology to thrive on junk food.

Human beings are intelligent planners, capable of working out the means to attain goals. But an old part of your brain tries to “steer” you and your great planning ability toward reproductive success by dumping neurotransmitters into your noggin to control how you feel. And there is something you should know about that old part of your brain, this part that controls what you like: it’s dumb as a stone. Worse, it has no idea that the industrial revolution happened.

Think of this old part of your brain as the firmware in a computer. I call this part of my brain Dumbo, and the corresponding part of a woman’s brain Morona. Dumbo and Morona only get updated by evolution, a glacially slow process that — as we saw above — doesn’t necessarily lead us someplace we’d like to go, because humans want more out of life than just lots of descendants. Dumbo and Morona do not reason; they execute encoded heuristics that contribute to reproductive success. The encoding that is there is almost entirely from a time when our lives were not very much like they are today, so there is a huge gap between the environment we were programmed to live in and the one we really do live in. Dumbo thinks I am a hunter-gatherer, and that if I run across a bit of sugar or fat it’s an opportunity not to be missed. So that’s how he steers me.

Quite a lot of human unhappiness is the result of Dumbo and Morona being in serious need of an update; stay tuned for more about that.

But for now, hand me another slice of that pizza, would you?


For those especially interested in evolution…

I should be honest here and say that there are other ways (besides dying early) in which evolution could operate on our desire to eat foods that are bad for us, but they don’t change the story. I’ll go through a couple of those here.

1. You could become undesirable during mate selection and have trouble finding a mate. This kind of selection is called “sexual selection.” So if a person’s genes contribute to their eating a diet which makes them less desirable as a mate, then they will have fewer choices in the mating game.

2. You could have trouble performing the tasks required to raise a family. So if a person’s genes contribute to their eating a diet which makes them sick (diabetes leaps to mind), that could theoretically impact their reproductive success.

Eons ago, these would have been very important factors — had junk foods been available. Couples didn’t use birth control to limit themselves to a couple of kids; dying early might very well reduce the number of children you left behind. Getting a disease like diabetes was much more likely to kill you or leave you incapacitated and unable to take care of your family. Modern medicine (and the insurance that pays for it) can inform us of the danger of such a diet and significantly ameliorate its effects. For example, a diabetic might be given insulin — paid for by insurance — and still work and raise a family.

Long ago, a less desirable mate might have meant fewer children — after all, what we find attractive in a mate is largely about reproductive potential (more on this in later posts!). But modern medicine goes a long way toward allowing everyone who wants a child to have one — even if, for example, they don’t have optimal hormone levels or hips wide enough for a safe delivery. A graduated income tax and other “progressive” policies go a long way toward allowing everyone who has a child to raise it successfully. And birth control drastically reduces the number of children we have below our true reproductive potential. And frankly, a lot of those heuristics are horribly out of date anyway. So really, not getting as attractive a mate has little effect on reproductive success these days.

Eons ago, you couldn’t eat junk food — it didn’t exist. Now that it does, the effects on reproductive success are pretty limited, so evolution doesn’t have a lot to work with.

Reboot/Restart in a REST API using PUT

“it is actually quite possible to do the reboot/reset in an idempotent manner using PUT”

There was at one time a controversy around whether you were restricted to CRUD (Create/Read/Update/Delete) in defining REST APIs or whether it was OK to use POST for the odd “do something” request. Roy Fielding, who came up with REST in the first place, largely put this to bed by saying, in effect, “I never said you couldn’t create additional commands.”

The problem often surfaced when someone asked

I have a resource with a status attribute. What should a request to reboot (or restart) it look like?

If you were using SOAP, the answer is obvious: have a Reboot command. But this is REST; is that the right thing to do? Why not use a PUT to set the status to Rebooting?

The problem with that is that it’s not idempotent. An intermediate server between the client and the REST API is allowed to reissue the command, which means that the system could, at least in theory (I wonder whether a server could really reboot fast enough for this to be an issue in practice), get rebooted a second time as a result.

On the other hand, you can understand REST API designers’ reluctance to just invent a new command. Falling back on POST to create new commands for things you don’t know how to do idempotently is, in a sense, the API equivalent of

Just use a goto statement.

That is, the facility is too general — it’s a catch-all that has completely open-ended semantics — “do something.” It begs to be abused. In my opinion, it is better to create useful abstractions on top of such open-ended facilities and then restrict developers to those abstractions. Just as I don’t want us to run a server as root or use a programming language with a goto statement, I don’t want us to have a “do something” facility in the abstraction layers over the API.

But the main point I want to make is that it is actually quite possible to do the reboot/reset in an idempotent manner using PUT. The reason it isn’t usually thought of is that people are a priori focused on a status attribute. Imagine that you also have a last_reboot attribute that holds the time of the last reboot; to reboot the system, simply do a PUT to change that to the current time.

The result is perfectly idempotent; if an intermediate server resends the command, it will have the same last_reboot time, and such an update is treated as a no-op. And an attempt to change the last_reboot time to a time older than its current value is an error. So picture something along these lines:

  class Server {
    ...
    def reboot = put( "/last_reboot", getTime )
  }

Note that last_reboot is useful information about the system in its own right: both the fact that a reboot happened and when. Sure, you could instead model this as a new, non-idempotent Reboot command that has a side-effect on the last_reboot value, but — uhhh, why? You already have a perfectly good, idempotent command that will do it, and its effect on last_reboot is explicit rather than a side-effect.
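To make the idempotency concrete, here is a minimal server-side sketch of the decision logic (my own illustration with hypothetical names, omitting all the HTTP plumbing):

  object RebootResource {

    sealed trait PutResult
    case object Rebooted  extends PutResult // newer time: perform the reboot
    case object Duplicate extends PutResult // same time: a resent request, no-op
    case object Stale     extends PutResult // older time: reject (e.g. 409)

    private var lastReboot = 0L // epoch millis of the last reboot

    // Handle PUT /last_reboot carrying the requested timestamp.
    def putLastReboot(requested: Long): PutResult = synchronized {
      if      (requested == lastReboot) Duplicate
      else if (requested <  lastReboot) Stale
      else {
        lastReboot = requested
        // ... trigger the actual reboot here ...
        Rebooted
      }
    }
  }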

I’m not saying that there will never be a case where you ought to create a command. But if you are stuck thinking that there is no idempotent way to make a certain update, perhaps you are thinking about the wrong attributes. Don’t be too quick to use a goto.

A Monte Carlo Simulation in Scala

On LinkedIn a fellow named Francois Dewaste posts links to some Scala-related articles he runs across, and I often check them out. Recently he shared an article about a Monte Carlo simulation in Scala, written by Alvin Alexander. I found the problem interesting, so I want to share my own Scala solution, which is written using a more functional style, and also a mathematical interpretation of the results.

First I’ll repeat the basic problem:

If two people who are each right on a test 80% of the time (but whose answers are uncorrelated) have the same answer for a particular question, what is the probability that their answer is right?

If you’re like me, your immediate answer will be “there isn’t enough information.” Clearly you would have to know how many possible answers there are, because that will affect the probability that they have the same wrong answer. If there are only two possible answers, then if they are wrong they agree 100% of the time; if there are an infinity of possible answers, then if they are wrong they agree 0% of the time. Looking at his code, though, I could see that he meant for there to be only two possible answers to each question.

The Program

I know folks are here to look at the Scala code, so here it is. To make it a bit easier to follow, bold font indicates where the parameters of the simulation are used.


object MonteCarlo extends App {

  val Array( numQuestions, numChoices, numPlayers, pctCorrect, numMustAgree ) =
    args map (_.toInt)

  val correctAnswer = numChoices - 1

  // For each question, generate player answers and look for consensus.
  // Where consensus is achieved, yield whether or not it is correct.
  val correctnessOfConsensus =
    for { i <- 1 to numQuestions
          (answer,instances) <- Array.fill(numPlayers)(
            if ( util.Random.nextInt(100) < pctCorrect )
              correctAnswer
            else // pick a wrong answer from 0 to correctAnswer-1
              util.Random.nextInt(correctAnswer)
          ) groupBy identity
          if instances.size >= numMustAgree
    } yield answer == correctAnswer

  val timesAgreed = correctnessOfConsensus.size
  val timesRight  = correctnessOfConsensus.filter(identity).size
  println( s"Consensus meant correctness $timesRight out of $timesAgreed times." )
  if ( timesAgreed > 0 )
    println( f"That's ${ 100.0 * timesRight / timesAgreed }%.1f%%." )

}

It’s interesting to note that because of Scala’s type inference there isn’t a single type specification in the entire program, in spite of the fact that there are eleven bound variables. The call to toInt() is a conversion, not a type specification, and it would be needed in Python or Ruby as well.

The 9-line for-comprehension is the heart of the program: it repeatedly (once for each question) generates the players’ answers with the specified probability of being correct, figures out how many times each answer was used, picks out the cases where the required number of people agreed, and yields a Boolean indicating whether or not the agreed-upon value was correct.  Note that there can be more than one consensus for a given question; for example, if you use ten players, five of which must agree, then you can achieve consensus twice for a single question. The for-comprehension makes that trivial.

Now let’s run it for the parameters of the original problem, but with a million questions so we get a more accurate probability estimate:

  $ scala MonteCarlo 1000000 2 2 80 2
  Consensus meant correctness 640039 out of 679984 times.
  That's 94.1%.

And the original article was also interested in the probability of the answer being correct if 3 out of 3 had it:

  $ scala MonteCarlo 1000000 2 3 80 3
  Consensus meant correctness 512213 out of 520220 times.
  That's 98.5%.

Those are essentially the same answers Alvin got.

The Math

So what’s behind these numbers? In the above runs (where everyone must agree), there are three possibilities for each question:

  • A. They all agree and they are correct.
  • B. They all agree and they are incorrect.
  • C. They don’t all agree.

We don’t care about C; what we are interested in is

         p(A)
     -------------
      p(A) + p(B)

Since their answers are uncorrelated, the probability that two players will both be right is the product of the probabilities that each is right. Since each is right 80% of the time, p(A) = square(0.8) = 0.64, and p(B) = square(0.2) = 0.04. So the expression above is 0.64 / ( 0.64 + 0.04 ) = 0.941 (roughly).

With three people, p(A) = cube(0.8) = 0.512 and p(B) = cube(0.2) = 0.008, so the probability that the consensus is correct is 0.512 / ( 0.512 + 0.008 ) = 0.9846 (roughly).
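Generalizing, with n players who are each right with probability p (and only two possible answers), the probability that a unanimous consensus is correct is pow(p,n) / ( pow(p,n) + pow(1-p,n) ). Here is a quick Scala check of that closed form (my own addition, not part of the original program):

  def consensusCorrect(p: Double, n: Int) = {
    val allRight = math.pow(p, n)     // p(A): everyone is right
    val allWrong = math.pow(1 - p, n) // p(B): everyone is wrong (and agrees)
    allRight / (allRight + allWrong)
  }

  println(consensusCorrect(0.8, 2)) // 0.9411..., matching the 94.1% run
  println(consensusCorrect(0.8, 3)) // 0.9846..., matching the 98.5% run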

Oh, and remember the point about the number of possible answers mattering? Let’s say that instead of 2 possibilities there are a million:

  $ scala MonteCarlo 1000000 1000000 2 80 2
  Consensus meant correctness 640167 out of 640167 times.
  That's 100.0%.

With so many possible answers, the players almost never agree if they are incorrect. So, if they do agree, it’s because they are right.

There you have it! Interesting problem, Alvin — thanks!


How to Make Google Chrome Launcher Open New Window

I’ve used Google Chrome on Linux for quite a while now, mostly happily, but there is one thing I’ve always found extremely frustrating. I just fixed it, so I thought I’d share.

Here was the problem. I usually have a dozen or so Chrome windows open at once, each of which may have several tabs. These windows, spread across four desktops, represent work in progress for me, as I handle interruptions or set aside a task while waiting for a response, or whatever. OK, so when I click on the Chrome icon to open a new browser window, I don’t get one — Chrome instead finds my most recently touched browser window and adds a new tab to it. It doesn’t matter if it has been hours since I touched that window, or if the window is on a different desktop — that other window gets the new tab. So I have to pull the tab from the lucky window, iconize the old window again, move the new window to the desktop I was on, and switch back to that desktop myself. Aaargh!

I have a hard time understanding why the launcher would work this way. If I want a new tab in some Chrome window, I can simply press the new tab icon in that window. If I go to the panel and ask for a new browser, why would I want it to dig up some old window and add a new tab to that?

I had looked at the man page, but there was no switch for this behavior. But today I found an undocumented (why???) switch that does just what I want:

  --new-window

So I edited the properties for the Google Chrome launcher to add in the switch, and all is golden:

[Screenshot: the Google Chrome launcher’s properties dialog, with --new-window appended to the command]
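(If the screenshot doesn’t render: the change amounts to appending the switch to the launcher’s command, which in a .desktop entry looks something like the line below; the exact path depends on your install.)

  Exec=/usr/bin/google-chrome --new-window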

Dynamic Resource Types for a No-Fuss API

“this was one of those rare occasions where an idea turns out to be a lot better than intended”

In late 2009 I left a dying Sun Microsystems (right before the Oracle acquisition) to help my good friend Alok with his tiny un-funded startup called Yunteq. He was understandably having trouble finding engineers willing to work for a stake in the company rather than a salary, and since he was trying to make it as an entrepreneur and was in desperate need of help in a critical portion of the system, I went to his aid. The unsalaried period was only going to last three months — I could handle that — and he thought the company would probably be acquired in five, by a suitor they had been in talks with for a while. They were almost to the goal line, but they had a big problem; if I stepped in as a third developer to get them through it, I could own a significant share of a company about to be acquired. He had assembled a good team, and we would make decisions together, as co-founders.

It sure sounded good. In reality it was an extremely bumpy ride.  :^)

Anyway, in early 2010 I started on the task of designing a public REST API for the product. It had to present the functionality of the server — which was all about managing and organizing roomfuls of servers, and the virtual machines running on them — to the outside world. We started with a typical design in which the URL broke down the universe into a hierarchy of resources like this one:

/cust/CustomerId/app/ApplicationId/vm/VmId/disk/DiskId

The resources in such an API effectively form a tree, the shape of which is defined by the application. But at some point Alok mentioned to me that while the above was fine for a cloud provider, a large company would probably not want the customer part but would want some way of representing divisions, regions, groups, departments, or whatever. “We’ll probably have to fork the code base for each customer,” he said.

I nearly choked on my tea at the sound of that. “Ummmm, let me think about this,” I responded.

In a couple of days I presented him with a design that at first he had trouble understanding. There was no fixed hierarchy to the API at all — the resource tree would be whatever we made it, by creating new nodes and saying what type we wanted each to be. Not only could each customer have a different tree arrangement, we didn’t have to set it up for them — they could do it themselves. They could do it piecemeal; anytime they wanted to add to or rearrange the tree, they could do so easily from the CLI. All using a single code base.

That is, this kind of API was really not about customers and applications and VMs and Disks and NICs and so on — it was about manipulating a tree of typed nodes. If you want a top-level grouping by customer, fine — create a node of type Folder called “cust” and within that a Folder for each customer. If you want to organize things some other way, no problem — just create the appropriate nodes in the tree to represent it. It was completely analogous to the way you create directories in a filesystem to organize your data the way you want, except that you could create nodes of arbitrary types from some available palette. A Folder here, a VM there, an ISO repository over there, etc. — however you want to arrange it.

By mid-2010 we had such a system working; the public API allowed clients to manipulate a dynamically typed resource tree.

But this was one of those rare occasions where an idea turns out to be a lot better than intended. Yes, the interface was now infinitely more flexible, and that was great. Each customer could organize its resources in our system in whatever way made sense for that enterprise. But that was just the beginning.

The API itself treated all nodes in the tree the same — as typed sets of attributes. How much RAM a VM was to have, or CPU, or disk space — all were just attributes of different types. The API was entirely about manipulating a tree of attribute-sets; it didn’t know or care what the resources or attributes were about. Developing in this system involved writing new resource types, but you never had to go back and fiddle with the API to be able to handle them; the API could already handle them. To it, your new resource type was just another typed set of attributes.
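To make that concrete, here is a minimal sketch of a dynamically typed resource tree. This is my own illustration for this post, not Yunteq’s actual code:

// A tree of typed attribute-sets (illustrative only). The API layer
// needs nothing more specific than this to do its job.
class Node(val name: String, val resourceType: String) {
  private var attributes = Map.empty[String, String]
  private var children   = Map.empty[String, Node]

  def set(attr: String, value: String): Unit = attributes += attr -> value
  def get(attr: String): Option[String]      = attributes.get(attr)

  // Creating a child works the same way regardless of resource type.
  def create(childName: String, childType: String): Node = {
    val child = new Node(childName, childType)
    children += childName -> child
    child
  }

  // Resolve a path like List("cust", "acme", "vm", "web01").
  def resolve(path: List[String]): Option[Node] = path match {
    case Nil          => Some(this)
    case head :: tail => children.get(head).flatMap(_.resolve(tail))
  }
}

// Usage: a customer builds /cust/acme/vm/web01 however they like.
object Demo extends App {
  val root = new Node("", "Folder")
  val vm = root.create("cust", "Folder")
               .create("acme", "Folder")
               .create("vm",   "Folder")
               .create("web01", "VM")
  vm.set("ram", "4096")
  println(root.resolve(List("cust", "acme", "vm", "web01")).flatMap(_.get("ram")))
  // prints Some(4096)
}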

And then our CLI got simpler. There was no longer any code specific to VMs or customers or disks or anything else — it just extended the API’s view of the world, allowing the user to manipulate a tree of attribute-sets from the command line. Not only did we no longer have to fuss with the API when we added functionality, we didn’t have to fuss with the CLI either. As we added new resource types, or added/changed attributes of existing resource types, no work at all had to be done in the API or CLI. This was way cool.

Then we added the notion of a “policy” — a kind of resource that changes the way the system deals with other resources. The fact that the customer was in control of the arrangement of resources gave us a scoping mechanism for policies: wherever you put the policy in the tree, the policy was in effect for the subtree under that point. And if you put another instance of the same type of policy below it, the deeper policy overrode the shallower one. This was a simple yet powerful scoping mechanism, made possible by the fact that we had handed control over the tree’s layout to the user.
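A sketch of that lookup rule (again my own illustration): the effective policy for a node is found by walking from the node up toward the root and taking the first instance of the given policy type, so deeper policies naturally override shallower ones.

  case class Policy(policyType: String, settings: Map[String, String])

  class TreeNode(val parent: Option[TreeNode]) {
    var policies = Map.empty[String, Policy] // policies attached at this node

    // Nearest-ancestor lookup: the deepest policy of the given type wins.
    def effectivePolicy(policyType: String): Option[Policy] =
      policies.get(policyType).orElse(parent.flatMap(_.effectivePolicy(policyType)))
  }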

Testing was also improved. Among other things, the fact that resource types and attributes were treated so generically meant that much testing could be done once and for all. For example, a single suite of tests can check that an attribute which is supposed to be an integer within a specified range works as it should; then if there are actually a dozen such attributes in the system there is no need to test out-of-range values or non-integer input on each.

Even the API documentation and help system documentation were simplified. Once you understand the basic idea, that the API just allows you to manipulate a tree of typed resources, the bulk of what you need to know is 1) what resource types are available, and 2) what are their attributes? Much of this kind of documentation can be generated automatically by the system.

In effect we had created some good generic infrastructure for creating applications. Starting from that infrastructure, you just define resource types for your domain and you are done. Of course, those resource types are non-trivial — that’s the meat of the application. But you get a lot for free.

There’s more to this, of course — I can write about the API because it is public, but I won’t say much about the interesting mechanics inside the server. I will say, though, that I had wanted to do this in Scala and Akka, but management had never heard of those (“Forget Scala,” I was told) and got someone to set up a much more conventional stack. It worked reasonably well, but had its problems. A bit frustrated, I spent a few weeks over Christmas writing a Scala/Akka/Spray prototype and demoed it to them when they got back from their vacations. They were really impressed at how much cleaner the API code was (the Spray DSL for creating APIs is awesome), and defining resource types in this new system was much easier. To their credit, they took a serious look at Scala and Akka and decided that we should use it. I now have new server infrastructure for dynamic resource trees working in Scala/Akka/Spray; it’s a huge improvement and I am a much happier camper.

And now that same management has told the entire group to use Scala. Go figure.

Extremely Agile Testing using a Lazy Test Harness

“it is generally a lot easier to verify a result reported by the test harness than it is to figure out the right answer yourself beforehand and write the code to check for it”

I don’t particularly enjoy writing tests, but I noticed long ago that I enjoy debugging insufficiently tested systems even less. If you have a suite of old tests that you can run whenever you think you’ve gotten something new working, it can save you a ton of trouble later by showing you that you’ve broken something — right away, when you know that something in those 40 lines you just mucked with must have caused the problem.

The traditional way of writing tests looks something like this:

  1. Have the system under test do something.
  2. Capture some result that you can use to determine whether what you tried worked.
  3. Check that result in various ways, raising a flag if it’s wrong.

Eons ago I was writing a date/time library and realized that I would need hundreds of tests; it was worth my effort to make writing them as simple as possible. I created a simple CLI to invoke the routines in the library and a test harness that used the CLI to run a bunch of tests. Each test was just a single line that made the library do something, e.g. convert a date to Julian; the harness did all the rest.

“Wait a minute,” you complain — “how did the harness know whether the test passed? A test must have done more than just make the system do something!”

But it did not.  A test really did just look something like

  toJulian 1988-03-21

or

  addDays 1988-12-29 5

So how did the test harness know whether the test passed or failed? Well, the first time the test was run, the harness did not know — it had to ask whether the output was correct. If I said yes, it saved the output as the correct result. The tests were “lazy” inasmuch as the correct results were not established until the tests were run the first time.

This approach proved extremely convenient — I could create a new test in a few seconds. And while the regression tests usually just passed without incident, there were in fact many times when I had inadvertently broken something and the tests saved my bacon. Without those tests I would have discovered far later, perhaps even at a customer site, that something somewhere wasn’t working, and would have had to laboriously trace it back to those 40 lines. And it might take me a while to fully remember what those 40 lines were about.

The point of doing tests this lazy way is that it is generally a lot easier to verify a result reported by the test harness than it is to figure out the right answer yourself beforehand and write the code to check for it. This is especially true if the right answer is, as is often the case for me, a dump of some tree structure, dozens of lines long. I can look at such output and pretty quickly say “yes, that’s right,” but if I had to explicitly code such a tree structure into each test, you sure wouldn’t see me writing many of them!

Furthermore, if I deliberately change something that causes the tests to fail, I don’t have to go back and fix all of the affected tests manually. The harness stops at every failure, shows me a side-by-side diff of what was expected and what we actually got, and asks me whether I want to accept the new output as the “good” output. If I say yes, the harness fixes the test for me. I can even tell the harness that if further tests fail with the same diff, they should be automatically updated without asking.
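For the curious, the essential logic of such a harness is small. Here is a loose Scala sketch of the idea (my real harnesses are shell scripts, and the datecli command here is hypothetical):

import java.nio.file.{Files, Paths}
import scala.io.StdIn
import scala.sys.process._

// Sketch of a lazy test harness: each test is one CLI command whose expected
// output lives in a "golden" file created the first time the test is run.
object LazyHarness extends App {
  val goldenDir = Paths.get("golden")
  Files.createDirectories(goldenDir)

  def ask(prompt: String): Boolean = {
    print(s"$prompt [y/N] ")
    StdIn.readLine().trim.equalsIgnoreCase("y")
  }

  def runTest(name: String, command: String): Unit = {
    val actual = command.!! // run the CLI and capture its output
    val golden = goldenDir.resolve(s"$name.out")
    if (!Files.exists(golden)) {
      println(s"[$name] first run; output:\n$actual")
      if (ask("Accept as correct?")) Files.writeString(golden, actual)
    } else if (Files.readString(golden) != actual) {
      println(s"[$name] FAILED.\nExpected:\n${Files.readString(golden)}\nActual:\n$actual")
      if (ask("Accept new output as correct?")) Files.writeString(golden, actual)
    }
  }

  // The tests themselves stay one line each:
  runTest("toJulian", "datecli toJulian 1988-03-21")
  runTest("addDays",  "datecli addDays 1988-12-29 5")
}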

Of course, this approach assumes that you have a CLI. These days I write server apps with REST APIs, and I always create CLIs for them. “We don’t have a requirement for a CLI,” a manager told me recently, thinking we would save time by not bothering with one. “You’re getting one anyway,” I responded. I always port/write a shell-based CLI, giving us a very powerful way to control and script the server — very handy. Then I port/write a lazy regression test harness (a couple hundred lines of shell) to the CLI and begin writing lots of one-line tests.

And suddenly testing is not such a drag.

Attitude and Aptitude

My son Carlos just got his A.S. degree in Engineering. The road he took to get it is pretty unusual, and nobody would have guessed that he would accomplish what he has. I think there’s a good lesson in the story.

First I should explain that Carlos is not my biological son. He was born in Honduras, a poor country, to parents who were poor even by Honduran standards. We’re not talking about driving a used car instead of a new one — we’re talking about no car in the family at all, feeling excited if for Christmas you get a new shirt and pants because you won’t be getting them any other time of year, and skipping the occasional meal because there is no food. When I first saw the house his family lived in, I was a bit confused — I thought it was a storage shed.

When Carlos talks to friends from his childhood days and tells them that he is a student at an American college, they think he is pulling their legs. You see, as a kid Carlos was a terrible student, ultimately expelled from the eighth grade for causing trouble. He didn’t see the value in what was being said, and wasn’t picking up much of it. It seemed like a waste of time, so he didn’t invest himself in it — and failed miserably.

Since his family was so poor, he decided to help out by working in the banana plantations.  It was back-breaking work, and the others on his team — all much bigger than Carlos — were expecting him to fail. He surprised them all by coming back the next day, and the next — soon he knew not only that job, but other jobs on the plantation as well. The money was a huge help to his family. Carlos really wanted to help his younger brothers and sisters, especially Delma, whose life he saved when she was a toddler — they have a special bond.

But it was really hard work. One day he was taking a break, sitting on a tree stump, thinking about the men working with him. Some were in their 50s, and the work was clearly a challenge for them. “If I keep doing what I’m doing, I’ll be working on a plantation in my 50s just like them,” he thought to himself.  So when the opportunity came to join his mother in the US, he took it.

Coincidentally, within days of his arrival he went to his mother’s graduation ceremony — she had earned her GED (high school equivalency). She was given her diploma and led to a podium with a microphone to say a few words; tears streamed down her face as she thanked everyone.

Carlos had planned to work with his brother in construction, but I told him that if he wanted to get his GED too, he could, and that it would help him. “Here in the US,” I told him in Spanish, “a young woman’s parents may not want her dating a guy who didn’t finish high school. And there are lots of jobs that require a high school diploma.”

The next day he came to me and told me yes — he would like to get his GED. “Great!” I told him. “Let’s see how much you learned in Honduras. What’s 7 times 9?”

He thought for a few seconds before admitting that he had no idea. In math, at least, he remembered very little. Probably he had never learned very much in the first place; he had never had a textbook of any kind. My heart sank a little — it had been a long struggle tutoring his mother through her GED, and Carlos seemed no better prepared.

But he began taking classes, and I set up a whiteboard and helped as best I could. Pretty soon he understood fractions and basic geometry. He tells his friends now that I taught him Spanish, just to see the surprise on their faces; of course he already spoke Spanish fluently, but he needed to learn the basic concepts of Spanish grammar and improve his use of it in writing. It must have felt very strange to have a gringo explain to him how his own language works.

Algebra was a problem. I tried again and again to explain what was going on, taking different approaches, but he just didn’t get it. He couldn’t even tell me what part he didn’t understand — it just wasn’t working. I resigned myself to the idea that that would be where he stopped with math.

But Carlos surprised me. Instead of giving up, he spent day after day with the books (finally he had books!), and before I knew it he was doing basic algebra. He passed the math test easily.

He ended up with his GED in far less time than it had taken his mother. A GED is really not the equivalent of a high school education, but it was still a huge accomplishment given how little he knew when he started. And remember that this is the same kid who was ejected from school in Honduras. The neighbors, who are well-educated, cheered him on; in particular a pair of math teachers across the street were quite fond of Carlos and helped him celebrate his success.

Carlos had enjoyed his taste of algebra, once he got the hang of it, so when we ran across a thick elementary algebra textbook in a used bookstore I asked him whether he wanted it. He did. He disappeared into his room and started working through it on his own. He worked through far more problems than a teacher would have asked him to do, and he mastered the material.

Carlos had to study English for a couple of years before the community college would let him take regular college classes, but eventually he got there. He took College Algebra, then Trigonometry, then Pre-Calculus, and did well in all of them. He liked it. On he went to Calculus I, Calculus II, and Calculus III. Most people can’t handle math classes at that level — who would have guessed that this eighth-grade dropout would do well in them. Then on to Differential Equations, Statistics, and Discrete Math. Carlos has now taken all but a couple of the math classes offered by the community college (and is signed up for one of the remaining courses this summer), along with a lot of physics and chemistry classes and of course the usual requirements for English, Humanities, History, etc. At first it was a huge struggle for him to write even a small paragraph, but now he writes several papers in English every semester and gets good marks. He can now use Windows and Linux and a variety of tools that run on them — even program a little in Python. And his GPA is a hair over 3.5 — not bad for someone who spoke no English and had essentially no education eight years ago!

So what the heck happened here? How can this guy who is remembered in Honduras as the class clown and a dropout have accomplished so much here in the US, and in a different language? Clearly he was smart enough to have succeeded in Honduras; why didn’t he?

I think in large part it boils down to attitude. Oh sure, Carlos was much better supported here, and the quality of education is clearly better. But I’m pretty sure that fixing those things alone wouldn’t have done the trick. In Honduras Carlos couldn’t relate what the teacher was talking about to his life — it all seemed irrelevant. But he came to the US, watched his mother’s success in education, began interacting with educated people with good engineering and teaching jobs and an understanding of all sorts of things, and he saw the relevance. He had worked for years doing back-breaking labor that paid very little, and realized it was a dead end. When he got here he quickly realized that there was a lot he could learn, that he would understand the world around him far better. He understood that the more he learned, the more he could contribute, and the more he would be rewarded financially. He had always wanted to contribute; it just hadn’t seemed to him that school had any relevance to that. Now it did.

So when Carlos had trouble with a subject, he didn’t give up — he just kept working at it, surprising me more than once with his determination. Carlos feels tremendously fortunate to have gotten this education, and he isn’t going to give up easily when there are problems.

There was in fact one semester in which Carlos stumbled: he had discovered YouTube and Facebook and was spending too much time on them. I knew nothing of the problem until he came to me one day, handed me the laptop, and explained what had happened. He was honest enough with himself to realize that he had screwed up and to know what needed to be done — he cut himself off. It was entirely his idea; he had come too far to let an addiction to the Internet steal his future. He ultimately had to take the laptop back, of course, to do his schoolwork.  And the Internet is his connection to much of his family and to his old friends; he needs to spend some time with that. But to this day he comes to me and asks me to cut off his access to the Internet for a week (I use a MAC filter on the router) when he has a lot to do.

Carlos is smart, but he’s not a genius — the magnitude of his success owes more to attitude than aptitude. I find him an inspiration. And I look at Carlos and feel sorry for the many American youth who achieve far less in spite of having so much more support. He tells me about classmates trying to throw together a paper the night before it is due, seldom coming to class, doing the bare minimum it takes to get a passing grade. They have none of the handicaps Carlos had, but they lack his desire and determination to learn, to succeed academically, and to give back to society.

I can’t help but wonder whether they would be doing better now if, before starting high school, they had spent a year or two hauling bananas…

Impostor Syndrome

You may have heard of the Impostor Phenomenon (sometimes called “Impostor Syndrome”), the feeling that the success and praise you are getting is undeserved, that you were just lucky. You feel like a phony. It was once believed to be particularly common in successful women, but apparently men suffer from it commensurately; I suspect that men simply don’t report the feeling as frequently, being 1) less inclined to share this or any other feeling, and 2) more motivated to appear invulnerable — no cracks in the armor.

This article quotes one researcher as saying that some people are “phony phonies,” feigning modesty for their own benefit:

“Particularly when people think that they might not be able to live up to others’ views of them, they may maintain that they are not as good as other people think,” Dr. Mark Leary, the lead author, wrote in an e-mail message. “In this way, they lower others’ expectations — and get credit for being humble.”

I think that’s something our culture encourages, and is likely even wired into us. We like the successful person who doesn’t flaunt it, the astounding teammate who treats us all as equals, the beautiful woman who doesn’t seem to realize she is beautiful. We award others bonus points if they appear to underrate themselves.

It’s not hard to suggest an evolutionary basis for this bias: these people’s services appear to be on sale. In seeking an alliance or a mate or any other exchange, you would have fared better with those who genuinely undervalued themselves than with those who thought they were hot stuff. Genes causing a preference for such “deals” might well have had greater reproductive success, causing the preference to spread throughout the population. In fact, the genes wouldn’t even have to be specific to modesty; brain circuitry for recognizing a deal in any context would probably suffice.

Given such a preference for modesty encoded into our brains (or at least a preference for bargains and the ability to recognize modesty as a sign of one), that would in turn create evolutionary pressure to feign modesty, because it makes you more attractive (which of course contributes to reproductive success).

But OK, enough of phony phonies — there are plenty of folks who really do feel like phonies. I remember the feeling quite distinctly growing up, and am aware of some contributing factors. For example, my father was fascinated by telekinesis, and he described it to me one day when I was perhaps eight years old. “As smart as you are, you should be able to do this,” he told me. He encouraged me to practice moving objects with mind power.

So I did. Practice, that is — not move objects. For years, off and on, I tried to move pencils, spoons, and the like, experimenting with various ways of trying to compel them. Nothing ever worked, not even a little bit. “As smart as you are, you should be able to do this,” he had said. The conclusion was obvious: I wasn’t really that smart. That’s why the spoon was not moving.

I’m sure my father didn’t deliberately set me up to fail, or to link that inevitable failure to a lack of intelligence. He no doubt believed that telekinesis was possible and that if you could convince a child that s/he could do it, they might just succeed. And wouldn’t that be cool! But the result was abject failure, and a resulting deep suspicion that the intelligence others thought they saw in me was illusory. I could not even move a spoon with my mind.

And there are other ways to elicit this feeling of being an impostor. For example, “intelligence” is a broad term that encompasses multiple traits, but we talk about it as if it were a single characteristic. If a child has a generous helping of all of the underlying traits, then sure, we say the child is intelligent. But what if a child has, say, strong analytical skills and good semantic memory, but poor episodic memory? The child will excel in many academic ventures and be called intelligent, but will struggle and often fail at commonplace tasks that most people can handle. “Aw, Billy — you can’t remember that? What’s the matter with you — you are supposed to be so smart!” Such children, not understanding that there are various aspects to what we call intelligence and that they possess more of some than others, can easily feel like phonies.

Or what if an otherwise bright child has a relatively limited short-term memory? Academic exercises, which are typically distilled to just the relevant facts for pedagogical reasons, will likely seem easy, but messy real-world situations will be harder to keep track of. Again, a parent’s reaction can be devastating, but so can that of peers. Classmates who might resent curve-breakers held up as examples by the teacher are sometimes eager to tear them down when they make a mistake. Monday yields an award, Tuesday a humiliation.

And of course, even academically gifted kids do stupid things. “When are you going to get Brain One?!” my father would often shout when angry.  He never told us what Brain One was, but the implication was obvious. So what’s the truth here — those awards for academic achievement in school, or these angry assessments?

There are probably lots of other ways to induce the same confusion and lack of confidence, all leading to roughly the same place. While receiving an award for academic performance, a child might secretly remember repeatedly feeling like a fool at times when his memory failed him. Simultaneously the child feels pride in the accomplishment and worry that this is all a mistake, that s/he will not be able to live up to expectations and will be exposed as a fraud.

If the child lacks self-esteem and feels unloved, the problem is exacerbated. If a child only feels appreciated for his or her intelligence, then the fear of being a phony can be quite acute.

And of course the same thing can happen with adults. You might be good at certain types of things, and really not so good at others, and not be clear on the difference. To you it just seems that sometimes you shine, and sometimes you stink. Who wouldn’t feel like an impostor, especially if you didn’t have good self-esteem to start with?

And finally, luck really does play a part in success. So do other non-meritorious factors like money, looks, connections, and outright cheating. Some portion of that feeling of being a phony may, for some people, actually be quite well-deserved.

O Ye of Little Faith

I sympathized somewhat with Neil deGrasse Tyson in his Atheist or Agnostic video. “Atheist” is not a term that people have chosen for themselves, like “Christian” or “Hindu” — it is a term imposed upon them by other folks who insist on categorizing everyone according to their beliefs around a topic important to them, the imposers.

There are people (I’ve known some) whose spiritual beliefs are based on the idea that life on Earth was seeded by ancient beings on Mars who are advanced far beyond us intellectually and morally but choose not to interact with us. To those people you might be an “origin denier” — would you mind carrying around that label for their pleasure, along with whatever stereotype they have in mind when they use the term? “Who cares,” you might say, because you know no such believers.  But what if you lived in a place where 90% believed as they did?

Dr. Tyson’s point is that when you label someone, you feel that you “already know everything important there is to know” about the person. You probably don’t.

Are “atheist” beliefs really so different?

Labels can be very divisive, separating “us” into “us and them,” when perhaps the gulf is not really so wide. Quick quiz: which of these deities do you believe in?

  1. Hinduism’s Shiva
  2. Christianity’s and Judaism’s Yahweh
  3. the Greeks’ Zeus
  4. Norse Thor

Of course a real list of cultural deities is huge — there are hundreds or perhaps thousands of them, particularly if you include the minor deities often not explicitly called gods but nonetheless ascribed supernatural powers, like the saints of some Christian denominations. Surely nobody believes in all of the gods on the list. Most people I know would dismiss the beliefs of the Norse and the Greeks as mythology (although it was perfectly real to the Norse and Greeks!), and those of the Hindus — which include an elephant god and a monkey god — as colorful fantasy. They would not say as much to a Hindu, of course; aloud they would show respect for the fact that people have differing beliefs. But they could never really take such beliefs seriously. Of the major gods, they completely and firmly reject all but one: Yahweh. In fact, Christians call Yahweh simply “God,” emphasizing that he is the only one, that all others are false gods.

That is, like the person they insist on calling an atheist, they reject one god after another. Zeus, Shiva, Ganesha — all are false gods to them, mere cultural artifacts, unless they are really just Yahweh by other names in other cultures. Their only quibble with the so-called atheist comes when they reach Yahweh in the list — that one, they insist, is absolutely real, and if you don’t believe that, then you have no faith. There is something wrong with you.

There is nothing wrong with them, mind you, for not believing in the elephant god Ganesha. That’s somehow different.

So if you are Christian and can’t relate to those you call “atheists,” just reflect for a moment on how you already feel about Ganesha and a thousand other gods on that list, and then imagine feeling that way about just one more.

Faith, or the lack thereof?

Here is another way of thinking about those who might be called (probably against their wishes) atheists. Many people look around them and think

Wow, what an amazing universe. And what an incredible voyage of discovery as we learn more and more about it — we can already explain so much! Just imagine what we will learn in the next century or two.

I have faith that there is a rational explanation for everything, even if we haven’t figured it out yet.

Others see the cup as half-empty:

Wow, what an amazing universe. There’s so much we can’t explain!

I can’t accept that this splendor and complexity occurred naturally; it must ultimately be the work of beings with incredible supernatural powers.

It is easy to understand how, eons ago, people could lack faith (!) in reason and evidence to ever provide natural (i.e. not supernatural) explanations for all of the seemingly mysterious phenomena around them. They did not know what those lights in the sky were, or the basis for their strange motion. They had no rational explanation for the astonishing diversity of life. Pregnancy and childbirth seemed miraculous. Intermittent plagues and inclement weather looked like wrathful punishments for offenses committed, administered by someone wielding powers far beyond our own.

Religion

It is human psychology to look for an intelligent agent behind significant events, a human-like cause; we evolved this tendency because the resulting wariness was advantageous during our evolutionary past. So since natural explanations were out of reach for many natural phenomena, we invented explanations based on supernatural beings to cover what we did not understand. Such beliefs became firmly rooted in human cultures, and religions arose as power structures and money-making institutions fueled by those beliefs.

Our fear of death created a huge opportunity for religions. After reassuringly explaining that you have a soul which lives on — effectively a comforting end run around death — religions could then describe two violently different futures, heaven and hell, that await your soul for the rest of eternity. Which one your soul will experience depends on whether you comply with the religion’s rules and support it with your time and money. If this isn’t a case study in university marketing classes, it certainly should be: imagine selling a completely intangible product of absolutely infinite perceived value — the difference between one’s “soul” burning in hell or living in paradise for all eternity — that nobody can check whether you actually delivered. Surely in the long history of marketing this stands out as a phenomenal achievement.

The key is to get people to believe, without evidence, in the existence of the soul, heaven, and hell, and in the ability of compliance with religious doctrine to affect the outcome. Toward this end parents are strongly urged to involve their kids (who supposedly inherit sins for crimes they did not commit, and thus require church intervention) in religious rites and indoctrination. Young children believe what their parents — and authority figures recognized by their parents — tell them, so they accept religious doctrine as truth before they have the mental faculties to discern truth in such matters for themselves. By the time their brains are developmentally capable of the type of reasoning required to evaluate whether or not they require supernatural explanations for the phenomena around them, it’s too late — the indoctrination is complete. Of those who do believe in gods, the vast majority would surely have rejected the same teachings if they had first heard them as adults. But they were indoctrinated early, and rare is the individual who can, after indoctrination, re-evaluate his or her core beliefs rationally. Thus is the institution’s future ensured.

But our understanding of the universe has improved dramatically over the millennia; while there is still much that we do not know, we already understand vastly more than we did when we first resorted to the supernatural explanations that formed the basis for the religions and superstitions that survive to this day through childhood indoctrination. Each time the march of reason grants us a new evidence-based, rational explanation for some phenomenon, the indoctrinated implicitly choose between defending increasingly superfluous supernatural explanations or abandoning them for rational explanations. It takes uncommon courage to do the latter, in part because it usually means abandoning much of one’s social context as well. A religion can also “vaccinate” its followers against defection by breeding contempt for nonbelievers (starting with the use of labels to distance them from us) and even for the very desire for evidence. For example, a willingness to believe in a god without real evidence might be celebrated as a measure of spiritual competency rather than credulity.

It is interesting that those without faith in reason and evidence to provide rational explanations — sometimes even having outright contempt for science — often criticize those who do not believe in their gods as being “without faith.” Many even consider those who do not share their belief in god immoral. A young woman once told me that she knew why she did not do bad things — she would go to hell if she did — and could never trust someone who did not believe in hell. Projecting her own basis for morality onto others, she believed that a person who did not believe in hell would act without restraint. It had not occurred to her, apparently, that people could have a vision of the kind of world they want to live in and a sense of individual responsibility to behave accordingly, to do their part in realizing the vision.

Still, it seems that these days many Americans, at least, enjoy religion cafeteria-style. They take what they want, like the end run around death and the social network and the feeling that they are under the protective care of a powerful being and the festivities surrounding Christmas and Easter — and ignore a lot of the rest. Increasingly people consider their religious texts metaphorical, the precepts of their churches merely rough guidelines that can be set aside when inconvenient. Is this a lamentable degradation of societal values, or is it a sign of progress toward rationality? How will the landscape change in the next 100 years?

Rational explanations, or supernatural — to each his/her own. Well, as long as it doesn’t affect public policy — there let’s stick with reason and evidence.

Test with Bigger Integers

It’s always a jolt when some simple expression in a language I’ve been using for a long time evaluates to something I can’t explain.  It happened today with Java.

I had asked a coworker to run FindBugs against one of our products. He forwarded me the output, which contained several issues of this type:

Suspicious comparison of Integer references

Sure enough, when I looked at the code I found expressions using == to compare one Integer to another. That’s usually bad because Integers are objects, and == for objects tests whether they are the same object, not whether they have equal values, so two Integers could appear to be != even though they hold the same value.

That’s easy enough to fix, but just to make sure I understood all the cases I wrote a test that compared, among other things, an Integer 88 to an int 88 and to another Integer 88. The good news was that Integer == int was true (autounboxing). The bad news was that Integer == Integer was also true — there was something wrong with my test.

I tried a number of similar tests, all with the same result. I began to suspect that this was some kind of optimization, and in that case it was likely only for small numbers. Ultimately I figured out that == returns true when comparing equal-valued Integers from -128 to 127. Once I saw that, I was able to find references to this behavior in the specification. The same is true for Longs, Shorts, and Bytes. The JRE prepares objects for small numbers in advance, and hands you one of those instead of creating a new one each time. Since you get the same object each time for those numbers, == works. But it doesn’t work for numbers outside that range.

OK, cool — I learned something new. But looking at the output of FindBugs, I realized that we hadn’t seen those bugs in testing, probably because we typically used small numbers for those values. Let’s see, we need a couple of numbers for VLAN tags — what do we pick? Typically 1 and 2. Maybe 7 and 12 if we’re feeling wild and crazy. Had we never tested with a number greater than 127, we might have shipped the code with that problem. Yay FindBugs!

So here are some thoughts:

  • If you are in QA for a Java product, use numbers outside the range -128..127 wherever they are allowed.
  • If you are writing a Java product, consider initializing ID generators to 128 rather than 0 or 1, at least during testing.
  • Run FindBugs on your code.
  • Use Scala.

The semantics for == in Java are troublesome: value comparison for primitives, but identity comparison for objects, but effectively back to value comparison for small numeric objects. Oh, and with autoboxing we’re going to magically convert between primitives and objects to help you out, sometimes (depending on the actual value) changing the semantics of ==. You’re welcome.

In Scala, == compares values. Always. That doesn’t depend on what types you are comparing, or on the actual values. If you really want to test whether two variables refer to the same object, you use eq, which isn’t even allowed on Ints, Longs, etc. In Scala it’s pretty hard to screw this up.
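To see the two comparisons side by side, here is a quick sketch (my own illustration, consistent with the caching behavior described above):

  val a: java.lang.Integer = java.lang.Integer.valueOf(100) // inside the cache range
  val b: java.lang.Integer = java.lang.Integer.valueOf(100)
  val c: java.lang.Integer = java.lang.Integer.valueOf(200) // outside the cache range
  val d: java.lang.Integer = java.lang.Integer.valueOf(200)

  println(a == b) // true: Scala's == compares values
  println(c == d) // true: still value comparison, caching is irrelevant
  println(a eq b) // true: 100 is cached, so these are the same object
  println(c eq d) // false: 200 is not cached, so these are distinct objects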

Nice.
