Give your app a shell-based CLI

I want to share a neat trick for making powerful CLIs (command-line interfaces). I used it at Sun eons ago in a tool called “warlock”, which statically analyzes multi-threaded C programs for locking problems — data races, deadlocks, etc.

But I should start with the project I was working on before that — MP-SAS, an architectural simulator for Sparc multiprocessor systems. The simulator had a CLI, and occasionally somebody would add a shell-like feature to it: the ability to re-execute the previous line, say, or to store some result in a named variable.

I argued that we should stop adding shell-like features piecemeal and instead rig the simulator to run as a daemon alongside a real interactive shell, like ksh, arranging for commands run from the shell to talk to the simulator. Then you could do everything you already know how to do in the shell while talking to the simulator.

They didn’t go for it — one guy in particular was convinced that it would be too slow. But a year later I had put such a CLI on my next project, warlock. Well, guess what — the performance was just fine. Not only did you get all of the interactive features like recalling lines, line editing, completion, and so on, but you also got scripting. Anything you could do in a shell you could do in warlock. And not somebody else’s shell — *your* shell, whichever shell you happened to like, with all of the aliases you use, with all of the environment variables you have set, etc. I used zsh for interactive warlock work, but scripts typically used sh for compatibility. Ksh, bash, csh, tcsh — they could all be used as the front-end to warlock.

For example, you could give warlock commands like

  load xlt_*.ll  # load files matching a pattern
  locks | grep xlt | sort >locks.out  # save sorted info about certain locks to a file
  func foo<TAB>  # complete a function name

The shell integration was pretty handy! There was even a feature with which you could push the current state of the analyzer on a stack, perform some experiment, and then return to that saved state by popping it off the stack. This was fairly trivial to implement — the “save” command caused the daemon to fork(). The parent waited for the child to exit, and the child responded to further requests from the user. When you said “exit”, the child exited and the parent took over again.
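
Here is a minimal sketch of that fork trick in Python (the real analyzer was C++; the function names and the have_saved_state flag are purely illustrative):

  import os

  def handle_save():
      # "save": fork the daemon. The child keeps answering commands against
      # a copy of the analyzer's state; the parent just waits for it to exit.
      pid = os.fork()
      if pid != 0:
          os.waitpid(pid, 0)          # parent: sleeps until the experiment is over
          return "returned to saved state"
      return "state saved"            # child: carries on serving the user

  def handle_exit(have_saved_state):
      # "exit" while a state is saved: the child dies and the waiting
      # parent takes over again, with the pre-save state intact.
      if have_saved_state:
          os._exit(0)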

This is not to say that warlock was a highly usable program — few actually suffered with it long enough to get good results. One who did, Frits Vanderlinden, managed to discipline an entire group of engineers writing Solaris device drivers to make their code clean of warlock errors before checking in changes, and he claimed that as a result warlock caught “countless” bugs in driver code, making Solaris releases that much more solid.

Anyway, the lack of usability wasn’t the fault of the shell integration — I was always quite happy with the way that turned out.

The way I implemented it, when you ran the warlock CLI you were really invoking a perl script (I would probably use python or ruby today, but perl was a great choice then) that did the following (a rough sketch appears after the list):

  1. Set up a temp directory for the session.
  2. Created two named pipes in it, COMMANDS and RESPONSES.
  3. Started the warlock analyzer (a C++ program, but you could do it in Java or whatever) in the background.
  4. Started a shell (whatever was specified in env var WARLOCK_SHELL, or sh by default) with its path augmented to include a directory containing warlock’s commands, as ordinary executables.
  5. Waited around for either the shell or the analyzer to exit.
  6. Cleaned things up.

If you invoked the tool with -c Command, it just passed that on to the shell — batch mode processing.
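
Here is a rough sketch of that wrapper in Python rather than perl. WARLOCK_SHELL comes from the real design, but the analyzer name warlockd, the commands directory, and the WARLOCK_SESSION variable are invented for illustration, and this version only waits for the shell rather than for whichever of the two exits first:

  #!/usr/bin/env python3
  import os, subprocess, sys, tempfile

  def main():
      session = tempfile.mkdtemp(prefix="warlock.")          # 1. per-session temp directory
      os.mkfifo(os.path.join(session, "COMMANDS"))           # 2. the two named pipes
      os.mkfifo(os.path.join(session, "RESPONSES"))

      env = dict(os.environ, WARLOCK_SESSION=session)        # tell the command executables where to look
      env["PATH"] = "/usr/lib/warlock/bin:" + env["PATH"]    # 4. directory holding the command executables

      analyzer = subprocess.Popen(["warlockd", session], env=env)   # 3. analyzer in the background

      shell = env.get("WARLOCK_SHELL", "sh")                 # 4. the user's preferred shell
      if len(sys.argv) > 2 and sys.argv[1] == "-c":
          status = subprocess.call([shell, "-c", sys.argv[2]], env=env)   # batch mode
      else:
          status = subprocess.call([shell], env=env)          # interactive session

      analyzer.terminate()                                    # 6. clean up
      analyzer.wait()
      for name in ("COMMANDS", "RESPONSES"):
          os.unlink(os.path.join(session, name))
      os.rmdir(session)
      sys.exit(status)

  if __name__ == "__main__":
      main()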

The analyzer just went into a simple loop (sketched below) in which it basically did the following:

  • Open the COMMANDS pipe, read a command, and close the pipe.
  • Open the RESPONSES pipe, write the response, and close the pipe.
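
In Python that loop might look roughly like this (a sketch only; the real analyzer was C++, and run_command stands in for warlock's actual command dispatch):

  import os

  def serve(session_dir, run_command):
      commands = os.path.join(session_dir, "COMMANDS")
      responses = os.path.join(session_dir, "RESPONSES")
      while True:
          # Opening a FIFO for reading blocks until a writer appears,
          # so this doubles as "wait for the next command".
          with open(commands) as f:
              cmd = f.read().strip()
          with open(responses, "w") as f:
              f.write(run_command(cmd))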

A command like “funcs -v” would just write “funcs -v” to the COMMANDS pipe and read the results on the RESPONSES pipe. But because funcs is just another command to the shell, you could use pipes, redirection, for loops — whatever — to accomplish some task.
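
Each such command can be a tiny forwarder installed under the right name. A sketch (the WARLOCK_SESSION variable used to locate the pipes is an invented detail, as above):

  #!/usr/bin/env python3
  # Installed as "funcs", "locks", "load", etc.: forward the command line to
  # the analyzer over COMMANDS, then print whatever comes back on RESPONSES.
  import os, sys

  session = os.environ["WARLOCK_SESSION"]

  with open(os.path.join(session, "COMMANDS"), "w") as f:
      f.write(" ".join([os.path.basename(sys.argv[0])] + sys.argv[1:]))

  with open(os.path.join(session, "RESPONSES")) as f:
      sys.stdout.write(f.read())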

Anyway, that’s the idea. An alternative, by the way, is to link with a ksh library. That would give you better performance if you need to run hundreds of commands per second. However, it would force your users to use that particular shell, limit you to that shell’s features, and limit the hosts you could run on. Another option would be to use something like Guile, if your users wouldn’t mind it. That would give your users a very powerful scripting environment. On the other hand, it probably doesn’t have interactive features on a par with modern shells, and most people would face quite a learning curve to use your program.

Different techniques would be appropriate for different situations. I’ve used this technique of grafting an actual shell into the CLI twice now, and both times the result has been great. There you have it!

UPDATE:

I recently did yet another such CLI, this time in Python using the awesome Requests library; it talks directly to a RESTful API (no named pipes). Very nice!
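
In that style a command shrinks to a few lines of Requests; the URL and payload shape below are made up purely for illustration:

  #!/usr/bin/env python3
  # Sketch of a REST-backed command: POST the command line to the server
  # and print the reply.
  import sys
  import requests

  resp = requests.post("http://localhost:8080/cli",
                       json={"argv": sys.argv[1:]},
                       timeout=30)
  resp.raise_for_status()
  print(resp.text)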


