Red Echo

November 25, 2015

Thanksgiving is a science-fiction story

The proper genre for Thanksgiving is science-fiction:

It has come to my attention that people are woefully uninformed about certain episodes in the Thanksgiving narrative. For example, almost no one mentions the part where Squanto threatens to release a bioweapon buried under Plymouth Rock that will bring about the apocalypse.

Mr. S, an ordinary American, is minding his own business outside his East Coast home when he is suddenly abducted by short large-headed creatures like none he has ever seen before. They bring him to their ship and voyage across unimaginable distances to an alien world both grander and more horrible than he could imagine. The aliens have godlike technologies, but their society is dystopian and hivelike. Enslaved at first, then displayed as a curiosity, he finally wins his freedom through pluck and intelligence. Despite the luxuries he enjoys in his new life, he longs for his homeworld.

November 13, 2015

ne, the nice editor:

ne is a free (GPL’d) text editor based on the POSIX standard that
runs (we hope) on almost any UN*X machine. ne is easy to use for
the beginner, but powerful and fully configurable for the wizard, and most
sparing in its resource usage. If you have the resources and the patience to
use emacs or the right mental twist to use vi then
probably ne is not for you. However, if you need an editor that:

  • compiles without effort everywhere (or almost everywhere), is packaged for
    all Linux distributions, and ported to other operating systems (such as Mac OS X);
  • is fast, small, powerful and simple to use;
  • has standard keystrokes (e.g., copy is CTRL-C);
  • uses little bandwidth, so it is ideal for email, editing through phone line (or
    slow GSM/GPRS/UMTS) connections;
  • has a very compact internal text
    representation, so you can easily load and modify very large files;

… then you should try ne.

November 12, 2015


From IEEE Spectrum, Bosch’s Giant Robot Can Punch Weeds To Death:

At IROS last month, researchers from a Bosch startup called Deepfield Robotics presented a paper on “Vision-Based High-Speed Manipulation for Robotic Ultra-Precise Weed Control,” which has like four distinct exciting-sounding phrases in it. We wanted to write about it immediately, but Deepfield asked us to hold off a bit until their fancy new website went live, which it now has. This means that we can show you video of their enormous agricultural robot that can autonomously detect and physically obliterate individual weeds in a tenth of a second.

November 6, 2015

Oh yeah, this project to marry unikernels with the Qubes system is pretty much exactly what I have been going for with Fleet.

October 24, 2015

A few solid hours hacking on fleet last night got me to a basic C “hello world”. The C library has everything from string.h and the simple character I/O functions from stdio.h, and the kernel has just enough of a driver interface to make simple file operations work. I’m using the legacy serial ports for now, since the drivers are trivial; stdin maps to COM1, stdout is COM2, and stderr writes to the host console via port E9.
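
For illustration only (this isn't the actual fleet source, and the helper name is made up), the port-E9 path really can be as small as the classic outb idiom – port 0xE9 is the Bochs/QEMU debug console, and bytes written there show up on the host with no device setup at all:

#include <stdint.h>

/* Write one byte to an x86 I/O port (standard GCC/Clang inline asm). */
static inline void outb(uint16_t port, uint8_t value) {
	__asm__ volatile("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Push a string to the emulator's debug console at port 0xE9. */
static void host_console_write(const char *s) {
	while (*s) outb(0xE9, (uint8_t)*s++);
}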

This is the part where it starts to get really interesting. If you handwave past the part where most of the C library isn’t there yet, it’s now possible to compile a couple of ordinary C programs, link them against the fleet library, run them in VMs, and pipe data between them in grand old Unix shell style. It’s all very normal – except that these processes are just as comprehensively isolated from each other as if they were running on separate physical machines.

October 21, 2015

This fleet project is a lot of fun, combining a shiny new idea with an excuse to take a crack at a lot of classic problems.

The next layer after the startup code should be something to do with drivers and the low-level kernel apparatus, but it all felt a bit vague, so I decided to start with the C standard library interface and work my way down, letting that drive the rest of the kernel architecture.

There are dozens of free C library implementations available, but I have not been able to find one that will work for my project. I don’t want POSIX support, don’t need hardware portability, and won’t have a Unix-style system call interface underneath. And while I’m building this in the style of an embedded firmware project, it’s actually designed to run on a purely virtual machine, so I don’t need or want a lot of code dealing with legacy PC hardware.

Oh, well, I’m writing my own C library. Of course I’ll fill it in with a lot of existing components, but this architecture is apparently weird enough that the framework is up to me.

I did write the string library myself, though, because I thought it would be fun. There sure is a lot of weirdness in there – it’s been 23 years since I learned C, and I can’t say I had ever noticed the existence of strspn, strxfrm, or strcoll – but now I’ve written ’em and built test suites too.
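
For a sense of what these functions involve, here is a from-scratch strspn – an illustrative version, not necessarily the one that ended up in fleet:

#include <stddef.h>

/* Return the length of the initial run of characters in s consisting
   entirely of characters that appear in accept. */
size_t strspn(const char *s, const char *accept) {
	size_t n = 0;
	for (; s[n] != '\0'; ++n) {
		const char *a = accept;
		while (*a != '\0' && *a != s[n]) ++a;
		if (*a == '\0') break;	/* s[n] is not in accept: the run ends here */
	}
	return n;
}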

October 19, 2015

I posted a thing: a piece of fleet called ‘startc’

I factored the lowest-level portion of the fleet code out as a standalone library which I’ve named ‘startc’ and posted on github. I also announced it on hackernews. Of course it feels a wee bit nerve-racking to post something experimental like this for the world to examine, but it’s a good exercise as it forces me to get all the loose ends tied up and to really think carefully about the interfaces between modules. So far the reception has been generally positive, which is nice. I have no idea whether anyone will actually use the library, but perhaps someone will get through the early stages of a similar project more quickly by looking at its source code, and that would make me feel good.

October 15, 2015

According to The Death Clock, I have about a billion seconds left.

That… seems reasonable.


October 12, 2015

I spent four hours hanging a TV on the wall yesterday. Yes, really. I thought I’d simplify the project and save myself a bunch of work by purchasing a wall-mount swivel arm for the TV instead of building what I wanted from scratch.

As soon as I got started, it was clear that the wall-mount was designed to be mounted on a solid wood or brick wall (seriously? how many of those do you find in the USA?), so I started with a trip to the hardware store for a plank and some lag screws. After some careful measuring and a lot of exploratory drilling, I found the right spot and bolted the anchor panel firmly into the studs.

Next, I discovered that the wall-mount was a little bit too small for the TV. What!? I thought I’d measured it before I ordered it! Well… the wall mount listed a diagonal measurement range which included the size of my TV, and its mounting bracket style matched the bracket I formerly used to attach the TV to the entertainment center, but it was designed for TVs with square bolt patterns and it just doesn’t spread out enough.

So… back to the hardware store, for another handful of bolts and some aluminum bars. I cut and drilled until I had a workable pair of adapter brackets.

Finally, I bolted the adapter brackets onto the TV, bolted the swivel-arm brackets onto the adapter brackets, screwed the swivel-arm brackets onto the arm head, and bolted the swivel-arm base onto the anchor panel, which I’d previously bolted onto the wall.

Sure saved myself a lot of work there!

October 6, 2015

The hypervisor is the new kernel.
The virtual machine is the new process.
The process is the new thread.
Virtual PCI devices are the new POSIX.

Shared mutable state does not scale.

October 1, 2015

Text editing as a wire protocol

I spend a lot of my computer time editing text files, and so I’ve thought a lot about how one might go about that in a system like Fleet. One approach would pack all possible editing services into a single, monolithic IDE, which could run within a single VM. It would mount the disk containing the files you want to work on, present a file browser, and let you edit away to your heart’s content.

There’s nothing wrong with that approach, and it wouldn’t be hard to build out of existing components, but it doesn’t really satisfy my sense of elegance. I’d rather find a way to plug my editing tools together like Lego bricks.

It’d be really convenient, for example, to separate the code that renders text on screen from the code that manages all the data and performs the edits. Text can be displayed in lots of different ways depending on the context (code? email? notepad? letter writing?), but the process of editing a text buffer is the same. Wouldn’t it be neat if I could write the editing engine once and just slap a bunch of different interfaces on it depending on context?

The Fleet philosophy says that every connection between components has to take the form of a wire protocol, but what kind of wire protocol would represent a text editor? That really isn’t the sort of thing client/server processes typically do!

It occurred to me, however, that unix is full of command-line apps which accept commands typed in through a serial connection, producing output as text. There is an ancient program called ‘ed’, part of Unix since the 60s, whose user interface is basically a little line-oriented command language. What if we just redefined its interface as a wire protocol? A text-editing interface program would become a bridge, with one end connected to an “edit buffer service” and the other connected to a “terminal display service”.

This would allow multiplexing: one could have an arsenal of tiny, single-purpose editing tools which do their work by sending commands to an edit-buffer service. No need to keep reimplementing the edit buffer in every tool – just send some ed commands down the wire.

The `ed` program was designed to edit text files, but considering its command language as a wire protocol, what we’re looking at in the abstract is simply an array of text chunks. There’s no reason the actual bits on disk have to be a flat text file: one could implement a different edit-buffer service for each different kind of file format, allowing one to mix and match editor interfaces and buffer services.

We can take it further. `ed` commands consist of a line reference, an identifying char, and optional parameters if the command needs them. What if we could extend the line reference syntax and use the same protocol to manipulate multidimensional data?

The syntax currently makes no use of the colon character ‘:’, so I suggest that the editor wire protocol could be extended by allowing a sequence of indexes delimited by colons:
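
None of this syntax is settled, but purely as an illustration, a plain ed address and a colon-extended one might look like:

  3,7p        print lines 3 through 7 (ordinary ed)
  3:2p        print sub-element 2 of element 3
  3:2,3:5d    delete sub-elements 2 through 5 of element 3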

2D extension:
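
Again just a sketch, but a table-backed buffer service might accept something like:

  12:4p        print the cell at row 12, column 4
  5:1,5:9p     print columns 1 through 9 of row 5
  2:3c         change the contents of the cell at row 2, column 3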

One could thus write a generic table viewer which would speak this protocol, then plug it into an edit-buffer service representing a CSV spreadsheet file or an NCSA server log file. And of course there’s no reason you couldn’t continue stacking dimensions arbitrarily if you wanted an edit service backed by JSON or some other hierarchical format.

It might be worthwhile to define a read-only subset of the protocol, since some tools will be content to view data, and it would be useful to develop buffer services which present a common interface for exploring structured data even if it’s not practical to perform edits.

System programming is fun: introducing FLEET

I couldn’t sleep the other night so I spent a few hours coding up the foundation of a kernel for this new exokernel-style operating system concept I’ve been talking about, which I’ve decided to call ‘fleet’. (Trindle was the microkernel idea, now dead.) It’s a lot of fun – it feels a lot like working on an embedded device, except the board just happens to have been designed by lunatics. I feel satisfied with my progress; the kernel boots the machine, configures memory and interrupts, spews log messages to the serial port, and enumerates the devices on the PCI bus.

Since I’m treating the PC as an embedded device dedicated to a single application, this “rump kernel” is really more like a new flavor of the C runtime library than a traditional kernel. I don’t have to worry about paging, memory protection, or user/supervisor mode switches, and most of the usual concurrency problems just disappear. An application which needed those services could link them in as libraries, but I’ll worry about that later.

Once upon a time, when the world was young and people were still trying to figure out what you could do with a computer network, people tried to build abstractions that would represent remote services as though they were local ones. “Remote procedure call” was the concept of the day, and this really took off in the early days of OOP: the idea was that you’d have local proxy objects which transparently communicated with remote ones, and you’d just call methods and get property values and everything would be shuttled back and forth automatically.

This just plain doesn’t work, because the semantics are totally different. You simply can’t make the fundamental constraints of concurrency, latency, and asynchrony disappear just by throwing a lot of threads around.

Modern interfaces are focused not on procedure calls, but on data blobs. Instead of making lots of granular, modal, stateful requests, machines communicate by serializing big blobs of data and streaming them back and forth at each other. This emphasizes bandwidth over latency, and focusing on large transactions rather than small interactions simplifies the problem of concurrent changes to remote state.

My plan is to take this idea out of the network and apply it inside a single PC. The rise of multicore computing has demonstrated that the traditional approaches don’t even scale within a single machine, once that machine is full of asynchronous processes competing for shared resources! In the ‘fleet’ world, rather than trying to represent remote resources with local proxies, we’ll represent local resources as though they were remote. There will be no DLLs and no system calls: the system API will be a folder full of wire protocol and data format specifications.

This solves the problem of network transparency from the opposite direction: since programs will already be communicating with local services through some network datastream interface, remote services will look exactly the same, except for the higher latency and lower reliability.

I believe that this approach will substantially improve the security picture, since the absence of any shared memory or common filesystem limits the damage a single program can do to the rest of the machine should it become compromised. Hypervisors seem to be holding up well in practice. Of course there’s nothing which would prevent a single ‘fleet’ process from spawning its own subprocesses and reintroducing all those concerns – the fleet shell would be perfectly happy to run linux as a subprocess, for that matter – but it’ll be easier to use the hypervisor interface and spawn “sub”-processes as independent virtual machines.

Requiring each program to include drivers for every possible hardware device would be madness, and slow madness since device emulation is tricky and expensive. These programs are never going to be run on bare metal anyway, so I’m going to ignore all legacy PC devices and define the ‘fleet’ system interface as consisting solely of virtio devices. These devices all have a simple, standardized IO interface, so it should be no problem to build drivers for six or eight of them into my kernel-library. I’ll offer an efficient low-level I/O API for nonblocking DMA transfers. All the clunky, synchronous, blocking C APIs can be implemented on top of that.
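
I haven’t designed that API yet; as a rough sketch (every name below is a placeholder, not anything that exists in fleet), the nonblocking layer might amount to a submit call plus a completion callback, with a blocking read built on top:

#include <stddef.h>

/* Placeholder shape for a nonblocking transfer: queue the request,
   return immediately, get a callback when the DMA completes. */
struct dma_request {
	void *buffer;
	size_t length;
	void (*completed)(struct dma_request *req, size_t transferred);
};

int dma_submit(int device, struct dma_request *req);	/* hypothetical */

/* A traditional blocking read is then just submit-and-wait. */
static volatile size_t done_bytes;
static volatile int done_flag;
static void note_done(struct dma_request *req, size_t n) {
	(void)req;
	done_bytes = n;
	done_flag = 1;
}

size_t blocking_read(int device, void *buffer, size_t length) {
	struct dma_request req = { buffer, length, note_done };
	done_flag = 0;
	dma_submit(device, &req);
	while (!done_flag) { /* real version: hlt until the device interrupt fires */ }
	return done_bytes;
}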

Looking at this system from above, it’s clear that making this fleet of VMs do useful work is going to involve a lot of datastream routing. I’m still working on the details, but I’m thinking that each program will have to include a compiled-in manifest describing the connections it wants to make and receive and the protocols it wants to use with them. Fixed connections like ‘stdin’, ‘stdout’ can be represented as serial ports, while other traffic can be specified using IP port numbers.
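
As a strawman (nothing here is designed yet, and every name is hypothetical), the manifest could be as dumb as a static table compiled into the binary and read by the shell before it boots the VM:

#include <stdint.h>

/* One declared connection: a protocol name plus either a fixed serial
   port number (for stdin/stdout-style streams) or an IP port. */
struct manifest_entry {
	const char *protocol;	/* e.g. "bytestream" */
	uint16_t serial_port;	/* nonzero for fixed connections like stdin/stdout */
	uint16_t ip_port;		/* nonzero for everything else */
};

static const struct manifest_entry manifest[] = {
	{ "bytestream", 1, 0 },		/* stdin:  first serial port */
	{ "bytestream", 2, 0 },		/* stdout: second serial port */
	{ "bytestream", 0, 4000 },	/* some other stream, identified by IP port number */
};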

I have no idea how far I’ll get with all this, but I’m back in my old stomping grounds with all this low-level hackery and having a great time at it, so I’ll probably stick with it long enough to build a proof of concept. Something that boots into a shell where you can manipulate a filesystem and pipe data between programs, with a little monitor that lets you see what all the VMs are doing – that should be fun.

September 27, 2015

Deep Playa 2015

Well, that was a fun weekend, out in the trees near Sedro-Woolley. This was apparently the fourth year of the Deep Playa campout and it looked to be around 300 people this time. There were interesting art projects, fun activities, decent music, and overall a happy burnery festival vibe despite the cold damp weather.

AJ and I camped out in our big truck, as is becoming usual, and while it really needs a heater, at least it’s insulated and we had a generator powering the electric blanket. We also hung a big tarp off the side of the truck and made a shaded area where we could set up the propane camp fire – and lo there was much gathering and enjoying on Saturday night, as everyone was pretty much clustered up around one fire or another.

I brought the small version of my sound system and set it up by our camp, renegade-style. Two 15″ subs and two Mackie 450 tops – it was more sound than we needed, honestly, and I had a great time rocking the neighborhood with it. I played an electroswing set on Friday afternoon, and three psytrance sets at various other times when the mood struck me. I also got to play glitch-hop on the big main stage sound system Saturday night – it was a little challenging, perhaps due to the cold, but it went well anyway and I’m glad I did it.

Tomorrow it’ll be time to unpack; tonight I’m making an early night of it.

September 24, 2015

I did a little research and the pieces of this plan are becoming clear. Virtio appears to be a totally reasonable platform abstraction API, and KVM will do the job as a hypervisor. I’ll set up an x86_64-elf gcc crosscompiler and use newlib as the C library. Each executable will have its own disk image, and exec will function by spawning a new VM and booting it with the target executable.

The missing piece, so far as I can tell, is a proxy representation of the hypervisor’s management interface which can be provided to a guest OS, so that our VMs can virtualize themselves – and pass on a proxy for their own hypervisor proxy, so that subprocesses can virtualize themselves in turn, recursively. This would enable the construction of a guest-OS shell managing an array of processes which are themselves independent guest-OS machines. Current thought: define the ‘virsh’ terminal interface as a serial protocol, then write a linux-side launcher process that creates a pipe-based virtual serial device and hands it off when starting up the first guest process.
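
A first stab at the launcher could dodge the proxy question entirely and just wire the guest’s serial port to a pipe. The sketch below assumes qemu-system-x86_64 with KVM and plain -serial stdio rather than the eventual virsh-style protocol; error handling is mostly omitted:

#include <sys/types.h>
#include <unistd.h>

/* Boot a guest kernel image with its first serial port attached to a pair
   of pipes; returns the child pid and hands back our ends of the pipes. */
pid_t launch_guest(const char *kernel_image, int *to_guest, int *from_guest) {
	int in_pipe[2], out_pipe[2];
	if (pipe(in_pipe) || pipe(out_pipe)) return -1;
	pid_t pid = fork();
	if (pid == 0) {
		dup2(in_pipe[0], 0);	/* guest serial input comes from our write end */
		dup2(out_pipe[1], 1);	/* guest serial output goes to our read end */
		close(in_pipe[1]);
		close(out_pipe[0]);
		execlp("qemu-system-x86_64", "qemu-system-x86_64",
		       "-enable-kvm", "-display", "none",
		       "-kernel", kernel_image,
		       "-serial", "stdio", (char *)NULL);
		_exit(127);
	}
	close(in_pipe[0]);
	close(out_pipe[1]);
	*to_guest = in_pipe[1];
	*from_guest = out_pipe[0];
	return pid;
}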

With the launcher and the multitasking shell in place, a toolchain targeting this baremetal environment, and an array of virtio device drivers in the form of static libs you can link in, the platform would be ready to go.

September 23, 2015

To simplify a bit further: I want to throw away the traditional “operating system” entirely, use the hypervisor as a process manager, use virtual device IO for IPC, and implement programs as unikernels.

I think this could all be done inside Linux, using KVM or Lguest, constructing the secure new world inside the creaky, complex old one.

September 22, 2015

Perhaps the reason I can’t sell myself on a specific minimal microkernel interface is that the system I want to build is not a microkernel at all. What I really want is no interface, no API, but an exokernel system where every program is written as though it were the only occupant of a single machine.

The interior space of a POSIX machine is so complex I’ve given up on the prospect of securing it, but hypervisors seem to have accomplished the job of secure isolation well enough to make the whole “cloud computing” business work. What if processes in this hypothetical environment were merely paravirtualized machines? Each executable would be a single-purpose “operating system” for a virtual machine.

A hypervisor takes the place of the traditional kernel, VirtIO devices stand in for the usual device-manipulation syscalls, and the shell becomes a HID multiplexer. Since each process sees itself as a separate machine, there is no longer any requirement for a shared mutable filesystem; instead of communicating by manipulating shared resources, processes must share resources by communicating.

From this perspective it is no longer important to know whether the system is running on bare metal or within some other host OS. Each process merely interacts with some array of devices to accomplish some defined task. An instance of this system built for a bare-metal environment would have to include drivers for actual devices so that they can be represented as virtio elements, but from the perspective of a program, inside its paravirtual machine, it simply doesn’t matter how many layers of emulation are stacked up above.

This offers a lovely progressive path toward implementation of the various components necessary for a useful operating system, since they can be implemented one by one as QEMU guests. In effect, it’s a redefinition of the API: instead of looking at the traditional POSIX-style syscall interfaces as the OS API, we simply define the notional standard PC implied by virtio as the system interface, and anything capable of running on such a machine becomes a valid element of the overall system.

In effect, this means that KVM becomes the kernel, and my project would be a shell program which can multiplex a set of interface devices among an array of VMs containing the actual programs I want to use.

September 18, 2015

Now THAT’S a 3D printer

I’ve been reluctant to get on the 3D-printing hype train since I have trouble thinking of anything I would actually want to make with one – who needs more cheap plastic crap cluttering up their lives? But this is a 3D printing technology that seems like it might actually be useful – Hershey has announced a chocolate printer:

“We are now using 3-D technology to bring Hershey goodness to consumers in unanticipated and exciting ways,” said Will Papa, Chief Research and Development Officer, The Hershey Company. “3-D printing gives consumers nearly endless possibilities for personalizing their chocolate, and our exhibit will be their first chance to see 3-D chocolate candy printing in action.”

September 15, 2015

“Interim OS” project for ARM

Simple OS project for the Raspberry Pi with information about getting a kernel to boot.

September 4, 2015

Things I’d still like to improve in this hypothetical kernel interface:

– access() and measure() are blatantly inefficient and really kind of terrible; you should just get that information for free when the message comes in, and if you want to inquire about object state, the call should let you ask about a whole batch of objects at once, to reduce the impact of syscall overhead (roughly sketched after this list).

– the mailbox design is sort of excessively clever, not likely to survive contact with the real world. I should just make different structs for incoming and outgoing messages.

– the idea of using a single syscall for all interactions with the outside world feels really nice, but I’m not sure I’ve gotten it right yet.

– I have a strong hunch that it will be important to resize queues some time.

– It feels wrong that there’s no way to cancel a message read and send some kind of fail signal back to the sender. Perhaps the solution would be to process send errors asynchronously, as messages received? But then you would need a bidirectional pipe, which I’ve been doing my best to avoid so far.

– extend() is the wrong name but I haven’t thought of the right one yet.

– every process can currently allocate memory willy-nilly, which feels like a contradiction with the overall exokernel style. Perhaps you should have to request a block of address space from a specific allocator… This would make an address space hierarchy easier, and would make it possible to provide feedback about memory pressure. Right now it’s impossible to impose any policy.

– the previous draft, which I didn’t publish, had a notion I liked called a “bundle” – you could pack an array of objects up as a single object, send it around as an indivisible unit, and unpack it again later. It occurred to me that queues are not entirely dissimilar: what if you could create a pipe, push a bunch of stuff into it, then send the whole pipe with all of its contents to some other object? On receipt it would be a pipe with both send and receive permission.

– I still think there ought to be a way to share writable memory through some kind of transactional key-value mechanism.

– It makes me really happy that there is no file system.
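
For the record, the batched query from the first item might turn into something like this; the name and shape are placeholders, not part of the actual draft:

// Hypothetical batched replacement for access() and measure(): one call
// fills in the rights bitmask and size for a whole array of objects.
struct object_info {
	int access;		// bitmask of ACCESS_* rights
	size_t size;	// bytes for a segment, queued items for a pipe
};
void query(const object_t objects[], struct object_info info[], size_t count);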

I have no idea whether I’ll actually implement any of this, but I have three specific implementation concepts in mind providing constraints as I work on the design.

The first is naturally the idea of building out a full scale desktop/laptop computer operating system, suitable for all my daily computing activities – doesn’t every systems developer fantasize about throwing it all away and starting over? The capability / exokernel strategy has some significant security benefits, and the lack of a global filesystem, or any way to implement global mutations at all, means that every layer of the system can insulate itself against the layers underneath. It also provides a mechanism allowing the user’s shell to lie, cheat, and manipulate programs to make them do what the user wants, whether they like it or not, which makes me happy when I swear at stupid javascript crap.

Of course this will never happen. An embedded RTOS for microcontroller projects is small enough that I could feasibly implement it on my own, however, and I’ve actually done so in the past – in a limited, ad-hoc way – when I worked at Synapse.

This is the second project I think about as I consider the kernel architecture: a small, efficient kernel suitable for embedded realtime applications. There are several actions which can take advantage of an MMU’s virtual addressing features if present, but Trindle will get by just fine without one – while benefiting greatly from the kind of simple memory protection features found on high-end microcontrollers.

The third and simplest project would implement the Trindle kernel as a user-space library for Unix systems, which could help an application manage its parallel data processing needs by spawning a fleet of worker threads and managing their interactions. In this environment, there is no MMU, but we can still get basically what we need through judicious use of mmap/mprotect/munmap.

I don’t really know yet how useful this would be as an actual tool, but it seems like it would be easy enough to try it out and see what happened.

Another Trindle draft

I had trouble sleeping last night so I spent a couple of hours writing up another draft of the Trindle kernel system call interface. I’ve managed to knock the complexity down a bit further without losing any functionality. Still has some issues to noodle over, but they’re growing increasingly minor and I think it’s at the point now where I could build it and it might actually work.

Every kernel-managed entity visible in user space is an object. Every object has a globally unique address. This value is only useful within a process which has access permission for that object.

typedef void *object_t;

What is the current process allowed to do with the object at this address? The result will be a bitmask of the relevant access rights from the enum.

enum {
	ACCESS_READ = 1,		// can read from this segment
	ACCESS_WRITE = 2,		// can write to this segment
	ACCESS_EXECUTE = 4,		// can execute code inside this segment
	ACCESS_SEND = 8,		// can transfer messages into this pipe
	ACCESS_RECEIVE = 16,	// can receive messages from this pipe
};
int access(object_t);

How large is this object? For a memory segment, this is its size in bytes; for a pipe, this is a lower bound on the number of objects in its queue.

size_t measure(object_t);

A segment is a contiguous block of memory with a common access right. The object address is a pointer to the first byte in the block. Create a new segment by concatenating some arbitrary number of source buffers together. The kernel may zerofill the buffer up to a more convenient size. A source buffer with an address of NULL represents zerofill, not an actual copy. A new segment will have ACCESS_READ|ACCESS_WRITE.

typedef object_t segment_t;
typedef struct buffer_t {
	size_t bytes;
	uint8_t *address;
} buffer_t;
segment_t allocate(size_t, const buffer_t[]);

Processes send and receive messages through fixed-length queues called pipes. Any number of processes may send messages to a single pipe, but only one process may read from it at a time. A pipe is an abstract object, not a memory segment. A new pipe will have ACCESS_SEND|ACCESS_RECEIVE.

typedef object_t pipe_t;
pipe_t pipe(size_t queue_items);

A process communicates with the rest of the world by sending and receiving messages. A message describes a state change involving an object and/or a communication pipe.

typedef struct message_t {
	pipe_t address;
	object_t content;
} message_t;

For efficiency, messages are exchanged in batches, sending and receiving as many at a time as possible. A batch of messages is called a mailbox.

typedef struct mailbox_t {
	size_t count;
	message_t *address;
} mailbox_t;

An outgoing message can accomplish three different jobs, depending on which fields you populate with non-NULL values.

  • both populated: share the content object by sending it through the pipe
  • address only, content NULL: receive messages from the specified pipe
  • content only, address NULL: release access to the specified object

Prepare a list of outgoing messages: the outbox. Fill out an array of message_t, then provide the address of the array base and the item count. Allocate a second array of message_t for incoming messages: the inbox. Provide the address of this array and the maximum number of messages the array can hold. Then call sync to let the system transfer as many messages as it can manage.

void sync(mailbox_t *out, mailbox_t *in);

On return, the outbox will have been sorted, grouping all of the failed messages at the beginning of the buffer, updating out->count with the number of messages which could not be sent (hopefully zero).

When a send fails, it is either because the recipient pipe has closed or because its queue was temporarily full. You can determine which it was by checking to see whether you still have ACCESS_SEND for the pipe specified in the failed message’s address.

On return, the inbox may also have been populated with incoming messages, and in->count will have been changed to reflect the number of messages that were received. The content of the remaining array items is undefined.

An incoming message can communicate several different changes of state depending on which fields are populated with non-NULL values.

  • Both address and content: we received a message from an input pipe.
  • content only: we now have exclusive ownership of this object.
  • address only: the receiver has released this pipe and it is now closed.
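
To make the calling convention concrete, one round of traffic might look roughly like this – a usage sketch, not part of the draft, with the objects passed in as placeholders:

// Share a segment with a peer, ask for deliveries from an input pipe, and
// release a segment we're done with - all in a single sync.
void example_round(pipe_t peer, pipe_t input, segment_t data, segment_t old) {
	message_t outgoing[3] = {
		{ peer, data },		// both fields: send 'data' through 'peer'
		{ input, NULL },	// address only: deliver messages from 'input'
		{ NULL, old },		// content only: release our access to 'old'
	};
	message_t incoming[8];
	mailbox_t out = { 3, outgoing };
	mailbox_t in = { 8, incoming };
	sync(&out, &in);
	// out.count: sends that failed, grouped at the front of 'outgoing'.
	// in.count: messages actually delivered into 'incoming'.
}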

What does it mean to have exclusive access to an object, and why would you
want to release it?

A segment can only be safely modified when there is exactly one process with access to its contents. If one process shares a segment object with another, the sender will lose ACCESS_WRITE and the receiver will gain only ACCESS_READ.

Should the sender later release its access to the segment, however, such that there remained exactly one process with access, the one remaining process would then gain ACCESS_WRITE for that segment, whether or not it
had anything to do with the segment’s original creation.

A process can therefore transfer read/write access to a segment in one sync by sending the segment through a pipe and then releasing its own access. When the last process releases the resource, so that nobody has access to it any longer, the kernel will delete it.

Pipes work differently: any number of processes can have ACCESS_SEND, but only the creating process can ever have ACCESS_RECEIVE. When the creating process releases its access to the pipe, the pipe goes dead and all the other processes will instantly lose ACCESS_SEND.

Every process has ACCESS_EXECUTE to the segment which contains its machine code. ACCESS_EXECUTE and ACCESS_WRITE are mutually exclusive, so code segments are read-only whether they are owned by one process or many.

Each new process starts up with an input queue providing access to whatever resources the launching process has chosen to share with it.

typedef void (*entrypoint_t)(pipe_t input);

The entrypoint function cannot return since it lives at the base of the thread stack. When it’s done with its work it should call exit, passing in whichever object represents its output. If something goes horribly wrong, it can bail out on the error path instead.

void exit(object_t);
void abort(object_t);

If you have loaded or generated some code and you want to execute it, you can acquire execute permission for a segment. Once executable, a segment cannot be made writable again; you must release and recreate it if you want to change it.

void extend(segment_t);

An existing process may launch a new one, specifying an entrypoint and a pair of completion pipes that will be notified when the process terminates. The entrypoint MUST be located inside a segment with ACCESS_READ. The launch function returns the ACCESS_SEND end of a pipe representing the new process’ main input queue. The out pipe will receive the process’ final output object when it exits; if it aborts, the err pipe will get the report instead.

pipe_t launch(entrypoint_t, size_t queue_count, pipe_t out, pipe_t err);

August 18, 2015

Camping in the desert

Floodland 2015 is over.

I hear it was a success, which is great. People had a good time and it was an authentically old-school-Burning-Man-like experience. Sounds like people want to come back and do it again next year, and have ideas and enthusiasm for projects they’d like to try.

I spent nearly all of my time during the event working, stressing, or trying with limited success to recover from working and stressing, so I didn’t really get to participate, which was not so great.

We had unreasonably hot weather on Thursday, which delayed setup, and we had an unbelievably intense windstorm on Friday, which knocked everything down and kept everyone huddled up inside vehicles and the sturdier tents. I’ve been out to the site on five occasions now and this was by far the most challenging weather.

We got things put back together on Saturday and people apparently had a great time, though I had already wiped myself out and missed it all. Oh, well. We will do better next year.

August 3, 2015

Comparison of programming fonts

A convenient table of programming fonts showing examples in a compact form allowing easy comparison.

July 30, 2015

Trindle kernel interface exploration

A computer’s fundamental resources are blocks of memory, interfaces to other pieces of hardware, and (for a portable device) a supply of battery power. An operating system’s fundamental job is to allow a computer to run more than one program at a time by dividing those resources among them in some fashion consistent with the priorities of the machine’s owner. The design of an OS kernel therefore begins with its mechanisms for allocating resources to specific programs and the interface through which it allows programs to manipulate them.

The fundamental tool of permission management is the MMU, and the MMU’s finest granularity is the 4K page, so we’ll give each system object a unique page-aligned address.
typedef void *object_t;

The operations a process may apply to an object are defined by a set of permission bits; a process may inspect an object address to find out what it can do.
typedef enum permission_t {
  PERMISSION_READ = 1, // can supply data
  PERMISSION_WRITE = 2, // can receive data
  PERMISSION_EXECUTE = 4, // contains machine code
  PERMISSION_DIRECT = 8, // backed by physical storage
  PERMISSION_PIPE = 16 // contains a transfer queue
} permission_t;
permission_t inspect(object_t obj);

A buffer is a contiguous group of writable pages beginning at its identifying address, which will have PERMISSION_READ|PERMISSION_WRITE|PERMISSION_DIRECT.
typedef object_t buffer_t;

Allocate a range of address space as a new buffer. Each page will be mapped against the zerofill page and reassigned to physical storage when it receives its first write.
buffer_t allocate(size_t page_count);

Truncate the buffer down to some number of pages, splitting the remaining pages off as a new, independent buffer and returning its address.
buffer_t split(buffer_t buf, size_t page_count);

Move the contents of these buffers into a new, contiguous buffer, releasing the original buffers in the process.
buffer_t join(buffer_t head, buffer_t tail);

Copy a range of pages into a new buffer; the source address must be page-aligned but may come from any region where the process has read permission.
buffer_t copy(void *source, size_t page_count);

A shared resource is an immutable buffer, which means that it can be owned by more than one process at a time. It offers PERMISSION_READ|PERMISSION_DIRECT.
typedef object_t resource_t;

Create a new shared resource by cloning the contents of an existing mutable buffer.
resource_t share(buffer_t data);

One process may communicate an object to another by transmitting it through a pipe. Unless the object is a shared resource, this transfers ownership from the sender to the receiver, and the object is removed from the sender’s access space. The pipe contains a queue, allowing communication to happen asynchronously.

An output is the end of the pipe you send objects into. It has PERMISSION_PIPE|PERMISSION_WRITE.
typedef object_t output_t;

Transmit a list of objects, one at a time, until they are all sent or the pipe’s queue has filled up. Returns the number of objects which were successfully transmitted.
size_t transmit(output_t dest, const object_t objs[], size_t count);

An input is the end of the pipe you receive objects from. It offers PERMISSION_PIPE|PERMISSION_READ.
typedef object_t input_t;

Receive objects from the pipe until its queue empties or the destination array fills up, then return the number of objects which were received, if any.
size_t receive(input_t src, object_t objs[], size_t array_size);

Allocate a new pipe able to queue up a certain number of elements, populating the variables pointed at by in and out with the pipe’s endpoint objects.
void pipe(size_t entries, input_t *in, output_t *out);

Close a pipe by releasing its input or output. The object representing the pipe’s other end will lose PERMISSION_READ or PERMISSION_WRITE and retain only PERMISSION_PIPE.

An executable is a shared resource which contains machine code. Since it is an immutable shared resource, it can be owned by more than one process at a time. It offers PERMISSION_EXECUTE|PERMISSION_DIRECT.
typedef object_t executable_t;

Create a new executable by cloning the contents of an existing shared resource.
executable_t prepare_code(resource_t text);

Create a new process in suspended state, configure its saved instruction pointer and stack pointer, and assign it ownership of some objects (thereby releasing all but the shared resources, as usual). The process will begin executing when the scheduler next gets around to granting it a timeslice and “resuming” it.
void launch(object_t bundle[], size_t bundle_count, void *entrypoint, void *stack);

Delete the current process and release all of its resources.
void exit();

Suspend the process until an empty input starts to fill, a blocked output starts to drain, a pipe closes, or a certain number of milliseconds have elapsed. If woken by an event involving a pipe, the call will return the relevant input or output, otherwise it will return zero.
object_t sleep(uint64_t milliseconds);
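
Putting a few of these calls together, a worker process’s main loop might look roughly like this – a sketch against the interface above, with process() standing in for whatever the application actually does:

void process(object_t obj);	// application-defined placeholder

// Drain the input pipe, handle each object, pass it along, and block
// until more work shows up or the pipe closes.
void worker(input_t work_in, output_t work_out) {
	object_t batch[16];
	for (;;) {
		size_t n = receive(work_in, batch, 16);
		if (n == 0) {
			// If the other end released the output, we lost PERMISSION_READ.
			if (!(inspect(work_in) & PERMISSION_READ)) return;
			sleep(1000);	// otherwise wait up to a second for activity
			continue;
		}
		for (size_t i = 0; i < n; ++i)
			process(batch[i]);
		transmit(work_out, batch, n);	// sketch: ignores a full queue
	}
}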

I’m starting to lose track of all the Trindle draft documents I’ve written, rewritten, replaced, and abandoned, scattered as they are across three laptops, my home desktop, and a remote server. This is either my fifth or sixth attempt at a concrete design for the system interface, but it’s the first time I’ve made it all the way through without discovering a fatal flaw, and that feels like progress.

July 27, 2015

This simple structure, made of rope and 2x4s, looks like a cozy little minimalist Burning Man hangout: it supports three hammocks and can be covered in tarps for shade. Author quotes $42.60 in materials.


July 21, 2015

Chrysler vehicles vulnerable to remote exploit

I’ve been joking for years that I refuse to drive a car that has a computer in it, because I’m a software engineer and am therefore unable to trust any system other software engineers have ever touched.

Except I’m not entirely joking. I really like my old-fashioned, non-upgradeable, non-networked, CAN-bus-free classic Range Rover, and part of the reason I am happy to keep on paying its hefty repair and maintenance bills is that I don’t have to worry that its 20-year-old electrical systems are vulnerable to control by malicious external agents like hackers or federal agents:

The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, to any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.

Motorcycles are even more trustworthy; most of them don’t contain so much as a single microcontroller.

July 20, 2015

Yosemite backpacking

I’m back in Seattle after a week in California. The backpacking trip went well and I am really glad I went. The group was a little smaller than average, but even a small slice of my very large family adds up to a good-sized crowd. Still, it was funny that my mother and I were the only people present who had actually participated in the notorious Disaster Hike – for everyone else it was just a pretty loop among some alpine lakes.

The beginning of the hike was a bit stiffer than we’d anticipated; I’m not sure what the trail builders were thinking, but they had us do a lot of climbing and descending without encountering any notable vista or any other apparent justification. Once we reached Crescent Lake, however, the loop was steady and smooth.

Mom, AJ, Abigail, and I all scrambled up Buena Vista Peak as the trail crossed its shoulder, yielding a glorious panoramic view of the southern park, a perspective I’ve never seen before. We camped that night by Buena Vista Lake, peaceful and quiet, with a beautiful glowing sunset rolling across the granite; I’ve never seen waves on a lake reflecting quite so distinctly orange and blue.

We had planned to find an unmaintained cross-country trail leading from the main trail past Hart Lakes over to Ostrander Lake, but after looking at the terrain from atop Buena Vista, decided it would be easier and more fun to bushwhack across Horse Ridge instead. This started out as a ridiculously pleasant walk through a spacious forest, but once we reached the crest of Horse Ridge we discovered that the far side is a precipice, not shown on our maps. With a bit of exploring we found a steep but workable ravine cutting through the sheer face, however, and after a little work we got everyone down and across to the Ostrander Lake bowl.

Oh, such a lovely day that was, and so satisfying to dip our feet in the water!

AJ and I weathered the trip with ease; you know you’ve got something good when the relationship-maintenance work flows so easily and automatically that it doesn’t even feel like work.
