Red Echo

October 15, 2015


October 12, 2015

I spent four hours hanging a TV on the wall yesterday. Yes, really. I thought I’d simplify the project and save myself a bunch of work by purchasing a wall-mount swivel arm for the TV instead of building what I wanted from scratch.

As soon as I got started, it was clear that the wall-mount was designed to be mounted on a solid wood or brick wall (seriously? how many of those do you find in the USA?), so I started with a trip to the hardware store for a plank and some lag screws. After some careful measuring and a lot of exploratory drilling, I found the right spot and bolted the anchor panel firmly into the studs.

Next, I discovered that the wall-mount was a little bit too small for the TV. What!? I thought I measured it before I ordered it! Well… the wall mount listed a diagonal measurement range which includes the size of my TV, and its mounting bracket style is the same as that of the bracket I formerly used to attach the TV to the entertainment center, but it was designed for TVs with square bolt patterns and it just doesn’t spread out enough.

So… back to the hardware store, for another handful of bolts and some aluminum bars. I cut and drilled until I had a workable pair of adapter brackets.

Finally, I bolted the adapter brackets onto the TV, bolted the swivel-arm brackets onto the adapter brackets, screwed the swivel-arm brackets onto the arm head, and bolted the swivel-arm base onto the anchor panel, which I’d previously bolted onto the wall.

Sure saved myself a lot of work there!

October 6, 2015

The hypervisor is the new kernel.
The virtual machine is the new process.
The process is the new thread.
Virtual PCI devices are the new POSIX.

Shared mutable state does not scale.

October 1, 2015

Text editing as a wire protocol

I spend a lot of my computer time editing text files, and so I’ve thought a lot about how one might go about that in a system like Fleet. One approach would pack all possible editing services into a single, monolithic IDE, which could run within a single VM. It would mount the disk containing the files you want to work on, present a file browser, and let you edit away to your heart’s content.

There’s nothing wrong with that approach, and it wouldn’t be hard to build out of existing components, but it doesn’t really satisfy my sense of elegance. I’d rather find a way to plug my editing tools together like Lego bricks.

It’d be really convenient, for example, to separate the code that renders text on screen from the code that manages all the data and performs the edits. Text can be displayed in lots of different ways depending on the context (code? email? notepad? letter writing?), but the process of editing a text buffer is the same. Wouldn’t it be neat if I could write the editing engine once and just slap a bunch of different interfaces on it depending on context?

The Fleet philosophy says that every connection between components has to take the form of a wire protocol, but what kind of wire protocol would represent a text editor? That really isn’t the sort of thing client/server processes typically do!

It occurred to me, however, that Unix is full of command-line apps which accept commands typed in through a serial connection, producing output as text. There is an ancient program called ‘ed’, part of Unix since 1969, whose user interface is basically a little line-oriented command language. What if we just redefined its interface as a wire protocol? A text-editing interface program would become a bridge, with one end connected to an “edit buffer service” and the other connected to a “terminal display service”.

This would allow multiplexing: one could have an arsenal of tiny, single-purpose editing tools which do their work by sending commands to an edit-buffer service. No need to keep reimplementing the edit buffer in every tool – just send some ed commands down the wire.

The `ed` program was designed to edit text files, but considering its command language as a wire protocol, what we’re looking at in the abstract is simply an array of text chunks. There’s no reason the actual bits on disk have to be a flat text file: one could implement a different edit-buffer service for each kind of file format, allowing one to mix and match editor interfaces and buffer services.

We can take it further. `ed` commands consist of a line reference, an identifying char, and optional parameters if the command needs them. What if we could extend the line reference syntax and use the same protocol to manipulate multidimensional data?

The syntax currently makes no use of the colon character ‘:’, so I suggest that the editor wire protocol could be extended by allowing a sequence of indexes delimited by colons:

Current: <line-ref>‘X’<params>\n

2D extension: <index>:<line-ref>‘X’<params>\n

One could thus write a generic table viewer which would speak this protocol, then plug it into an edit-buffer service representing a CSV spreadsheet file or an NCSA server log file. And of course there’s no reason you couldn’t continue stacking dimensions arbitrarily if you wanted an edit service backed by JSON or some other hierarchical format.
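To make the wire format concrete, here is a small sketch of a command formatter for the colon-extended addressing described above. The function name and the exact token layout are my own illustration of the proposal, not an established protocol:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Format an ed-style command: zero or more colon-delimited indexes,
 * a line reference, a single command character, optional parameters,
 * and a trailing newline. Returns bytes written, or -1 on overflow.
 * The name and layout are illustrative, per the proposal in the text. */
static int format_command(char *out, size_t cap,
                          const int *indexes, size_t depth,
                          char command, const char *params)
{
    size_t used = 0;
    for (size_t i = 0; i < depth; ++i) {
        int n = snprintf(out + used, cap - used,
                         i + 1 < depth ? "%d:" : "%d", indexes[i]);
        if (n < 0 || (size_t)n >= cap - used) return -1;
        used += (size_t)n;
    }
    int n = snprintf(out + used, cap - used, "%c%s\n",
                     command, params ? params : "");
    if (n < 0 || (size_t)n >= cap - used) return -1;
    return (int)(used + (size_t)n);
}
```

With one index this produces the classic `5p\n`; with two it produces the 2D form `3:5p\n`, which a table-aware buffer service could interpret as row 3, line 5.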

It might be worthwhile to define a read-only subset of the protocol, since some tools will be content to view data, and it would be useful to develop buffer services which present a common interface for exploring structured data even if it’s not practical to perform edits.

System programming is fun: introducing FLEET

I couldn’t sleep the other night so I spent a few hours coding up the foundation of a kernel for this new exokernel-style operating system concept I’ve been talking about, which I’ve decided to call ‘fleet’. (Trindle was the microkernel idea, now dead.) It’s a lot of fun – it feels like working on an embedded device, except the board just happens to have been designed by lunatics. I feel satisfied with my progress; the kernel boots the machine, configures memory and interrupts, spews log messages to the serial port, and enumerates the devices on the PCI bus.

Since I’m treating the PC as an embedded device dedicated to a single application, this “rump kernel” is really more like a new flavor of the C runtime library than a traditional kernel. I don’t have to worry about paging, memory protection, or user/supervisor mode switches, and most of the usual concurrency problems just disappear. An application which needed those services could link them in as libraries, but I’ll worry about that later.

Once upon a time, when the world was young and people were still trying to figure out what you could do with a computer network, people tried to build abstractions that would represent remote services as though they were local ones. “Remote procedure call” was the concept of the day, and this really took off in the early days of OOP: the idea was that you’d have local proxy objects which transparently communicated with remote ones, and you’d just call methods and get property values and everything would be shuttled back and forth automatically.

This just plain doesn’t work, because the semantics are totally different. You simply can’t make the fundamental constraints of concurrency, latency, and asynchrony disappear just by throwing a lot of threads around.

Modern interfaces are focused not on procedure calls, but on data blobs. Instead of making lots of granular, modal, stateful requests, machines communicate by serializing big blobs of data and streaming them back and forth at each other. This emphasizes bandwidth over latency, and focusing on large transactions rather than small interactions simplifies the problem of concurrent changes to remote state.

My plan is to take this idea out of the network and apply it inside a single PC. The rise of multicore computing has demonstrated that the traditional approaches don’t even scale within a single machine, once that machine is full of asynchronous processes competing for shared resources! In the ‘fleet’ world, rather than trying to represent remote resources with local proxies, we’ll represent local resources as though they were remote. There will be no DLLs and no system calls: the system API will be a folder full of wire protocol and data format specifications.

This solves the problem of network transparency from the opposite direction: since programs will already be communicating with local services through some network datastream interface, remote services will look exactly the same, except for the higher latency and lower reliability.

I believe that this approach will substantially improve the security picture, since the absence of any shared memory or common filesystem limits the damage a single program can do to the rest of the machine should it become compromised. Hypervisors seem to be holding up well in practice. Of course there’s nothing which would prevent a single ‘fleet’ process from spawning its own subprocesses and reintroducing all those concerns – the fleet shell would be perfectly happy to run linux as a subprocess, for that matter – but it’ll be easier to use the hypervisor interface and spawn “sub”-processes as independent virtual machines.

Requiring each program to include drivers for every possible hardware device would be madness, and slow madness since device emulation is tricky and expensive. These programs are never going to be run on bare metal anyway, so I’m going to ignore all legacy PC devices and define the ‘fleet’ system interface as consisting solely of virtio devices. These devices all have a simple, standardized IO interface, so it should be no problem to build drivers for six or eight of them into my kernel-library. I’ll offer an efficient low-level I/O API for nonblocking DMA transfers. All the clunky, synchronous, blocking C APIs can be implemented on top of that.
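The layering described above – a nonblocking low-level transfer API with the blocking C conveniences built on top – can be sketched with a toy submission ring. Everything here is my own illustration: it models only the queueing discipline, not the real virtio descriptor layout.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a nonblocking I/O submission ring with a blocking
 * wrapper layered on top. Names and layout are illustrative only. */
enum { RING_SIZE = 8 };

struct request { void *buf; size_t len; bool done; };

struct ring {
    struct request slots[RING_SIZE];
    unsigned head;   /* next free slot (producer) */
    unsigned tail;   /* next request to complete (device side) */
};

/* Nonblocking submit: returns false if the ring is full. */
static bool submit(struct ring *r, void *buf, size_t len)
{
    if (r->head - r->tail == RING_SIZE) return false;
    struct request *req = &r->slots[r->head % RING_SIZE];
    req->buf = buf; req->len = len; req->done = false;
    r->head++;
    return true;
}

/* "Device" side: complete the oldest outstanding request, if any. */
static bool complete_one(struct ring *r)
{
    if (r->tail == r->head) return false;
    r->slots[r->tail % RING_SIZE].done = true;
    r->tail++;
    return true;
}

/* Blocking convenience wrapper: a synchronous call built on the ring.
 * In real life the waits would block on an interrupt, not spin. */
static void submit_and_wait(struct ring *r, void *buf, size_t len)
{
    while (!submit(r, buf, len))
        complete_one(r);
    while (r->tail != r->head)
        complete_one(r);
}
```

The point of the split is that `submit`/`complete_one` never wait, so an application can keep many transfers in flight, while legacy-style code just calls the blocking wrapper.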

Looking at this system from above, it’s clear that making this fleet of VMs do useful work is going to involve a lot of datastream routing. I’m still working on the details, but I’m thinking that each program will have to include a compiled-in manifest describing the connections it wants to make and accept and the protocols it wants to use with them. Fixed connections like ‘stdin’ and ‘stdout’ can be represented as serial ports, while other traffic can be specified using IP port numbers.

I have no idea how far I’ll get with all this, but I’m back in my old stomping grounds with all this low-level hackery and having a great time at it, so I’ll probably stick with it long enough to build a proof of concept. Something that boots into a shell where you can manipulate a filesystem and pipe data between programs, with a little monitor that lets you see what all the VMs are doing – that should be fun.

September 27, 2015

Deep Playa 2015

Well, that was a fun weekend, out in the trees near Sedro-Woolley. This was apparently the fourth year of the Deep Playa campout and it looked to be around 300 people this time. There were interesting art projects, fun activities, decent music, and overall a happy burnery festival vibe despite the cold damp weather.

AJ and I camped out in our big truck, as is becoming usual, and while it really needs a heater, at least it’s insulated and we had a generator powering the electric blanket. We also hung a big tarp off the side of the truck and made a shaded area where we could set up the propane camp fire – and lo there was much gathering and enjoying on Saturday night, as everyone was pretty much clustered up around one fire or another.

I brought the small version of my sound system and set it up by our camp, renegade-style. Two 15″ subs and two Mackie 450 tops – it was more sound than we needed, honestly, and I had a great time rocking the neighborhood with it. I played an electroswing set on Friday afternoon, and three psytrance sets at various other times when the mood struck me. I also got to play glitch-hop on the big main stage sound system Saturday night – it was a little challenging, perhaps due to the cold, but it went well anyway and I’m glad I did it.

Tomorrow it’ll be time to unpack; tonight I’m making an early night of it.

September 24, 2015

I did a little research and the pieces of this plan are becoming clear. Virtio appears to be a totally reasonable platform abstraction API, and KVM will do the job as a hypervisor. I’ll set up an x86_64-elf gcc cross-compiler and use newlib as the C library. Each executable will have its own disk image, and exec will function by spawning a new VM and booting it with the target executable.

The missing piece, so far as I can tell, is a proxy representation of the hypervisor’s management interface which can be provided to a guest OS, so that our VMs can virtualize themselves – and pass on a proxy for their own hypervisor proxy, so that subprocesses can virtualize themselves in turn, recursively. This would enable the construction of a guest-OS shell managing an array of processes which are themselves independent guest-OS machines. Current thought: define the ‘virsh’ terminal interface as a serial protocol, then write a linux-side launcher process that creates a pipe-based virtual serial device and hands it off when starting up the first guest process.

With the launcher and the multitasking shell in place, a toolchain targeting this baremetal environment, and an array of virtio device drivers in the form of static libs you can link in, the platform would be ready to go.

September 23, 2015

To simplify a bit further: I want to throw away the traditional “operating system” entirely, use the hypervisor as a process manager, use virtual device IO for IPC, and implement programs as unikernels.

I think this could all be done inside Linux, using KVM or Lguest, constructing the secure new world inside the creaky, complex old one.

September 22, 2015

Perhaps the reason I can’t sell myself on a specific minimal microkernel interface is that the system I want to build is not a microkernel at all. What I really want is no interface, no API, but an exokernel system where every program is written as though it were the only occupant of a single machine.

The interior space of a POSIX machine is so complex I’ve given up on the prospect of securing it, but hypervisors seem to have accomplished the job of secure isolation well enough to make the whole “cloud computing” business work. What if processes in this hypothetical environment were merely paravirtualized machines? Each executable would be a single-purpose “operating system” for a virtual machine.

A hypervisor takes the place of the traditional kernel, VirtIO devices stand in for the usual device-manipulation syscalls, and the shell becomes a HID multiplexer. Since each process sees itself as a separate machine, there is no longer any requirement for a shared mutable filesystem; instead of communicating by manipulating shared resources, processes must share resources by communicating.

From this perspective it is no longer important to know whether the system is running on bare metal or within some other host OS. Each process merely interacts with some array of devices to accomplish some defined task. An instance of this system built for a bare-metal environment would have to include drivers for actual devices so that they can be represented as virtio elements, but from the perspective of a program, inside its paravirtual machine, it simply doesn’t matter how many layers of emulation are stacked up above.

This offers a lovely progressive path toward implementation of the various components necessary for a useful operating system, since they can be implemented one by one as QEMU guests. In effect, it’s a redefinition of the API: instead of looking at the traditional POSIX-style syscall interfaces as the OS API, we simply define the notional standard PC implied by virtio as the system interface, and anything capable of running on such a machine becomes a valid element of the overall system.

In effect, this means that KVM becomes the kernel, and my project would be a shell program which can multiplex a set of interface devices among an array of VMs containing the actual programs I want to use.

September 18, 2015

Now THAT’S a 3D printer

I’ve been reluctant to get on the 3D-printing hype train since I have trouble thinking of anything I would actually want to make with one – who needs more cheap plastic crap cluttering up their lives? But this is a 3D printing technology that seems like it might actually be useful – Hershey has announced a chocolate printer:

“We are now using 3-D technology to bring Hershey goodness to consumers in unanticipated and exciting ways,” said Will Papa, Chief Research and Development Officer, The Hershey Company. “3-D printing gives consumers nearly endless possibilities for personalizing their chocolate, and our exhibit will be their first chance to see 3-D chocolate candy printing in action.”

September 15, 2015

“Interim OS” project for ARM

Simple OS project for the Raspberry Pi with information about getting a kernel to boot.

September 4, 2015

Things I’d still like to improve in this hypothetical kernel interface:

– access() and measure() are blatantly inefficient and really kind of terrible; you should just get that information for free when the message comes in, and if you want to inquire about object state, the call should let you ask about a whole batch of objects at once, to reduce the impact of syscall overhead.

– the mailbox design is sort of excessively clever, not likely to survive contact with the real world. I should just make different structs for incoming and outgoing messages.

– the idea of using a single syscall for all interactions with the outside world feels really nice, but I’m not sure I’ve gotten it right yet.

– I have a strong hunch that it will eventually be important to be able to resize queues.

– It feels wrong that there’s no way to cancel a message read and send some kind of fail signal back to the sender. Perhaps the solution would be to process send errors asynchronously, as messages received? But then you would need a bidirectional pipe, which I’ve been doing my best to avoid so far.

– extend() is the wrong name but I haven’t thought of the right one yet.

– every process can currently allocate memory willy-nilly, which feels like a contradiction with the overall exokernel style. Perhaps you should have to request a block of address space from a specific allocator… This would make an address space hierarchy easier, and would make it possible to provide feedback about memory pressure. Right now it’s impossible to impose policy.

– the previous draft, which I didn’t publish, had a notion I liked called a “bundle” – you could pack an array of objects up as a single object, send it around as an indivisible unit, and unpack it again later. It occurred to me that queues are not entirely dissimilar: what if you could create a pipe, push a bunch of stuff into it, then send the whole pipe with all of its contents to some other object? On receipt it would be a pipe with both send and receive permission.

– I still think there ought to be a way to share writable memory through some kind of transactional key-value mechanism.

– It makes me really happy that there is no file system.


I have no idea whether I’ll actually implement any of this, but I have three specific implementation concepts in mind providing constraints as I work on the design.

The first is naturally the idea of building out a full-scale desktop/laptop computer operating system, suitable for all my daily computing activities – doesn’t every systems developer fantasize about throwing it all away and starting over? The capability/exokernel strategy has some significant security benefits, and the lack of a global filesystem, or any way to implement global mutations at all, means that every layer of the system can insulate itself against the layers underneath. It also provides a mechanism allowing the user’s shell to lie, cheat, and manipulate programs to make them do what the user wants, whether they like it or not, which makes me happy when I swear at stupid javascript crap.

Of course this will never happen. An embedded RTOS for microcontroller projects is small enough that I could feasibly implement it on my own, however, and I’ve actually done so in the past – in a limited, ad-hoc way – when I worked at Synapse.

This is the second project I think about as I consider the kernel architecture: a small, efficient kernel suitable for embedded realtime applications. There are several operations which can take advantage of an MMU’s virtual addressing features if present, but Trindle will get by just fine without one – while benefiting greatly from the kind of simple memory protection features found on high-end microcontrollers.

The third and simplest project would implement the Trindle kernel as a user-space library for Unix systems, which could help an application manage its parallel data processing needs by spawning a fleet of worker threads and managing their interactions. In this environment, there is no MMU, but we can still get basically what we need through judicious use of mmap/mprotect/munmap.

I don’t really know yet how useful this would be as an actual tool, but it seems like it would be easy enough to try it out and see what happened.

Another Trindle draft

I had trouble sleeping last night so I spent a couple of hours writing up another draft of the Trindle kernel system call interface. I’ve managed to knock the complexity down a bit further without losing any functionality. Still has some issues to noodle over, but they’re growing increasingly minor and I think it’s at the point now where I could build it and it might actually work.


Every kernel-managed entity visible in user space is an object. Every object has a globally unique address. This value is only useful within a process which has access permission for that object.

typedef void *object_t;

What is the current process allowed to do with the object at this address? The result will be a bitmask of the relevant access rights from the enum.

enum ACCESS
{
	ACCESS_READ = 1,		// can read from this segment
	ACCESS_WRITE = 2,		// can write to this segment
	ACCESS_EXECUTE = 4, 	// can execute code inside this segment
	ACCESS_SEND = 8,		// can transfer messages into this pipe
	ACCESS_RECEIVE = 16,	// can receive messages from this pipe
};
int access(object_t);

How large is this object? For a memory segment, this is its size in bytes; for a pipe, this is a lower bound on the number of objects in its queue.

size_t measure(object_t);

A segment is a contiguous block of memory with a common access right. The object address is a pointer to the first byte in the block. Create a new segment by concatenating some arbitrary number of source buffers together. The kernel may zerofill the buffer up to a more convenient size. A source buffer with an address of NULL represents zerofill, not an actual copy. A new segment will have ACCESS_READ|ACCESS_WRITE.

typedef object_t segment_t;
struct buffer_t
{
	size_t bytes;
	uint8_t *address;
};
segment_t allocate(size_t, const buffer_t[]);
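A heap-based model of this allocate() call, to show the concatenation-with-zerofill convention in action. This is a toy under my own assumptions – a real kernel would hand back page-backed memory, whereas calloc gives us the zerofill for free:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Mirrors the buffer_t declaration from the draft. */
typedef struct buffer_t {
    size_t bytes;
    uint8_t *address;
} buffer_t;

/* Toy model of allocate(): concatenate the source buffers into one new
 * segment, treating a NULL address as "zerofill this many bytes".
 * Takes an explicit count since this toy has no kernel-side metadata. */
static uint8_t *allocate_segment(size_t count, const buffer_t sources[])
{
    size_t total = 0;
    for (size_t i = 0; i < count; ++i)
        total += sources[i].bytes;
    uint8_t *seg = calloc(total ? total : 1, 1);  /* zero-filled */
    if (!seg) return NULL;
    size_t at = 0;
    for (size_t i = 0; i < count; ++i) {
        if (sources[i].address)          /* NULL source => leave zeros */
            memcpy(seg + at, sources[i].address, sources[i].bytes);
        at += sources[i].bytes;
    }
    return seg;
}
```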

Processes send and receive messages through fixed-length queues called pipes. Any number of processes may send messages to a single pipe, but only one process may read from it at a time. A pipe is an abstract object, not a memory segment. A new pipe will have ACCESS_SEND|ACCESS_RECEIVE.

typedef object_t pipe_t;
pipe_t pipe(size_t queue_items);

A process communicates with the rest of the world by sending and receiving messages. A message describes a state change involving an object and/or a communication pipe.

struct message_t
{
	pipe_t address;
	object_t content;
};

For efficiency, messages are exchanged in batches, sending and receiving as many at a time as possible. A batch of messages is called a mailbox.

struct mailbox_t
{
	size_t count;
	message_t *address;
};

An outgoing message can accomplish three different jobs, depending on which fields you populate with non-NULL values.

  • both populated: share the content object by sending it through the pipe
  • address only, content NULL: receive messages from the specified pipe
  • content only, address NULL: release access to the specified object
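The three cases decode mechanically from which fields are non-NULL, which a sync implementation (or a debugging tool) might express like this. The helper and enum names are mine; the types mirror the declarations above:

```c
#include <assert.h>
#include <stddef.h>

/* Types as declared in the draft. */
typedef void *object_t;
typedef object_t pipe_t;

typedef struct message_t {
    pipe_t address;
    object_t content;
} message_t;

/* The three outgoing message shapes, decoded from the NULL pattern. */
enum outgoing_kind { MSG_INVALID, MSG_SEND, MSG_RECEIVE, MSG_RELEASE };

static enum outgoing_kind classify_outgoing(const message_t *m)
{
    if (m->address && m->content) return MSG_SEND;     /* share via pipe */
    if (m->address)               return MSG_RECEIVE;  /* listen on pipe */
    if (m->content)               return MSG_RELEASE;  /* drop access   */
    return MSG_INVALID;                                /* both NULL     */
}
```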

Prepare a list of outgoing messages: the outbox. Fill out an array of message_t, then provide the address of the array base and the item count. Allocate a second array of message_t for incoming messages: the inbox. Provide the address of this array and the maximum number of messages the array can hold. Then call sync to let the system transfer as many messages as it can manage.

void sync(mailbox_t *out, mailbox_t *in);

On return, the outbox will have been sorted, grouping all of the failed messages at the beginning of the buffer, updating out->count with the number of messages which could not be sent (hopefully zero).

When a send fails, it is either because the recipient pipe has closed or because its queue was temporarily full. You can determine which it was by checking to see whether you still have ACCESS_SEND for the pipe specified in the failed message’s address.

On return, the inbox may also have been populated with incoming messages, and in->count will have been changed to reflect the number of messages that were received. The content of the remaining array items is undefined.

An incoming message can communicate several different changes of state depending on which fields are populated with non-NULL values.

  • Both address and content: we received a message from an input pipe.
  • content only: we now have exclusive ownership of this object.
  • address only: the receiver has released this pipe and it is now closed.

What does it mean to have exclusive access to an object, and why would you want to release it?

A segment can only be safely modified when there is exactly one process with access to its contents. If one process shares a segment object with another, the sender will lose ACCESS_WRITE and the receiver will gain only ACCESS_READ.

Should the sender later release its access to the segment, however, such that there remained exactly one process with access, the one remaining process would then gain ACCESS_WRITE for that segment, whether or not it had anything to do with the segment’s original creation.

A process can therefore transfer read/write access to a segment in one sync by sending the segment through a pipe and then releasing its own access. When the last process releases the resource, so that nobody has access to it any longer, the kernel will delete it.

Pipes work differently: any number of processes can have ACCESS_SEND, but only the creating process can ever have ACCESS_RECEIVE. When the creating process releases its access to the pipe, the pipe goes dead and all the other processes will instantly lose ACCESS_SEND.

Every process has ACCESS_EXECUTE to the segment which contains its machine code. ACCESS_EXECUTE and ACCESS_WRITE are mutually exclusive, so code segments are read-only whether they are owned by one process or many.

Each new process starts up with an input queue providing access to whatever resources the launching process has chosen to share with it.

typedef void (*entrypoint_t)(pipe_t input);

The entrypoint function cannot return since it lives at the base of the thread stack. When it’s done with its work it should call exit, passing in whichever object represents its output. If something goes horribly wrong, it can bail out on the error path instead.

void exit(object_t);
void abort(object_t);

If you have loaded or generated some code and you want to execute it, you can acquire execute permission for a segment. Once executable, a segment cannot be made writable again; you must release and recreate it if you want to change it.

void extend(segment_t);

An existing process may launch a new one, specifying an entrypoint and a pair of completion pipes that will be notified when the process terminates. The entrypoint MUST be located inside a segment with ACCESS_EXECUTE. The launch function returns the ACCESS_SEND end of a pipe representing the new process’ main input queue. The out pipe will receive the process’ final output object when it exits; if it aborts, the err pipe will get the report instead.

pipe_t launch(entrypoint_t, size_t queue_count, pipe_t out, pipe_t err);

August 18, 2015

Camping in the desert

Floodland 2015 is over.

I hear it was a success, which is great. People had a good time and it was an authentically old-school-Burning-Man-like experience. Sounds like people want to come back and do it again next year, and have ideas and enthusiasm for projects they’d like to try.

I spent nearly all of my time during the event working, stressing, or trying with limited success to recover from working and stressing, so I didn’t really get to participate, which was not so great.

We had unreasonably hot weather on Thursday, which delayed setup, and we had an unbelievably intense windstorm on Friday, which knocked everything down and kept everyone huddled up inside vehicles and the sturdier tents. I’ve been out to the site on five occasions now and this was by far the most challenging weather.

We got things put back together on Saturday and people apparently had a great time, though I had already wiped myself out and missed it all. Oh, well. We will do better next year.

August 3, 2015

Comparison of programming fonts

A convenient table of programming fonts showing examples in a compact form allowing easy comparison.

July 30, 2015

Trindle kernel interface exploration

A computer’s fundamental resources are blocks of memory, interfaces to other pieces of hardware, and (for a portable device) a supply of battery power. An operating system’s fundamental job is to allow a computer to run more than one program at a time by dividing those resources among them in some fashion consistent with the priorities of the machine’s owner. The design of an OS kernel therefore begins with its mechanisms for allocating resources to specific programs and the interface through which it allows programs to manipulate them.

The fundamental tool of permission management is the MMU, and the MMU’s finest granularity is the 4K page, so we’ll give each system object a unique page-aligned address.
typedef void *object_t;
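Since every object address is page-aligned, the low 12 bits of a valid object_t are always zero, which makes validity checks (or tag bits) cheap. A user-space stand-in, with names of my own invention:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

enum { PAGE_SIZE = 4096 };
typedef void *object_t;

/* User-space stand-in: hand out page-aligned addresses, as the kernel
 * would, so every object_t has zeroed low bits. Names are illustrative. */
static object_t make_object(size_t pages)
{
    /* aligned_alloc requires size to be a multiple of the alignment. */
    return aligned_alloc(PAGE_SIZE, pages * PAGE_SIZE);
}

/* A kernel can reject any address with nonzero low bits outright,
 * before ever touching its object tables. */
static int is_valid_object(object_t obj)
{
    return obj != NULL && ((uintptr_t)obj & (PAGE_SIZE - 1)) == 0;
}
```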

The operations a process may apply to an object are defined by an array of permission bits; a process may inspect an object address to find out what it can do.
enum permission_t {
  PERMISSION_READ = 1, // can supply data
  PERMISSION_WRITE = 2, // can receive data
  PERMISSION_EXECUTE = 4, // contains machine code
  PERMISSION_DIRECT = 8, // backed by physical storage
  PERMISSION_PIPE = 16 // contains a transfer queue
};
permission_t inspect(object_t obj);

A buffer is a contiguous group of writable pages beginning at its identifying address, which will have PERMISSION_READ|PERMISSION_WRITE|PERMISSION_DIRECT.
typedef object_t buffer_t;

Allocate a range of address space as a new buffer. Each page will be mapped against the zerofill page and reassigned to physical storage when it receives its first write.
buffer_t allocate(size_t page_count);

Truncate the buffer down to some number of pages, splitting the remaining pages off as a new, independent buffer and returning its address.
buffer_t split(buffer_t buf, size_t page_count);

Move the contents of these buffers into a new, contiguous buffer, releasing the original buffers in the process.
buffer_t join(buffer_t head, buffer_t tail);

Copy a range of pages into a new buffer; the source address must be page-aligned but may come from any region where the process has read permission.
buffer_t copy(void *source, size_t page_count);

A shared resource is an immutable buffer, which means that it can be owned by more than one process at a time. It offers PERMISSION_READ|PERMISSION_DIRECT.
typedef object_t resource_t;

Create a new shared resource by cloning the contents of an existing mutable buffer.
resource_t share(buffer_t data);

One process may communicate an object to another by transmitting it through a pipe. Unless the object is a shared resource, this transfers ownership from the sender to the receiver, and the object is removed from the sender’s access space. The pipe contains a queue, allowing communication to happen asynchronously.

An output is the end of the pipe you send objects into. It has PERMISSION_PIPE|PERMISSION_WRITE.
typedef object_t output_t;

Transmit a list of objects, one at a time, until they are all sent or the pipe’s queue has filled up. Returns the number of objects which were successfully transmitted.
size_t transmit(output_t dest, const object_t objs[], size_t count);

An input is the end of the pipe you receive objects from. It offers PERMISSION_PIPE|PERMISSION_READ.
typedef object_t input_t;

Receive objects from the pipe until its queue empties or the destination array fills up, then return the number of objects which were received, if any.
size_t receive(input_t src, object_t objs[], size_t array_size);

Allocate a new pipe able to queue up a certain number of elements, populating the variables pointed at by in and out with the pipe’s endpoint objects.
void pipe(size_t entries, input_t *in, output_t *out);

Close a pipe by releasing its input or output. The object representing the pipe’s other end will lose PERMISSION_READ or PERMISSION_WRITE and retain only PERMISSION_PIPE.
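The queue semantics of transmit and receive can also be sketched in user space: a fixed-capacity ring buffer where transmit stops when the queue fills and receive stops when it empties, each returning the count actually moved. Again, the `mock_` names and the four-entry capacity are illustrative assumptions, not the kernel's implementation.

```c
#include <stddef.h>

typedef void *object_t;

#define QUEUE_CAP 4  /* illustrative queue depth */

/* User-space model of a pipe's transfer queue. */
typedef struct {
    object_t slots[QUEUE_CAP];
    size_t head, count;
} mock_pipe_t;

/* Send objects one at a time until all are sent or the queue fills;
 * return how many were actually transmitted. */
static size_t mock_transmit(mock_pipe_t *p, const object_t objs[], size_t n) {
    size_t sent = 0;
    while (sent < n && p->count < QUEUE_CAP) {
        p->slots[(p->head + p->count) % QUEUE_CAP] = objs[sent];
        p->count++;
        sent++;
    }
    return sent;
}

/* Receive objects until the queue empties or the array fills;
 * return how many were actually received. */
static size_t mock_receive(mock_pipe_t *p, object_t objs[], size_t n) {
    size_t got = 0;
    while (got < n && p->count > 0) {
        objs[got++] = p->slots[p->head];
        p->head = (p->head + 1) % QUEUE_CAP;
        p->count--;
    }
    return got;
}
```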

An executable is a shared resource which contains machine code. Since it is an immutable shared resource, it can be owned by more than one process at a time. It offers PERMISSION_EXECUTE|PERMISSION_DIRECT.
typedef object_t executable_t;

Create a new executable by cloning the contents of an existing shared resource.
executable_t prepare_code(resource_t text);

Create a new process in suspended state, configure its saved instruction pointer and stack pointer, and assign it ownership of some objects (thereby releasing all but the shared resources, as usual). The process will begin executing when the scheduler next gets around to granting it a timeslice and “resuming” it.
void launch(object_t bundle[], size_t bundle_count, void *entrypoint, void *stack);

Delete the current process and release all of its resources.
void exit();

Suspend the process until an empty input starts to fill, a blocked output starts to drain, a pipe closes, or a certain number of milliseconds have elapsed. If woken by an event involving a pipe, the call will return the relevant input or output, otherwise it will return zero.
object_t sleep(uint64_t milliseconds);

I’m starting to lose track of all the Trindle draft documents I’ve written, rewritten, replaced, and abandoned, scattered as they are across three laptops, my home desktop, and a remote server. This is either my fifth or sixth attempt at a concrete design for the system interface, but it’s the first time I’ve made it all the way through without discovering a fatal flaw, and that feels like progress.

July 27, 2015

This simple structure, made of rope and 2x4s, looks like a cozy little minimalist Burning Man hangout: it supports three hammocks and can be covered in tarps for shade. Author quotes $42.60 in materials.

tmp_2933-IMG_20150727_100219978222122

July 21, 2015

Chrysler vehicles vulnerable to remote exploit

I’ve been joking for years that I refuse to drive a car that has a computer in it, because I’m a software engineer and am therefore unable to trust any system other software engineers have ever touched.

Except I’m not entirely joking. I really like my old-fashioned, non-upgradeable, non-networked, CAN-bus-free classic Range Rover, and part of the reason I am happy to keep on paying its hefty repair and maintenance bills is that I don’t have to worry that its 20-year-old electrical systems are vulnerable to control by malicious external agents like hackers or federal agents:

The Jeep’s strange behavior wasn’t entirely unexpected. I’d come to St. Louis to be Miller and Valasek’s digital crash-test dummy, a willing subject on whom they could test the car-hacking research they’d been doing over the past year. The result of their work was a hacking technique—what the security industry calls a zero-day exploit—that can target Jeep Cherokees and give the attacker wireless control, via the Internet, to any of thousands of vehicles. Their code is an automaker’s nightmare: software that lets hackers send commands through the Jeep’s entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country.

Motorcycles are even more trustworthy; most of them don’t contain so much as a single microcontroller.

July 20, 2015

Yosemite backpacking

I’m back in Seattle after a week in California. The backpacking trip went well and I am really glad I went. The group was a little smaller than average, but even a small slice of my very large family adds up to a good-sized crowd. Still, it was funny that my mother and I were the only people present who had actually participated in the notorious Disaster Hike – for everyone else it was just a pretty loop among some alpine lakes.

The beginning of the hike was a bit stiffer than we’d anticipated; I’m not sure what the trail builders were thinking, but they had us do a lot of climbing and descending without encountering any notable vista or any other apparent justification. Once we reached Crescent Lake, however, the loop was steady and smooth.

Mom, AJ, Abigail, and I all scrambled up Buena Vista Peak as the trail crossed its shoulder, yielding a glorious panoramic view of the southern park, a perspective I’ve never seen before. We camped that night by Buena Vista Lake, peaceful and quiet, with a beautiful glowing sunset rolling across the granite; I’ve never seen waves on a lake reflecting quite so distinctly orange and blue.

We had planned to find an unmaintained cross-country trail leading from the main trail past Hart Lakes over to Ostrander Lake, but after looking at the terrain from atop Buena Vista, decided it would be easier and more fun to bushwhack across Horse Ridge instead. This started out as a ridiculously pleasant walk through a spacious forest, but once we reached the crest of Horse Ridge we discovered that the far side is a precipice, not shown on our maps. With a bit of exploring we found a steep but workable ravine cutting through the sheer face, however, and after a little work we got everyone down and across to the Ostrander Lake bowl.

Oh, such a lovely day that was, and so satisfying to dip our feet in the water!

AJ and I weathered the trip with ease; you know you’ve got something good when the relationship-maintenance work flows so easily and automatically that it doesn’t even feel like work.

July 12, 2015

I’m on my way to California for a week’s backpacking in Yosemite with my family. It is the 25th anniversary of the “disaster hike” notorious in family lore, so we’re going to revisit the trail and see if we can do it a little more successfully this time. I will therefore be completely unable to communicate with anyone not in the immediate backpacking group until some time late Friday.

July 9, 2015

I’ve had a concept for an operating system bouncing around my head for a decade and a half or so. With the exception of a general affinity for exokernels, the structure I’m thinking about now bears no resemblance to anything I considered back in the ’90s, but on the basis of arbitrary convenience I’m going to say that this Ship of Theseus is, in fact, still the same boat. The current incarnation lives in a series of C header files named “trindle.h”, “trindle2.h”, “trindle3.h”, und so weiter, documenting the kernel API, which is the only part that actually exists.

I have at various times written all of the individual components necessary for an operating system, though if one were to imagine them all glommed together they would form one unholy mongrel with no particular reason to exist. The Trindle concept is rather an attempt to answer the same sorts of questions I was exploring with the Radian language. Now that all of the interesting problems in computing have to do with asynchrony and distributed processing, immutability has become a prominent and valuable tool: but “immutability” is really just a way of describing the way objects look when your tools require you to be explicit about the whens and hows of the changes you are making to observable state.

Trindle is therefore not a Unix: it is a single-user, single-address-space, capability-based, filesystem-driven architecture which may well end up offering a POSIX API but only as a secondary concern should it happen to be practical. It does, however, retain all the familiar notions of independent processes, protected memory, virtual memory, and the stdin/stdout/argv/envp conglomeration necessary for operating C programs.

The capability system works by attaching a list of inodes to each process. A process may read from those objects and no others; it doesn’t matter what sort of path-mangling shenanigans you get up to or what other subprocesses you launch, there is simply no way for a process to refer to any file not granted by its upstream launching process.

To be more precise, permission to open a read stream from an inode is a capability attached to some other stream. A stream is an interface to one end of a pipe connecting two processes; the upstream process can send data through the pipe, and can also attach permission to access some object it knows about.

A process may generate a new pipe either by forking or by loading an executable image. This pipe is itself a new file, which can be opened and read, or can be sent down an output stream so that some downstream process can read from it.

The only difference between an executable and a file is that the executable does not yet have an input stream, while the file does. To be solved: memoization and lazy computation of file contents.

Since processes cannot alter existing files, merely read from them, how do you actually get any work done? I’m imagining that the shell would be a process which reads from various processes representing user-interface devices and then pipes the filesystem root through various programs as the user requests. Changes would be made by generating a new directory tree as appropriate.

But that seems like a lot of copying and replacement. I think this system needs some sort of “log” object, preferably one which can merge writes from multiple inputs. A directory could thus be represented by a series of mutations, so that inserting a new file or deleting an old one just involves appending a log entry recording the fact, with periodic writes of a new summary of the current state.
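A directory-as-log could be sketched like this: an append-only list of insert/delete entries, replayed to answer lookups, with the most recent entry for a name winning. Everything here is illustrative guesswork at the idea, not a committed design; the structure names are mine, and a real version would merge logs from multiple writers and fold old entries into periodic summary snapshots.

```c
#include <stddef.h>
#include <string.h>

#define LOG_CAP  32  /* illustrative fixed capacity */
#define NAME_LEN 32

typedef enum { LOG_INSERT, LOG_DELETE } log_op_t;

typedef struct {
    log_op_t op;
    char name[NAME_LEN];
} log_entry_t;

/* A directory as an append-only mutation log. */
typedef struct {
    log_entry_t entries[LOG_CAP];
    size_t count;
} dir_log_t;

/* Record an insertion or deletion by appending a log entry. */
static void dir_append(dir_log_t *d, log_op_t op, const char *name) {
    if (d->count == LOG_CAP) return;  /* a real log would snapshot here */
    d->entries[d->count].op = op;
    strncpy(d->entries[d->count].name, name, NAME_LEN - 1);
    d->entries[d->count].name[NAME_LEN - 1] = '\0';
    d->count++;
}

/* Replay the log: the latest entry mentioning a name decides
 * whether it currently exists. */
static int dir_contains(const dir_log_t *d, const char *name) {
    int present = 0;
    for (size_t i = 0; i < d->count; i++)
        if (strcmp(d->entries[i].name, name) == 0)
            present = (d->entries[i].op == LOG_INSERT);
    return present;
}
```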

The equivalent of a user’s home directory would then be something like an activity log, recording the various files the user has created, with an index of their names. The user can pass these files through various programs in order to generate new files, which can either live alongside the originals under new names, or which can replace the originals by redefining their names.

Since the only way to gain access to a file is to be given it by the upstream process, the user is therefore in complete control over which programs get to see which files. If you don’t want a program to have access to your contact list, you simply don’t give it a pointer to your contact list, and that’s that – there is no mechanism by which it can name that file, ask for access to it, or raise its privilege level in order to read it. Nor can any program alter your files for you; programs merely generate files, and it is up to you, through the shell, to put the results where you want them.

I’m not sure whether I will ever actually build this thing, but it’s been an interesting concept to chew on while riding the bus or lying in bed unable to sleep.

July 8, 2015

High tech tuxedo shirt for musicians

A startup called Coregami has introduced a tuxedo shirt for symphony musicians using modern, wicking, machine-washable four-way-stretch material and a raglan sleeve for less restriction of shoulder movement. I would wear one of these, and $120 is a totally reasonable price.

July 7, 2015

Happy weekend

I spent Fourth of July weekend at Goodness, a 150-person campout on the Green River. It’s a happy, relaxed event with big trees, lots of kids, potluck dinners, swimming, and (of course) dancing all night under the stars. My burner friends have this party logistics business dialled in, and the festival flowed smoothly as the river’s current. Load-out and MOOP check on Sunday went so quickly that I felt like complaining that there was not enough work to do!

I had to restrain my ego somewhat because we used the PA system I bought a few weeks ago and it sounded ABSOLUTELY AWESOME. I mean, WOW. The sound was gorgeous – bigger, louder, and cleaner than I had expected – and I just wanted to bounce around with glee. So much fun, and I could not stop dancing. There is nothing else in the world like the luxurious glory of dancing til dawn in a wide-open grassy meadow with a couple dozen of your friends as the music rolls along like some enormous machine and the sun starts to peek up through the trees.

We’ll be bringing an even more impressive system out to Floodland next month, once Danne and Erik finish building their Danley-style tapped horn subs. I’m told to expect purple glitter sparkles. Perfect.

June 25, 2015

So much baseball

I had no idea baseball teams played so many games. I’m in SF for the week, and the office I’m working in overlooks the stadium parking lot. Every single day, I’ve watched it fill up, crowds streaming across the bridge to the stadium – in the middle of the afternoon on a work day, at that. Is this normal? Do baseball teams really play games pretty much every day? I had imagined it was like once a week or something.

June 23, 2015

Musing on the development of the web

I learned HTML some twenty years ago and had a good few years of fun with the web, but recoiled from Javascript in horror and CSS in frustration. I eventually gave up on the server side as well, for political reasons: the strength of the Internet was in its gift of decentralized communication, but the web is all about big central servers controlled by singular institutions. I came to feel that investing time in such projects was actually counterproductive, in terms of helping to create the kind of world I want to live in.

That was a long time ago now, over a decade at least, and I am periodically shocked by glimpses into a world that has continued developing broadly and quickly, and which no longer much resembles any of the stuff I used to work with. I suppose the old mainframe hackers must have felt like this, as they watched the microcomputers take over.

The first of today’s jolts was a thread on Hacker News about a new standard for virtualization containers. I understand what virtual machines are and some of the reasons why people use them, and I know a fair bit about the low-level mechanics that make them work, but it’s clear that web people have taken the whole thing far beyond all that because I just can’t wrap my head around containers. I am ignorant of the problem they are designed to solve, and so I can’t really grasp – from the descriptions – what it is they are intended to do, or why that would be useful.

The second was a presentation about a piece of security analysis software, which started with a series of extremely startling claims about the product’s capabilities. I was running ahead with what I know about debuggers and low-level machine operations trying to figure out how they had accomplished these things… but of course the reason they can detect these things is that they’re not analyzing what I would call “applications” at all, but rather web services, and web services written in Java or .NET at that. And suddenly the whole thing seemed trivial, because of course you can analyze anything you want when you can play god with the virtual machine! Which is not to diminish the engineering work they did to make it happen, just to reduce it from the domain of magic. It seemed clear, at that moment, that I must be thinking about software from a sufficiently different perspective from that of their intended audience that they could reasonably expect people to understand the implied limits on their description as they apply to web programming.

I’m not really unhappy about this state of affairs, since I’m still not interested in working on web software, and I’m still not having trouble finding work in the field of what I still, with increasing quaintness, think of as “normal software development”. But it is clear that the world around me is changing, and I’m not seeing anything like a return to the kind of robust, resilient, democratic distributed architectures I want for the future of the Internet. It makes me wonder how long I can keep on holding out, and how long it will take me to catch up if the day comes that I have to hold my nose and jump in.

Building the ultimate solar system

An exploration of planetary science: working out a design for a system containing the greatest possible number of habitable planets and moons.

Related: what is the largest possible inhabitable world?

June 18, 2015

Chaotic Noise practice session

IMG_20150618_163601

They’re just, you know, playin’ away in the back yard, getting ready for Honkfest. Perks of having a bandmember as a housemate.

June 15, 2015

Whistler/Blackcomb is going to try to preserve the Horstman Glacier by feeding it with artificial snow.

Yeehaw, climate change.

In other news, the flotilla of “kayaktivists” has been doing a pretty good job at keeping the Polar Pioneer bottled up in the Puget Sound. The GPS track shows a steady cruise northward but it’s been going in circles off Bainbridge for a few hours now.

June 8, 2015

I bought a new stereo

IMG_20150608_184325
I think I need a bigger entertainment center.

June 6, 2015

Ceiling fan in my bedroom

IMG_20150606_170644

It got warm, so I decided it was time to install a ceiling fan. It’s a nice way to take advantage of the post-remodel bedroom ceiling height.
