In this paper we present a simple colorization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise: neighboring pixels in space-time that have similar intensities should have similar colors. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few color scribbles, and the indicated colors are automatically propagated in both space and time to produce a fully colorized image or sequence. We demonstrate that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input.
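This reads like the abstract of Levin, Lischinski & Weiss's "Colorization Using Optimization" (SIGGRAPH 2004). The quadratic cost in question penalizes each pixel's color for differing from the intensity-weighted average of its neighbors' colors, roughly:

```latex
J(U) = \sum_{\mathbf{r}} \Bigl( U(\mathbf{r}) - \sum_{\mathbf{s} \in N(\mathbf{r})} w_{\mathbf{r}\mathbf{s}}\, U(\mathbf{s}) \Bigr)^2,
\qquad
w_{\mathbf{r}\mathbf{s}} \propto e^{-(Y(\mathbf{r}) - Y(\mathbf{s}))^2 / 2\sigma_{\mathbf{r}}^2}
```

where U is a chrominance channel, Y is intensity, N(r) is the space-time neighborhood of pixel r, and the weights are normalized to sum to one over each neighborhood.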
February 8, 2016
February 1, 2016
January 29, 2016
I’ve just discovered a language called Sisal, developed in the early ’80s, which was based on the same ideas that inspired Radian. It was quickly forgotten, as most languages are, but a group of students resurrected it in 2010 and built a fine-grained parallelism backend for it, based on pthreads. Which was, again, the whole point of Radian. It looks like they pulled it off about a year before I got Radian doing the same tricks, but the project seems to have lapsed back into obscurity.
January 24, 2016
bookmark: useful notes on writing makefiles.
January 10, 2016
A straightforward and well-presented article about the hopelessness of the currently-dominant computing model with some sensible suggestions for a better way.
January 4, 2016
I had a startling thought on the bus this morning which led to a pretty exciting idea. What if I combined the work I’ve been doing on lightweight unikernels with all the work I did on Radian, turning Radian into a tool for building lightweight services?
As a new language, though, it suffers from the unavoidable defect of being new: there is no language community, there are no libraries, there are no example programs, there are no tutorials, there is barely any documentation. Not even if I made it my full-time job and worked the kind of hours I put in when I was doing Object Basic could I make up for that lack myself.
As a tool, it also has significant limitations. The constraints of its immutable dataflow semantics make it very difficult to interact with code written in other, less-constrained languages. That makes it very difficult to interact with the operating system and all of its libraries. Its parallel execution model and its dataflow semantics make it very difficult to debug using a conventional debugger: the traditional mode of stepping through line by line and watching the variables change is basically impossible, because there is nothing in the compiled code which cleanly matches up with the original source lines or variable names.
Finally, and worst of all, I was never able to find a problem for which Radian was a uniquely powerful solution. I looked into datamining, and then into numerics, but in each case I got bogged down in the immense amount of work necessary to get the core libraries up to the point that someone not already dedicated to the language would care to use it.
So, the new idea.
Unikernel-based services are still quite new, and *lightweight* unikernel services are still quite a cutting-edge activity. I’ve been having a great time poking around in the gritty guts of the x86 architecture, but that’s because it reminds me of coding in the wild west of the late ’80s and early ’90s; for most developers, that low-level grit is quite a barrier to entry!
What if I turned Radian into a dedicated tool for building standalone microservices? The output would not be an executable program, but a bootable disk image you could start up inside a VM.
This would neatly turn all of Radian’s disadvantages into strengths – or inevitable constraints of the target environment, at least. Can’t interact with system libraries? Well, isolation is the whole point of a lightweight unikernel service. Can’t step through with a debugger? Who cares, it’s probably running in the cloud anyway, so you debug it via log files – which is pretty much exactly the way I used to debug Radian programs anyway. Can’t link your favorite library into Radian? Doesn’t matter, just boot it up in a separate microservice, then make them sling packets at each other!
Radian’s strengths, meanwhile, would remain strengths. It’s all about slinging data between parallel asynchronous processes, and what else are microservices? Lightweight tasks are built right into the language, as is a synchronization mechanism which would fit perfectly on top of a socket-based communication mechanism.
Furthermore, there’s basically nothing else out there that does this. The only comparable system is MirageOS, which basically wraps OCaml and a bunch of unix libraries into a bootable VM. And that’s great, but it’s OCaml, and it’s a *much* heavier solution than I’d be presenting.
Radian, under this vision, would become shell scripting for the cloud: letting anyone with the tiniest trace of programming knowledge go from idea to running microservice in a matter of seconds. I am undoubtedly feeling unreasonably excited about this idea right now, but if I could accomplish this, it seems like it might be the most valuable thing I ever did with my career.
So. What would I have to do in order to bring this idea into reality? The Radian compiler wouldn’t need any changes; all the work would be on the runtime side. I’d replace the fleet kernel’s C runtime with the Radian runtime and rework the Radian allocator and threads accordingly. I’d also have to write a little bit of code for emitting a disk image and for invoking qemu on it – and I’d certainly need to write some better documentation.
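The qemu step, at least, could be nearly a one-liner (flags illustrative, assuming a raw bootable image):

```shell
# boot the generated service image headlessly; all flags are illustrative
qemu-system-x86_64 -m 64 -nographic \
    -drive format=raw,file=service.img
```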
As far as the Radian standard library goes, I’d scrap the fancy but unfinished bignums package and bang out something simple and reliable based on good ol’ int64 and float64. The major piece of new work I’d have to do would be the development of a socket interface. Since I’d be going for an all-in-one runtime library, though, I could scrap the clunky ffi workarounds that used to make this sort of thing so laborious, and just build it right in! Call straight into the kernel, why not.
At that point… we’ve got sockets, tools for working with data going in and out of sockets, tools for managing complex asynchronous processes involving multiple sockets, and a tool that turns source code into a running VM in a snap of the fingers. What’s next? Time to play.
This would be an insanely formidable task if I were thinking about starting it from scratch, but I’ve already done all the hard parts! What I’ve just described sounds more like… assembling pieces.
Wow. Could I actually make all those years of effort pay off? I think I might try it.
January 1, 2016
The Website Obesity Crisis: starts off pointing out just how absurdly large websites have become, then continues – in remarkably straightforward, plain-spoken terms – to highlight everything I hate about modern web design. Such a satisfying read.
December 31, 2015
– router sets up first VM, exposes libvirtd proxy, launches boot process; implements firewall policies between fleet VMs
– shell implementation (bash fork? just posix sh? something new?) uses DNS-SD to locate libvirtd proxy, implements process management in terms of VMs
then, port/compile the programs I want to use as unikernel VM images.
I’ve made significant progress toward the completion of a new, non-POSIX-oriented unikernel library in the form of a baremetal C99 runtime, but even as I’m cruising down the home stretch I think I will put the project aside. Sure, my unikernel library would be tighter and lighter than the rest, if I finished it, but it really doesn’t add all that much, and working on it means I haven’t been making any progress on the stuff that is truly unique to my vision.
IncludeOS is marketed as a cloud-service tool, but when I look at what they’re actually building, it looks very much like what I’m doing. Their project is C++, mine is C; their architecture is a bit more full-featured, where mine is deliberately minimal… but it’s still basically the same idea, and I generally agree with all of their architectural choices. So… why not join forces? I can always dust off my fleet-kernel later, if there’s some problem it would solve better; one of the great things about this VM/unikernel architecture is that you can have as many kernel architectures as you like running simultaneously.
The part I really need to build, in order to get my vision for fleet off the ground, is the router/shell system that will make unikernel-based programs useful as everyday workstation tools. I’ve grown increasingly antsy as development of my unikernel library has dragged on, preventing me from getting to the shell; it’s time to refocus and get on with it.
Maybe I can contribute some of my stuff to IncludeOS along the way, too.
December 14, 2015
Useful information about electric bicycle commuting in Seattle, including links to relevant RCW statutes and recommendations for equipment.
December 9, 2015
I miss the web. I didn’t expect it to be over so quickly. What’s next, I wonder? Should we try to build another one, or is the concept fundamentally flawed?
November 25, 2015
It has come to my attention that people are woefully uninformed about certain episodes in the Thanksgiving narrative. For example, almost no one mentions the part where Squanto threatens to release a bioweapon buried under Plymouth Rock that will bring about the apocalypse.
Mr. S, an ordinary American, is minding his own business outside his East Coast home when he is suddenly abducted by short large-headed creatures like none he has ever seen before. They bring him to their ship and voyage across unimaginable distances to an alien world both grander and more horrible than he could imagine. The aliens have godlike technologies, but their society is dystopian and hivelike. Enslaved at first, then displayed as a curiosity, he finally wins his freedom through pluck and intelligence. Despite the luxuries he enjoys in his new life, he longs for his homeworld.
November 14, 2015
November 13, 2015
ne is a free (GPL’d) text editor based on the POSIX standard that runs (we hope) on almost any UN*X machine. ne is easy to use for the beginner, but powerful and fully configurable for the wizard, and most sparing in its resource usage. If you have the resources and the patience to use emacs or the right mental twist to use vi then probably ne is not for you. However, if you need an editor that:
- compiles without effort everywhere (or almost everywhere), is packaged for all Linux distributions, and ported to other operating systems (such as Mac OS X);
- is fast, small, powerful and simple to use;
- has standard keystrokes (e.g., copy is CTRL-C);
- uses little bandwidth, so it is ideal for email, editing through phone line (or slow GSM/GPRS/UMTS) connections;
- has a very compact internal text representation, so you can easily load and modify very large files
… then you should try ne.
November 12, 2015
From IEEE Spectrum, Bosch’s Giant Robot Can Punch Weeds To Death:
At IROS last month, researchers from a Bosch startup called Deepfield Robotics presented a paper on “Vision-Based High-Speed Manipulation for Robotic Ultra-Precise Weed Control,” which has like four distinct exciting-sounding phrases in it. We wanted to write about it immediately, but Deepfield asked us to hold off a bit until their fancy new website went live, which it now has. This means that we can show you video of their enormous agricultural robot that can autonomously detect and physically obliterate individual weeds in a tenth of a second.
November 6, 2015
Oh yeah, this project to marry unikernels with the Qubes system is pretty much exactly what I have been going for with Fleet.
October 24, 2015
A few solid hours hacking on fleet last night got me to a basic C “hello world”. The C library has everything from string.h and the simple character I/O functions from stdio.h, and the kernel has just enough of a driver interface to make simple file operations work. I’m using the legacy serial ports for now, since the drivers are trivial; stdin maps to COM1, stdout is COM2, and stderr writes to the host console via port E9.
This is the part where it starts to get really interesting. If you handwave past the fact that most of the C library isn’t there yet, it’s now possible to compile a couple of ordinary C programs, link them against the fleet library, run them in VMs, and pipe data between them in grand old Unix shell style. It’s all very normal – except that these processes are just as comprehensively isolated from each other as if they were running on separate physical machines.
October 21, 2015
The fleet project is a lot of fun, combining a shiny new idea with an excuse to take a crack at a lot of classic problems.
The next layer after the startup code should be something to do with drivers and the low-level kernel apparatus, but it all felt a bit vague, so I decided to start with the C standard library interface and work my way down, letting that drive the rest of the kernel architecture.
There are dozens of free C library implementations available, but I have not been able to find one that will work for my project. I don’t want POSIX support, don’t need hardware portability, and won’t have a Unix-style system call interface underneath. And while I’m building this in the style of an embedded firmware project, it’s actually designed to run on a purely virtual machine, so I don’t need or want a lot of code dealing with legacy PC hardware.
Oh, well, I’m writing my own C library. Of course I’ll fill it in with a lot of existing components, but this architecture is apparently weird enough that the framework is up to me.
I did write the string library myself, though, because I thought it would be fun. There sure is a lot of weirdness in there – it’s been 23 years since I learned C, and I can’t say I had ever noticed the existence of strcoll – but now I’ve written ’em and built test suites too.
October 19, 2015
I factored the lowest-level portion of the fleet code out as a standalone library which I’ve named ‘startc’ and posted on github. I also announced it on reddit.com/r/programming and on hackernews. Of course it feels a wee bit nerve-racking to post something experimental like this for the world to examine, but it’s a good exercise as it forces me to get all the loose ends tied up and to really think carefully about the interfaces between modules. So far the reception has been generally positive, which is nice. I have no idea whether anyone will actually use the library, but perhaps someone will get through the early stages of a similar project more quickly by looking at its source code, and that would make me feel good.
October 16, 2015
October 15, 2015
According to The Death Clock, I have about a billion seconds left.
That… seems reasonable. (A billion seconds is about 31.7 years.)
October 12, 2015
I spent four hours hanging a TV on the wall yesterday. Yes, really. I thought I’d simplify the project and save myself a bunch of work by purchasing a wall-mount swivel arm for the TV instead of building what I wanted from scratch.
As soon as I got started, it was clear that the wall-mount was designed to be mounted on a solid wood or brick wall (seriously? how many of those do you find in the USA?), so I started with a trip to the hardware store for a plank and some lag screws. After some careful measuring and a lot of exploratory drilling, I found the right spot and bolted the anchor panel firmly into the studs.
Next, I discovered that the wall-mount was a little bit too small for the TV. What!? I thought I measured it before I ordered it! Well… the wall mount listed a diagonal measurement range which includes the size of my TV, and its mounting bracket style is the same as that of the bracket I formerly used to attach the TV to the entertainment center, but it was designed for TVs with square bolt patterns and it just doesn’t spread out enough.
So… back to the hardware store, for another handful of bolts and some aluminum bars. I cut and drilled until I had a workable pair of adapter brackets.
Finally, I bolted the adapter brackets onto the TV, bolted the swivel-arm brackets onto the adapter brackets, screwed the swivel-arm brackets onto the arm head, bolted the swivel-arm base onto the anchor panel, which I’d previously bolted onto the wall.
Sure saved myself a lot of work there!
October 6, 2015
The hypervisor is the new kernel.
The virtual machine is the new process.
The process is the new thread.
Virtual PCI devices are the new POSIX.
Shared mutable state does not scale.
October 1, 2015
I spend a lot of my computer time editing text files, and so I’ve thought a lot about how one might go about that in a system like Fleet. One approach would pack all possible editing services into a single, monolithic IDE, which could run within a single VM. It would mount the disk containing the files you want to work on, present a file browser, and let you edit away to your heart’s content.
There’s nothing wrong with that approach, and it wouldn’t be hard to build out of existing components, but it doesn’t really satisfy my sense of elegance. I’d rather find a way to plug my editing tools together like Lego bricks.
It’d be really convenient, for example, to separate the code that renders text on screen from the code that manages all the data and performs the edits. Text can be displayed in lots of different ways depending on the context (code? email? notepad? letter writing?), but the process of editing a text buffer is the same. Wouldn’t it be neat if I could write the editing engine once and just slap a bunch of different interfaces on it depending on context?
The Fleet philosophy says that every connection between components has to take the form of a wire protocol, but what kind of wire protocol would represent a text editor? That really isn’t the sort of thing client/server processes typically do!
It occurred to me, however, that unix is full of command-line apps which accept commands typed in through a serial connection, producing output as text. There is an ancient program called ‘ed’, part of Unix since the 60s, whose user interface is basically a little line-oriented command language. What if we just redefined its interface as a wire protocol? A text-editing interface program would become a bridge, with one end connected to an “edit buffer service” and the other connected to a “terminal display service”.
This would allow multiplexing: one could have an arsenal of tiny, single-purpose editing tools which do their work by sending commands to an edit-buffer service. No need to keep reimplementing the edit buffer in every tool – just send some ed commands down the wire.
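The whole conversation between a tool and the edit-buffer service could be plain old ed commands; for instance (annotations mine, not part of the stream):

```
a                    append new lines after the current line
hello
world
.                    end of input
2p                   print line 2; the service replies "world"
1s/hello/goodbye/    substitute on line 1
w                    write the buffer back to its backing store
```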
The `ed` program was designed to edit text files, but considering its command language as a wire protocol, what we’re looking at in the abstract is simply an array of text chunks. There’s no reason the actual bits on disk have to be a flat text file: one could implement a different edit-buffer service for each kind of file format, allowing one to mix and match editor interfaces and buffer services.
We can take it further. `ed` commands consist of a line reference, an identifying character, and optional parameters if the command needs them. What if we extended the line-reference syntax and used the same protocol to manipulate multidimensional data?
The syntax currently makes no use of the colon character ‘:’, so I suggest that the editor wire protocol could be extended by allowing a sequence of indexes delimited by colons:
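Perhaps something like this, where each colon descends one more dimension (the exact syntax here is pure sketch, nothing implemented):

```
12p           print line 12, as in ordinary ed
12:3p         print field 3 of line 12
12:3s/a/b/    substitute within that single field
5:2:7p        three dimensions deep, and so on
```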
One could thus write a generic table viewer which would speak this protocol, then plug it into an edit-buffer service representing a CSV spreadsheet file or an NCSA server log file. And of course there’s no reason you couldn’t continue stacking dimensions arbitrarily if you wanted an edit service backed by JSON or some other hierarchical format.
It might be worthwhile to define a read-only subset of the protocol, since some tools will be content to view data, and it would be useful to develop buffer services which present a common interface for exploring structured data even if it’s not practical to perform edits.
I couldn’t sleep the other night so I spent a few hours coding up the foundation of a kernel for this new exokernel-style operating system concept I’ve been talking about, which I’ve decided to call ‘fleet’. (Trindle was the microkernel idea, now dead.) It’s a lot of fun – it feels a lot like working on an embedded device, except the board just happens to have been designed by lunatics. I feel satisfied with my progress; the kernel boots the machine, configures memory and interrupts, spews log messages to the serial port, and enumerates the devices on the PCI bus.
Since I’m treating the PC as an embedded device dedicated to a single application, this “rump kernel” is really more like a new flavor of the C runtime library than a traditional kernel. I don’t have to worry about paging, memory protection, or user/supervisor mode switches, and most of the usual concurrency problems just disappear. An application which needed those services could link them in as libraries, but I’ll worry about that later.
Once upon a time, when the world was young and people were still trying to figure out what you could do with a computer network, people tried to build abstractions that would represent remote services as though they were local ones. “Remote procedure call” was the concept of the day, and this really took off in the early days of OOP: the idea was that you’d have local proxy objects which transparently communicated with remote ones, and you’d just call methods and get property values and everything would be shuttled back and forth automatically.
This just plain doesn’t work, because the semantics are totally different. You simply can’t make the fundamental constraints of concurrency, latency, and asynchrony disappear just by throwing a lot of threads around.
Modern interfaces are focused not on procedure calls, but on data blobs. Instead of making lots of granular, modal, stateful requests, machines communicate by serializing big blobs of data and streaming them back and forth at each other. This emphasizes bandwidth over latency, and focusing on large transactions rather than small interactions simplifies the problem of concurrent changes to remote state.
My plan is to take this idea out of the network and apply it inside a single PC. The rise of multicore computing has demonstrated that the traditional approaches don’t even scale within a single machine, once that machine is full of asynchronous processes competing for shared resources! In the ‘fleet’ world, rather than trying to represent remote resources with local proxies, we’ll represent local resources as though they were remote. There will be no DLLs and no system calls: the system API will be a folder full of wire protocol and data format specifications.
This solves the problem of network transparency from the opposite direction: since programs will already be communicating with local services through some network datastream interface, remote services will look exactly the same, except for the higher latency and lower reliability.
I believe that this approach will substantially improve the security picture, since the absence of any shared memory or common filesystem limits the damage a single program can do to the rest of the machine should it become compromised. Hypervisors seem to be holding up well in practice. Of course there’s nothing which would prevent a single ‘fleet’ process from spawning its own subprocesses and reintroducing all those concerns – the fleet shell would be perfectly happy to run linux as a subprocess, for that matter – but it’ll be easier to use the hypervisor interface and spawn “sub”-processes as independent virtual machines.
Requiring each program to include drivers for every possible hardware device would be madness, and slow madness since device emulation is tricky and expensive. These programs are never going to be run on bare metal anyway, so I’m going to ignore all legacy PC devices and define the ‘fleet’ system interface as consisting solely of virtio devices. These devices all have a simple, standardized IO interface, so it should be no problem to build drivers for six or eight of them into my kernel-library. I’ll offer an efficient low-level I/O API for nonblocking DMA transfers. All the clunky, synchronous, blocking C APIs can be implemented on top of that.
Looking at this system from above, it’s clear that making this fleet of VMs do useful work is going to involve a lot of datastream routing. I’m still working on the details, but I’m thinking that each program will have to include a compiled-in manifest describing the connections it wants to make and receive and the protocols it wants to use with them. Fixed connections like ‘stdin’, ‘stdout’ can be represented as serial ports, while other traffic can be specified using IP port numbers.
I have no idea how far I’ll get with all this, but I’m back in my old stomping grounds with all this low-level hackery and having a great time at it, so I’ll probably stick with it long enough to build a proof of concept. Something that boots into a shell where you can manipulate a filesystem and pipe data between programs, with a little monitor that lets you see what all the VMs are doing – that should be fun.
September 27, 2015
Well, that was a fun weekend, out in the trees near Sedro-Woolley. This was apparently the fourth year of the Deep Playa campout and it looked to be around 300 people this time. There were interesting art projects, fun activities, decent music, and overall a happy burnery festival vibe despite the cold damp weather.
AJ and I camped out in our big truck, as is becoming usual, and while it really needs a heater, at least it’s insulated and we had a generator powering the electric blanket. We also hung a big tarp off the side of the truck and made a shaded area where we could set up the propane camp fire – and lo there was much gathering and enjoying on Saturday night, as everyone was pretty much clustered up around one fire or another.
I brought the small version of my sound system and set it up by our camp, renegade-style. Two 15″ subs and two Mackie 450 tops – it was more sound than we needed, honestly, and I had a great time rocking the neighborhood with it. I played an electroswing set on Friday afternoon, and three psytrance sets at various other times when the mood struck me. I also got to play glitch-hop on the big main stage sound system Saturday night – it was a little challenging, perhaps due to the cold, but it went well anyway and I’m glad I did it.
Tomorrow it’ll be time to unpack; tonight I’m making an early night of it.
September 24, 2015
I did a little research and the pieces of this plan are becoming clear. Virtio appears to be a totally reasonable platform abstraction API, and KVM will do the job as a hypervisor. I’ll set up an x86_64-elf gcc crosscompiler and use newlib as the C library. Each executable will have its own disk image, and exec will function by spawning a new VM and booting it with the target executable.
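The crosscompiler setup is the standard bare-metal recipe; roughly (paths and version numbers are placeholders):

```shell
# build binutils and gcc targeting freestanding x86_64 (paths illustrative)
export PREFIX="$HOME/opt/cross" TARGET=x86_64-elf
export PATH="$PREFIX/bin:$PATH"

cd binutils-build
../binutils-src/configure --target=$TARGET --prefix="$PREFIX" \
    --with-sysroot --disable-nls --disable-werror
make && make install

cd ../gcc-build
../gcc-src/configure --target=$TARGET --prefix="$PREFIX" \
    --disable-nls --enable-languages=c --without-headers
make all-gcc all-target-libgcc && make install-gcc install-target-libgcc
```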
The missing piece, so far as I can tell, is a proxy representation of the hypervisor’s management interface which can be provided to a guest OS, so that our VMs can virtualize themselves – and pass on a proxy for their own hypervisor proxy, so that subprocesses can virtualize themselves in turn, recursively. This would enable the construction of a guest-OS shell managing an array of processes which are themselves independent guest-OS machines. Current thought: define the ‘virsh’ terminal interface as a serial protocol, then write a linux-side launcher process that creates a pipe-based virtual serial device and hands it off when starting up the first guest process.
With the launcher and the multitasking shell in place, a toolchain targeting this baremetal environment, and an array of virtio device drivers in the form of static libs you can link in, the platform would be ready to go.
September 23, 2015
To simplify a bit further: I want to throw away the traditional “operating system” entirely, use the hypervisor as a process manager, use virtual device IO for IPC, and implement programs as unikernels.
I think this could all be done inside Linux, using KVM or Lguest, constructing the secure new world inside the creaky, complex old one.