Now that it is time to nail down the final database format and configuration steps for Virtuoso Cluster, we have to make a couple of small choices.

Design Objective

While fixed partitioning is good for testing, it is impractical even for measuring scalability, let alone deployment. So fixed partitioning is really a no-go. The best would be entirely dynamic partitioning, where data files split after reaching a certain size and migrated between any number of heterogeneous boxes so as to equalize pressure, like a liquid filling a container. It would have to be a sluggish, cool liquid with a high surface tension, else we would get "Brownian motion" from trying to equalize the pressure too often. But a liquid still.

Sure. Now go implement this with generic RDBMS ACID semantics and the works. Feasible. But while I am a fair-to-good geek, I don't have time for this right now.

So something in between, then. The auto-migrating files are a real problem for keeping locks, keeping roll forward tractable, handling special cases in message routing, and so on. The papers by Google and Amazon allude to this, and they stop far short of serializable transaction semantics.

So what is the design target? In practice, running a few hundred database nodes with fault tolerance. The nodes must be allowed to be of different sizes, since a one-time replacement of hundreds of PCs fits ill with the economics.

Addition and removal of nodes must happen without global downtime, but putting some partitions in read-only mode for restricted periods is OK. Assigning the percentages of the data mass to the nodes can be a DBA task, with a utility making suggestions based on measured disk cache hit rates and the like. Partitions and the makeup of the cluster must be maintainable from a single place, with no copying of config files or such; nobody can get that right, and the screw-ups from having cluster nodes disagree about some part of the partition map are so intractable you don't even want to go there.
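
To make the utility idea concrete, here is a minimal Python sketch of how such a suggestion could be computed. The per-node metrics and the weighting (capacity times disk cache hit rate) are illustrative assumptions, not what the actual utility does:

    def suggest_shares(nodes):
        """Suggest what percentage of the data mass each node should hold.
        `nodes` maps node id -> (ram_gb, disk_cache_hit_rate); weighting
        capacity by hit rate is a purely illustrative heuristic."""
        weight = {n: ram * hit for n, (ram, hit) in nodes.items()}
        total = sum(weight.values())
        return {n: round(100 * w / total, 1) for n, w in weight.items()}

    # suggest_shares({"n1": (64, 0.9), "n2": (32, 0.6)})
    # -> {"n1": 75.0, "n2": 25.0}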

So, how is this done? Divide the space of partitioning hashes into, say, 1K slices. For each slice, record the node numbers that hold that slice of the partitioning space. If there are more nodes, just divide the space into more slices. When a node newly joins the cluster, it manages no slices. Slices from other nodes can be offloaded to the new one by copying their contents. The slice holders put the slice in read-only mode and the new node just does an insert-select of the partitions it is meant to take over; no logs or locks are needed, and since everything comes in order, the insert runs 100% in processor cache. When the copy is complete, the slice comes back in read-write mode in its new location and the old holders of the slices delete theirs in the background. This is not as quick as shifting database files around but takes far less programming.
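
As a sketch of the mechanism, the following Python fragment models the slice map and the copy-based migration. The read_only, copy_rows, and drop_rows callbacks are hypothetical stand-ins for the server-side operations, not Virtuoso APIs:

    import hashlib

    N_SLICES = 1024                      # slices of the partitioning hash space

    # slice_map[i] = node holding slice i; here four nodes to begin with.
    slice_map = [i % 4 for i in range(N_SLICES)]

    def slice_of(key):
        """Slice of the partitioning space that a key hashes into."""
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        return h % N_SLICES

    def node_of(key):
        """Cluster node that currently holds the key's slice."""
        return slice_map[slice_of(key)]

    def migrate_slice(s, new_node, read_only, copy_rows, drop_rows):
        """Move slice s to new_node: freeze writes, insert-select the rows
        in order, flip the map, and let the old holder clean up later."""
        old = slice_map[s]
        read_only(s, True)               # slice goes read-only for the copy
        copy_rows(old, new_node, s)      # ordered insert-select, no logs or locks
        slice_map[s] = new_node          # the new location is now authoritative
        read_only(s, False)              # slice is read-write again
        drop_rows(old, s)                # old copy deleted in the background

The point is that the map, not the data files, is the unit of reconfiguration: a migration touches one slice at a time and leaves the rest of the cluster writable.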

To remove a node, reassign its slices and have the assignees read the data, as above. When the copy is done, the node can go offline.
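
Continuing the sketch, draining a node is just the same migration driven over every slice the node holds. Picking the least-loaded survivor as the target is an illustrative policy, not the actual one:

    from collections import Counter

    def drain_node(slice_map, leaving):
        """Plan the reassignment of every slice held by `leaving` so it
        can go offline; each (slice, target) pair is then executed by the
        same read-only copy as in migrate_slice above."""
        load = Counter(n for n in slice_map if n != leaving)
        moves = []
        for s, owner in enumerate(slice_map):
            if owner == leaving:
                target = min(load, key=load.get)
                moves.append((s, target))
                load[target] += 1
        return moves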

A slight modification of this is for cases where a slice always has more than one holder. Now, if we allocate slices to nodes on the fly, keeping would-be replicas on different machines but otherwise equalizing load, we run into a problem with redundancy when we perform an operation that has no specific recipient. Suppose we count a table's rows: we should send the count request once per slice. But the slice is not the recipient of the request, the cluster node hosting the slice is. We must either qualify the request with the slices it pertains to, which means extra reading and filtering, or arrange things so that no matter the choice of replicas for any operation, there is no overlap between their contents, yet the contents cover every slice. The only simple way to enforce the latter is to have cluster nodes pair-wise as each other's replicas. Nodes will then be added in pairs, or threes, or whatever number of replicas there will be. Google's GFS, the file-system redundancy layer under Bigtable, and Dynamo do not have this problem since they do not deal with generic DBMS semantics. The downside is that if a pair has two different types of boxes, the sizing should go according to the smaller one. No big deal.
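
A small sketch of why pair-wise replication solves the recipient problem; the pair table and node names are made up for illustration:

    import random

    # Nodes join pair-wise and the members of a pair replicate exactly
    # the same slices, so pairs partition the slice space.
    PAIRS = {0: ("A1", "A2"), 1: ("B1", "B2")}          # pair id -> its boxes
    slice_map = [i % len(PAIRS) for i in range(1024)]   # slice -> pair id

    def recipients():
        """For an operation with no specific recipient, e.g. counting a
        table's rows, pick one member of each pair."""
        return {pid: random.choice(members) for pid, members in PAIRS.items()}

Whichever member of each pair is picked, the chosen set of nodes covers every slice exactly once: full coverage, no overlap, and no per-slice qualification of the request.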

If replicas are assigned box by box instead of slice by slice, life is also simpler in terms of roll forward and reconstituting lost nodes.

The complete list of cluster nodes and their slice allocations is kept on a master node. Each node also knows which slices it holds, so with the loss of the master the situation can be reconstructed from the others. For a normal boot, the nodes get the cluster layout, slice allocation, and some config parameters from the master, so that if network addresses change they do not have to be rewritten on each node. Each node remembers these, though, for the event of master failure.
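
A boot could then look roughly like the following Python sketch; the URL, JSON shape, and cache file name are assumptions for illustration, not the actual wire format:

    import json, urllib.request

    def boot_layout(master_url, cache_path="cluster.layout"):
        """Fetch cluster layout, slice allocation, and config from the
        master at boot; keep a local copy so the node can still come up
        if the master is down."""
        try:
            with urllib.request.urlopen(master_url, timeout=5) as r:
                layout = json.load(r)
            with open(cache_path, "w") as f:
                json.dump(layout, f)      # remembered against master failure
        except OSError:
            with open(cache_path) as f:   # master unreachable: last known layout
                layout = json.load(f)
        return layout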

When a Virtuoso 6 database is made, the system objects are partitioned from the get-go. On a single process this has no effect. But since all the data structures exist, the transition from a single process to cluster and back is smooth.

At any time, a database, whether single or cluster, is a self-contained unit. One server process serves one database; the relation from database to server process is one-to-many. A server process will not mount multiple databases. This could be done, but it would be more changes than fit in the time available, so it will not be supported. One can always have multiple server processes and attached tables for genuinely inter-database operations. This said, a single database holds arbitrarily many catalogs, schemas, and application objects of all sorts.

In terms of schedule, we are doing the single copy per partition right now. Duplicate and triplicate copies of partitions will be needed as we do some of the web-scale things in the pipeline. So a degree of this is supported even now, but without seamless recovery: when a replica is offline, the remaining copy is read-only. Making it read-write is a matter of a little programming. DDL operations will continue to require all nodes to be online.

As of this writing, we are making the regular test suite run on a cluster with the partitioning described above, a single copy per partition. After this, the database layout will stay constant. The first deployments will go out without replicated partitions. Replicated partitions will follow shortly, together with some of the optimizations mentioned in previous posts.