A few weeks ago, Seagate announced their Kinetic Platform and Drive, a reinvention of the disk interface. I’ll admit that when I first heard about the vision I was skeptical, but when you dig into the details of their design, and in particular their focus, it starts to get really interesting.
While we have seen evolutions in hard drive technology across various parts of the technology stack, most modern drives are still based on the logical block addressing (LBA) abstraction introduced by SCSI. Today, scalable cloud storage from the likes of Google, Facebook, Amazon and Microsoft is built on systems like GFS, HDFS, Cosmos and Dynamo. All of these systems reinvented the way we think about file systems; however, most are still built on top of POSIX-like file systems that use disk command protocols based on block addressing. Seagate is intent on changing this by developing a new set of drive access APIs that use Ethernet as the physical interconnect to the drive, and by defining a key/value abstraction (apparently using Protocol Buffers) to replace LBA.
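To make the contrast concrete, here is a rough sketch of the two interfaces. The class and method names below are mine, not the actual Kinetic API:

```python
# A rough sketch of the two abstractions. The names here are
# illustrative stand-ins, not the actual Kinetic API.

class BlockDrive:
    """Traditional LBA-style interface: fixed-size blocks by number."""
    BLOCK_SIZE = 4096

    def read_block(self, lba: int) -> bytes: ...
    def write_block(self, lba: int, data: bytes) -> None: ...

class KeyValueDrive:
    """Kinetic-style interface: variable-size values by key, with the
    drive deciding placement on the media itself."""

    def put(self, key: bytes, value: bytes) -> None: ...
    def get(self, key: bytes) -> bytes: ...
    def delete(self, key: bytes) -> None: ...
```

The interesting shift is that placement becomes the drive’s problem: the host no longer reasons about block numbers at all.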
A key idea here is the elimination of servers that exist purely to have drives connected to them; instead, clients connect directly to individual drives. In GFS this means eliminating chunk servers. In Azure Storage, you might be able to eliminate extent nodes. Their wiki contains some examples of how this could work with Hadoop, Riak CS and others. It seems that in the Hadoop case the need for data nodes isn’t eliminated, but in the Riak CS case the need for object servers is. This should increase the drive density in a typical datacenter storage rack (example). Seagate hopes to work with teams building large-scale storage solutions to have them build in support for writing to drives through the Kinetic API layer. I’m excited to see how much traction they get on this.
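As a minimal illustration of what removing those servers means, replication becomes a fan-out performed by the client itself. Assuming some per-drive connection object exposing a put() like the sketch above:

```python
# Illustrative only: the writer replicates directly to several
# Ethernet-attached drives, with no chunk/extent/object server in the
# data path. `drives` is any collection of per-drive connections
# exposing put() as in the earlier sketch.

def put_replicated(drives, key: bytes, value: bytes) -> None:
    for drive in drives:         # fan out from the client itself
        drive.put(key, value)    # each drive stores a full replica
```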
Kinetic reminded me of the CORFU project at Microsoft Research. CORFU takes a different approach: an append-only log API instead of a key/value API, optimized for the characteristics of flash-based storage. The intent is similar, though: remove storage servers from the picture and introduce a protocol that lets clients interact directly with network-attached “flash units”, each called a SLICE (shared log interface controller). To prove out CORFU, the research team built an implementation of ZooKeeper on top of it.
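My condensed reading of what a SLICE exposes, simplified from the CORFU papers, is a write-once address space over flash pages:

```python
# Simplified sketch of a CORFU-style SLICE, condensed from the papers.
# Real SLICEs also support operations like seal and fill, omitted here.

class WriteOnceError(Exception):
    pass

class Slice:
    def __init__(self):
        self._pages = {}  # log position -> data

    def write(self, position: int, data: bytes) -> None:
        # Each position is writable exactly once; this is what lets
        # many clients append concurrently without a storage server
        # arbitrating between them.
        if position in self._pages:
            raise WriteOnceError(position)
        self._pages[position] = data

    def read(self, position: int) -> bytes:
        return self._pages[position]
```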
Both Kinetic and CORFU talk about supporting multiple clients and replication, and both rely on clients to initiate replication. Kinetic returns version numbers from its GET API and accepts a version number in its PUT API. With these APIs, a Kinetic drive might meet all the requirements of a CORFU SLICE, allowing Kinetic drives to implement a log using the ideas outlined in the CORFU papers. On top of this, I think one could implement a fairly fast pub-sub messaging system with features similar to Kafka’s without requiring a separate service as the broker.
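If Kinetic’s versioned PUT behaves as a compare-and-swap, then write-once log slots fall out almost directly. That conditional behavior is my reading of the API, not something I’ve verified, and the names below are hypothetical stand-ins:

```python
# Sketch: treating a versioned PUT as compare-and-swap to get
# CORFU-style write-once log slots out of a key/value drive. Assumes a
# PUT that takes an expected version and fails when the key's current
# version differs -- my reading of the API, not verified behavior.
# `drive`, `expected_version` and the exception are hypothetical names.

class VersionMismatchError(Exception):
    pass

def append_to_slot(drive, position: int, data: bytes) -> bool:
    key = b"log/%d" % position
    try:
        # An expected version of None (key must be absent) means only
        # the first writer of this log position succeeds; later writers
        # see a mismatch, giving the write-once semantics a CORFU
        # slice requires.
        drive.put(key, data, expected_version=None)
        return True
    except VersionMismatchError:
        return False
```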
I do not know a whole lot about how Kinetic drives are implemented, but I wonder if the drives themselves would implement the Kinetic key/value API using a log-structured approach internally, even if they aren’t SSDs, as is the case in key/value stores like BigTable, Riak’s Bitcask, Azure Storage’s streams and others.
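For intuition, the core of a Bitcask-style log-structured store is small: every put appends a record to a log file, and an in-memory index remembers where the latest value for each key lives. A toy version:

```python
import os
import struct

class LogStructuredStore:
    """Toy Bitcask-style store: an append-only data file plus an
    in-memory index mapping each key to the offset and length of its
    most recently written value."""

    def __init__(self, path):
        self._file = open(path, "a+b")
        self._index = {}  # key -> (value offset, value length)

    def put(self, key: bytes, value: bytes) -> None:
        self._file.seek(0, os.SEEK_END)
        offset = self._file.tell()
        # Record layout: 4-byte key length, 4-byte value length,
        # then the key and value bytes.
        self._file.write(struct.pack(">II", len(key), len(value)))
        self._file.write(key)
        self._file.write(value)
        self._file.flush()
        self._index[key] = (offset + 8 + len(key), len(value))

    def get(self, key: bytes) -> bytes:
        offset, length = self._index[key]
        self._file.seek(offset)
        return self._file.read(length)
```

The appeal for spinning disks is that writes stay sequential; the cost is periodic compaction of stale records, which I’ve omitted here.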
I really like the idea of lower-level components evolving to support modern use cases, and I look forward to the observations and new technologies that come from this. In particular, I wonder what existing design choices might need to be revisited as we build new systems.