GraphPipe itself is a protocol for transmitting machine learning data between remote processes. The GraphPipe project includes data format definitions, guidelines for serving models consistently in accordance with those definitions, examples for a number of machine learning frameworks, and client libraries for models served using GraphPipe.
Oracle developed GraphPipe to address challenges associated with deploying machine learning models. Specifically, the software giant observed that there is no standard model-serving API, meaning developers are typically forced to use whatever tools their frameworks offer.
This can get complicated for a variety of reasons. For example, Oracle says that model deployment currently gets less attention than model training, so out-of-the-box solutions are limited. And for organizations using multiple machine learning frameworks, custom integrations are more often than not required to get them to work together.
Finally, many of the existing solutions are not adequate for performance-critical applications, something Oracle points out can prevent organizations from maximizing their machine learning investments:
In the enterprise, machine-learning models are often trained individually and deployed using bespoke techniques. This impacts an organization's ability to derive value from its machine learning efforts. If marketing wants to use a model produced by the finance group, they will have to write custom clients to interact with the model. If the model becomes popular and sales wants to use it as well, the custom deployment may crack under the load.
According to Oracle, such problems are only exacerbated when models are needed to support customer-facing mobile and internet of things (IoT) applications. Because models are generally not run on-device, remote services are needed, and there are obvious risks if those services aren’t performant and reliable.
GraphPipe aims to address these issues by offering a standard that “allows researchers to build the best possible models, using whatever tools they desire, and be sure that users can access their models’ predictions without bespoke code.”
Oracle’s new standard uses FlatBuffers, an open-source cross-platform serialization library originally developed by Google. This is a key part of how GraphPipe delivers high performance. As Oracle explains, “Presently, no dominant standard exists for how tensor-like data should be transmitted between components in a deep learning architecture. As such it is common for developers to use protocols like JSON, which is extremely inefficient, or TensorFlow-serving’s protocol buffers, which carries with it the baggage of TensorFlow, a large and complex piece of software.”
FlatBuffers, on the other hand, offers a number of performance advantages. One of the most notable is support for zero-copy deserialization: accessing the deserialized data does not require copying it into a separate part of memory.
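The zero-copy idea can be illustrated with NumPy (an illustration of the concept only, not GraphPipe's actual wire format or implementation): a typed array can be constructed as a direct view over an existing byte buffer, so "deserializing" a tensor costs no copy at all.

```python
import numpy as np

# A serialized payload: raw little-endian float32 bytes, standing in for
# the tensor data carried inside a FlatBuffers message (illustration only;
# this is not the GraphPipe wire format itself).
payload = np.arange(6, dtype=np.float32).tobytes()

# Zero-copy "deserialization": np.frombuffer creates an array that views
# the existing buffer directly instead of copying it into new memory.
tensor = np.frombuffer(payload, dtype=np.float32).reshape(2, 3)

# The resulting array does not own its data -- it is a view over the
# original bytes, so no second copy of the tensor exists in memory.
print(tensor.flags["OWNDATA"])  # False
```

This is the property that lets a FlatBuffers-based server hand tensor data straight from the network buffer to the model without an intermediate parse-and-copy step.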
To demonstrate just how much more performant GraphPipe is than commonly used alternatives such as JSON served by Python and TensorFlow Serving, Oracle has released benchmarks from tests it conducted. One test compared serialization and deserialization operations, and the other measured end-to-end throughput for serving.
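A minimal sketch conveys the kind of gap such a serialization benchmark measures (the figures here are illustrative, not Oracle's published numbers): encoding the same tensor as JSON text versus raw binary, as a binary format like FlatBuffers would carry it.

```python
import json
import numpy as np

# One batch of float32 data, standing in for a model's input tensor.
data = np.random.rand(1024).astype(np.float32)

# JSON must encode every float as decimal text, which inflates the payload
# and costs parsing work on the receiving side.
json_payload = json.dumps(data.tolist()).encode("utf-8")

# A binary encoding ships the raw 4-byte floats directly.
binary_payload = data.tobytes()

print(len(binary_payload))  # 4096 bytes (1024 floats x 4 bytes)
print(len(binary_payload) < len(json_payload))  # True
```

The binary payload is both smaller on the wire and far cheaper to decode, which is the gap Oracle's serialization benchmark quantifies.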
Interestingly, Oracle noted that it was able, with significant effort, to get TensorFlow Serving to perform similarly to GraphPipe. But that actually highlights why Oracle believes there's a real need for a solution like GraphPipe, which offers high performance out of the box.
GraphPipe is now available on GitHub. To help machine learning developers get started, Oracle is providing clients for Python, Go, and Java, as well as a TensorFlow plugin.