As part of this release, an ecosystem of tools has been added to GitHub, such as an NNEF parser and converters from TensorFlow and Caffe. The organization is also working on developing importers into popular inferencing environments, such as Android’s Neural Network API and Khronos’ OpenVX.
According to the Khronos Group, NNEF was created in an attempt to “reduce industry fragmentation by facilitating the exchange of neural networks among training frameworks and inference engines, increasing the freedom for developers to mix and match the inferencing and training solutions of their choice.”
The standard offers roadmap stability that hardware and software companies can use for product deployment, while still maintaining the flexibility to respond to the needs of the evolving machine learning industry, the Khronos Group explained.
Using NNEF as a standard transfer format, the Khronos Group believes it is possible to create a complete workflow from training to optimization to deployment.
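To give a sense of what the format looks like, below is a minimal sketch of an NNEF textual graph description in the spirit of the examples in the NNEF 1.0 specification. The graph name, tensor names, and shapes here are purely illustrative, not taken from any real model:

```
version 1.0;

# hypothetical single-layer network: convolution followed by ReLU
graph example_net( input ) -> ( output )
{
    input = external(shape = [1, 3, 224, 224]);
    filter = variable(shape = [32, 3, 5, 5], label = 'conv1/filter');
    bias = variable(shape = [1, 32], label = 'conv1/bias');
    conv = conv(input, filter, bias);
    output = relu(conv);
}
```

A converter exported from a training framework such as TensorFlow or Caffe would emit a description along these lines, which an inference engine’s NNEF importer could then consume.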
“Khronos recognized a growing format logjam for companies deploying trained neural networks onto edge devices. We set out to build the first open standard solution for engineers to optimize and deploy trained networks onto diverse inference engines. Core NNEF 1.0 will enable cutting-edge solutions today and also flexibly evolve through its extension mechanisms,” said Peter McGuinness, Khronos NNEF working group chair. “In December 2017, we released the developer preview of NNEF and made an open call for industry feedback. Community response has been tremendous, confirming the demand for this standard and enabling us to achieve a responsive and complete NNEF 1.0 specification.”