There’s also ml5.js, built on top of TensorFlow.js, which aims to be a bit more approachable if TensorFlow.js itself is a bit too intense!
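As a taste of how approachable it is, here’s a rough sketch of image classification with ml5.js in the browser. This assumes the ml5 script has already been loaded from a CDN and that the page contains an image with the id `photo` — both illustrative choices of mine, and the callback-style API shown here reflects ml5 at the time of writing:

```javascript
// Assumes the page has loaded ml5, e.g.
// <script src="https://unpkg.com/ml5/dist/ml5.min.js"></script>
// and contains an <img id="photo"> element to classify.
const classifier = ml5.imageClassifier('MobileNet', () => {
  console.log('MobileNet model loaded');

  classifier.classify(document.getElementById('photo'), (err, results) => {
    if (err) return console.error(err);
    // results is a ranked list of { label, confidence } guesses
    console.log(results[0].label, results[0].confidence);
  });
});
```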
While they can communicate without an internet connection, they can also all connect to Particle’s own cloud-based service if you need them to. So it’s super easy to send and receive data over the web too.
Their three mesh-enabled microcontrollers are the following:
- Argon — their Wi-Fi connected Mesh Gateway. This is what the other devices can connect to.
- Boron — their LTE or 2G/3G connected Mesh Gateway. It can work like the Argon but can be more portable, as it doesn’t need Wi-Fi and instead can work worldwide (Particle provide a global SIM option!).
- Xenon — these are the mesh endpoints that connect to the Argon and Boron microcontrollers. They are the low-cost sensors you can put around the place (e.g. in all your plants to track moisture) and have them report back.
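The cloud side of this can be driven from JavaScript via Particle’s particle-api-js library. Here’s a hedged sketch — the credentials and the event name are placeholders of my own invention — that logs in and publishes an event your devices could listen for:

```javascript
// Sketch only: requires the particle-api-js package and a Particle account.
const Particle = require('particle-api-js');
const particle = new Particle();

particle
  .login({ username: 'me@example.com', password: 'hunter2' })
  .then((data) => {
    const token = data.body.access_token;
    // Publish an event that devices subscribed to 'moisture-check'
    // (a hypothetical event name) could react to.
    return particle.publishEvent({
      name: 'moisture-check',
      data: 'report-now',
      auth: token,
    });
  })
  .then(() => console.log('Event published'))
  .catch((err) => console.error('Particle request failed', err));
```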
I’m a very big fan of Dialogflow, Google’s platform for creating conversational interfaces for the Google Assistant and devices like the Google Home. The same set of trained responses and scenarios can then be used cross-platform in all sorts of places, including Facebook Messenger, Skype, Telegram, Twitter, Slack and even Twitch.
While a lot of the setup for your voice interface happens within their platform, requiring no coding whatsoever, their Node.js client expands the possibilities a whole lot more! The Twitch integration above is actually something I built myself as a bridge between Dialogflow and Twitch’s API using that very Node.js client.
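To give a sense of what that client looks like, here’s a sketch of detecting an intent from a line of text — the project ID, session ID and query text are placeholders of my own, and running it needs Google Cloud credentials configured in your environment:

```javascript
// Sketch only: requires the dialogflow npm package and
// Google Cloud credentials (GOOGLE_APPLICATION_CREDENTIALS).
const dialogflow = require('dialogflow');

const sessionClient = new dialogflow.SessionsClient();
// 'my-project-id' and 'my-session-id' are illustrative placeholders
const sessionPath = sessionClient.sessionPath('my-project-id', 'my-session-id');

const request = {
  session: sessionPath,
  queryInput: {
    text: { text: 'How moist are my plants?', languageCode: 'en-US' },
  },
};

sessionClient
  .detectIntent(request)
  .then((responses) => {
    const result = responses[0].queryResult;
    console.log(`Matched intent: ${result.intent.displayName}`);
    console.log(`Response: ${result.fulfillmentText}`);
  })
  .catch((err) => console.error('detectIntent failed', err));
```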
You can even get Alexa skills running through Dialogflow, as it exports to a format that Alexa can import, but there’s also a more thorough way of bridging the two using the Alexa Api.ai Bridge.
There are also built-in options for hooking into Facebook, Kik, Slack and Twilio SMS. It’s possible to hook an Amazon Lex skill into a Google Home action, but you’ve got to work out the bridging between the two yourself (there’s no official way to link them in either direction).
Low.js is a port of Node.js that appears to have emerged in late 2018, with the goal of being more appropriate for IoT devices.
Low.js has lower system requirements than Node itself. The creators say it starts up instantly, whereas Node can take some time to load up (approximately 0.6 to 1.5 seconds on a Raspberry Pi 2). It also uses a much smaller amount of disk space and memory.
The port lets developers utilise the whole Node.js API and can run on PCs as well as IoT devices. At the moment, it appears focused on ESP32 microcontrollers with Wi-Fi on board (ESP32-WROVER). These are a good option for those who want to experiment with the IoT at a lower cost. The ESP32 microcontroller is quite cheap (their website says about $3, which I assume is US$; I’ve seen it for about $6 in Australia).
WebXR is the next step for browser-based WebVR, with the WebXR spec aiming to encompass more devices, including augmented/mixed reality headsets. It is definitely still a work in progress and has been under discussion since late 2017. At the moment, it’s at the stage of an Editor’s Draft, released 6 February 2019. It is incredibly exciting and I think it is going to develop a whole lot in 2019!
If you’re looking to do cross-platform, more stable WebVR, feel free to stick with A-Frame and React 360. However, if you’re keen to explore what’s coming and potentially help test and give feedback on the new spec, WebXR is the thing to check out!
There is also a WebXR polyfill that provides fallbacks to native WebVR 1.1 and Google Cardboard.
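With the spec still in flux, the exact API surface may shift, but a session request looks roughly like the sketch below (following the shape the WebXR Device API settled on; the button id is an illustrative placeholder of mine, and the polyfill can supply `navigator.xr` on browsers that only speak WebVR 1.1 or Cardboard):

```javascript
// Sketch only: runs in a browser with WebXR support (or the polyfill).
if (navigator.xr) {
  navigator.xr.isSessionSupported('immersive-vr').then((supported) => {
    if (!supported) return;
    // Session requests must come from a user gesture, e.g. a button click
    document.getElementById('enter-vr').addEventListener('click', async () => {
      const session = await navigator.xr.requestSession('immersive-vr');
      session.addEventListener('end', () => console.log('VR session ended'));
      // Rendering would be wired up here via an XRWebGLLayer and
      // session.requestAnimationFrame(...)
    });
  });
}
```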