Despite Salesforce’s recent acquisition of the company he founded, MuleSoft’s Ross Mason is apparently not ready to rest on his laurels. Instead, Mason is out on the technology campaign trail with a reality check for anyone who thinks artificial intelligence is as plug-n-play as its purveyors might lead you to believe. Mason has a catchy codename for the conundrum, one I’ve yet to hear anyone else talk about. He calls it “Brain in a Jar.”
First, a bit of disclosure. MuleSoft is the parent company to ProgrammableWeb but gives us the latitude to cover the API economy both independently and objectively. I’ve known Mason since joining the company five years ago and yet, this is actually the first time we’ve interviewed him on the pages of ProgrammableWeb. The interview, available in video, audio, and full-text transcript form, appears below.
Over the years, I’ve chatted with Mason many times and if there’s one thing I can say about pretty much every conversation, it’s that I come away shaking my head over his knack for spotting the obvious. The only thing is, no one else is spotting it or talking about it the way he does. In fact, that same knack is what led to his founding of MuleSoft in 2006 after envisioning an obviously more efficient way for banks to integrate their back-end systems. So, it’s a pretty credible knack, which is why it’s worth hearing what he has to say.
In my view, artificial intelligence APIs — Ph.D.-grade APIs that give you dirt-cheap access to some incredibly powerful capabilities — are the next frontier for businesses looking to keep the competition in their rearview mirrors. But after speaking with Mason, I realize that I’ve been thinking about them in the wrong way. In the many presentations I’ve given this year, I’ve been talking about AI APIs as solutions that you plug into your enterprise data in order to deliver some meaningful insight that you couldn’t otherwise get without spending a fortune.
But, in what could best be described as a dereliction of duty, what I’ve really been proposing to my audiences is the same as Mason’s idea of a brain in a jar. Metaphorically, a “brain in a jar” is a silo of artificial intelligence that, by virtue of how it’s deployed, is really just a point solution that, at best, can only deliver limited insights and, subsequently, limited competitive advantage.
The logic is simple. The vast majority of AI that’s on the market today is one-way. There are exceptions. But generally speaking, deriving something magical out of AI requires you to feed something to it. Often, this happens through an API. For example, if you feed a huge corpus of images to an AI-driven image recognition API, the API can dig through all of that big data and tell you something about each one. Are they dogs? Cats? Humans? Are any of them Brad Pitt? Or someone from the FBI’s most wanted list? You get the idea.
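That one-way flow is easy to picture in code. Here’s a minimal Python sketch of the pattern, with `classify_image` acting as a stand-in for a hypothetical image-recognition API (the endpoint behavior, image IDs, and labels are all invented for illustration; a real service would take image bytes over HTTP and return JSON):

```python
# One-way "brain in a jar" flow: we feed data to an AI service and
# get insights back, but nothing acts on those insights automatically.

def classify_image(image_id: str) -> dict:
    """Stand-in for a call to a hypothetical image-recognition API.

    A real implementation would POST the image to the service and
    parse its response; here we simulate a watchlist hit.
    """
    watchlist = {"img-042": "most-wanted match"}
    return {"image": image_id, "label": watchlist.get(image_id, "no match")}

def scan_corpus(image_ids):
    """Push a corpus of images through the recognizer and collect insights."""
    return [classify_image(i) for i in image_ids]

insights = scan_corpus(["img-001", "img-042", "img-103"])
hits = [r for r in insights if r["label"] != "no match"]
print(hits)  # the insights arrive, but a human still has to act on them
```

Note where the sketch stops: at a printout. That dead end is exactly the jar Mason is talking about.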
But for AI to yield something truly awesome, it should be enabled to automatically take action as well. Take the image recognition example. Once there’s a hit (or multiple hits) on whatever you’re looking for, the next stop in the workflow shouldn’t be a human that’s tasked with some corresponding action. That’s neither scalable nor fast enough for today’s businesses running at clock speed. Imagine if a camera in a train station spotted someone on the FBI’s most wanted list. Shouldn’t the law enforcement officer closest to that location be notified within nanoseconds? Otherwise, the opportunity could be lost as fast as it presented itself.
And this is where we’re just not far enough along when it comes to plug-n-play AI. Before AI can automatically take the appropriate action based on what it has discovered (which is really what you want it to do) — whether it’s directing a satellite to get a better look at some flotsam, shutting down a pipeline before a junction explodes from fatigue, or expediting the capture of a most wanted criminal — the entire enterprise itself has to be programmable. In the same way your code might invoke an artificial intelligence API, the artificial intelligence must be able to invoke your enterprise APIs in return. But absent the necessary APIs for AI to connect to in order to effect the desired autonomic response, it’s back to the drawing board: writing a bunch of brittle, isolated, custom code with case structures and if-then-else statements to take that action. Not only will it never be good enough, it’s a step backward from where enterprises need to go with their application networks or what Mason simply refers to as their nervous systems.
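To make the contrast concrete, here’s a hedged Python sketch of what closing the loop might look like when the enterprise is programmable. The action names and insight types are hypothetical; in a real application network, each entry in the routing table would be an HTTP call to an API the enterprise actually exposes, rather than a hard-coded if-then-else branch per scenario:

```python
# Closing the loop: an AI insight invokes an enterprise API in return.
# Each callable below stands in for a hypothetical enterprise API; new
# responses are added by registering an action, not by rewriting logic.

ACTIONS = {
    "most-wanted match": lambda insight: f"alerted officer near {insight['camera']}",
    "pipeline fatigue":  lambda insight: f"shut down segment {insight['segment']}",
}

def act_on_insight(insight: dict) -> str:
    """Route an AI insight to the enterprise API that can act on it."""
    action = ACTIONS.get(insight["type"])
    if action is None:
        # No API to call: the brain stays in its jar.
        return "no automated response available"
    return action(insight)

print(act_on_insight({"type": "most-wanted match", "camera": "platform-3"}))
```

The design point isn’t the dictionary itself; it’s that the response surface grows with the enterprise’s API catalog instead of with custom glue code.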
“Quite a lot of it comes down to automation, especially automation around processes, around robotics, whatever else. What that means is you can’t just have an isolated insight,” Mason says in the interview. “I always liken AI to this brain in a jar, this thing that’s very smart, but it doesn’t have a nervous system. It can’t affect anything unless it’s connected with another system to the things around it that it wants to act upon…this idea that by plugging an AI engine in, suddenly I’m going to get insights and automate stuff, is a bit false. It’s not thinking about the whole problem. The whole problem, really, is once I’ve got an insight, and I’m getting a thousand insights a second, as an example, how do I then act upon those insights? The only way you can do that is by having a way to connect into everything else that matters.”
And in order to do that, everything else that matters needs APIs. Furthermore, most of the canned AI solutions that are currently available must evolve to take advantage of those APIs by triggering autonomic responses. It’s one thing for your AI-based security solution to spot a possible infiltration. It’s an entirely different thing for that same solution to automatically neutralize the threat.
For the last decade, there’s been a clarion call across the industry to move to API-led application networks. Whatever you want to call it — “digital transformation” or the “composable enterprise” — pretty much every enterprise needs to bust up its monolithic IT into more of a microservice-driven, API-led application network. One key rationale for this is agility. And now, with AI floating about, the enterprises that heeded that advice early on are in a much better position to rapidly and bidirectionally leverage AI in an effort to win in their markets. In other words, the longer you wait to retool your infrastructure, data, and functionality with APIs, the longer it will take to unlock the full potential of AI.
Here’s the interview.
Video of David Berlind’s Interview with Ross Mason
Editor’s Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb’s YouTube Channel.
Audio of David Berlind’s Interview with Ross Mason
Editor’s note: ProgrammableWeb has started a podcast called ProgrammableWeb Radio. To subscribe to the podcast with an iPhone, go to ProgrammableWeb’s iTunes channel. To subscribe via Google Play Music, go to our Google Play Music channel. Or point your podcatcher to our SoundCloud RSS feed or tune into our station on SoundCloud.
Full Transcript of David Berlind’s Interview with Ross Mason
David Berlind: Hi, I’m David Berlind, editor in chief of ProgrammableWeb. You’re watching and listening to The Developer’s Rock podcast, and with me today is Ross Mason. He is the founder of MuleSoft, and Ross, thanks for joining us today.
Ross Mason: It’s great to be here.
David: Yeah. So first I need to disclose that ProgrammableWeb is a wholly-owned subsidiary of MuleSoft. However, I’ve been here for five years, as long as MuleSoft has owned ProgrammableWeb, and as far as I can remember, this is the first time I’m actually interviewing you. Is that right?
Ross: I think it might be, yeah. Kind of late.
David: Yeah. Isn’t that crazy?
Ross: I’ve had conversations with you that felt like interviews in the past.
David: Yeah, well we get a chance to talk a lot offline and over beers and stuff, and I always love to hear the way you think because you’re just, you’re thinking way out into the future and you’re crystallizing a lot of variables into one kind of common thread. Today, or these days you’re out there talking about something called Artificial Intelligence in a Jar (editor’s note: actually…”Brain in a Jar”). I read this article that you wrote on CBR, and then I saw that Phil Wainewright interviewed you about the same topic on Diginomica and it sort of captured my attention because I’m intensely interested in Artificial Intelligence. So when I saw that you were out there talking about it, I said, “We gotta connect, and I wanna hear what you’re saying about it.”
David: What is AI in a Jar?
Ross: Yeah, so I liken Artificial Intelligence,… the way some people have been thinking about it… as sort of this isolated technology that you plug in data sets, you run some algorithms, you run some training sets, and then you get some insight. Certainly some of the first generation AI products if you like — so like a [IBM] Watson or something — require you to dump a shed load of data into that platform, and then start running algorithms out of it. It’s very much like big data. The way people thought about big data is like, put loads of stuff in there and then we’ll figure out later what the insights are going to be, and that hasn’t actually worked that well.