Non-Salesforce app developers may be missing out on a hidden gem in the AI world.
When developers think of using the cloud for AI, they might think of IBM Watson, Microsoft Azure Cognitive Services, Google Cloud, or Amazon AI. When they hear of Salesforce Einstein, they might automatically assume it’s limited to the Salesforce developer specialization.
Not so! Any app, whether Salesforce-related or not, can leverage the sophisticated AI cloud technologies that Salesforce has acquired. They’ve entered the AI market with Salesforce Einstein, their own orchestration of AI cloud services. Notably this includes offerings of language- and image-recognition services.
As with other AI cloud solutions, you don’t have to have a PhD to use the heavyweight technologies underneath. In this Salesforce Einstein API tutorial, I’ll show you how to set up an account and make your first AI cloud API calls. For the calls themselves, we’ll play with cURL and Postman, but you could also use your own back-end prototype or whatever other technology you’re most comfortable with.
From there, the sky’s the limit.
Creating a Salesforce Einstein API Account
To make Einstein Platform API calls, you first need to create an Einstein API account, download the key, and generate a Salesforce OAuth token using that key. The process only needs to be done once to be able to use both Einstein Vision and Einstein Language.
You can log in using your Salesforce or Heroku credentials. Selecting either option redirects you to the corresponding login page. If you log in with Heroku, you’ll be required to set up a credit card with them and attach the service to a specific Heroku instance of yours.
If you’re new to Salesforce and don’t have a Heroku account, setting up an account with them is fairly quick—even quicker if you want to sign up via a preexisting social account like one with Google.
We’ll assume from here that you’re using Salesforce (via a social account or not) instead of Heroku. The process involves a bit of backtracking, so you’ll want to pay close attention to these steps.
Once you’re logged into Salesforce, you’ll be faced with a tour screen that doesn’t have much to do with Einstein. At this point, you should check your email and click their verification link; otherwise, the next step will result in an error.
Getting an Einstein API Token
The next step is to circle back to that initial Einstein API signup link and try the Salesforce login button again. After that, you’ll set a new password—even if you created your account with the help of an external authorization partner like Google—and be redirected, again, to the tour page.
Now, circle back a third time to the API signup page, and click the Salesforce login button again. This time you’ll get a page as shown below. Don’t leave this page before downloading your private key, even though it may say you need to verify your email! If you do, there will be no way to get your private key without manual help from their support team.
You can either download the file to your local machine or copy and paste the key into a text editor and save the file as
Meanwhile, as mentioned, you’ll have another verification email waiting for you, this one being Einstein-specific. Click that verification link too.
Now that you have a private key, you’re able to generate time-limited tokens. Every API call you make—from creating datasets, to training models, to model prediction—needs a valid OAuth token in the request header. To get the token, you need to go to their token generator and use the same email address you used to log in. Paste or upload the private key file you received above.
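If you end up making these calls from a Python back end rather than cURL, it’s worth centralizing that header requirement in one place. A minimal helper, mirroring the headers used in the cURL calls later in this article:

```python
def einstein_headers(token: str) -> dict:
    """Build the request headers every Einstein API call needs.

    `token` is the time-limited OAuth token generated from your
    private key via the token generator page.
    """
    return {
        "Authorization": f"Bearer {token}",
        "Cache-Control": "no-cache",
    }

print(einstein_headers("MY_TOKEN"))
```

Remember that tokens are time-limited, so a long-running back end will need to regenerate them periodically.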
Hands-on with the Salesforce Einstein API
Using the AI cloud via Salesforce Einstein involves some basic concepts on how to train their artificial intelligence network by uploading sample data. If that doesn’t sound familiar, my previous tutorial gives some examples of working with Salesforce Einstein—both for Einstein Language and Einstein Vision.
Assuming you’re comfortable with that, we will now use the Einstein Image Classification REST API via cURL or Postman. If you’re using Postman, wherever we have a cURL call, you can use Postman’s import feature:
Suppose you have a business requirement where you want to distinguish between smartphones and landline phones based on images and, using that prediction, update your lead score or otherwise process your use case.
The next step is creating our own data set. Please note that you need at least 40 examples that have already been categorized. (If this is more time than you want to invest at the moment, you can skip to the prediction section below. Simply use a
In our case, we have two categories: smartphones and landline phones. We create two folders, labeling them as smartphones and landline phones, and add images in each folder. We then create a zip file (zip only: 7z does not work, for example) containing these folders.
Next, we use the Einstein API endpoint for creating data sets:
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "type=image" -F "<ZIP_LOCATION>" https://api.einstein.ai/v2/vision/datasets/upload/sync
<ZIP_LOCATION> can be like either of these examples:
In Postman, without importing, you would need to fill out the header and body tabs as shown below:
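If you’d rather make the same call from a Python back end without extra dependencies, you can build the multipart body that cURL’s -F flags produce using only the standard library. This is just a sketch: the file field name (data below) is an assumption for the local-file upload variant, so match it to whichever ZIP_LOCATION form you use.

```python
import uuid

def build_multipart(fields: dict, file_field: str, filename: str, file_bytes: bytes):
    """Build a multipart/form-data body by hand (what cURL's -F flags do).

    Returns (body, content_type), ready to pass to urllib.request.Request.
    """
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts += [
            f"--{boundary}".encode(),
            f'Content-Disposition: form-data; name="{name}"'.encode(),
            b"",
            str(value).encode(),
        ]
    parts += [
        f"--{boundary}".encode(),
        (f'Content-Disposition: form-data; name="{file_field}"; '
         f'filename="{filename}"').encode(),
        b"Content-Type: application/zip",
        b"",
        file_bytes,
    ]
    parts += [f"--{boundary}--".encode(), b""]
    return b"\r\n".join(parts), f"multipart/form-data; boundary={boundary}"

# The dataset-upload call's fields, with a stand-in zip payload.
body, ctype = build_multipart({"type": "image"}, "data", "phones.zip", b"PK...")
```

You would then POST body to the upload endpoint with the Content-Type header set to ctype, alongside the usual Authorization header.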
It will take some time for all the images to upload. Assuming they upload successfully, the response will include a
datasetId (repeated as the main
id and once per category), which will be used in future calls.
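For reference, here’s how a back end might pull that id out of the response. The JSON below is an abbreviated illustration of the shape we observed in our own tests (field names like labelSummary are an assumption, so verify them against your actual responses):

```python
import json

# Abbreviated sample of a dataset-upload response (shape assumed from testing).
sample_response = """
{
  "id": 1000001,
  "name": "phones.zip",
  "statusMsg": "SUCCEEDED",
  "labelSummary": {
    "labels": [
      {"id": 2001, "datasetId": 1000001, "name": "smartphones"},
      {"id": 2002, "datasetId": 1000001, "name": "landline phones"}
    ]
  }
}
"""

data = json.loads(sample_response)
dataset_id = data["id"]  # the datasetId you pass to the training call
label_names = [label["name"] for label in data["labelSummary"]["labels"]]
print(dataset_id, label_names)
```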
Once your data set is uploaded, you have to train the model using the data you just uploaded. To train the model, use the following call:
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Landline and SmartPhone Model" -F "datasetId=<DATASET_ID>" https://api.einstein.ai/v2/vision/train
Training of the data set is normally placed in their queue, and in response, we will get a
modelId. After that, we can query another endpoint to check whether the model has been trained yet:
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/train/<YOUR_MODEL_ID>
When the model is trained, you’ll get a response like this:
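In a back end, you’d typically poll that status endpoint until training finishes. A small sketch of the decision logic; the status values here (RUNNING, SUCCEEDED, FAILED) are the ones we observed in testing, so treat them as an assumption and verify against your own responses:

```python
def training_finished(status_response: dict) -> bool:
    """Decide whether to stop polling the train-status endpoint.

    Raises if training failed; returns True once it has succeeded.
    Status values are assumptions based on our own test responses.
    """
    status = status_response.get("status")
    if status == "FAILED":
        raise RuntimeError(status_response.get("failureMsg", "training failed"))
    return status == "SUCCEEDED"

print(training_finished({"status": "SUCCEEDED", "progress": 1}))   # True
print(training_finished({"status": "RUNNING", "progress": 0.42}))  # False
```

Between polls, sleep for a few seconds; training a small data set like ours usually completes quickly, but there’s no need to hammer the endpoint.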
Using Einstein Vision for Image Prediction
Here’s the heart of it. Once the model is trained, you can now send an image, and the model will return probability values for each category we’ve defined. For the current model, we picked a stock iPhone X image for prediction.
For the prediction itself, we use the following endpoint:
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "<IMAGE_LOCATION>" -F "modelId=<YOUR_MODEL_ID>" https://api.einstein.ai/v2/vision/predict
<IMAGE_LOCATION> is similar to
<ZIP_LOCATION>, but different keys are used, and there’s a third option:
sampleBase64Content=iVBORw0KGgoAAAANSUhEUgAAAC0... (In other words, if you want to use this upload method, you don’t need any prefix, just the raw base64 content.)
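Producing that raw base64 content from an image file is a one-liner in Python. The key point is what not to include: no "data:image/png;base64," URI prefix, just the bare encoded payload:

```python
import base64

def image_to_sample_b64(image_bytes: bytes) -> str:
    """Encode image bytes as the bare base64 string for sampleBase64Content.

    No "data:image/png;base64," prefix: just the raw base64 characters.
    """
    return base64.b64encode(image_bytes).decode("ascii")

# In practice: image_to_sample_b64(open("iphone_x.jpg", "rb").read())
# The PNG magic bytes encode to the familiar "iVBO..." opening:
print(image_to_sample_b64(b"\x89PNG\r\n"))  # iVBORw0K
```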
Looking at the screenshot and probability values, the model successfully predicted that the iPhone X image is classified under the smartphone category. Success!
Whatever your use case, be aware that Einstein Vision assumes you’re providing an image that falls into one of the categories you trained it on. In testing, we discovered that when we sent the above model a picture of a sailboat, it made a best guess between smartphones and landline phones, rather than indicating that the image didn’t seem to be either. In other words, the ratings it gives for your sailboat picture being a landline or smartphone still add up to 1, just as they would with legitimate input.
However, some pre-built models have categories like
Other (for the
SceneClassifier model) and
UNKNOWN (for the
FoodImageClassifier). So it’s worth experimenting in your particular context so you know what to expect if you’ll be feeding it images that don’t fit the categories it was given.
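One pragmatic workaround, if your own model has no catch-all category, is to apply a confidence threshold on top of the probabilities: when no label is confident enough, treat the image as unrecognized. The threshold below (0.7) is purely a placeholder you’d tune for your own context:

```python
def best_label(probabilities: list, threshold: float = 0.7):
    """Return the top label, or None if the model isn't confident enough.

    `probabilities` mirrors the API's response array: dicts with
    "label" and "probability" keys. The 0.7 default is arbitrary.
    """
    top = max(probabilities, key=lambda p: p["probability"])
    return top["label"] if top["probability"] >= threshold else None

# A confident smartphone prediction vs. a sailboat's ambiguous split:
print(best_label([{"label": "smartphones", "probability": 0.97},
                  {"label": "landline phones", "probability": 0.03}]))  # smartphones
print(best_label([{"label": "smartphones", "probability": 0.55},
                  {"label": "landline phones", "probability": 0.45}]))  # None
```

This doesn’t make the model any smarter about sailboats, but it does let your application fail gracefully instead of confidently mislabeling them.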
There’s also the “multi-label” type of model, which returns all categories, sorted by probability, with the assumption that multiple categories apply—i.e., the probabilities do not add up to 1. If that sounds more like what you’re doing, it would be worth looking into the newly released Einstein Object Detection. Instead of just telling you what might be in an image—overall—it actually gives you bounding boxes along with the predictions. This is similar to what you might have seen with auto-tagging on Facebook, except that it’s not limited to faces.
Salesforce Einstein Language: Intent and Sentiment
If you want to train your own model, Salesforce Einstein theoretically lets you train both Intent and Sentiment, but it’s much more common to only bother training Intent. Training an Intent model is similar to what we went through above, but instead of folders of images, you supply a two-column CSV file, with texts in column A and their corresponding categories in column B. (They also support TSV or JSON.)
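Generating that two-column CSV from data you already have is straightforward with Python’s csv module, which also handles quoting for texts containing commas. The example rows here are invented for illustration:

```python
import csv

# Hypothetical training examples: (utterance, intent category).
examples = [
    ("Where is my package?", "Shipping Info"),
    ("I was charged twice this month", "Billing"),
    ("How do I reset my password?", "Password Help"),
]

with open("intent_training.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for text, category in examples:
        writer.writerow([text, category])  # column A: text, column B: label
```

As with Einstein Vision, you’ll want a healthy number of examples per category before training; a three-row file like this is only to show the format.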
Because the training steps are largely the same, we’ll assume at this point that you’ve already trained an Einstein Intent model with the training data they supply in
case_routing_intent.csv and are OK using the standard pre-built model for Einstein Sentiment.
Einstein Intent prediction calls are as easy as:
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=<MODEL_ID>" -F "document=<TEXT_TO_PREDICT>" https://api.einstein.ai/v2/language/intent
<TEXT_TO_PREDICT> could be something like, “How can I get a tracking number for my shipment?”
The API call is the same with Einstein Sentiment, except that you can use the pre-built modelId
CommunitySentiment (and note the different endpoint):
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=CommunitySentiment" -F "document=How can I get a tracking number for my shipment?" https://api.einstein.ai/v2/language/sentiment
The JSON output in both cases looks exactly like the prediction response format for Einstein Image Classification: The main substance is in an array associated with the key
probabilities, and each array element gives you a label and a probability. That’s all there is to it!
Easy AI with Salesforce Einstein
You’ve now seen how straightforward it is to use Einstein Vision and Einstein Language, and how the Einstein APIs don’t have anything to do with the rest of the Salesforce developer APIs, except in name. What will you create with them?
Understanding the Basics
What is API access like in Salesforce?
Access to the AI cloud via the Salesforce Einstein API requires signing up for an account, obtaining a private key, and using it to generate a token. From there, the token can be used for any Einstein API call.
How do I enable API access in Salesforce?
Once you have a Salesforce (or Heroku) account, enabling API access (and obtaining a private key) is done via the Einstein signup page at https://api.einstein.ai/signup whether you plan to use the API in a Salesforce app or non-Salesforce app.