Integrating Adobe Target and Adobe Analytics into Voice Assistants

With digital experiences on the rise, interactive voice assistants like Amazon Alexa, Google Home, or Apple’s Siri keep gaining popularity. Companies now need to meet their customers’ expectations and let them interact with their brand however they like. These new possibilities require a clear strategy to avoid wasting time and resources on products nobody actually uses. Adobe Analytics can help you understand digital experiences better and drive value through customer feedback. And with Adobe Target for personalization and experimentation, nothing stops you from delivering the right experience at the right time.

This article describes how both Analytics and Target can be integrated into a Voice Assistant’s backend to track and test how users interact with your App. We will use a direct integration with the Experience Cloud ID Service to sync identifiers and use them for Analytics and Target. Target will then personalize the experience and enable A/B tests. Using Analytics for Target (A4T), information about test groups is made available in Analytics, together with the general usage data of the assistant. All of this will be done in Python, but since we are not using any SDKs, the examples work with any programming language. TL;DR: The script is also available on Github.

Concepts and architecture

Before we start, let’s have a look at what we are actually trying to build. Voice Assistants work with a concept called Intents, which are invoked by voice commands. Whenever our users talk to our Skill, our backend receives their command and creates a response, which is then sent back to the client. For this project, we want to be able to track which Intents are used and how. On top of that, we want to change the response in A/B tests and analyze the result. By using the Experience Cloud integrations, we can share segments from Analytics to Target as well.

When we receive an Intent, we will perform these steps:

  1. If the user does not already have an Experience Cloud ID (ECID), we request one from the ID service; otherwise, we reuse the existing one.
  2. With the ECID, we will then ask Target if it has content we want to send to the user. To enable A4T, we are going to send the Test Group information to Analytics.
  3. Last, Analytics will get a call containing information about the Intent that was used.

To keep things simple and readable, we will use the Python programming language. For our demo environment, all we need is the requests module and some static config placeholders. For this example, we pretend to build for Alexa so the variable names are more recognizable:

import requests

alexa_userID = "abc123"
alexa_deviceID = "def456"
custom_userID = "ghi789"

adobe_orgid = "9B24H87FJ8180A4C98A2@AdobeOrg"
adobe_ecid = ""
adobe_targetproperty = "f3we51a0-63be-i7gb-e23e-b738n2854c9de"
adobe_targetdomain = "probablyyourcompanyname"
adobe_targetcode = "alsoyourcompanybutshorter"
adobe_targetsession = "Session ID from assistant, user ID if not available"
adobe_rsid = "Analytics Report Suite ID"
adobe_analyticstrackingserver = "company.sc.omtrdc.net"

In a real Assistant, we would get lines 3-5 from our Skill’s environment. We will use those IDs to sync with the ID service. Which of them are available and what they are called depends on your environment. The remaining lines list the variables we need to integrate with the Adobe APIs.

Generating the Experience Cloud ID (ECID)

The first actual step is to give our user an ID from the Experience Cloud ID Service. On the first request we receive from a user, we need to call that service to get an ID from Adobe. Once we have received it, we should persist it for our user in our application and use it to replace line 8 above.
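
As a minimal sketch of that persistence step (assumptions: a plain in-memory dict stands in for a real store such as DynamoDB or the Skill’s persistent attributes, and the injected fetch function stands in for the actual ID service call shown below):

```python
# Cache of Skill user ID -> ECID; a plain dict is used here only for
# illustration -- a real Skill would use a persistent store.
ecid_store = {}

def load_or_create_ecid(user_id, fetch_ecid):
    # fetch_ecid stands in for calling the ID service and reading "d_mid"
    if user_id not in ecid_store:
        ecid_store[user_id] = fetch_ecid()
    return ecid_store[user_id]

# The ID service is only hit on the very first request from a user:
first = load_or_create_ecid("abc123", lambda: "12345")
again = load_or_create_ecid("abc123", lambda: "99999")
# → both calls return "12345"
```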

To deal with the service, we can create two functions. The first one gets an ID for our users, while the second syncs custom IDs that we might have from the user’s account:

def get_visitor_object(ecid=""):
  if ecid:
    r = requests.get('https://dpm.demdex.net/id?d_mid='+ecid+'&d_orgid='+adobe_orgid+'&d_ver=2&d_cid=alexaUserID%01'+alexa_userID+'%010&d_cid=alexaDeviceID%01'+alexa_deviceID+'%010')
  else:
    r = requests.get('https://dpm.demdex.net/id?d_orgid='+adobe_orgid+'&d_ver=2&d_cid=alexaUserID%01'+alexa_userID+'%010&d_cid=alexaDeviceID%01'+alexa_deviceID+'%010')
  print("Retrieved Visitor Object from ", r.url)
  visitor_object = r.json()
  return visitor_object

def sync_ids(ecid,ids):
  idstring = ""
  for name, value, authstatus in ids:
    idstring = idstring + "&d_cid=" + name + "%01" + value + "%01" + authstatus
  r = requests.get('https://dpm.demdex.net/id?d_mid='+ecid+'&d_orgid='+adobe_orgid+'&d_ver=2'+idstring)

The get_visitor_object() function can be called with or without an Experience Cloud ID. It returns an object with all the values we need to interact with Target and Analytics later on.

Below that, the sync_ids() function takes an ECID as a string and a list of one or more three-element tuples. To sync a custom user ID “1234” for a logged-in user to our ECID, we could call the function like this. The last argument is the authentication state (“0” for unknown, “1” for authenticated, “2” for logged out):

sync_ids(adobe_ecid,[("userid","1234","1")])
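
To see what the loop inside sync_ids() actually produces, here is the same ID-string construction pulled out as a small helper (the helper name is just for illustration):

```python
def build_idstring(ids):
    # Each (name, value, auth_state) tuple becomes one d_cid parameter,
    # with %01 as the separator between its three parts.
    return "".join("&d_cid=" + name + "%01" + value + "%01" + auth
                   for name, value, auth in ids)

print(build_idstring([("userid", "1234", "1"), ("crmid", "9876", "0")]))
# → &d_cid=userid%011234%011&d_cid=crmid%019876%010
```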

Server Side Integration with Adobe Target

Now that we can identify our users, it’s time to call Target and ask for the content we want to return to our user. But let’s take some time to look at the function and parameters for this first:

def get_mbox_content(mbox,intent,slots=[],profile_params=[],capabilities=[],ids=[]):

The parameters mean the following (see further below for usage):

  1. mbox is the name of the mBox we need for Target. This is what Target also calls “location” for our experience.
  2. intent contains the name of the Intent in our Assistant.
  3. slots can contain a list of one or more two-element tuples. It is used to add the Intent Slots to our call to Target as mBox parameters.
  4. profile_params uses the same format as slots. If we want to add profile parameters for Target, we can put them in here.
  5. capabilities is a simple list containing the capabilities of our user’s device. This allows us to adapt our response depending on the features the device has.
  6. ids can contain a list of tuples with two elements to sync IDs with Target.

To request content from Target, we need to construct the request like this:

  target_payload = {
    'context':{
      "channel":"web"
    },
    "id":{
      "marketingCloudVisitorId": adobe_ecid
    },
    "property" : {
      "token": adobe_targetproperty
    },
    "experienceCloud": {
      "analytics": {
        "logging": "client_side"
      },
      "audienceManager": {
        "locationHint": str(visitor_object["dcs_region"]),
        "blob": visitor_object["d_blob"]
      }
    },
    "execute": {
      "mboxes" : [
        {
          "name" : mbox,
          "index" : 1,
          "parameters":{
            "intent":intent
          }
        }
      ]
    }
  }

This is pretty straightforward. The most interesting part is the execute block, which contains the actual mBox request. Note that we already add one parameter for our Intent.

Now we can dynamically add more parameters for the Intent Slots, device Capabilities, profile parameters and sync IDs for logged in customers:

  for slot,content in slots:
    target_payload["execute"]["mboxes"][0]["parameters"]["slot_"+slot] = content

  for capability in capabilities:
    target_payload["execute"]["mboxes"][0]["parameters"]["capabilities_"+capability] = "true"

  if len(profile_params)>0:
    target_payload["execute"]["mboxes"][0]["profileParameters"]={}
    for param,content in profile_params:
      target_payload["execute"]["mboxes"][0]["profileParameters"][param] = content

  if len(ids) > 0:
    target_payload["id"]["customerIds"] = []
    for id,content in ids:
      target_payload["id"]["customerIds"].append({"id":content,"integrationCode":id,"authenticatedState":"authenticated"})
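
To illustrate what this merging produces, here is the slot and capability handling condensed into a hypothetical helper that returns the final parameters dict for the mBox:

```python
def build_mbox_parameters(intent, slots=[], capabilities=[]):
    # Mirrors the loops above: slots become slot_* parameters,
    # capabilities become capabilities_* flags set to "true".
    params = {"intent": intent}
    for slot, content in slots:
        params["slot_" + slot] = content
    for capability in capabilities:
        params["capabilities_" + capability] = "true"
    return params

print(build_mbox_parameters("Launch Intent",
                            slots=[("username", "Gerald")],
                            capabilities=["screen"]))
# → {'intent': 'Launch Intent', 'slot_username': 'Gerald', 'capabilities_screen': 'true'}
```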

The last thing this function does is make the actual call to Target (line 1) and send the Analytics for Target hit to Analytics (line 5). If Target returns content for our mBox, we return it; otherwise, we return an empty string:

  r = requests.post('https://'+adobe_targetdomain+'.tt.omtrdc.net/rest/v1/delivery?client='+adobe_targetcode+'&sessionId='+adobe_targetsession, json = target_payload)
  target_object = r.json()
  print("Requested Target Mbox from ",r.url)

  r = requests.get("https://"+adobe_analyticstrackingserver+"/b/ss/"+adobe_rsid+"/0/1?c.a.AppID=Spoofify2.0&c.OSType=Alexa&mid="+visitor_object["d_mid"]+"&pe="+target_object["execute"]["mboxes"][0]["analytics"]["payload"]["pe"]+"&tnta="+target_object["execute"]["mboxes"][0]["analytics"]["payload"]["tnta"])
  print("Tracked A4T to ", r.url)

  if "options" in target_object["execute"]["mboxes"][0]:
    return target_object["execute"]["mboxes"][0]["options"][0]["content"]
  else:
    return ""
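
A word of caution: the dictionary lookups above raise a KeyError if Target returns no mBoxes or no analytics payload at all. A more defensive variant of the content extraction could look like this (assuming the same response shape as above):

```python
def extract_mbox_content(target_object):
    # Walk the response defensively instead of indexing blindly.
    mboxes = target_object.get("execute", {}).get("mboxes", [])
    if not mboxes:
        return ""
    options = mboxes[0].get("options", [])
    return options[0].get("content", "") if options else ""

print(extract_mbox_content(
    {"execute": {"mboxes": [{"options": [{"content": "Hey Gerald!"}]}]}}))
# → Hey Gerald!
```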

Tracking Voice Assistant Intents with Adobe Analytics

Now that we have some content for our user, let’s let Analytics know about the Intent that was used. We use a function similar to the one for Target, but with two more flags that tell us whether the Skill has just been installed and whether the current Intent is the first of a session:

def track_intent(intent,slots=[],capabilities=[],install=False,launch=False):
  analytics_url = "https://"+adobe_analyticstrackingserver+"/b/ss/"+adobe_rsid+"/0/1?"
  if install:
    analytics_url += "c.a.InstallEvent=1&c.a.InstallDate=[currentDate]&"
    
  if launch:
    analytics_url += "c.a.LaunchEvent=1&"

  if len(slots)>0:
    slotlist = []
    for slot,content in slots:
      slotlist.append(slot+"="+content)
    slotstring = ",".join(slotlist)
    analytics_url += "l1="+slotstring+"&"

  if len(capabilities)>0:
    capabilitiesstring = ",".join(capabilities)
    analytics_url += "l2="+capabilitiesstring+"&"

  analytics_url += "c.a.AppID=Spoofify2.0&c.OSType=Alexa&c.Intent="+intent+"&mid="+visitor_object["d_mid"]+"&pageName="+intent+"&aamlh="+str(visitor_object["dcs_region"])+"&aamb="+visitor_object["d_blob"]

  r = requests.get(analytics_url)
  print("Tracked Intent to ",r.url)

If the Skill was just installed, we would replace the date placeholder with the current date, which lets us track installs. The same goes for the launch flag, which denotes a new session.

From lines 9 and 16 on, we iterate through the Slots and Capabilities like we did for Target. Compared to the official Analytics for Voice guide, we are using list variables to track both. This is especially useful for the Slots: if we set up a classification rule in Analytics, we can put Slot names and values into separate classification fields.
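
One caveat worth noting: the snippets in this post concatenate values straight into the query string. Intent names and slot values can contain spaces or special characters, so in a production Skill they should be URL-encoded first, for example with Python’s urllib.parse.quote:

```python
from urllib.parse import quote

# Encode values before appending them to the tracking URL
print(quote("Launch Intent"))     # → Launch%20Intent
print(quote("Gerald & friends"))  # → Gerald%20%26%20friends
```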

Putting it all together

With all those nice functions, we can now call all three Adobe components. We would use them in an order like this:

intent = "Launch Intent"
slots = [("username","Gerald"),("slot2","value2")]
capabilities = ["Capa 1","Capa 2","Capa 3","Capa 4"]

visitor_object = get_visitor_object(adobe_ecid)
adobe_ecid=visitor_object["d_mid"]
sync_ids(adobe_ecid,[("userid","1234","1")])

target_response = get_mbox_content("Voice Response", intent, slots,[("param1","value1"),("param2","value2")],capabilities,[("userid","1234"),("id2","2345")])
track_intent(intent, slots, capabilities)

print(target_response)

Lines 1-3 define how our Intent is called, which slots and values it has, and what the device’s capabilities are. Based on a persisted ID or an empty string, we then call the ID service and remember the ECID. In line 7, we sync our local user ID to the ID service as well.

With the Target response from line 9, we can then go ahead and look at what we get in line 12! For this demo, we create a simple activity in Target that greets the user with the value of the username slot.

When we call this from our script, we actually get a personalized response.

And we are done! We can see exactly what happened: we got an ID from the ECID service, synced it with our device IDs, called Target for some content, tracked the test group to A4T and the Intent to Analytics! In the end, we got a “Hey Gerald!” since we put “Gerald” in the username slot.

With this simple procedure, we can personalize and test our Voice Skill and see the results in Analytics, next to the other user behavior. Since the code is a bit hard to follow from this post alone, I’ve put the whole script on Github as well.
