Roughly three years ago, I set up an analytics chain in AWS to handle logging calls from Amazon Connect. Unfortunately, during some other fixes, that chain was broken and went unnoticed for about a month.
The original business need was this:
We need to be able to see information about the incoming calls to the help desk and run reports on it.
What Amazon Offers
Amazon Connect has a dashboard for short timeframes. When you access it, you’ll notice that there isn’t a lot there. To get any amount of good information, we’ll have to go another way, and that means dropping into the behemoth of AWS.
AWS Integration
Amazon Connect does allow fairly straightforward integration of its CTRs (Contact Trace Records) into the rest of AWS through data streaming, which we’ll be taking advantage of momentarily. The guide tells you how to enable data streaming, but you’ll first need to have a data stream to log the records into.
So, we need to do that first. Creating the stream is fairly easy, and I named it after the project I implemented it for. Go back to Amazon Connect / Data Streaming and choose this new stream.
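If you'd rather script that step than click through the console, here's a minimal sketch using the AWS SDK for JavaScript v3. The stream name, region, and shard count are assumptions for illustration, not the values from my setup.

// Minimal sketch (assumed names/values): create the Kinesis data stream
// that Amazon Connect will stream CTRs into.
const { KinesisClient, CreateStreamCommand } = require('@aws-sdk/client-kinesis');

const client = new KinesisClient({ region: 'us-east-1' }); // assumed region

async function createCtrStream() {
  await client.send(new CreateStreamCommand({
    StreamName: 'my-connect-ctr-stream', // hypothetical name
    ShardCount: 1,                       // one shard is plenty for low call volume
  }));
  console.log('Stream creation requested');
}

createCtrStream().catch(console.error);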
We need to create the Firehose now. Things get trickier here, so I’ll list the configuration options I chose.
Keep in mind that my setup intends for a 15-minute gap between each update to save on resources. My Lambda code is prepared to handle 1-n records, and when they are stored in S3, each file contains multiple independent JSON documents. OK, here’s how I configured it:
- Transform source records with AWS Lambda. I use this to convert the source record into something I can use (there’s a sketch of what that function looks like after this list). If you have specific requirements for how the stored record should look, this is your spot to do that.
- Buffer size is set to 2MB. It seems like a lot for a record! But my buffer interval is 900 seconds, so the Firehose takes all the records from each 15-minute window and dumps them into the Lambda at once. There might be dozens, so a little space is good here.
- I set the destination for the records to be S3, for which I created a bucket. I use a custom output prefix plus a custom error one:
Output prefix: ctr/converted/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/
Error prefix: !{firehose:error-output-type}/ctr/converted/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/
- Just in case, I also set up source record backup. Same bucket, but a different prefix. This way, if there are code errors, the records aren’t lost and I can recover them.
Source backup prefix: ctr/source/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/
Essentially, my bucket has three folders: source, converted, and errored.
- I created an IAM role to allow access to all of this. For Firehose to assume the role, the trust relationship needs to be set up like this:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "firehose.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}
Finally, while storing all of this is great for a permanent record, I wanted the records in a database so I could generate reports off of them. To simplify this, I used Directus with a flow for ingesting the records.
Into the CRM
For this part, the flow goes through two main steps: sanitizing the variables and inserting a new record.
I created a new flow with a POST body type and called it something like “AWS Connect Incoming Records” 🤷
With each trigger, I need to sanitize and handle multiple objects, so I used this code to do that.
module.exports = async function (data) {
  function replaceBooleanValues(obj) {
    if (typeof obj === 'object' && obj !== null) {
      for (let key in obj) {
        if (typeof obj[key] === 'object' && obj[key] !== null) {
          // Recursively traverse nested objects
          obj[key] = replaceBooleanValues(obj[key]);
        } else if (typeof obj[key] === 'string') {
          // Replace "true" with 1 and "false" with 0
          if (obj[key] === 'true') {
            obj[key] = 1;
          } else if (obj[key] === 'false') {
            obj[key] = 0;
          }
        }
      }
    }
    return obj;
  }

  let updatedValues = replaceBooleanValues(data.$trigger.body);
  return updatedValues;
};
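To make the behavior concrete, here’s a hypothetical payload run through that helper: string booleans anywhere in the object, including nested ones, become 1 or 0, and everything else passes through untouched.

// Hypothetical input, shaped loosely like a CTR fragment:
// { Channel: 'VOICE', Recording: 'false', Queue: { Enabled: 'true', Name: 'HelpDesk' } }
//
// After replaceBooleanValues:
// { Channel: 'VOICE', Recording: 0, Queue: { Enabled: 1, Name: 'HelpDesk' } }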
The next step is easy: I just used the “Create Data” block with the payload set to {{ $last }}.
Now, every 15 minutes, new records come in and are archived for reports.
If this breaks down again in another 3 years, at least now I have an overview of how I implemented it 😅