Amperity

Amperity is a comprehensive enterprise customer data platform that helps brands get to know their customers, make strategic decisions, and consistently take the right course of action to serve their customers better. Amperity provides intelligent capabilities across data management, unification, analytics, insights, and activation.

Amperity supports the Braze platform by providing a unified view of your customers across the Amperity customer data platform and Braze. This integration allows you to:

  • Sync Amperity Segments: Sync segments to map Amperity user data to Braze user accounts.
  • Unify data: Unify data across various Amperity-supported platforms and Braze.
  • Send Amperity data via AWS S3 buckets to Braze: Use a serverless Lambda function that posts user attribute data to Braze whenever an Amperity user segment CSV is uploaded to your AWS S3 bucket.
  • Manually Upload Amperity data to Braze: Manually upload user CSV segments to the Braze platform through the dashboard.

Prerequisites

Requirement       Origin     Access     Description
Amperity account  Amperity   Amperity   An Amperity account is required to set up the Amperity-Braze integration.

Braze and Amperity Integration

Step 1: Create Amperity User Segment

To upload Amperity user data to Braze, you must first create a segment of existing Amperity users.

  1. Navigate to the Segments tab within the Amperity dashboard.
    Amperity Segments Overview

  2. Click Create to filter and define a segment of users to capture. Under the Summary tab, you can view valuable insights like historical revenue and predicted revenue for the coming year based on the given user segment.
    Amperity Segment Builder

  3. Select the Customers tab, and choose which user fields you would like to include using the Show Columns selector on the right.
    Amperity Segment Builder

  4. Next, click Run Segment.

Step 2: Select Upload Method

Once the segment has run, you can either:

  • Set up Automatic Upload - Recommended
    • Set up a destination workflow to automatically upload Amperity user attribute data to Braze via an AWS S3 Bucket.
  • Set up Manual Upload
    • Manually upload user CSV segments to the Braze platform through the dashboard.

Automatic Upload - Upload via AWS S3 Bucket

Step 3a: Set Braze Destination

Step 3.1a: Activate Segment


First, you must activate the segment by selecting Activate Segment in the upper right corner of the page.

In the window that opens:

  • Name your destination Braze
  • Set the Data Template to Default
  • Enter your S3 bucket
  • Enter your S3 region
  • Set a file name template
  • Set the workflow query frequency

Click Activate.

Step 3.2a: Set Up Destination

Destination Configuration

Next, you must set up the Braze destination workflow by selecting the Destination tab and clicking Add Destination.

In the window that opens:

  • Name your destination Braze and add an optional description
  • Select the Amazon S3 plugin
  • Set the credential type to iam-credential
  • Name and configure the credential based on your Amazon S3 settings
  • Enter your S3 bucket
  • Enter your S3 region
  • Set encoding to None
  • Include header row in output files

Click Save.

See Amperity's documentation for additional information on configuring Amazon S3.
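
Before saving, you can optionally sanity-check that the IAM credential can write to the bucket you entered. The sketch below uses hypothetical placeholder values for the bucket, region, and keys; substitute the ones from your Amazon S3 settings.

import boto3

# Placeholder values; substitute the bucket, region, and IAM credential
# you configured in the Amperity destination settings.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# Write and then delete a small test object to confirm the credential can
# deliver files to the bucket Amperity will write segments to.
s3.put_object(Bucket="your-amperity-bucket", Key="amperity-write-test.txt", Body=b"ok")
s3.delete_object(Bucket="your-amperity-bucket", Key="amperity-write-test.txt")
print("Bucket is writable")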

Step 4a: Send Data via AWS S3 Bucket

Lambda Function

The following Lambda function is a serverless application that lets you post user attribute data from an Amperity CSV file directly to Braze through the Braze User Track endpoint (/users/track). The process launches immediately when a CSV file is uploaded to the configured AWS S3 bucket. To read more, visit our dedicated Lambda function article.
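
For orientation, here is a simplified sketch of that flow, not the published Lambda's actual source: read the uploaded CSV from S3, turn each row into an attributes object, and post batches to /users/track. The BRAZE_API_URL and BRAZE_API_KEY environment variable names are assumptions made for this sketch.

import csv
import io
import json
import os
import urllib.request

import boto3

def lambda_handler(event, context):
    # The S3 upload event identifies the bucket and the CSV object key.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"]
    reader = csv.DictReader(io.StringIO(body.read().decode("utf-8")))

    batch = []
    for row in reader:
        # Each row becomes one attributes object keyed by external_id.
        batch.append({"external_id": row.pop("external_id"), **row})
        if len(batch) == 75:  # /users/track accepts up to 75 objects per request
            post_to_braze(batch)
            batch = []
    if batch:
        post_to_braze(batch)

def post_to_braze(attributes):
    request = urllib.request.Request(
        os.environ["BRAZE_API_URL"] + "/users/track",
        data=json.dumps({"attributes": attributes}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["BRAZE_API_KEY"],
        },
    )
    urllib.request.urlopen(request)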

Requirements and Limitations

  • AWS Account: An AWS Account is required to use the S3 and Lambda services.
  • Braze API URL: Your Braze REST endpoint URL is required to connect to Braze servers.
  • Braze API Key: A Braze API key with the users.track permission is required to send requests to the /users/track endpoint.
  • CSV File: Use Step 1 of this integration to generate a CSV file with user external IDs and the attributes to update.

The Lambda function can handle large files and uploads, but it will stop execution after 10 minutes because of Lambda's execution time limit. The process then launches another Lambda instance to finish processing the remainder of the file.
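
One common way to implement that hand-off, shown here as a hedged sketch rather than the shipped code, is to watch the remaining execution time on the Lambda context object and asynchronously re-invoke the function with an offset into the file:

import json
import boto3

def should_hand_off(context, function_name, bucket, key, offset):
    # get_remaining_time_in_millis() is provided by the Lambda context object.
    if context.get_remaining_time_in_millis() < 60_000:  # under a minute left
        boto3.client("lambda").invoke(
            FunctionName=function_name,
            InvocationType="Event",  # asynchronous re-invocation
            Payload=json.dumps({"bucket": bucket, "key": key, "offset": offset}),
        )
        return True  # the current instance should stop processing
    return False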

CSV Formatting and Processing

CSV User Attributes

User attributes to be updated must be in the following .csv format:

external_id,attr_1,...,attr_n
userID,value_1,...,value_n

The first column must specify the external ID of the user to be updated, and the following columns must specify attribute names and values. The number of attributes you specify can vary. If the CSV file to be processed does not follow this format, the function will fail.

CSV file example:

external_id,Loyalty Points,Last Brand Purchased
abc123,1982,Solomon
def456,578,Hunter-Hayes
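
For reference, the first data row above would become an attributes object roughly like the following (a sketch; note that CSV values arrive as strings unless the function parses them):

{
    "external_id": "abc123",
    "Loyalty Points": "1982",
    "Last Brand Purchased": "Solomon"
}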

CSV Processing

Any values in an array (e.g., "['Value1', 'Value2']") will automatically be destructured and sent to the API as an array rather than a string representation of an array.
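
A minimal way to express that destructuring in Python, using the hypothetical helper name destructure, is:

import ast

def destructure(value):
    # Turn a string such as "['Value1', 'Value2']" into a real list so the
    # API receives an array instead of the string representation of one.
    if value.startswith("[") and value.endswith("]"):
        try:
            return ast.literal_eval(value)
        except (ValueError, SyntaxError):
            pass  # not a parsable list; send the raw string through
    return value

print(destructure("['Value1', 'Value2']"))  # ['Value1', 'Value2']
print(destructure("plain text"))            # plain text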

Usage Instructions

  1. Deploy Braze’s publicly available CSV processing Lambda from the AWS Serverless Application Repository.
  2. Drop a CSV file with user attributes in the newly created S3 bucket.
  3. The users will be automatically imported to Braze.

Deploy

To start processing your user attribute CSV files, deploy the serverless application that handles the processing for you. To deploy successfully, the application automatically creates the following resources:

  • Lambda function
  • S3 Bucket for your CSV Files that the Lambda process can read from (Note: this Lambda function will only receive notifications for .csv extension files)
  • Role allowing for the creation of the above
  • Policy to allow Lambda to receive S3 upload events in the new bucket

Follow the direct link to the application or open the AWS Serverless Application Repository and search for braze-user-attribute-import. Note that you must check the Show apps that create custom IAM roles and resource policies checkbox to see this application. The application creates a policy for the Lambda to read from the newly created S3 bucket.

Click Deploy and let AWS create all the necessary resources.

You can watch the deployment and verify that the stack (that is, all the required resources) is being created in the AWS CloudFormation console. Find the stack named serverlessrepo-braze-user-attribute-import. Once its Status turns to CREATE_COMPLETE, the function is ready to use. You can click the stack and open Resources to watch the individual resources being created.
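
If you prefer to check from code rather than the console, a small boto3 sketch can poll the stack status (the stack name matches the one referenced above):

import boto3

cfn = boto3.client("cloudformation")
stack = cfn.describe_stacks(StackName="serverlessrepo-braze-user-attribute-import")["Stacks"][0]
# The function is ready once this prints CREATE_COMPLETE.
print(stack["StackStatus"])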

The following resources are created:

  • S3 Bucket - a bucket named braze-user-csv-import-aaa123 where aaa123 is a randomly generated string
  • Lambda Function - a Lambda function named braze-user-attribute-import
  • IAM Role - a role named braze-user-csv-import-BrazeUserCSVImportRole with a policy that allows Lambda to read from S3 and log function output

Run

To run the function, drop a user attribute CSV file in the newly created S3 bucket.
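
You can drop the file through the S3 console or from code, for example with boto3 (the bucket name below is the placeholder from the resource list, so substitute your generated name):

import boto3

boto3.client("s3").upload_file(
    "segment.csv",                   # local CSV downloaded from Amperity
    "braze-user-csv-import-aaa123",  # substitute your generated bucket name
    "segment.csv",                   # object key; must end in .csv to trigger the Lambda
)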

Manual Upload - Upload via CSV

Step 3b: Amperity Platform

  1. Once the segment has run, click View SQL. This generates a SQL query that preformats the data to match what the Braze platform requires. Make sure the field names match the Braze fields you want to load data into. To customize them, you can convert the segment to SQL and alias the fields. Click Run Query to run the SQL query.
    Amperity Segment Builder

  2. Lastly, click Download to download a CSV version of this user segment. This is the file you’ll upload to Braze.

Step 4b: Braze Platform

  1. From the Braze platform, go to the User Import page listed under Users.
  2. Upload the CSV file downloaded from Amperity.
  3. Once uploaded, confirm the default and custom attributes, assign an import name, and optionally create a group within the Braze platform from the uploaded Amperity segment.
  4. Click Start Import.