Amperity is a comprehensive enterprise customer data platform, helping brands get to know their customers, make strategic decisions, and consistently take the right course of action to serve their consumers better. Amperity provides intelligent capabilities across data management, unification, analytics, insights, and activation.
Amperity supports the Braze platform by providing a unified view of your customers across its customer data platform and Braze. This integration allows you to:
- Sync Amperity Segments: Sync segments to map Amperity user data to Braze user accounts.
- Unify data: Unify data across various Amperity supported platforms and Braze.
- Send Amperity data via AWS S3 Buckets to Braze: Upload Amperity user segments to an AWS S3 bucket, where a serverless Lambda function posts the user attribute data to Braze.
- Manually Upload Amperity data to Braze: Manually upload user CSV segments to the Braze platform through the dashboard.
| Requirement | Description |
| --- | --- |
| Amperity account | An Amperity account is required to set up the Amperity-Braze integration. |
Braze and Amperity integration
Step 1: Create Amperity user segment
To upload Amperity user data to Braze, you must first create a segment of existing Amperity users.
- Navigate to the Segments tab within the Amperity dashboard.
- Click Create to filter and define a segment of users to capture. Under the Summary tab, you can view valuable insights like historical revenue and predicted revenue for the coming year based on the given user segment.
- Select the Customers tab, and choose which user fields you would like to include using the Show Columns selector on the right.
- Next, click Run Segment.
Step 2: Select upload method
Once the segment has run, you can either:
- Set up Automatic Upload - Recommended
- Set up a destination workflow to automatically upload Amperity user attribute data to Braze via an AWS S3 Bucket.
- Set up Manual Upload
- Manually upload user CSV segments to the Braze platform through the dashboard.
Automatic upload - upload via AWS S3 bucket
Step 3a: Set Braze destination
Step 3.1a: Activate segment
First, you must activate the segment by selecting Activate Segment in the upper right corner of the page.
In the window that opens:
- Name your destination Braze
- Set the Data Template to Default
- Enter your S3 bucket
- Enter your S3 region
- Set a file name template
- Set the workflow query frequency
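The file name template lets each export get a distinct object name. As a rough illustration only (the `{date}` placeholder and its format below are hypothetical, not Amperity's actual template syntax), a date-stamped template could expand like this:

```python
from datetime import date

def expand_template(template: str, run_date: date) -> str:
    """Expand a {date} placeholder in a file name template.

    The {date} placeholder and its YYYY-MM-DD format are illustrative
    assumptions, not Amperity's actual template syntax.
    """
    return template.format(date=run_date.isoformat())

print(expand_template("braze-segment-{date}.csv", date(2024, 1, 15)))
# braze-segment-2024-01-15.csv
```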
Step 3.2a: Set up destination
Next, you must set up the Braze destination workflow by selecting the Destination tab and clicking Add Destination.
In the window that opens:
- Name your destination Braze and add an optional description
- Select the Amazon S3 plugin
- Set the credential type to iam-credential
- Name and configure the credential based on your Amazon S3 settings
- Enter your S3 bucket
- Enter your S3 region
- Set encoding to None
- Include header row in output files
Additional documentation on configuring Amazon S3 is available in Amperity's docs.
Step 4a: Send data via AWS S3 bucket
The following Lambda function is a serverless application that allows you to easily post user attribute data from an Amperity CSV file directly to Braze through the Braze User Track endpoint. This process launches immediately upon uploading a CSV file to a configured AWS S3 bucket. To read more, visit our dedicated Lambda function article.
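Because the function launches on an S3 upload event, its first job is to work out which object was uploaded. As a minimal sketch (this uses the standard S3 event notification structure and is not Braze's actual implementation), the bucket and object key can be extracted like this:

```python
import urllib.parse

def parse_s3_event(event: dict) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 put-notification event.

    Object keys arrive URL-encoded in the event payload, so they are
    decoded before use.
    """
    pairs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        pairs.append((bucket, key))
    return pairs

# Example event shaped like the notification S3 sends to Lambda:
event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-import-bucket"},
                "object": {"key": "segments/users+2024.csv"}}}
    ]
}
print(parse_s3_event(event))  # [('my-import-bucket', 'segments/users 2024.csv')]
```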
Requirements and limitations
- AWS Account: An AWS Account is required to use the S3 and Lambda services.
- Braze API URL: Your Braze REST endpoint URL is required to connect to Braze servers.
- Braze API Key: A Braze API key with the `users.track` permission is required to send requests to Braze.
- CSV File: Use step 1 of the Amperity integration steps to obtain a CSV with user external IDs and attributes to update.
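Putting these requirements together, here is a minimal sketch of how a `/users/track` request is assembled (the REST endpoint URL and API key below are placeholders you must replace with your own values):

```python
import json
import urllib.request

BRAZE_REST_URL = "https://rest.iad-01.braze.com"  # placeholder: use your cluster's REST endpoint
BRAZE_API_KEY = "YOUR-API-KEY"                    # placeholder: key with users.track permission

def build_users_track_request(attributes: list[dict]) -> urllib.request.Request:
    """Build (but do not send) a POST request to Braze's /users/track endpoint."""
    body = json.dumps({"attributes": attributes}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BRAZE_REST_URL}/users/track",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {BRAZE_API_KEY}",
        },
    )

req = build_users_track_request(
    [{"external_id": "abc123", "Loyalty Points": 1982}]
)
print(req.full_url)  # https://rest.iad-01.braze.com/users/track
```

Sending the request (for example with `urllib.request.urlopen`) is omitted so the sketch stays side-effect free.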
The Lambda function can handle large files and uploads, but the function will stop execution after 10 minutes due to Lambda’s time limits. This process will then launch another Lambda instance to finish processing the remaining part of the file.
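The hand-off between invocations amounts to resuming from a byte offset in the file. The sketch below uses a line-count cutoff as a simplified stand-in for the real function's time-based cutoff; the re-invocation itself (calling Lambda again with the returned offset) is left out:

```python
import io

def process_chunk(fileobj, start_offset: int, max_lines: int):
    """Process lines starting at start_offset; return (processed, next_offset).

    max_lines stands in for the real function's time-based cutoff: when
    it is reached, the returned offset lets a new invocation resume
    exactly where this one stopped.
    """
    fileobj.seek(start_offset)
    processed = []
    for _ in range(max_lines):
        line = fileobj.readline()
        if not line:
            return processed, None  # end of file: no follow-up invocation
        processed.append(line.rstrip("\n"))
    return processed, fileobj.tell()

csv_data = io.StringIO("a,1\nb,2\nc,3\n")
first, offset = process_chunk(csv_data, 0, max_lines=2)
second, done = process_chunk(csv_data, offset, max_lines=2)
print(first, second, done)  # ['a,1', 'b,2'] ['c,3'] None
```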
CSV formatting and processing
CSV user attributes
User attributes to be updated must be in the following format:

```
external_id,attr_1,...,attr_n
userID,value_1,...,value_n
```
The first column must specify the external ID of the user to be updated, and the following columns must specify attribute names and values. The number of attributes you specify can vary. If the CSV file to be processed does not follow this format, the function will fail.
CSV file example:

```
external_id,Loyalty Points,Last Brand Purchased
abc123,1982,Solomon
def456,578,Hunter-Hayes
```
Any values in an array (for example, `"['Value1', 'Value2']"`) will automatically be destructured and sent to the API as an array rather than as a string representation of an array.
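The array detection described above can be sketched with Python's literal parser. This is an approximation of the behavior, not the function's actual parsing code:

```python
import ast

def destructure(value: str):
    """Return a list for string-encoded arrays like "['A', 'B']";
    otherwise return the value unchanged."""
    stripped = value.strip()
    if stripped.startswith("[") and stripped.endswith("]"):
        try:
            parsed = ast.literal_eval(stripped)
            if isinstance(parsed, list):
                return parsed
        except (ValueError, SyntaxError):
            pass  # not a valid literal: keep the raw string
    return value

print(destructure("['Value1', 'Value2']"))  # ['Value1', 'Value2']
print(destructure("plain string"))          # plain string
```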
- Deploy Braze’s publicly available CSV processing Lambda from the AWS Serverless Application Repository.
- Drop a CSV file with user attributes in the newly created S3 bucket.
- The users will be automatically imported to Braze.
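Under the hood, each CSV row becomes one attribute object for the `/users/track` endpoint, which accepts a limited number of objects per request (75 at the time of writing), so rows are sent in batches. A minimal sketch of the row-to-object mapping and batching (not the deployed function's actual code):

```python
import csv
import io

BATCH_SIZE = 75  # /users/track accepts up to 75 attribute objects per request

def rows_to_batches(csv_text: str, batch_size: int = BATCH_SIZE):
    """Turn a Braze-formatted CSV into batches of /users/track attribute objects.

    The first column must be external_id; the remaining columns become
    attribute name/value pairs, exactly as the format above requires.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    batch, batches = [], []
    for row in reader:
        batch.append(dict(row))  # {'external_id': ..., 'attr_1': ..., ...}
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)
    return batches

csv_text = "external_id,Loyalty Points\nabc123,1982\ndef456,578\n"
print(rows_to_batches(csv_text, batch_size=1))
# [[{'external_id': 'abc123', 'Loyalty Points': '1982'}],
#  [{'external_id': 'def456', 'Loyalty Points': '578'}]]
```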
To start processing your User Attribute CSV files, we need to deploy the Serverless Application to handle the processing for you. This application will create the following resources automatically to deploy successfully:
- Lambda function
- S3 Bucket for your CSV files that the Lambda process can read from (note: the Lambda function will only receive notifications for `.csv` files)
- Role allowing for the creation of the above
- Policy to allow Lambda to receive S3 upload event in the new bucket
Follow the direct link to the application or open the AWS Serverless Application Repository and search for braze-user-attribute-import. Note that you must check the Show apps that create custom IAM roles and resource policies checkbox to see this application. The application creates a policy for the Lambda to read from the newly created S3 bucket.
Click Deploy and let AWS create all the necessary resources.
You can watch the deployment and verify that the stack (that is, all the required resources) is being created in the CloudFormation console. Find the stack named serverlessrepo-braze-user-attribute-import. Once the status turns to `CREATE_COMPLETE`, the function is ready to use. You can click on the stack, open Resources, and watch the different resources being created.
The following resources are created:
- S3 Bucket - a bucket for your CSV files whose name ends in a randomly generated string (for example, `aaa123`)
- Lambda Function - the Lambda function that processes CSV files uploaded to the bucket
- IAM Role - a role with the policy `braze-user-csv-import-BrazeUserCSVImportRole` that allows the Lambda to read from S3 and to log function output
To run the function, drop a user attribute CSV file in the newly created S3 bucket.
To read more about different aspects of the Lambda function, such as monitoring and logging, updating an existing function, and fatal errors, visit our dedicated Lambda function article.
Manual upload - upload via CSV
Step 3b: Amperity platform
- Once the segment has run, click View SQL. This generates a SQL query that preformats the data to match what the Braze platform requires. Make sure the field names match the fields in Braze that you want to load data into; if you'd like to customize them, you can convert the segment to SQL and alias the fields. Click Run Query to run the SQL query.
- Lastly, click Download to download a CSV version of this user segment. This is the file you’ll upload to Braze.
Step 4b: Braze platform
- From the Braze platform, go to the User Import page listed under Users.
- Upload the CSV file downloaded from Amperity.
- Once uploaded, confirm the default and custom attributes, assign an import name, and optionally create a group within the Braze platform from the uploaded Amperity segment.
- Click Start Import.