Introduction
MoEngage allows customers to import users and events through files stored in sources such as S3 buckets and SFTP servers, as well as directly from tables in your data warehouses. The MoEngage Dashboard lets you configure multiple imports by defining file locations, setting up one or more file import sources, and choosing import methods.
Supported Sources
MoEngage currently supports importing files from S3 buckets and SFTP servers. MoEngage also supports importing directly from data warehouses such as Snowflake.
Types of Imports
MoEngage can import the following from your sources:
- Registered Users: These are users who are already registered on MoEngage. You can also use this type of import to update existing users in bulk.
- Anonymous Users: These are users who are not yet registered on MoEngage.
- Events (Standard and User Defined): MoEngage can import standard events like Campaign Interaction Events as well as your own user-defined events.
- Auxiliary Data: This is user data temporarily ingested into MoEngage for specific business purposes.
Note: The following NAT IPs need to be whitelisted for Imports:
Preparing the files
Before configuring the imports on the MoEngage dashboard, please make sure that all your files are in the expected format and within the sanctioned limits.
Supported File Types
We only support CSV files (in UTF-8 file encoding) as of now.
Naming Conventions
The file name is made up of two parts: <import type>_<date time format>.csv. The final name depends on the type of import and the date-time format you pick.
Import Type
- Registered Users
  - File names should start with the prefix registered_user_data_
  - Example: registered_user_data_01311997.csv
- Anonymous Users
  - File names should start with the prefix anonymous_user_data_
  - Example: anonymous_user_data_jan_2022.csv
- Events
  - File names should start with the prefix <event name>_
  - In certain cases, the Event Name and the Event Display Name can be different. You can manage your events on the Data Management page in MoEngage.
  - For example, if the event name is "Purchase Summary": Purchase Summary_31011997.csv
  - For example, if the event name is "Purchase_Transaction": Purchase_Transaction_19970131_1259.csv
Note: You cannot use "moe_" as a prefix while naming events, event attributes, or user attributes. It is a system prefix, and using it might result in periodic blacklisting without prior communication.
Date Time Format
Your file names should end with the suffix <date time format>.csv. This date-time represents the day on which the import runs. For example, if the import runs on 7th January 2022 and the selected date-time format is ddmmyyyy, we will look for all files that end with the suffix 07012022.csv.
We support multiple date-time formats:
Type | Description | Example |
---|---|---|
{mm_yyyy} | Month and year | 12_2022 |
{ddmmyyyy} | Day, month, and year | 31012022 |
{yyyymmdd} | Year, month, and day | 20220112 |
{mon_yyyy} | Month name and year | jan_2022 |
{yyyymmdd}_{hhmm} | Year, month, and day, followed by hour and minute | 20220131_1259 |
You can configure the date time formats for the file names on the MoEngage dashboard while setting up the import.
Note: We also support custom filenames as long as the following naming convention is followed: filenames should have the custom prefix name and the date-time format separated by "_". The custom name should consist of alphabets, numerals, spaces, and "_".
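For instance, assuming a custom prefix named transaction_data (a placeholder name) and the date-time format {ddmmyyyy}, a valid custom filename would be transaction_data_31012022.csv.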
Valid File Name Examples
A few examples of valid file names:
- A Registered Users import with date-time format {yyyymmdd}_{hhmm} will have file names like registered_user_data_20220131_1259.csv
- An Anonymous Users import with date-time format {mon_yyyy} will have file names like anonymous_user_data_feb_2022.csv
- An "Item Purchased" event import with date-time format {ddmmyyyy} will have file names like Item Purchased_31012022.csv
File Structure
We support CSV files only. Here are some rules that your files need to adhere to:
- The first row of your file needs to be the column names. MoEngage will always treat the first row of your file as the header row.
- Please ensure there are no duplicate column names in your file. Do note that column names are case-sensitive.
- Please ensure there are no missing column names.
- Avoid empty rows (either the row is blank, or all the columns in that row are blank).
MoEngage will treat the first row of your files as the header row and parse the column names. You can then map them to MoEngage attributes on the MoEngage Dashboard or choose to upload a manifest file.
For each type of import, we require you to have a column that can be mapped to a mandatory attribute:
- Registered Users Import: Your files should have a column that can be mapped to the user ID (user identifier to identify User Profile in MoEngage).
- Anonymous Users Import: Your files should have a column (email, mobile, etc) that can be mapped to an anonymous ID. An anonymous ID such as email, mobile, etc can help you identify users who have used your app/website but have not yet signed up.
- Event Import: Your files should have a column that can be mapped to Event Time and a column that can be mapped to the user ID (user identifier to identify User Profile in MoEngage).
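For illustration, a minimal Registered Users file could look like the following sketch; the column names and values are placeholders that you would map to MoEngage attributes during setup or through a manifest file.
ID,First Name,Email
U1001,John,john@example.com
U1002,Jane,jane@example.com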
Here are some sample files:
- Import Registered Users Sample File
- Import Anonymous Users Sample File
- Import Events Sample File
- Sample Manifest/Mapping File
Manifest Files
Using the MoEngage Dashboard, you can map the columns in your CSV files to attributes inside MoEngage. Optionally, you can upload a manifest file instead.
A manifest file contains mappings between the source column and a MoEngage Attribute. It also contains the data type of the column. This file needs to be in JSON format.
An example of a manifest file is shown below:
{
"mapping": [
{
"column": "ID",
"moe_attr": "uid",
"type": "string",
"is_skipped": false
},
{
"column": "First Name",
"moe_attr": "u_fn",
"type": "string",
"is_skipped": false
},
{
"column": "First Seen",
"moe_attr": "cr_t",
"type": "datetime",
"datetime_format": "YYYY-MM-DD hh:mm:ss",
"is_skipped": false
},
{
"column": "LTV",
"moe_attr": "t_rev",
"type": "double",
"is_skipped": false
},
{
"column": "Install Status",
"moe_attr": "installed",
"type": "bool",
"is_skipped": false
}
]
}
For each column, you need to mention the following fields:
- “column”: (required) Represents the column name from the source file.
- “moe_attr”: (required) The MoEngage Attribute (attribute name) to which you want to map the source file column. You can find all the attributes and their names on the Data Management page. Ensure each column is mapped to a unique "moe_attr".
- “type”: (optional) This field represents the data type of the column. Refer to the table below for the acceptable "type" formats for attributes. Refer to the Data Management page for more information.
- “datetime_format”: (optional) This field indicates the date and time format. It is mandatory for DateTime fields only. Refer to the table below for the complete list of supported date-time formats.
- “is_skipped”: (optional) This field indicates whether MoEngage should import the column or not. It is a boolean field; any column where this is set to true will be skipped during import.
The above sample manifest file consists of 5 standard attributes. You can refer to the below list for more such standard attributes or refer to your Data Management dashboard to get the exhaustive standard attribute list.
Some other standard user attributes in MoEngage for reference:
Key | Attribute Name on Dashboard | Datatype | Description |
---|---|---|---|
name | Name | String | Full name of the user. |
first_name | First Name | String | First name of the user. |
last_name | Last Name | String | Last name of the user. |
email | Email (Standard) | String | Email address of the user. For example, john@example.com |
age | Age | Numeric | Age of the user. |
gender | Gender | String | Gender of the user. |
mobile | Mobile Number (Standard) | String | Mobile number of the user. For example, 918888444411 |
moe_geo_location | Location | Array of [lat, lng] in double, in the format {"lat": 12.11, "lon": 123.122} | Location of the user. For example: {"lat": 12.11, "lon": 123.122} |
source | Publisher Name | String | Publisher name of the install. For example, Google Ads |
revenue | LTV | Numeric | Lifetime value of the user. |
moe_unsubscribe | Unsubscribe | Boolean | Email unsubscribe attribute. Emails are not sent to the user when the value is set to true. |
moe_hard_bounce | Hard Bounce | Boolean | Email hard bounce attribute. Emails are not sent to the user when the value is set to true. |
moe_spam | Spam | Boolean | Email spam attribute. Emails are not sent to the user when the value is set to true. |
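As a sketch, a manifest entry that maps a hypothetical source column named "Email Address" to the standard email attribute from the table above could look like this:
{
  "column": "Email Address",
  "moe_attr": "email",
  "type": "string",
  "is_skipped": false
}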
Supported User Attribute Types
Type | Description | Value in Manifest |
---|---|---|
String | Any string value. Examples of acceptable values: "John", "Premium User" | "string" |
Double | Any decimal value. Examples of acceptable values: 10, 10.5 | "double" |
Boolean | Examples of acceptable values: true, false | "bool" |
Date Time | Any date-time value. The supported formats are covered in the next table. Example of acceptable values: 2022-01-31 12:59:00 | "datetime" |
Supported Datetime Formats
Datetime Format | Examples |
---|---|
"datetime_format": "YYYY-MM-DD" | 2022-01-31 |
"datetime_format": "YYYY/MM/DD" | 2022/01/31 |
"datetime_format": "DD/MM/YYYY" | 31/01/2022 |
"datetime_format": "DD-MM-YYYY" | 31-01-2022 |
"datetime_format": "DD-MM-YYYY hh:mm:ss" | 31-01-2022 12:59:00 |
"datetime_format": "DD/MM/YYYY hh:mm:ss" | 31/01/2022 12:59:00 |
"datetime_format": "YYYY-MM-DD hh:mm:ss" | 2022-01-31 12:59:00 |
"datetime_format": "YYYY/MM/DD hh:mm:ss" | 2022/01/31 12:59:00 |
"datetime_format": "DD-MM-YYYYThh:mm:ss.s" | 31-01-2022T12:59:00.0 |
"datetime_format": "DD/MM/YYYYThh:mm:ss.s" | 31/01/2022T12:59:00.0 |
"datetime_format": "DD-MM-YYYYThh:mm:ssTZD" | 31-01-2022T12:59:00+05:30 |
"datetime_format": "DD/MM/YYYYThh:mm:ssTZD" | 31/01/2022T12:59:00+05:30 |
"datetime_format": "YYYY-MM-DD hh:mm:ss.s" | 2022-01-31 12:59:00.0 |
"datetime_format": "YYYY/MM/DD hh:mm:ss.s" | 2022/01/31 12:59:00.0 |
"datetime_format": "YYYY-MM-DDThh:mm:ssTZD" | 2022-01-31T12:59:00+05:30 |
"datetime_format": "YYYY/MM/DDThh:mm:ssTZD" | 2022/01/31T12:59:00+05:30 |
"datetime_format": "YYYY-MM-DDThh:mm:ss.sTZD" | 2022-01-31T12:59:00.0+05:30 |
"datetime_format": "YYYY/MM/DDThh:mm:ss.sTZD" | 2022/01/31T12:59:00.0+05:30 |
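For example, assuming the First Seen values in your file were stored as 2022-01-31T12:59:00+05:30 (where TZD denotes a timezone designator such as +05:30 or Z), the corresponding manifest entry from the earlier sample could be written as:
{
  "column": "First Seen",
  "moe_attr": "cr_t",
  "type": "datetime",
  "datetime_format": "YYYY-MM-DDThh:mm:ssTZD",
  "is_skipped": false
}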
Once you are ready with the files, you need to place them inside a folder.
Folder Structure
Place all files into a single folder. You can configure the folder path while setting up the import. Please note that we look for files only in the configured folder; we do not look for files in the root folder or in sub-folders.
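As an illustration, assuming a configured folder path of imports inside your S3 bucket or SFTP directory (all names here are placeholders), the layout could look like this:
your-bucket-or-sftp-root/
    imports/
        registered_user_data_20220131.csv
        registered_user_data_20220201.csv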
Setting Up Imports
You can set up S3/SFTP Imports on the MoEngage Dashboard by navigating to Segment -> S3/SFTP Imports. User-specific imports are available under Segment -> Import users.
Amazon S3
To set up S3 imports, you need to give MoEngage the relevant permissions and ensure all your files are accessible to us. Learn how to set up S3 Imports.
SFTP Servers
To learn about the types of authentication methods we support and the steps to configure, see how to set up SFTP Imports.
Imports Dashboard
The S3/SFTP Imports Dashboard contains all the information you need to keep track of your imports. You can view the most important information at a glance:
Name | Description |
---|---|
Name | The name of the import you gave while setting it up. |
Type | The source of the import. We support S3, SFTP, CSV, and Snowflake. |
Custom Segment | If the user import happens in a custom segment, the name of the segment is displayed in this column. |
Created at | The date and time at which the import was created. |
Last Sync Status | The last status of the import. For one-time imports, this is the current status of the import. For recurring imports, this is the current status of the latest scheduled import. |
Files Processed | The number of files processed to date. |
Rows Processed | The total number of rows processed to date. |
Actions | You can choose to View Details, Edit, Duplicate, or Delete an import. |
You can also view the details of each import by clicking on the three-dot actions menu against it and selecting View Details.
Name | Description |
---|---|
Scheduled At | The date and time at which the import run was scheduled. |
File Name | The name of the file imported. |
Status | The status of the import. |
Rows in file | The number of data rows present in the file. |
Events Added | The number of events that were imported from the file. |
Events Failed | The number of events that failed to import. |
Users Updated | The number of users that were updated. |
Users Created | The number of users that were created during the import. |
Users Failed | The number of users that failed to import. |
Aux Data Added | The number of auxiliary data rows added during the import. |
Aux Data Failed | The number of auxiliary data rows that failed during the import. |
Rows Skipped | The number of rows that were skipped during the import. |
Actions | You can choose "Export File" to download a copy of the file that was imported. |
Triggering Imports
You can trigger imports using one of the following options:
- Manually: Periodic imports can be triggered manually from the Imports page by clicking Actions. Triggering an import manually does not change the schedule of the import. Note: Triggering an import within five minutes of the scheduled time may lead to an error.
- API: Imports can be triggered using the File Imports Trigger API. For more information, refer to File Imports Trigger API.
File Limits
File Size Limit
The maximum file size we support is 200 MB per file. If a file is larger, split it into multiple files of up to 200 MB each.
Other Limits
- Rows: We allow a maximum of 1M rows per CSV file.
- Columns/Attributes: We allow a maximum of 100 columns (attributes) for users and 100 columns (attributes) per event.
- File size: The file size should be less than 200 MB per CSV.
- Import limits: Refer to the table below.
Import Type | Users | Events |
---|---|---|
Default Hourly | 500K/hr | 5M/hr |
Default Daily | Total 5M/day | Total 25M/day |
You can refer to the Rate limits section in the top right corner to check your consumed and default daily and hourly limits for users, events, and auxiliary data.
FAQ
- My imports have failed. How do I check what went wrong?
  Click the ellipsis on the right and select View Details to look up the import details. Hover over the Failed status to learn the reason.
- What happens when there is an error in fetching the files from the S3/SFTP folder?
  By default, imports are not retried when there is a failure. You can, however, configure an email alert to be sent upon failure.
- What if a scheduled import adds the data into a recently archived segment?
  In such cases, the new data will still be added to the archived segment. You can unarchive the segment as required.