Overview - Imports

Introduction

MoEngage allows customers to import users and events from files stored in sources such as Amazon S3 buckets and SFTP servers, as well as directly from tables in your data warehouse. The MoEngage Dashboard lets you configure multiple imports by defining file locations, setting up one or more import sources, and choosing import methods.

Supported Sources

MoEngage currently supports importing files from:

  1. Amazon S3 Buckets
  2. SFTP Servers

MoEngage also supports importing from data warehouses:

  1. Snowflake Imports

Types of Imports

MoEngage can import the following from your sources:

  1. Registered Users: These are users who are already registered on MoEngage. You can also use this type of import to update existing users in bulk.
  2. Anonymous Users: These are users who are not yet registered on MoEngage.
  3. Events (Standard and User Defined): MoEngage can import standard events like Campaign Interaction Events as well as your own user-defined events.
  4. Auxiliary Data: This is user data temporarily ingested into MoEngage for specific business purposes.

Note

The following NAT IPs need to be whitelisted for Imports:

  • DC01: 3.209.165.74, 52.1.205.204

  • DC02: 3.77.101.97, 18.195.110.23

  • DC03: 3.6.246.251, 3.6.251.95

  • DC04: 18.217.73.1, 18.223.244.184

  • DC06: 16.78.83.35, 43.218.197.196 

Preparing the files

Before configuring the imports on the MoEngage dashboard, please make sure that all your files are in the expected format and within the sanctioned limits.

Supported File Types

We currently support only CSV files encoded in UTF-8.

Naming Conventions

The file name is made up of two parts: <import type>_<date time format>.csv. The final name will depend on the type of import and the date-time format you pick.

 

Import Type

  • Registered Users
    • File names should start with the prefix registered_user_data_
    • Example: registered_user_data_01311997.csv
  • Anonymous Users
    • File names should start with the prefix anonymous_user_data_
    • Example: anonymous_user_data_jan_2022.csv
  • Events
    • File names should start with the prefix <event name>_
    • In certain cases, Event Name and Event Display Name can be different. You can manage your events by going to the Data Management page in MoEngage.
    • For example, if the event name is "Purchase Summary": Purchase Summary_31011997.csv
    • For example, if the event name is "Purchase_Transaction": Purchase_Transaction_19970131_1259.csv

Note

You cannot use "moe_" as a prefix when naming events, event attributes, or user attributes. It is a system prefix, and using it might result in periodic blacklisting without prior communication.

 

Date Time Format

Your file names should end with the suffix <date time format>.csv. This "date-time" represents the day on which the import runs. For example, if the import runs on "7th January 2022", and the selected date-time format is ddmmyyyy, we will look for all the files that end with the suffix 07012022.csv.

We support multiple date-time formats:

| Type | Description | Example |
|---|---|---|
| {mm}_{yyyy} | mm: Month from 01 to 12; yyyy: 4-digit year | 12_2022 |
| {ddmmyyyy} | dd: Day from 01 to 31; mm: Month from 01 to 12; yyyy: 4-digit year | 31012022 |
| {yyyymmdd} | yyyy: 4-digit year; mm: Month from 01 to 12; dd: Day from 01 to 31 | 20220112 |
| {mon_yyyy} | mon: 3-letter lowercase abbreviation of the month, jan to dec; yyyy: 4-digit year | jan_2022 |
| {yyyymmdd}_{hhmm} | yyyy: 4-digit year; mm: Month from 01 to 12; dd: Day from 01 to 31; hh: 2-digit hour from 00 to 23; mm: 2-digit minute from 00 to 59 | 20220131_1259 |

You can configure the date time formats for the file names on the MoEngage dashboard while setting up the import.
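As an illustration, the suffix MoEngage would look for on a given run date can be computed with standard date formatting. The mapping below between the format tokens above and Python strftime codes is our own sketch, not an official API:

```python
from datetime import datetime

# Map each MoEngage-style file-name date format to a strftime pattern.
# Illustrative only; covers just the formats listed in the table above.
SUFFIX_PATTERNS = {
    "{mm}_{yyyy}": "%m_%Y",
    "{ddmmyyyy}": "%d%m%Y",
    "{yyyymmdd}": "%Y%m%d",
    "{mon_yyyy}": "%b_%Y",          # %b gives "Jan"; lowercased below
    "{yyyymmdd}_{hhmm}": "%Y%m%d_%H%M",
}

def expected_suffix(fmt: str, run_date: datetime) -> str:
    """Return the file-name suffix expected for an import run on run_date."""
    return run_date.strftime(SUFFIX_PATTERNS[fmt]).lower()

print(expected_suffix("{ddmmyyyy}", datetime(2022, 1, 7)))  # 07012022
```

For the example above, an import running on 7th January 2022 with the {ddmmyyyy} format would look for files ending in 07012022.csv.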


Note

We also support custom filenames, as long as the following naming convention is followed:

Naming Convention: <custom name>_<date time format>.csv

Filenames should have the custom prefix and the date-time format separated by "_". The custom name can contain letters, numerals, spaces, and underscores ("_").

For Example: abc_31012024.csv

Valid File Name Examples

A few examples of valid file names:

  • Registered Users Import with date time format {yyyymmdd}_{hhmm} will have file names like registered_user_data_20220131_1259.csv
  • Anonymous Users Import with date time format {mon_yyyy} will have file names like anonymous_user_data_feb_2022.csv
  • "Item Purchased" event import with date time format {ddmmyyyy} will have file names like Item Purchased_31012022.csv
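A file name can be checked against these conventions before upload. The sketch below is illustrative and covers only the registered-users prefix with the {ddmmyyyy} format; the regex is our own, not a MoEngage-published pattern:

```python
import re

# Date suffix for the {ddmmyyyy} format: valid day, valid month, 4-digit year.
DDMMYYYY = r"(0[1-9]|[12]\d|3[01])(0[1-9]|1[0-2])\d{4}"

def is_valid_registered_users_file(name: str) -> bool:
    """Check that a file name matches registered_user_data_<ddmmyyyy>.csv."""
    return re.fullmatch(rf"registered_user_data_{DDMMYYYY}\.csv", name) is not None

print(is_valid_registered_users_file("registered_user_data_31012022.csv"))  # True
print(is_valid_registered_users_file("registered_user_data_2022.csv"))      # False
```

The same pattern can be adapted for the anonymous-users prefix or a custom prefix by swapping the leading literal.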

File Structure

We support CSV files only. Here are some rules that your files need to adhere to:

  1. The first row of your file needs to be the column names. MoEngage will always treat the first row of your file as the header row.
  2. Please ensure there are no duplicate column names in your file. Do note that column names are case-sensitive.
  3. Please ensure there are no missing column names.
  4. Avoid empty rows (either the row is blank, or all the columns in that row are blank).

MoEngage will treat the first row of your files as the header row and parse the column names. You can then map them to MoEngage Attributes on the MoEngage Dashboard or choose to upload a manifest file.

For each type of import, we require you to have a column that can be mapped to a mandatory attribute:

  1. Registered Users Import: Your files should have a column that can be mapped to the user ID (user identifier to identify User Profile in MoEngage).
  2. Anonymous Users Import: Your files should have a column (email, mobile, etc) that can be mapped to an anonymous ID. An anonymous ID such as email, mobile, etc can help you identify users who have used your app/website but have not yet signed up.
  3. Event Import: Your files should have a column that can be mapped to Event Time and a column that can be mapped to the user ID (user identifier to identify User Profile in MoEngage).
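The header rules above can be checked programmatically before uploading. This is a minimal sketch under the rules stated in this section; the function name and error strings are our own:

```python
import csv
import io

def check_header(csv_text: str, required: str) -> list:
    """Return a list of problems with the CSV header row: missing names,
    case-sensitive duplicates, and absence of the mandatory column."""
    problems = []
    header = next(csv.reader(io.StringIO(csv_text)))
    if any(not name.strip() for name in header):
        problems.append("missing column name in header")
    if len(set(header)) != len(header):
        problems.append("duplicate column names (case-sensitive)")
    if required not in header:
        problems.append(f"required column '{required}' not found")
    return problems

sample = "ID,First Name,LTV\nu1,Alice,10.5\n"
print(check_header(sample, required="ID"))  # []
```

For an events file, the same check would be run twice: once for the Event Time column and once for the user ID column.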

Here are some sample files:

  1. Import Registered Users Sample File
  2. Import Anonymous Users Sample File
  3. Import Events Sample File
  4. Sample Manifest/Mapping File

Manifest Files

Using the MoEngage Dashboard, you can map the columns in your CSV files to attributes inside MoEngage. Or optionally, you can also choose to upload a manifest file.

A manifest file contains mappings between the source column and a MoEngage Attribute. It also contains the data type of the column. This file needs to be in JSON format.

An example of a manifest file is shown below:

Sample Manifest File
{
  "mapping": [
    {
      "column": "ID",
      "moe_attr": "uid",
      "type": "string",
      "is_skipped": false
    },
    {
      "column": "First Name",
      "moe_attr": "u_fn",
      "type": "string",
      "is_skipped": false
    },
    {
      "column": "First Seen",
      "moe_attr": "cr_t",
      "type": "datetime",
      "datetime_format": "YYYY-MM-DD hh:mm:ss",
      "is_skipped": false
    },
    {
      "column": "LTV",
      "moe_attr": "t_rev",
      "type": "double",
      "is_skipped": false
    },
    {
      "column": "Install Status",
      "moe_attr": "installed",
      "type": "bool",
      "is_skipped": false
    }
  ]
}

For each column, you need to mention the following fields:

  1. “column”: (required) Represents the column name from the source file. 
  2. “moe_attr”: (required) The MoEngage Attribute (attribute name) to which you want to map the source file column. You can find all the attributes and their names on the Data Management page. Ensure each column is mapped to a unique "moe_attr".
  3. “type”:  (optional) This field represents the data type of the column. Refer to the table below for the acceptable "type" formats for attributes. Refer to the Data Management page for more information.
  4. “datetime_format”: (optional) This field indicates the date and time format. It is mandatory for DateTime fields only. Refer to the table below for the complete list of supported date-time formats.
  5. “is_skipped”: (optional) This field represents whether MoEngage should import the column or not. It is a boolean field, and any attribute marked as true will be skipped during import.
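The field rules above can be sketched as a small validation pass over the manifest JSON. This is illustrative only, encoding just the rules stated here:

```python
import json

def validate_manifest(manifest_json: str) -> list:
    """Check each mapping entry for required fields, unique moe_attr values,
    and a datetime_format on datetime columns."""
    errors = []
    mapping = json.loads(manifest_json).get("mapping", [])
    seen_attrs = set()
    for i, entry in enumerate(mapping):
        for field in ("column", "moe_attr"):
            if field not in entry:
                errors.append(f"entry {i}: missing required field '{field}'")
        attr = entry.get("moe_attr")
        if attr in seen_attrs:
            errors.append(f"entry {i}: duplicate moe_attr '{attr}'")
        seen_attrs.add(attr)
        if entry.get("type") == "datetime" and "datetime_format" not in entry:
            errors.append(f"entry {i}: datetime column needs 'datetime_format'")
    return errors

doc = '{"mapping": [{"column": "ID", "moe_attr": "uid", "type": "string"}]}'
print(validate_manifest(doc))  # []
```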

Supported User Attribute Types

| Type | Description | Value in Manifest |
|---|---|---|
| String | Any string value. Examples of acceptable values: ABC, ABC XYZ, ABC123 | "type": "string" |
| Double | Any decimal value. Examples of acceptable values: 3.14159, 241.23, -123.1 | "type": "double" |
| Boolean | Acceptable values: true, false | "type": "bool" |
| Date Time | Any date-time value. The supported formats are covered in the next table. Example of an acceptable value: 2019/02/22 17:54:14.933 | "type": "datetime" |

Supported Datetime Formats

| Datetime Format | Examples |
|---|---|
| "YYYY-MM-DD" | 2022-01-22 |
| "YYYY/MM/DD" | 2022/01/22 |
| "DD/MM/YYYY" | 22/01/2022 |
| "DD-MM-YYYY" | 22-01-2022 |
| "DD-MM-YYYY hh:mm:ss" | 31-12-2022 12:10:33 |
| "DD/MM/YYYY hh:mm:ss" | 31/12/2022 12:10:33 |
| "YYYY-MM-DD hh:mm:ss" | 2019-02-22 17:54:14 |
| "YYYY/MM/DD hh:mm:ss" | 2019/02/22 17:54:14 |
| "DD-MM-YYYYThh:mm:ss.s" | 31-12-2022T12:10:33.882 |
| "DD/MM/YYYYThh:mm:ss.s" | 31/12/2022T12:10:33.882 |
| "DD-MM-YYYYThh:mm:ssTZD" | 31-12-2022T12:10:33Z, 31-12-2022T12:10:33+08:00, 31-12-2022T12:10:33-08:00 |
| "DD/MM/YYYYThh:mm:ssTZD" | 31/12/2022T12:10:33Z, 31/12/2022T12:10:33+08:00, 31/12/2022T12:10:33-08:00 |
| "YYYY-MM-DD hh:mm:ss.s" | 2019-02-22 17:54:14.933 |
| "YYYY/MM/DD hh:mm:ss.s" | 2019/02/22 17:54:14.933 |
| "YYYY-MM-DDThh:mm:ssTZD" | 2019-11-14T00:01:02Z, 2019-11-14T00:01:02+08:00, 2019-11-14T00:01:02-08:00 |
| "YYYY/MM/DDThh:mm:ssTZD" | 2019/11/14T00:01:02Z, 2019/11/14T00:01:02+08:00, 2019/11/14T00:01:02-08:00 |
| "YYYY-MM-DDThh:mm:ss.sTZD" | 2019-02-22T17:54:14.957Z, 2019-02-22T17:54:14.957299-08:00, 2019-02-22T17:54:14.957299+08:00 |
| "YYYY/MM/DDThh:mm:ss.sTZD" | 2019/02/22T17:54:14.957Z, 2019/02/22T17:54:14.957299-08:00, 2019/02/22T17:54:14.957299+08:00 |

Once you are ready with the files, you need to place them inside a folder.

Folder Structure

Place all your files into a single folder. You can configure the folder path while setting up the import. Please note that we do not look for files in the root folder or in sub-folders of the configured folder.

Setting Up Imports

You can set up S3/SFTP imports on the MoEngage Dashboard under Segment -> S3/SFTP Imports. User-specific imports are available under Segment -> Import users.

Amazon S3

To set up S3 imports, you need to grant MoEngage the relevant permissions and ensure all your files are accessible to us. Learn how to set up S3 Imports.

SFTP Servers

To learn about the types of authentication methods we support and the steps to configure, see how to set up SFTP Imports.

Imports Dashboard

[Image: S3/SFTP Imports dashboard overview]

The S3/SFTP Imports Dashboard contains all the information you need to keep track of your imports. You can view the most important information at a glance:

Name: The name you gave the import while setting it up.

Type: The import source (S3, SFTP, CSV, or Snowflake). The import type, One-Time or Periodic, is shown in brackets.

Custom Segment: If the users are imported into a custom segment, the name of the segment is displayed in this column.

Created at: The date and time when the import was created.

Last Sync Status: The status of the most recent run.

For One-Time imports, this is the current status of the import:

  1. Scheduled: The import is scheduled to run in the future.
  2. Processing: The import is currently ongoing.
  3. Successful: The import was successful.
  4. Partial Success: At least one file was imported with Partial Success.
  5. Failed: The import has failed.

For Recurring imports, this is the current status of the latest scheduled run:

  1. Scheduled: The import is scheduled to run in the future. The scheduled time is shown below the status.
  2. Processing: The import is currently ongoing. An estimated time of completion is also shown.
  3. Successful: The import was successful. The completion time is also shown.
  4. Partial Success: At least one file was imported with Partial Success.
  5. Failed: The import has failed. The failure time is shown for reference.

Files Processed: The number of files processed to date.

Rows Processed: The total number of rows processed to date.

Actions: You can choose to View Details, Edit, Duplicate, or Delete an import.

You can also view the details of each import by clicking on the three-dot actions menu against it and selecting View Details.

[Image: Import details view]

Scheduled At: The date and time at which the import run was scheduled.

File Name: The name of the file imported.

Status: The status of the import:

  1. Successful
  2. Failed - Hover to see the failure reason.
  3. Partial Success - At least 1 row in the file failed to import.

Rows in file: The number of data rows present in the file.

Events Added: The number of events imported from the file.

Events Failed: The number of events that failed to import.

Users Updated: The number of users that were updated.

Users Created: The number of users created during the import.

Users Failed: The number of users that failed to import.

Aux Data Added: The number of auxiliary data rows added during the import.

Aux Data Failed: The number of auxiliary data rows that failed during the import.

Rows Skipped: The number of rows skipped during the import.

Actions: Choose "Export File" to download a copy of the file that was imported.

Triggering Imports

You can trigger imports using one of the following options:

  1. Manually - Periodic imports can be triggered manually from the imports page via the Actions menu. Triggering an import manually does not change the schedule of the import.

     Note: Triggering an import within five minutes of the scheduled time may lead to an error.

     [Image: Triggering an import manually]

  2. API  - Imports can be triggered using the File Imports Trigger API. For more information, refer to File Imports Trigger API.

File Limits

File Size Limit

The maximum supported file size is 200MB per file. Larger files must be split into multiple files of 200MB or less.
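Splitting can be done by repeating the header row in every part. The sketch below splits by row count for simplicity; in practice you would bound each part by size (200MB), and the function name and parameters are our own:

```python
import csv
import io

def split_csv(text: str, max_rows: int) -> list:
    """Split CSV text into parts of at most max_rows data rows each,
    repeating the header row at the top of every part."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    parts = []
    for start in range(0, len(data), max_rows):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerows([header] + data[start:start + max_rows])
        parts.append(buf.getvalue())
    return parts

sample = "ID,Name\n" + "".join(f"u{i},user{i}\n" for i in range(5))
print(len(split_csv(sample, max_rows=2)))  # 3
```

Each part remains a valid standalone CSV, so the header-row and mandatory-column rules described earlier still hold for every split file.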

Other Limits

  1. Rows: We allow a maximum of 1M rows per CSV file.
  2. Columns/Attributes: We allow up to 100 columns (attributes) for users and 100 columns (attributes) per event.
  3. File size: Each CSV file should be under 200MB.
  4. Import limits: Refer to the table below.

| Import Type | Users | Events |
|---|---|---|
| Default Hourly | 500K/hr | 5M/hr |
| Default Daily | Total 5M/day | Total 25M/day |

To check your consumed and default hourly and daily limits for Users, Events, and Auxiliary data, refer to Rate limits in the top-right corner, as shown in the following image.

[Image: Rate limits view]


Note

  • The values mentioned above are the default limits. If your requirement exceeds these limits, get in touch with MoEngage to increase them.
  • The default hourly rate is the total data that can be ingested in an hour; it is not the total data limit for the day. The Default Daily limit is the total cap on the data that can be ingested in a day, as indicated in the table above.
  • For one-time historical imports, get in touch with the support team for guidance.
  • When the rate limits for imports are breached, you will receive a "Rate limit breached" alert in the Segment -> Imports section on the MoEngage Dashboard.

FAQ

  1. My imports have failed. How do I check what went wrong?

Click the three-dot menu on the right and select View Details to look up the import details. Hover over the Failed status to learn the reason.

  2. What happens when there is an error in fetching the files from the S3/SFTP folder?

    By default, imports are not retried when there is a failure. You can, however, configure to receive an email alert upon failure.

  3. What if a scheduled import adds the data into a recently archived segment?

    In such cases, the new data will still be added to the archived segment. You can unarchive the segment as required.
