
Option 4: Use the Azure Data Factory template to send data to our API-based import

Follow the steps below to use an Azure Data Factory template to send data to the API-based import.

1. Create a new Azure Data Factory

  1. Log in to https://adf.azure.com/en/datafactories.

  2. Create a new data factory or use an existing data factory. Complete the fields, then select Create.

    Screenshot that shows how to create a new data factory or use an existing one.

2. Create a new pipeline and activity

  1. Create a new pipeline and enter a name for the pipeline.

    Screenshot that shows how to create a new pipeline.

  2. Under Activities, add Copy data.

    Screenshot that shows how to add copy data.

3. Copy data activity settings: General

Select your Copy data activity, then select General to complete each field using the guidance below.

Screenshot that shows how to copy data activity settings.

  • Name: Enter a name for your activity.
  • Description: Enter a description for your activity.
  • Activity state: Select Activated, or select Deactivated to exclude the activity from the pipeline run and validation.
  • Timeout: The maximum amount of time the activity can run. The default is 12 hours, the minimum is 10 minutes, and the maximum is seven days. The format is D.HH:MM:SS.
  • Retry: The maximum number of retry attempts. This can be left as 0.
  • Retry interval (sec): The number of seconds to wait between retry attempts. This can be left as 30 if Retry is set to 0.
  • Secure output: When selected, the output from the activity isn't captured in logging. You can leave this cleared.
  • Secure input: When selected, the input from the activity isn't captured in logging. You can leave this cleared.
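The D.HH:MM:SS timeout format and its 10-minute-to-7-day bounds can be illustrated with a short sketch. The helper names here are ours for illustration, not part of Azure Data Factory:

```python
from datetime import timedelta

def parse_adf_timeout(value: str) -> timedelta:
    """Parse a timeout string in ADF's D.HH:MM:SS (or HH:MM:SS) format."""
    days = 0
    if "." in value:
        day_part, value = value.split(".", 1)
        days = int(day_part)
    hours, minutes, seconds = (int(p) for p in value.split(":"))
    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

def is_valid_timeout(value: str) -> bool:
    """An activity timeout must be between 10 minutes and 7 days."""
    t = parse_adf_timeout(value)
    return timedelta(minutes=10) <= t <= timedelta(days=7)

print(is_valid_timeout("0.12:00:00"))  # the 12-hour default -> True
print(is_valid_timeout("7.00:00:01"))  # just over seven days -> False
```

For example, the default of 12 hours is written as 0.12:00:00.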

4. Copy data activity settings: Source

  1. Select Source.

  2. Select an existing source dataset or select +New to create a new source dataset. For example, under New dataset select Azure Blob Storage, then select the format type of your data.

    Screenshot that shows how to create a new source dataset.

  3. Set the properties for the .csv file. Enter a Name and under Linked service, select an existing location or select +New.

    Screenshot that shows how to set the properties for the csv file.

  4. If you selected +New, enter the details for the new linked service using the guidance below.

    Screenshot that shows how to add details for the new linked service.

  5. Next to Source dataset, select Open.

    Screenshot that shows how to open the source dataset.

  6. Select First row as header.

    Screenshot that shows how to select the first row as header.

5. Copy data activity settings: Sink

  1. Select Sink.

  2. Select +New to configure a new REST resource to connect to the API. Search for "REST" and select Continue.

    Screenshot that shows how to configure a new rest resource to connect to the API.

  3. Name the service. Under Linked service select +New.

    Screenshot that shows how to name the service and add a new linked service.

  4. Search for "REST" and select it.

    Screenshot that shows how to search for the Rest dataset.

  5. Enter the fields using the guidance below.

    Screenshot showing how to enter the fields for the dataset.

  • Name: Enter a name for your new linked service.
  • Description: Enter a description for your new linked service.
  • Connect via integration runtime: Select your preferred integration runtime.
  • Base URL: Use the URL below and replace {tenantid} with your tenant ID: https://api.orginsights.viva.office.com/v1.0/tenants/{tenantid}/modis/connectors/HR/ingestions/fileIngestion
  • Authentication type: Select Service principal as your authentication type, then select Secret or Certificate. Service principal example:
    • Inline: Select it.

    • Service principal ID: Enter the App ID authorized for the API connector.

    • Service principal key: Enter the key.

      Screenshot that shows how to enter the service principal key.

    • Tenant: Enter the tenant ID.

    • Microsoft Entra ID resource: https://api.orginsights.viva.office.com

    • Azure cloud type: Select your Azure cloud type.

    • Server certificate validation: Select Enabled.
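The linked-service fields above map onto the client-credentials token exchange that Data Factory performs on your behalf. The sketch below only illustrates which request each field feeds into; ADF handles the actual call, and the placeholder IDs and secret are hypothetical:

```python
from urllib.parse import urlencode

TENANT_ID = "00000000-0000-0000-0000-000000000000"  # replace with your tenant ID
APP_ID = "<app ID authorized for the API connector>"  # Service principal ID field
RESOURCE = "https://api.orginsights.viva.office.com"  # Microsoft Entra ID resource field

# The Base URL of the REST linked service, with {tenantid} substituted.
base_url = (
    f"https://api.orginsights.viva.office.com/v1.0/tenants/{TENANT_ID}"
    "/modis/connectors/HR/ingestions/fileIngestion"
)

# Body of the client-credentials token request implied by the Service
# principal settings (shown for illustration only; ADF issues this itself).
token_request = urlencode({
    "grant_type": "client_credentials",
    "client_id": APP_ID,
    "client_secret": "<service principal key>",
    "scope": f"{RESOURCE}/.default",
})

print(base_url)
```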

  6. Enter the Sink settings using the guidance below.

    Screenshot that shows how to enter the Sink settings.

  • Sink dataset: Select the existing or newly created dataset.
  • Request method: Select POST.
  • Request timeout: Five minutes is the default.
  • Request interval (ms): 10 is the default.
  • Write batch size: The batch size should be higher than the maximum number of lines in your file.
  • HTTP compression type: None is the default, or you can use GZip.
  • Additional headers: Select +New.
    • Box 1: x-nova-scaleunit
    • Value: To retrieve the value, go to the Data Connections page (Home > Setup > Migration and imports > Organizational Data in Microsoft 365 > Data Connections). Then select New import > Select connection type > Start API based setup, and find the value on the Set up API based connection page.
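The Write batch size rule above (higher than the number of lines in your file) can be checked before you upload. This is a minimal sketch; the helper name is ours, not part of Azure Data Factory:

```python
import csv
import io

def required_write_batch_size(csv_text: str) -> int:
    """Smallest Write batch size that exceeds the number of lines in the file."""
    line_count = len(list(csv.reader(io.StringIO(csv_text))))
    return line_count + 1

sample = "PersonId,ManagerId,Organization\n1,10,Sales\n2,10,Sales\n"
print(required_write_batch_size(sample))  # 4: the file has 3 lines, so any batch size above 3 works
```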

6. Copy data activity settings: Mapping

  1. Select Mapping.

  2. For the bootstrap upload, make sure to include PersonId, ManagerId, and Organization in the mapping (destination name). For the incremental upload, verify that the destination names are consistent with those in the previous upload, along with PersonId. You can't perform incremental uploads with new columns, and PersonId is required in all uploads.

    Screenshot that shows how to enter activity settings for Mapping.

7. Copy data activity settings: Settings and User Properties

No other customizations are required for Settings or User Properties. You can edit these settings on a case-by-case basis if you need to.

8. Copy data activity: Trigger Setup (Automation)

To add a trigger to the automation setup, select Add trigger. The recommended automation is weekly. You can also customize the frequency.
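For reference, a weekly schedule trigger looks roughly like the structure below in the pipeline's JSON. Field names follow the ADF schedule-trigger shape, but the trigger name and times are hypothetical; verify against the JSON your factory generates:

```python
import json

# Sketch of a schedule trigger set to the recommended weekly cadence.
weekly_trigger = {
    "name": "WeeklyHrUpload",  # hypothetical trigger name
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "startTime": "2024-01-01T08:00:00Z",
                "timeZone": "UTC",
                "schedule": {"weekDays": ["Monday"]},
            }
        },
    },
}

print(json.dumps(weekly_trigger, indent=2))
```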

Screenshot that shows how to set up the Trigger.