
How to use API Connector Toolkit

This tutorial provides a step-by-step guide to help you create a Connector that imports data from an external web API.

In this example, the data source is the public Fixer.io API, but this example also serves as a foundation to help you harness the power of the API Connector Toolkit to quickly and flexibly integrate virtually any API-based data source with Sidra.

To authenticate with the Fixer API, you must obtain an API key from Fixer.io. Save the generated key for use in later steps.

Prerequisites

  • .NET SDK (version 6.x only)
  • An IDE that supports C# and Python (e.g., Visual Studio or Visual Studio Code)
  • Python 3 or higher

This tutorial assumes the use of the Visual Studio IDE. If you choose another environment such as Visual Studio Code, additional tools or steps may be required, such as manually configuring support for executing T4 templates.

Installing the Template

The API Connector Toolkit is implemented as a .NET template to speed up connector development and reduce the overhead of manual setup. The template provides a ready-to-use solution structure, pre-configured references to the required libraries, and helper classes that simplify the development and registration of a new connector. The autogenerated code in the solution is primarily written in C#, with accompanying Python code used specifically for the Databricks notebooks that define the data ingestion logic.

The template is distributed as a NuGet package, making it easy to install and update from the Sidra NuGet feed.

Steps

To create a new connector solution, install the dotnet project template on your machine. Open a PowerShell console or Command Prompt and follow these steps:

  1. Install the template:

    dotnet new install Sidra.ApiConnectorTemplate --nuget-source "https://www.myget.org/F/sidrasdk/api/v3/index.json"
    
  2. Verify that the template has been installed:

    dotnet new --list
    
  3. Look for the following line in the output:

    Template Name                 Short Name               Language    Tags
    ----------------------------  -----------------------  ----------  -------------------------------------
    Sidra API Connector           sidra-api-connector      [C#]        Sidra
    

Creating a New Connector Solution

Once the template is installed, you can generate a new .NET solution pre-configured with the basic structure and parameters needed to start building your connector. Run the following command:

 dotnet new sidra-api-connector -o C:\Code\Fixer --auth-type APIKey --plugin_min_sidra_release_version 2023.R3

Explanation of the parameters:

  • -o specifies the output directory where the solution will be created
  • --auth-type defines the authentication mechanism (options: APIKey, BasicAuth, OAuth, NoAuth)
  • --plugin_min_sidra_release_version sets the minimum Sidra version required for compatibility

In this tutorial, we use APIKey authentication and target Sidra version 2023.R3 (Sidra 2.0) upwards.

After executing the command, a new solution will be created and ready to open in your IDE.

Solution Structure

Open the generated .sln file using your preferred IDE. Then, navigate to the Persistence/Seed/Files/Databricks folder in the Sidra.Plugins.Connectors.Fixer project.

This folder contains the Python notebooks that will be deployed to the DSU's Databricks instance. These notebooks define the logic for calling the external API, processing the data, and triggering the corresponding Data Intake Process in Sidra.

Solution tree

Developing the Connector

Main Notebook

The main notebook, Fixer.py, acts as the entry point for your connector's logic. Inside this notebook, you will find the necessary logic to:

  • Retrieve the parameters defined during the Data Intake Process creation
  • Instantiate an authenticated service based on those parameters
  • Build the API request dynamically
  • Handle the API response, including transformation or filtering of data if needed

The actual name of the notebook may vary depending on the name of your generated solution.

Take a few moments to explore the notebook and understand the various steps already implemented. These include logic for looping through each configured source, processing the returned JSON, and posting the cleaned data to Sidra.

More specifically, the execute method in the ApiConnectorOrchestrator takes the transformed response, creates the necessary metadata (such as Entities and Attributes) if they don't already exist, and performs the data ingestion into the platform.
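Conceptually, that orchestration step can be pictured as follows. The real ApiConnectorOrchestrator lives in Sidra's libraries; the class below is a simplified stand-in of our own (the name and return value are hypothetical), not the actual implementation:

```python
import json

class ApiConnectorOrchestratorSketch:
    """Conceptual stand-in for Sidra's ApiConnectorOrchestrator (hypothetical)."""

    def __init__(self, connector_parameters: dict):
        self.parameters = connector_parameters

    def execute(self, api_response_json_string: str, expand_collections: bool) -> dict:
        payload = json.loads(api_response_json_string)
        # 1. Derive the metadata (Entity + Attributes) the platform would need;
        #    the real orchestrator creates it only if it doesn't already exist.
        attributes = sorted(payload.keys())
        # 2. The real orchestrator would now ingest the data; we just report back.
        return {"entity": self.parameters["entity_name"], "attributes": attributes}

result = ApiConnectorOrchestratorSketch({"entity_name": "EuroRate"}).execute(
    '{"date": "2023-05-01", "base": "EUR", "rates": {"USD": 1.1}}', False)
```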

While the sample implementation is usable out-of-the-box, this tutorial goes a step further to demonstrate customization. For instance, we will modify the notebook to use a different endpoint of the Fixer API: the Historical endpoint. This endpoint lets us retrieve historical exchange rates for a specific day.

By scheduling the Data Intake Process to run daily and adjusting the notebook to request the previous day’s rates, we can build up a historical dataset of exchange rates. The API also supports a base parameter to indicate the base currency for conversion. Using this parameter, we could create multiple entities for different base currencies if desired.
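To make the URL construction concrete, here is a small standalone sketch (ours, not part of the generated solution) that composes the Historical endpoint URL for a given day, defaulting to yesterday in UTC; build_historical_url is a hypothetical helper name, and the URL shape matches the one used later in this tutorial:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def build_historical_url(base_url: str, base_currency: str,
                         day: Optional[str] = None) -> str:
    """Compose the Historical endpoint URL for a day (defaults to yesterday, UTC)."""
    if day is None:
        day = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")
    return f"{base_url.rstrip('/')}/{day}?base={base_currency}"

# One URL per base currency would back one entity each (paid plans only;
# the free tier is limited to EUR):
urls = [build_historical_url("http://data.fixer.io/api/", base, day="2023-05-01")
        for base in ("EUR", "USD")]
# urls[0] -> "http://data.fixer.io/api/2023-05-01?base=EUR"
```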

Be aware that the free tier of the Fixer API only supports EUR as the base currency.

You can find more about this endpoint and its capabilities in the official Fixer.io documentation.

Notebook Customization

  1. First, remove the default path assignment:

    endpoint_path = source_option.EndpointPath
    
  2. We will need to import datetime to work with dates, so later we can compute yesterday's date. Let's add the required import to the list of imports at the top of the notebook:

    from datetime import datetime, timedelta, timezone
    
  3. Now we can add logic to compute yesterday’s date:

    yesterday = datetime.now(timezone.utc) - timedelta(days=1)
    yesterday = yesterday.strftime("%Y-%m-%d")
    
  4. Retrieve the base currency from the UI parameters:

    base_currency = source_option.BaseCurrency
    
  5. Build the API endpoint URL with the computed date and the base currency:

    full_url = f'{connector_parameters.base_url}/{yesterday}?base={base_currency}'
    

With all these changes made, the main notebook's loop will look like this:

    for source_option in endpoint_source_options:

        connector_parameters.entity_name = source_option.EntityName
        base_currency = source_option.BaseCurrency
        yesterday = datetime.now(timezone.utc) - timedelta(days=1)
        yesterday = yesterday.strftime("%Y-%m-%d")
        # Call the historical endpoint for the change rate of the last day
        full_url = f'{connector_parameters.base_url}/{yesterday}?base={base_currency}'
        expand_collections = source_option.ExpandCollections
        # Call endpoint to get the response in json format
        response = authenticated_service.request(full_url)

        if response.status_code != 200:
            logger.error("Error calling endpoint: " + full_url + " with status code: " + str(response.status_code))
            continue #Continue with the next entity

        api_response_json = response.json()
        ### Transform the json result if needed
        api_response_json = process_response_json(api_response_json)
        api_response_json_string = json.dumps(api_response_json)
        # Call base method to execute the Ingestion process
        data_intake_process = ApiConnectorOrchestrator(connector_parameters)
        data_intake_process.execute(api_response_json_string, expand_collections)

According to the Fixer.io API documentation, the response includes two fields — success and historical — which do not provide meaningful information for our use case. These can safely be removed during the response processing step:

    def process_response_json(response_json: dict):
        # Remove the flag fields, which carry no useful information here
        del response_json['success']
        del response_json['historical']
        return response_json
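As a quick standalone check of that cleaning step (the payload shape follows the Fixer.io documentation, but the values are made up; the function is repeated here so the snippet runs on its own):

```python
def process_response_json(response_json: dict):
    # Same cleaning step as above, repeated so this snippet is self-contained
    del response_json['success']
    del response_json['historical']
    return response_json

# Abbreviated historical response with made-up values
sample = {
    "success": True,
    "historical": True,
    "date": "2023-05-01",
    "base": "EUR",
    "rates": {"USD": 1.10, "GBP": 0.88},
}

cleaned = process_response_json(sample)
# Only the fields worth ingesting remain: date, base and rates
```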

Define Connector Parameters

At this stage of the customization, we've removed the endpoint_path parameter and introduced a new one: base_currency. These changes need to be reflected in the connector's parameter definitions. You can do this by updating the MetadataExtractorOptions class, located in the WizardFormOptions folder.

    public class EndpointSourceOptions
    {
        [JsonProperty("EntityName")]
        [Wizard(IsRequired = true, Editable = true, IsMatrixCell = true, CellType = WizardCellType.Text, ValidationType = WizardValidatorType.Regex, ValidationRegex = @"^[a-zA-Z0-9]{5,30}$", ValidationText = "Please enter a name between 5 and 30 characters without special characters")]
        public string EntityName { get; set; }

        [JsonProperty("BaseCurrency")]
        [Wizard(IsRequired = false, Editable = true, IsMatrixCell = true, CellType = WizardCellType.Text)]
        public string BaseCurrency { get; set; }

        [JsonProperty("ExpandCollections")]
        [Wizard(IsRequired = true, Editable = true, IsMatrixCell = true, CellType = WizardCellType.Boolean, DefaultValue = "false")]
        public bool ExpandCollections { get; set; }
    }
To update the wizard UI, open the Resources\Texts\Connector.resx file and rename:

    WizardFormElement_EndpointSourceOptions_EndpointPath_Title

to:

    WizardFormElement_EndpointSourceOptions_BaseCurrency_Title

Set the value to BaseCurrency.

To apply these changes and regenerate the code, you need to run the Connector.Template.tt template. If you're using Visual Studio, simply right-click the Connector.Template.tt file and choose "Run Custom Tool". If you're working in Visual Studio Code or another IDE, you can execute the T4 template manually with a command-line T4 processor such as the dotnet-t4 global tool.

Ensure that the property name matches the JsonProperty exactly, including case, to avoid mismatches in the wizard form.

Set Databricks Job Parameters

To pass configuration values from the Data Intake Process into the notebook at runtime, modify the FillUserParameters method in ConnectorExecutor.cs:

    protected override void FillUserParameters(IList<JobParameter> jobParameters)
    {
       base.FillUserParameters(jobParameters);

            var dataSourceOptions = Parameters.GetOptions<DataSourceOptions>();
            jobParameters.Add(new JobParameter("base_url", dataSourceOptions.BaseUrl));
            jobParameters.Add(new JobParameter("provider_item_id", Provider.ItemId.ToString()));
    }

In this example, each job parameter is defined by a name and an associated value. The value can originate from different sources — in this case, it comes from the DataSourceOptions, which are mapped from the parameters specified during the creation of the Data Intake Process (DIP).

Note that you do not need to add data_intake_process_id, as it is automatically included by base.FillUserParameters.
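On the Databricks side, each JobParameter added above arrives in the notebook as a named parameter, typically read with dbutils.widgets.get. Since dbutils only exists inside Databricks, the sketch below fakes that lookup with a plain dict so it can run anywhere; the GUID value is made up:

```python
# Inside Databricks, each JobParameter added in FillUserParameters would be
# read as, e.g.:  base_url = dbutils.widgets.get("base_url")
# dbutils only exists inside Databricks, so we fake the lookup with a dict:
job_parameters = {
    "base_url": "http://data.fixer.io/api/",
    "provider_item_id": "3f6c1d2e-0000-0000-0000-000000000000",  # made-up GUID
    "data_intake_process_id": "42",  # added automatically by base.FillUserParameters
}

def get_widget(name: str) -> str:
    """Stand-in for dbutils.widgets.get(name)."""
    return job_parameters[name]

base_url = get_widget("base_url")
```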

Scheduling

No extra code is needed for scheduling. The Sidra platform handles this automatically. During the Data Intake Process configuration, users can specify when and how often the connector should run.

Make it Shine!

Okay, you've got your connector up and running! Now it's time to make sure it stands out in Sidra's connector gallery and makes a good first impression.

To do that:

  • Replace the default logo.png in the Resources\Images folder with a custom logo that represents your connector.
  • Edit the description.md file in the Resources\Markdown folder to provide a short, friendly description of what your connector does.

These two elements will be shown in the Sidra wizard UI when users are selecting or browsing connectors — so a bit of personality goes a long way!

Publish Your Connector

Once your connector is tested and ready, let’s get it out there! Currently, Sidra performs a manual review process before publishing connectors to the public gallery. To submit your connector for publication:

  1. Send an email to [email protected] with:
    • A brief summary of your connector (data source, purpose, etc.)
    • A zipped version of the connector code or a link to the repo (private or public)
    • Any special instructions or notes we should know during testing
  2. The Sidra team will:
    • Review and test your connector internally
    • Register it in Llagar, our connector backoffice system
    • Provide you with CI/CD configuration instructions to automate future updates

Once approved, your connector will be visible in the Sidra connector gallery, ready to be used by others in your organization (or even beyond, if it's shared more broadly).

Test It Out

Now that your connector is available in the gallery, it’s time to validate that everything works smoothly from end to end.

To test your newly created connector, let’s walk through the creation of a Data Intake Process (DIP) using the Sidra UI:

  1. In the Sidra portal, navigate to Data Intake Processes and click New
  2. From the list of available connectors, select yours — in this case, the Fixer connector.
  3. Fill in the required configuration across the different steps of the wizard:
    • Data Intake Options: Give your DIP a name and a short description that helps identify it later.
    • Configure Provider: As with any DIP, set the provider's name, owner, target DSU, and optionally add a short description and additional details.
    • Configure Data Source: Provide the details required to authenticate and call the Fixer API:
      • Base Url: http://data.fixer.io/api/
      • Key: access_key
      • Value: Paste the API key you obtained from Fixer.io
      • Add To: Set this to Query Params
    • Configure Metadata Extractor: Define the entities the connector should retrieve. For this test, we’ll focus on the Euro exchange rate:
      • Entity Name: EuroRate
      • Base Currency: EUR
      • Expand Collections: No
    • Advanced Configuration: Set the Consolidation Mode to Snapshot to keep things simple for this test.
    • Configure Trigger: Schedule the DIP to run daily, so it pulls exchange rate data regularly. Make sure to enable automatic execution for the first run by toggling that option to Yes.
  4. Once the DIP is created, monitor the logs, check the ingestion status, and inspect the ingested data in the DSU to ensure everything has been processed correctly.

If you see a new Sidra provider with your EuroRate entity populated with historical data, congrats: your connector is working as expected!