
Sidra Data Products metadata

This page explains the metadata concepts involved in Data Products.

Data Storage Unit

Whenever content is ingested into the Data Lake inside a Data Storage Unit (DSU), a new Asset is created in the metadata database in Sidra Service. The Asset is associated with the rest of the metadata stored for the content: Entity, Provider, Attributes...

The Data Product keeps a synchronized copy of this metadata in order to discover the ingestion of new Assets in the Data Lake.

This means that the Data Product contains a metadata database similar to the one in Sidra Service. In the Data Product database, the metadata tables for Sidra are under the Sidra schema.

These are some of the most important tables used in the Data Product metadata database:

  • DataFactory: this table stores the Azure Data Factory resource group for the Data Product
  • Provider
  • Entity
  • Attribute
  • EntityPipeline: to store the Entity-Pipeline associations.
  • AttributesFormat
  • AssetStatus
  • Assets
  • Trigger
  • TriggerPipeline
  • TriggerTemplate
  • Pipeline
  • PipelineTemplate
  • PipelineSyncBehavior: the synchronization behavior for each pipeline, described in this documentation page.
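
To make the relationships between these tables concrete, the following is a minimal query sketch against the Data Product metadata database. The column names IdEntity, IdProvider, IdStatus and Name are assumptions for illustration and may differ between Sidra versions; check the Metadata section for the actual schema.

-- Minimal sketch: list the latest Assets with their Entity, Provider and status.
-- Column names are assumptions; verify them against your Sidra version.
SELECT TOP (20)
    a.Id   AS AssetId,
    e.Name AS EntityName,
    p.Name AS ProviderName,
    s.Name AS AssetStatus
FROM [Sidra].[Assets] AS a
INNER JOIN [Sidra].[Entity] AS e ON e.Id = a.IdEntity
INNER JOIN [Sidra].[Provider] AS p ON p.Id = e.IdProvider
INNER JOIN [Sidra].[AssetStatus] AS s ON s.Id = a.IdStatus
ORDER BY a.Id DESC;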

Even though the metadata in Sidra Service and in the Data Product databases is synchronized, there are several differences between the two that are worth clarifying:

  • Some fields have been removed from the Data Product metadata tables because they are not used, for example the fields used for access control, such as SecurityPath and ParentSecurityPath.
  • Some fields have been added to the Data Product metadata tables. This is the case of the IsDisabled field, which has been added to Entity and Provider to disable the synchronization of that particular Entity or Provider (see the sketch after this list).
  • The state transitions in Sidra Service and in Data Products are different; therefore, the AssetStatus table contains different states and workflows than its Sidra Service counterpart. For example, once the ingestion of an Asset finishes in Sidra Service, its status is MovedToDataLake there, but in the Data Product the status continues evolving until it reaches ImportedFromDataLake.
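
As an illustration of the IsDisabled field, a sketch like the following pauses the synchronization of one particular Entity (the Id value is hypothetical; the same approach applies to Provider):

-- Sketch: pause synchronization for a single Entity.
-- The Id value is hypothetical; set IsDisabled = 0 to resume synchronization.
UPDATE [Sidra].[Entity]
SET IsDisabled = 1
WHERE Id = 42;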

You can check the Metadata section, which explains the above metadata tables, as well as the status values and the transitions between them.

Role of Sync and DatabaseBuilder webjobs in Data Products

Sync webjob

The Sync webjob uses the Sidra API to retrieve the latest information from the metadata tables. The metadata returned by the API is limited by the permissions granted to the Data Product.

Based on the metadata received from the API, the webjob updates each of the Data Product metadata tables. For the Provider and Entity tables, any entry that is no longer available in Sidra Service is marked as disabled in the Data Product using the abovementioned IsDisabled field.

All this information is used to decide whether there is new content in the Data Lake to be imported into the Data Product.

In addition, the Sync webjob is also responsible for executing the defined pipelines, depending on the sync behavior configured for each pipeline (see PipelineSyncBehavior, described here).

Note

Sidra Service has the concept of dummy Assets: Assets of zero length that are created when an incremental load in Sidra Service finishes without producing a new increment of data. This concept was introduced to force the presence of a new Asset in the Data Product metadata tables. Without these Assets, the data synchronization would not be triggered when the Assets are configured as mandatory (see below for information on this point). If mandatory Assets are not generated, the data movement does not happen, which could affect the business logic of the Data Product. Since Sidra version 1.10, the generation of dummy Assets in data ingestion pipelines is optional, with the default set to false. Therefore, if the data processing logic of a Data Product needs these Assets to be generated, make sure this parameter is set to true when deploying data intake pipelines.

DatabaseBuilder webjob

The population of the metadata database is performed by the DatabaseBuilder webjob. The project included in the Data Product solution is already configured for this purpose:

static void Main(string[] args)
{
    ...

    // Add log configuration as well if required
    JobExecutor.Execute(
        configuration,
        new Options()
        {
            // Create the Data Product metadata database (Sidra schema)
            CreateClientDatabase = true,
            // Create the log database used by the webjobs
            CreateLogDatabase = true
        },
        loggingBuilderAction);

    ...
}

This job creates the database schema specific to Data Products, with the differences explained in the sections above.

It is also used to include in the database the information of the ADF components (pipelines, datasets, triggers) by means of SQL scripts.

Step-by-step

How to execute SQL scripts using DatabaseBuilder webjob

This step-by-step tutorial explains how to execute SQL scripts using the DatabaseBuilder webjob.
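
As a sketch of what such a script could look like, the following associates an Entity with an ADF pipeline through the EntityPipeline table. The ItemId column and the GUID below are assumptions used for illustration; the actual columns depend on your Sidra version.

-- Hypothetical script run by DatabaseBuilder: associate Entity 42 with an ADF pipeline.
-- The ItemId column and the GUID below are illustrative assumptions.
INSERT INTO [Sidra].[EntityPipeline] (IdEntity, IdPipeline)
SELECT 42, p.Id
FROM [Sidra].[Pipeline] AS p
WHERE p.ItemId = '00000000-0000-0000-0000-000000000000';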

Extracting the new content from the Data Storage Unit

Sidra Data Products use Azure Data Factory (ADF) for data movement; in particular, they use pipelines for the extraction of new content from the Data Lake (more specifically, from the DSU).

The actions performed by the extraction pipelines depend on what is going to be done with the content after the extraction. This logic is Data Product-specific and tied to business rule transformations. Some examples of actions that extraction pipelines may execute are:

  • The content may be used to populate a data warehouse inside the Data Product. In this case, the content is first stored in staging tables in the Data Product after the extraction.

  • The content may optionally be transformed through the execution of data transformations and business rules within the Data Product.

  • Optionally, the transformed data can be re-ingested as rich data back into the Data Lake. In this case, after the extraction and transformation, the new content is pushed to the landing zone in Sidra Service for re-ingestion, configured like any new data Provider for Sidra.

Consequently, there is no single design for the Data Product extraction pipelines: the design depends on the specifics of the business case implemented by the Data Product.

More information about Data Product pipelines can be found here.


Last update: 2023-07-17