About the Microsoft SQL Server database connector plugin¶
From Sidra version 2022.R3 onwards, the autogenerated Transfer Query is replaced by the DSU ingestion script. More information can be found here.
The SQL Server connector plugin for Sidra enables seamless integration with Microsoft's powerful enterprise relational database.
You can find more details about what a Data Intake Process is on this page.
The Sidra connector plugin for SQL Server extracts data from any table or view in the source database and loads it into the specified Data Storage Unit at regular intervals. It relies on the Sidra Metadata model to map source data structures to their destination in Sidra, and uses Azure Data Factory as the underlying data integration mechanism within Sidra.
When configuring and executing this connector, several underlying steps are involved to achieve the following:
- The necessary metadata and data governance structures are created and populated in Sidra.
- The actual data integration infrastructure (ADF Pipeline) is created, configured and deployed.
The connector can be configured in less than five minutes. Once the settings are configured and the deployment process is started, the actual duration of the data ingestion may vary from a few minutes to a few hours, depending on the data volumes.
After starting the Data Intake Process creation, users will receive a message that the process has started and will continue in the background. Users will be able to navigate through Sidra Web as usual while this process happens.
Once the whole deployment process is finished, users will receive a notification in the Sidra Web Notifications widget. If the process completes successfully, the new data structures (new Entities) will appear automatically in the Data Catalog, and the Data Intake Process will incorporate this new data source.
Supported SQL Server versions¶
The following list includes all SQL Server versions supported by this connector plugin:
- SQL Server 2012 (version 11.xx)
- SQL Server 2014 (version 12.xx)
- SQL Server 2016 (version 13.xx)
- SQL Server 2017 (version 14.xx)
- SQL Server 2019 (version 15.xx)
All editions (Developer, Standard and Enterprise) are supported, but some features of the connector are only available if the source SQL Server edition supports them, such as the Enterprise edition requirement for Change Tracking on tables.
Supported SQL Server data synchronization mechanisms¶
The SQL Server Database connector plugin for Sidra supports different modes of data synchronization, which also depend on the mechanisms configured on the source system or Sidra:
- Full load data synchronization.
- Incremental load data synchronization. For incremental load to work, there must be a mechanism defined to capture updates in the source system. For incremental load data synchronization, two types of mechanisms are supported:
    - Incremental Load with built-in SQL Server Change Tracking (CT). This is achieved by directly activating Change Tracking in the source database (see the example below).
    - Incremental Load non-Change Tracking related (non-CT). This is achieved by specific configurations in the Sidra Metadata tables and corresponds to the Non-Change Tracking custom mechanism enabled by Sidra option.
For more information, check the Supported Data synchronization mechanisms page.
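As an illustrative sketch of the built-in Change Tracking mechanism (the database, table and column names below are hypothetical placeholders, not part of Sidra), CT is typically enabled on the source and queried as follows:

```sql
-- Enable Change Tracking at the database level (placeholder database name).
ALTER DATABASE [SourceDb]
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

-- Enable Change Tracking on each table that will be loaded incrementally.
ALTER TABLE [dbo].[Customers] ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = OFF);

-- Changes since the last stored synchronization version can then be read
-- with CHANGETABLE; 0 is used here only for illustration.
DECLARE @last_sync_version bigint = 0;
SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES [dbo].[Customers], @last_sync_version) AS ct;
```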
The process of setting up an SQL Server Data Intake Process involves several key actions. The different sections of the input form are organized into these main steps:
Step 1. Configure Data Intake Process¶
Please see the common Sidra connector plugins section for the parameters needed to configure a Data Intake Process.
Step 2. Configure Provider¶
Please see the common Sidra connector plugins section for the parameters needed to create a new Provider.
Step 3. Configure Data Source¶
The data source represents the connection to the source database. A Data Source abstracts the details of creating a Linked Service in Azure Data Factory. The fields required in this section are the Integration Runtime and the connection string to the database.
The Sidra SQL Server connector plugin will register this new data source in the Sidra Metadata and deploy a Linked Service in Azure Data Factory with this connection.
The created Linked Service will use Key Vault in Sidra Core to store the connection string to the database.
The default value for the Integration Runtime field is the Default option.
For more details on Linked Services check the Data Factory documentation.
The information that needs to be provided is the following:
Integration Runtime: this is the ADF integration runtime in charge of executing the connection to the data origin, and it is associated with a deployed Linked Service. If no specific IR is required, the Default option will be selected.
About connection details:
AuthenticationType: from version 2022.R3, Windows Authentication has been included for the SQL Server plugin, improving authentication security. This field in Sidra Web allows choosing between the SQL Authentication and Windows Authentication types.
Server Name: the name of the server to connect to.
Database Name: the name of the database.
User Name: user with access to the origin server.
Use existing secret: when set to true, the Password's secret name field must be filled; when set to false, the Password field must be filled.
Encrypt: whether the connection string uses encryption (true/false). By default, true is used.
Trust Server Certificate: whether the connection string trusts the server certificate (true/false). By default, false is used.
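For illustration only, a SQL Authentication connection string built from the fields above could look like the following (the server, database and user names are hypothetical placeholders; in Sidra the actual secret is stored in Key Vault):

```
Server=sqlsrv01.contoso.local;Database=SalesDb;User Id=sidra_reader;Password=<stored in Key Vault>;Encrypt=True;TrustServerCertificate=False;
```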
Step 4. Configure SQL Metadata Extraction¶
The Sidra SQL Server connector plugin replicates the schema and tables from the source database by querying the INFORMATION_SCHEMA System Views.
The extracted schema is replicated into the Sidra Metadata model hierarchy. Tables in the source database are replicated in Sidra as Entities, and the columns of those tables as Attributes.
The Entity table in Sidra metadata model contains data about the format of the Entity generated, as well as information about how this Entity should be handled by the system. Attributes contain data about the columns of the tables. Sidra adds additional system-level Attributes to convey system-handling information at the level of the Entity (or table).
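As a rough sketch of the kind of information obtained from the INFORMATION_SCHEMA System Views in this step (the exact queries run by the connector are internal to Sidra), the source schema can be inspected like this:

```sql
-- Tables and views that will be registered as Entities in the Sidra Metadata model.
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES;

-- Columns that will be registered as Attributes of each Entity.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;
```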
As part of the metadata extraction of a Data Intake Process setup, the autogenerated transfer query / DSU ingestion will be executed following the selected configuration scenario, transforming the data in raw storage into the optimized format in the Data Lake.
For more details, please check the Data Ingestion documentation.
The metadata extraction defines how the origin will be consumed in order to obtain its metadata. The information fields required in the SQL Metadata extraction section are used for three main purposes internally in Sidra:
- Create and populate the Sidra Core metadata structures (Entities and Attributes).
- Create, deploy and execute an Azure Data Factory pipeline for the metadata extraction steps (see the steps below).
- Auto-generate the table creation scripts and execute the autogenerated transfer query / DSU ingestion script, including all the steps for storing the optimized data in the data lake storage.
These are the metadata extractor configuration parameters to be provided for this version of the connector plugin. All of these parameters are optional:
Number of Tables per Batch: this is used internally in the metadata extractor pipeline in order to specify the number of tables that will be consumed per batch.
Types of Objects to Load: views, tables or both.
Object Restriction Mode: there can be three options here:
Include all objects: all objects from all databases will be imported.
Include some objects: to specify a list of objects to include.
Exclude some objects: to specify a list of objects to exclude.
Object Restriction List: this is a list of comma-separated values including the objects that we want or do not want to load into the system, depending on the value of Object Restriction Mode. If no values are specified in this field, all objects will be loaded. To fill in this field, add a list of [<database_name>] elements in plain text, separated by commas. Examples:
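For illustration, assuming the bracketed plain-text format described above (the database names are hypothetical):

```
[SalesDb]
[SalesDb],[HumanResourcesDb]
```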
Once the metadata extractor pipeline has been created, it is executed. This execution could take a while (from 5 to 15 minutes) depending on the load of Data Factory and the status and load of the associated Databricks cluster. Sidra exposes API endpoints as well to manually execute the SQL Metadata extractor pipeline.
Step 5. Configure Trigger¶
Please see the common Sidra connector plugins section for the parameters needed to set up a trigger.
SQL data types and transformations¶
Some of the original SQL Server data types are not supported by Azure Data Factory conversions to Parquet (the storage format in the Sidra DSU).
For those data types that are not supported, Sidra connector plugins incorporate type translation mechanisms, used both at the metadata extraction and the data extraction phases.
If a certain data type is not supported, that data type is automatically changed to the closest supported type as defined in a Type Translations table.
You can find more information about the general process of type translations for plugins on this page.
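As an illustrative sketch only (the real mappings are defined in Sidra's Type Translations table, and the column names below are hypothetical), the effect of such a translation is conceptually similar to reading the source column with a close supported type:

```sql
-- Conceptual example: columns whose types have no direct Parquet equivalent
-- are read as a close supported type during extraction.
SELECT
    CAST(VariantCol AS nvarchar(4000)) AS VariantCol,      -- sql_variant -> string
    SpatialLocation.STAsText()         AS SpatialLocation, -- geography -> well-known text
    CAST(RowVersionCol AS binary(8))   AS RowVersionCol    -- rowversion -> binary
FROM [dbo].[SampleTable];
```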
How to exclude source data¶
The Sidra data connector plugin for SQL Server allows controlling, through different settings, which tables will be part of the data synchronized between source and destination:
List of tables to include: this field allows specifying whether only original source tables, views or both will be part of the data synchronization. Possible values are base table and view. These values correspond to the table_type values in the INFORMATION_SCHEMA.TABLES SQL Server System View. If not specified, all table types will be included.
List of tables to exclude: this field allows specifying certain schemas and tables in the source database that will not be part of the data synchronization. You can check the list of schemas and tables in the INFORMATION_SCHEMA.TABLES SQL Server System View. If no values are specified in this field, no tables will be excluded. To fill in this field, add a list of [<schema_name>].[<table_name>] elements, separated by commas.
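As an illustrative query (not part of the plugin itself), the candidate objects and the two-part names expected by the exclusion list can be listed directly from the source catalog:

```sql
-- 'BASE TABLE' corresponds to tables and 'VIEW' to views; the second column
-- shows each object in the [schema].[table] form used by the exclusion list.
SELECT TABLE_TYPE,
       '[' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']' AS exclusion_list_entry
FROM INFORMATION_SCHEMA.TABLES
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```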
Data Extraction pipeline¶
Once all the information in the above steps has been provided, Sidra Core will create and deploy the actual data extractor pipeline. The data extraction pipeline is where the actual movement and transformation of data happens:
- On one hand, the copy data ADF activities are executed, which actually move the data between the source (SQL Server database) and the destination (Azure Data Lake Gen2).
- On the other hand, the autogenerated transfer query / DSU ingestion is executed in order to perform data optimization, data validation, etc., and to load the data in its optimized format into the data lake.
The time to execute this pipeline varies depending on the volume of data and the environment.
Initial full data synchronization¶
Once Sidra is connected to the source database, the Sidra SQL Server connector plugin first copies all rows from every table in every schema that has not been explicitly set to be excluded (see the section above on how to exclude source data).
For each table (Entity), rows are copied by performing a SELECT statement. The Copy Activity in Azure Data Factory parallelizes reads and writes according to the source and destination. A good practice when using Change Tracking is to ensure that the field by which the changes are calculated is indexed in the source database. This avoids having to read the whole table.
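A minimal sketch of that good practice (the table, column and index names are hypothetical):

```sql
-- Index the column used to calculate incremental changes so delta queries
-- can seek on it instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Orders_LastModifiedDate
    ON [dbo].[Orders] (LastModifiedDate);
```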
Loading incremental data mechanisms¶
Once an initial synchronization is complete, Sidra performs incremental synchronizations on the new and modified data in the source system.
The Sidra connector plugin for SQL Server uses the incremental data synchronization mechanisms mentioned above.