Hi Justin Doh,
Yes, there is a way to handle this efficiently in Azure Data Factory without creating 50 datasets manually.
ADF does not support selecting all tables in a single dataset, as each dataset is designed to point to one table or structure. However, you can achieve bulk processing by using a dynamic and parameterized approach.
Create one source dataset and one sink dataset, and add a parameter such as TableName. Use this parameter inside the dataset so that the table name can be passed dynamically at runtime.
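As a rough sketch, a parameterized source dataset could look like the following (the names `SourceDataset`, `SourceSqlLinkedService`, and the `dbo` schema are placeholders for illustration; adjust them to your environment):

```json
{
  "name": "SourceDataset",
  "properties": {
    "type": "AzureSqlTable",
    "linkedServiceName": {
      "referenceName": "SourceSqlLinkedService",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "TableName": { "type": "string" }
    },
    "typeProperties": {
      "schema": "dbo",
      "table": { "value": "@dataset().TableName", "type": "Expression" }
    }
  }
}
```

The key part is the `@dataset().TableName` expression, which resolves the table name from the dataset parameter at runtime instead of hard-coding it.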
Then use a Lookup activity to retrieve the list of all tables from your source database. You can run a query like:
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'
After that, use a ForEach activity to iterate through the list of tables returned by the Lookup (available as `@activity('LookupTables').output.value` if the Lookup activity is named `LookupTables`). Inside the loop, add a Copy activity and pass the table name dynamically to both the source and sink datasets. This way, each iteration copies one table, and all tables get migrated automatically.
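Putting the pieces together, a minimal pipeline sketch could look like this (activity and dataset names such as `LookupTables`, `SourceDataset`, and `SinkDataset` are assumptions for illustration, and the `TABLE_NAME` property must match the column returned by your Lookup query):

```json
{
  "name": "CopyAllTablesPipeline",
  "properties": {
    "activities": [
      {
        "name": "LookupTables",
        "type": "Lookup",
        "typeProperties": {
          "source": {
            "type": "AzureSqlSource",
            "sqlReaderQuery": "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'"
          },
          "dataset": {
            "referenceName": "SourceDataset",
            "type": "DatasetReference",
            "parameters": { "TableName": "placeholder" }
          },
          "firstRowOnly": false
        }
      },
      {
        "name": "ForEachTable",
        "type": "ForEach",
        "dependsOn": [
          { "activity": "LookupTables", "dependencyConditions": [ "Succeeded" ] }
        ],
        "typeProperties": {
          "items": {
            "value": "@activity('LookupTables').output.value",
            "type": "Expression"
          },
          "activities": [
            {
              "name": "CopyTable",
              "type": "Copy",
              "inputs": [
                {
                  "referenceName": "SourceDataset",
                  "type": "DatasetReference",
                  "parameters": { "TableName": "@item().TABLE_NAME" }
                }
              ],
              "outputs": [
                {
                  "referenceName": "SinkDataset",
                  "type": "DatasetReference",
                  "parameters": { "TableName": "@item().TABLE_NAME" }
                }
              ],
              "typeProperties": {
                "source": { "type": "AzureSqlSource" },
                "sink": { "type": "AzureSqlSink" }
              }
            }
          ]
        }
      }
    ]
  }
}
```

Note that `firstRowOnly` must be set to `false` on the Lookup so it returns all rows, and the ForEach passes each row's `TABLE_NAME` into the dataset parameter via `@item().TABLE_NAME`.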
This is the recommended and scalable approach for handling multiple tables. There is no built-in option in ADF to select or copy all tables at once directly, but this method effectively achieves the same result.
References:
Metadata-driven copy pipelines in ADF
ADF dynamic pipeline example (Q&A guidance)
ForEach activity in Azure Data Factory
Hope this helps!