Hi Anic, Benjamin,
Thanks for the update; that’s helpful. Since supportV1DataTypes is now working in one environment, it confirms the configuration is correct. The difference you’re seeing in the second environment, especially in a dynamic pipeline, is typically caused by differences in source metadata rather than by the connector itself.
In Oracle, columns defined as NUMBER without explicit precision and scale can be interpreted differently at runtime. Even if the data contains only integers, such columns may still be surfaced as high-precision types, which can lead to the IBigDecimal issue in some scenarios, particularly in dynamic pipelines where schema is inferred at runtime.
To validate this, please compare the column definitions between the two environments:
SELECT column_name, data_type, data_precision, data_scale
FROM all_tab_columns
WHERE table_name = 'YOUR_TABLE_NAME'  -- replace with the actual table name (uppercase)
ORDER BY column_id;
Pay special attention to columns where DATA_PRECISION and DATA_SCALE are NULL, as these are more likely to cause inconsistent behavior.
As a reliable workaround, we recommend explicitly casting the affected columns in your source query to a fixed-precision type:
SELECT
    CAST(your_number_column AS NUMBER(18,2)) AS your_number_column  -- replace with the affected column and a precision/scale that fits your data
FROM your_table;
If your pipeline is dynamic across multiple tables, consider creating views in Oracle with consistent data types or generating queries dynamically with the required casts.
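For the dynamic case, one option is a small helper that reads the column metadata (the same fields the ALL_TAB_COLUMNS query returns) and emits a SELECT with explicit casts only for unbounded NUMBER columns. This is just a sketch; the table and column names, and the NUMBER(18,0) target precision, are placeholder assumptions you would adjust for your schema:

```python
def build_casted_select(table, columns):
    """Build a SELECT statement with explicit CASTs for unbounded NUMBER columns.

    columns: list of (column_name, data_type, data_precision, data_scale) tuples,
    matching the ALL_TAB_COLUMNS query output.
    """
    parts = []
    for name, data_type, precision, scale in columns:
        if data_type == "NUMBER" and precision is None and scale is None:
            # Unbounded NUMBER: force a fixed precision/scale so schema
            # inference maps it to a stable decimal type. 18,0 is an
            # example target; pick what fits your data.
            parts.append(f"CAST({name} AS NUMBER(18,0)) AS {name}")
        else:
            parts.append(name)
    return f"SELECT {', '.join(parts)} FROM {table}"

# Hypothetical metadata for illustration:
metadata = [
    ("ORDER_ID", "NUMBER", None, None),  # NUMBER with no precision/scale
    ("AMOUNT", "NUMBER", 10, 2),         # already fixed-precision
    ("ORDER_DATE", "DATE", None, None),
]
print(build_casted_select("SALES.ORDERS", metadata))
```

You could run this per table in the pipeline's lookup/metadata step and feed the generated query to the copy activity, instead of maintaining one view per table.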
Since this is environment-specific, aligning the schemas or enforcing explicit casting should resolve the issue consistently across both environments. If the issue persists, feel free to share a sanitized example of the table schema or pipeline configuration, and we can review it further.