Oracle V2 Connector Datatype Error

Anic, Benjamin 0 Reputation points
2026-04-01T16:20:48.9366667+00:00

I migrated to the Oracle V2 connector. My pipelines are failing as I'm receiving the following error message for several fields for each table on the source side.   

ErrorCode=DataTypeNotSupported,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column: [ColumnName],The data type IBigDecimal is not supported from the column named [ColumnName].,Source=,'

Note: I masked the actual column name in the error message.

Oddly, I've checked several columns receiving this error and they all contain INT values.   

I did try using the supportV1DataTypes property per the documentation but that did not make a difference.   

Any information or support would be greatly appreciated. 

Azure Data Factory

An Azure service for ingesting, preparing, and transforming data at scale.


2 answers

Sort by: Most helpful
  1. Pilladi Padma Sai Manisha 6,430 Reputation points Microsoft External Staff Moderator
    2026-04-01T23:45:15.87+00:00

    Hi Anic, Benjamin,
    Thanks for the update; that's helpful. Since supportV1DataTypes is now working in one environment, it confirms the configuration is correct. The difference you're seeing in the second environment, especially in a dynamic pipeline, is typically due to differences in source metadata rather than the connector itself.

    In Oracle, columns defined as NUMBER without explicit precision and scale can be interpreted differently at runtime. Even if the data contains only integers, such columns may still be surfaced as high-precision types, which can lead to the IBigDecimal issue in some scenarios, particularly in dynamic pipelines where schema is inferred at runtime.
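    As a hypothetical illustration (table and column names are made up), the two styles of column definition look like this:

    ```sql
    -- Hypothetical table for illustration only.
    -- QUANTITY has explicit precision/scale and maps cleanly to an integer type.
    -- AMOUNT is an unconstrained NUMBER, which may be surfaced at runtime as a
    -- high-precision type (IBigDecimal) even if every stored value is an integer.
    CREATE TABLE demo_numbers (
        quantity NUMBER(10, 0),
        amount   NUMBER
    );
    ```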

    To validate this, please compare the column definitions between the two environments:

    SELECT column_name, data_type, data_precision, data_scale
    FROM all_tab_columns
    WHERE table_name = 'YOUR_TABLE_NAME'  -- placeholder: substitute the affected table name
    

    Pay special attention to columns where DATA_PRECISION and DATA_SCALE are NULL, as these are more likely to cause inconsistent behavior.

    As a reliable workaround, we recommend explicitly casting the affected columns in your source query to a fixed-precision type:

    SELECT
        CAST(ColA AS NUMBER(10, 0)) AS ColA  -- ColA / YourTable are placeholder names
    FROM YourTable
    

    If your pipeline is dynamic across multiple tables, consider creating views in Oracle with consistent data types or generating queries dynamically with the required casts.
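    As a rough sketch of the view approach (view, table, and column names here are hypothetical), the idea is to pin each unconstrained NUMBER column to an explicit precision so every environment presents the same type to the connector:

    ```sql
    -- Hypothetical view that wraps a table whose NUMBER columns lack
    -- explicit precision/scale, casting them to fixed-precision types
    -- so the V2 connector no longer sees IBigDecimal.
    CREATE OR REPLACE VIEW my_table_typed AS
    SELECT
        id,
        CAST(quantity AS NUMBER(10, 0)) AS quantity,
        CAST(amount   AS NUMBER(18, 2)) AS amount
    FROM my_table
    ```

    Pointing the Data Factory dataset at the view instead of the base table keeps the dynamic pipeline itself unchanged.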

    Since this is environment-specific, aligning the schema or enforcing explicit casting should resolve the issue consistently across both environments. If the issue still persists, feel free to share a sanitized example of the table schema or pipeline configuration, and we can help review further.


  2. Q&A Assist
    2026-04-01T16:21:04.2033333+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    The Oracle V2 connector changes how Oracle numeric types are mapped and interpreted. In V2, some NUMBER/decimal-style columns are surfaced with an internal type like IBigDecimal. When the connector cannot map that internal type to a supported ADF/Synapse type, it raises DataTypeNotSupported for those columns, even if the actual values look like integers.

    To work around this in V2, use one of the supported approaches for data type compatibility:

    1. Explicitly cast the problematic columns in the source query to a supported Oracle type that the connector can map, for example:
    SELECT
        CAST(ColA AS BINARY_DOUBLE) AS ColA
    FROM YourTable
    

    This forces the connector to treat the column as a supported numeric type instead of IBigDecimal.

    2. Ensure the V2 linked service has the additional property supportV1DataTypes set to true. This makes V2 use the same data type mappings as V1 and can avoid IBigDecimal in many cases:
    {
      "type": "Oracle",
      "typeProperties": {
        "connectionString": "...",
        "additionalProperties": {
          "supportV1DataTypes": true
        }
      }
    }
    

    If supportV1DataTypes = true alone does not resolve all failing columns, combine it with explicit casts in the source query for the remaining columns that still surface as unsupported.

    Also ensure that any queries used with the V2 connector do not end with a semicolon, as V2 does not support queries terminated with ; and will throw ORA-00933 in that case.
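    For example (table name is illustrative), the only difference between a failing and a working source query can be the terminator:

    ```sql
    -- Fails on the Oracle V2 connector with ORA-00933 because of the trailing semicolon:
    --   SELECT * FROM my_table;
    -- Works: the same query without the terminator
    SELECT * FROM my_table
    ```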


