It's so simple, yet I spent quite a few painful minutes on it.
The issue is that I have flat files to use as sources. They have one header row and zero or more detail rows. Since the connection manager cannot separate out the header row (for the details it's OK because there is a setting called Header rows to skip), I had to use an alternative solution: read the whole file into memory and use a Conditional Split (I almost became angry like a Split) to separate the rows based on a type column.
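Outside SSIS, the same approach looks roughly like this sketch: read every row as plain strings, then split header vs. detail rows on the record-type column. The "H"/"D" type codes and the semicolon delimiter here are assumptions for illustration, not taken from the actual file spec.

```python
# Hypothetical sketch: read all rows as strings, then split on the type column.
# "H" = header record, "D" = detail record (assumed codes, not from the spec).
def split_rows(lines, delimiter=";"):
    header_rows, detail_rows = [], []
    for line in lines:
        fields = line.rstrip("\n").split(delimiter)
        if fields[0] == "H":
            header_rows.append(fields)
        else:
            detail_rows.append(fields)
    return header_rows, detail_rows

sample = [
    "H;20240101;2",      # header: column 2 is a number
    "D;John Smith;100",  # detail: column 2 is a name
    "D;Jane Doe;200",
]
headers, details = split_rows(sample)
```

Note that both branches keep everything as strings, which is exactly why all the source columns had to become varchar in the first place.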
The problem is that this way I cannot set the required metadata based on the spec, because the header and detail rows have different metadata! E.g. if the 2nd column of the detail rows is a name containing letters while the 2nd column of the header is a number, it's not possible to read the whole file with column2 typed as an int (or some other numeric type), because it would fail at the source with a data type error.
That's why I changed all columns to varchar(1000). So far, so good; it worked. - This is how it began...
But there are columns that contain only NULL values (string NULLs!) while the destination column is defined as, e.g., numeric. "Oh, no problem!" - I thought - "Just put a Data Conversion transformation in there and it will take care of it." Will it?
The answer is NO. Why not?
Because it cannot convert a string NULL to numeric. I tried it and it failed. I then tried to convert the column in a Derived Column transformation like this: ISNULL([column]) ? NULL(DT_NUMERIC,15,0) : [column] - no use (presumably because the two branches of the conditional return different data types). Then the following expression (Derived Column transformation) won: (DT_NUMERIC,15,0)(([column]) == "" ? "0" : [column])
I know it's simple, but I think the Data Conversion transformation should handle this without tweaking the data flow like that.
That wasn't fully correct, though, because 0 does not equal NULL. So the final expression is: ([column]) == "" ? NULL(DT_NUMERIC,15,0) : (DT_NUMERIC,15,0)([column])
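For readers outside SSIS, here is a rough Python equivalent of that final expression, mapping the empty string to a real NULL (None) instead of coercing it to 0. The function name and the use of Decimal are my own choices for illustration:

```python
from decimal import Decimal

# Python analogue of the final SSIS expression:
# ([column]) == "" ? NULL(DT_NUMERIC,15,0) : (DT_NUMERIC,15,0)([column])
def to_numeric(value):
    # An empty string stands in for NULL, so return None rather than 0,
    # preserving the distinction between "no value" and "zero".
    if value == "":
        return None
    return Decimal(value)
```

The earlier, incorrect version returned Decimal("0") for the empty string, which silently turns missing values into zeros downstream.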