Hi Ivan, Keith,
Thanks for the response. BTW I am using MS-SQL Server 2008 Standard.
I am setting ConnectTimeout = 0 in my code: after reading the connection string I parse it to add the ConnectTimeout parameter (see the constructor in the original posting). I am sure an overload would have made this process easier.
I measured the time manually and it may take from 1 to 45 minutes to update this big table in some cases; that is why I set the ConnectTimeout to zero. I could try putting in a value representing an hour, but since this table will just continue to grow, at some point it would fail again due to timeout.
Ok, after using Ivan's best friend "Google" I found several posts where people doing similar alterations to mine (changing an Integer field to a Big Integer) also reported timeouts. All this comes down to the fact that the table has 43 million records, and the operation is also affected by any index or constraint that uses the field being altered.
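For reference, the change itself boils down to a statement like this (the table and column names here are just placeholders, not my actual schema):

-- Widening an int column to bigint on a very large table (placeholder names)
ALTER TABLE dbo.MyBigTable
    ALTER COLUMN CustomerId bigint NOT NULL;  -- keep the column's existing NULL/NOT NULL setting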
After a lengthy review process I noticed that, after my application crashed, the ALTER TABLE command was still running in the MS-SQL activity monitor, and I had to wait until it finished before I could try again manually via script.
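For anyone hitting the same thing, a query along these lines against the standard SQL Server DMVs shows whether the ALTER TABLE is still running before you retry (this is just a generic check, not something the DDT does):

-- List active requests whose statement text contains ALTER TABLE
SELECT r.session_id, r.status, r.command, r.wait_type, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%ALTER TABLE%';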
Some recommendations when working with big tables are to do the following (a script sketch follows the list):
- Remove any index or constraint
- Alter the table
- Re-create any index or constraint
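In script form, I understand those recommendations amount to something like this (index, table and column names are placeholders, and a real script would need to cover every index or constraint that touches the column):

-- 1. Remove any index or constraint that uses the column
DROP INDEX IX_MyBigTable_CustomerId ON dbo.MyBigTable;

-- 2. Alter the table
ALTER TABLE dbo.MyBigTable
    ALTER COLUMN CustomerId bigint NOT NULL;

-- 3. Re-create the index or constraint
CREATE INDEX IX_MyBigTable_CustomerId ON dbo.MyBigTable (CustomerId);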
I don't know what rules the DDT uses to alter the tables and would appreciate any clarification in this regard. I need to be able to trust my update process and not put the customer's data in jeopardy before an update. I know the customer should be responsible for doing a backup of a table that big, but in reality that is not always the case, and my application will be the one to blame for any data loss or corruption.
Edhy Rijo