The bigger the table (in the number and size of its columns), the more expensive it becomes to delete and insert rather than update.
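A rough way to see this cost difference on your own system is a loop-based timing sketch like the one below. It uses SQLite from Python purely as an illustration — the discussion here concerns SQL Server, whose storage engine behaves quite differently — and every table and column name in it is invented:

```python
# Hedged timing sketch: UPDATE one column vs. DELETE+INSERT the whole row.
# SQLite stands in for a real server; absolute numbers are not meaningful,
# only the shape of the experiment.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A moderately "wide" table: the wider the row, the more data a
# DELETE+INSERT must rewrite compared with an in-place UPDATE.
cols = ", ".join(f"c{i} TEXT" for i in range(20))
cur.execute(f"CREATE TABLE t (id INTEGER PRIMARY KEY, {cols})")
values = ["x" * 100] * 20
placeholders = ", ".join("?" * 20)
cur.execute(f"INSERT INTO t VALUES (1, {placeholders})", values)

start = time.perf_counter()
for _ in range(1000):
    cur.execute("UPDATE t SET c0 = ? WHERE id = 1", ("y" * 100,))
update_secs = time.perf_counter() - start

start = time.perf_counter()
for _ in range(1000):
    cur.execute("DELETE FROM t WHERE id = 1")
    cur.execute(f"INSERT INTO t VALUES (1, {placeholders})", values)
delete_insert_secs = time.perf_counter() - start

print(f"UPDATE x1000:        {update_secs:.4f}s")
print(f"DELETE+INSERT x1000: {delete_insert_secs:.4f}s")
```

On a real SQL Server you would run the equivalent T-SQL loops and compare the Client Statistics output instead; the relative gap will depend on row width, indexes, and logging.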
It's hard to distill the actual answer from this text. You can explain your edits in the text field provided for versioning your edits, not in your main answer.

I had not seen it (as mentioned above, it was a comment said to me). It was a comment from Paul Randal during a training course, that Itzik Ben-Gan had done it, so considering the source, very credible. Just for added context, this was a 5-day course in Dublin, Eire, in September 2009, organized by Prodata, where Paul and Kim were the trainers.
So there is every chance that in the last 5 years someone has done a higher count.
I never use the ID field for lookups, because my application always works with the Name field, using trivial SQL for the lookup.

@KM: I agree, this is a simplification of my real table, where all the lookups are done on a unique string field that is not the primary key.
I do have a primary key int value that is completely irrelevant, so I removed it from the example (it's created automatically and plays no part in the lookup at all).

UPDATE also has the benefit of not breaking any foreign key relations your table might have, as long as the referenced key field doesn't change.

I've had to edit this: I had written that if the heap page split it would ripple updates, etc., and it does not.
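The two points above — looking rows up via a unique Name column rather than the surrogate key, and an UPDATE surviving where a DELETE would violate a foreign key — can be sketched like this. The schema is invented, and SQLite is used only for illustration:

```python
# Hedged sketch: UPDATE through a unique non-key column keeps the
# referenced surrogate key intact, while DELETE+INSERT trips the FK.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
conn.execute(
    "CREATE TABLE Item (Id INTEGER PRIMARY KEY, Name TEXT UNIQUE, Qty INT)"
)
conn.execute("CREATE TABLE OrderLine (ItemId INTEGER REFERENCES Item(Id))")
conn.execute("INSERT INTO Item VALUES (1, 'widget', 5)")
conn.execute("INSERT INTO OrderLine VALUES (1)")

# UPDATE via the unique Name column: the referenced key (Id) is untouched,
# so the foreign key relation stays valid.
conn.execute("UPDATE Item SET Qty = 6 WHERE Name = 'widget'")

# Deleting the same row is rejected while OrderLine still references it,
# so a DELETE+INSERT replacement would fail at the first step.
try:
    conn.execute("DELETE FROM Item WHERE Name = 'widget'")
except sqlite3.IntegrityError as e:
    print("DELETE blocked:", e)
```

With cascading deletes configured the DELETE would instead take the OrderLine rows with it, which is usually even less what you want from a logical "update."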
The other, minor, issue is that when you UPDATE a single column in a single row, the other columns in that row remain unchanged.
If you DELETE and then do an INSERT, you run the risk of forgetting about the other columns and consequently leaving them behind (in which case you would have to do a SELECT before your DELETE to temporarily store the other columns before writing them back with the INSERT).

A product could be implemented that (under the covers) converts all UPDATEs into transactionally wrapped DELETE/INSERT pairs, provided the results are consistent with UPDATE semantics. I'm not saying I'm aware of any product that does this, but it's perfectly legal.

If you want actual data, then you will need to write a while loop (on your system) that updates the row 1000 times, and another loop that deletes/inserts it 1000 times. My quick search, non-exhaustive and not pretending to be comprehensive, gave me: Update Operations (Sybase SQL Server Performance and Tuning Guide, Chapter 7: The SQL Server Query Optimizer), and UPDATE Statements May Be Replicated as DELETE/INSERT Pairs.

Just tried updating 43 fields on a table with 44 fields; the remaining field was the primary clustered key. The DELETE/INSERT completed faster than the minimum time interval that "Client Statistics" reports via SQL Server Management Studio. – Peter, MS SQL 2008

This answer over-simplifies the operations and misses out a lot of steps for the main commercial RDBMSs. Deleting a row by just altering the PK (and nothing else) is not how the main commercial RDBMSs work. Your information on triggers is incorrect and one-sided.
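The column-loss risk described at the top of this exchange — a DELETE+INSERT that only supplies the changed column, versus the SELECT-first workaround — can be sketched as follows. The schema is invented and SQLite is used only for illustration:

```python
# Hedged sketch: a naive DELETE+INSERT that forgets a column silently
# replaces it with NULL; the SELECT-first pattern carries it along.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'keep-me', 'old')")

# Naive replacement: only the changed column b is supplied, so column a
# comes back as NULL without any error being raised.
conn.execute("DELETE FROM t WHERE id = 1")
conn.execute("INSERT INTO t (id, b) VALUES (1, 'new')")
print(conn.execute("SELECT a FROM t WHERE id = 1").fetchone())  # (None,)

# The workaround the comment describes: SELECT the row first, then write
# every column back with the INSERT.
conn.execute("UPDATE t SET a = 'keep-me' WHERE id = 1")  # restore for demo
row = conn.execute("SELECT a FROM t WHERE id = 1").fetchone()
conn.execute("DELETE FROM t WHERE id = 1")
conn.execute("INSERT INTO t VALUES (1, ?, 'newer')", (row[0],))
print(conn.execute("SELECT a, b FROM t WHERE id = 1").fetchone())
# ('keep-me', 'newer')
```

An UPDATE needs none of this ceremony, which is exactly the point being made: the untouched columns simply stay where they are.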