For the examples in this article, we will create a simple table with five columns, based on 1,000 records from ALL_OBJECTS, as follows. This table will serve as both the source and the target table for our examples. The examples that follow are deliberately simplified.
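The original DDL did not survive here, but a setup along these lines would match the description. The table and column choices below are illustrative assumptions, not the article's actual definition (ALL_OBJECTS is the standard Oracle data dictionary view):

```sql
-- Illustrative setup (names assumed): five columns, 1,000 rows,
-- no primary key and no unique index.
CREATE TABLE test_objects AS
SELECT object_id, owner, object_name, object_type, created
FROM   all_objects
WHERE  ROWNUM <= 1000;
```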
The table has no primary key or unique index, for application reasons. What I would like to do is update the field that causes the record to be a duplicate, by appending a consecutive sequence number to the end of that field. For example, say I have the following six records, where the left-most field is the duplicated one:

a12,123,ddd,fff
a12,345,ggg,hhh
a12,567,fff,lll
b11,eee,ttt,yyy
b11,rrr,hhh,jjj
c11,ggg,uuu,ttt

Now I must fetch the duplicates (in this case a12 and b11 only) and update the fields so the new records become:

a12_1,123,ddd,fff
a12_2,345,ggg,hhh
a12_3,567,fff,lll
b11_1,eee,ttt,yyy
b11_2,rrr,hhh,jjj
c11,ggg,uuu,ttt

Can anyone suggest how to do this in SQL or PL/SQL?

If you look closely, most of the time we use cursors to iterate through a row collection and update the same table. In these situations it is better to use an update cursor than the default read-only one. You can also override the default locking to tune the behaviour (for example, if you need consistent data and/or exclusive access for the duration of a whole, more complex transaction).
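One way to do this in a single SQL statement is to number each row within its duplicated key and merge the new values back by ROWID, which sidesteps the missing primary key. The table name (dup_tab) and column name (key_col) below are placeholders for the real ones:

```sql
-- Sketch: append "_n" to each duplicated key, numbering rows by ROWID.
-- dup_tab and key_col are assumed names.
MERGE INTO dup_tab t
USING (
    SELECT ROWID AS rid,
           key_col || '_' ||
           ROW_NUMBER() OVER (PARTITION BY key_col ORDER BY ROWID) AS new_key
    FROM   dup_tab
    WHERE  key_col IN (SELECT key_col
                       FROM   dup_tab
                       GROUP  BY key_col
                       HAVING COUNT(*) > 1)   -- only duplicated keys
) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN
    UPDATE SET t.key_col = s.new_key;
```

Joining on ROWID is what makes this work on a table with no key at all; rows whose key value is unique (c11 in the example) are never touched.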
When your application requires consistent data for the duration of a transaction, unaffected by changes made by other transactions, you can achieve transaction-level read consistency by using explicit locking, read-only transactions, serializable transactions, or by overriding the default locking. On the face of it, this feature appears quite useful, especially in reducing the amount of code we need to write and maintain. However, there are two major shortcomings with this feature, as follows.

Row locking at the transaction level can be achieved with a SELECT ... FOR UPDATE statement. The locks are released only on commit or rollback.

What I love about writing SQL tuning articles is that I very rarely end up publishing the findings I set out to achieve. We have a table containing years' worth of data, most of which is static; we are updating selected rows that were recently inserted and are still volatile. For the purposes of the test, we will assume that the target table of the update is arbitrarily large, and that we want to avoid operations such as full scans and index rebuilds.
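As a sketch of an update cursor with transaction-level row locking, consider the following PL/SQL block. The ORDERS table and its columns are hypothetical, used only to show the FOR UPDATE / WHERE CURRENT OF pattern:

```sql
-- Sketch only: the ORDERS table and its columns are assumed.
DECLARE
    CURSOR c_recent IS
        SELECT order_id
        FROM   orders
        WHERE  order_date > SYSDATE - 1
        FOR UPDATE;                      -- locks the selected rows when opened
BEGIN
    FOR r IN c_recent LOOP
        UPDATE orders
        SET    status = 'PROCESSED'
        WHERE  CURRENT OF c_recent;     -- updates the row just fetched
    END LOOP;

    COMMIT;                              -- locks released only at COMMIT/ROLLBACK
END;
/
```

Because the locks acquired by FOR UPDATE persist until the commit at the end, other sessions cannot modify the fetched rows mid-loop, which is exactly the consistent, exclusive access described above.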