Dear MySQL users, MaxDB users and friends,
In this issue of the MaxDB series we will introduce you to some selected aspects of transactions. The topic of transactions may seem boring, but in fact that depends very much on the point of view you take. We have tried to shed light on some rather advanced aspects of transactions that are a little less often discussed in database classes. If you are new to the subject of transactions, make sure you read an introduction to the concept before you continue. The MySQL Developer Zone features a good one: the article MySQL Tutorial Chapter 10: Using Transactions with InnoDB Tables is an excerpt from the book "MySQL Tutorial - A concise introduction to the fundamentals of working with MySQL" written by Luke Welling and Laura Thomson.
All SQL commands executed in MaxDB are embedded in ACID-compliant transactions. MaxDB does not have different storage engines like the MySQL Server. There is only one storage engine in MaxDB, and it is a transactional one, just like in many other databases such as PostgreSQL, DB2, Oracle, SQL Server and so on.
Whenever you open an SQL session, you open a transaction in MaxDB. Either you are running in autocommit mode, which is the default in SQL Studio, or you are using the SQL commands COMMIT and ROLLBACK to control the flow of a transaction. This should be no surprise to you, but have you ever heard of subtransactions?
Technically speaking, subtransactions are a first step from a flat transaction model towards a nested transaction model. In a flat model you cannot nest transactions: you have to end a running transaction before you can start a new one, and starting a transaction within another transaction is not permitted. For most applications and in most transaction processing systems this is all you need. Because it is all you usually need and it is simpler to implement, most database systems offer a flat transaction model.
The transaction model of MaxDB is a bit more powerful and goes one step in the direction of nested transactions. A nested transaction model allows you to start a transaction within another transaction. This is particularly useful if you have long transactions where a rollback is very expensive.
Here is the basic idea. Say you are developing a travel shop with one unique selling point: you do not sell only individual tickets but bundles of individually selected tickets. This means your customer can book a ticket for their trip from Hamburg to Berlin, a hotel reservation, a ticket from Berlin to Munich, another hotel and a flight back in one step, in one transaction. Booking all these tickets and hotels requires many individual steps in the booking transaction, and some of them might be expensive. What if all steps of the transaction but the flight booking can be executed successfully? With flat transactions you could do nothing but roll back the entire transaction and start from the very beginning. With nested transactions, however, you could wrap the flight booking in its own subtransaction (nested transaction) within the booking transaction. If the flight booking subtransaction fails for some reason, the outer booking transaction can decide whether the entire transaction should be rolled back, whether the deal should be closed without the flight, whether another flight should be booked, and so on. This extra control might be exactly the feature that is needed to prevent the expensive rollback of the entire transaction.
In MaxDB you can start a nested transaction with the SUBTRANS statement. Make sure that you turn off the autocommit mode of your client before you try the following example. If you are using the SQL Studio, check the appropriate select box to turn the autocommit mode off or use the menu entry Session -> Settings -> Query Dialog to disable autocommit. In SQLCLI you can use "\a off" to disable autocommit.
-- Turn off autocommit in advance!
BEGIN TRANSACTION
CREATE TABLE subtrans_test(id INT)
INSERT INTO subtrans_test(id) VALUES (1)
SUBTRANS BEGIN
INSERT INTO subtrans_test(id) VALUES (2)
SELECT * FROM subtrans_test
SUBTRANS ROLLBACK
SELECT * FROM subtrans_test
ROLLBACK
The example above shows one transaction with a nested subtransaction. In the outer transaction a table "subtrans_test" gets created and one record gets inserted into it. After that a subtransaction is started with SUBTRANS BEGIN. Within the subtransaction another record is inserted and the result is made visible with the SELECT. Then the subtransaction is rolled back using SUBTRANS ROLLBACK, and a SELECT is executed to verify that the table "subtrans_test" contains only one record and the effects of the subtransaction have been undone, before the entire (outer) transaction is rolled back.
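The SUBTRANS syntax is specific to MaxDB, but if you want to experiment with the same idea in another environment, standard SQL savepoints behave very similarly. The following Python sketch replays the example above using SQLite's SAVEPOINT; the savepoint name `sub1` is ours, not part of the original example.

```python
import sqlite3

# SAVEPOINT here plays the role of MaxDB's SUBTRANS BEGIN/ROLLBACK.
con = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
cur = con.cursor()

cur.execute("BEGIN")
cur.execute("CREATE TABLE subtrans_test(id INT)")
cur.execute("INSERT INTO subtrans_test(id) VALUES (1)")

cur.execute("SAVEPOINT sub1")                            # ~ SUBTRANS BEGIN
cur.execute("INSERT INTO subtrans_test(id) VALUES (2)")
rows_inside = cur.execute(
    "SELECT id FROM subtrans_test ORDER BY id").fetchall()  # [(1,), (2,)]

cur.execute("ROLLBACK TO SAVEPOINT sub1")                # ~ SUBTRANS ROLLBACK
rows_after = cur.execute(
    "SELECT id FROM subtrans_test ORDER BY id").fetchall()  # [(1,)]

cur.execute("ROLLBACK")                                  # outer transaction rolled back
```

Only the insert made inside the savepoint is undone; the outer transaction keeps running until its own ROLLBACK.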
-- Turn off autocommit in advance!
BEGIN TRANSACTION
CREATE TABLE subtrans_test(id INT)
INSERT INTO subtrans_test(id) VALUES (1)
SUBTRANS BEGIN
INSERT INTO subtrans_test(id) VALUES (2)
SELECT * FROM subtrans_test
SUBTRANS BEGIN
INSERT INTO subtrans_test(id) VALUES (3)
SUBTRANS END
SUBTRANS ROLLBACK
SELECT * FROM subtrans_test
ROLLBACK
You may nest several subtransactions: you can start a subtransaction from within another subtransaction. However, note that if you nest subtransactions, the outermost SUBTRANS ROLLBACK or SUBTRANS END determines whether the actions made by all the nested subtransactions are rolled back or committed.
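The same "outermost wins" behavior can be observed with nested standard SQL savepoints. Here is a small Python/SQLite sketch (the savepoint names are ours) mirroring the example above: the inner subtransaction ends successfully, but the rollback of the outer subtransaction discards its work as well.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
cur = con.cursor()

cur.execute("BEGIN")
cur.execute("CREATE TABLE subtrans_test(id INT)")
cur.execute("INSERT INTO subtrans_test(id) VALUES (1)")
cur.execute("SAVEPOINT outer_sub")                       # ~ first SUBTRANS BEGIN
cur.execute("INSERT INTO subtrans_test(id) VALUES (2)")
cur.execute("SAVEPOINT inner_sub")                       # ~ second SUBTRANS BEGIN
cur.execute("INSERT INTO subtrans_test(id) VALUES (3)")
cur.execute("RELEASE SAVEPOINT inner_sub")               # ~ SUBTRANS END
cur.execute("ROLLBACK TO SAVEPOINT outer_sub")           # ~ outermost SUBTRANS ROLLBACK

# The inner subtransaction ended with "commit", but the outer rollback
# undoes its insert too: only the first record survives.
rows = cur.execute("SELECT id FROM subtrans_test ORDER BY id").fetchall()
cur.execute("ROLLBACK")
```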
Before you start using subtransactions, for example to isolate the effects of stored procedures, check the MaxDB manual for more details. One detail you should know about is that subtransactions do not affect locks assigned to the (outer) transaction.
Locks? Yes. Locks play an important role in databases: they synchronize concurrent access to shared resources. Such resources can be any kind of memory area used by the database server, like the I/O buffer cache, or SQL objects like tables during a transaction. The fewer locks need to be set, the less work needs to be done and the lower the probability that one transaction has to wait for another transaction to release a lock on an SQL object before it can continue its work. In other words: the number of locks has a direct influence on concurrency and parallelism.
Which locks are implicitly set during the execution of a transaction can be controlled with the isolation levels. MaxDB supports all four ANSI SQL isolation levels: (1) READ UNCOMMITTED, (2) READ COMMITTED, (3) REPEATABLE READ and (4) SERIALIZABLE. The list is sorted by the number of locks that MaxDB sets. This means that the fewest locks are set by the database when using the level READ UNCOMMITTED. Therefore the probability of collisions and waits is lowest when you use that level, and the parallelism is likely to be the best. The problem is that three phenomena can occur in all isolation levels but SERIALIZABLE: Dirty Read, Non-Repeatable Read and Phantom. In general you do not want to see any of them happen in your applications. So you have to decide whether you go for the isolation level SERIALIZABLE to avoid the unwanted phenomena and lose some degree of concurrency, or whether you can accept that some phenomena can occur.
Dirty Read describes a situation in which a transaction T1 changes a row and a transaction T2 reads that row before T1 ends with COMMIT. If T1 then performs a ROLLBACK, T2 has read a row that never existed.
|Transaction T1|Transaction T2|
|INSERT INTO table1 (column1) VALUES (3)| |
| |SELECT * FROM table1|
|ROLLBACK, removes the newly inserted row from table1|continues to use a record that never existed|
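If you would like to see a Dirty Read happen on your own machine without a MaxDB instance, SQLite can reproduce it when two connections share one cache and the reader opts into uncommitted reads. This is a sketch of the phenomenon only, not of MaxDB's locking; the database name `dirty_demo` and the shared-cache URI are SQLite specifics.

```python
import sqlite3

# Two connections (our T1 and T2) to one shared-cache in-memory database.
db = "file:dirty_demo?mode=memory&cache=shared"
t1 = sqlite3.connect(db, uri=True, isolation_level=None)
t2 = sqlite3.connect(db, uri=True, isolation_level=None)
t2.execute("PRAGMA read_uncommitted = true")   # ~ READ UNCOMMITTED for T2

t1.execute("CREATE TABLE table1(column1 INT)")  # committed (autocommit)

t1.execute("BEGIN")
t1.execute("INSERT INTO table1 (column1) VALUES (3)")
# T2 reads the row although T1 has not committed: a Dirty Read.
dirty = t2.execute("SELECT column1 FROM table1").fetchall()
t1.execute("ROLLBACK")
# After T1's rollback the row T2 saw never existed.
after = t2.execute("SELECT column1 FROM table1").fetchall()
```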
A Non-Repeatable Read can happen in the following situation: transaction T1 reads a row. Then transaction T2 modifies that row and ends with COMMIT. If T1 reads the row again, it either gets the modified row or a message indicating that the row no longer exists.
|Transaction T1|Transaction T2|
|SELECT * FROM table1 WHERE column1 = 3| |
| |DELETE FROM table1 WHERE column1 = 3, then COMMIT|
|Row no longer exists: UPDATE table1 SET column2 = 'a' WHERE column1 = 3| |
A Phantom is a new record that should not be visible to a transaction. Transaction T1 executes a query and retrieves a result set of n rows. Then transaction T2 creates at least one new record that meets the search conditions used to build the result set retrieved by T1. If T1 executes the search again, the new record created in transaction T2 becomes visible to T1. This breaks the isolation rule of ACID transactions.
|Transaction T1|Transaction T2|
|SELECT column2 FROM table1 WHERE column1 = 3, returns: 1, 2, 3| |
| |INSERT INTO table1(column2, column1) VALUES (4, 3)|
|SELECT column2 FROM table1 WHERE column1 = 3, returns: 1, 2, 3 plus the Phantom 4| |
Again, more information on the isolation levels and the locks that are implicitly set can be found in the manual. Use the Glossary to navigate to the related manual pages. Every database user, no matter whether database administrator or database software developer, should know the basic properties of transactions, isolation levels and locks. Once you are familiar with these properties from a user's standpoint, you should continue to dive into the subject and try to understand how your database system has implemented transactions internally.
Databases write two types of log entries for transactions: one log is called the redo log, the other one the undo log. When a database user starts a transaction and executes some commands, the database does not know whether the user will end the transaction with COMMIT or ROLLBACK. If they end it with COMMIT, then all changes made in the course of the transaction have to be made durable. If the transaction gets interrupted for some reason, be it that the client disconnects before sending a COMMIT, the system crashes or the user sends an explicit ROLLBACK, then all changes made by the transaction need to be undone.
This makes clear why a database needs undo log records. Undo log records are also referred to as before-images, because they store the state of the database before a transaction changed it. For example, if a transaction changes the value of a column from 3 to 4, the database writes an undo record to remember the overwritten value of the modified column. If the transaction gets rolled back, the database looks up the before-image of the modified column and sets the column value back from 4 to its original value of 3. MaxDB stores undo information on the data volumes.
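As a toy illustration (plain Python, nothing MaxDB-specific), a before-image undo log can be sketched in a few lines: every update remembers the old value, and a rollback replays the remembered values in reverse order.

```python
# A single "table" as a dict, plus an undo log of before-images.
table = {"col": 3}
undo_log = []

def update(key, new_value):
    undo_log.append((key, table[key]))  # before-image: remember the old value
    table[key] = new_value

def rollback():
    while undo_log:
        key, old_value = undo_log.pop()  # undo the newest change first
        table[key] = old_value

update("col", 4)   # table["col"] is now 4, undo log holds ("col", 3)
rollback()         # table["col"] is back to 3
```

Undoing in reverse order matters as soon as the same column is changed several times within one transaction.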
Storing the undo information on the data volumes is beneficial if you lose all log volumes due to a hardware outage. In that case MaxDB can undo all unfinished transactions and restore a transaction-consistent state on the data volumes without the log volumes. Once you are sure you have understood the meaning of undo records, ask yourself how it can theoretically happen that during the execution of a DELETE or UPDATE operation the usage level of the data volumes temporarily increases instead of being reduced. The answer is that in the worst case the garbage collector (database parameter _MAXGARBAGE_COL) cannot remove undo log records as fast as you are implicitly creating new ones. As a result the usage level of the data volumes increases and, in the worst case, the volumes run full. However, I've never seen this in real-life applications!
Undo log records are important in case of a rollback, but what do you need a redo log for? The D in ACID stands for Durability: after the successful execution of COMMIT, all modifications made by the transaction must be stored on a persistent, durable medium. MaxDB could write the modified records directly to the places where they are stored, but this would result in a lot of random write operations spread over the data volumes. Random write operations are slower than simple sequential write operations on a log file.
In times of 64-bit computers and affordable prices for RAM, most database servers will use huge I/O buffers with hit rates above 98%. That means that most read and write operations will be done in main memory, and it is likely that a modification made in the cache will soon be overwritten again. Therefore it is great if you do not need to flush all modified pages from the buffer to disk on COMMIT, but can simply write a sequential redo log. Then, from time to time, you write the modified pages to the data volumes to get a transaction-consistent state on the data volumes. Flushing all modified pages from the buffer cache to disk is called a savepoint in MaxDB. Savepoints are written at regular intervals (see also the database parameter _RESTART_TIME). If the database server crashes between two savepoints, it will replay all redo log entries up to the time of the crash to recover the state of the database when the crash occurred.
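The interplay of buffer cache, sequential redo log and savepoint can be sketched with a toy model (plain Python, not MaxDB internals): committed changes go to the redo log immediately, the "data volume" only receives a full image at a savepoint, and crash recovery replays the redo log on top of the last savepoint image.

```python
data_volume = {}    # state as of the last savepoint
buffer_cache = {}   # modified pages held in main memory
redo_log = []       # sequential redo entries, one append per commit

def commit(key, value):
    buffer_cache[key] = value
    redo_log.append((key, value))    # cheap sequential write, makes it durable

def savepoint():
    # Flush all modified pages to the data volume in one go.
    data_volume.update(buffer_cache)

def crash_recovery():
    # The buffer cache is lost in a crash; rebuild the committed state
    # from the last savepoint image plus the redo log (replay is idempotent).
    recovered = dict(data_volume)
    for key, value in redo_log:
        recovered[key] = value
    return recovered

commit("a", 1)
savepoint()
commit("b", 2)                 # after the savepoint: only in cache and redo log
recovered = crash_recovery()   # {"a": 1, "b": 2}, nothing committed is lost
```

This also shows why losing the redo log loses exactly the work since the last savepoint, as discussed below.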
For you all this means that you should use the fastest disks you can afford for the log volumes and you should always mirror the log volumes with the redo log entries. The disks must be fast because their speed influences how long a transaction has to wait for a COMMIT to be completed. Mirroring should be used to reduce the risk of a hardware failure. Remember that if you lose the log volumes due to a crash, you lose all changes made since the last savepoint. By default this can be up to some 10 minutes of work.
To further speed up operations, MaxDB does not write every redo record immediately to disk. Instead, all records are first written into an in-memory log queue. There can be up to MAX_LOG_QUEUE_COUNT log queues in MaxDB 7.6. The actual number is calculated automatically or can be defined using the database parameter LOG_QUEUE_COUNT. Every log queue has a size of LOG_IO_QUEUE pages (8 KB page size). Log queue entries are written to disk by dedicated log writer tasks. If possible, MaxDB tries to bundle write operations and redo records into larger blocks. Writing larger blocks is usually much faster than writing individual records, and you have the chance of writing the records of several transactions to disk with only one physical I/O operation.
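The bundling idea, often called group commit, can be reduced to a toy sketch (plain Python, not MaxDB code): records accumulate in an in-memory queue and a single flush writes them all in one "physical" block.

```python
log_queue = []     # in-memory log queue
disk_blocks = []   # each element stands for one physical I/O operation

def enqueue(record):
    log_queue.append(record)

def flush():
    # One larger block per flush instead of one I/O per record.
    if log_queue:
        disk_blocks.append(list(log_queue))
        log_queue.clear()

enqueue(("t1", "commit"))
enqueue(("t2", "commit"))
flush()   # the records of two transactions hit disk with a single write
```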
Therefore, every MaxDB database administrator should not only take extra care of the log volumes and put them on the fastest disks they can afford, but should also keep an eye on the monitoring information and the values of LOG QUEUE OVERFLOWS and LOG QUEUE MAX USED ENTRIES. The monitoring can be done using the Database Analyzer, the information screen of the Database Manager GUI, or by checking the assorted LOG* system tables, for example the table LOGQUESTATISTICS. The latter gives you another interesting piece of information in the column WAITCOUNT, which describes how many transactions are currently waiting for a COMMIT/ROLLBACK to be completed.
In the next issue of the MaxDB series we will have a short look at some locking details. Stay tuned and happy hacking!
Read all MaxDB series articles: maxdb_class.pdf