Set transaction log size and growth
What is the best practice for setting the transaction log's initial size, growth increment, and number of VLFs? Many articles on the net cover this theme, and some of them suggest really wrong concepts, like setting the initial size of the log as a percentage of the database size (e.g. 10% or 25%) – that has no reasoning behind it at all, and I’ll explain why. You will also get a method to determine a proper size for your transaction log.
Transaction log initial size
The rule of thumb for initial size is:
Set it large enough so it will not grow during normal operation.
If you have set some initial size and the log still grows without anything extraordinary happening to the db (your log backup job broke, someone accidentally left a transaction open forever, etc.), your size estimate was too low. That simple. If it grows, the initial size was not good; it was too small.
The most precise method to determine the right initial size is to wait at least a week or one full business cycle (a month, including maintenance operations like index rebuilds), and see what size the log grows to. That is the initial size you should keep. Never shrink the log below that value, because that would just waste IO operations (and increase fragmentation and the number of VLFs, hurting performance), as the log will grow right back to that size!
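A simple way to watch the log size and how full it actually gets over that period is a sketch like this (sys.dm_db_log_space_usage is available from SQL Server 2012 on; on older versions DBCC SQLPERF gives the same information for all databases):

```sql
-- Current log size and used percentage for the current database
SELECT
    total_log_size_in_bytes / 1048576.0  AS log_size_mb,
    used_log_space_in_percent            AS log_used_pct
FROM sys.dm_db_log_space_usage;

-- Pre-2012 alternative: one row per database, log size and space used %
DBCC SQLPERF(LOGSPACE);
```

Log these values periodically (e.g. from an Agent job into a table) and take the peak over the whole business cycle.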
If you do not want to wait that long, you can roughly estimate the log space needed to rebuild your largest index in the db (usually the clustered index of the biggest table) and set the initial log size to that value. That is the “fit the largest transaction” method. Imprecise, but very simple.
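To find that largest index, a query along these lines (a sketch; the size is in 8KB pages, so it is multiplied out to MB) lists the biggest indexes in the current database:

```sql
-- Top 5 largest indexes, as a rough upper bound for the log space
-- an offline rebuild of that index could generate
SELECT TOP (5)
    OBJECT_NAME(ps.object_id)                 AS table_name,
    i.name                                    AS index_name,
    SUM(ps.used_page_count) * 8 / 1024        AS index_size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
GROUP BY ps.object_id, i.index_id, i.name
ORDER BY index_size_mb DESC;
```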
Only in the full and bulk-logged recovery models can you use another method: measure the maximum transaction log generation rate, multiply it by the time between two transaction log backups, and set that as the initial transaction log size (a shorter interval between log backups means a smaller log). You can measure the log generation rate of your application by monitoring the System Monitor counter Log Bytes Flushed/sec for the database. This method is also imprecise because it is a prediction, but it is a starting point. Later you will have to check, and probably adjust, the initial log size using the precise method described first, which takes time.
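The same counter is also exposed through a DMV, so you can sample it from T-SQL instead of System Monitor. One caveat: the value is cumulative, so you need two samples and the delta divided by elapsed seconds to get the actual rate (the database name below is a placeholder):

```sql
-- Cumulative Log Bytes Flushed/sec counter for one database.
-- Sample twice, subtract, and divide by the elapsed seconds
-- to get the real bytes-per-second rate.
SELECT instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log Bytes Flushed/sec'
  AND instance_name = 'YourDatabase';   -- placeholder database name
```

Multiply the peak rate by your log backup interval in seconds to get the starting size estimate.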
We know how to detect whether the initial size is too low (if the log grows, it is too low), but how do you detect whether your log is too large? If the log never grows and the log used percentage never gets anywhere near 100%, you have probably over-sized your transaction log. To detect that reliably, you have to monitor the percentage of the log file used over a longer period of time, not just a few days. Over-sizing does not hurt performance (as opposed to under-sizing, which does); it only wastes disk space. So, it is better to over-size than to under-size the initial transaction log size.
Tip: If the db is in the full recovery model and rebuilding indexes has the biggest impact on log size (you can see that if the log used percentage only climbs close to 100% during index rebuilds and is very low at other times), switch to the bulk-logged recovery model temporarily during index rebuilds. That will probably significantly decrease the transaction log size you need.
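The temporary switch can be scripted around the maintenance step like this (a sketch; database, table, and backup path are placeholders). Note that a minimally logged rebuild breaks point-in-time recovery for that log interval, so take a log backup right after switching back to full:

```sql
-- Temporarily switch to BULK_LOGGED around index maintenance
ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;

ALTER INDEX ALL ON dbo.YourBigTable REBUILD;  -- placeholder table

ALTER DATABASE YourDatabase SET RECOVERY FULL;

-- Log backup restores the point-in-time recovery chain going forward
BACKUP LOG YourDatabase TO DISK = N'X:\Backup\YourDatabase.trn';
```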
Set transaction log file growth
This is almost as important as the initial size. By default it is 10% of the log size, and you should definitely change that. Set it as a fixed size in MB, never a percentage. If you set it too low, you will end up with double fragmentation of the transaction log: the file itself becomes fragmented, and too many VLFs are created, which can also slow down transaction log operations. If you set the file growth too high, keep in mind that instant file initialization does not work for transaction log files. That means every time the transaction log grows, the newly allocated chunk has to be zeroed out (overwritten with zeroes). During that operation, all write activity in the db stops until your disks finish zeroing out the new chunk! So you definitely do not want to set it too high, either!
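Changing the growth increment to a fixed size is a one-line ALTER DATABASE (a sketch; database and logical file names are placeholders, and the logical name can be looked up in sys.database_files):

```sql
-- Find the logical name of the log file
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Replace the default 10% growth with a fixed 128 MB increment
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 128MB);
```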
The rule of thumb is to set it as large as possible, but not so large that users notice the wait caused by zeroing out. That is somewhere between 50 and 512 MB, depending mostly on the speed of the disks, and a bit on the size of the log. I usually set it to 50MB for transaction logs that are really small; a larger growth increment makes no sense if the total log size is e.g. 10MB. For larger logs, a file growth of 128MB or 256MB is fine on average-fast disks, and 512MB on really fast disks. For example, if your storage's sequential write performance at queue depth one is 50MB/s and you set the log file growth to 500MB, all write activity in the db will stop for 10 seconds each time log growth occurs. Bear that in mind, because users will probably notice!
Remember, log growth normally should not happen at all during normal db operation if you have properly set the initial size of transaction log!