Get Rid of High VLF Easily
DBAs! I guess everyone knows that a huge number of Virtual Log Files (VLFs) in SQL Server can slow down the Backup/Restore and Database Recovery processes. In rare cases, it can even introduce slowness at the transaction level.
https://sqlperformance.com/2013/02/system-configuration/transaction-log-configuration
Even though we are all aware of this, removing high VLF counts from SQL Server log files has remained a challenge, since it involves a lot of manual effort shrinking and re-growing the log files.
This is where my ‘Space-Capacity-Automation’ open source project comes to your rescue. It has a parameter option @optimizeLogFiles that can help you optimize your log files by performing the tasks below:
- Find all log files whose VLF count is greater than @vlfCountThreshold (default = 500) and also crosses the ideal VLF count
- Shrink each such log file to release the maximum possible space
- Re-grow the log file to its original size in chunks of the appropriate auto-growth increment (see next point)
- Set the AutoGrowth to an appropriate size using the following logic:
if Original Log File size <= 10 GB, set AutoGrowth of 1 GB
else if Original Log File size > 10 GB and <= 65 GB, set AutoGrowth of 2 GB
else if Original Log File size > 65 GB, set AutoGrowth of 8 GB
So with the above AutoGrowth algorithm, the new VLF count generated for the databases should never cross 512 (SQL Server creates at most 16 VLFs per growth increment, so even a 64 GB log re-grown in 2 GB chunks ends up with at most 32 × 16 = 512 VLFs), thus optimizing the configuration of your log files.
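If you want to verify VLF counts for yourself before and after running the procedure, a quick check like the sketch below works; it assumes SQL Server 2016 SP2 or later, where sys.dm_db_log_info is available (on older builds, DBCC LOGINFO per database returns one row per VLF instead):

-- Minimal sketch: current VLF count per database (SQL Server 2016 SP2+)
SELECT d.name   AS database_name,
       COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
WHERE d.state_desc = 'ONLINE'   -- skip offline/restoring databases
GROUP BY d.name
ORDER BY vlf_count DESC;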
Want to know how to use the procedure? Simply execute it with the @help = 1 parameter.
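For example, against the procedure installed by the project (the same name used in the examples further below):

-- Print the built-in help: syntax and usage examples are written to the Messages tab
EXEC [dbo].[usp_AnalyzeSpaceCapacity] @help = 1;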
If we switch to the Messages tab of the result set, we find the TSQL syntax and some examples for each of the main parameters.
As we can see, the @optimizeLogFiles functionality of the procedure supports four additional parameters, namely @vlfCountThreshold, @DBs2Consider, @forceExecute, and @generateLogBackup.
-- Generate TSQL code to optimize log files where VLF count is greater than the default (500)
EXEC [dbo].[usp_AnalyzeSpaceCapacity] @optimizeLogFiles = 1;

-- Directly optimize the log files where VLF count is greater than 1000
EXEC [dbo].[usp_AnalyzeSpaceCapacity] @optimizeLogFiles = 1, @vlfCountThreshold = 1000, @forceExecute = 1;

-- Generate TSQL code to optimize log files of the 'unet, Test1Db, MirrorTestDB' databases if the VLF count is greater than 1000
EXEC [dbo].[usp_AnalyzeSpaceCapacity] @optimizeLogFiles = 1, @DBs2Consider = 'unet, Test1Db, MirrorTestDB', @vlfCountThreshold = 1000;

-- Directly optimize the log files of the 'unet, Test1Db, MirrorTestDB' databases if the VLF count is greater than 1000
EXEC [dbo].[usp_AnalyzeSpaceCapacity] @optimizeLogFiles = 1, @DBs2Consider = 'unet, Test1Db, MirrorTestDB', @vlfCountThreshold = 1000, @forceExecute = 1;
Below is the sample output that the procedure generates when the @forceExecute parameter is left at its default value (0).
/* ******************************************************************************************************
   TSQL Code to remove high VLF Count for [DSGMantra] database.
   Currently the log file 'DSGMantra_log' has 626 VLFs which is more than @vlfCountThreshold (500).
   Shrinking the log file 'DSGMantra_log' to minimum possible size, and
   trying to re-grow it to actual size of 30788 MB in chunks of 2GB.
   -- https://dba.stackexchange.com/a/180150/98923
   -- https://sqlperformance.com/2013/02/system-configuration/transaction-log-configuration
****************************************************************************************************** */

-- Step 1: Backup of the transaction log.
-- No action required as either database is in SIMPLE RECOVERY model
-- or @generateLogBackup is set to 0

-- Step 2: Shrink the log file
USE [DSGMantra];
DBCC SHRINKFILE (N'DSGMantra_log', 0, TRUNCATEONLY);
GO

-- Step 3: Grow the log file back to the desired size (30788 MB) from size 0 GB,
-- and with FILEGROWTH of 2GB.
-- In case of manual execution, please comment out MODIFY statements where the SIZE mentioned
-- is less than CurrentSize from the output of DBCC SHRINKFILE.
USE [master];
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 2GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 4GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 6GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 8GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 10GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 12GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 14GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 16GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 18GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 20GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 22GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 24GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 26GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 28GB);
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', SIZE = 30GB);
GO

USE [master];
ALTER DATABASE [DSGMantra] MODIFY FILE (NAME = N'DSGMantra_log', FILEGROWTH = 2GB);
GO
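Note that Step 1 is a no-op above only because the database is in the SIMPLE recovery model (or @generateLogBackup is left at 0). For a database in the FULL recovery model, a transaction log backup would be taken first so that the subsequent shrink can actually reclaim space; a hand-rolled equivalent would look roughly like the sketch below (the backup path is just a placeholder, not something the procedure uses):

-- Placeholder path: adjust to your backup location before running
BACKUP LOG [DSGMantra]
    TO DISK = N'X:\SQLBackups\DSGMantra_log.trn'
    WITH COMPRESSION, STATS = 10;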
For a short demo on how to use this procedure, kindly check the YouTube video below:
Thanks, friends! I hope this has been helpful. Kindly raise an issue on GitHub in case you find any errors while using this automation procedure.
Hi,
How can we ensure data integrity in a database?
Hi Nirav,
Data integrity is a completely different topic. You can enforce data integrity using constraints such as check constraints, foreign key constraints, unique key constraints, primary key constraints, default constraints, triggers, and procedures.
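For instance, a small illustrative table (all object names here are made up, and a dbo.Customers table is assumed to exist) combining several of these declarative constraint types:

-- Illustrative only: one table using several declarative integrity constraints
CREATE TABLE dbo.Orders
(
    OrderId     INT          NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    CustomerId  INT          NOT NULL CONSTRAINT FK_Orders_Customers
                                      REFERENCES dbo.Customers (CustomerId),
    OrderNumber VARCHAR(20)  NOT NULL CONSTRAINT UQ_Orders_OrderNumber UNIQUE,
    OrderDate   DATETIME2(0) NOT NULL CONSTRAINT DF_Orders_OrderDate DEFAULT (SYSDATETIME()),
    Quantity    INT          NOT NULL CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)
);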
You should check Chapter 01 of the book ‘SQL Server 2012 T-SQL Fundamentals’ by Itzik Ben-Gan.
Thanks & Regards,
Ajay Dwivedi