Your SQL Server may be slow for a peculiar reason: a runaway query that, like a huge wave, pushes all other queries out of the plan cache. A so-called “tsunami query”. A tsunami query wipes other plans from the cache and makes your SQL Server run slow. Creating a tsunami query is easy and usually not intentional: a developer concatenates values into a SQL command string. As it “works on my machine”, it easily slips into production (nobody tests for this, right?), and the monster is loose! Read more ›
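A minimal sketch of the cause and the usual fix, assuming a hypothetical dbo.Orders table: concatenating a value into the command text makes every distinct value a brand-new SQL text, so every execution compiles and caches its own single-use plan. Passing the value as a parameter via sp_executesql keeps the text constant, so one plan is cached and reused.

```sql
DECLARE @id int = 42;  -- hypothetical lookup value

-- BAD: each distinct @id yields a different SQL text and a new cached plan.
DECLARE @sql nvarchar(max) =
    N'SELECT * FROM dbo.Orders WHERE OrderId = ' + CAST(@id AS nvarchar(20));
EXEC (@sql);

-- GOOD: constant SQL text, one parameterized plan reused for every @id value.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE OrderId = @id',
    N'@id int',
    @id = @id;
```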
You might have a lot of virtual machines (VMs) for Dev and Test environments, and storage with a huge amount of space to fit all the databases. What are the options? Probably the best for performance would be to connect to the storage’s LUNs directly from the guest OS (Windows iSCSI Initiator). But if you want something much, much simpler and still efficient, read on…
Docker is perfect for launching DEV and TEST SQL Server environments when you need many independent SQL instances, or the ability to start and stop instances in seconds, only when you need them. Launched from one common image (1GB in size), all SQL instance “containers” share that common image, consuming almost no additional space, except for the databases you add. That means super-low disk space consumption. The host’s memory (RAM) is also shared, used as needed by every instance, and without the RAM footprint of the OS and apps we have in classic VMs. Therefore, many more SQL instances fit into the same amount of RAM. One can launch different SQL Server editions from the same image within seconds, as demonstrated in the video.
You developed new T-SQL code and want to check whether there is a slow SQL statement inside, and which one it is? Or you are debugging the code and want to know which statement throws an error, maybe inside a trigger, or a computed column that calls a failing function, where it is not immediately visible what is happening? Or you are testing an application in a QA environment and want to catch SQL errors or slow statements, to return them to dev for correction before problems hit production? Then read on…
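One native way to capture both failing and slow statements (a sketch, not necessarily the exact method the post describes) is an Extended Events session; the session name, target file name, and 1-second threshold below are arbitrary choices.

```sql
CREATE EVENT SESSION [errors_and_slow] ON SERVER
ADD EVENT sqlserver.error_reported (
    WHERE ([severity] > 10)),                 -- real errors, not informational messages
ADD EVENT sqlserver.sql_statement_completed (
    WHERE ([duration] > 1000000))             -- duration is in microseconds: > 1 second
ADD TARGET package0.event_file (SET filename = N'errors_and_slow.xel');

ALTER EVENT SESSION [errors_and_slow] ON SERVER STATE = START;
```

Reading the .xel file afterwards shows exactly which statement failed or was slow, including statements hidden inside triggers and functions.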
Do you use ROWVERSION/TIMESTAMP to pull changed data?
If you do, you might experience a problem: not all changed rows are pulled, some are skipped. How can that be, if we pulled the entire range of rowversion changes with no gaps?
Let me illustrate it in this video, with a demo inside:
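The usual culprit is an in-flight transaction: it already obtained a rowversion value below your current maximum but has not committed yet, so the next pull’s lower bound jumps past it. A common remedy, sketched here with a hypothetical dbo.Orders table and rv rowversion column, is to cap every pull at MIN_ACTIVE_ROWVERSION() - 1 instead of MAX(rv):

```sql
DECLARE @last_rv binary(8) = 0x0;   -- high-water mark saved by the previous pull

-- Everything at or below this value is guaranteed committed or rolled back:
DECLARE @safe_rv binary(8) =
    CAST(CAST(MIN_ACTIVE_ROWVERSION() AS bigint) - 1 AS binary(8));

SELECT *
FROM dbo.Orders
WHERE rv > @last_rv
  AND rv <= @safe_rv;   -- rows above @safe_rv may still belong to open transactions

-- Persist @safe_rv as the new @last_rv for the next pull.
```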
In a few days I am traveling to Iceland for the first time in my life. A land of lava, ice caves and Aurora Borealis. There is a SQL Saturday conference organized by the local SQL community that I am looking forward to meeting. There are a lot of interesting lectures you can attend there (besides mine, of course) – this is the full schedule.
If anyone is interested in meeting me, to talk about SQL Server, life, or any other topic, there will be an opportunity in Reykjavik at the conference, or over a drink with me on Sunday after the conference (11th May 2018) – you are welcome to contact me.
Is your server sometimes slow and you want to know why? Here is a lightweight performance logging kit which will give you the answer. It supports SQL Server and Azure SQL Database, and the perfmon part can be used on any other Windows machine to log performance data, even without SQL Server. This lightweight logging is already in use on many production systems with mission-critical workloads. The advantage is that you are NOT installing any new tools: we exploit native, already existing features of Windows and SQL Server, configuring them to capture performance-relevant information.
Demonstration of the smallest possible deadlock: only one statement, one table, one row.
Without transactions (no BEGIN TRANSACTION). RCSI (Read Committed Snapshot Isolation) is even turned ON to eliminate shared locks. Everything is “by the book”, as Books Online suggests to minimize deadlocks. How is it possible to deadlock? There is complexity in the simplicity of this demo. The video explains why the single-row deadlock occurs. If you understand the single-row deadlock, you will be able to understand even the most complex production deadlock situations. And understanding the cause is the first step to eliminating them.
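As a hedged sketch of one mechanism that can produce such a deadlock (the video explains the actual demo), consider a single row covered by two indexes: one UPDATE reaches the row through the clustered index and then must also maintain the nonclustered index entry, while a concurrent UPDATE seeks through the nonclustered index first, so the two key locks are taken in opposite order. All object names below are hypothetical.

```sql
-- Setup: one row, clustered PK plus a nonclustered index.
CREATE TABLE dbo.t (id int PRIMARY KEY, val int, other int);
CREATE INDEX ix_val ON dbo.t (val);
INSERT INTO dbo.t VALUES (1, 10, 0);

-- Session A: clustered seek on id, then must update the ix_val entry
-- (clustered key locked first, nonclustered key second).
UPDATE dbo.t SET val = val + 1 WHERE id = 1;

-- Session B: seek on ix_val, then a lookup to the clustered row to change "other"
-- (nonclustered key locked first, clustered key second).
-- With the right timing, the opposite lock order deadlocks: one statement,
-- one table, one row, no explicit transaction.
UPDATE dbo.t SET other = other + 1 WHERE val = 10;
```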
Enjoy the video, and tell me your opinion. Thank you!
It is a SQL Server user group in Basel, after spending a short time in Zürich. I am very happy to meet people there. The presentation will be very useful and interesting, and you are more than welcome to register and come!
Databases should be fast and simple to use. We make them so. Do not let your customers wait for a slow database response - hire a top professional! We are passionate about creating highly-tuned SQL Server systems. Do you want yours to become one? Simply call us or send an email, and we will take care of the rest.
Vedran Kesegić, M.Sc.
SQL Server Database Consultant
A Random Thought
Keep your transaction log backups at least back to the second most recent full backup. Then, even if the latest full backup is corrupt, you can recover without any data loss. E.g. if you take a full backup weekly, a differential backup daily, and a transaction log backup hourly, keep your transaction log backups for at least 2 weeks, so the chain reaches back to the second full backup.