SQL Server triggers are a controversial topic. They provide convenience at a seemingly low cost, but are often misused by developers and DBAs, leading to performance bottlenecks and maintainability challenges.

This article provides a brief review of triggers, an in-depth discussion of how to use them effectively, and a look at when triggers can paint developers into a corner that is hard to escape.

While all of the demonstrations in this article use SQL Server, the recommendations are common to most databases. The challenges that triggers pose can also be seen in MySQL, PostgreSQL, MongoDB, and many other database platforms.

What is a trigger

SQL Server triggers can be defined on databases or tables, and they allow code to execute automatically when a particular operation occurs. This article focuses on DML triggers on tables, because they tend to be overused. By contrast, DDL triggers on databases are typically more centralized and less detrimental to performance.
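For reference, a minimal DDL trigger might look like the following sketch; the trigger name and event list here are illustrative, not from a specific example:

-- A minimal DDL trigger sketch: surfaces details of schema changes
-- in the current database. EVENTDATA() returns XML describing the event.
CREATE TRIGGER TR_Database_DDL_Demo
 ON DATABASE
 FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
 PRINT CONVERT(NVARCHAR(MAX), EVENTDATA());
END;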

A trigger is a set of code that executes when data in a table changes. Triggers can be defined to fire on INSERT, UPDATE, DELETE, or any combination of these operations. A single MERGE statement can fire the appropriate trigger for each operation it performs.

Triggers can be defined as INSTEAD OF or AFTER. An AFTER trigger executes after data is written to the table: it is a separate set of operations that runs in the same transaction as the write, but after the write has occurred. If the trigger fails, the original operation also fails. An INSTEAD OF trigger replaces the write operation: the insert, update, or delete never occurs, and the contents of the trigger execute instead.
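For example, an INSTEAD OF DELETE trigger can swallow deletes entirely and perform a soft delete instead. The following is a hedged sketch; it assumes a hypothetical IsDeleted bit column that the real Sales.Orders table does not have:

CREATE TRIGGER TR_Sales_Orders_SoftDelete
 ON Sales.Orders
 INSTEAD OF DELETE
AS
BEGIN
 SET NOCOUNT ON;
 -- The physical DELETE never happens; rows are flagged instead.
 UPDATE o
  SET IsDeleted = 1 -- hypothetical column, for illustration only
 FROM Sales.Orders o
 INNER JOIN Deleted d
 ON d.OrderID = o.OrderID;
END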

Triggers allow TSQL to execute when writes occur, regardless of the source of those writes. They are typically used to run critical operations, such as logging, validation, or additional DML, whenever a write happens. This is convenient: the write can come from an API, application code, release scripts, or internal processes, and the trigger fires regardless.

What does a trigger look like

Using the Sales.Orders table in the WideWorldImporters sample database as an example, suppose you want to log all updates or deletes against that table, along with some details about how the changes occurred. This could be done by modifying application code, but doing so would require changing every piece of code that writes to the table. To solve the problem with a trigger instead, you can take the following steps:

1. Create a log table to accept the data as it is written. The following TSQL creates a simple log table, with a few added data points:

CREATE TABLE Sales.Orders_log
( Orders_log_ID int NOT NULL IDENTITY(1,1)
   CONSTRAINT PK_Sales_Orders_log PRIMARY KEY CLUSTERED,
  OrderID int NOT NULL,
  -- Old/new snapshots are NULLable: an INSERT has no old version
  -- and a DELETE has no new version.
  CustomerID_Old int NULL,
  CustomerID_New int NULL,
  SalespersonPersonID_Old int NULL,
  SalespersonPersonID_New int NULL,
  PickedByPersonID_Old int NULL,
  PickedByPersonID_New int NULL,
  ContactPersonID_Old int NULL,
  ContactPersonID_New int NULL,
  BackorderOrderID_Old int NULL,
  BackorderOrderID_New int NULL,
  OrderDate_Old date NULL,
  OrderDate_New date NULL,
  ExpectedDeliveryDate_Old date NULL,
  ExpectedDeliveryDate_New date NULL,
  CustomerPurchaseOrderNumber_Old nvarchar(20) NULL,
  CustomerPurchaseOrderNumber_New nvarchar(20) NULL,
  IsUndersupplyBackordered_Old bit NULL,
  IsUndersupplyBackordered_New bit NULL,
  Comments_Old nvarchar(max) NULL,
  Comments_New nvarchar(max) NULL,
  DeliveryInstructions_Old nvarchar(max) NULL,
  DeliveryInstructions_New nvarchar(max) NULL,
  InternalComments_Old nvarchar(max) NULL,
  InternalComments_New nvarchar(max) NULL,
  PickingCompletedWhen_Old datetime2(7) NULL,
  PickingCompletedWhen_New datetime2(7) NULL,
  LastEditedBy_Old int NULL,
  LastEditedBy_New int NULL,
  LastEditedWhen_Old datetime2(7) NULL,
  LastEditedWhen_New datetime2(7) NULL,
  ActionType VARCHAR(6) NOT NULL,
  ActionTime DATETIME2(3) NOT NULL,
  UserName VARCHAR(128) NULL);

This table records the old and new values for all columns. This is very thorough; a leaner alternative would be to record only the old version of each row and reconstruct the change history by joining old and new versions together. The last three columns are new and record the type of operation performed (insert, update, or delete), when it happened, and by whom.

2. Create a trigger to log changes to the table:

CREATE TRIGGER TR_Sales_Orders_Audit
 ON Sales.Orders
 AFTER INSERT, UPDATE, DELETE
AS
BEGIN
 SET NOCOUNT ON;
 INSERT INTO Sales.Orders_log
 (OrderID, CustomerID_Old, CustomerID_New, 
 SalespersonPersonID_Old, SalespersonPersonID_New, 
 PickedByPersonID_Old, PickedByPersonID_New,
 ContactPersonID_Old, ContactPersonID_New, 
 BackorderOrderID_Old, BackorderOrderID_New, 
 OrderDate_Old, OrderDate_New, ExpectedDeliveryDate_Old,
 ExpectedDeliveryDate_New, 
 CustomerPurchaseOrderNumber_Old, 
 CustomerPurchaseOrderNumber_New, 
 IsUndersupplyBackordered_Old, 
 IsUndersupplyBackordered_New,
 Comments_Old, Comments_New, 
 DeliveryInstructions_Old, DeliveryInstructions_New, 
 InternalComments_Old, InternalComments_New, 
 PickingCompletedWhen_Old,
 PickingCompletedWhen_New, LastEditedBy_Old, 
 LastEditedBy_New, LastEditedWhen_Old, 
 LastEditedWhen_New, ActionType, ActionTime, UserName)
 SELECT
 ISNULL(Inserted.OrderID, Deleted.OrderID) AS OrderID,
 Deleted.CustomerID AS CustomerID_Old,
 Inserted.CustomerID AS CustomerID_New,
 Deleted.SalespersonPersonID AS SalespersonPersonID_Old,
 Inserted.SalespersonPersonID AS SalespersonPersonID_New,
 Deleted.PickedByPersonID AS PickedByPersonID_Old,
 Inserted.PickedByPersonID AS PickedByPersonID_New,
 Deleted.ContactPersonID AS ContactPersonID_Old,
 Inserted.ContactPersonID AS ContactPersonID_New,
 Deleted.BackorderOrderID AS BackorderOrderID_Old,
 Inserted.BackorderOrderID AS BackorderOrderID_New,
 Deleted.OrderDate AS OrderDate_Old,
 Inserted.OrderDate AS OrderDate_New,
 Deleted.ExpectedDeliveryDate 
 AS ExpectedDeliveryDate_Old,
 Inserted.ExpectedDeliveryDate 
 AS ExpectedDeliveryDate_New,
 Deleted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_Old,
 Inserted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_New,
 Deleted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_Old,
 Inserted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_New,
 Deleted.Comments AS Comments_Old,
 Inserted.Comments AS Comments_New,
 Deleted.DeliveryInstructions 
 AS DeliveryInstructions_Old,
 Inserted.DeliveryInstructions 
 AS DeliveryInstructions_New,
 Deleted.InternalComments AS InternalComments_Old,
 Inserted.InternalComments AS InternalComments_New,
 Deleted.PickingCompletedWhen 
 AS PickingCompletedWhen_Old,
 Inserted.PickingCompletedWhen 
 AS PickingCompletedWhen_New,
 Deleted.LastEditedBy AS LastEditedBy_Old,
 Inserted.LastEditedBy AS LastEditedBy_New,
 Deleted.LastEditedWhen AS LastEditedWhen_Old,
 Inserted.LastEditedWhen AS LastEditedWhen_New,
 CASE
 WHEN Inserted.OrderID IS NULL THEN 'DELETE'
 WHEN Deleted.OrderID IS NULL THEN 'INSERT'
 ELSE 'UPDATE'
 END AS ActionType,
 SYSUTCDATETIME() ActionTime,
 SUSER_SNAME() AS UserName
 FROM Inserted
 FULL JOIN Deleted
 ON Inserted.OrderID = Deleted.OrderID;
END

The only function of this trigger is to insert one row into the log table for each write operation. It is simple, easy to document, and easy to maintain as the table changes over time. If additional details need to be tracked, columns can be added, such as the database name, server name, number of rows affected, or the calling application.

3. The final step is to test and verify that the log table populates correctly.

Here is a test UPDATE against the table after the trigger has been added:

UPDATE Orders
 SET InternalComments = 'Item is no longer backordered',
 BackorderOrderID = NULL,
 IsUndersupplyBackordered = 0,
 LastEditedBy = 1,
 LastEditedWhen = SYSUTCDATETIME()
FROM sales.Orders
WHERE Orders.OrderID = 10;

The results are as follows (screenshot omitted; some columns are not shown):

The logged row confirms that the trigger captured the change, including the new audit columns at the end of the log table.
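The log can also be inspected directly with a quick query; the column list below is abbreviated for readability:

SELECT TOP 10
 Orders_log_ID, OrderID, ActionType, ActionTime, UserName
FROM Sales.Orders_log
ORDER BY ActionTime DESC;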

The INSERTED and DELETED tables

In the previous example, the data written to the log table was read from the special INSERTED and DELETED tables. These virtual tables are available within any DML trigger: INSERTED is populated by INSERT operations, DELETED is populated by DELETE operations, and an UPDATE populates both.

For INSERT and UPDATE operations, INSERTED contains a snapshot of the new values for each column in the table. For DELETE and UPDATE operations, DELETED contains a snapshot of the old values for each column prior to the write.
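As a quick illustration, the sketch below joins the two tables inside an AFTER UPDATE trigger; the trigger name is illustrative and the result set is returned for demonstration only (production triggers should not return result sets):

CREATE TRIGGER TR_Sales_Orders_ShowVersions
 ON Sales.Orders
 AFTER UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 -- INSERTED holds the new row versions, DELETED the old ones.
 SELECT i.OrderID,
  d.CustomerID AS CustomerID_Old,
  i.CustomerID AS CustomerID_New
 FROM Inserted i
 INNER JOIN Deleted d
 ON d.OrderID = i.OrderID;
END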

When are triggers most useful

DML triggers are best used for short, simple, and easily maintainable write operations that are largely independent of the application business logic.

Some of the most important uses of triggers include:

  • Recording changes to a history table
  • Auditing users and their actions against sensitive tables
  • Adding values to a table that the application cannot provide (due to security or other restrictions), such as:
    • Login/user name
    • Time of the operation
    • Server/database name
  • Simple validation (see the sketch after this list).
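As an example of simple validation, the following hedged sketch enforces a hypothetical rule that an order's expected delivery date may not precede its order date (a single-row rule like this could also be a check constraint, as discussed later):

CREATE TRIGGER TR_Sales_Orders_Validate
 ON Sales.Orders
 AFTER INSERT, UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 IF EXISTS (SELECT 1 FROM Inserted
  WHERE ExpectedDeliveryDate < OrderDate)
 BEGIN
  -- Undo the offending write and surface an error to the caller.
  ROLLBACK TRANSACTION;
  THROW 50000, 'ExpectedDeliveryDate cannot precede OrderDate.', 1;
 END
END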

The key is to keep trigger code compact enough to remain maintainable. As triggers grow to thousands of lines, they become a black box that developers fear to disturb. As a result, more code gets added, but old code is rarely reviewed. Even with documentation, this is difficult to maintain.

For triggers to work effectively, they should be written to be set-based. If stored procedures must be used within triggers, make sure they accept table-valued parameters where needed so that data can be passed in sets. Here is an example of a trigger that instead iterates over OrderIDs, executing a stored procedure once per ID:

CREATE TRIGGER TR_Sales_Orders_Process
 ON Sales.Orders
 AFTER INSERT
AS
BEGIN
 SET NOCOUNT ON;
 DECLARE @count INT;
 SELECT @count = COUNT(*) FROM inserted;
 DECLARE @min_id INT;
 SELECT @min_id = MIN(OrderID) FROM inserted;
 DECLARE @current_id INT = @min_id;
 -- Iterate over the new OrderIDs: one procedure call per row
 WHILE @current_id < @min_id + @count
 BEGIN
  EXEC dbo.process_order_fulfillment
   @OrderID = @current_id;
  SELECT @current_id = @current_id + 1;
 END
END

Although relatively simple, the performance of INSERT operations against Sales.Orders suffers when many rows are inserted at once, because SQL Server is forced to execute the process_order_fulfillment stored procedure once per row. A simple fix is to rewrite the stored procedure to accept a set of OrderIDs rather than being called one row at a time:

CREATE TYPE dbo.udt_OrderID_List AS TABLE(
 OrderID INT NOT NULL,
 PRIMARY KEY CLUSTERED 
( OrderID ASC));
GO
CREATE TRIGGER TR_Sales_Orders_Process
 ON Sales.Orders
 AFTER INSERT
AS
BEGIN
 SET NOCOUNT ON;
 DECLARE @OrderID_List dbo.udt_OrderID_List;
 -- Collect all inserted IDs into the table-valued parameter
 INSERT INTO @OrderID_List (OrderID)
 SELECT OrderID FROM Inserted;
 EXEC dbo.process_order_fulfillment @OrderIDs = @OrderID_List;
END

The result of this change is that the complete set of IDs is passed from the trigger to the stored procedure and processed in one shot. As long as the stored procedure manages the data in a set-based manner, repeated executions are avoided. That said, there is also value in avoiding stored procedures within triggers altogether, since they add an additional layer of encapsulation that further hides the TSQL executed when data is written to a table. They should be treated as a last resort, used only when the alternative is rewriting the same TSQL in many parts of the application.
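The body of process_order_fulfillment is not shown in this article; a hedged sketch of its set-based form might look like the following, where the fulfillment step itself is hypothetical:

CREATE PROCEDURE dbo.process_order_fulfillment
 @OrderIDs dbo.udt_OrderID_List READONLY
AS
BEGIN
 SET NOCOUNT ON;
 -- One set-based write replaces N single-row executions.
 UPDATE o
  SET PickingCompletedWhen = SYSUTCDATETIME() -- hypothetical fulfillment step
 FROM Sales.Orders o
 INNER JOIN @OrderIDs ids
 ON ids.OrderID = o.OrderID;
END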

When is a trigger dangerous

One of the biggest challenges for architects and developers is to ensure that triggers are used only when needed, and not allowed to become a one-size-fits-all solution. Adding TSQL to a trigger is often seen as faster and easier than adding code to the application, but the cost of doing so compounds with each line of code added over time.

Triggers can become dangerous when:

  • There are too many of them. Keep as few triggers as possible to minimize complexity.
  • Trigger code gets complicated. If updating a row in a table causes thousands of additional lines of trigger code to execute, it becomes difficult for developers to fully understand what happens when data is written to the table. Worse, troubleshooting becomes very challenging when things go wrong.
  • Triggers go cross-server. This introduces network operations into triggers, which can result in slow or failed writes whenever connectivity problems occur. Even cross-database triggers can be problematic if the target database is subject to maintenance.
  • Triggers fire other triggers. A particularly painful scenario is when inserting one row causes 100 triggers across 75 tables to execute. When writing trigger code, make sure a trigger can perform all the necessary logic without firing more triggers; the extra triggers are usually unnecessary.
  • RECURSIVE_TRIGGERS is set to ON. This database-level setting defaults to OFF. When it is on, the contents of a trigger can cause the same trigger to fire again. Recursive triggers can hurt performance badly and are exceptionally confusing to debug; they typically sneak in when DML inside one trigger fires other triggers as part of an operation. (The setting can be inspected and changed as shown after this list.)
  • Functions, stored procedures, or views appear in triggers. Encapsulating more business logic in triggers makes them more complex and gives the false impression that the trigger code is short and simple when it is not. Avoid stored procedures and functions in triggers whenever possible.
  • Iteration occurs. Loops and cursors are row-by-row by nature and can cause an operation against 1,000 rows to execute trigger logic 1,000 times, one row at a time, which significantly hurts performance.
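Checking and changing the recursive-trigger setting mentioned above is straightforward:

-- Inspect the current setting for a database
SELECT name, is_recursive_triggers_on
FROM sys.databases
WHERE name = 'WideWorldImporters';

-- Ensure recursive triggers are off (the default)
ALTER DATABASE WideWorldImporters SET RECURSIVE_TRIGGERS OFF;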

This is a long list, but the general conclusion is that short and simple triggers perform better and avoid most of the pitfalls above. If triggers are used to maintain complex business logic, more and more logic will be added over time, inevitably violating the best practices described above.

It is important to note that, to keep transactions atomic, any locks taken by a trigger are held until the trigger completes and the transaction commits or rolls back. This means that long triggers not only make transactions last longer, but also hold locks longer, extending blocking. Therefore, when creating triggers or adding logic to existing ones, test and be mindful of their impact on locking, blocking, and waits.

How to improve triggers

There are many ways to make triggers easier to maintain, easier to understand, and better performing. Here are some tips for managing triggers effectively and avoiding pitfalls.

Triggers themselves should be well documented:

  • Why does this trigger exist?
  • What can it do?
  • How does it work?
  • Are there any exceptions or warnings about how triggers work?

In addition, if the TSQL in the trigger is difficult to understand, inline comments can be added to help developers looking at it for the first time.

Here is a sample trigger document:

/* 12/29/2020 EHP
 This trigger logs all changes to the table to the Orders_log
 table that occur for non-internal customers.
 CustomerID = -1 signifies an internal/test customer and 
 these are not audited.
*/
CREATE TRIGGER TR_Sales_Orders_Audit
 ON Sales.Orders
 FOR INSERT, UPDATE, DELETE
AS
BEGIN
 SET NOCOUNT ON;
 INSERT INTO Sales.Orders_log
 (OrderID, CustomerID_Old, CustomerID_New, 
 SalespersonPersonID_Old, SalespersonPersonID_New,
 PickedByPersonID_Old, PickedByPersonID_New,
 ContactPersonID_Old, ContactPersonID_New, 
 BackorderOrderID_Old, BackorderOrderID_New, 
 OrderDate_Old, OrderDate_New, 
 ExpectedDeliveryDate_Old,
 ExpectedDeliveryDate_New, 
 CustomerPurchaseOrderNumber_Old, 
 CustomerPurchaseOrderNumber_New, 
 IsUndersupplyBackordered_Old, 
 IsUndersupplyBackordered_New,
 Comments_Old, Comments_New, 
 DeliveryInstructions_Old, DeliveryInstructions_New, 
 InternalComments_Old, InternalComments_New, 
 PickingCompletedWhen_Old, PickingCompletedWhen_New, 
 LastEditedBy_Old, LastEditedBy_New, 
 LastEditedWhen_Old, LastEditedWhen_New, 
 ActionType, ActionTime, UserName)
 SELECT
 ISNULL(Inserted.OrderID, Deleted.OrderID) AS OrderID, 
 -- The OrderID can never change. 
 --This ensures we get the ID correctly, 
 --regardless of operation type.
 Deleted.CustomerID AS CustomerID_Old,
 Inserted.CustomerID AS CustomerID_New,
 Deleted.SalespersonPersonID AS SalespersonPersonID_Old,
 Inserted.SalespersonPersonID AS SalespersonPersonID_New,
 Deleted.PickedByPersonID AS PickedByPersonID_Old,
 Inserted.PickedByPersonID AS PickedByPersonID_New,
 Deleted.ContactPersonID AS ContactPersonID_Old,
 Inserted.ContactPersonID AS ContactPersonID_New,
 Deleted.BackorderOrderID AS BackorderOrderID_Old,
 Inserted.BackorderOrderID AS BackorderOrderID_New,
 Deleted.OrderDate AS OrderDate_Old,
 Inserted.OrderDate AS OrderDate_New,
 Deleted.ExpectedDeliveryDate AS ExpectedDeliveryDate_Old,
 Inserted.ExpectedDeliveryDate AS ExpectedDeliveryDate_New,
 Deleted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_Old,
 Inserted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_New,
 Deleted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_Old,
 Inserted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_New,
 Deleted.Comments AS Comments_Old,
 Inserted.Comments AS Comments_New,
 Deleted.DeliveryInstructions 
 AS DeliveryInstructions_Old,
 Inserted.DeliveryInstructions 
 AS DeliveryInstructions_New,
 Deleted.InternalComments AS InternalComments_Old,
 Inserted.InternalComments AS InternalComments_New,
 Deleted.PickingCompletedWhen AS PickingCompletedWhen_Old,
 Inserted.PickingCompletedWhen 
 AS PickingCompletedWhen_New,
 Deleted.LastEditedBy AS LastEditedBy_Old,
 Inserted.LastEditedBy AS LastEditedBy_New,
 Deleted.LastEditedWhen AS LastEditedWhen_Old,
 Inserted.LastEditedWhen AS LastEditedWhen_New,
 CASE -- Determine the operation type based on whether 
 --Inserted exists, Deleted exists, or both exist.
 WHEN Inserted.OrderID IS NULL THEN 'DELETE'
 WHEN Deleted.OrderID IS NULL THEN 'INSERT'
 ELSE 'UPDATE'
 END AS ActionType,
 SYSUTCDATETIME() ActionTime,
 SUSER_SNAME() AS UserName
 FROM Inserted
 FULL JOIN Deleted
 ON Inserted.OrderID = Deleted.OrderID
 WHERE Inserted.CustomerID <> -1 
 -- -1 indicates an internal/non-production 
 --customer that should not be audited.
 OR Deleted.CustomerID <> -1; 
 -- -1 indicates an internal/non-production 
 --customer that should not be audited.
END

Note that this documentation is not exhaustive, but it does include a short header and explains a few key parts of the TSQL within the trigger:

  • The exclusion of CustomerID = -1. This is not obvious to anyone unfamiliar with the convention, so it is a helpful note.
  • What the ActionType CASE statement is for.
  • Why ISNULL is used on OrderID between the INSERTED and DELETED tables.

Using IF UPDATE()

Within a trigger, the UPDATE() function reports whether a given column was written to by the triggering statement. This allows a trigger to check whether a column was touched before performing work on it. Here is an example of the syntax:

CREATE TRIGGER TR_Sales_Orders_Log_BackorderID_Change
 ON Sales.Orders
 AFTER UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 IF UPDATE(BackorderOrderID)
 BEGIN
  UPDATE OrderBackorderLog
   SET BackorderOrderID = Inserted.BackorderOrderID,
    PreviousBackorderOrderID = Deleted.BackorderOrderID
  FROM dbo.OrderBackorderLog
  INNER JOIN Inserted
  ON Inserted.OrderID = OrderBackorderLog.OrderID
  INNER JOIN Deleted
  ON Deleted.OrderID = Inserted.OrderID;
 END
END

By first checking whether BackorderOrderID was updated, the trigger can skip the subsequent work when it is not needed. This is a great way to improve performance, since it allows a trigger to bypass code entirely based on whether the columns of interest were written.

COLUMNS_UPDATED indicates which columns in a table were written as part of an operation and can be used within a trigger to quickly determine whether specific columns were affected by an insert or update. Although it is documented, it is cumbersome to use and hard to read. I generally do not recommend it, as it will almost certainly confuse developers unfamiliar with it.

Note that neither UPDATE() nor COLUMNS_UPDATED tells you whether a value actually changed. A write to a column returns 1 from UPDATE() and flags the column in COLUMNS_UPDATED even if the value was written back to itself. They only track whether a column was the target of a write operation, not whether its value changed.
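For example, a statement that assigns a column back to itself still counts as an update of that column:

-- Inside TR_Sales_Orders_Log_BackorderID_Change, IF UPDATE(BackorderOrderID)
-- evaluates to true for this statement even though no value changes:
UPDATE Sales.Orders
 SET BackorderOrderID = BackorderOrderID
WHERE OrderID = 10;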

One trigger per operation

Keep trigger code as simple as possible. The greater the number of triggers on a table, the more complex the table becomes and the harder it is to understand its behavior.

For example, consider the following table trigger definition:

CREATE TRIGGER TR_Sales_Orders_I
 ON Sales.Orders
 AFTER INSERT
CREATE TRIGGER TR_Sales_Orders_IU
 ON Sales.Orders
 AFTER INSERT, UPDATE
CREATE TRIGGER TR_Sales_Orders_UD
 ON Sales.Orders
 AFTER UPDATE, DELETE
CREATE TRIGGER TR_Sales_Orders_UID
 ON Sales.Orders
 AFTER UPDATE, INSERT, DELETE
CREATE TRIGGER TR_Sales_Orders_ID
 ON Sales.Orders
 AFTER INSERT, DELETE

What happens when a row is inserted? In what order do the triggers fire? Answering these questions requires research. Maintaining fewer triggers is a simple solution and removes the guesswork from understanding how writes occur against a given table. For reference, the firing order can be adjusted with the system stored procedure sp_settriggerorder, although this applies only to AFTER triggers.
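For example, the audit trigger from earlier could be forced to fire first for UPDATE statements:

EXEC sp_settriggerorder
 @triggername = 'TR_Sales_Orders_Audit',
 @order = 'First',
 @stmttype = 'UPDATE';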

Simpler is better

The best practice for triggers is that they are simple to operate, fast to execute, and do not cause more triggers to fire as a result of their execution. There is no clear rule for how complex a trigger should be, but a simple guideline is that the ideal trigger is simple enough that, if its logic had to move elsewhere, the migration would not be prohibitively expensive. If the business logic in a trigger is so complex that the cost of moving it is too high to even consider, the triggers have likely already become too complex.

Using our previous example, consider the audit trigger created earlier. Its logic could be moved from the trigger into a stored procedure or application code without much effort. The convenience of logging in the trigger makes it worthwhile, but at the same time, we should know roughly how many developer-hours it would take to migrate the TSQL from the trigger to another location.

That time estimate can be viewed as part of the maintainability cost of a trigger: the price that must be paid to be rid of the trigger if necessary. This may sound abstract, but database migrations between platforms are quite common. A set of triggers that performs well in SQL Server may not work well in Oracle or PostgreSQL.

Memory-optimized table variables

Sometimes, temporary objects are needed within a trigger to allow multiple updates against the data. Temporary tables are stored in tempdb and are subject to the size, speed, and performance constraints of the tempdb database.

For frequently accessed temporary data, memory-optimized table variables are an excellent way to keep that data in memory rather than in tempdb.

The following TSQL configures a database (if needed) for memory-optimized data:

ALTER DATABASE WideWorldImporters 
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
ALTER DATABASE WideWorldImporters ADD FILEGROUP WWI_InMemory_Data 
 CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE WideWorldImporters ADD FILE 
 (NAME='WideWorldImporters_IMOLTP_File_1', 
 FILENAME='C:\SQLData\WideWorldImporters_IMOLTP_File_1.mem') 
 TO FILEGROUP WWI_InMemory_Data;

Once the configuration is complete, you can create a memory-optimized table type:

CREATE TYPE dbo.SalesOrderMetadata AS TABLE
( OrderID INT NOT NULL PRIMARY KEY NONCLUSTERED,
  CustomerID INT NOT NULL,
  SalespersonPersonID INT NOT NULL,
  ContactPersonID INT NOT NULL,
  INDEX IX_SalesOrderMetadata_CustomerID
   NONCLUSTERED HASH (CustomerID) WITH (BUCKET_COUNT = 1000))
WITH (MEMORY_OPTIMIZED = ON);

This TSQL creates the tables needed for the trigger demo:

CREATE TABLE dbo.OrderAdjustmentLog
( OrderAdjustmentLog_ID int NOT NULL IDENTITY(1,1)
   CONSTRAINT PK_OrderAdjustmentLog PRIMARY KEY CLUSTERED,
  OrderID INT NOT NULL,
  CustomerID INT NOT NULL,
  SalespersonPersonID INT NOT NULL,
  ContactPersonID INT NOT NULL,
  CreateTimeUTC DATETIME2(3) NOT NULL);

Here is a demo of a trigger that uses a memory-optimized table variable:

CREATE TRIGGER TR_Sales_Orders_Mem_Test
 ON Sales.Orders
 AFTER UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 DECLARE @OrderData dbo.SalesOrderMetadata;
 INSERT INTO @OrderData
 (OrderID, CustomerID, SalespersonPersonID, 
 ContactPersonID)
 SELECT
 OrderID,
 CustomerID,
 SalespersonPersonID,
 ContactPersonID
 FROM Inserted;
 
 DELETE OrderData
 FROM @OrderData OrderData
 INNER JOIN sales.Customers
 ON Customers.CustomerID = OrderData.CustomerID
 WHERE Customers.IsOnCreditHold = 0;
 UPDATE OrderData
 SET ContactPersonID = 1
 FROM @OrderData OrderData
 WHERE OrderData.ContactPersonID IS NULL;
 
 INSERT INTO dbo.OrderAdjustmentLog
 (OrderID, CustomerID, SalespersonPersonID, 
 ContactPersonID, CreateTimeUTC)
 SELECT
 OrderData.OrderID,
 OrderData.CustomerID,
 OrderData.SalespersonPersonID,
 OrderData.ContactPersonID,
 SYSUTCDATETIME()
 FROM @OrderData OrderData;
END

The more work a trigger must perform, the more time is saved, because memory-optimized table variables require no IO to read or write.

Once the initial data is read from the INSERTED table, the rest of the trigger can leave tempdb alone, avoiding the overhead that standard table variables or temporary tables would otherwise incur.

The following code sets up some test data and runs an update to demonstrate the results of the above code:

UPDATE Customers
 SET IsOnCreditHold = 1
FROM Sales.Customers
WHERE Customers.CustomerID = 832;
UPDATE Orders
 SET SalespersonPersonID = 2
FROM sales.Orders
WHERE CustomerID = 832;

Once executed, the contents of the OrderAdjustmentLog table can be verified:

The results were predictable. By reducing reliance on standard storage and moving intermediate data into memory, memory-optimized table variables provide a way to dramatically speed up trigger execution. This benefit is limited to scenarios that make heavy use of temporary objects, but it can be equally useful in stored procedures or other procedural TSQL.

Alternatives to triggers

Like all tools, triggers can be abused and can become a source of confusion, performance bottlenecks, and maintainability nightmares. There are many preferable alternatives, and they should be considered before implementing a new trigger (or adding onto an existing one).

Temporal tables

Temporal tables were introduced in SQL Server 2016 and provide an easy way to add versioning to a table without building your own data structures and ETL. The versioning is invisible to applications and provides full ANSI-compliant history support, making it an easy way to solve the problem of retaining old versions of data.
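Here is a minimal sketch of a system-versioned table; the table and column names are illustrative:

CREATE TABLE dbo.ProductPrice
( ProductID INT NOT NULL PRIMARY KEY,
  Price DECIMAL(18,2) NOT NULL,
  ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
  ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
  PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo))
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductPriceHistory));

-- Query the table as of a point in time; SQL Server reads history automatically.
SELECT * FROM dbo.ProductPrice
FOR SYSTEM_TIME AS OF '2020-12-01T00:00:00';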

Check constraints

For simple data validation, check constraints can provide what is needed without functions, stored procedures, or triggers. A check constraint is defined against one or more columns and validates data automatically as it is written.

Here is an example of a Check constraint:

ALTER TABLE Sales.Invoices WITH CHECK ADD CONSTRAINT
CK_Sales_Invoices_ReturnedDeliveryData_Must_Be_Valid_JSON
CHECK ([ReturnedDeliveryData] IS NULL OR 
ISJSON([ReturnedDeliveryData])<>(0))

This code checks whether a column contains valid JSON. If it does, the write proceeds normally. If not, SQL Server throws an error and the write fails. Check constraints can evaluate any combination of columns and values, so they can manage simple or complex validation tasks.

Check constraints are inexpensive to create and easy to maintain. They are also easier to document and understand, because their scope is limited to validating incoming data and ensuring data integrity, whereas triggers can do virtually anything imaginable!

Unique constraints

If a column requires unique values and is not the table's primary key, a unique constraint is a simple and effective way to accomplish the task. A unique constraint is the combination of an index and enforcement of uniqueness; the index is needed to validate uniqueness efficiently.

Here is an example of a unique constraint:

ALTER TABLE Warehouse.Colors ADD CONSTRAINT 
UQ_Warehouse_Colors_ColorName UNIQUE NONCLUSTERED (ColorName ASC);

Every time a row is inserted into Warehouse.Colors, the uniqueness of ColorName is checked. If the write would result in a duplicate color, the statement fails and the data is not changed. Unique constraints are built for this purpose and are the easiest way to enforce uniqueness on a column.

The built-in solution is more efficient, easier to maintain, and easier to document. Any developer who sees a unique constraint will immediately understand what it does without digging into TSQL to figure out how things work, and that simplicity makes it an ideal solution.

Foreign key constraints

Like check and unique constraints, foreign key constraints are another way to validate data integrity before data is written. A foreign key links a column in one table to another table. When data is inserted into the target table, its value is checked against the referenced table. If the value exists, the write proceeds normally; if not, an error is thrown and the statement fails.

Here is a simple foreign key example:

ALTER TABLE Sales.Orders WITH CHECK ADD CONSTRAINT
FK_Sales_Orders_CustomerID_Sales_Customers FOREIGN KEY (CustomerID)
REFERENCES Sales.Customers (CustomerID);

When data is written to Sales.Orders, the CustomerID column is checked against the CustomerID column in Sales.Customers.

Like unique constraints, foreign keys have only one purpose: to verify that data written to one table exists in another table. It is easy to document, easy to understand, and efficient to implement.

Triggers are not the right place to perform these validation checks and are a less efficient solution than using foreign keys.

Stored procedures

Logic implemented in triggers can usually be moved easily into stored procedures. This eliminates the complexity that extensive trigger code can create, while allowing developers to maintain the process more easily. Stored procedures are also free to structure operations to ensure as much (or as little) atomicity as necessary.

One of the fundamentals of triggers is that a set of operations rides along with a write, all succeeding or failing together as part of an atomic transaction. Applications do not always need this level of atomicity. Where necessary, an appropriate isolation level or table locking can be used within stored procedures to guarantee transaction integrity.
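As a hedged sketch, logic that might otherwise live in a trigger can sit beside the write inside one procedure; both the procedure and the dbo.OrderAudit table below are hypothetical:

CREATE PROCEDURE dbo.Update_Order_Comments
 @OrderID INT,
 @Comments NVARCHAR(MAX)
AS
BEGIN
 SET NOCOUNT ON;
 SET XACT_ABORT ON; -- any error rolls back the entire transaction
 BEGIN TRANSACTION;
  UPDATE Sales.Orders
   SET Comments = @Comments
  WHERE OrderID = @OrderID;

  -- The audit write is visible right here, not hidden in a trigger.
  INSERT INTO dbo.OrderAudit (OrderID, ActionType, ActionTime, UserName)
  VALUES (@OrderID, 'UPDATE', SYSUTCDATETIME(), SUSER_SNAME());
 COMMIT TRANSACTION;
END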

While SQL Server (and most relational database systems) provides ACID guarantees that transactions are atomic, consistent, isolated, and durable, transactions in our own code may or may not need to follow the same rules. Real-world applications vary in their data integrity requirements.

Stored procedures allow you to customize code to achieve the data integrity required by your application, ensuring that performance and computing resources are not wasted on data integrity that is not needed.

For example, a social media application that lets users post photos is unlikely to require its transactions to be fully atomic and consistent. If my photo arrives a second before or after yours, no one will care. Similarly, if you comment on my photo while I am editing it, the exact timing probably does not matter to anyone consuming that data. On the other hand, a banking application that manages monetary transactions needs to ensure that transactions execute carefully so that no money is lost and no figures are misreported. If I have a bank account containing $20 and I withdraw $20 at the same moment someone else does, we cannot both succeed. One of us gets the $20 first, and the other receives an appropriate error message about a $0 balance.

Functions

Functions provide an easy way to encapsulate important logic in a single place. Reusing a single function across 50 table inserts is far easier than implementing the same logic in 50 triggers, one per table.

Consider the following function:

CREATE FUNCTION Website.CalculateCustomerPrice
 (@CustomerID INT, @StockItemID INT, @PricingDate DATE)
RETURNS DECIMAL(18,2)
WITH EXECUTE AS OWNER
AS
BEGIN
 DECLARE @CalculatedPrice decimal(18,2);
 DECLARE @UnitPrice decimal(18,2);
 DECLARE @LowestUnitPrice decimal(18,2);
 DECLARE @HighestDiscountAmount decimal(18,2);
 DECLARE @HighestDiscountPercentage decimal(18,3);
 DECLARE @BuyingGroupID int;
 DECLARE @CustomerCategoryID int;
 DECLARE @DiscountedUnitPrice decimal(18,2);

 SELECT @BuyingGroupID = BuyingGroupID,
  @CustomerCategoryID = CustomerCategoryID
 FROM Sales.Customers
 WHERE CustomerID = @CustomerID;

 SELECT @UnitPrice = si.UnitPrice
 FROM Warehouse.StockItems AS si
 WHERE si.StockItemID = @StockItemID;

 SET @CalculatedPrice = @UnitPrice;

 SET @LowestUnitPrice =
  (SELECT MIN(sd.UnitPrice)
   FROM Sales.SpecialDeals AS sd
   WHERE ((sd.StockItemID = @StockItemID) OR (sd.StockItemID IS NULL))
   AND ((sd.CustomerID = @CustomerID) OR (sd.CustomerID IS NULL))
   AND ((sd.BuyingGroupID = @BuyingGroupID) OR (sd.BuyingGroupID IS NULL))
   AND ((sd.CustomerCategoryID = @CustomerCategoryID)
    OR (sd.CustomerCategoryID IS NULL))
   AND ((sd.StockGroupID IS NULL) OR EXISTS
    (SELECT 1 FROM Warehouse.StockItemStockGroups AS sisg
     WHERE sisg.StockItemID = @StockItemID
     AND sisg.StockGroupID = sd.StockGroupID))
   AND sd.UnitPrice IS NOT NULL
   AND @PricingDate BETWEEN sd.StartDate AND sd.EndDate);

 IF @LowestUnitPrice IS NOT NULL AND @LowestUnitPrice < @UnitPrice
 BEGIN
  SET @CalculatedPrice = @LowestUnitPrice;
 END;

 SET @HighestDiscountAmount =
  (SELECT MAX(sd.DiscountAmount)
   FROM Sales.SpecialDeals AS sd
   WHERE ((sd.StockItemID = @StockItemID) OR (sd.StockItemID IS NULL))
   AND ((sd.CustomerID = @CustomerID) OR (sd.CustomerID IS NULL))
   AND ((sd.BuyingGroupID = @BuyingGroupID) OR (sd.BuyingGroupID IS NULL))
   AND ((sd.CustomerCategoryID = @CustomerCategoryID)
    OR (sd.CustomerCategoryID IS NULL))
   AND ((sd.StockGroupID IS NULL) OR EXISTS
    (SELECT 1 FROM Warehouse.StockItemStockGroups AS sisg
     WHERE sisg.StockItemID = @StockItemID
     AND sisg.StockGroupID = sd.StockGroupID))
   AND sd.DiscountAmount IS NOT NULL
   AND @PricingDate BETWEEN sd.StartDate AND sd.EndDate);

 IF @HighestDiscountAmount IS NOT NULL
  AND (@UnitPrice - @HighestDiscountAmount) < @CalculatedPrice
 BEGIN
  SET @CalculatedPrice = @UnitPrice - @HighestDiscountAmount;
 END;

 SET @HighestDiscountPercentage =
  (SELECT MAX(sd.DiscountPercentage)
   FROM Sales.SpecialDeals AS sd
   WHERE ((sd.StockItemID = @StockItemID) OR (sd.StockItemID IS NULL))
   AND ((sd.CustomerID = @CustomerID) OR (sd.CustomerID IS NULL))
   AND ((sd.BuyingGroupID = @BuyingGroupID) OR (sd.BuyingGroupID IS NULL))
   AND ((sd.CustomerCategoryID = @CustomerCategoryID)
    OR (sd.CustomerCategoryID IS NULL))
   AND ((sd.StockGroupID IS NULL) OR EXISTS
    (SELECT 1 FROM Warehouse.StockItemStockGroups AS sisg
     WHERE sisg.StockItemID = @StockItemID
     AND sisg.StockGroupID = sd.StockGroupID))
   AND sd.DiscountPercentage IS NOT NULL
   AND @PricingDate BETWEEN sd.StartDate AND sd.EndDate);

 IF @HighestDiscountPercentage IS NOT NULL
 BEGIN
  SET @DiscountedUnitPrice =
   ROUND(@UnitPrice * @HighestDiscountPercentage / 100.0, 2);
  IF @DiscountedUnitPrice < @CalculatedPrice
   SET @CalculatedPrice = @DiscountedUnitPrice;
 END;

 RETURN @CalculatedPrice;
END;

In terms of complexity, this function is an absolute beast. It accepts scalar parameters and returns a single calculated price, but along the way it performs additional reads against Warehouse.StockItemStockGroups, Warehouse.StockItems, and Sales.Customers. If this is a critical calculation frequently applied to single rows of data, encapsulating it in a function is an easy way to get the desired result without adding complexity to triggers. Be careful with functions and be sure to test them against large data sets. Simple scalar functions typically scale well with larger data, but more complex functions can perform poorly.
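Once defined, the function can be reused anywhere a price is needed; for example (parameter values are illustrative):

SELECT Website.CalculateCustomerPrice(832, 219, '2016-05-31')
 AS CustomerPrice;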

Application code

When an application modifies data in a table, it can also perform additional data manipulation or validation before the data is written. This is usually inexpensive, performs well, and helps reduce the negative impact of runaway triggers on the database.

A common reason for putting code into triggers is to avoid changing application code and pushing a build. That convenience needs to be weighed directly against the risk of making changes in the database instead. It usually comes down to a discussion between application developers and database developers about who will own the new code.

This is a rough guideline, but it helps weigh maintainability and risk when deciding whether new code belongs in the application or in a trigger.

Computed columns

Computed columns can apply a wide variety of arithmetic operations and functions to other columns, recalculating their results whenever those columns change. They can be included in indexes, unique constraints, and even primary keys.

SQL Server automatically maintains the computed columns when any underlying values change. Note that each computed column is ultimately determined by the values of the other columns in the table.

This is an excellent alternative to using triggers to maintain specific column values. Computed columns are efficient, automatic, and require no maintenance. They simply work, allowing even complex calculations to be integrated directly into a table with no additional code in the application or in SQL Server.
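For example, a computed column can derive a line total from two other columns and even be persisted so it is stored and indexable; the ExtendedPrice column name below is illustrative:

ALTER TABLE Sales.OrderLines
 ADD ExtendedPrice AS (Quantity * UnitPrice) PERSISTED;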

Using SQL Server triggers

Triggers are a useful feature in SQL Server, but like all tools, they can be misused or abused. When deciding whether to use a trigger, be sure to consider the purpose of the trigger.

If a trigger is used to write brief transactional data to a log table, it is likely a good trigger. If a trigger is used to enforce complex business rules, it is likely time to rethink how such operations are handled.

There are many tools available as viable alternatives to triggers, such as check constraints, computed columns, and temporal tables, so there is no shortage of ways to solve a problem. Success in database architecture lies in choosing the right tool for the job.
