
I know this goes against all things logical; however, I am attempting to push this through anyway :(

There is a scenario in which records are written to a table that has the trigger below attached. Without the kludge described below, no records get written at all. This is not batch related.

Adding the COMMIT TRAN and BEGIN TRAN in the manner shown below came from the last answer to this DBA question.

I have 120 records come through and get written to the table. Of those 120, 90 throw exceptions when EXECUTE(@Command) is called; however, the value of @ErrorMessage is 'XXX' for all 120 records, even when there was an exception. I would expect to see the 90 error messages, but it is as if the @ErrorMessage set in the TRY...CATCH is discarded. Can anyone explain why this happens, and whether there is a workaround?

ALTER TRIGGER [dbo].[LoadAWSObjects]
ON [dbo].[tblAWSObjectsControl]
AFTER INSERT
AS
BEGIN
    COMMIT TRANSACTION -- Kludge to have the trigger's internal transaction committed
                       -- regardless of an exception in the TRY.
                       -- See the DBA question referenced above.

    DECLARE @ErrorMessage NVARCHAR(4000) = 'XXX'
    DECLARE @Command NVARCHAR(MAX) -- Declaration was missing in the original; required for the SELECT below

    SELECT @Command = Command FROM inserted

    BEGIN TRY

        EXECUTE (@Command) WITH RESULT SETS NONE

    END TRY
    BEGIN CATCH

        -- This should hit 90 times
        SET @ErrorMessage = ERROR_MESSAGE()

    END CATCH

    BEGIN TRANSACTION -- Kludge to have the trigger's internal transaction committed
                      -- regardless of an exception in the TRY.
                      -- See the DBA question referenced above.

    -- There are 120 records in the log table, all with an ErrorMessage of 'XXX'
    INSERT INTO tblAWSObjectsErrorLog(ErrorMessage)
    SELECT @ErrorMessage

END;

Ross Bush
  • Ugh. Have you tried SET XACT_ABORT OFF in the trigger (implicitly it is on) instead of that horrible workaround? – Martin Smith Jul 06 '23 at 16:41
  • @MartinSmith - Thanks for the reply. I commented out the COMMIT TRANS and BEGIN TRANS and added SET XACT_ABORT OFF at the top; however, when I reload the table, no records are saved. – Ross Bush Jul 06 '23 at 16:46
  • Is this basically what you are trying to do? https://dbfiddle.uk/svf1vwQR – Martin Smith Jul 06 '23 at 16:58
  • @MartinSmith - Thanks, I tried something like this yesterday. When I replace EXECUTE (@Command) with SET @X = 1 / 0, then all the data comes across and I get 120 divide by zero errors – Ross Bush Jul 06 '23 at 17:01
  • I've tried with EXECUTE ('EXEC FOO') WITH RESULT SETS NONE; and EXECUTE ('PRINT 1/0') WITH RESULT SETS NONE; and both log the expected errors. What is the nature of the command you are trying to run? – Martin Smith Jul 06 '23 at 17:08
  • I did not want to complicate matters, but this table gets populated by inserts into another table via a trigger, and that table participates in replication. The table above does not participate in replication and just acts as a way to avoid re-running already-run commands during a full load of the source table. I think the client application doing the replication (AWS Database Migration Service) may have another transaction open, because @@TRANCOUNT in the replicated table's trigger is always 2. – Ross Bush Jul 06 '23 at 17:09
  • Also, are the inserts definitely single-row, as SELECT @Command = Command FROM inserted implies, and not happening in a single multi-row insert? The trigger isn't written correctly to handle the fact that a single insert can insert 0, 1, or many rows. – Martin Smith Jul 06 '23 at 17:11
  • @MartinSmith - Commands are always non-batched, in the form of exec sp_execute 1,1040,N'CREATE FUNCTION...'... exec sp_execute 1,1041,N'CREATE FUNCTION...'... exec sp_execute 1,1044,N'DROP VIEW...'... exec sp_execute 1,1059,N'CREATE FUNCTION...'... exec sp_execute 1,1060,N'CREATE FUNCTION...'... – Ross Bush Jul 06 '23 at 17:12
  • 1
    Right, if I try the command EXECUTE ('CREATE PROC P1 AS SELECT 1') WITH RESULT SETS NONE - which fails on the second attempt as the proc already exists - then SET XACT_ABORT OFF doesn't save the day. The transaction is still doomed – Martin Smith Jul 06 '23 at 17:21
  • @MartinSmith :( I think I have a plan B. However, I was asked to try and get the above solution to work. – Ross Bush Jul 06 '23 at 17:23
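
For reference, a minimal sketch of the doomed-transaction repro Martin Smith describes above, moved into a trigger context to match the question. All object names here are illustrative, not from the original setup:

    -- Illustrative repro objects (hypothetical names).
    CREATE TABLE dbo.tblRepro (Command NVARCHAR(MAX));
    GO
    CREATE TRIGGER dbo.trgRepro ON dbo.tblRepro
    AFTER INSERT
    AS
    BEGIN
        SET XACT_ABORT OFF; -- per the comments above, this does not prevent the dooming

        DECLARE @Command NVARCHAR(MAX);
        SELECT @Command = Command FROM inserted;

        BEGIN TRY
            EXECUTE (@Command) WITH RESULT SETS NONE;
        END TRY
        BEGIN CATCH
            -- On the failing insert, XACT_STATE() is expected to be -1 here:
            -- the transaction is uncommittable, so any writes made in this
            -- trigger will be rolled back along with the triggering INSERT.
            PRINT CONCAT('XACT_STATE() = ', XACT_STATE(), '; ', ERROR_MESSAGE());
        END CATCH;
    END;
    GO
    -- The first insert succeeds (P1 does not exist yet). The second fails
    -- inside the trigger and dooms the transaction, so the INSERT itself
    -- is rolled back - matching "no records get written" in the question.
    INSERT INTO dbo.tblRepro (Command) VALUES ('CREATE PROC P1 AS SELECT 1');
    INSERT INTO dbo.tblRepro (Command) VALUES ('CREATE PROC P1 AS SELECT 1');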

2 Answers


Triggers were implemented on the assumption that they are part of the original transaction. Attempting to execute arbitrary SQL that somehow isn't part of that transaction is difficult, because you're working against the way triggers are supposed to work. Stop banging your head against the wall...

A clean, well-documented, and fully supported approach is to use Service Broker to asynchronously queue up the statements to execute.

The trigger logic is simplified:

  1. Begin a dialog conversation on the queue.
  2. Send a message on the queue containing the SQL to execute.
  3. Wait for a reply (or not, if you want "fire and forget").
  4. End the dialog conversation.

The queue will have an activation stored procedure that executes the SQL and logs the results.

Errors executing the SQL are 100% disconnected from the original transaction.

You can also multi-thread the activation procedure (MAX_QUEUE_READERS = n), parallelizing your workload.
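
A minimal sketch of that flow, with every object name assumed for illustration (and Service Broker assumed to be enabled on the database). The trigger SENDs each inserted command as a message - "fire and forget" here for brevity - and the activation procedure RECEIVEs, executes, and logs it on its own session:

    -- One-time setup: message type, contract, queues, and services.
    -- All names are illustrative assumptions.
    CREATE MESSAGE TYPE [//AWS/Command] VALIDATION = NONE;
    CREATE CONTRACT [//AWS/CommandContract]
        ([//AWS/Command] SENT BY INITIATOR);
    CREATE QUEUE dbo.AWSCommandQueue;
    CREATE QUEUE dbo.AWSInitiatorQueue;
    CREATE SERVICE [//AWS/CommandService]
        ON QUEUE dbo.AWSCommandQueue ([//AWS/CommandContract]);
    CREATE SERVICE [//AWS/InitiatorService]
        ON QUEUE dbo.AWSInitiatorQueue ([//AWS/CommandContract]);
    GO

    -- Trigger: just queue the command(s); note it also handles multi-row inserts.
    ALTER TRIGGER dbo.LoadAWSObjects
    ON dbo.tblAWSObjectsControl
    AFTER INSERT
    AS
    BEGIN
        DECLARE @Handle UNIQUEIDENTIFIER, @Command NVARCHAR(MAX);

        DECLARE cmds CURSOR LOCAL FAST_FORWARD FOR
            SELECT Command FROM inserted;
        OPEN cmds;
        FETCH NEXT FROM cmds INTO @Command;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            BEGIN DIALOG CONVERSATION @Handle
                FROM SERVICE [//AWS/InitiatorService]
                TO SERVICE '//AWS/CommandService'
                ON CONTRACT [//AWS/CommandContract]
                WITH ENCRYPTION = OFF;
            SEND ON CONVERSATION @Handle
                MESSAGE TYPE [//AWS/Command] (@Command);
            FETCH NEXT FROM cmds INTO @Command;
        END;
        CLOSE cmds; DEALLOCATE cmds;
    END;
    GO

    -- Activation procedure: runs on its own session and transaction, so a
    -- failing command can never doom the original INSERT. Simplified: a
    -- production version would also check message_type_name and drain the
    -- initiator queue's end-dialog messages.
    CREATE PROCEDURE dbo.usp_ProcessAWSCommand
    AS
    BEGIN
        DECLARE @Handle UNIQUEIDENTIFIER, @Command NVARCHAR(MAX);
        WHILE 1 = 1
        BEGIN
            WAITFOR (
                RECEIVE TOP (1)
                    @Handle  = conversation_handle,
                    @Command = CAST(message_body AS NVARCHAR(MAX))
                FROM dbo.AWSCommandQueue
            ), TIMEOUT 5000;
            IF @@ROWCOUNT = 0 BREAK;

            BEGIN TRY
                EXECUTE (@Command) WITH RESULT SETS NONE;
            END TRY
            BEGIN CATCH
                INSERT INTO dbo.tblAWSObjectsErrorLog (ErrorMessage)
                VALUES (ERROR_MESSAGE());
            END CATCH;

            END CONVERSATION @Handle;
        END;
    END;
    GO

    ALTER QUEUE dbo.AWSCommandQueue
        WITH ACTIVATION (
            STATUS = ON,
            PROCEDURE_NAME = dbo.usp_ProcessAWSCommand,
            MAX_QUEUE_READERS = 4,  -- multiple readers parallelize the workload
            EXECUTE AS OWNER
        );

The only transactional coupling left is the SEND itself, which commits or rolls back with the triggering statement - which is exactly the queue semantics you want.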

StrayCatDBA
  • Thanks for the comment. I have been working on a plan B that is similar. A table at the replication source containing all ongoing DDL changes gets created and replicated to the target. Using the ID against a non-replicated table at the target that maintains already-run commands (it survives a new snapshot or full load), only new commands get executed. My problem is attempting to solve this with a trigger on the second, non-replicated audit table. I will look into the above approach, or have a job run periodically, marking commands in the non-replicated table as IsApplied or something like that. – Ross Bush Jul 06 '23 at 17:22

I found the answer about three-quarters of the way down this documentation:

Uncommittable Transactions
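
The key point from that section: when an error dooms the transaction, XACT_STATE() returns -1; the code in the CATCH block still runs, but the session can no longer perform any Transact-SQL write operations until it issues a ROLLBACK (reads are still allowed), and the transaction can never be committed. That explains why, without the kludge, no records get written at all. A minimal sketch of the documented XACT_STATE() pattern, reusing the failing command from the comments and the log table from the question (whether a given error dooms the transaction depends on the error and the context; inside a trigger it typically does):

    DECLARE @ErrorMessage NVARCHAR(4000);

    BEGIN TRANSACTION;
    BEGIN TRY
        EXECUTE ('CREATE PROC P1 AS SELECT 1') WITH RESULT SETS NONE;
    END TRY
    BEGIN CATCH
        SET @ErrorMessage = ERROR_MESSAGE();

        IF XACT_STATE() = -1
            -- Uncommittable (doomed): the session may read but not write;
            -- the only option is a full rollback.
            ROLLBACK TRANSACTION;
    END CATCH;

    IF @@TRANCOUNT > 0
        COMMIT TRANSACTION;

    -- Writes are possible again only once no doomed transaction is active.
    IF @ErrorMessage IS NOT NULL
        INSERT INTO tblAWSObjectsErrorLog (ErrorMessage)
        VALUES (@ErrorMessage);

Inside a trigger this pattern is of limited use, because a ROLLBACK in a trigger also rolls back the triggering statement and aborts the batch - which is why the in-trigger logging in the question cannot survive a doomed transaction.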

Ross Bush
  • This is very vague. What answer did you find, and how did it address your problem? If you can, [edit] your answer to clarify please. – Paul White Aug 04 '23 at 10:10