CONTEXT_INFO for audit data?


Keith Chisarik
StrataFrame VIP (1.2K reputation)
Group: StrataFrame Users
Posts: 939, Visits: 40K
Does anyone have an opinion on using CONTEXT_INFO as an efficient way to "pass" application information to auditing triggers? One could set CONTEXT_INFO on the current connection before doing any Save() against a table that requires auditing. This would allow your triggers to include data from your application that is not necessarily in the table you are updating, the RBS username for example (I cannot just use the SQL username). I need to include several fields in each audit record, and that data will never be contained in the table being audited. CONTEXT_INFO accepts 128 bytes of binary data (fast so far in testing). I was theorizing that I could pass a delimited string of values and parse it in the T-SQL trigger code. One stop to set my "custom" audit fields whenever they change, before a save or transaction on the BO gets committed?
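
Roughly what I have in mind (the delimiter, field order, and table/column names below are just for illustration, not a final design):

    -- Client side: stamp the connection with the app-level audit fields
    -- before the save. SET CONTEXT_INFO zero-pads the value to 128 bytes.
    DECLARE @info VARBINARY(128)
    SET @info = CONVERT(VARBINARY(128), 'kchisarik|workstation7|accounting')
    SET CONTEXT_INFO @info

    -- Trigger side: read it back and pull out the first delimited field.
    -- (AuditLog is a made-up table name. On SQL 2000 the read comes from
    -- master.dbo.sysprocesses; on 2005 you can use the CONTEXT_INFO() function.)
    CREATE TRIGGER trg_Orders_Audit ON Orders AFTER UPDATE
    AS
    BEGIN
        DECLARE @raw VARCHAR(128), @rbsUser VARCHAR(128)
        SELECT @raw = CONVERT(VARCHAR(128), context_info)
        FROM master.dbo.sysprocesses WHERE spid = @@SPID

        -- Everything up to the first pipe is the RBS user name.
        SET @rbsUser = LEFT(@raw, CHARINDEX('|', @raw + '|') - 1)

        INSERT INTO AuditLog (TableName, RbsUser, ChangedOn)
        SELECT 'Orders', @rbsUser, GETDATE()
    END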

Thoughts? Or suggestions on what you may have done to pass data to your audit triggers?

Keith Chisarik

Greg McGuffey
Strategic Support Team Member (2.7K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
I didn't even know about CONTEXT_INFO! Depending on what you are doing, you might look into CLR triggers, as the string manipulation would be much easier (er... any kind of parsing would be much easier). The major limitation I see with it is that it only holds 128 bytes of info. If that works for you, great. My users want comments though. :(



In my auditing design, I have two tables: one is the master audit history table, containing the table, ID, user name, date, type, and comment of any change; the other stores the old and new values of any changed fields. I use three sprocs to manage this. One is the master sproc that saves an audit record about any change (table being changed, ID of the changed row, user name, date, and a comment about the change). Then I have a sproc that saves away the old values and one that saves the new values. I use CodeSmith to manage the last two (it builds the sprocs for me, so when I change a table, I don't have to manually go update the old/new value sprocs). Then I have a base BO that manages the extra data needed for auditing and calls the sprocs appropriately during save/update/delete.
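
As a rough sketch, the two tables look something like this (simplified; the names and types here are not my exact schema):

    -- Master audit history: one row per change.
    CREATE TABLE AuditHistory (
        AuditID    INT IDENTITY(1,1) PRIMARY KEY,
        TableName  SYSNAME NOT NULL,
        RecordID   INT NOT NULL,
        UserName   VARCHAR(64) NOT NULL,
        ChangedOn  DATETIME NOT NULL DEFAULT GETDATE(),
        ChangeType CHAR(1) NOT NULL,      -- I = insert, U = update, D = delete
        Comment    VARCHAR(500) NULL
    )

    -- Field-level detail: one row per changed field, holding old and new values.
    CREATE TABLE AuditFields (
        AuditFieldID INT IDENTITY(1,1) PRIMARY KEY,
        AuditID      INT NOT NULL REFERENCES AuditHistory (AuditID),
        FieldName    SYSNAME NOT NULL,
        OldValue     VARCHAR(8000) NULL,
        NewValue     VARCHAR(8000) NULL
    )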



Let us know how using CONTEXT_INFO goes. I'm going to look into this more myself.
Keith Chisarik
StrataFrame VIP (1.2K reputation)
Group: StrataFrame Users
Posts: 939, Visits: 40K
Interesting post, Greg.

In your scenario, are you adding the additional audit info to each record save via your base class, or calling a separate sproc for each change? What kind of performance hit do you take, and what is your volume like?

I was headed down this road so that I could set the values once on the server, only having to pass the audit fields that aren't in the tables once, or whenever a new user logs into the system/feature; from there everything would be on the SQL Server. I am 99% sure I can easily live with the 128-byte limit, no comments here.
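
In my testing so far, the value sticks for the whole connection, so it really is set-and-forget until something changes (illustrative values only):

    -- Set once when the user logs in or the audit fields change:
    DECLARE @info VARBINARY(128)
    SET @info = CONVERT(VARBINARY(128), 'kchisarik|accounting')
    SET CONTEXT_INFO @info
    GO
    -- Any later batch or trigger on the same connection still sees it:
    SELECT CONVERT(VARCHAR(128), context_info)
    FROM master.dbo.sysprocesses
    WHERE spid = @@SPID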

Keith Chisarik

Greg McGuffey
Strategic Support Team Member (2.7K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
In your scenario, are you adding the additional audit info to each record save via your base class, or calling a separate sproc for each change?


The base BO first calls an audit sproc with the extra info (user name, comments, etc.) and gets back an audit ID. That creates the audit record with the extra info. Then the old values sproc is called, passing it the audit ID and the table and ID of the record being changed. It creates a set of audit field records (one record for every field being changed) with the old values. Then the BO does the actual save. Finally, the new values sproc is called with the audit ID, table, and ID of the changed record. It adds the new values to the audit field records created by the old values sproc.
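
In T-SQL terms, the sequence the base BO drives looks roughly like this (the sproc names and parameters are made up for illustration, matching the sketch tables above):

    -- 1. Create the master audit record; get the audit ID back.
    DECLARE @auditID INT
    EXEC usp_Audit_Begin
        @TableName = 'Orders', @RecordID = 42,
        @UserName = 'gmcguffey', @ChangeType = 'U',
        @Comment = 'Price correction',
        @AuditID = @auditID OUTPUT

    -- 2. Snapshot the old values (one AuditFields row per changing field).
    EXEC usp_Audit_SaveOldValues @AuditID = @auditID, @RecordID = 42

    -- 3. ...the BO performs the actual save (UPDATE) here...

    -- 4. Fill in the new values on the rows created in step 2.
    EXEC usp_Audit_SaveNewValues @AuditID = @auditID, @RecordID = 42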



What kind of performance hit do you take, and what is your volume like?


Haven't found any performance hit. Almost all the work is done on SQL Server, as the base BO is only calling sprocs with small sets of data. My app is very text based, with lots of TEXT fields (oh the joy... eventually we'll upgrade to SQL 2005 and can use varchar(max)). This method allows for field-level auditing while only sending one set of the actual table data to the server at a time. Our volume is low, but most users are remote with varying connections, so I was more concerned with limiting bandwidth requirements than with server load.



I was headed down this road so that I could set the values once on the server, only having to pass the audit fields that aren't in the tables once, or whenever a new user logs into the system/feature; from there everything would be on the SQL Server.


I wasn't quite clear on how CONTEXT_INFO worked. You can do the SET CONTEXT_INFO just once in SF and then it keeps working for that entire session? If so, that is cool.



I'm going to look into this some more myself, though I think in my specific circumstance it might not work....
