Interesting problem with SQL Server, ARITHABORT, filtered indexes, computed columns, and compatibility level 80

I recently ran into a problem in a legacy application.  Someone had applied a unique index on a column in a SQL Server database table.  Shortly after, we started seeing some errors in our logs relating to that table:

UPDATE failed because the following SET options have incorrect settings: ‘ARITHABORT’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

After a good amount of investigation, I discovered that it was related to the fact that the unique index was a filtered index (it needed to be a filtered unique index because the business rules allowed duplicate NULL values, but not duplicate non-NULL values). It turns out that if you use a filtered index in SQL Server while running with compatibility level 80, any session that updates that table must have SET ARITHABORT ON, or the update will fail with this error.
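
For reference, that kind of filtered unique index looks something like this (the table, column, and index names are illustrative, borrowed from the trigger later in the post):

-- Uniqueness is enforced only across non-NULL values of [num];
-- rows where [num] IS NULL are excluded from the index entirely.
CREATE UNIQUE NONCLUSTERED INDEX IX_jobdata_num
    ON dbo.jobdata ([num])
    WHERE [num] IS NOT NULL;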

Well, this is a legacy application that can’t have its compatibility level changed for numerous reasons, and there are many places where we would need to set ARITHABORT, so that wasn’t really a good option either.

First I tried a workaround using a computed column that would never be NULL, and then applied a unique index to that column, but the same ARITHABORT restriction applies to indexes on computed columns as it does to filtered indexes.
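
The computed-column attempt was along these lines (a sketch only; the column name and expression are assumed to mirror the trigger shown below, not taken from the real schema):

-- A computed column that substitutes an id-derived value when [num] is NULL,
-- so every row has a non-NULL value that a unique index could cover.
ALTER TABLE dbo.jobdata
    ADD UniqueJobNumber AS ( CASE WHEN [num] IS NULL
                                  THEN CONVERT(varchar(30), [id]) + '00000000'
                                  ELSE [num]
                             END );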

After a few other attempts at a workaround, my friend Phil @ XDev Group mentioned a method he had used to solve a similar problem: a trigger that maintains a poor man’s computed column, without the limitations imposed on real computed columns under compatibility level 80.

CREATE TRIGGER [dbo].UniqueJobNumberGenerator ON [dbo].jobdata
    FOR INSERT, UPDATE
AS
    BEGIN
        SET NOCOUNT ON;

        -- Maintain a never-NULL surrogate column: copy [num] when it is present,
        -- otherwise derive a value from the row's [id].
        UPDATE  dbo.jobdata
        SET     UniqueJobNumber = ( CASE WHEN [num] IS NULL
                                         THEN CONVERT([varchar], [id], ( 0 ))
                                              + '00000000'
                                         ELSE [num]
                                    END )
        WHERE   id IN ( SELECT id
                        FROM   INSERTED );

    END

Then I applied a unique index on that column (without a filter) and … success!
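
The final index is just an ordinary, unfiltered unique index on the trigger-maintained column, something like this (the index name is made up):

CREATE UNIQUE NONCLUSTERED INDEX IX_jobdata_UniqueJobNumber
    ON dbo.jobdata (UniqueJobNumber);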

List of years in SQL Server

Here is a recursive CTE from my friend Phil that returns a list of years in an efficient manner. Good reference.

/* Return a list of years since 2009 */

WITH yearlist AS
(
    SELECT 2009 AS year
    UNION ALL
    SELECT yl.year + 1 AS year
    FROM   yearlist yl
    WHERE  yl.year + 1 <= YEAR(GETDATE())
)
SELECT year
FROM   yearlist
ORDER BY year DESC;
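
One caveat if the starting year is ever pushed back much further: recursive CTEs are capped at 100 recursion levels by default, so a range of more than 100 years would need a MAXRECURSION hint on the outer query, along these lines:

SELECT year
FROM   yearlist
ORDER BY year DESC
OPTION (MAXRECURSION 0); -- 0 removes the default 100-level recursion limit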

Tracking Schema Change Details in SQL Server

In SQL Server 2005 and newer you can create database-level DDL triggers that you can use to track all schema changes to any objects in that database.

I wanted to track schema changes across all of the databases on my server, so I created a database called SchemaChanges containing a logging table and a stored procedure, as follows:

USE [SchemaChanges]
GO

/****** Object:  Table [dbo].[SchemaChangeLog]    Script Date: 01/27/2011 11:17:26 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[SchemaChangeLog](
    [SchemaChangeLogId] [int] IDENTITY(1,1) NOT NULL,
    [EventType] [varchar](100) NULL,
    [PostTime] [datetime] NULL,
    [LoginName] [varchar](100) NULL,
    [Username] [varchar](100) NULL,
    [DatabaseName] [varchar](100) NULL,
    [SchemaName] [varchar](100) NULL,
    [ObjectName] [nchar](100) NULL,
    [ObjectType] [varchar](100) NULL,
    [SqlText] [varchar](max) NULL,
    [ServerName] [varchar](100) NULL,
 CONSTRAINT [PK_SchemaChangeLog] PRIMARY KEY CLUSTERED
(
    [SchemaChangeLogId] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS  = ON,
ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO


USE [SchemaChanges]
GO

/****** Object:  StoredProcedure [dbo].[SchemaChangeLog_Save]    Script Date: 01/27/2011 11:17:31 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:        Chris May
-- Create date: 1/27/11
-- Description:    saves schema change info
-- =============================================
CREATE PROCEDURE [dbo].[SchemaChangeLog_Save]
      @data XML
AS
BEGIN
    BEGIN TRY
        
        INSERT INTO SchemaChanges.dbo.SchemaChangeLog
                (
                  [EventType],
                  PostTime,
                  ServerName,
                  LoginName,
                  Username,
                  DatabaseName,
                  SchemaName,
                  ObjectName,
                  ObjectType,
                  SqlText

                )
        VALUES  (
                  @data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname'),
                  GETDATE(),
                  @data.value('(/EVENT_INSTANCE/ServerName)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/UserName)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/SchemaName)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'sysname'),
                  @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'VARCHAR(max)')
                ) ;

    END TRY
    BEGIN CATCH
        -- we just don't want any errors here
                
    END CATCH ;
END

GO

Then, in each database where you want to track schema changes, just run this to create the DDL trigger:

CREATE TRIGGER Trigger_Track_Schema_Changes ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
AS
    SET NOCOUNT ON
    DECLARE @data XML ;
    BEGIN TRY

        SET @data = EVENTDATA() ;
        EXEC SchemaChanges.dbo.SchemaChangeLog_Save @data

    END TRY
    BEGIN CATCH
            -- we just don't want any errors here
    END CATCH ;

GO

If you want to undo this you can always drop the trigger by using:

DROP TRIGGER Trigger_Track_Schema_Changes ON DATABASE

EDIT: I’ve found that some things (like some actions in SQL Compare) cause this trigger to throw an error.  I’m not sure why.  But, if you want to just disable the trigger for a moment you can run this:

DISABLE TRIGGER Trigger_Track_Schema_Changes ON DATABASE
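
And to turn it back on afterwards:

ENABLE TRIGGER Trigger_Track_Schema_Changes ON DATABASE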


SQL Server Activity Monitoring

If you are using SQL Server and you want to get an overview of what’s going on, you can make use of the “Activity Monitor”, which is available in Management Studio for SQL Server 2005 and later.

In Management Studio, just right click on the server (not the DB catalog) and pick Activity Monitor.

If you are using SQL Server 2000, you are in a bit of a tight spot.  Even though the BOL says that you can use Activity Monitor from Management Studio, you can’t.  It’s a lie.  So you have to have Enterprise Manager installed.

From Enterprise Manager, under the server you want to view, expand the “Management” folder and then the “Current Activity” node.  Under there you will find Process Info, Locks/Process ID, and Locks/Object.

Too bad MS didn’t make Management Studio backward compatible like the BOL claims it is.
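
If you would rather skip the GUI altogether, a quick T-SQL alternative that works on SQL Server 2000 and later is sp_who2, which lists the current sessions along with their status and any blocking:

EXEC sp_who2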

Shrinking your SQL Server log files

Shrinking your LDFs is as easy as:

-- Discard the inactive portion of the log without backing it up
-- (WITH TRUNCATE_ONLY works through SQL Server 2005; it was removed in 2008,
-- where switching to SIMPLE recovery alone is enough)
BACKUP LOG DBName WITH TRUNCATE_ONLY
ALTER DATABASE DBName SET RECOVERY SIMPLE
GO
USE DBName
GO
-- Shrink the log file to a 2 MB target (the second argument is the size in MB)
DBCC SHRINKFILE('DBName_log', 2)
GO

Just replace “DBName” with the actual name of the database.

In some cases you might have to inspect the properties of the database in Management Studio to find the logical file name, if the log file happens to be named something other than DBName_log.
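
On SQL Server 2005 and later, you can also look up the logical file names with a quick query instead of digging through the properties dialog:

USE DBName
GO
-- The row with type_desc = 'LOG' is the log file name to pass to DBCC SHRINKFILE
SELECT name, type_desc, physical_name
FROM sys.database_files;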


SQL Server Unique Constraint

In SQL Server you can create a UNIQUE constraint on a column that is not the primary key.  The “CLUSTERED” keyword must be removed (or replaced with NONCLUSTERED) if you already have a clustered index on the table.  Here are examples of one-column and two-column unique constraints:


ALTER TABLE [dbo].[item_orders2] ADD CONSTRAINT
UNIQUE_Table2 UNIQUE CLUSTERED
(
order_id
) ON [PRIMARY]


ALTER TABLE [dbo].[item_orders3] ADD CONSTRAINT
UNIQUE_Table3 UNIQUE CLUSTERED
(
order_id,
item_id
) ON [PRIMARY]
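
For example, if item_orders2 already had a clustered index (say, on its primary key), the same constraint would be declared NONCLUSTERED instead (the constraint name here is just illustrative):

ALTER TABLE [dbo].[item_orders2] ADD CONSTRAINT
UNIQUE_Table2_NC UNIQUE NONCLUSTERED
(
order_id
) ON [PRIMARY]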


Removing all extended properties from SQL Server

About a year ago, we tried using a tool to get our SQL schema into a source control system.  It was supposed to work like this: every night the process would run and check in each object as a file into TFS.

Well, aside from not working, it also added Extended Properties to all our objects in SQL Server. 

Not really a big deal, except that when we ran our database comparison tools between production and development, those extended properties made almost everything appear out of sync.

I ended up writing my own service to check in all our SQL objects into TFS, but we were left with all these Extended Properties all over the place. 

So I finally got around to cleaning them up using the script below:

SELECT 'EXEC sp_dropextendedproperty
@name = ''' + sys.extended_properties.name + '''
,@level0type = ''schema''
,@level0name = ' + object_schema_name(extended_properties.major_id) + '
,@level1type = ''' + CASE xtype
                         WHEN 'U'  THEN 'table'
                         WHEN 'FN' THEN 'function'
                         WHEN 'V'  THEN 'view'
                         WHEN 'P'  THEN 'procedure'
                     END + '''
,@level1name = ''' + object_name(extended_properties.major_id) + ''''
FROM sys.extended_properties
INNER JOIN sysobjects ON sys.extended_properties.major_id = sysobjects.id
WHERE extended_properties.class_desc = 'OBJECT_OR_COLUMN'
  AND extended_properties.minor_id = 0
  AND extended_properties.name LIKE 'VSS%'

This script produces a result set that you can copy and paste into another query window and run to clean out all of the extended properties.

Note: all the extended properties that this tool created in our DB started with “VSS”, which is why I have that extra condition in my WHERE clause.  You’ll probably want to remove it.
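
For reference, each row of the result set is a statement shaped like this (the property and object names here are made up for illustration):

EXEC sp_dropextendedproperty
@name = 'VSS_SomeProperty'
,@level0type = 'schema'
,@level0name = dbo
,@level1type = 'table'
,@level1name = 'MyTable'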