Tuesday 26 December 2006

ALTER TABLE SWITCH PARTITION FAILURE

Problem

The ALTER TABLE ... SWITCH statement fails when switching between a non-partitioned table and a partitioned table, even though the same check constraints exist on the partitioning column of both tables.

Solution

The problem and its solution can be explained using the following example.

/* Partition Function */
CREATE PARTITION FUNCTION [IdentiferPFN](int) AS RANGE LEFT FOR VALUES (100, 200, 300, 400)

/* Partition Scheme */
CREATE PARTITION SCHEME [IdentiferPFN] AS PARTITION [IdentiferPFN] ALL TO ([FILEGROUP1])
GO

/* Partition Table */
CREATE TABLE [dbo].[TestPartition]
( [PurchaseOrderID] [int] IDENTITY(1,1) NOT NULL, [EmployeeID] [int] NULL)
ON [IdentiferPFN]([EmployeeID])
GO

ALTER TABLE [dbo].[TestPartition] WITH CHECK ADD CONSTRAINT [CChk] CHECK (([employeeid]>=(0)))
GO
ALTER TABLE [dbo].[TestPartition] WITH CHECK ADD CONSTRAINT [CChk1] CHECK (([employeeid]<=(100)))
GO

/* Non Partition Table */

CREATE TABLE [dbo].[TestPartitionSwitch]
( [PurchaseOrderID] [int] IDENTITY(1,1) NOT NULL, [EmployeeID] [INT] NULL)
ON [FILEGROUP1]
GO
ALTER TABLE [dbo].[TestPartitionSwitch] WITH CHECK ADD CONSTRAINT [Chk] CHECK (([employeeid]>=(0)))
GO
ALTER TABLE [dbo].[TestPartitionSwitch] WITH CHECK ADD CONSTRAINT [Chk1] CHECK (([EMPLOYEEID]<=(100)))
GO

/* Switch the data from the non-partitioned table into partition 1 of the partitioned table */
ALTER TABLE TestPartitionSwitch SWITCH TO TestPartition PARTITION 1

/* Switch the data back to the non-partitioned table */
ALTER TABLE TestPartition SWITCH PARTITION 1 TO TestPartitionSwitch

The switch into the partitioned table fails with the following error:

"ALTER TABLE SWITCH statement failed. Check constraints or partition function of source table 'PARTITIONTEST.dbo.TestPartitionSwitch' allows values that are not allowed by check constraints or partition function on target table 'PARTITIONTEST.dbo.TestPartition'."

This problem can be solved by modifying the constraints to include a NOT NULL check on the partition column "EmployeeID", as shown below.

ALTER TABLE [dbo].[TestPartitionSwitch] DROP CONSTRAINT [Chk], [Chk1]
GO
ALTER TABLE [dbo].[TestPartitionSwitch] WITH CHECK ADD CONSTRAINT [Chk] CHECK (([employeeid]>=(0) AND [employeeid] IS NOT NULL))
GO
ALTER TABLE [dbo].[TestPartitionSwitch] WITH CHECK ADD CONSTRAINT [Chk1] CHECK (([EMPLOYEEID]<=(100) AND [employeeid] IS NOT NULL))
GO

A NULL in the partitioning column would satisfy the original check constraints but fall outside the target partition. Changing the nullability of the column, or asserting NOT NULL in the constraints, resolves the issue.
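To see which partition a given EmployeeID value maps to, the built-in $PARTITION function can be queried against the partition function defined above (a quick sanity check, assuming the IdentiferPFN function from this example):

```sql
-- Returns the partition number each value maps to under IdentiferPFN.
-- With RANGE LEFT and boundaries (100, 200, 300, 400),
-- values <= 100 fall in partition 1, 101-200 in partition 2, and so on.
SELECT  $PARTITION.IdentiferPFN(0)   AS PartitionFor0,    -- partition 1
        $PARTITION.IdentiferPFN(100) AS PartitionFor100,  -- partition 1
        $PARTITION.IdentiferPFN(101) AS PartitionFor101   -- partition 2
```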

Tuesday 12 December 2006

Easy Steps for Upgrading to SQL Server 2005

Detach the database from the SQL Server 2000 server and attach it to the SQL Server 2005 server, or use backup and restore.

Once the database is restored, perform the following steps to make sure the upgrade process is completed correctly.

  • Update statistics - To help optimize query performance, update statistics on all databases following upgrade. Use the sp_updatestats stored procedure to update statistics in user-defined tables in SQL Server 2005 databases.
  • Update usage counters - In earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, we recommend that you run DBCC UPDATEUSAGE on all databases following upgrade.
  • Change database compatibility - Change the database compatibility level to SQL Server 2005 (90).
  • Configure your new SQL Server installation - To reduce the attackable surface area of a system, SQL Server 2005 selectively installs and activates key services and features. Change the surface area configuration as required. For more information on how to activate SQL Server 2005 features, see SQL Server Surface Area Configuration.
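The first three post-upgrade steps can be scripted roughly as follows (a sketch; the database name AdventureWorks is a placeholder, and sp_dbcmptlevel is the SQL Server 2005 way to set the compatibility level):

```sql
USE AdventureWorks
GO
-- Step 1: refresh statistics on all user tables
EXEC sp_updatestats
GO
-- Step 2: correct row and page counts carried over from SQL Server 2000
DBCC UPDATEUSAGE ('AdventureWorks')
GO
-- Step 3: raise the compatibility level to SQL Server 2005 (90)
EXEC sp_dbcmptlevel 'AdventureWorks', 90
GO
```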

The following checklist should be verified before installing or upgrading to SQL Server 2005:

  • Review Hardware and Software Requirements for Installing SQL Server 2005.
  • Review Check Parameters for the System Configuration Checker.
  • Review Security Considerations for a SQL Server Installation.
  • Review Using Upgrade Advisor to Prepare for Upgrades.
  • Review SQL Server 2005 Database Engine Backward Compatibility.
  • Back up all SQL Server database files from the instance to be upgraded, so you can completely restore them, if necessary.
  • Run the appropriate Database Console Commands (DBCC) on databases to be upgraded to ensure they are in a consistent state.
  • Estimate the disk space required to upgrade SQL Server components, as well as user databases. For disk space required by SQL Server 2005 components, see Hardware and Software Requirements for Installing SQL Server 2005.
  • Ensure that existing SQL Server system databases - master, model, msdb, and tempdb - are configured to autogrow, and ensure that they have adequate hard disk space.
  • Ensure that all database users have corresponding login information in the master database. This is important for restoring a database, as system login information resides in master.
  • Disable all startup stored procedures, as the upgrade process will stop and start services on the SQL Server instance being upgraded. Stored procedures processed at startup time may block the upgrade process.
  • Stop Replication and make sure that the replication log is empty.
  • Quit all applications, including all services with SQL Server dependencies. Upgrade may fail if local applications are connected to the instance being upgraded.

Monday 11 December 2006

SQL Server Index Creation Guide Line

Index creation guide

Indexes should be considered on all columns that are frequently accessed by the WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.

Only add indexes if you know that they will be used by the queries run against the table

As a rule of thumb, every table should have at least a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases--such as an identity column, or some other column where the value is increasing--and is unique. In many cases, the primary key is the ideal column for a clustered index

Static tables (those tables that change very little, or not at all) can be more heavily indexed than dynamic tables (those that are subject to many INSERTS, UPDATES, or DELETES) without negative effect.

If you will be creating an index to speed the retrieval of a single record, you may want to consider making it a non-clustered index, and saving the clustering index (you can only have one) for a more complex query

Don't over index your OLTP tables, as every index you add increases the time it takes to perform INSERTS, UPDATES, and DELETES

Don't add the same index twice on a table.

Whether an index on a foreign key has either high or low selectivity, an index on a foreign key can be used by the Query Optimizer to perform a merge join on the tables in question. A merge join occurs when a row from each table is taken and then they are compared to see if they match the specified join criteria. If the tables being joined have the appropriate indexes (no matter the selectivity), a merge join can be performed, which is generally much faster than a join to a table with a foreign key that does not have an index.

If you have two or more tables that are frequently joined together, then the columns used for the joins should have an appropriate index.

When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index, each row is unique, and once the needed record is found, SQL Server doesn't have to look any further.

To boost the performance of a query that includes an AND operator in the WHERE clause, consider the following:

  • Of the search criterions in the WHERE clause, at least one of them should be based on a highly selective column that has an index.
  • If at least one of the search criterions in the WHERE clause is not highly selective, consider adding indexes to all of the columns referenced in the WHERE clause.
  • If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.
  • The Query Optimizer will always perform a table scan or a clustered index scan on a table if the WHERE clause in the query contains an OR operator and if any of the referenced columns in the OR clause are not indexed (or does not have a useful index). Because of this, if you use many queries with OR clauses, you will want to ensure that each referenced column in the WHERE clause has an index.
  • The Query Optimizer converts the Transact-SQL IN clause to the OR operator when parsing your code. Because of this, keep in mind that if the referenced column in your query doesn't include an index, then the Query Optimizer will perform a table scan or clustered index scan on the table.
  • Create composite indexes with the most restrictive column first.
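As an illustration of the last two points, a composite index that also covers a query might look like this (a sketch; the table and column names are hypothetical):

```sql
-- Most restrictive column (CustomerID) first; OrderDate included in the
-- key so the query below can be answered from the index alone.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)

-- This query is covered: both WHERE and SELECT columns are in the index,
-- so no lookup into the base table is needed.
SELECT OrderDate
FROM dbo.Orders
WHERE CustomerID = 42 AND OrderDate >= '20060101'
```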

Don’t add an index in the following cases:

  • If it is not used by the query optimizer. Use Query Analyzer's "Show Execution Plan" option to see if your queries against a particular table use an index or not. If the table is small, most likely indexes will not be used.
  • If the column values exhibit low selectivity, often less than 90%-95% for non-clustered indexes.
  • If the column(s) to be indexed are very wide.
  • If the column(s) are defined as TEXT, NTEXT, or IMAGE data types.
  • If the table is rarely queried.

Thursday 7 December 2006

SQL Server Fill Factor Recommendations

SQL Server query performance can be improved by creating indexes with appropriate fill factor values. The following steps should be taken.

Step 1:

The tables should be classified into one of the following categories:
• Low Update tables
• High Update tables
• Everything In-Between

Step 2:

Modify the Index creation scripts to hold the fill factor values as given below

Low Update Tables (100-1 read to write ratio): 100% fillfactor
High Update Tables (where writes exceed reads): 50%-70% fillfactor
Everything In-Between: 80%-90% fillfactor.
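For example, an index on a high-update table could be created with a 70% fill factor (a sketch; the table and index names are illustrative):

```sql
-- Leaves 30% free space on each leaf page to absorb inserts and updates,
-- reducing page splits on a write-heavy table.
CREATE NONCLUSTERED INDEX IX_OrderDetail_ProductID
ON dbo.OrderDetail (ProductID)
WITH FILLFACTOR = 70
```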

Tuesday 5 December 2006

Recommendations for SQL Server File Group Creation

· The primary filegroup must be kept totally separate: it should contain only system objects, and no user-defined objects should be created on it.
· The primary file group should not be set as default file group.
· Separating the system objects from other user objects will increase performance and enhance the ability to access tables in the case of serious data failures.
· If more than one physical drive is available in the system, create multiple physical files per filegroup, one file per disk. This improves performance because whenever a table is accessed sequentially, SQL Server creates a separate thread for each file to read the table’s data in parallel.
· A separate file group can be created for indexes and the number of physical files can be created as mentioned in the above point.
· Create a table in one filegroup and place its text, ntext, and image columns in a different filegroup on different physical disks.
· The log file must be placed on a different physical disk than the data files. Because logging is write intensive, it is important to have the log on a disk with good I/O performance; a RAID level suited to write-intensive operations, such as RAID 1 or RAID 10, is a good choice.
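These recommendations can be sketched in a single CREATE DATABASE statement (file paths and names are placeholders; D:, E:, F:, and G: are assumed to be separate physical disks):

```sql
CREATE DATABASE Sales
ON PRIMARY            -- system objects only
    (NAME = Sales_Sys,  FILENAME = 'D:\Data\Sales_Sys.mdf'),
FILEGROUP DataFG      -- user tables
    (NAME = Sales_Data, FILENAME = 'E:\Data\Sales_Data.ndf'),
FILEGROUP IndexFG     -- nonclustered indexes
    (NAME = Sales_Idx,  FILENAME = 'F:\Data\Sales_Idx.ndf')
LOG ON                -- log on its own write-friendly disk
    (NAME = Sales_Log,  FILENAME = 'G:\Log\Sales_Log.ldf')
GO
-- Make the user filegroup the default so new objects stay off PRIMARY
ALTER DATABASE Sales MODIFY FILEGROUP DataFG DEFAULT
GO
```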

Dimensional Data Modeling

The dimensional data model is most often used in data warehousing systems. This is different from the 3rd normal form, commonly used for transactional (OLTP) systems. As you can imagine, the same data would then be stored differently in a dimensional model than in a 3rd normal form model.

To understand dimensional data modeling, let's define some of the terms commonly used in this type of modeling:

Dimension: A category of information. For example, the time dimension.

Attribute: A unique level within a dimension. For example, Month is an attribute in the Time Dimension.

Hierarchy: The specification of levels that represents relationship between different attributes within a dimension. For example, one possible hierarchy in the Time dimension is Year → Quarter → Month → Day.

Fact Table: A fact table is a table that contains the measures of interest. For example, sales amount would be such a measure. This measure is stored in the fact table with the appropriate granularity. For example, it can be sales amount by store by day. In this case, the fact table would contain three columns: A date column, a store column, and a sales amount column.

Lookup Table: The lookup table provides the detailed information about the attributes. For example, the lookup table for the Quarter attribute would include a list of all of the quarters available in the data warehouse. Each row (each quarter) may have several fields, one for the unique ID that identifies the quarter, and one or more additional fields that specifies how that particular quarter is represented on a report (for example, first quarter of 2001 may be represented as "Q1 2001" or "2001 Q1").

A dimensional model includes fact tables and lookup tables. Fact tables connect to one or more lookup tables, but fact tables do not have direct relationships to one another. Dimensions and hierarchies are represented by lookup tables. Attributes are the non-key columns in the lookup tables.

In designing data models for data warehouses / data marts, the most commonly used schema types are Star Schema and Snowflake Schema.

Star Schema: In the star schema design, a single object (the fact table) sits in the middle and is radially connected to other surrounding objects (dimension lookup tables) like a star. A star schema can be simple or complex. A simple star consists of one fact table; a complex star can have more than one fact table.

Snowflake Schema: The snowflake schema is an extension of the star schema, where each point of the star explodes into more points. The main advantage of the snowflake schema is the improvement in query performance due to minimized disk storage requirements and joining smaller lookup tables. The main disadvantage of the snowflake schema is the additional maintenance effort needed due to the increased number of lookup tables.

Whether one uses a star or a snowflake largely depends on personal preference and business needs.
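The sales-by-store-by-day fact table described above could be sketched as a simple star schema like this (all table and column names are illustrative):

```sql
-- Dimension (lookup) tables
CREATE TABLE dbo.DimDate
( DateKey      INT PRIMARY KEY,
  CalendarDate DATETIME NOT NULL,
  QuarterName  CHAR(7) NOT NULL )   -- e.g. 'Q1 2001'

CREATE TABLE dbo.DimStore
( StoreKey  INT PRIMARY KEY,
  StoreName VARCHAR(50) NOT NULL )

-- Fact table: one row per store per day, related only to the dimensions,
-- never directly to another fact table
CREATE TABLE dbo.FactSales
( DateKey     INT   NOT NULL REFERENCES dbo.DimDate  (DateKey),
  StoreKey    INT   NOT NULL REFERENCES dbo.DimStore (StoreKey),
  SalesAmount MONEY NOT NULL )
```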

SQL Server 2005 features that are dependent on Windows Server 2003...

The question about what features are supported by SQL Server 2005 running on Windows Server 2003 comes up quite often. Below are some of the features that depend on the OS, with a brief description of each.
1. Password policy/expiration check for SQL logins - CREATE LOGIN is a new DDL for creating SQL logins. It has two options called CHECK_POLICY and CHECK_EXPIRATION that can be used to enforce Windows password policies for SQL logins. This leverages the functionality provided by NetValidatePasswordPolicy() API
2. Hot add memory - This is a new feature in Windows Server 2003 that can help SQL Server 2005 indirectly
3. Dynamic AWE - SQL Server 2005 running on Windows Server 2003 uses dynamic AWE memory allocation. This works similar to regular memory management
4. 64-bit versions of SQL Server 2005 run only on Windows Server 2003 64-bit (IA64 and x64) - Native 64-bit editions of SQL Server 2005 run only on the corresponding Windows Server 2003 64-bit editions
5. SOAP support - Native XML web services in SQL Server 2005 relies on HTTP API support which is provided by Windows Server 2003 and Windows XP Service Pack 2
6. Fast file initialization - Creating a new database of any size is almost instantaneous in SQL Server 2005 running on Windows Server 2003/XP. This requires the service account to have the SE_MANAGE_VOLUME_NAME privilege. With fast file initialization, disk space is claimed without being zeroed out; the existing disk content is overwritten only as new data is written to the files
7. SQL Writer service features that work with Volume Shadow Copy Services - Differential backup, differential restore, restore with move, database rename, copy only backup and auto-recovered Snapshots are supported only on Windows Server 2003 with Service Pack 1
In addition to these features that can benefit SQL Server installations on the Windows Server 2003 platform, there are other inherent advantages. Windows Server 2003 is more scalable and secure than Windows 2000 Server. These are just some of the things to keep in mind when choosing a platform for running SQL Server 2005.

The link below contains some benefits on using Windows Server 2003 with SQL Server 2005:
http://www.microsoft.com/sql/techinfo/planning/winsvr2003benefits.mspx

Monday 4 December 2006

Rename Database in SQL Server 2000 & 2005

The database can be renamed in three ways.
Option 1

Use the following T-SQL command to make the database name change.
EXEC sp_renamedb 'oldName', 'newName'
This command works for both SQL Server 2000 and SQL Server 2005, but this feature will not be supported in future versions of SQL Server. The new command that should be used for SQL Server 2005 and beyond is:
ALTER DATABASE oldName MODIFY NAME = newName

Option 2

If you are using SQL Server Management Studio, right click on the database name and select the new option "rename". This did not exist in SQL Server 2000, but if you use Management Studio to manage your SQL Server 2000 server you can take advantage of this option for your SQL Server 2000 instances.

Option 3

Use the detach and attach feature of SQL Server to detach the database first and when you reattach the database you give the database a different name. This can be done by using the GUI or you can do this by using the following commands:
EXEC sp_detach_db 'oldName', 'true'
EXEC sp_attach_db @dbname = N'newName', @filename1 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\pubs.mdf', @filename2 = N'c:\Program Files\Microsoft SQL Server\MSSQL\Data\pubs_log.ldf'
Here the database is detached under its old name and reattached with the new name; the file paths shown assume the pubs sample database files.

Friday 1 December 2006

Solving date/time format problems between SQL Server and the app server

Details – Source - MS Press book on SQL Server 2005
Datetime manipulation can be quite challenging. What's the correct way to express DATETIME literals? What happens when you enter a value that cannot be represented exactly—for example, '20060211 23:59:59.999'? How do you separate date and time? The following sections provide some answers to these questions.
Literals

Expressing DATETIME literals for data entry in T-SQL is tricky business. When you need to express one, you use a character string that is implicitly convertible to DATETIME. If a string appears in a context where a DATETIME is expected—for example, as the target value of a DATETIME column in an INSERT or UPDATE statement—it will be implicitly converted to DATETIME. Also, when expressions contain operands with different datatypes, normally the highest in precedence determines the datatype of all operands. Datetime has a higher precedence than a character string. So, for example, if you compare two values, one of which is a DATETIME and the other is a character string, the character string gets implicitly converted to a DATETIME.
To add to the confusion, there are various conventions for expressing DATETIME values. The value '02/12/06' means different things to different people. When this string must be converted to a DATETIME, SQL Server will convert it based on the language settings of the session. The session's language is determined by the login's default language, but it can be overridden by using the SET LANGUAGE session option. You can also control how DATETIME literals comprising digits and separators are interpreted by using the SET DATEFORMAT option, specifying a combination of the characters d, m, and y. For example, mdy would mean month, day, year. By the way, SET LANGUAGE implicitly sets DATEFORMAT to match the language's convention for date formatting.
So you have tools to control the way some DATETIME literals will be interpreted, but you realize that by issuing one of the aforementioned SET options, you're changing the behavior of the whole session. What if other code that will end up running in your session is supposed to be dependent on the login's default language? This consideration is especially important with international applications.
Whenever possible, I write code that is independent of any settings or switches in the system. There are two literal DATETIME formats in SQL Server that are independent of any settings. I particularly like the one of these formats that has no separators between the date portions: '[yy]yymmdd[ hh:mi[:ss][.mmm]]'. Examples of DATETIME literals in this format are '20060212', '060212', and '20060211 23:59:59.997'. The DATEFORMAT and LANGUAGE settings do not affect the interpretation of DATETIME strings in this format. If you would rather use separators between the date parts, you can use the other setting-independent format in SQL Server: 'yyyy-mm-ddThh:mi:ss[.mmm]'. An example of this format is '2006-02-12T14:23:05'. The time portion cannot be omitted when this format is used.
Another technique you can use to specify DATETIME values is to explicitly convert the character string to a DATETIME using the T-SQL function CONVERT, specifying the option style parameter in the function call. For example, if you want to use the British/French style with two digits for the year, specify style 3: CONVERT(DATETIME, '12/02/06', 3). For the full list of supported styles, please refer to Books Online, under the subject CAST and CONVERT (Transact-SQL).
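The techniques above can be put side by side (a sketch; the first two literals are interpreted the same way regardless of the session's LANGUAGE or DATEFORMAT settings, and the third pins the interpretation explicitly via the style parameter):

```sql
-- Setting-independent literal with no separators
SELECT CAST('20060212' AS DATETIME)

-- Setting-independent literal with separators (time portion required)
SELECT CAST('2006-02-12T00:00:00' AS DATETIME)

-- Explicit style: 3 = British/French dd/mm/yy
SELECT CONVERT(DATETIME, '12/02/06', 3)
```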
At some point, you may see a date or time literal such as {d '2006-02-12'}. This is an ODBC format that can be used in some APIs. I wouldn't recommend using such literals because even though they are independent of settings, they are API dependent.

Thursday 30 November 2006

SQL Server 2005 NewSequentialID

SQL Server 2005 provides a new function called NewSequentialID that's similar to NewID(), a function that originated in SQL Server 2000 and is supported in SQL Server 2005. Both functions return a value of data type uniqueidentifier.
In the past, many people used NewID() to assign primary keys to columns, and these columns were unique across servers. When SQL Server 2000 first came out, I too used NewID() to create primary keys. Alas, I soon determined (as did many others) that using the NewID() function this way can cause a variety of performance problems and that NewID() doesn't provide a unique value that increases in order, as the IDENTITY-based key does. You get some important performance benefits when a key value is always higher than the last assigned value; generally these benefits are related to how data is stored on the pages.
The NewSequentialId function creates a globally unique identifier (GUID) that's always greater than any GUID previously generated by the NewSequentialId function on a specified computer. Let's look at the example in Listing 1, which shows how to use the two functions to assign GUIDs that are higher in sort order than the previously assigned values...
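A typical use of NEWSEQUENTIALID() looks like this (a sketch; the function can only be used in a column DEFAULT constraint, and the table name is illustrative):

```sql
CREATE TABLE dbo.Orders
( OrderID UNIQUEIDENTIFIER NOT NULL
      CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID()
      CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
  OrderDate DATETIME NOT NULL DEFAULT GETDATE() )
GO
-- Each insert receives a GUID greater than the previous one on this
-- machine, so new rows append at the end of the clustered index
-- instead of splitting pages at random positions.
INSERT INTO dbo.Orders (OrderDate) VALUES (GETDATE())
```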


Refer to the link below for details
http://www.sqlmag.com/Article/ArticleID/49960/sql_server_49960.html