Wednesday, March 16, 2011

Database development considerations

Many of us, intentionally or unintentionally, do not plan the database behind a high-availability website or application. This creates frustration at both ends: it exhausts developers who are then unable to build a robust application, and it frustrates the client who does not get optimum performance from it. The purpose of the following notes is to create a checklist that should be satisfied before delivering any product for quality assurance.

1. Plan the database for its purpose. If it is for online transaction processing (OLTP), it should be normalized and should not violate normalization rules, no matter how much denormalization might seem to boost processing. Please consider the following notes for OLTP databases.

i. Create multiple filegroups for the different sets of tables, for example a filegroup for indexes, a filegroup for lookup tables and a filegroup for transaction tables. Spreading objects across multiple filegroups helps optimize disk IO.
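
A minimal sketch of the idea; the database name, file path and filegroup names below are only illustrative:

-- Create a filegroup and add a data file to it (repeat for FG_Indexes and
-- FG_Transactions, ideally placing the files on separate physical disks)
ALTER DATABASE SalesDb ADD FILEGROUP FG_Lookup;
ALTER DATABASE SalesDb ADD FILE
    (NAME = SalesDb_Lookup, FILENAME = 'E:\Data\SalesDb_Lookup.ndf', SIZE = 100MB)
    TO FILEGROUP FG_Lookup;

-- Lookup tables can then be created on their own filegroup
CREATE TABLE dbo.Country
(
    CountryId int PRIMARY KEY,
    CountryName nvarchar(100)
) ON FG_Lookup;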

ii. Make sure that your transactions are precise and short enough to avoid deadlocks. They should be developed so that a single stored procedure handles a complete set of inserts or updates.

iii. Database backups should be scheduled during minimum-usage hours; although SQL Server continues to accept transactions while a backup is running, client response times become slower during the backup.

iv. Make sure you use table partitioning when there is a large amount of data, because a huge table size degrades insert and update performance.

v. When developing the OLTP part of any application, make sure you use the minimum number of indexes; every extra index has a negative impact on write performance.

vi. Provision an optimum hardware configuration to handle the large number of concurrent users and the quick response times required by an OLTP system.

vii. Review the server's 'max degree of parallelism' setting. By default it is 0, which lets SQL Server use all available processors for parallel plans; tune it to suit your CPU count and workload, since an inappropriate setting can hurt server performance.
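
As a sketch, the current value can be reviewed and changed with sp_configure; the value 4 below is only illustrative, pick one that suits your hardware and workload:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- show the current value
EXEC sp_configure 'max degree of parallelism', 4;   -- illustrative value only
RECONFIGURE;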

viii. Increase the 'min memory per query' option to improve the performance of queries that use hashing or sorting operations, if your SQL Server has a lot of memory available and many queries run concurrently on the server.

ix. You can increase the 'max async IO' option if your SQL Server runs on a high-performance server with a high-speed, intelligent disk subsystem (note that this option only exists in SQL Server 7.0 and earlier; later versions manage asynchronous IO automatically).

x. With heavy usage, table indexes become fragmented and slow; it's better to run DBCC DBREINDEX with a proper fill factor regularly to keep performance optimal.
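
For example, a regular maintenance job could rebuild the indexes of a hypothetical Orders table with an 80 percent fill factor:

DBCC DBREINDEX ('dbo.Orders', '', 80);   -- '' = all indexes on the table, 80 = fill factor

-- On SQL Server 2005 and later the equivalent is:
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);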

xi. If you are using a database with the full recovery model and mirroring is configured, it is better to increase the 'recovery interval' option; less frequent checkpoints have a positive impact on database performance.

xii. If the database must serve a large number of concurrent requests, try increasing the 'max worker threads' option, and you can also set 'priority boost' to 1. Only raise the priority if the server is dedicated to SQL Server operations; otherwise it will have a negative impact on other applications because of SQL Server's elevated priority.

xiii. Too many table joins in frequent queries. Overuse of joins in an OLTP application results in longer-running queries and wasted system resources. Generally, frequent operations requiring 5 or more table joins should be avoided by redesigning the database.

xiv. Too many indexes on frequently updated (inclusive of inserts, updates and deletes) tables incur extra index maintenance overhead. Generally, OLTP database designs should keep the number of indexes to a functional minimum, again due to the high volumes of similar transactions combined with the cost of index maintenance.

xv. Big IOs such as table and range scans due to missing indexes. By definition, OLTP transactions should not require big IOs; any that do should be examined.

xvi. Unused indexes incur the cost of index maintenance for inserts, updates, and deletes without benefiting any users. Unused indexes should be eliminated. Any index that has been used (by select, update or delete operations) will appear in sys.dm_db_index_usage_stats. Thus, any defined index not included in this DMV has not been used since the last re-start of SQL Server.
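
A sketch of the kind of query that lists non-clustered indexes never used since the last restart (current database only):

SELECT o.name AS table_name, i.name AS index_name
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE o.type = 'U'           -- user tables only
  AND i.type = 2             -- non-clustered indexes
  AND s.index_id IS NULL;    -- never used since the last restart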

xvii. Signal waits > 25% of total waits. See sys.dm_os_wait_stats for Signal waits and Total waits. Signal waits measure the time spent in the runnable queue waiting for CPU. High signal waits indicate a CPU bottleneck.
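
For example, the signal-wait percentage can be checked with:

SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS decimal(5,2))
       AS signal_wait_pct
FROM sys.dm_os_wait_stats;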

xviii. Plan re-use < 90%. A query plan is used to execute a query. Plan re-use is desirable for OLTP workloads because re-creating the same plan (for similar or identical transactions) is a waste of CPU resources. Compare the SQL Server SQL Statistics counters batch requests/sec and SQL compilations/sec, and compute plan re-use as follows: plan re-use = (batch requests - SQL compilations) / batch requests. Special exception to the plan re-use rule: zero-cost plans are not cached (not re-used) in SQL 2005 SP2, so applications that use zero-cost plans will show lower plan re-use, but this is not a performance issue.
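
A rough sketch of the calculation from sys.dm_os_performance_counters; despite their "/sec" names these counters hold cumulative values since the last restart, so this gives an overall figure rather than a momentary one:

SELECT (br.cntr_value - sc.cntr_value) * 100.0 / br.cntr_value AS plan_reuse_pct
FROM (SELECT cntr_value FROM sys.dm_os_performance_counters
      WHERE counter_name = 'Batch Requests/sec') AS br
CROSS JOIN
     (SELECT cntr_value FROM sys.dm_os_performance_counters
      WHERE counter_name = 'SQL Compilations/sec') AS sc;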

xix. Parallel wait type cxpacket > 10% of total waits. Parallelism sacrifices CPU resources for speed of execution. Given the high volumes of OLTP, parallel queries usually reduce OLTP throughput and should be avoided. See sys.dm_os_wait_stats for wait statistics.

xx. Consistently low average page life expectancy. See the Page Life Expectancy counter in the Perfmon object SQL Server: Buffer Manager (it represents the average number of seconds a page stays in cache). For OLTP, an average page life expectancy of 300 means 5 minutes; anything less could indicate memory pressure, missing indexes, or a cache flush.
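
If you prefer to read the counter from inside SQL Server rather than Perfmon, a query along these lines works:

SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';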

xxi. Sudden big drop in page life expectancy. OLTP applications (e.g. small transactions) should have a steady (or slowly increasing) page life expectancy. See Perfmon object SQL Server Buffer Manager. Small OLTP transactions should not require a large memory grant.

xxii. Sudden drops or consistently low SQL cache hit ratio. OLTP applications (e.g. small transactions) should have a high cache hit ratio. Since OLTP transactions are small, there should not be big drops in SQL cache hit rates or consistently low cache hit rates (< 90%). Drops or a low cache hit ratio may indicate memory pressure or missing indexes.

xxiii. High average disk seconds per read. Normally it takes 4-8 ms to complete a read when there is no IO pressure. When the IO subsystem is under pressure due to high IO requests, the average time to complete a read increases, showing the effect of disk queues. Periodic higher values for disk seconds/read may be acceptable for many applications. For high-performance OLTP applications, sophisticated SAN subsystems provide greater IO scalability and resiliency in handling spikes of IO activity. Sustained high values for disk seconds/read (> 15 ms) do indicate a disk bottleneck.

xxiv. High average disk seconds per write. The throughput of high-volume OLTP applications depends on fast sequential transaction log writes. A transaction log write can be as fast as 1 ms (or less) in high-performance SAN environments. For many applications, a periodic spike in average disk seconds per write is acceptable considering the high cost of sophisticated SAN subsystems. However, sustained high values for average disk seconds/write are a reliable indicator of a disk bottleneck.

xxv. Big IOs such as table and range scans due to missing indexes.

xxvi. Index contention. Look for lock and latch waits in sys.dm_db_index_operational_stats. Compare with lock and latch requests.

xxvii. High average row lock or latch waits. The average row lock or latch wait is computed by dividing the lock and latch wait time in milliseconds (ms) by the number of lock and latch waits. The average lock wait (ms) computed from sys.dm_db_index_operational_stats represents the average wait time for each lock.
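
A sketch of computing these averages per index for the current database:

SELECT OBJECT_NAME(s.object_id) AS table_name, s.index_id,
       s.row_lock_wait_in_ms * 1.0 / NULLIF(s.row_lock_wait_count, 0)     AS avg_row_lock_wait_ms,
       s.page_latch_wait_in_ms * 1.0 / NULLIF(s.page_latch_wait_count, 0) AS avg_page_latch_wait_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
ORDER BY s.row_lock_wait_in_ms DESC;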

xxviii. Blocked process report shows long blocks. See sp_configure 'blocked process threshold' and the Profiler 'Blocked Process Report' under the Errors and Warnings event.
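
For example, an illustrative configuration that raises a blocked process report whenever a block lasts longer than 10 seconds:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 10;   -- threshold in seconds, illustrative value
RECONFIGURE;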

xxix. High number of deadlocks. See the Profiler 'Deadlock Graph' event under Locks to identify the statements involved in the deadlock.

xxx. High network latency coupled with an application that incurs many round trips to the database.

xxxi. Network bandwidth is used up. See the packets/sec and current bandwidth counters in the Network Interface object of Performance Monitor. For TCP/IP over standard 1500-byte Ethernet frames, the approximate bandwidth used is computed as packets/sec * 1500 * 8 / 1,000,000 Mbps; for example, 10,000 packets/sec is roughly 120 Mbps.

xxxii. Excessive fragmentation is problematic for big IO operations. The dynamic management table-valued function sys.dm_db_index_physical_stats returns the fragmentation percentage in the column avg_fragmentation_in_percent. Fragmentation should not exceed 25%. Reducing index fragmentation can benefit big range scans, which are common in data warehouse and reporting scenarios.
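
A sketch of listing fragmented indexes in the current database:

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 25
ORDER BY ps.avg_fragmentation_in_percent DESC;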

2. If you are developing a database for a decision support system or for analytical processing, please follow these general rules and guidelines.

a. It is better to develop data marts based on the trend-analysis requirements for the database.

b. Merge non-key attributes with key attributes, using views and tables, for optimum retrieval.

c. Use a star or snowflake schema to organize the data within the database.

d. Estimate the sizes of clustered and non-clustered indexes.

3. Now I will list general guidelines for database optimization.

a. If the database size is less than 200 MB, it's better to define file growth in megabytes; however, as the database gets larger, it is better to define growth as a percentage.
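
A minimal sketch, with hypothetical database and logical file names:

-- Fixed growth in megabytes
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_Data, FILEGROWTH = 50MB);

-- Growth as a percentage for a larger database
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_Data, FILEGROWTH = 10%);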

b. Do not store heavy objects (such as large binary data) in the database if they are not required for transactions, because they increase database size and really hurt performance.

c. If you have a really lengthy operation in a stored procedure, it is better to use SQL query short-circuiting; this improves query performance by avoiding unnecessary joins and WHERE clauses.
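
The exact pattern depends on the procedure, but as an illustrative sketch (hypothetical procedure and table names), check the cheap condition first and return early so the expensive joins never execute:

CREATE PROCEDURE dbo.GetCustomerOrders @CustomerId int
AS
BEGIN
    -- Short circuit: if the customer does not exist, return immediately
    IF NOT EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerId = @CustomerId)
        RETURN;

    -- The expensive joins only run when the check above passes
    SELECT o.OrderId, o.OrderDate, d.ProductId, d.Quantity
    FROM dbo.[Order] o
    JOIN dbo.OrderDetail d ON d.OrderId = o.OrderId
    WHERE o.CustomerId = @CustomerId;
END;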

d. Always use a WHERE clause; otherwise the query fetches every row and slows down performance. Even if there is a "search all" option, do not fetch all rows from the database.

e. Try to implement paging at the stored procedure level for large amounts of data; if paging is done on the front end, it will cause timeouts.
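
A sketch of stored-procedure-level paging using ROW_NUMBER (available from SQL Server 2005 onward; table and column names are hypothetical):

CREATE PROCEDURE dbo.GetOrdersPage @PageNumber int, @PageSize int
AS
BEGIN
    WITH NumberedOrders AS
    (
        SELECT OrderId, OrderDate, CustomerId,
               ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS RowNum
        FROM dbo.[Order]
    )
    SELECT OrderId, OrderDate, CustomerId
    FROM NumberedOrders
    WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                     AND @PageNumber * @PageSize;
END;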

f. Do not use char or varchar fields for comparison operations and joins; it is better to use integers wherever possible. Use the ISNULL function for parameters to avoid unwanted comparisons in the database.

g. Always prefer to list column names explicitly rather than using SELECT *, because wildcards are bad for performance.

h. Use views and stored procedures instead of heavy ad hoc queries. Do not use cursors, and try to avoid COUNT(*).

i. Since triggers are heavy operations, it is better to use constraints; they validate data before it is inserted.

j. It is preferable to use table variables rather than temp tables, but first check whether you are inside a transaction and whether that transaction is handled by MSDTC. Table variables are not rolled back with the transaction, so you may see unexpected data if you rely on the transaction being terminated properly.

k. Avoid HAVING and DISTINCT in your queries where possible, because they require reprocessing of the result set.

l. Prefer to use locking hints with queries; as mentioned in a previous document, use NOLOCK, ROWLOCK or UPDLOCK where appropriate.

m. Use SELECT statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows, or use the FAST number_rows query hint if you need the first 'number_rows' rows returned quickly.
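
For example (hypothetical table and columns):

-- Return only the first 50 rows
SELECT TOP (50) OrderId, OrderDate FROM dbo.[Order] ORDER BY OrderDate DESC;

-- The same effect with SET ROWCOUNT
SET ROWCOUNT 50;
SELECT OrderId, OrderDate FROM dbo.[Order] ORDER BY OrderDate DESC;
SET ROWCOUNT 0;   -- reset

-- Ask the optimizer to return the first 50 rows as fast as possible
SELECT OrderId, OrderDate FROM dbo.[Order] ORDER BY OrderDate DESC
OPTION (FAST 50);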

n. Prefer UNION ALL to UNION in queries when duplicates do not need to be removed, since UNION performs an extra duplicate-removal step.

o. Do not give SQL Server query optimizer hints if you are not sure about them; it is better to leave that decision to SQL Server.

p. Don't use TOP 100 PERCENT in your SQL statements (a pattern commonly combined with ORDER BY inside views and derived tables); it adds no value, SQL Server ignores the accompanying ORDER BY in that context, and the clause can have a really negative effect on the query.

q. Try to use the CROSS APPLY operator (a newer feature, available from SQL Server 2005) rather than cursors.
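
A sketch of replacing a row-by-row cursor with CROSS APPLY, here returning the three most recent orders per customer (table names are hypothetical):

SELECT c.CustomerId, c.Name, o.OrderId, o.OrderDate
FROM dbo.Customer c
CROSS APPLY
(
    SELECT TOP (3) OrderId, OrderDate
    FROM dbo.[Order]
    WHERE CustomerId = c.CustomerId
    ORDER BY OrderDate DESC
) AS o;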
