Peter Jones
Group: Forum Members
Posts: 386, Visits: 2.1K
Hi Trent,

Thanks for the extra info. Hopefully we never have to get down to that level, but it is very interesting that changing the data type made such a humongous improvement.

Cheers, Peter
Trent Taylor
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
Thanks for the info...it's always good to hear how other developers solve their issues. In our case, we have some extremely complex queries that run across 8+ tables and get extremely nested while calculating something called "pending." This basically determines how much a patient owes and how much an insurance owes...but it has to take into account all of their tran history, insurance plans (primary, secondary, tertiary, etc.), deductibles, write-offs, bad debt, and about 50 other things (not kidding on the 50). We tried using dates, indexes, and even tried a number of conversion routines...and once we turned this into ticks with an index versus dates with an index (that was the only change), the query went from 4 1/2 minutes for a single patient with 6,000 trans (don't ask me why they have 6,000 trans for one patient...we just crunch the numbers) to 30 ms. So we started doing a little digging and learned that BETWEEN and ORs are bad words with SQL Server and dates when dealing with any type of complex query. This proved true again just the other day...I had a query running in 4 seconds (way too slow) once we started testing on a large database...changed the dates to ticks...1 ms...crazy.

One other thing on this: we have to be able to have extremely fast queries run on SQL Server Express with single-core processors, as we have a lot of users with 2 GB+ databases that will still use SQL Server Express on existing equipment in the field...we call this zero impact. It may not quite be zero, but we have to get as close to that as possible for existing users. Then there are the much larger sites that will have a more complex server setup and a full version of SQL Server...so we have to work in a lot of different environments, and absolute optimization is the only way we can do this.
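The "ticks" Trent describes are presumably .NET-style DateTime.Ticks: 100-nanosecond intervals counted from January 1 of year 1, which fit comfortably in a BIGINT column. A rough sketch of that conversion in Python (the function and variable names here are mine, not from his code):

```python
from datetime import datetime, timedelta

EPOCH = datetime(1, 1, 1)   # .NET ticks count from January 1, year 1
TICKS_PER_MICROSECOND = 10  # 1 tick = 100 nanoseconds

def to_ticks(dt: datetime) -> int:
    """Convert a datetime to an integer tick count (fits in a BIGINT)."""
    delta = dt - EPOCH
    micros = (delta.days * 86_400 + delta.seconds) * 1_000_000 + delta.microseconds
    return micros * TICKS_PER_MICROSECOND

def from_ticks(ticks: int) -> datetime:
    """Convert a tick count back to a datetime (microsecond precision)."""
    return EPOCH + timedelta(microseconds=ticks // TICKS_PER_MICROSECOND)
```

Stored this way, a date-range filter becomes a plain integer BETWEEN on a BIGINT column, which is what Trent credits for the 4.5-minute-to-30-ms improvement.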
Peter Jones
Group: Forum Members
Posts: 386, Visits: 2.1K
Hi Guys,

While I've never tried converting a date to a bigint, I thought I would just let you have my specific experience in that area. Most of our reports are date-range based and use transactional data as their source. For this reason we have a clustered index on the 'date created' column in the transaction files. I've just connected to one of our sites where the main transaction file is 11+ million rows. I opened a query window in SQL Server and entered the following (date randomly selected):

Select HIDDateTime From dbo.tblHIDHides
Where HIDDateTime Between Convert(DateTime, '2006-05-04', 102) And Convert(DateTime, '2006-05-05', 102)

So, no stored procedure, no caching from previous queries. The result: 6,501 rows returned in under 1 second. The server is a low-end Xeon database server with just 2 GB of memory running Windows 2003 Standard, and it wasn't busy when I ran the test.

Cheers, Peter
Trent Taylor
Group: StrataFrame Developers
Posts: 6.6K, Visits: 7K
All of Peter's comments were excellent...I thought I would toss in a few more things as well:

- You can tell a query which index to use with the WITH(INDEX(...)) table hint. Sometimes SQL needs a little help...we ran into this the other day. It would look like this:
SELECT * FROM Customers WITH(INDEX(IX_MyIndex))
- The framework is not going to change anything in regards to execution speed and performance...so if you get it down to 1 second in SQL Server Management Studio executing the sproc, this will not change on the framework side unless you have some type of connection issue or something else in the mix.
- DateTime columns are awful about slowing down queries when they appear in the WHERE clause...one way to get around this is to store dates in a BigInt column as ticks. We then create a custom property on the BO that wraps the value as a DateTime, so while using the BOs inside your app you interact with a DateTime...but it is stored as ticks on the SQL Server side...and this will drastically improve performance, by like a ton, when you are testing with <, >, or BETWEEN.
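A minimal sketch of that wrapper idea (the class and property names here are hypothetical, and StrataFrame BOs are .NET, so this Python version only illustrates the pattern): callers read and write a normal DateTime while the backing field holds the integer tick value that would live in the BIGINT column.

```python
from datetime import datetime, timedelta

EPOCH = datetime(1, 1, 1)  # .NET tick epoch (January 1, year 1)

class TranBO:
    """Hypothetical business object: the backing field is an integer
    tick count, but the public property exposes datetimes."""

    def __init__(self) -> None:
        self._tran_date_ticks = 0  # what the BIGINT column would hold

    @property
    def tran_date(self) -> datetime:
        # Expose the stored ticks as a regular datetime
        return EPOCH + timedelta(microseconds=self._tran_date_ticks // 10)

    @tran_date.setter
    def tran_date(self, value: datetime) -> None:
        # Store the datetime as 100-nanosecond ticks
        delta = value - EPOCH
        micros = (delta.days * 86_400 + delta.seconds) * 1_000_000 + delta.microseconds
        self._tran_date_ticks = micros * 10
```

The app works in datetimes throughout; only the persistence layer ever sees the tick integer.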
Peter Jones
Group: Forum Members
Posts: 386, Visits: 2.1K
Hi Bill,

A few random comments that may help:

1) Be careful with data types in a WHERE clause. I see you have what look like date parameters defined as varchar. If the database column is a real date and you are comparing it with a varchar, then SQL Server will not use any index you may have on that column.

2) Big time differences like this will invariably mean that one way is using indexes and the other is doing full table scans. The Profiler will show this up.

3) I notice you have:

@itemcode varchar(30)
IF @itemcode = '' OR @itemcode IS NULL BEGIN SET @itemcode = '%%' END
WHERE Items.Code LIKE @itemcode

I think a more efficient approach would be to sort out your parameter, have a default, and only pass in data if you have specific selection criteria. Then you could have:

@itemcode varchar(30) = NULL
WHERE ((@itemcode IS NULL) OR (Items.Code = @itemcode))

Cheers, Peter
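Peter's single-query pattern — let a NULL parameter collapse the filter instead of defaulting to a match-everything LIKE '%%' — can be sketched with SQLite standing in for SQL Server (the table and column names follow the thread; the helper function is mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Code TEXT, Class INTEGER)")
conn.executemany("INSERT INTO Items VALUES (?, ?)",
                 [("ABC", 1), ("DEF", 1), ("GHI", 2)])

def fetch_items(itemcode=None):
    # When itemcode is NULL the first predicate is true for every row,
    # so no LIKE '%%' default is needed; when it is set, the optimizer
    # sees a plain, index-friendly equality instead of a LIKE.
    sql = ("SELECT Code FROM Items "
           "WHERE (? IS NULL OR Items.Code = ?) AND Items.Class = 1")
    return conn.execute(sql, (itemcode, itemcode)).fetchall()
```

Calling `fetch_items()` returns every Class-1 row, while `fetch_items("ABC")` narrows to the one code, all through the same statement.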
Greg McGuffey
Group: Forum Members
Posts: 2K, Visits: 6.6K
I'm not quite sure why LIKE isn't...er...liked by SQL Server, but I do know enough to include it in my list of things to check when a query is slow. Glad you got it working (faster).
Bill Cunnien
Group: Forum Members
Posts: 785, Visits: 3.6K
The stored procedure does not like the LIKE. Who woulda thunk it?!?!?! I have removed the LIKE and have followed another approach:

IF @itemcode = ''
BEGIN
    -- run the script without the Items.Code filter
END
ELSE
BEGIN
    -- run the script with Items.Code = 'MyCode'
END
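Bill's two-branch approach can be sketched the same way, with SQLite standing in for SQL Server (names illustrative): build one statement without the filter and one with a plain equality, and pick between them at call time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Code TEXT)")
conn.executemany("INSERT INTO Items VALUES (?)", [("ABC",), ("DEF",)])

def fetch(itemcode=""):
    if itemcode == "":
        # Empty code: run the script without the Items.Code filter
        sql, params = "SELECT Code FROM Items", ()
    else:
        # Specific code: a plain equality the index can satisfy
        sql, params = "SELECT Code FROM Items WHERE Code = ?", (itemcode,)
    return conn.execute(sql, params).fetchall()
```

Both branches stay parameterized, so neither needs LIKE and neither concatenates user input into the SQL text.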
Thanks, again, Greg!! Bill
Bill Cunnien
Group: Forum Members
Posts: 785, Visits: 3.6K
"I'd see what replacing the LIKE in your where clause with an equals does . . . I'm assuming that Items.Code is indexed. . . . Also, is there just one code in the field?"

I'll try the sproc without the LIKE...I suppose an IF block may work better. The code column is indexed. Only one code would be passed if the user wanted the list limited. Thanks for your attention on this, Greg. Much appreciated.

Bill
Greg McGuffey
Group: Forum Members
Posts: 2K, Visits: 6.6K
OK, that is different. I understand why you are baffled. I'd see what replacing the LIKE in your WHERE clause with an equals does, leaving @itemcode as a varchar(30). I.e.:

WHERE
-- Items.Code LIKE @itemcode (original code)
Items.Code = @itemcode -- Try this new code
AND Items.Class = 1
AND Items.DefaultDiv = @div
AND Items.inactive = 0

Now, you might need the LIKE, but at least this might help show where the problem lies. I'm assuming that Items.Code is indexed. Also, is there just one code in the field? I.e. it isn't a list of codes or anything weird like that, is it?
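Greg's hunch can be made concrete with SQLite's plan output standing in for SQL Server's (illustrative only — the two engines optimize LIKE differently): with an index on Items.Code, an equality compiles to an index search, while a parameterized LIKE generally cannot use the B-tree the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Code TEXT)")
conn.execute("CREATE INDEX IX_Items_Code ON Items(Code)")

def plan(where):
    # Ask the query planner how it would execute the statement
    rows = conn.execute(
        f"EXPLAIN QUERY PLAN SELECT Code FROM Items WHERE {where}", ("A",))
    return " | ".join(r[-1] for r in rows)

eq_plan = plan("Code = ?")       # an index SEARCH
like_plan = plan("Code LIKE ?")  # typically a full scan instead
print(eq_plan)
print(like_plan)
```

The equality plan names the index; the LIKE plan, with an unknown pattern at compile time, usually does not.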
Bill Cunnien
Group: Forum Members
Posts: 785, Visits: 3.6K
If I remove the first parameter (the varchar(30)), then the stored procedure runs in under 4 seconds every time. If I reintroduce the parameter, then it goes right back to the 10-minute mark. The third parameter is a varchar(10), so I do not think the type is the problem here. Still investigating.