
I am having a weird issue when running a query that joins a couple of fairly large tables (one about 2 million rows, the other about 21 million). The query does not complete in a consistent amount of time. If I run it on a new connection in SSMS it takes about 45 seconds to complete. If I hit the execute button again after it completes, it takes almost 4 minutes. If I then open a new query window (connection) and execute the query, it takes about 45 seconds again. These queries are being run from the server itself and I can reproduce this consistently.

Any help on what is going on and how to fix it would be appreciated.

Codingrunr
  • Suggest you look into the difference between the execution plans when the query runs fast and when it runs slow. Are stats and indexes up to date? – Kin Shah May 07 '14 at 17:26
  • Have you looked at execution plans for the "fast" and "slow" versions? Are they exactly the same? If not, therein lieth your answer. Check this answer for some real gold: http://dba.stackexchange.com/a/60181/10832 – Hannah Vernon May 07 '14 at 17:27
  • @Kin Yes, the indexes and stats are up to date. I will compare the query plans. Thanks for the ideas. – Codingrunr May 08 '14 at 20:21
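
The comments boil down to two checks: compare the cached plans for a fast run and a slow run, and confirm that the statistics on the joined tables are current. A minimal T-SQL sketch for both, assuming placeholder values for the query-text fragment and the table name:

-- Pull the cached plan(s) and last-run figures for the query.
-- Replace the LIKE pattern with a distinctive fragment of the actual query text.
SELECT TOP (10)
    qs.creation_time,
    qs.last_execution_time,
    qs.execution_count,
    qs.last_elapsed_time / 1000 AS last_elapsed_ms,
    qp.query_plan          -- click this in SSMS to open the graphical plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%distinctive query fragment%'
ORDER BY qs.last_execution_time DESC;

-- Check when statistics were last updated on one of the joined tables
-- (dbo.BigTable is a placeholder; repeat for the other table).
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS stats_last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.BigTable');

If the fast and slow runs show different plans, that difference is the place to dig.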

1 Answer


With that many rows, moving the results from the server to the client will be a significant part of the elapsed time. Eliminate variability in that part by wrapping your query in something like:

select
    count(*)
from (
    {your query here}
) as xx

This way the network transfer becomes a minimal part of the elapsed time and you can rule it out as a factor in your investigation.
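
As a usage sketch (the table and column names below are placeholders, not from the question), the wrapper can also be combined with SET STATISTICS TIME so SSMS reports server-side CPU and elapsed time for each run in the Messages tab:

SET STATISTICS TIME ON;

SELECT COUNT(*)
FROM (
    -- substitute the real join query between these parentheses
    SELECT t1.id
    FROM dbo.BigTable1 AS t1
    JOIN dbo.BigTable2 AS t2
        ON t2.big1_id = t1.id
) AS xx;

SET STATISTICS TIME OFF;

Comparing those figures for a fast run and a slow run shows whether the difference really happens on the server or in getting the rows to the client.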

Michael Green