Eeek! Problems comparing a database with 2 million rows with its backup

Hmmm. Trying the simple exercise of comparing a database that has a table with two million rows in it (and not much else) with its SQB backup seems to cause dire problems. The first time I tried it, the program just disappeared. The second time, it ground away for an hour, gradually grenading all the other running applications, until it gracefully returned an error (it was an out-of-memory error, but I couldn't get a screen dump of it as it had taken all the memory!). The workstation is a dual-core AMD with 2 gigs of memory.
The database isn't really that big - nothing like the size of a commercial database.
AndrewRMClarke

Comments

  • Robert C
    Morning,

    That's moderately concerning - we've certainly tested it on larger tables than that!

    Were you using a clustered or non-clustered index as the comparison key on that table? Non-clustered indexes are generally a lot more painful (and slower) to compare than clustered indexes, and in an effort to keep performance decent, we do use some quite aggressive memory caching of such tables.
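
    If you're not sure which kind you've got, here's one quick way to check from Python - a minimal sketch, assuming pyodbc is installed, and with a hypothetical connection string and table name that you'd swap for your own:

        import pyodbc

        # Hypothetical connection details - replace with your own server and database.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes"
        )
        cursor = conn.cursor()
        # sys.indexes reports CLUSTERED / NONCLUSTERED / HEAP for everything on the table.
        cursor.execute(
            "SELECT name, type_desc FROM sys.indexes "
            "WHERE object_id = OBJECT_ID('dbo.MyBigTable')"
        )
        for name, type_desc in cursor.fetchall():
            print(name, type_desc)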

    This is based on the amount of free physical RAM reported by Windows, but the Alpha / RC1 seems to have used a bit more than I was expecting - in particular if you're comparing two backups to each other (this doubles the memory requirement). I've pushed the maximum down considerably for the final release, so hopefully that should solve the problem...
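
    For the curious, you can read the same sort of free-physical-RAM figure yourself via the Win32 GlobalMemoryStatusEx call - a minimal, purely illustrative Python ctypes sketch:

        import ctypes
        from ctypes import wintypes

        # Documented Win32 MEMORYSTATUSEX structure (used by GlobalMemoryStatusEx).
        class MEMORYSTATUSEX(ctypes.Structure):
            _fields_ = [
                ("dwLength", wintypes.DWORD),
                ("dwMemoryLoad", wintypes.DWORD),
                ("ullTotalPhys", ctypes.c_uint64),
                ("ullAvailPhys", ctypes.c_uint64),
                ("ullTotalPageFile", ctypes.c_uint64),
                ("ullAvailPageFile", ctypes.c_uint64),
                ("ullTotalVirtual", ctypes.c_uint64),
                ("ullAvailVirtual", ctypes.c_uint64),
                ("ullAvailExtendedVirtual", ctypes.c_uint64),
            ]

        status = MEMORYSTATUSEX()
        status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
        ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
        print(status.ullAvailPhys // (1024 * 1024), "MB physical RAM free")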

    If you were using a clustered index as the comparison key, and still getting problems, I'm probably more concerned :-s.

    Thanks for the report,
    Robert
  • AndrewRMClarke
    Slightly scary. I'll try as you suggest: use a different workstation, check that the index is clustered, and see whether I get the same results.
  • Robert C
    Another thing to try if that's still causing you problems is to set a sneaky registry setting:

    HKEY_CURRENT_USER\Software\Red Gate\SQL Backup Reader 1\PageCacheLimit

    It should be a string value (REG_SZ); set it to something like 1000. This'll limit you to somewhere in the region of 150MB of cache for non-clustered indexes, which will probably slow the compare down, but should sort out the OOME.
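
    If you'd rather script that than edit it by hand, here's a minimal sketch using Python's standard winreg module - it just creates the value described above:

        import winreg

        # Create (or open) the key and set PageCacheLimit as a REG_SZ string value.
        key = winreg.CreateKey(
            winreg.HKEY_CURRENT_USER,
            r"Software\Red Gate\SQL Backup Reader 1",
        )
        winreg.SetValueEx(key, "PageCacheLimit", 0, winreg.REG_SZ, "1000")
        winreg.CloseKey(key)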

    I've just compared one of our samples with 1.7M rows and an NCI, and that stayed under 260MB throughout (the app started at 100MB before I hit compare, then about 160MB went on during the compare).

    Cheers,
    Robert
