Comments
SQL Storage Compress is a tool from Redgate which lowers the disk footprint of a database (different from the compression available in SQL Server 2008 Enterprise). In our case, by an average of 82%.
I have some databases which require 730 GB in native form but, under this tool (and its preferred extensions, mdfx and ndfx), will fit in 155 GB.
The tool requires a background service (HyperBacSrv.exe) to be installed between sqlservr.exe and the OS. Read and write operations are intercepted and translated for the compressed file using index files (extensions .index, .index2).
I don't want to trust it in production yet, but our QA and TEST departments are reporting no issues. The GUI estimates I can save over 3 TB on our QA server (which will likely be used for additional restores).
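If it helps picture the conversion: as far as I can tell the service keys off the file extension, so a native restore that MOVEs the files onto the compressed extensions should come out compressed. A rough sketch only; the database name, paths, and logical file names are made up, and I'm assuming .ldfx for the log file:

-- Restore a native backup into compressed files by targeting the
-- extensions the HyperBac service intercepts (names are examples):
RESTORE DATABASE [CompressTest]
FROM DISK = N'L:\Backups\CompressTest_FULL.bak'
WITH MOVE N'CompressTest_Data' TO N'D:\Data\CompressTest.mdfx',  -- compressed data file
     MOVE N'CompressTest_Log' TO N'E:\Logs\CompressTest.ldfx',   -- assumed log extension
     REPLACE;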
Here is a link to the main page: SQL Storage Compress.
My research is really twofold.
We are migrating production databases to a new SAN (and your note about UNPRESENT may prove helpful), and I am trying to convert a series of databases to SQL Storage Compress with minimal downtime. If you are not familiar with SQL Storage Compress, I recommend looking at it for TEST and QA environments with large databases.
I think I will script this using a native DIFF backup. In production we run nightly FULLs and 15-minute logs. If the DIFF is too time-consuming, I will revisit my options.
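Roughly, the cutover I have in mind looks like this (a sketch only; the database name and paths are placeholders, and it assumes last night's FULL has already been copied to the target server):

-- Taken on the source server at migration time:
BACKUP DATABASE [BigDb] TO DISK = N'L:\Mig\BigDb_DIFF.bak' WITH DIFFERENTIAL;

-- On the target server: restore the FULL without recovery,
-- then bring the database online with the differential.
RESTORE DATABASE [BigDb] FROM DISK = N'L:\Mig\BigDb_FULL.bak'
WITH NORECOVERY,
     MOVE N'BigDb_Data' TO N'D:\Data\BigDb.mdfx',  -- compressed target files
     MOVE N'BigDb_Log' TO N'E:\Logs\BigDb.ldfx';
RESTORE DATABASE [BigDb] FROM DISK = N'L:\Mig\BigDb_DIFF.bak' WITH RECOVERY;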
Thanks for the chatter, Chris.
This database is the only one that will migrate. The others need to remain online.
Since the connection remains open, I still have the option of running a native DIFF, but I prefer your utility.
I have a number of these projects to complete and would like to automate as much as possible. Previous solutions required detach/rename/reattach/FULL backup/restore, which is too time-consuming.
If I can code a solution from one query window, that would be preferred.
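For the single-query-window idea, I'm thinking something along these lines, using SQL Backup's extended stored procedure (the database name and path are examples only):

-- Drive SQL Backup from T-SQL rather than the GUI or command prompt:
EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [CompressTest] TO DISK = [L:\CompressRestore\CompressTest_FULL.sqb]"';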
Chris,
Both servers are running the same version: SQL 2005 SP4.
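In case it helps anyone reading along, a quick way to confirm this on each instance:

-- Returns e.g. 9.00.5000.00 and SP4 on both servers:
SELECT SERVERPROPERTY('ProductVersion') AS Version,
       SERVERPROPERTY('ProductLevel') AS ServicePack;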
I adjusted my role to sysadmin, but the error continues. I also tried using RUN AS to modify the credentials. No dice.
I quickly toggled the database OFFLINE/ONLINE in case some phantom connection was in the way. Still no go.
When I look in Activity Monitor with no filter, there are no processes for my database.
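In case anyone wants to double-check beyond Activity Monitor, queries like these (the database name is ours; both run on SQL 2005) should show any spid still attached, and whether the database was left in single-user mode, which is what error 924 suggests:

-- Any sessions still connected to the database?
SELECT spid, loginame, program_name
FROM master..sysprocesses
WHERE dbid = DB_ID('CompressTest');

-- Was the database left in SINGLE_USER or RESTRICTED_USER mode?
SELECT name, user_access_desc
FROM sys.databases
WHERE name = 'CompressTest';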
I would note that there are two errors returned inside the command prompt.
The exact message is this:
Backing up CompressTest (full database) to:
L:\CompressRestore20120222\CompressTest_FULL.sqb
Error 880: BACKUP DATABASE permission denied in database: (CompressTest)
SQL error 924: Database 'CompressTest' is already open and can only have one user at a time.
SQL Backup exit code: 880
Last SQL error code: 924
I think I got it.
I was adjusting the database access from a query window and then changing my connection to Master.
USE [Master]
I thought this freed the single connection, but it does not. I had to 'DISCONNECT' my query session manually.
Now I am looking for a way to achieve this result without leaving the query window.
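One pattern that might do it, though I have not proven it in this exact scenario: roll back every other connection, including a lingering pooled one, while keeping the database in single-user mode for the migration, then hand it back when done (the database name is a placeholder):

USE [master];
-- Kick any remaining connection and hold single-user access:
ALTER DATABASE [CompressTest] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... run the migration/backup work here ...
-- Then open the database back up:
ALTER DATABASE [CompressTest] SET MULTI_USER;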
We have extra variables in our environment for month end, so I will likely have to chalk it up to a fluke. The event log was not very helpful, and I can't find a reference to the backup log file. If it is an error in code (yours or ours), I am sure it will turn up again.
I'd at least prefer to have this on record in case others are searching for the event.
I have not resolved the issue. I have only created a second thread in another forum. You are welcome to keep an eye on both items.
The second forum is here.
I found that this particular server had missed a SQL Backup update. It was running 6.3.
I updated to 6.5.1.9 and updated the server components as well.
On the next restore attempt, I adjusted some file locations in the 'move to' segment.
6 hours later... success!
Now on to load testing.
(Note: I found the server components were outdated when support requested a log file. I opened it and saw a 6.3.?.?, so some credit goes to support.)
I was able to execute
DBCC CHECKDB ('REPORTING01') WITH NO_INFOMSGS
which returned only "Command(s) completed successfully."
I have had great success with smaller databases (under 600 GB native). They average 82% compression, meaning a 500 GB database now has a footprint of 90 GB. I think I will inquire on the Compression forum regarding large restores, now that I don't believe the backup to be at fault.