Comments
This is still an issue in the 6.4 release. The problem is that sometimes backups in the source database don't get recorded in the "Server Cache" aka the SQL CE database. This is usually during times of heavy activity. Even excluding the SQL CE backups from the file system backup doesn't prevent an occasional failure to record a SQL Backup transaction log in the CE database. When it fails to record in CE, the file doesn't get shipped and log shipping breaks. I then need to manually identify the file that gets missed and copy it over to repair log shipping.
I am considering coding a workaround to this problem. The workaround would be along the lines of putting in a SQL Agent job step that compares the output of a sqbdata query against the msdb..backup* tables. If a log backup is recorded in msdb but not in SQL CE, then I would perform sqbdata inserts of the missing data.
My question is: for the sqbdata insert, would it be sufficient to just insert a row into the backupfiles_copylist table? Or would I need to perform inserts into the other tables as well in order to get the missed backup file onto the copy queue? Any guidance along those lines would be appreciated. Thanks.
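The comparison step of that workaround can be sketched as pure logic. This is a Python illustration only, under stated assumptions: the real job step would be T-SQL, the `filename` column and the exact `sqbdata` INSERT syntax are guesses, and only the `backupfiles_copylist` table name comes from this thread.

```python
# Sketch: find log backups recorded in msdb but missing from the SQL CE
# cache, and build hypothetical repair statements for them.
# NOTE: the column list and sqbdata INSERT form below are assumptions.

def find_missing_backups(msdb_files, cache_files):
    """Return backup files present in msdb history but absent from the cache."""
    return sorted(set(msdb_files) - set(cache_files))

def build_repair_statements(missing_files):
    """Build hypothetical sqbdata INSERTs, one per missed backup file."""
    return [
        "EXEC master..sqbdata "
        f"'INSERT INTO backupfiles_copylist (filename) VALUES (''{f}'')'"
        for f in missing_files
    ]

if __name__ == "__main__":
    msdb = ["db_log_001.sqb", "db_log_002.sqb", "db_log_003.sqb"]
    cache = ["db_log_001.sqb", "db_log_003.sqb"]
    for stmt in build_repair_statements(find_missing_backups(msdb, cache)):
        print(stmt)
```

Whether a single `backupfiles_copylist` row is enough to re-queue the file is exactly the open question above, so treat the generated statements as a starting point, not a fix.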
The cause has already been definitively identified. It was the file system backup that was running, not AV. The specific data.sdf file in question was backed up 1 minute before the error was logged. Prior database backups, as well as subsequent backups, were logged into the SQL CE tables just fine. It was just an unfortunate matter of timing.
I am looking for a solution more elegant than excluding the Data directory from the file backups. If you don't have a better solution, then I would consider this a defect that needs to be addressed by Red Gate.
Let me propose two potential solutions:
1. (Simple) If it fails to open the local cache file, then it needs to retry.
2. (Better) Migrate the SQL CE tables to the SQL Server instance being backed up. This would not only resolve the issue in this thread but would also resolve ongoing corruption problems.
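Option 1 above amounts to retrying with backoff when the cache file is briefly locked by the file system backup. A minimal sketch, in Python for illustration only (SQL Backup itself would wrap its SQL CE connection open, not a plain file handle, and the injectable `opener` hook is hypothetical):

```python
import time

def open_with_retry(path, attempts=5, delay=1.0, opener=open):
    """Try to open a file that may be briefly locked, backing off between tries.

    'opener' is injectable so the retry logic can be tested without a real
    locked file; in the product this would be the cache-open call instead.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return opener(path)
        except OSError as err:
            last_err = err
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    raise last_err
```

With a one-minute backup window, even a few retries spread over several seconds would likely have covered the timing collision described above.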
While appending the GUID doesn't appear to be working, I have been able to find an acceptable work around.
I have navigated to the Server Options window within the GUI and have appended ...\<database>\ to the Log file folder: under SQL Backup log files.
That seems to bring the behavior back to the way it worked prior to 6.x.
I too have been getting this warning on a server that has multiple log restore jobs since the upgrade. I think the previous version created a separate log for each database, whereas the 6.x version does not. Therefore, if two jobs try to write to the same log file, this warning can happen.
It turns out that the SQBCoreService.exe files were still at version 6, though the other binary files were at v5. After manually stopping the services and deleting the directory containing these files, I was once again able to reinstall, and this time activation worked correctly.
This happens to me frequently as well. This problem could easily be avoided if the SQL Backup historical data were stored in a real SQL Server database instead of CE.
The repair utility is SQL Server Management Studio. You need SSMS installed on the same physical server that holds the Data.sdf file. Then connect to it, specifying SQL Server Compact Edition as the "Server Type". Browse to the data.sdf file, and SSMS will recognize that it is corrupt and give you the option to repair it.
Of course you can also delete or rename the file, as you did, and a new empty file will be created.
RBA wrote:
Hi DonMan,
Roughly how often does this happen? Also, does this usually happen on, or after certain SQL Backup operations (such as full/diff/log backups, log copying, or restores).
Thanks,
I would estimate that it happens on a few random instances every week. Generally, the corruption has occurred for me when a cluster failover (generally planned) happens.
Also note that the frequency of this error increased with the upgrade to 6.x.
In my case the cluster is PolyServe not MSCS and I have since recently implemented scripts to try to keep the SQLBackupAgent_InstanceName service active only on the active PolyServe node. Time will tell if the scripts implemented have fixed the problem for me.
Nevertheless, I would still request that a future release not use SQL Server Compact Edition to store backup history. We already have a SQL Server being backed up. So the existing SQL Server could also be leveraged to store the Red Gate SQL Backup historical data and it wouldn't be so fragile.
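The node-affinity scripts mentioned above boil down to one decision: run SQLBackupAgent_InstanceName only on the active cluster node. A minimal sketch of that decision, in Python for illustration; how the active PolyServe node is discovered is cluster-specific and left as an assumption here:

```python
import socket

def service_should_run(active_node, this_node=None):
    """Decide whether SQLBackupAgent_<instance> should run on this host.

    'active_node' would come from the cluster software (PolyServe in this
    thread); querying it is not shown. Comparison is case-insensitive since
    Windows host names are.
    """
    this_node = this_node or socket.gethostname()
    return this_node.lower() == active_node.lower()

if __name__ == "__main__":
    # A wrapper script would start or stop the service based on this flag.
    print(service_should_run("NODE1"))
```

A scheduled task calling this check on each node, then issuing a service start or stop accordingly, is one way such scripts could be structured.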
I may need to go the route of generating start/stop scripts for instances, but this problem certainly became significant with 6.x and wasn't much of an issue with 5.4. Of course, if the data were recorded in the msdb database instead of CE, that problem would completely go away.
PolyServe does have the option to replace the SQL Browser with the IA Browser; it's under the advanced options of Instance Aliasing.
Well, I still don't like the concept of a local cache, but at least the 6.0 GUI is more usable than the 5.4 version. With lots of instances registered, the performance still leaves a lot to be desired. I would recommend that all data be stored in the SQL Server instance database and that the client just query data as needed. That's how it's supposed to work. The idea of pulling all the data down locally is not in alignment with the client-server concept and causes a huge amount of unnecessary overhead.
Perhaps I jumped the gun a bit on my last posting. The poor performance was against a SQL Server instance with the SQL Backup 5.4 components installed, using the 6.0 GUI, not the 6.0 backend components. I upgraded a DEV instance to the 6.0 backend components and the activity history came up almost instantly. A significant improvement, as long as the 6.0 components are installed on the backend.