howarthcd
Hi Nigel,

Thanks for the update - this does sound promising. We're looking forward to seeing this fixed in a future release.

Thanks,
Chris
I ended up raising this directly with Red-Gate as I was looking for a rapid response. Their replies were as follows:

Red-Gate's response: This shouldn't happen, as a SQL Backup failure to write the file should be reported back to SQL Server so that the whole process fails. It may be that the last data block was retrieved from the VDI and SQL Server then truncated the log, with the failure occurring while writing the last few blocks, or something similar. The outcome is the same as a corrupted log backup file, which is always a risk when backing up straight to a network share; transferring more data by using native backup will increase this risk. I am communicating with a developer to see if he can track down an occasion where SQL Backup reports to SQL Server that it is complete before the final data blocks have been written.

<Reply sent by me included the failed backup's log file>

Red-Gate's response: Thank you for the log file. After discussing this with the development team, it seems there is one rare occasion where this can occur. There can be a delay between SQL Backup receiving the data, compressing it and writing it to disk. This delay arises because the compressed data cannot be written to disk before it reaches a specific size, for efficiency and OS-requirement reasons. SQL Server does not inform SQL Backup that it is sending the last block, so when SQL Backup acknowledges receipt of that block, SQL Server assumes all data has been received and written, and performs the truncation. If the last accumulated block of compressed data then has a problem, it renders the whole backup file incomplete. This is something we are hoping to improve, but it may take a lot of work with regard to the logic and finding a way to know which block is the last from SQL Server.

So it seems that the issue is caused by an inadequacy inherent in SQL Server's VDI, and that most, if not all, third-party compression products that rely on the VDI (there is at least one that doesn't) may be open to the same issue. To help mitigate the risk, I think we may move to SQL Server 2008's native compression feature when we upgrade. This is unfortunate for Red-Gate, as their product is otherwise superb, but we can't afford to risk this happening again.

Chris
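As a sketch of the mitigation described above (database name and backup path are placeholders, not from the thread), a native compressed log backup in SQL Server 2008, followed by a verification pass to catch a corrupted file early:

```sql
-- Native compressed log backup (SQL Server 2008+).
-- MyDatabase and the UNC path are hypothetical examples.
BACKUP LOG MyDatabase
TO DISK = N'\\backupshare\Logs\MyDatabase_log.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- Verify the written file before relying on it for recovery.
-- WITH CHECKSUM only validates checksums if the backup was taken with them.
RESTORE VERIFYONLY
FROM DISK = N'\\backupshare\Logs\MyDatabase_log.bak'
WITH CHECKSUM;
```

A routine VERIFYONLY step after each log backup won't prevent the truncation-before-write race, but it does surface an incomplete file immediately rather than at restore time.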
Hi,

Has there been any progress on fixing this bug? It's still affecting the latest version of SQL Backup and is preventing us from upgrading.

Thanks,
Chris
In my opinion, everything you've said so far points to a problem with VAS pressure and not to anything 'external' to SQL Server, such as disk fragmentation, page files, etc. The reason I asked about CLR objects is that when the CLR initialises it consumes about 120-140MB of VAS, which is consistent with the out-of-memory error that you experienced when experimenting with CLR objects. What is the output of 'SELECT @@VERSION'?

If you haven't done so already, it might be worth setting up a SQL Agent job to log the results of the 'sqbmemory' extended proc on a regular basis - I would suggest a frequency of one minute. This way you can track the 'total' and 'maximum' 'free' values over time and possibly correlate any drop-off with processes and/or jobs that are running on the server at that time. If you use Reporting Services then you could create a report to display the data against time.

Another thing that can cause VAS problems is heavy use of linked servers - do you use these at all? Other causes can be a large procedure cache, large numbers of SPIDs or large numbers of cursor operations - it's impossible to quantify 'large' for your server, though, without an understanding of the workload.

Chris
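A minimal sketch of the logging approach suggested above, assuming SQL Backup's sqbmemory extended stored procedure is installed in master; the table name is hypothetical, and the column list is an assumption that should be adjusted to match the proc's actual result set:

```sql
-- Hypothetical table to accumulate sqbmemory snapshots over time.
CREATE TABLE dbo.VASLog (
    LoggedAt   datetime    NOT NULL DEFAULT GETDATE(),
    MemoryType varchar(64) NOT NULL,
    Minimum    bigint      NOT NULL,
    Maximum    bigint      NOT NULL,
    Average    bigint      NOT NULL,
    Total      bigint      NOT NULL
);

-- SQL Agent job step, scheduled once a minute:
-- capture a snapshot of the VAS free-block statistics.
INSERT INTO dbo.VASLog (MemoryType, Minimum, Maximum, Average, Total)
EXEC master..sqbmemory;
```

Charting the 'Maximum' and 'Total' values for the free regions against time should make any gradual VAS fragmentation, or a sudden drop tied to a specific job, easy to spot.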
Actually, I've managed to sort this out myself by downloading a copy of SQLite Database Browser and removing the problematic entries from the Document and DocumentVersion tables. All is now well.
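The same cleanup can be done with plain SQL against the SQLite file. This is a sketch only - the WHERE clauses are hypothetical placeholders, since the thread doesn't identify which rows were problematic:

```sql
-- Hypothetical: 42 stands in for the ID of the offending document.
-- Remove child rows first in case a foreign key links the tables.
DELETE FROM DocumentVersion WHERE DocumentId = 42;
DELETE FROM Document WHERE Id = 42;
```

Taking a copy of the SQLite file before editing it is prudent, as there's no undo once the rows are gone.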