Comments
This is what we suspected; however, we're keen to stick with ANSI, which we deliberately specified when scripting the database and which, for us, is the default encoding for applications such as SQL Server Management Studio and Notepad. We wish to ensure that all scripts use the same encoding when added to our source control software - enforcing non-standard defaults across a large development team would be tricky.
Applications such as Notepad are able to display these characters - why does SQL Compare handle them differently?
Chris
That could be because ~700KB of contiguous free VAS is adequate to support native backups. As I mentioned, you can run into the same problem with T-SQL backups, and you may do so in time should the VAS region become even more fragmented. http://www.red-gate.com/supportcenter/C ... 000121.htm
Chris
To be on the safe side, I would schedule some downtime for a quick restart of SQL Server (no need to restart the server itself). You might want to execute the sqbmemory extended stored procedure immediately after the restart so that you have figures for comparison at a later date.
It's good practice to regularly perform test restores of your backups; if you were to do this, you'd quickly know whether or not the backup files were valid.
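As a starting point for those test restores, here is a minimal T-SQL sketch. The file paths and database names are placeholders, and these are native SQL Server commands - backup files produced by SQL Backup (.sqb) would need to be restored via SQL Backup itself rather than native RESTORE:

```sql
-- Quick check that a native backup file is readable. Note this does not
-- guarantee the data is fully restorable; a complete test restore is stronger.
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDatabase.bak';

-- Full test restore to a differently-named scratch database, moving the
-- data and log files so the live database is untouched.
RESTORE DATABASE [MyDatabase_Test]
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase'     TO N'D:\Data\MyDatabase_Test.mdf',
     MOVE 'MyDatabase_log' TO N'D:\Data\MyDatabase_Test.ldf',
     STATS = 10;
```

Scheduling something like this to run regularly gives you an early warning if a backup file ever turns out to be invalid.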
Chris
EDIT: Looking at your logs you can see that SQL Backup gradually reduces its memory requirements until the backup runs successfully:
1st attempt: 12,582,912 bytes
2nd attempt: 6,291,456 bytes
3rd attempt: 3,145,728 bytes
4th attempt: 1,572,864 bytes
5th (successful) attempt: presumably 786,432 bytes
I believe that SQL Backup gradually reduces the value of MAXTRANSFERSIZE: http://www.red-gate.com/supportcenter/C ... wledgebase\SQL_Backup\KB200708000121.htm
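If you wanted to avoid the retry loop altogether, you could request a smaller transfer size up front. This is a sketch only - it assumes the MAXTRANSFERSIZE keyword is supported by your version of SQL Backup, and the database name and path are placeholders:

```sql
-- Hypothetical example: explicitly cap the transfer size so the backup
-- does not require a 12MB contiguous VAS block on a fragmented instance.
EXEC master..sqlbackup
    '-SQL "BACKUP DATABASE [MyDatabase]
           TO DISK = [D:\Backups\MyDatabase.sqb]
           WITH MAXTRANSFERSIZE = 1048576"';
```

The trade-off is that smaller transfer sizes can slow the backup down, so this is a workaround rather than a fix for the underlying fragmentation.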
Definitely restart the instance before the backups fail altogether due to a lack of appropriately-sized contiguous VAS; ideally you need at least 12,582,912 bytes (12MB) to be returned for the 'Maximum Free' value discussed earlier.
In my experience, in similar situations, you're unlikely to be at risk of "bad" backups (assuming that by bad backups you mean that the backup file is corrupt); it is far more likely that you'll reach the point where the backups fail to run at all.
As I mentioned earlier, you need to restart SQL Server then closely monitor the output of sqbmemory to help you determine when the error is next likely to occur and to help you determine exactly what is causing the fragmentation.
Note that this issue is not really a problem with SQL Backup as the same can occur with native backups.
Chris
It looks like the VAS region is heavily fragmented as the maximum 'free' block size is ~768KB. A freshly-restarted 32-bit instance should show a maximum 'free' block size of somewhere in the region of 100MB.
The immediate fix is to restart the instance of SQL Server; the long-term fix is to work out why the VAS is becoming fragmented.
A few causes of VAS fragmentation and/or pressure are:
1. Using sp_xml_preparedocument without a corresponding sp_xml_removedocument.
2. Use of CLR objects (if you're not using CLR objects then ensure that the CLR sp_configure option is turned off and restart the instance if you change the option).
3. Use of the sp_OA... stored procedures on a SQL Server 2005 SP2 installation - upgrading to SP3 should fix the problem.
4. Use of linked servers.
Upgrading to a 64-bit OS and SQL Server should remove the problem, although it's always advisable to attempt to fix the root cause first where possible.
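To illustrate cause 1 with a minimal sketch (the XML and column names here are made up for the example): every sp_xml_preparedocument call should be paired with sp_xml_removedocument, otherwise the parsed document's memory is not released until the connection resets:

```sql
DECLARE @hDoc INT;
DECLARE @xml NVARCHAR(MAX) = N'<root><row id="1"/><row id="2"/></root>';

-- Parse the XML; this allocates memory that is NOT freed automatically.
EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml;

-- Attribute-centric mapping (flag 1) of the rows.
SELECT id FROM OPENXML(@hDoc, '/root/row', 1) WITH (id INT '@id');

-- Always release the handle; leaking these allocations is one way
-- memory pressure and fragmentation can build up over time.
EXEC sp_xml_removedocument @hDoc;
```

Auditing your stored procedures for unpaired sp_xml_preparedocument calls is a cheap first step when investigating this kind of fragmentation.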
Thanks
Chris
Could you execute the following against the problematic instance of SQL Server and post the results back to the forum?
EXEC master.dbo.sqbmemory
Thanks
Chris
Hi Brian
Thanks for the confirmation.
Chris
@Alex B To answer your question regarding the development environment: this happens in SSMS, but also in VS when working with database projects where the database name is the same as the schema name in which the table is being created. This particular table script was auto-generated by VS when 'importing' the database via the native Database Project import task in VS.
Thanks @Alex B for researching into this and getting back to me. I'm not totally convinced by the explanation, though, as I've subsequently found that the error only surfaces when a 'WITH' clause is included with the constraint creation statement, even when the table creation statement is not qualified with the database name. Splitting apart the table and constraint creation script gives the error in this scenario:
CREATE TABLE [Logging].[LogEntries] (
[LogEntryId] INT IDENTITY (1, 1) NOT NULL
)
GO
ALTER TABLE [Logging].[LogEntries] ADD
CONSTRAINT [PK_LogEntries] PRIMARY KEY CLUSTERED ([LogEntryId] ASC) WITH (FILLFACTOR = 90)
GO
...but not in this one: CREATE TABLE [Logging].[LogEntries] (
[LogEntryId] INT IDENTITY (1, 1) NOT NULL
)
GO
ALTER TABLE [Logging].[LogEntries] ADD
CONSTRAINT [PK_LogEntries] PRIMARY KEY CLUSTERED ([LogEntryId] ASC)
GO
The only difference is the omission of the WITH clause in the second pair of statements. I agree that this is evidently an edge case, and so likely isn't worth spending time fixing.
Thanks again
Chris
Alex B said:
If you try formatting both of these, what is the result:
WITH test AS
(SELECT 1 AS col1)
SELECT * FROM test
and
SELECT 1
WITH test AS
(SELECT 1 AS col1)
SELECT * FROM test
For me, the first one does not add a semicolon before the WITH, and the second one will add a semicolon after the 1, as is normal for insert semicolons; does that happen for you?
Kind regards,
Alex
Hi Alex
I get the same as you: no semicolon for the first (as I would expect), and a semicolon after the SELECT 1 statement for the second.
The error I'm getting appears to relate to the word 'Logging' (the table's schema name in my example). If I add an 's' to the schema name (i.e. 'Loggings') then the semicolon insertion works just fine. The error I reported only seems to happen if I attempt the refactoring with just the table CREATE statement; as soon as I add a 'CREATE SCHEMA Logging' statement, the refactoring completes.
Thanks
Chris