Comments
3 comments
-
The issue SQL Backup is running into is a lack of free memory to perform the backup using the default settings. When it encounters this situation, it automatically retries the backup using smaller memory blocks, which is why the backup succeeded on the next attempt.
If you would like to prevent SQL Backup from raising such warnings, you can use the MAXTRANSFERSIZE option to specify a smaller buffer size (in bytes) to use on the first backup attempt. The default value is 1048576 (1 MB). You could try 512 KB (524288 bytes), e.g.
EXEC master..sqlbackup '-sql "BACKUP ... WITH MAXTRANSFERSIZE = 524288..." '
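For reference, a complete command might look something like the sketch below; the database name and backup path are placeholders of my own, so substitute your own values:
-- Hedged sketch: full backup of a hypothetical database to a hypothetical path,
-- using a 512 KB transfer size on the first backup attempt.
EXEC master..sqlbackup '-sql "BACKUP DATABASE [MyDatabase]
  TO DISK = [D:\Backups\MyDatabase.sqb]
  WITH MAXTRANSFERSIZE = 524288"'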
The core issue is that your free SQL Server memory is running low (~29 MB) and is heavily fragmented (363 blocks). We have an extended stored procedure that could help you determine what is causing the fragmentation. If you run the following:
EXEC master..sqbmemory
you will see a breakdown of SQL Server's memory space similar to that in the SQL Backup log. The line of interest is the 'Free' memory entry, e.g.
9/23/2010 3:40:04 AM: Free 6729728 4096 108183 250 27045888
What you could do is run sqbmemory periodically, e.g. every 30 minutes, and monitor the 'Free' memory values. Do you see a trend in the values? Do the values fluctuate wildly during a certain period of time, or when a certain process is running in the database?
Here's a script you could run in Management Studio that will log the output of sqbmemory into a temporary table every 30 minutes:
CREATE TABLE ##sqbmemorylog
(
  EntryTimestamp datetime,
  Type varchar(8),
  Minimum numeric(25, 0),
  Maximum numeric(25, 0),
  Average numeric(25, 0),
  BlkCount numeric(25, 0),
  Total numeric(25, 0)
)

WHILE 1 = 1
BEGIN
  INSERT INTO ##sqbmemorylog (Type, Minimum, Maximum, Average, BlkCount, Total)
  EXEC master..sqbmemory

  UPDATE ##sqbmemorylog
  SET EntryTimestamp = GETDATE()
  WHERE EntryTimestamp IS NULL

  WAITFOR DELAY '00:30:00'
END
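Once that has been running for a few hours, a simple query like the following sketch (assuming the free-memory rows are labelled 'Free', as in the log sample above) will show how the free memory and its fragmentation change over time:
-- Review the logged free-memory figures in chronological order.
SELECT EntryTimestamp, Total, BlkCount, Maximum
FROM ##sqbmemorylog
WHERE Type = 'Free'
ORDER BY EntryTimestamp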
Hopefully, this helps in determining the cause of the fragmentation. -
Hi Peter,
Many thanks for your answer.
I will try that with the stored procedure.
Kind regards
Kristofer -
There's also the possibility that this memory usage pattern is normal for your setup/environment, and that using SQL Backup simply puts more demand on the free memory space.
Can someone please help me with this type of error?
Sometimes the backup job fails with an error message, but seems to write the backup file nevertheless. The outcome in the "Activity History" is "Successful (with warnings)".
The log file shows the following errors:
So, the log tells me to reduce the number of threads used in the backup, but the number is already at 1!
What should I do?
Thanks for your help.
Kristofer