Comments
Hi
We shall put this forward for a future version, although it will probably be in hours rather than minutes.
Regards
Dan
Hi
If you are using COPYTO, or backing up to a remote location, the process running the task must have permission to write there.
I would recommend the option you have found, which is to run the SQL Server Agent service as a domain user and assign that user permissions on the remote location.
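As a rough sketch, a backup that copies the file to a network share might look something like this (the database name, local path and UNC share are placeholders for your own):
EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [pubs] TO DISK = [d:\backups\<AUTO>] WITH COPYTO = [\\remoteserver\backups]" -E'
Whichever account runs this (the SQL Server Agent service account when it is scheduled as a job) needs write permission on \\remoteserver\backups.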
Regards
Dan
Hi Hans
If there are multiple log backup files in the restore directory SQL Backup will read information from all of them and restore them in the correct order. This means that the database will not be out of sync.
In your case the file was not copied across, so if a file is missing the standby server will be out of sync. There is no way to prevent this within the software, as the restore will use whatever files it finds.
What should be established is why there was no file. If the backup didn't take place, then all should be OK, as the next file to be copied across will continue from the last.
A potential workaround is to add a counter to the file name so that you can tell whether the servers are in sync. If the counter is out of sequence, the files would not be restored.
As a note, when setting up log shipping I would recommend having a full backup copied across each night. This will ensure that the databases are in sync each day and reduce the number of backup files needed for a restore. To automate this you can use the scripts in other posts, but change the date field to:
@datestamp = CAST(DATEPART(m, getdate()) AS varchar(5)) + CAST(DATEPART(d, getdate()) AS varchar(5)) + CAST(DATEPART(yy, getdate()) AS varchar(5))
This will remove the hours, minutes and seconds, so you know the filename for the restore process.
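For example, a nightly full backup job using that datestamp might look something like this (the database name and network share are placeholders):
DECLARE @datestamp varchar(10), @cmd varchar(500)
SET @datestamp = CAST(DATEPART(m, getdate()) AS varchar(5)) + CAST(DATEPART(d, getdate()) AS varchar(5)) + CAST(DATEPART(yy, getdate()) AS varchar(5))
-- e.g. 1252006 for 25 January 2006, with no time portion
SET @cmd = '-SQL "BACKUP DATABASE [pubs] TO DISK = [\\standby\backups\FULL_pubs_' + @datestamp + '.sqb]" -E'
EXECUTE master..sqlbackup @cmd
The restore job on the standby server can then build the same filename, since it contains only the date.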
Regards
Dan
Yes, it will always try to write a log file here.
This directory should be accessible to all users; can you check the permissions, as it sounds like there is an access issue?
Regards
Dan
Hi
The command you have given is slightly incorrect. Here is an amended version.
master..sqlbackup '-SQL "BACKUP DATABASE [ISI] TO DISK = [d:\sql_data\MSSQL\BACKUP\<AUTO>] WITH LOGTO = [d:\sql_data\MSSQL\BACKUP\Logs\]" -E'
The WITH LOGTO clause, and any other options, must be within the double quotes (").
You are also missing the login credentials. Please use the -E option (as above) or -U and -P.
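If you would rather use SQL authentication, the same command with -U and -P would look something like this (the username and password are placeholders):
master..sqlbackup '-SQL "BACKUP DATABASE [ISI] TO DISK = [d:\sql_data\MSSQL\BACKUP\<AUTO>] WITH LOGTO = [d:\sql_data\MSSQL\BACKUP\Logs\]" -U sqlbackupuser -P MyPassword'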
Regards
Dan
Hi Peter
If you want to change the connection string, for example to adjust the connection timeout or to connect to a SQL Server that does not use the default port, here is what you can do.
When choosing the databases, change the entry in the server field to 'servername;Connect Timeout=1' or 'TCP:10.1.1.2,1433;Connect Timeout=1' (without the quotes). In both examples the connection timeout has been set to 1 second, and in the second the connection is forced to use the TCP/IP protocol over the default port of 1433.
I hope this helps
Regards
Dan
Hi John
It may be worth having a look in Enterprise Manager to check that there are no other connections, as this is the most likely cause.
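Alternatively, from Query Analyzer you can list the current connections with:
EXEC sp_who
-- or, to narrow it down to one database (the name is just an example):
SELECT * FROM master..sysprocesses WHERE dbid = DB_ID('MyDatabase')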
Regards
Dan
Hi John
When data is retrieved from the databases, the NOLOCK hint is used. This prevents the reads from taking locks that would block other transactions. Although this means that the data may be "dirty", it increases performance and reduces contention on the server.
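For illustration, a read using the NOLOCK hint takes this form (the table and column names here are just examples):
SELECT OrderID, CustomerID FROM dbo.Orders WITH (NOLOCK)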
The problem you are encountering is due to too many transactions taking place while the data is being read.
Unfortunately the only way around this is to try to compare the data while fewer transactions are taking place.
Regards
Dan
Hi
The most common reasons for not being able to see all the tables are that the tables do not have a unique index (usually a primary key), or that the columns with an index have different names.
Can you check this please and let us know how you get on.
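One quick way to check is to run sp_helpindex against each table from Query Analyzer (the table name is just an example):
EXEC sp_helpindex 'dbo.Orders'
This lists the indexes on the table so you can confirm whether a unique one exists.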
Regards
Dan
Hi
Unfortunately there is no way to do this directly with the tables at the moment. You could create an indexed view instead and work with that.
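As a rough sketch, an indexed view over a table might look something like this (the view, table and column names are placeholders):
CREATE VIEW dbo.vOrders
WITH SCHEMABINDING
AS
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOrders ON dbo.vOrders (OrderID)
GO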
Regards
Dan