How can we help you today?
AlexYates
Sorry for the delay. I've been very busy and on vacation, and I'm catching up on emails etc now.

Firstly, following the release of SQL Change Automation in the last few weeks, you should now use this link instead: https://documentation.red-gate.com/sca3/reference/powershell-cmdlets

Open disclosure: I've not tested this, but I expect something like the following may work. With 100+ databases you probably want to keep your target connections in a list and deploy with a foreach loop; there's a rough sketch of that at the end of this reply.

# Defining variables
$nuget = "C:\path\to\your.nupkg"
$target1 = New-DatabaseConnection -ServerInstance "SERVER01" -Database "yourdb" | Test-DatabaseConnection # This uses WinAuth
$target2 = New-DatabaseConnection -ServerInstance "SERVER02" -Database "yourotherdb" -Username "user" -Password "P4ssword1!!" | Test-DatabaseConnection # This uses SQL Auth
$releasePath = "C:\some\file\share\$($OctopusParameters['Octopus.Environment.Name'])\$($OctopusParameters['Octopus.Project.Id'])\$($OctopusParameters['Octopus.Release.Number'])"

# Creating the release artifact
$release = New-DatabaseReleaseArtifact -Source $nuget -Target $target1
# Alternatively, try this to validate that all targets are in the same start state.
# It will take more time to run, but it alerts you to drift earlier:
# $release = New-DatabaseReleaseArtifact -Source $nuget -Target @($target1, $target2)
Export-DatabaseReleaseArtifact $release -Path $releasePath

# You might like to split this script into two at this point, allowing you to insert
# an Octopus manual intervention step in between to review the release artifacts.
# You might also like to try a dry-run deploy to one environment before rolling out to the rest.

# Deploying the release
Use-DatabaseReleaseArtifact $release -DeployTo $target1
Use-DatabaseReleaseArtifact $release -DeployTo $target2

For reference, several years ago I worked with the folks at Skyscanner and they employed a strategy very similar to this one.

Marketing video for non-techy managers: https://www.youtube.com/watch?v=sNsPnCv7hHo

Technical lightning talk for engineers: https://www.youtube.com/watch?v=91n1mkyrSp8
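And here's that rough sketch of the list-and-loop approach. Like the script above it's untested; the server and database names are placeholders, and it reuses the $nuget variable from above. The idea is to build all your validated connections up front, then use the whole list both for the drift check when creating the release artifact and for the deployment loop:

# Untested sketch: placeholder server/database names, all WinAuth here
# (add -Username/-Password per entry if you need SQL Auth).
$targetDefinitions = @(
    @{ ServerInstance = "SERVER01"; Database = "yourdb" },
    @{ ServerInstance = "SERVER02"; Database = "yourotherdb" }
)

# Validate every connection up front.
$targets = @(foreach ($t in $targetDefinitions) {
    New-DatabaseConnection -ServerInstance $t.ServerInstance -Database $t.Database | Test-DatabaseConnection
})

# Create the release artifact against all targets to catch drift early,
# then deploy it to each target in turn.
$release = New-DatabaseReleaseArtifact -Source $nuget -Target $targets
foreach ($target in $targets) {
    Use-DatabaseReleaseArtifact $release -DeployTo $target
}

This keeps the connection definitions in one place, so adding target number 101 is a one-line change (or you could read the list from a CSV/config file instead).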
Not sure if it makes you feel more comfortable, but the transaction handling is managed by the open source tSQLt layer, not the Redgate layer. So you can fork it and/or contribute your own patches if you prefer. 😉
Yes - that could well be the issue. As a rule of thumb, if the static data table is over 1,000 rows, expect an impact on performance. If the table is an order of magnitude bigger, consider using a different strategy.

However, if this is the issue, there is a trick you can use to give you a significant performance boost: on the Setup tab, under Options just for this database, disable checks for changes to static data. The source code will still include the static data, but you have turned off the comparison by default, so the static data will stop slowing down your refresh on the commit/get latest tab.

Crucially, however, it will no longer notify you if the data changes. You will need to head back to the Setup tab and flip it back on if/when you want to commit or pull down data updates. Hence, this fix will boost performance, but it will mean your team need to communicate any static data updates with each other and manually push them up/pull them down.

Also, this setting is individual to each dev machine. Hence, if using the dedicated model, each developer will individually need to flip the check back on, pull down the data, and flip the check back off again to get their performance back.
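If you're not sure which of your static data tables breach that rough 1,000-row threshold, a quick row count will tell you. Here's a minimal, untested sketch; it assumes the SqlServer PowerShell module is installed, and the server, database, and table names are placeholders for your own:

# Untested sketch: report row counts for the tables you've linked as static data.
# Assumes the SqlServer module (Install-Module SqlServer); names are placeholders.
Import-Module SqlServer

$staticDataTables = @("dbo.Currency", "dbo.CountryCode")
foreach ($table in $staticDataTables) {
    $result = Invoke-Sqlcmd -ServerInstance "SERVER01" -Database "yourdb" -Query "SELECT COUNT(*) AS [Rows] FROM $table"
    "{0}: {1} rows" -f $table, $result.Rows
}

Any table in that list reporting thousands of rows is a candidate for the trick above, or for a different data-deployment strategy altogether.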