Comments
The folks at Redgate use GitHub. Microsoft use Git in Azure DevOps Services: https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/use-git-microsoft

The Redgate tools will work fine with Git hosted in GitHub, Bitbucket, Azure DevOps Services (hosted) or Azure DevOps Server (TFS / on-prem). The de facto source control technology these days is Git, and it pays to use the most widely supported and understood tools.
Sounds like a sensible decision. Using pre/post-deployment scripts should mean the "deploy to all workstations" step is just a simple "get latest"/"apply changes".
Fair point, I suppose my instructions for option 1 should have read:

UNLINK/RE-LINK STATIC DATA
1. Unlink static data
2. Deploy new column as NULLABLE all the way up to prod ****AND TO EVERY DEV WORKSTATION****
3. INSERT static data manually all the way to prod ****AND TO EVERY DEV WORKSTATION****, or re-link static data to source control and then deploy all the way to prod ****AND TO EVERY DEV WORKSTATION****
4. Add NOT NULL constraint, commit to source control and deploy all the way to PROD ****AND TO EVERY DEV WORKSTATION****
5. Re-link static data (if not already done)

In retrospect, perhaps the following is a better solution all round:

PRE/POST DEPLOYMENT SCRIPTS (v2)
1. Unlink static data table
2. Add a pre-deploy script (see the sketch below) to:
a. Check that the target table is in the before state and that it already holds data
b. If so, create a new table called OriginalTableName_Temp
c. Copy all data to the new temp table
d. Truncate the original table
3. Add a post-deploy script to:
a. Check if OriginalTableName_Temp exists
b. If so, copy all data, including new default data for the new NOT NULL column, into the original table (by the time this script runs, the new column should exist)
c. Drop OriginalTableName_Temp
4. Commit your new pre- and post-deploy scripts, along with your new NOT NULL column, as a single commit
5. Deploy this change to all environments, including prod and all dev workstations
6. Re-link static data

* For now, you probably need to manually patch all the other dev workstations.

Sorry I forgot to include dev workstations in my original answer. Forgive me, I'm only a fallible human. :-)
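To make that concrete, here's a minimal T-SQL sketch of what those pre- and post-deploy scripts could look like. The table dbo.Colour, its Name column, the new NOT NULL column Hex and the temp table dbo.Colour_Temp (standing in for OriginalTableName_Temp above) are all made-up names for illustration; adjust the state checks and the default value to your own schema.

-- Pre-deploy (hypothetical example): stash the existing static data
-- before the schema change runs. Only fires if the table is still in
-- the "before" state (new column absent) and already holds data.
IF OBJECT_ID('dbo.Colour', 'U') IS NOT NULL
   AND COL_LENGTH('dbo.Colour', 'Hex') IS NULL
   AND EXISTS (SELECT 1 FROM dbo.Colour)
BEGIN
    SELECT * INTO dbo.Colour_Temp FROM dbo.Colour;
    TRUNCATE TABLE dbo.Colour;
END

-- Post-deploy (hypothetical example): reload the data, supplying a
-- default for the new NOT NULL column, then clean up the temp table.
IF OBJECT_ID('dbo.Colour_Temp', 'U') IS NOT NULL
BEGIN
    INSERT INTO dbo.Colour (Name, Hex)
    SELECT Name, '#000000' FROM dbo.Colour_Temp;
    DROP TABLE dbo.Colour_Temp;
END

Because both scripts check the table state before doing anything, they should be safe to re-run on environments that have already been patched.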
Alessandro is spot on.

Essentially you need to balance the need to manage all DBs as a single unit (because dependencies) against the need for agility (because if every DB is managed independently you don't need to get everyone to agree to release everything at once).

In the centralised source control (TFVC/SVN etc) world folks tended to have bigger repos, but in the distributed (Git) world folks tend toward smaller, isolated, dare I say, "microservices".

Basically, if you plan to release DB updates for different DBs independently, you should probably have a repo per DB... unless that's totally impractical because of dependencies. And if that's the case, you will probably need to group DBs together, but it's also a good sign that you should try to remove some of those dependencies.
Oof, that sounds fun. Good luck!
You are right, that is nuts.

EITHER: One DLM Dash install per team, with the team responsible for maintaining it - but that relies on each individual team having a neat division of responsibilities and not needing to look after more than 50 databases.

OR: Set up an automated process to drift-check before deployment: https://documentation.red-gate.com/sca3/automating-database-changes/automated-deployments/handling-schema-drift

OR: Build your own thing with triggers (a rough sketch follows below): https://www.red-gate.com/simple-talk/sql/database-administration/database-deployment-the-bits-database-version-drift/

Or do several of those. But 13 instances of DLM Dash really is nuts if there are people who will need to look at many different instances.
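For flavour, here's a minimal sketch of the trigger approach: a database-level DDL trigger that logs every schema change to an audit table. The table and trigger names are invented for the example, and the Simple Talk article linked above goes much further.

-- Hypothetical audit table to record schema changes (drift) as they happen.
CREATE TABLE dbo.SchemaChangeLog (
    EventTime  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    LoginName  NVARCHAR(128) NOT NULL,
    EventType  NVARCHAR(128) NOT NULL,
    ObjectName NVARCHAR(256) NULL,
    TsqlText   NVARCHAR(MAX) NULL
);
GO

-- DDL trigger: fires on any database-level DDL statement and logs who ran what.
CREATE TRIGGER trg_LogSchemaDrift
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e XML = EVENTDATA();
    INSERT INTO dbo.SchemaChangeLog (LoginName, EventType, ObjectName, TsqlText)
    VALUES (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'NVARCHAR(128)'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',  'NVARCHAR(128)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
    );
END;

Anyone who changes the schema outside the pipeline leaves a row behind, so a pre-deployment step can simply check the table (and alert) before releasing.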
Hi Annette,

Off the top of my head, try:

# Source: folder containing the SQL scripts
$project = "C:\Work\scripts"
# Target: connection to the database to update
$targetDb = New-DatabaseConnection -ServerInstance "test01\sql2014" -Database "Test"
# SQL Compare options to apply (here: IgnoreAdditional)
$options = "ignoreadditional"
# Sync the target database schema to match the source
Sync-DatabaseSchema -Source $project -Target $targetDb -SQLCompareOptions $options

See example 6 here: https://documentation.red-gate.com/sca3/reference/powershell-cmdlets/sync-databaseschema

And further docs here: https://documentation.red-gate.com/sca3/automating-database-changes/automated-deployments/using-sql-compare-options-with-sql-change-automation-powershell-module
Not sure why that's not working - one for support, I guess. In the meantime, reverting to the raw SQL Compare command line is my de facto quick fix where possible.