Kendra_Little
Hello,

SQL Change Automation uses a local database to validate that all of your migration scripts can run successfully against a fresh environment -- this is called the shadow database. What's happening here is that when the commands to create a memory-optimized table run against the shadow, SQL Server raises an error because that shadow database hasn't been configured for memory-optimized tables.

There are two different ways you can handle this:

Approach 1 - Clone as Baseline

If you are also doing a trial of SQL Clone and/or already have Redgate's SQL Clone, you can use the "Clone as Baseline" feature. With this approach, an image of the production database (which you may mask if you're using Data Masker) is used as the "starting point" for the shadow. All of the properties of the production database, like the configuration you have for memory-optimized tables, are already present in that image / clone, so they carry through and this works automatically. (Note: with this approach you would only need to create migration scripts for the changes you want to make; you wouldn't need to create migration scripts for existing objects at all.)

Approach 2 - Pre-Deployment Scripts

If Clone as Baseline isn't right for you, then you can use a pre-deployment script to configure database settings (like enabling memory-optimized tables) for the verification and build processes, which run against empty databases. With this approach, variables will be very useful. You will likely want to:

Have "USE $(DatabaseName)" at the top of the pre-deployment script to set the context to the shadow.
Use the $(IsShadowDeployment) variable (or some other method of your choice) to ensure that the pre-deployment script only runs against the correct environments. Usually folks only want the database configuration scripts to run against verify and build environments.

There is a sketch of what such a script might look like at the end of this post.

Note: If you are working with a Pre-Sales engineer as part of this process, they are very skilled at helping folks with this as well.

Hope this helps,
Kendra
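P.S. Here is a rough sketch of what an Approach 2 pre-deployment script might look like. The filegroup name, container path, and the exact value that $(IsShadowDeployment) takes are assumptions -- adjust them for your project and build agent.

USE [$(DatabaseName)];
GO

-- Only configure memory-optimized support for shadow / verify / build deployments,
-- and only if the database doesn't already have a memory-optimized filegroup.
IF '$(IsShadowDeployment)' = 'True'   -- assumed value; check how this variable is populated in your project
   AND NOT EXISTS (SELECT 1 FROM sys.filegroups WHERE type = 'FX')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        ADD FILEGROUP [MemoryOptimizedFG] CONTAINS MEMORY_OPTIMIZED_DATA;

    ALTER DATABASE [$(DatabaseName)]
        ADD FILE ( NAME = N'MemoryOptimizedContainer',
                   FILENAME = N'C:\SQLData\$(DatabaseName)_MemOpt' )   -- placeholder path; use a location valid on the build agent
        TO FILEGROUP [MemoryOptimizedFG];
END
GO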
Hi,

A quick update from our side -- I've been chatting with @DanC and some others about options for this today.

I've tried out the "PublishHTMLReports" plugin by Lakshay Kaushik, and I haven't been able to get it to work for the changes.html report. The plugin is designed for publishing JMeter reports specifically. It might be possible to get this to work with some code contributions to rewrite some of the key files; I am not sure.

We are looking at a couple of other options:

If one is using YAML pipelines, it is possible to add a short bit of YAML to publish the Change and Drift reports as artifacts to the pipeline (there is a sketch of such a step at the end of this post). This doesn't render the report in the Azure DevOps frame itself, but it does make it very easy to click on the published artifacts associated with a pipeline run, download the files, and open them in the browser.

If one is using Classic Release Pipelines, this requires a different approach. I believe that Classic Release Pipelines still lack the ability to publish artifacts. My colleague is looking at an option to use a fileshare in Azure to publish the latest reports from a pipeline in a way that can be included on an Azure DevOps dashboard. This approach could fit well with Classic Release Pipelines.

Do you already have a preference about whether you plan to use "Classic" pipelines or YAML pipelines? (I think we will be exploring both of the options above as time allows -- there is no wrong answer here. Just curious which feels like it might be the best fit of the two, if either.)

Cheers,
Kendra
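P.S. For the YAML pipeline option, a single publish step along these lines is usually enough. This is only a sketch -- the targetPath and artifact name are placeholders and depend on where your SQL Change Automation step writes the report files.

# Publish the Change / Drift reports as a pipeline artifact
steps:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)/ChangeReports'   # placeholder path
    artifact: 'ChangeAndDriftReports'                               # example artifact name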
Hi @Shaggy,

Sorry for the delay on this one.

For your second question, whether there is a way to remove DeployChangesExecutionOrder, I am not knowledgeable about this one and am going to defer it to other team members.

For your first question, the best way to deal with the merge conflict, here's what I recommend:

A) In general it's good for DevA and DevB to regularly merge to their branches from trunk (or wherever they are eventually going to merge TO), if their branches will be updated on a regular basis. This way they can be aware of other changes as they happen and not have to find out about them all at the end.

[image]

B) It's also best for both DevA and DevB to do that merge again right before they create the pull request. That way they can review the conflict locally and handle it.

C) If there's a race condition and someone merges in right before you, the Pull Request will notify you of the conflict as you mention. In this case I'd go back to the local repo, handle the conflict, and re-push.

For handling the conflict locally, if you're using the SSMS plug-in, we recommend the free VS Code as a merge tool if you don't already have a favorite. To handle the conflict (see the command outline at the end of this post):

In the trunk branch, pull changes
Change to the branch in question (in this case BR1)
Merge changes from trunk into BR1 -- in the example where trunk is named master: git merge master
Open the sqlproj file with the conflict and merge the changes; in this case we would accept both (image below)
Stage and commit
Push

[image]

[image]

Hope this helps,
Kendra
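P.S. Here is a rough outline of those steps as git commands. The branch name BR1, the trunk being named master, and the project file name are just examples from the scenario above.

git checkout master           # switch to the trunk branch
git pull                      # pull the change that created the conflict
git checkout BR1              # switch to the branch in question
git merge master              # merge trunk into BR1; the sqlproj conflict surfaces here
# resolve the conflict in the sqlproj file in your merge tool (e.g. VS Code), accepting both entries
git add MyProject.sqlproj     # example project file name
git commit
git push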