Kendra_Little
Hi Peter,

This is a really great question, and I think we can develop some good content around this concept, and maybe even go a little farther. There are a couple of clarifying questions that might help me get my head around the best way to think through this.

1) How many total environments are in the mix outside of production? In other words, after the integration environment, is there a QA or Staging environment that things get deployed to before production? The reason I ask is that if we don't have another environment in the pipeline before production, then it may become more important to "reset" the integration environment and then deploy only feature 2 to it. (Even if the two features are isolated, the application might have some sort of dependency we wouldn't find without doing this.)

2) Do we have the option to use something like SQL Clone or a snapshotting tool to reset the environments in this scenario? It sounds like we do in this case, so I will pull that into consideration.

3) Are you working primarily with SCA in Visual Studio or SSMS? If it's SSMS, I'm curious whether you've updated the plugin since Sept 23rd -- there was an update in that release which helps with removing migration scripts in terms of the schema model. (Visual Studio already had the ability to handle deletions of migration scripts.)

Thanks very much for the thoughtful question, and I'm looking forward to working through this scenario more on Monday. Have a great weekend.

Kendra
Hello,

SQL Change Automation uses a local database to validate that all your migration scripts can run successfully against a fresh environment -- this is called the shadow database. What's happening here is that when the commands to create a memory-optimized table run against the shadow, SQL Server raises an error because that shadow database hasn't been configured for memory-optimized tables.

There are two different ways you can handle this:

Approach 1 - Clone as Baseline

If you are also doing a trial of SQL Clone and/or already have Redgate's SQL Clone, you can use the "Clone as Baseline" feature. With this approach, an image of the production database (which you may mask if you're using Data Masker) is used as the "starting point" for the shadow. All of the properties of the production database, like the configuration you have for memory-optimized tables, are already present in that image / clone, so they carry through and this automatically works. (Note: with this approach you would only need to create migration scripts for changes you want to make; you wouldn't need to create a migration script for existing objects at all.)

Approach 2 - Pre-Deployment Scripts

If Clone as Baseline isn't right for you, then you can use a pre-deployment script to configure database settings (like enabling memory-optimized tables) for the verification and build processes, which run against empty databases. With this approach, variables will be very useful. You will likely want to:

- Have "USE $(DatabaseName)" at the top of the pre-deployment script to set the context to the shadow.
- Use the $(IsShadowDeployment) variable (or some other method of your choice) to ensure that the pre-deployment script only runs against the correct environments. Usually folks only want the database configuration scripts to run against verify and build environments.

A minimal sketch of what this can look like follows at the end of this comment.

Note: if you are working with a Pre-Sales engineer as part of this process, they are very skilled at helping folks with this as well.

Hope this helps,
Kendra
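For illustration, a pre-deployment script along these lines might look like the sketch below. This is only a sketch under assumptions: the filegroup name, container path, and the comparison against '1' for $(IsShadowDeployment) are illustrative and should be adapted to your own configuration.

    -- Pre-deployment script sketch: enable memory-optimized tables on the shadow/build database.
    -- Assumptions: the filegroup name, container path, and the '1' value checked for
    -- $(IsShadowDeployment) are illustrative; adjust them for your environment.
    USE [$(DatabaseName)];
    GO

    IF '$(IsShadowDeployment)' = '1'
       AND NOT EXISTS (SELECT 1 FROM sys.filegroups WHERE type = 'FX')  -- no memory-optimized filegroup yet
    BEGIN
        -- Add a memory-optimized filegroup and a container for it
        ALTER DATABASE [$(DatabaseName)]
            ADD FILEGROUP [MemOptFG] CONTAINS MEMORY_OPTIMIZED_DATA;

        ALTER DATABASE [$(DatabaseName)]
            ADD FILE (NAME = N'MemOptContainer',
                      FILENAME = N'C:\SQLData\$(DatabaseName)_MemOpt')
            TO FILEGROUP [MemOptFG];
    END
    GO

The NOT EXISTS check keeps the script idempotent, so it is safe for it to run repeatedly against the shadow and build databases.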
Hi,

A quick update from our side -- I've been chatting with @DanC and some others about options for this today.

I've tried out the "PublishHTMLReports" plugin by Laksahay Kaushik, and I haven't been able to get it to work for the changes.html report. The plugin is designed for publishing JMeter reports specifically. It might be possible to get this to work with some code contributions to rewrite some of the key files; I am not sure.

We are looking at a couple of other options:

- If one is using YAML Pipelines, it is possible to add a short bit of YAML to publish the Change and Drift reports as artifacts to the pipeline (a sketch of this follows below). This doesn't render the report in the Azure DevOps frame itself, but it does make it very easy to click on the published artifacts associated with a pipeline run, download the files, and open them in the browser.
- If one is using Classic Release Pipelines, this requires a different approach. I believe that Classic Release Pipelines still lack the ability to publish artifacts. My colleague is looking at an option to use a file share in Azure to publish the latest reports from a pipeline in a way that can be included on an Azure DevOps Dashboard. This approach could fit well with Classic Release Pipelines.

Do you already have a preference about whether you plan to use "Classic" pipelines or YAML pipelines? (I think we will be exploring both of the options above as time allows -- there is no wrong answer here. Just curious which feels like the better fit of the two, if either.)

Cheers,
Kendra
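For the YAML pipeline option, a minimal sketch might look like the step below. It assumes that earlier steps in the job have already written the Change and Drift HTML reports into a folder under the artifact staging directory -- the folder path and artifact name here are illustrative assumptions to adjust for your pipeline.

    # Publish the SQL Change Automation Change/Drift reports as a pipeline artifact.
    # Assumption: earlier steps copy Changes.html / Drift.html into the Reports folder
    # shown below -- change targetPath to wherever your reports actually land.
    steps:
      - task: PublishPipelineArtifact@1
        displayName: 'Publish change and drift reports'
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)/Reports'
          artifact: 'SCA-Reports'

Once the run completes, the reports appear as a downloadable artifact on the pipeline run summary page.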