Comments
From a chat in the team Slack channel, I believe it's intended that you *should* be able to deploy pre/post scripts without a schema change, so this could be a bug. I'd like to set up a repro. I'm guessing that you're using SQL Source Control (state-based) plus SQL Change Automation? (Just checking for the repro setup, as the migrations approach also has pre and post scripts.)
Hi @brookal ,

Just tagging onto @SamTouray 's note, and I'm also assuming that you are authoring changes in SQL Source Control. If you're using SQL Change Automation for everything (authoring in Visual Studio with what used to be called ReadyRoll), just holler -- slightly different answer for that, but similar options.

The method of doing this depends a little on which type of tables they are. If these are relatively small reference tables where the data only ever changes as part of the development process (not user input), such as a list of states, countries, etc., then you might want to add these as static data tables, which essentially checks the data into source control. Documents on that are here.

If you've got static data but it needs to be different in different environments, you can do that with a pre- or post-deployment script, as in this example.

For data modifications to tables that aren't suitable for static data, I would also use a pre- or post-deployment script. The requirements for these are here, and the first one is really important: the script needs to be easily rerunnable, so you want it to have a way to record that it's been done, plus an initial check that looks for that record and skips the step if possible.

As @SamTouray mentioned, there are also migration scripts in SQL Source Control, but I believe that in the future we'll rely on the pre- and post-deployment scripts instead of the migration scripts, which may eventually be phased out as a feature.

Hope this helps!

Kendra
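To make the rerunnable requirement concrete, here's a minimal sketch of the "check and skip" pattern; the change-log table, customer table, and change name are all hypothetical, not anything the Redgate tooling generates:

```sql
-- A rerunnable (idempotent) post-deployment data change.
IF OBJECT_ID(N'dbo.DeploymentChangeLog', N'U') IS NULL
    CREATE TABLE dbo.DeploymentChangeLog (
        ChangeName  sysname   NOT NULL PRIMARY KEY,
        CompletedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

IF NOT EXISTS (SELECT 1 FROM dbo.DeploymentChangeLog
               WHERE ChangeName = N'Backfill Customer Region')
BEGIN
    -- The actual data modification goes here.
    UPDATE dbo.Customer SET Region = N'Unknown' WHERE Region IS NULL;

    -- Record that this step ran so reruns skip it.
    INSERT dbo.DeploymentChangeLog (ChangeName)
    VALUES (N'Backfill Customer Region');
END;
```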
Hi Asaf,

I chatted with the team this morning and confirmed that we don't have a custom extension for AWS DevOps at this point. That doesn't mean you can't use it, however -- you can use the SQL Change Automation PowerShell module, downloaded from the PowerShell Gallery, as part of a build script. The PowerShell cmdlets have full functionality; there's nothing in the extensions you can't do in the scripts. I personally like using them more than the extensions.

We have worked examples and also reference documentation for the cmdlets, which I've used quite a lot and which got me started fast. I've also submitted a suggestion to get an article written for Product Learning which walks through setting this up in AWS. I think it'd be a great post, but the links above should get you started right away.

Kendra
Hi Greg,

Apologies in advance: you've asked a question about one of my favorite topics, so I'll probably write way more info than you actually want.

I would do this in a pre-deployment script in a rerunnable format, something like what is in the gist here: https://gist.github.com/LitKnd/ae231bcd669a9a6c2cc23aacc0429974

Note: I believe the ALTER DATABASE CURRENT syntax is 2012+ only, so you may need a variation if you're on a lower version.

As soon as you allow snapshot isolation, SQL Server will begin creating versions in tempdb for modified rows, and each modified row needs an extra 14 bytes of versioning overhead in the database itself.

For existing production databases where performance matters, I recommend releasing the ability to allow snapshot isolation in its own change window, before you actually begin using it. This gives you some time to observe whether or not the row versioning itself has any negative effect before you add any changes that actually use snapshot isolation. Part of the reason is that if you have connections running SET TRANSACTION ISOLATION LEVEL SNAPSHOT and you then change the database setting so that snapshot isn't allowed, those connections will start getting an error when they try to run queries. So it's worth enabling the setting first and letting it burn in for a bit before you start using it -- that just simplifies troubleshooting if anything looks weird.

I'm a huge fan of snapshot isolation, please do discuss and follow up if you'd like more info!

Kendra
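A rerunnable pre-deployment script along those lines might look like this sketch (assuming SQL Server 2012+ for the ALTER DATABASE CURRENT syntax):

```sql
-- Rerunnable: only flips the setting if it isn't already on.
-- ALTER DATABASE CURRENT requires SQL Server 2012+; on older versions,
-- substitute the database name instead.
IF EXISTS (
    SELECT 1
    FROM sys.databases
    WHERE database_id = DB_ID()
      AND snapshot_isolation_state_desc = N'OFF'
)
BEGIN
    ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
END;
```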
Nice! Yeah, that sounds like a scenario where it will really shine. Glad to hear you love the feature, it's a favorite of mine and can be so powerful!

For an Azure VM that didn't have great disk under tempdb, I might still test out snapshot isolation, TBH -- I'd just use a little caution. The versioning overhead can vary quite a bit, and it might be quite small if there aren't a ton of updates and deletes, or if updates and deletes happen against relatively narrow rows. Inserts generally don't need tempdb (there's no previous version of the row to capture), and SQL Server is also generally smart enough not to version off-row LOB data if you've only updated other columns on the row. So the tempdb impact is quite small for many applications, depending on the write pattern.

I just always figure the time I don't mention it is the time it's likely to be a big deal, hahaha. Anyhoo, cheers! Have a great weekend.
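If you do test it out, one low-effort way to keep an eye on the versioning overhead is the version store counters exposed by sys.dm_os_performance_counters, something like:

```sql
-- Version store size and churn in tempdb. The counters live under the
-- SQLServer:Transactions object (MSSQL$<Instance>:Transactions on a
-- named instance), hence the LIKE on object_name.
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value          AS value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%:Transactions%'
  AND counter_name IN (N'Version Store Size (KB)',
                       N'Version Generation rate (KB/s)',
                       N'Version Cleanup rate (KB/s)');
```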
No problem, and makes sense.

For excluding the filter file from the build, it would help to know the exact combo of tools that you are using so that I can set up a repro. Are you using SQL Source Control for the state-first version control tie-in, and then using SCA to deploy? Or are you using SQL Change Automation for the whole thing (migrations-first approach to version control)?

My memory of testing this was actually that you had to specify the filter path on the build to use a filter file at all (and that it didn't carry through filters set up in source control), but it may be that I'm misremembering or was using a different product combo than you are, so I'd like to re-check real fast with your setup.
Hi JTR,

Is it possible for you to put the temporary objects into their own schema, and to have your filter file ignore that schema?

I've used this approach in the past for objects like that which needed to be in the database itself rather than tempdb, due to needing to persist through a restart, etc. Putting them in their own schema made it clear that they didn't need to be checked into source control, made them easy for operations folks to identify, and simplified our cleanup scripts should some of them be orphaned.

I am personally more of a fan of filter files than of -IgnoreAdditional, because with a filter file pattern you're explicitly saying where you don't mind drift, while -IgnoreAdditional could allow unintended drift to creep in.

Kendra
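As a sketch of the pattern I mean (all the names here are hypothetical, just for illustration):

```sql
-- Keep persistent "work" objects in a dedicated schema so a filter
-- file can exclude the whole schema from comparisons.
IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = N'scratch')
    EXEC (N'CREATE SCHEMA scratch AUTHORIZATION dbo;');
GO

-- Objects here persist through restarts (unlike tempdb) but are easy
-- to spot, exclude from source control, and clean up if orphaned.
CREATE TABLE scratch.OrderBackfill_Work (
    OrderId     int       NOT NULL PRIMARY KEY,
    ProcessedAt datetime2 NULL
);
```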
Hello!

Great question. One quick note: you're in the "discontinued and previous versions" section under DLM Automation; I am not sure if you meant to be in the SQL Change Automation section?

With the migrations-first approach in SQL Change Automation, you are able to put multiple projects in a single Visual Studio solution if you wish. However, personally, my experience is that it's valuable and important to decouple your database changes from your application changes when it comes to deployment. A simplified model is:
First, deploy initial database changes to "stage" the scene, but do not change customer-facing functionality. Build and deploy these independently, and verify they are in place. (Example: add a column that will be used later, but isn't used yet.) This may be multiple changes rolled through the pipeline over time.
Later, deploy application changes that add the feature, but include a "feature flag" which determines whether the feature is visible to customers or not (see the sketch after this list). Again, this may be multiple changes.
Eventually, turn on the feature flag and expose the feature to customers.
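To make the feature-flag idea concrete, here's a minimal sketch; the table and feature names are hypothetical, not anything generated by the tooling:

```sql
-- Hypothetical feature-flag table; names are illustrative only.
CREATE TABLE dbo.FeatureFlag (
    FeatureName sysname NOT NULL PRIMARY KEY,
    IsEnabled   bit     NOT NULL CONSTRAINT DF_FeatureFlag_Enabled DEFAULT (0)
);
GO

-- Deployed alongside the application changes: the flag starts off, so
-- nothing is customer-visible yet. Rerunnable thanks to the check.
IF NOT EXISTS (SELECT 1 FROM dbo.FeatureFlag WHERE FeatureName = N'LoyaltyTiers')
    INSERT dbo.FeatureFlag (FeatureName, IsEnabled) VALUES (N'LoyaltyTiers', 0);
GO

-- Later, a tiny standalone change exposes the feature -- and flipping
-- it back to 0 is the rollback.
UPDATE dbo.FeatureFlag SET IsEnabled = 1 WHERE FeatureName = N'LoyaltyTiers';
```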
There are a lot of benefits to decoupling the changes this way: if you need to do a rollback, it's quite easy (simply turn the feature flag off), and staging things incrementally lets you make small, low-risk changes.

Is your question also about building databases with cross-database dependencies, such as a view in one database which references external databases? If so, I can speak to some approaches for that as well, I'm just not sure about the scope of your question.

Hope this helps,

Kendra