Comments
Hi Gill,
The shadow database for SQL Change Automation doesn't get automatically dropped. If you wish to drop the database manually, that's fine -- it will be recreated when it's needed. We don't drop it automatically because keeping it in place can speed up the verification process, sometimes significantly. If you don't want to drop the database manually, you do have a couple of options for maintaining it:
You could place the shadow database elsewhere -- even on a non-Azure SQL Database instance, if you wish. Note that this only works realistically if you're not using T-SQL that compiles or executes successfully only against Azure SQL Database (for example, a pre-deployment script which creates External Tables). To configure the shadow database location, edit your user settings file.
You could use the Serverless tier for your shadow database if you still wish to use an Azure SQL Database for verification but don't want to be billed for idle time. To do this, edit the CreateDatabase.sql script in the Provisioning folder and specify the SERVICE_OBJECTIVE you wish to use in the script. Please note that when you use the Serverless tier, it is normal for SQL Server to throw an error while a paused database resumes.
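For illustration, here is a minimal sketch of what the edited CreateDatabase.sql might look like -- the database name and the 'GP_S_Gen5_1' objective are placeholders, so keep whatever name your generated script already uses and pick the Serverless service objective that fits your workload:

-- CreateDatabase.sql (Provisioning folder) -- illustrative sketch
-- 'GP_S_Gen5_1' is a Serverless objective (General Purpose, Gen5, 1 vCore max)
CREATE DATABASE [MyProject_SHADOW]
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_S_Gen5_1'
);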
Hope this helps,
Kendra
Hi @GurpreetGill,
I might handle this in different ways, depending on how frequently I expect this pattern to come up in the course of my workflow.
If this may happen frequently, disabling the programmable object feature for the project and using a "versioned" migrations-only approach is likely the best option. There are two steps: 1) set the "ProgrammableObjectHandling" setting to ScriptInMigrations, and 2) modify the "SyncToOfflineSchemaModelObjectTypes" setting so that it includes programmable objects in the offline schema model. A sketch of both settings follows.
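As a rough sketch of how those two settings might look -- assuming they sit in a PropertyGroup in your .sqlproj, and noting that the object-type list shown is illustrative (check the SQL Change Automation project settings reference for the exact tokens):

<PropertyGroup>
  <!-- 1) Script programmable objects into versioned migrations instead of the PO folder -->
  <ProgrammableObjectHandling>ScriptInMigrations</ProgrammableObjectHandling>
  <!-- 2) Illustrative value: include programmable object types in the offline schema model -->
  <SyncToOfflineSchemaModelObjectTypes>Tables;Views;StoredProcedures;Functions</SyncToOfflineSchemaModelObjectTypes>
</PropertyGroup>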
If this is a rare occurrence, the simplest option is to split the deployment in two -- deploy the PO in one deployment, then the migration in the next.
If this is a rare occurrence and splitting the deployment in two is an issue, you could also leave POs enabled and manually create a migration script containing the same PO for ordering purposes -- though you would then of course need to manually validate that the two copies are in sync for that deployment. A sketch of such a migration script is below.
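As a sketch of that last approach (the procedure name and body are hypothetical, and the migration header that SQL Change Automation generates is omitted), the versioned migration would simply repeat the PO's definition so that it runs at the right point in the sequence:

-- Versioned migration script (SCA-generated migration header omitted)
-- dbo.OrderSummary is a hypothetical PO; keep this identical to the copy
-- in the Programmable Objects folder for this deployment
CREATE OR ALTER PROCEDURE dbo.OrderSummary
AS
BEGIN
    SELECT OrderId, Amount
    FROM dbo.Orders;
END;
GO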
Essentially, the Programmable Object feature does generally make ordering changes simpler, but for the scenario you mentioned it actually makes things a bit harder. For that reason we support disabling the feature for workflows where it may not be a good fit.
Hope this helps,
Kendra
Hi,
We don't have plans to change this on our roadmap at this time. Do you have a specific use case behind this question, or a particular scenario this would enable you to implement? Understanding that need is always useful to us in our planning.
Thanks,
Kendra
Hi @pkettlew,
We looked into this, and I believe we had phrased this in the documentation in a confusing way. I think the following sentence was the source of the confusion:
By default, the shadow database will be created on the same SQL Server Instance as the development database with the name [project]_$(username)_SHADOW.
This was meant to explain the naming convention used behind the scenes, with a placeholder for the user name -- not to imply that you can use the variable when customizing the shadow database connection string. I have updated the documentation page to make how this works clearer. I changed the sentence quoted above to "[project]_[YourUserName]_SHADOW" so that it doesn't resemble an environment variable. I also reworded a little of the info below it, which may prove useful to you (the second bullet point is new -- we previously had that on a linked page and not here -- and the third is reworded for clarity).
You can change where the shadow database is created by editing the user settings file for your project (a sketch of the file follows this list):
You may need to create the file if it doesn't exist; make sure it follows the naming convention '<YourProjectName>.sqlproj.user'
This user file contains settings specific to your local environment which shouldn't be committed to version control – see our list of files to be excluded from version control
When using a database as a development source, you may choose to omit the "Initial Catalog" portion of the ShadowConnectionString. In this case the shadow database name will continue to be derived from the development database name and the shadow database will change if the development database is changed.
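As a minimal sketch of what that user settings file might contain -- the server name is a placeholder and the MSBuild wrapper shown here is an assumption; ShadowConnectionString is the setting described above:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Placeholder server; Initial Catalog omitted so the shadow name is still derived from the development database -->
    <ShadowConnectionString>Data Source=MyDevServer;Integrated Security=True</ShadowConnectionString>
  </PropertyGroup>
</Project>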
I hope this helps, and thanks very much for bringing this to our attention.
Kendra
Hi @Kris_Lawton,
I am a little confused about what type of pipeline we are talking about. Is this a YAML pipeline, or one of the classic pipeline types? If it is a YAML pipeline, the build task publishes the build artifact to the pipeline, so you can always pull it down to the local agent with:
- task: DownloadPipelineArtifact@2
  displayName: Download build artifact
  inputs:
    buildType: 'current'
    artifactName: 'Database Build Artifact'
    targetPath: '$(System.DefaultWorkingDirectory)'
I am not clear why this would have suddenly changed in your pipeline, though.
Hi,
We cover the answer to that in this documentation page: https://documentation.red-gate.com/sca/developing-databases/concepts/baseline
Hope this helps,
Kendra
Hi @pkettlew,
Currently, External Tables are not supported by the SQL Compare Engine, and they are not scripted out.
External Tables / External Data Sources / External File Formats are the main things that have come up recently which are not scripted.
The main categories of other things which aren't scripted are database configuration properties and settings, and instance-level objects such as SQL Server Agent jobs and linked servers.
There are some "dependency chain" scenarios which cause some objects to script as "unsupported", but that is a slightly different situation.
Work to add support for External Tables is currently ongoing, and we expect it to ship this quarter (Q1 2021) in SQL Compare and its related tools (including SQL Change Automation). Creating External Tables in a pre-deployment script is the current workaround.
Kendra
Hi,
The definition of the external tables and anything they depend on (i.e. the external data source) would need to be set up by the pre-script. Things that reference an external table -- views, procedures, etc. -- do not need to be modified in the baseline script (the baseline script runs *after* the pre-deployment script). A minimal sketch of such a pre-script is below.
Hope this helps,
Kendra
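As a minimal sketch of such a pre-deployment script -- every name below is hypothetical, and it assumes the database scoped credential it references already exists:

-- Create the external data source and external table if they are missing
IF NOT EXISTS (SELECT 1 FROM sys.external_data_sources WHERE name = 'RemoteSource')
    CREATE EXTERNAL DATA SOURCE RemoteSource
    WITH
    (
        TYPE = RDBMS,
        LOCATION = 'myserver.database.windows.net',
        DATABASE_NAME = 'RemoteDb',
        CREDENTIAL = MyCredential
    );
GO

IF OBJECT_ID('dbo.RemoteOrders') IS NULL
    CREATE EXTERNAL TABLE dbo.RemoteOrders
    (
        OrderId int NOT NULL,
        Amount  decimal(10, 2) NOT NULL
    )
    WITH
    (
        DATA_SOURCE = RemoteSource,
        SCHEMA_NAME = 'dbo',
        OBJECT_NAME = 'Orders'
    );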
Hi @pkettlew,
The shadow database is where scripts are run locally -- more information on it is here: https://documentation.red-gate.com/sca/developing-databases/concepts/shadow-database
Cheers,
Kendra