Comments
Hi @gail123,

We are looking to improve this functionality and make deploying from SQL Source Control via migrations much smoother. Would you be open to me emailing you about possibly participating in a preview of a new tool?

Cheers,
Kendra
Hi @SimonBarnes,

I suspect that this is possible to update by:
Unlinking the static data
Committing
Relinking the static data
Did you happen to test that way, with a commit between the unlink and the relink? (I haven't tested this exact scenario, but in working with other scenarios I have found that pattern causes the static data file to be regenerated.)

Kendra
@LouisSomers thank you so much for following up and posting the solution!
Hi @LouisSomers,

Does the user account which the build is running under have db_owner permission for the database? Here is our guidance on permissions for the build with the different options: https://productsupport.red-gate.com/hc/en-us/articles/360016293733-SQL-Change-Automation-Powershell-permissions (A quick sketch of granting that permission follows below.)

Hope this helps,
Kendra
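If you need to grant it, a minimal T-SQL sketch, assuming the build connects as a login I'll call BuildAgent (both names below are placeholders, not from your setup):

```sql
-- Sketch only: grant db_owner to the build account.
-- [BuildAgent] and [MyBuildDb] are placeholder names.
USE [MyBuildDb];

-- Create a database user for the build login if one doesn't exist yet.
CREATE USER [BuildAgent] FOR LOGIN [BuildAgent];

-- Add that user to the db_owner role.
ALTER ROLE db_owner ADD MEMBER [BuildAgent];
```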
Hi BlueBSH,

I am guessing this is a build for a SQL Source Control project in SQL Change Automation. I suspect that the way to solve this is to filter the function and handle it in a pre-deployment script, but it's hard for me to say without seeing the exact schema of the objects involved. If you'd prefer to share the schema privately rather than in a public forum, I'd encourage you to open a support ticket so we can help with more detail.

Kendra
Hi Gill,

The shadow database for SQL Change Automation doesn't get dropped automatically. If you wish to drop the database manually, that's OK: it will be recreated when it's needed. We don't drop it automatically because keeping it in place can speed up the process of verifying -- sometimes significantly. If you don't want to drop it manually, you have a couple of options for maintaining the database:
You could place the shadow database elsewhere -- even on a non-Azure SQL Database instance, if you wish. Please note that this only realistically works if you're not using T-SQL that compiles or executes successfully only against Azure SQL Database (for example, a pre-deployment script which creates External Tables). To configure the shadow database location, edit your user settings file.
You could use the Serverless tier for your shadow database if you still want to use an Azure SQL Database for verification but don't want to be billed for idle time. This can be done by editing the CreateDatabase.sql script in the Provisioning folder and specifying the SERVICE_OBJECTIVE you wish to use (a sketch follows below). Please note that when you use the Serverless tier, it is normal for SQL Server to throw an error when a paused database is resumed.
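For example, the CREATE DATABASE statement in that script could be adjusted along these lines. This is a sketch only: the database name and the GP_S_Gen5_1 service objective are illustrative, so check which serverless service objectives are available on your server:

```sql
-- Sketch: create the shadow database on the serverless tier so idle time isn't billed.
-- The database name and service objective below are illustrative placeholders.
CREATE DATABASE [MyProject_SHADOW]
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_S_Gen5_1' -- General Purpose, serverless, Gen5, 1 max vCore
);
```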
Hope this helps,
Kendra
Hi @GurpreetGill,

I might handle this in different ways, depending on how frequently I expect this pattern to happen in the course of my workflow.
If this may happen frequently, disabling the programmable object feature for the project and using a "versioned" migrations-only approach is likely the best option. If you would like to do this, there are two steps: 1) set the "ProgrammableObjectHandling" setting to ScriptInMigrations, and 2) modify the "SyncToOfflineSchemaModelObjectTypes" setting so that it includes programmable objects in the offline schema model. (A sketch of both settings follows this list.)
If this is a rare occurrence, the simplest option is to split the deployment in two -- deploy the PO first in one deployment and then the migration in the next deployment.
If this is a rare occurrence and splitting the deployments into two is an issue, one could also leave POs enabled and manually create a migration script with the same PO in it for ordering purposes, but manual validation that these are in sync for that deployment is of course needed.
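For the first option, both settings live in the project file. A minimal sketch of what that could look like in the .sqlproj (the object-type list here is an illustrative assumption -- check the project settings documentation for the exact values your project needs):

```xml
<!-- Sketch only: script programmable objects in versioned migrations, and include
     them when syncing to the offline schema model. The object-type list below is
     an illustrative assumption, not a definitive value. -->
<PropertyGroup>
  <ProgrammableObjectHandling>ScriptInMigrations</ProgrammableObjectHandling>
  <SyncToOfflineSchemaModelObjectTypes>Tables;Views;StoredProcedures;Functions</SyncToOfflineSchemaModelObjectTypes>
</PropertyGroup>
```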
Essentially, the Programmable Object feature does generally make ordering changes simpler, but for the scenario you mentioned it actually makes things a bit difficult. For that reason we support disabling the feature for workflows where it may not be a good fit.

Hope this helps,
Kendra
Hi,

We don't have plans to change this on our roadmap at this time. Do you have a specific use case which is causing you to ask this question, or a particular scenario that this would enable you to implement? Understanding that need is always useful to us in our planning.

Thanks,
Kendra
Hi @pkettlew,

We looked into this, and I believe we had phrased this in a confusing way in the documentation. I think the following sentence was the source of the confusion:

"By default, the shadow database will be created on the same SQL Server Instance as the development database with the name [project]_$(username)_SHADOW."
This was meant to explain the naming convention used behind the scenes, with a placeholder for the user name -- not to imply that you could use the variable when customizing the shadow database connection string. I have updated the documentation page to make how this works clearer. I changed the sentence quoted above to use "[project]_[YourUserName]_SHADOW" so that it doesn't resemble an environment variable. I also reworded some of the info below it which may prove useful to you (the second bullet point is new -- we previously had that on a linked page and not here -- and the third is reworded for clarity).
You can change where the shadow database is created by editing the user settings file for your project (a sketch of the file follows this list):
You may need to create the file if it doesn't exist; make sure it follows the naming convention '<YourProjectName>.sqlproj.user'
This user file contains settings specific to your local environment which shouldn't be committed to version control – see our list of files to be excluded from version control
When using a database as a development source, you may choose to omit the "Initial Catalog" portion of the ShadowConnectionString. In this case the shadow database name will continue to be derived from the development database name and the shadow database will change if the development database is changed.
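As a sketch, the user settings file is MSBuild-style XML; something along these lines sets the shadow location (the Data Source value here is just an example, and Initial Catalog is omitted so the shadow database name stays derived from the development database):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- <YourProjectName>.sqlproj.user: local-only settings, kept out of version control.
     The Data Source value below is an example placeholder. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <ShadowConnectionString>Data Source=MyDevServer;Integrated Security=True</ShadowConnectionString>
  </PropertyGroup>
</Project>
```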
I hope this helps, and thanks very much for bringing this to our attention.

Kendra