Comments
Hi tee,

I suspect that the error you are seeing comes from a view or views which reference the linked server. A quick summary of why this is happening: when you run a verify in SCA in Visual Studio, or run a build with SCA, it checks that all objects can be created successfully in SQL Server. Stored procedures enjoy a feature called "deferred name resolution", which means that a stored procedure can reference objects that don't exist at the time the procedure is created. Views and some limited types of functions don't have this functionality, so when SQL Server tries to create those objects, it checks that everything they reference exists, including the linked server. There are a couple of ways you can resolve this:
Create a synonym that points to the linked server resource, and modify the views and any impacted functions to refer to the synonym. This is a great long-term solution, because you may well want to validate code that goes across the linked server in other environments. You can "re-point" the synonym in each environment to whatever you want, which gives you a lot of flexibility. Synonyms DO have deferred name resolution, so this also means you don't actually have to create the remote resource if you don't want to.
Filter out the views / functions referencing the linked server from the project.
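As a minimal sketch of the synonym approach in option 1 -- the linked server [ReportingServer], database RemoteDb, and the table/view names are all hypothetical placeholders for your own objects:

```sql
-- Synonyms get deferred name resolution, so this succeeds even on an
-- environment where the linked server doesn't exist.
CREATE SYNONYM dbo.RemoteSalesSummary
    FOR [ReportingServer].RemoteDb.dbo.SalesSummary;
GO

-- The view now references the synonym instead of the four-part name,
-- so creating it no longer requires the linked server to be reachable.
CREATE VIEW dbo.vSalesSummary
AS
SELECT SaleDate, Amount
FROM dbo.RemoteSalesSummary;
GO

-- In another environment, drop and re-create the synonym to point at
-- something else (e.g. a local stub table) without touching the view.
```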
Hope this helps! Kendra
Pre- and post-deployment scripts are very similar to migration scripts; they are just executed on every deployment. I think this would be very similar to your current script -- you would just have a "guard clause" detecting the conditions under which it should run. You can have multiple pre- or post-deployment scripts, each with an individual name, to help you organize and manage the code long term. Depending on the complexity of what you are doing, that might or might not be useful.
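As a sketch of what a guard clause can look like in a post-deployment script -- the table dbo.RefStatusCodes and the condition here are hypothetical; adapt them to whatever signals that the script's work still needs to happen:

```sql
-- Guard clause: only run the body when the work hasn't been done yet,
-- since pre-/post-deployment scripts execute on every deployment.
IF NOT EXISTS (SELECT 1 FROM dbo.RefStatusCodes WHERE Code = 'CLOSED')
BEGIN
    PRINT 'Seeding reference status codes...';
    INSERT dbo.RefStatusCodes (Code, Description)
    VALUES ('OPEN', 'Open'),
           ('CLOSED', 'Closed');
END
ELSE
    PRINT 'Reference status codes already present; nothing to do.';
```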
Hi there,

Pre- and post-deployment scripts are run on each deployment. Is it possible for you to write the seed-data-reload script in a way that it detects whether you want to reload the seed data (in case there are some exceptions when you would not want to do so in a deployment), and then put that into a pre- or post-deployment script? Regarding the "delete" operation, I'm not sure what you mean in option one about the risk of loading seed data multiple times? Are you doing something like a truncate before you do the bulk insert? If you are doing an actual DELETE operation, is there a reason you couldn't TRUNCATE? I ask because DELETE logs every row, which can be slow. TRUNCATE is also logged, but it's much faster because it simply deallocates the data pages in the background rather than removing rows one by one. Just curious if you could optimize this in some way, no matter which way you are running it.

Kendra
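A minimal sketch of the TRUNCATE-then-reload pattern in a post-deployment script -- dbo.SeedLookup and its rows are hypothetical placeholders. Note that TRUNCATE requires that no foreign keys reference the table:

```sql
-- Empty the table by deallocating its pages: far less logging than a
-- row-by-row DELETE, and it leaves no old rows to duplicate on reload.
TRUNCATE TABLE dbo.SeedLookup;

-- Reload the seed data. Running this script on every deployment is now
-- safe, because the table is always emptied first.
INSERT dbo.SeedLookup (Id, Name)
VALUES (1, N'Alpha'),
       (2, N'Beta'),
       (3, N'Gamma');
```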
Hi Albert,

To clarify, is your requirement to prevent that single migration from deploying *after* having created the Release Artifact?

Kendra
Hi there,

What kind of problems are you encountering with large tables? Since we support several ways of working, it would be helpful to know the extension you're using to author the changes (SQL Change Automation for Visual Studio, SQL Change Automation for SSMS, or SQL Source Control), as well as the scenario where you're hitting the problem.

Kendra
Hiya,

I think there may have been some confusion in the previous answer. We do support connecting to Git and TFVC repos -- these are the types of repos hosted by Azure DevOps Server as well as Azure DevOps Services. The "Express" edition of Azure DevOps Server doesn't change the behavior of these repos, so it should be supported like any other Git or TFVC repo. Like Microsoft, we recommend Git repos for new projects; more info is here: https://www.red-gate.com/blog/why-to-use-git-instead-of-tfs-tfvc

Cheers,
Kendra
Hi Albert, Quick check-- are you saying that you have already created a pre-deployment script which configures the filegroup automatically for a new database? I'm not completely sure where you are saying that the filegroup 'ftg_ft' exists. Some more details on how this works:
The error message you shared (thanks for including that!) mentions Catalog=dlmautomation_14501d96-5ec6-4784-952c-70b6b73d496e;
That means that you have the build configured without a hardcoded database name. During the build process, it creates a database on the instance where you have the build specified, uses it for the build, and then cleans up after.
That new database will be created based on the model database of the SQL Server instance where you are building -- so if model only has a primary filegroup, then the build database will only have a primary filegroup.
If you would like this dynamically created database (and other databases you deploy to later in your pipeline) to have additional filegroups, you can check for existence of the filegroup and configure it in a pre-deployment script -- this page has sample code.
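The steps above can be sketched in a pre-deployment script like this. The filegroup name 'ftg_ft' comes from your error message; the file path, logical file name, and sizes are hypothetical placeholders. Dynamic SQL is used because the build database name (dlmautomation_GUID) is generated at build time:

```sql
-- Create the filegroup and a data file for it only if they don't exist,
-- so the script is safe to run on every deployment and on the
-- dynamically named build database.
IF NOT EXISTS (SELECT 1 FROM sys.filegroups WHERE name = N'ftg_ft')
BEGIN
    DECLARE @sql nvarchar(max);

    SET @sql = N'ALTER DATABASE ' + QUOTENAME(DB_NAME())
             + N' ADD FILEGROUP ftg_ft;';
    EXEC (@sql);

    SET @sql = N'ALTER DATABASE ' + QUOTENAME(DB_NAME())
             + N' ADD FILE (NAME = N''ftg_ft_data'', FILENAME = N''D:\Data\'
             + DB_NAME() + N'_ftg_ft.ndf'', SIZE = 64MB, FILEGROWTH = 64MB)'
             + N' TO FILEGROUP ftg_ft;';
    EXEC (@sql);
END
```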
Hope this helps! Kendra
Hi Joshua,

SCA does contain logic to build the views after their dependencies. The only time I have run into this issue with building is when the dependent objects are not present in the exact database you are building -- either because they were dropped or because they are in a different database that isn't present on the build server. Could that be the case here? The error log should mention the specific object names to track down. There are ways to handle it; it just depends on exactly where the dependent objects are.

Kendra
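To illustrate the cross-database case -- the names OtherDb, dbo.Customers, and dbo.Orders here are hypothetical:

```sql
-- This view builds fine on a server where OtherDb exists, but fails on
-- a build server without OtherDb: views don't get deferred name
-- resolution, so the cross-database reference is checked at CREATE time.
CREATE VIEW dbo.vCustomerOrders
AS
SELECT c.CustomerId, o.OrderId
FROM dbo.Customers AS c
JOIN OtherDb.dbo.Orders AS o
  ON o.CustomerId = c.CustomerId;
```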
The videos are presently hosted on YouTube. Could something be blocking you from viewing them in your location? I just spot-checked a couple of the SCA courses and the videos look OK to me, so I suspect it's either a YouTube issue with your connection, or I'm not looking at the same course/lesson combo that you are.