Comments
Thank you, @Monday. It took me a while to find time to dig into this. I had never looked into the actual files within the .nupkg file. Now, thanks to you, I do see db\output\PackageScript.sql. I also see that it has ALL of the migrations, including the baseline, and will intelligently apply them to whatever DB I set in the sqlcmd var DatabaseName. There are a couple of issues with this file (in its raw form) that I will need to resolve before I can pass it to the ops DBAs for execution in SSMS via SQLCMD mode: 1) I'll need to uncomment the sqlcmd vars section. 2) I'll need to manually set the ReleaseVersion and PackageVersion. Not sure why the PackageVersion isn't set by the New-DatabaseBuildArtifact cmdlet, since we are passing that in. A little PowerShell will probably take care of the above concerns, but it would be nice to have options during build to extract this file with these taken care of. UPDATE: I am also able to get the PackageScript.sql content from the PackageScript property of the BuildArtifact object returned by Invoke-DatabaseBuild. This way I can skip the export step and just save this to a file and process it.
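A minimal PowerShell sketch of that UPDATE approach (assumes the SqlChangeAutomation module is installed; the project path, version value, and the `-- :setvar` comment style in the generated script are placeholders/assumptions — adjust to match your actual file):

```powershell
# Build the project and keep the build artifact in memory.
$project = "C:\Work\MyDatabase\MyDatabase.sqlproj"   # placeholder path
$buildArtifact = Invoke-DatabaseBuild $project

# Grab the deployment script directly, skipping the on-disk export step.
$script = $buildArtifact.PackageScript

# Uncomment the :setvar lines (assumes they are commented out with a
# leading "--") and pin ReleaseVersion before handing off to the DBAs.
$script = $script -replace '--\s*(:setvar\s+\w+)', '$1'
$script = $script -replace '(:setvar\s+ReleaseVersion\s+)"[^"]*"', '$1"1.2.0"'

Set-Content -Path "PackageScript.sql" -Value $script
```

The DBAs can then run the saved file in SSMS in SQLCMD mode without any manual editing.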
Thanks, @Monday. So, you're not getting the server name check code section in your TargetedDeploymentScript.sql? I always get it, and I don't have any special drift settings set (all defaults). Is there a different way to run New-DatabaseReleaseArtifact that will generate a TargetedDeploymentScript.sql without the check? I still need to pass a single SQL script (it can be SQLCMD) to the ops DBAs that will deploy the latest changes (to multiple target DBs/servers). They are not yet willing to run a PS script. I'm trying to avoid using the MSBuild-generated scripts, too, but willing to consider them if that seems like the right approach.
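For reference, the release-artifact flow under discussion looks roughly like this (server/database names and paths are placeholders; the SqlChangeAutomation module is assumed):

```powershell
# Build the project, then create a release artifact targeted at a database.
$buildArtifact = Invoke-DatabaseBuild "C:\Work\MyDatabase\MyDatabase.sqlproj"
$target = New-DatabaseConnection -ServerInstance "UATSQL01" -Database "MyDatabase"
$release = New-DatabaseReleaseArtifact -Source $buildArtifact -Target $target

# Export writes the deployment files, including TargetedDeploymentScript.sql,
# to the given folder for hand-off to the DBAs.
Export-DatabaseReleaseArtifact $release -Path "C:\Releases\MyDatabase" -Format Folder
```

Because the release artifact is generated against a specific target, the exported script carries the server-name guard discussed above.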
Thank you, @Russell D. I'm using SCA. I've read the SCA docs on data, but that didn't quite get me there. I want to use a WHERE clause filter in the SCA project, like I can do inside SQL Data Compare.
@Mac_F, thank you for the reply. However, that does not answer my question. Please see my previous post for clarification.
Follow-up: the initial prod deployment was a hack (SSMS-generated DDL and DML scripts), but I did include the __MigrationLog table and data. Once I cleared up my SCA deployment issues, I have been able to pass the TargetedDeploymentScript.sql generated by New-DatabaseReleaseArtifact and Export-DatabaseReleaseArtifact, with the server name check removed, to the ops DBAs. They have been willing to execute that manually in SSMS in SQLCMD mode. This is working as an intermediate process until I can socialize deployment via CI/CD tooling.
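A rough PowerShell post-processing step for commenting out the server-name check (the `@@SERVERNAME` and `GO` markers are assumptions about how the guard block is delimited in the generated script; inspect your actual file and adjust the patterns accordingly):

```powershell
$path = "C:\Releases\MyDatabase\TargetedDeploymentScript.sql"
$lines = Get-Content $path
$inGuard = $false
$out = foreach ($line in $lines) {
    # Hypothetical markers: comment out everything from the line containing
    # the @@SERVERNAME check through the batch terminator that closes it.
    if ($line -match '@@SERVERNAME') { $inGuard = $true }
    if ($inGuard) { "-- $line" } else { $line }
    if ($inGuard -and $line -match '^\s*GO\s*$') { $inGuard = $false }
}
Set-Content -Path $path -Value $out
```

Automating this removes the manual edit from the hand-off, so the DBAs always get one script that runs on all of their servers.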
I agree with what @Eddie D said. To add onto that, I would recommend a production db as the "target" when creating the project, if you already have a production db. I had some cases where we already had a shared dev db, but not a prod db. For these, I believe the best technique is to not set the "target db" during project creation. Set the development db to your own local, empty db. Then, connect to the shared dev db and import the existing objects as your first migration (rather than as a baseline). If you do set the shared dev db as the target during project creation, you can run into deployment challenges down the line related to the "baseline" script if the deployment target is not completely empty (e.g. if you have any users or standard objects in your model db). In that case, I have been able to resolve it by removing those objects, or by: 1) unchecking "mark first folder as baseline" in the project settings, and 2) adding <SkipBaselineCheck>True</SkipBaselineCheck> to the project file XML property group.
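For reference, the property group addition mentioned in step 2 would look something like this in the project file (whether you add it to an existing unconditional `<PropertyGroup>` or a new one is up to you; only the element itself comes from the workaround above):

```xml
<PropertyGroup>
  <SkipBaselineCheck>True</SkipBaselineCheck>
</PropertyGroup>
```

With this set, deployment no longer fails when the target isn't completely empty and the baseline script would otherwise be marked as already deployed.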
Follow-up: since I included the __MigrationLog table and its data as part of the initial, non-SCA-generated deployment to our "higher" (ops DBA controlled) environments, I was able to introduce the new idea of using a SQLCMD script to the ops DBAs. The second deployment was the addition of a single new migration in a TargetedDeploymentScript.sql generated by New-DatabaseReleaseArtifact. I manually modified it to comment out the target server check, so they would have one script (currently a requirement) for all of their servers. They successfully ran that on the UAT and load test servers, so we're at least gaining the benefit of SCA-generated deployment scripts and tracking in the __MigrationLog table. Concurrently, I've had some meetings to introduce the idea of using PowerShell to deploy via the Use-DatabaseReleaseArtifact cmdlet. That would give us the added benefit of snapshots, drift checks, etc. as an intermediate step. I've also shown them a taste of my POC using SCA, Bamboo, and Octopus Deploy, but my sense is we're months (or longer) away from getting buy-in for that move. Thank you for your help, Steve. Honored to have a celebrity work with me in the forums. -Peter
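The PowerShell deployment path mentioned above would look roughly like this (server/database names and the artifact path are placeholders; assumes the SqlChangeAutomation module):

```powershell
# Import the release artifact the DBAs would otherwise run manually,
# then deploy it to a target connection.
$release = Import-DatabaseReleaseArtifact -Path "C:\Releases\MyDatabase"
$target  = New-DatabaseConnection -ServerInstance "UATSQL01" -Database "MyDatabase"

# Use-DatabaseReleaseArtifact performs the pre-deployment checks
# (drift detection against the snapshot, etc.) before applying the update.
Use-DatabaseReleaseArtifact $release -DeployTo $target
```

Compared with running the exported script in SSMS, this path keeps the drift check and snapshot benefits without yet requiring a full CI/CD pipeline.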
Thank you, @Mike U! I think this falls under the umbrella of needing to update the docs. I'd be willing to help you and the team with this process. Cheers. -Peter
Thanks, @way0utwest. I've discovered that my SQL 2014 model database had my domain account as a db user, so SCA was seeing my "empty" target dbs as not empty, and therefore marking the baseline script as already deployed. This made the project error on deployment. Not sure how the user got into the model db - perhaps during instance install, because I definitely didn't put it there. Thus, I used the techniques described above to ensure that the baseline script gets deployed. The other option was to remove that user, after which SCA saw the target as empty. Since I don't know exactly how SCA determines "empty", I'm keeping the skip property in place for now. In other news, I ended up just manually scripting the DDL and DML to get the project delivered to the ops DBAs, including the rows in the __MigrationLog table. Hoping I can return to SCA-based deployment artifacts again.