Comments
Hi @Russell D. To clarify, the challenge is that the client IP is NOT on both lines. See image below (IP address has been fuzzed out). Thank you. [image]
The client IP is always the same, so that's helpful. I can filter those items. It's the additional log entry with (only) the actual error #, severity, and state that is problematic.
Thank you, @Russell D. That is helpful. I'm not sure it will fully fulfill the ideal requirement, though. I want to filter out log entries with any of a list of 4 error numbers, but only when that log entry is followed by one with the [CLIENT: ###.###.###.###] tag. Does the filtering allow for that? Seems like it would be difficult to implement.

On a side note, for now I am just not raising a medium alert unless the severity is >= 21 (it was 20). That keeps us from getting spammed when the scanner runs, but does not give us alerts from "real" error log entries that are sev 20.
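A minimal sketch of the two-line filter described above, in Python. This is purely illustrative: the error numbers, log text, and regexes here are placeholders, not the actual scanner signatures or SQL Monitor internals.

```python
import re

# Hypothetical error numbers to suppress (stand-ins for the real list of 4).
SUPPRESS_ERRORS = {18456, 17828, 17832, 17836}

ERROR_RE = re.compile(r"Error:\s*(\d+),\s*Severity:\s*(\d+),\s*State:\s*(\d+)")
CLIENT_RE = re.compile(r"\[CLIENT:\s*[\d.]+\]")

def filter_log(lines):
    """Drop an error entry only when the NEXT entry carries a [CLIENT: ...] tag."""
    out = []
    i = 0
    while i < len(lines):
        m = ERROR_RE.search(lines[i])
        if (m and int(m.group(1)) in SUPPRESS_ERRORS
                and i + 1 < len(lines) and CLIENT_RE.search(lines[i + 1])):
            i += 2  # skip the error line and its paired [CLIENT: ...] line
            continue
        out.append(lines[i])
        i += 1
    return out
```

The key point is the lookahead: a suppressed error number alone is not enough to drop the line; the decision also depends on the entry that follows it, which is exactly what makes this awkward to express in a per-line alert filter.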
Revisiting this one... I thought perhaps the SCA cmdlet Sync-DatabaseSchema might provide some additional capabilities to avoid dropping additional columns in the target DB, using -IgnoreAdditional. The docs for that parameter say: "When SQL Change Automation performs a sync operation, by default it will drop all additional objects in the target database. If you specify this parameter, additional objects will be ignored." However, when I tried it in a test, it still wants to drop any additional columns in target tables (see below). It was a nice try, though. [image]

WARNING: (High) This deployment drops the column(s) [c2] from table [dbo].[T1]. Data in these column(s) will be lost unless additional steps are taken to preserve it.
Sync-DatabaseSchema : There are warnings that have caused the operation to abort:
(High) This deployment drops the column(s) [c2] from table [dbo].[T1]. Data in these column(s) will be lost unless additional steps are taken to preserve it.
To force the operation to succeed regardless of warnings, set the 'AbortOnWarningLevel' parameter to None.
At C:\Users\Peter.Daniels\Documents\WindowsPowerShell\Scripts\POC\SCA\SyncWithoutDroppingNewColumns.ps1:31 char:1
+ Sync-DatabaseSchema -Source $srcDB -Target $trgDB -IgnoreAdditional
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidData: (database 'trgDB' on server 'localhost':DatabaseConnection) [Sync-DatabaseSchema], TerminatingException
    + FullyQualifiedErrorId : AbortOnWarnings,RedGate.Versioning.Automation.PowerShell.Commands.SyncDatabaseSchemaCommand
Yeah - I do understand that, David. They're "minor objects" and currently outside (or actually inside) the scope/grain of the Additional/Missing/Different objects config. I'd still like to see that functionality someday.
That looks like a solid solution, @AlexYates - certainly a direction to strive towards. I appreciate your thoughtful consideration. Now to see if I can teach the devs about branching and maybe even get them to consider git. [image]
Thanks, master Yates @AlexYates. Nice to hear you chime in. Hoping to move this client to a DevOps process with source control soon, and I am steering them towards SCA for that. Meanwhile, I'm being asked to "refresh the dev DBs from prod", and find myself with this junky merge issue.

I've considered the prod = master vector, too. That kinda steers the devs towards developing in production, which is what I'm trying to move away from. I've also considered keeping the 3rd-party product changes in source. I think that would be a management challenge, as these business end users are so used to doing what they want in production. Even getting them to contemplate a "dev environment" might blow a gasket.

For now, I'm moving forward with trying to get some process in place where the app-driven schema changes are done first in dev, then in prod. It's my understanding that there are metadata and schema changes that go along with these, and we've decided not to go down the rabbit hole of XE to find out all of the changes that happen. Along with this, I'm going to:
1) restore the latest prod backup on the dev server as <DB>_FROM_PROD_<yyyyMMdd_hhmmss>
2) rename the dev <DB> to <DB>_OLD
3) rename <DB>_FROM_PROD_<yyyyMMdd_hhmmss> to <DB>
It's a hack, but it will get me over the hump for now. Thanks again, -Peter
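The restore/rename sequence above can be sketched as a small script that generates the T-SQL statements. This is an illustration of the naming convention only, not a working restore: the backup path and WITH MOVE clauses are elided placeholders, and `refresh_script` is a hypothetical helper name.

```python
from datetime import datetime

def refresh_script(db, now=None):
    """Emit the three-step restore/rename sequence as T-SQL strings.

    Illustrative only: the RESTORE statement's disk path and file moves
    are placeholders that would need to be filled in for a real refresh.
    """
    now = now or datetime.now()
    staged = f"{db}_FROM_PROD_{now.strftime('%Y%m%d_%H%M%S')}"
    return [
        # 1) restore latest prod backup under the timestamped name
        f"RESTORE DATABASE [{staged}] FROM DISK = N'...' WITH MOVE ...;",
        # 2) move the old dev DB out of the way
        f"ALTER DATABASE [{db}] MODIFY NAME = [{db}_OLD];",
        # 3) promote the freshly restored copy to the dev DB name
        f"ALTER DATABASE [{staged}] MODIFY NAME = [{db}];",
    ]
```

Keeping the timestamp in the staged name means repeated refreshes never collide, and the previous dev DB survives as `<DB>_OLD` in case anything needs to be recovered from it.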
Agreed. Usually we see some sort of EAV/metadata solution. This product actually creates new tables, adds columns to tables, and modifies insanely long triggers to work as a column-level audit of changes. And, we have our devs making custom views, etc., which should follow a more typical SDLC/DLM process. The mix of these is causing us challenges for both our DevOps ideas and our dev DB refresh processes.
Thanks for your reply, @David Atkinson. Yes, it is sad that v14 of the best comparison tool around still can't allow me to simply choose not to drop missing columns in the target table. To answer your question, we have a 3rd-party product, Deltek Vision, that makes schema modifications when end users customize the application. So, that very quickly creates production "schema drift".
Thanks, @Russell D. It would be cool if this was built into SQL Monitor, so we could just click into the full SQL + plan.