How can we help you today?
ales.potocnik

Activity overview

Latest activity by ales.potocnik

I've checked and the version of the API is 6.xx. Perhaps this is a problem only in a specific scenario. To explain: I'm not doing a straightforward CompareWith and then outputting the scripts as they are. Because the API has a confirmed problem with dependencies, I'm rendering the scripts for each difference by enumerating the collection:

```csharp
RGE.Differences differences = databaseSource.CompareWith(databaseTarget, RGE.Options.Default);
foreach (RGE.Difference diff in differences)
{
    diff.Selected = (diff.DatabaseObjectType != RGE.ObjectType.User
                  && diff.DatabaseObjectType != RGE.ObjectType.Schema
                  && diff.DatabaseObjectType != RGE.ObjectType.XmlSchemaCollection
                  && diff.DatabaseObjectType != RGE.ObjectType.Role
                  && diff.DatabaseObjectType != RGE.ObjectType.Assembly
                  && diff.DatabaseObjectType != RGE.ObjectType.FullTextCatalog);
}
```

After that, enumerating:

```csharp
foreach (RGE.Difference diff in differences)
{
    worker = new RGE.Work();
    worker.BuildFromDifferences(differences, diff, RGE.Options.Default, true);
    for (long index = 0; index < worker.ExecutionBlock.BatchCount; index++)
    {
        Batch batch = worker.ExecutionBlock.GetBatch(index);
        // ... store batch
    }
}
```

Could it be that this is only a problem when compiling scripts this way?

By the way, I've solved the problem with dependencies. In short, I'm storing the filtered scripts (without fail-safe script blocks) in a database just for this purpose, extracting each script's information using regular expressions, and then finding dependencies on a full-text indexed column containing the change script. At the end it produces scripts in valid script order even for a clean database. Because I'm storing the scripts, I've double-checked that the problem with stored procedure bodies appears after the Worker generates the change script. Perhaps there is a problem with how stored procedure bodies are stored in entities and the Worker is not getting the whole thing ...
It's not a big problem for us for now, as there's only one stored procedure that's over 4000 characters long, but it's a flaw nevertheless.
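The regex-based extraction mentioned above can be sketched roughly as follows. This is a minimal illustration of pulling an object name out of a stored change-script batch; the `ScriptInfo` class and the pattern are my own assumptions, not part of the SQL Compare API:

```csharp
using System;
using System.Text.RegularExpressions;

// Sketch: extract the object type and name from a change-script batch so the
// batch can be stored and later matched against dependencies via full-text
// search. Illustrative only - not a SQL Compare API type.
static class ScriptInfo
{
    static readonly Regex HeaderPattern = new Regex(
        @"\bCREATE\s+(PROCEDURE|FUNCTION|VIEW|TABLE|TRIGGER)\s+(\[?\w+\]?\.)?\[?(?<name>\w+)\]?",
        RegexOptions.IgnoreCase);

    public static string ExtractObjectName(string batchSql)
    {
        Match m = HeaderPattern.Match(batchSql);
        return m.Success ? m.Groups["name"].Value : null;
    }
}
```

A pattern this simple will miss quoted identifiers and schemas with special characters, but it shows the general approach of treating each stored batch as searchable text.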
SQL Compare API and stored procedures
It seems I've found yet another bug in the Compare API: it doesn't pick up stored procedures longer than 4000 characters correctly. Procedures whose body is longer than 4000 characters get split ...
Michelle, thanks for all the help. Since we managed to put the API to good use, I'll be recommending to our IT director that we buy licences for the Red Gate SQL Toolkit. Regards
Thanks for the clarification. A transaction-safe script makes perfect sense and is desired most of the time. However, there are cases when not everything can be executed within a transaction. As you pointed out, full-text indexes are one of those cases. Correct me if I'm wrong, but when a database contains full-text indexing, the indexes are commonly used with functions, stored procedures and views. Since all of those are created within the transaction, the script will always fail, because the indexes simply won't exist at the time the objects dependent on them are created or changed.

User-defined types are another example. You could solve this by at least moving those to the top of the transaction, because there is no point in creating a user-defined type if you're not using it anywhere. During my testing it turned out that if user-defined types are created inside the transaction, they are not visible to anything until the transaction is committed. Since the script fails because of the missing user-defined types, the whole transaction fails and nothing ever gets committed. This effectively excludes any objects that can't be accessed correctly inside the transaction from valid comparison.

It makes sense - most of the other engines simply don't generate a script for full-text indexes at all; that is something the DBA has to maintain manually. Still, the API circumvents the problem of manual comparison and provides a reliable list of differences. That part is useful. Another good point is that the differences used to script objects are still accessible as blocks and can as such be used to regenerate the script.

Is it possible to exclude transaction wrapping from the Work.BuildFromDifferences method? That would at least narrow down the script blocks to process and speed up the parsing. I'm currently also working on programmatically creating a backup of the database, running the test scripts (previous post) and then restoring the database from the backup.
I've just thought of another useful tool I would like to see, though it may be quite difficult to implement: while testing some other APIs I managed to put the database into "Suspect" mode. Microsoft's recommendation is to simply restore the database from a backup in a case like that. A tool that analyzes the database and suggests "repair" scripts would be useful.
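The backup-then-restore safety net mentioned above can be driven with plain T-SQL issued through ADO.NET. A minimal sketch of building the statements (the database name and file path are placeholders of my own, and `BackupScripts` is not an API type):

```csharp
using System;
// using System.Data.SqlClient;  // needed only when actually sending these to SQL Server

// Sketch: build the T-SQL for a full backup before running test scripts, and a
// restore to roll the database back afterwards. Paths/names are illustrative.
static class BackupScripts
{
    public static string BuildBackup(string database, string backupFile) =>
        $"BACKUP DATABASE [{database}] TO DISK = N'{backupFile}' WITH INIT";

    public static string BuildRestore(string database, string backupFile) =>
        $"ALTER DATABASE [{database}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;\n" +
        $"RESTORE DATABASE [{database}] FROM DISK = N'{backupFile}' WITH REPLACE;\n" +
        $"ALTER DATABASE [{database}] SET MULTI_USER;";
}
```

Note that a database snapshot is read-only, so change scripts cannot be executed against the snapshot itself; on SQL Server 2005 and later the snapshot can instead serve as the rollback point (RESTORE DATABASE ... FROM DATABASE_SNAPSHOT), which is typically much faster than restoring a full backup.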
Finally, after many hours of trial and error a solution was found - a workaround for the dependency problems. To refresh: a dependency has to be created before the dependent object. The idea was to try to execute the scripts generated from the differences between the databases against the actual database - the one that needs to be updated from the other. Script blocks are executed one by one until the end of the script is reached. Each block either succeeds or fails; if it fails, it's added to a collection. Once the end of the script has been reached, that error collection is executed again - because by now some of the dependencies have been created. This process is repeated until the error collection is the same size as the initial collection - meaning no more script blocks were successfully executed and we can exit.

The process was not really simple. A number of script blocks had to be removed (ignored) by the iteration, for instance transaction and internal error handling. But now at the end I get a script that can be run against the database without any error handling, because it has already been tested against that same database. During this process I found out the following things about the generated script (mind, I was comparing our development database against a new, empty database):

- CREATE TYPE statements need to be located outside of the transaction handling. If CREATE TYPE is executed within the transaction itself, the rest of the transaction does not see the user-defined type - thus dependency errors.
- User objects and Login objects need to be separated. The account used to run the update script does not always see the whole security details (it has to be sysadmin to see all that), and the comparison engine would also script the logins even though they already exist.
- In the original script generated by the Red Gate API, the full-text catalogs (CREATE FULLTEXT CATALOG) were located at the end of the script, same with the CREATE FULLTEXT INDEX statements. A lot of tables, stored procedures and views depend on full-text indexes if enabled, and the catalog and indexes need to be applied immediately after the table has been created/altered.
- The same goes for any indexes, triggers, foreign keys and primary keys - they need to be scripted immediately after the table create/alter statement.

I hope this helps you with improvements to the engine. If it would be helpful, I can also send you the two classes (C#) I have written to handle this. You can run them against a complex database to see the differences in ordering between the original script and the script after the "live test". One other thing I would like to see on the object representation of database objects (RedGate.SQL.Shared) is a common Dependencies property - it's probably already there - it would be nice if it were public.

Anyway, the final resulting script after this also takes care of any invalid objects in the source database - they are simply excluded from the final script. If everything else goes well, I think we will be going with this API, as it still produces by far the most reliable and viable results of the similar products we've tried. However, the next step will be taking a snapshot/backup of the database to be updated prior to executing/testing the scripts against it, so the actual database is not corrupted. Can you advise me on a way to execute scripts against a backup? I've read somewhere that it is possible to compare a snapshot against an actual database. Is it possible to execute SQL against a snapshot as well? Regards,
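The retry-until-no-progress loop described above can be sketched independently of the Redgate types. In this sketch the `tryExecute` delegate stands in for running one script block against the database (everything here is my own illustrative code, not the API):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the "execute, collect failures, retry" fixed-point loop described
// above: run every pending block, requeue failures, and stop once a full pass
// makes no progress. Returns the blocks in a working execution order.
static class DependencyOrderer
{
    public static List<string> ResolveOrder(IEnumerable<string> blocks, Func<string, bool> tryExecute)
    {
        var ordered = new List<string>();
        var pending = new List<string>(blocks);
        while (pending.Count > 0)
        {
            var failed = new List<string>();
            foreach (string block in pending)
            {
                if (tryExecute(block)) ordered.Add(block);   // dependency satisfied
                else failed.Add(block);                      // retry on the next pass
            }
            if (failed.Count == pending.Count) break;        // no progress: give up
            pending = failed;
        }
        return ordered;
    }
}
```

The worst case is quadratic in the number of blocks (each pass may resolve only one), but for realistic change scripts the loop converges in a handful of passes, and the resulting order can be saved as the final error-handling-free script.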
Thanks for your answers Michelle, it's not rambly at all. The interesting thing that happened after I posted this topic was trying out the synchronization using the visual interface (SQL Compare 6). Executing the script from the interface resulted in exactly the same error messages as through the API, because of the ordering. When the interface displays the list of tasks in a tree view, the ordering seems correct - the first object causing the error was a user-defined data type. It showed up as the first item in the tree-view task list, but when the script was generated it was located in a completely different part of the batch. Unfortunately, I can't provide you with any screenshots or SQL samples right now, as I managed to put the test database into "Suspect" mode using a script generated by another API ... Apart from that, I'm afraid I won't be able to send you a snapshot of even the development database, as the same structure is used for live products. Once our DBA restores the test database I'll be able to send the relevant bits of the generated SQL.

Just to clarify, in regard to data comparison: does this mean that if a table does not exist on database B but exists on A, no insert statements would be generated when synchronizing from A to B? I'm guessing one would have to run the structure scripts first and then the data comparison tool?

To clarify what we're trying to do: we have Development, Staging and Live databases. Development is constantly changing. At the moment, when a developer makes any changes to the development database, he has to script any data or structure changes. When it's time to release the code and database to staging, the merge of all the developers' scripts is run and then the code is deployed (already automated). What we are looking into next is fully automating the deployment process and implementing nightly builds. This means we already have a utility which is capable of pulling all the code from the source control server and creating installable builds of all the applications. We will be updating that tool to automatically check for structure differences between the development and staging databases and generate all the necessary scripts.

The tricky part is filtering the changes being made to staging. Once the comparison has completed, a partitioned version of the change scripts has to be produced. Each morning the release manager will get a list of scripts to confirm - it must be in a human-readable format (i.e. "Add column A to table"). Once he has confirmed the desired blocks, the single-file script has to be generated. Another tricky part is that the comparison cannot be run again when building the single script file, because by that time the development database might have changed again and those new changes should not be included. The same goes for table data comparison. That is why I need Work.BuildFromDifferences to be as partitioned as possible, or perhaps to achieve the same result using the Differences class to enumerate and generate a script for each single difference. The only problem with that for now is that the generated script also contains transaction information, and as such each difference is now a single transaction rather than the whole process being one transaction.

To answer the suggestion about looking up specific objects and PRINT statements in the generated script - it might work, but it would require a lot of parsing and juggling around, in which case it would be easier to just manually write the comparison tool. I hope my explanations describe well enough what I'm trying to achieve. Can you recommend a stable release of the Compare API that works correctly in regard to object dependencies? Regards
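The human-readable confirmation list described above could be produced by mapping each difference onto a short action line. A minimal sketch under my own assumed model (`DiffKind` and `ChangeSummary` are illustrative types, not SQL Compare API classes, which expose their own difference-type enumeration):

```csharp
using System;

// Sketch: turn a difference record into the one-line task description the
// release manager reviews each morning (e.g. "Create table Orders").
// DiffKind and ChangeSummary are illustrative, not Redgate types.
enum DiffKind { OnlyInSource, OnlyInTarget, Different }

static class ChangeSummary
{
    public static string Describe(DiffKind kind, string objectType, string objectName)
    {
        switch (kind)
        {
            case DiffKind.OnlyInSource: return $"Create {objectType} {objectName}";
            case DiffKind.OnlyInTarget: return $"Drop {objectType} {objectName}";
            default:                    return $"Alter {objectType} {objectName}";
        }
    }
}
```

Pairing each such line with the already-generated script block (rather than re-running the comparison) is what keeps the confirmed morning list and the final single-file script consistent even if the development database has moved on.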