How can we help you today?
Jeff Moden
I don't use SQL Prompt.  I used to work on a whole lot of different systems for many different companies; a lot of them did use it, but I never picked up the habit so that I wouldn't get used to it and possibly be lost without it.  Who knows?  It could become a favorite tool.  For now, though, and for the last 10 years, SQL Compare has been my favorite Redgate tool.  The methodology we currently use for final checks after UAT is to restore the affected production database(s) and run the full database deployment against that copy.  Then we do a compare between that copy and the production database.  We also have a rigid standard for how the "flower box" at the top of each bit of code must be laid out, especially when it comes to revisions.  Yeah... I know that may sound stupid in this day and age, but comparing the flower box with its revision-tracking comments has saved our heinies more than once, because people can't seem to handle source-control trunks right for SQL, especially for long-term projects that aren't even close to being CI/CD.  It's a whole lot easier than pawing through trunks, branches, and all sorts of other stuff to make sure that there is no regressive code and, at the same time, to ensure that everything actually has been both peer reviewed and reviewed by a DBA.
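For anyone who hasn't run into the term, a "flower box" is just the block-comment header at the top of each module, and its revision section is what SQL Compare ends up flagging as a difference.  A made-up example of the general idea (the layout, object name, names, and ticket numbers here are purely illustrative, not anyone's actual standard):

/**********************************************************************
  Object:      dbo.SomeProcedure          -- hypothetical object name
  Description: Does something useful.
 ----------------------------------------------------------------------
  Revision History:
  Date        Who        Ticket    Change
  ----------  ---------  --------  ------------------------------------
  2023-01-15  J. Doe     ABC-101   Initial release.
  2023-06-02  A. Smith   ABC-145   Added @EndDate parameter.
**********************************************************************/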
From the original post: "I am writing a select statement that looks through a column with guid strings..." Why anyone would convert 16 byte GUIDS to 36 characters (72 bytes if Unicode!!!) is beyond me.  I've heard a lot of "reasons" people offer but none of them have been good. And now we need to find the GUIDs that might have upper case letters in them (which IS the default for how SQL Server displays UNIQUEIDENTIFIERS, btw).  I cannot imagine a good reason to do this.  And, no... I'm not blaming the OP for any of this.  Anyway... getting to the problem.  I might be reading this incorrectly but returning the "values" in this context seems to mean returning the entire value from the column if it contains any upper case letters.  Following only that rule, the code is quite simple once you realize that you can easily make the comparison case sensitive by using a binary collation, which is also nasty fast. The thing you're looking for is in the WHERE clause of the following example code. <br>&nbsp;SELECT *<br>&nbsp;&nbsp; FROM (VALUES --This represents the values in a table<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (1,'02830916-85AB-47FF-A2CA-2228441840EE','All UPPER CASE')<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ,(2,'02830916-85ab-47ff-a2ca-2228441840ee','All LOWER CASE')<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ,(3,'02830916-85aB-47ff-a2ca-2228441840ee','One UPPER CASE')<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; )v(RowNum, GUIDString, Explanation)<br>&nbsp; WHERE GUIDString LIKE '%[A-F]%' COLLATE Latin1_General_BIN<br>; / comments
From the original post:"I am writing a select statement that looks through a column with guid strings..." Why anyone would convert 16 byte GUIDS to 36 characters (72 bytes if Unicode!!!) is beyond ...
The full answer is a bit more complicated but you could start with the following... make sure you read this whole thing before making any decisions and, most certainly, before you take any action.

1. Identify where the "bulk" is.  You'll have two types of tables (and I'm speaking specifically about their indexes... hopefully, none of them are HEAPs): those that you would consider to be large when compared to all the other tables and, of course, those other tables.
2. Pick the top 10 or so of the larger tables.
3. Determine the total size required for all the indexes of each table.  Generally, this is the page_count of a given index divided by 128.0, which gives you the size in megabytes.  You can get more sophisticated/accurate answers by taking things like the Fill Factor and current Page Density into account, but that would take a bit longer to explain, especially when it comes to things like "Trapped Short Rows".
4. Create a new file group and file (can be on the same drive letter or a different drive letter) for each of the larger tables.  Of course, there has to be enough room left on whatever drive you select according to Step 3 above.
5. For each of the indexes of a given table, do a CREATE INDEX using the DROP_EXISTING option with a different ON clause that points to the new file group and file.
6. Once Step 5 has been completed for all the larger tables that you identified in Step 2, REBUILD "all" the other indexes from smallest (by page count) to largest.  DO NOT USE REORGANIZE HERE.
7. Do a "shrink" where you just recover free space without brute forcing the size.  It's sometimes amazing how fast this will go once Step 6 is done.
8. If Step 7 wasn't real successful, then do your normal brute-force shrinks 10GB at a time so that, if it gets interrupted, you have something to show for it.

(Steps 3 through 8 are sketched out in rough T-SQL a little further down.)

Ok... so there you are with a nice shrunken PRIMARY file group and possibly several other file groups for the big tables.  You can do one of two things, and possibly a mix of those two things:

A. Do the CREATE INDEX with the DROP_EXISTING thing to move everything back to the PRIMARY file group and then drop those temporary files and file groups ...OR...
B. Leave the data in the new file groups.

There's a serious advantage to leaving the new file groups alone, and so you need to make sure that you've identified the correct drive and directory in Step 4 above.  That advantage is that, in the future, when you decide you need to rebuild the indexes on those large tables again, you can repeat the same steps... create yet another file group/file for each table, do the "rebuild" by doing a CREATE INDEX with DROP_EXISTING onto the new file groups, and drop the old file groups/files.  It's like doing an instant shrink with no worries.

Like I said, there are some nuances to all of this because the data IS large, but I thought I'd at least introduce the overarching concept for you to consider.  You might also want to look into things like partitioning (I actually like partitioned views better than partitioned tables for a whole lot of reasons) so that when you do need to defrag something, it's not a huge, multi-TB monolithic monster that's going to require some serious downtime to pull off.  This is especially true if you have anything that depends on the log file such as replication, AGs, etc.  Clustered Instances don't seem to have a problem with this but you might want to keep an eye on your quorum size the first couple of times you do something like this.
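Just to put some rough T-SQL against Steps 3 through 8: every database, file, path, table, and index name below is a made-up placeholder, so treat this as a sketch of the mechanics rather than a script to run as-is.

--===== Step 3: rough size (MB) of each index on one of the big tables (page_count/128.0).
 SELECT IndexName = i.name
       ,ips.index_type_desc
       ,SizeMB    = SUM(ips.page_count)/128.0
   FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable01'), NULL, NULL, 'LIMITED') ips
   JOIN sys.indexes i ON  i.object_id = ips.object_id
                      AND i.index_id  = ips.index_id
  GROUP BY i.name, ips.index_type_desc
;
--===== Step 4: new file group and file for that table (path and sizes are placeholders).
 ALTER DATABASE MyDatabase ADD FILEGROUP BigTable01_FG;
 ALTER DATABASE MyDatabase ADD FILE
        (NAME = 'BigTable01_File01', FILENAME = 'D:\SQLData\BigTable01_File01.ndf', SIZE = 50GB, FILEGROWTH = 10GB)
     TO FILEGROUP BigTable01_FG;

--===== Step 5: "move" each index by recreating it on the new file group.
     -- Repeat for every index on the table using its real key columns and options.
 CREATE CLUSTERED INDEX CIX_BigTable01
     ON dbo.BigTable01 (SomeKeyColumn)
   WITH (DROP_EXISTING = ON)
     ON BigTable01_FG;

--===== Step 6: rebuild the remaining, smaller indexes in place, smallest first.
 ALTER INDEX ALL ON dbo.SomeSmallerTable REBUILD;

--===== Step 7: recover the freed space without a brute-force shrink.
 DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY);

--===== Step 8: if Step 7 didn't do enough, shrink in ~10GB steps (target size is in MB).
 DBCC SHRINKFILE (MyDatabase_Data, 900000);   -- then 890000, 880000, and so on.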
Last but not least, consider moving your large history/audit tables to a different database.  In most cases, you don't immediately need such tables to "get back in business" in a DR situation.  This is also why I like Partitioned Views instead of Partitioned Tables... you don't actually need to restore every bloody partition before you can start taking backups again in a DR situation.  It will also allow you to make much smaller copies of your databases when you need such a copy for training, development, or whatever.
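And just to make the partitioned-view idea concrete, here's a bare-bones sketch.  The database, table, and column names are invented; a real partitioned view also needs trusted CHECK constraints on the partitioning column (and a key that includes it, if you want the view to be updatable) so the optimizer can eliminate the tables it doesn't need.

--===== Hypothetical yearly history tables, each of which could live in its own database.
 CREATE VIEW dbo.AuditHistory
 AS
 SELECT AuditID, AuditDate, AuditData FROM AuditArchive2022.dbo.AuditHistory_2022
 UNION ALL
 SELECT AuditID, AuditDate, AuditData FROM AuditArchive2023.dbo.AuditHistory_2023
 UNION ALL
 SELECT AuditID, AuditDate, AuditData FROM dbo.AuditHistory_Current
;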