EdCarden
I wanted to revisit this now that it's been over a year and there still has not been an update to Dependency Tracker. I did some comparison, and I can get a listing of every object of every type (plus some additional info beyond just the object type and name) using native SQL queries in less than 1 second. Why then does Dependency Tracker take minutes to do this, when it's not doing anything more (that I can tell) than getting all the object names and types and placing them in a tree-node-like container/object? What method is Dependency Tracker using to get the DB schema? It's definitely not native SQL code executed through an ADO/ADO.NET set of objects.

BTW, I'm asking because I want very much to use DT (especially since I paid for it) and I can't, and I cannot find a logical reason why it is as slow as it is at the point where it's just getting DB schema info. I'd understand why it would take a while to parse through the dependent objects after you have selected an object and added it to your project, but Dependency Tracker is as slow loading the list of objects in the schema as it is when you have an object selected and are waiting for it to parse and list the dependent objects. And this is for just 1 level of dependency; I dare not try anything deeper than 1.

Also: why does SDT (SQL Dependency Tracker) always return focus to the first node in the tree after you've selected an object under one of the tree's nodes? For example, after I have the DB schema loaded and I expand the node that lists all the stored procedures, if I scroll down toward the end of the list and click on one, the focus jumps back to the first stored procedure, and I have to scroll back down to where I was if I need to select another object. If there is a way to multi-select and avoid this, please share it; otherwise, can this be added to the list of fixes?

Speaking of updates, when can we expect an update to SDT, which has not seen a major update since October 2011, almost 2 years ago?
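For reference, something along these lines is the sort of native query I mean. It is a minimal sketch against the standard sys.objects and sys.schemas catalog views (the extra columns are illustrative), and it returns the complete object list in well under a second:

-- List every user object with its schema, name, type, and dates.
-- sys.objects/sys.schemas are standard SQL Server catalog views.
SELECT s.name AS SchemaName,
       o.name AS ObjectName,
       o.type_desc,
       o.create_date,
       o.modify_date
FROM   sys.objects o
JOIN   sys.schemas s ON s.schema_id = o.schema_id
WHERE  o.is_ms_shipped = 0
ORDER BY o.type_desc, s.name, o.name;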
peter.peart wrote: Thanks for your reply. With the backend being a database, there's nothing to stop you from looking at the tables and running count queries based on alert time to ascertain how many alerts you had during a given period. We don't, however, provide schematics of the schema, I'm afraid, as it's subject to change with each version we release (and it has, a couple of times with V2 and again with V3). If you're not purging anything at all, and you're monitoring a decent number of servers which also have a decent number of alerts being generated, 128GB in 6 months isn't inconceivable.

Because of the complex schema (which looks to be non-normalized), I'm hesitant to even try to guess at which rows in which tables can be safely purged or archived. And I don't want to purge this information, because it is valuable data that will one day prove very beneficial for us, once Redgate realizes the value in the data that SQL Monitor captures. For us it has already greatly assisted in determining our hardware needs for a replacement for the server that hosts the SQL Server instance we are monitoring. Assessing the new hardware needs was easy, as the IT admin was able to do it by reviewing the last X months' worth of captured info in the Analysis tab.

The other reason for not purging this data is the value in being able to review it and isolate trends, such as which LRQs (Long Running Queries) are most common and therefore most in need of attention the next time we upgrade or change reports (or anything else that uses the T-SQL that raises these LRQ alerts). We can't do this now because there's no realistic way to get that information from SQL Monitor aside from asking for it one alert at a time through the SQL Monitor web interface. A set of user-friendly views, similar to SQL Server's DMVs (Dynamic Management Views), is what's needed; once that happens, all of this alert data we've been saving instead of purging is going to be very valuable.

Thanks
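As an illustration of the kind of count query Peter suggests, a sketch like the one below would summarize alert volume per type for a given period. The table and column names here are hypothetical placeholders, since Redgate does not document the repository schema and it changes between versions; substitute the actual names you find in your own Data Repository database:

-- HYPOTHETICAL sketch: [alert].[Alert], AlertType, and RaisedDate are
-- placeholder names, not documented SQL Monitor schema. Adjust to match
-- the tables you actually see in the repository database.
SELECT AlertType,
       COUNT(*) AS AlertCount
FROM   [alert].[Alert]              -- placeholder table name
WHERE  RaisedDate >= '2013-01-01'   -- placeholder column name
  AND  RaisedDate <  '2013-07-01'
GROUP BY AlertType
ORDER BY AlertCount DESC;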
Andrew Hodge wrote: Just looking through the analysis section on Monitor 3, I have noticed that there isn't a "data file space used" metric (equivalent to the "log space used" one). The data size just returns the size of the mdf files, which isn't very useful. I take it we could develop this within the custom reports, but I think this should probably be included in the base reports.

Out of curiosity, why do you want an analysis of percent file used on the data file? BTW, you can write a custom alert/metric in version 3.x to catch a change in the percent-used value of your database files. The T-SQL below will give you the values for the data and log files, which you can then assign to an alert and monitor for when the PercentUsed value exceeds some designated threshold.

-- DBCC showfilestats reports data-file usage in 64 KB extents.
create table #data
(
    Fileid       int NOT NULL,
    [FileGroup]  int NOT NULL,
    TotalExtents int NOT NULL,
    UsedExtents  int NOT NULL,
    [Name]       sysname NOT NULL,
    [FileName]   varchar(300) NOT NULL
);

-- DBCC sqlperf(logspace) reports log size in MB and log used as a percentage.
create table #log
(
    dbname  sysname NOT NULL,
    LogSize numeric(15,7) NOT NULL,
    LogUsed numeric(9,5) NOT NULL,
    Status  int NOT NULL
);

insert #data exec('DBCC showfilestats with no_infomsgs');
insert #log  exec('dbcc sqlperf(logspace) with no_infomsgs');

-- Normalize both to MB: an extent is 64 KB, and the log's used MB is
-- its size multiplied by its used percentage.
WITH CTE_X AS
(
    select 'DATA' as [Type], [Name],
           (TotalExtents * 64) / 1024.0 as [TotalMB],
           (UsedExtents  * 64) / 1024.0 as [UsedMB]
    from #data
    union all
    select 'LOG', db_name() + ' LOG', LogSize, ((LogUsed / 100) * LogSize)
    from #log
    where dbname = db_name()
)
select t.[Type], t.[Name], t.TotalMB, t.UsedMB,
       convert(numeric(5,2), (t.UsedMB / t.TotalMB) * 100) AS [PercentUsed]
from CTE_X t;

drop table #data;
drop table #log;
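As I understand it, SQL Monitor's custom metric collector expects a query that returns a single numeric value, so a trimmed-down variant like the sketch below, which collapses all the data files into one percent-used figure, may be closer to what the metric definition wants (same temp-table approach as above; treat it as a sketch, not the product's documented requirement):

-- Sketch: return one scalar, the percentage of data-file extents in use,
-- on the assumption the custom metric wants a single numeric result.
create table #data
(
    Fileid       int NOT NULL,
    [FileGroup]  int NOT NULL,
    TotalExtents int NOT NULL,
    UsedExtents  int NOT NULL,
    [Name]       sysname NOT NULL,
    [FileName]   varchar(300) NOT NULL
);

insert #data exec('DBCC showfilestats with no_infomsgs');

select convert(numeric(5,2),
       (sum(UsedExtents) * 100.0) / sum(TotalExtents)) AS [DataPercentUsed]
from #data;

drop table #data;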
fionag wrote: Hi Ed. Can you provide the following information so we can establish whether it's an issue with the graphing or with the data collection? Run the following query to retrieve data for the relevant period:

SELECT *
FROM data.Cluster_SqlServer_Database_Storage_StableSamples_View
WHERE CollectionDate_DateTime BETWEEN '2012-03-01' AND '2012-03-28'

Adjust the from and to dates as relevant to your situation, and also filter by, say, the Cluster_Name field so we only get data for the server you are interested in; alternatively, let us know what the relevant server name is. If you can then run an analysis graph for the period in which you are seeing the wrong values, click "Export...", and email that to us as well. It would also be useful if you can run Performance Monitor against your server during the period in which you expect the log size to change, to confirm the results are as you expect (details are in my previous posts). Please email all the results to fiona.gazeley@red-gate.com. It's also worth emailing us the log files so we can determine whether anything was preventing collection during that period; please see the following article on where to get your log files: http://www.red-gate.com/supportcenter/c ... LogFilesKB Many thanks, Fiona

Done.
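For anyone following along, the filtered form Fiona describes would look something like this; 'MYSERVER' is a placeholder for whatever Cluster_Name value identifies your monitored server:

-- Fiona's query restricted to one server. 'MYSERVER' is a placeholder;
-- the view and date filter are taken from her post above.
SELECT *
FROM   data.Cluster_SqlServer_Database_Storage_StableSamples_View
WHERE  CollectionDate_DateTime BETWEEN '2012-03-01' AND '2012-03-28'
  AND  Cluster_Name = 'MYSERVER';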