Comments
fionag
I do still have an active support plan for SQL Monitor, so the upgrade to version 3 is not an additional cost in dollars, just in time and effort, which is a resource I am short on currently. That's why I prefer a solution that sticks with not upgrading, at least for the short term. Then again, a fix for version 2.3 that is lengthy and involved would be equally undesirable, so in the end I may not have a choice but to upgrade. That said, because you are not able to reproduce the problem in 2.3, I am led to believe that my own problem with SQL Monitor getting/storing incorrect log size values will not go away after an upgrade, making the upgrade effort of no value as far as this issue goes.
I have given you the T-SQL I use to get the log file metrics, so the only question left to answer is what value(s) I am storing, and the answer is: all of them. I store every value returned by the code. The size I quote is from the 'Log Size (MB)' column of the results the DBCC call returns. I capture this info every 15 minutes, which, based on a prior posting by you, is less often than SQL Monitor captures the log metrics, so if anything my own stored results should be less accurate than those of SQL Monitor.
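To be concrete, the capture boils down to something like the following sketch (the table name is just for illustration, not my exact script):
-- Capture table; DBCC SQLPERF(LOGSPACE) returns these four columns per database
CREATE TABLE dbo.LogSpaceHistory (
    CaptureTime     datetime2 NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName    sysname   NOT NULL,
    LogSizeMB       float     NOT NULL,
    LogSpaceUsedPct float     NOT NULL,
    Status          int       NOT NULL
);
-- Run every 15 minutes (for example, from a SQL Agent job step)
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');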
I will wait to hear back from you on what you find out once you've completed the rest of the steps you've outlined in your last post.
Thanks for your help
Ed / comments
fionag wrote:
Ed
Another thing to check - have you tried running the analysis graph for a shorter time period - specifically over the hours where you know there is an issue.
In version 2.3 the graph resolution was lower (this is improved in v3.0) and therefore it may have resulted in the data not being displayed as expected when viewing a longer period.
Thanks, Fiona
Yes / comments
fionag wrote:
Ed
We get the values from a Windows performance counter. You can see these by running Performance Monitor and selecting the counter "Log File(s) Size (KB)" from the "SQLServer:Databases" object.
You can also see this value by running the query:
select * from sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:databases'
AND counter_name = 'Log File(s) Size (KB)'
Out of interest, what query are you using to get the data?
We collect quite frequently (minutely) but because it's considered a "stable sample" we only store the value if it is different to the previous one.
Does this shed any light on the matter?
Thanks
Fiona
Since there has been no more follow-up on my replies to the above questions, does that mean the Red Gate answer to this is "Upgrade to SQL Monitor 3"? If I do upgrade to SQL Monitor 3, does that mean I will then see more accurate log file metrics, or does it mean that future captures of log file metrics will be accurate while past days (what has already been captured) remain inaccurate?
I believe the info being captured by SQL Monitor 2.x is wrong because it was not actually getting that data as frequently as it was supposed to. That's because it seems to get the log file size right at certain points in time but is off, way off in some cases, for the in-between times.
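For anyone following along, here is a quick way to see both sources side by side at the same instant (the LIKE on object_name is only there to also cover named instances, where the object is 'MSSQL$<instance>:Databases'):
-- Counter-based value, converted from KB to MB for comparison;
-- note the '_Total' instance row aggregates all databases
SELECT RTRIM(instance_name) AS database_name,
       cntr_value / 1024.0  AS counter_log_size_mb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Log File(s) Size (KB)';
-- DBCC value, already in MB (the 'Log Size (MB)' column)
DBCC SQLPERF(LOGSPACE);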
Comments? / comments
fionag wrote:
Ed
We get the values from a Windows performance counter. You can see these by running Performance Monitor and selecting the counter "Log File(s) Size (KB)" from the "SQLServer:Databases" object.
You can also see this value by running the query:
select * from sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:databases'
AND counter_name = 'Log File(s) Size (KB)'
Out of interest, what query are you using to get the data?
We collect quite frequently (minutely) but because it's considered a "stable sample" we only store the value if it is different to the previous one.
Does this shed any light on the matter?
Thanks
Fiona
The T-SQL code used to get the log info is as follows:
DBCC SQLPERF(LOGSPACE)
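For reference, that command returns one row per database with four columns: Database Name, Log Size (MB), Log Space Used (%), and Status.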
/ comments
Chris Auckland wrote:
Many thanks for the information.
The number of rows in any particular table shouldn't matter, but the cumulative effect of having lots of tables with a large number of columns is an issue we've seen before.
We are actually working on a version of SQL Prompt that will handle schemas with a large number of objects better, which is (optimistically) going to be available on Friday 9th March.
We are also going to reinstate the feature to ignore certain databases at some point, but I don't think it's going to appear in the release on Friday.
I'll let you know on Friday whether you can try the new version or not.
If you guys need any testing against a large DB schema, I am happy to do it, since resolving this quirk in SQL Prompt would be most beneficial to me.
Some metrics on our DB schema (total number of each):
Tables: 2,134
Columns (across all tables): 33,553
Views: 741
Stored procedures: 716
UDFs (all types): 285
SQL DML triggers: 29
Defaults/default constraints: 1,623 / comments
Chris
I did some more testing with this and I'm finding the problem is NOT limited to TABLEA but is occurring on every table join, just as soon as I type in the keyword ON.
In the past it was most noticeable with the first table join, and 90% of my queries would start with TABLEA, so it appeared to be specific to that table.
Hopefully that helps some with this.
I will tell you that many of the tables in our DB are top-heavy, or wide (have many, many columns). For example, the largest and most heavily read table in the DB has 130 columns. The table with the largest number of columns has 244 columns and over 1 million rows of data. / comments
Chris - I just emailed it to you. / comments
Chris,
I tried the query out and got 0 rows back.
Thanks / comments
Chris Auckland wrote:
Thanks for your post.
I wasn't aware that upgrading would break the old project files, so I'm sorry this is happening.
Are you able to recreate the project in the new version? Has this happened to all your existing projects?
Chris,
I haven't had time to do this yet because the process is very lengthy. That is, the time it takes to set up the Dependency Tracker project I had is rather long due to the number of objects involved.
You see, my database has 2,136 user tables, 716 stored procedures, and 285 UDFs. That means it takes any Red Gate product that searches through DB objects, like Dependency Tracker (or SQL Prompt, which is even slower because of this), a long time to complete a task and hand control of the app back to the user. When using Dependency Tracker it takes a while to create a project, so it really stinks to find out after the fact that anything you did in a prior version is not just unusable in the current version but is gone, period.
I hope the next DT upgrade includes some option for users to not lose the work they did in the prior version. This kind of bug can be very discouraging to a user when it comes time to upgrade. / comments