datacentricity
Okay, let's try leaving SSC out of the mix for a moment.

In the "shared" model, all database developers write their code against the same database. Hopefully they connect to this database with their own dedicated login (whether Windows or SQL Server) rather than some global dev user account. There is a single dev database and everyone develops in that.

In the "dedicated" model, each developer makes their changes against their own dedicated database. This could be multiple databases on the same SQL Server instance (PubsTom, PubsDick, PubsHarry, etc.), but it is more likely to be set up so that each developer has their own dedicated SQL Server instance. This might be a VM or named instance on a shared physical server, or SQL Server Developer edition (or even SQL Express) installed on each developer's desktop.

"User" is what you think it is: a Windows or SQL Server login. "Sandbox" is just a term used to describe a dedicated development environment where a developer can "play" without impacting anyone else. It is a physical thing, and you can only have a sandbox if operating the "dedicated" model, as a shared database cannot, by definition, be a sandbox (unless everyone else has gone home for the night, maybe).

So the question in SQL Source Control when you bring a database under source control relates to whether all your database developers work in one shared database or whether they each develop in their own dedicated database. As far as I know, it is not possible to combine both approaches in SSC. I am not sure how SSC would work if all your developers use a single global user account like "sa" to develop.
If you operate in "Dedicated" mode, each developer will have their own sandbox development database in which to code, test and experiment. "Shared" mode means that all developers develop and test against a single shared development database. If (as a developer) you are working against a "shared" database, (from memory) SSC will show you all changes (including those made by other team members) but by default will only commit changes made by you. Obviously with a "dedicated" development database you would only see your own changes.

With a "shared" database, all developers can see each other's changes immediately, whether they are ready for general consumption or not. This includes experimental schema and code changes, and any bugs. This approach also introduces problems around what happens if two developers are working on the same code module, or when one developer starts coding against something written by another developer that is only partially complete or even gets rolled back altogether. The only advantage of the "shared" approach is that it reduces the number of development instances that need to be maintained by the DBAs (ignoring the fact that most SQL devs could quite happily do their own dev instance support/maintenance).

The advantage of the "dedicated" model is that the rest of your team will only ever see the changes you have committed to source control after all tests have been completed, etc. You can experiment to your heart's content in your sandbox database, safe in the knowledge that you won't break everyone's shared development database/application. The "dedicated" model is also the only way for developers to write and run their own unit tests without clashing with the activities of their colleagues.

Generally speaking, the accepted wisdom is that "dedicated" is the only way to go. Some companies worry about cost (SQL Server Developer edition is dirt cheap) or, more often, control. If all databases are in the cast-iron grip of the DBA team, then senior management can sleep soundly at night, safe in the (lack of) knowledge that their database developers are working with one hand tied behind their backs and one eye shut. If this situation applies to you, Troy Hunt has quite a nice piece on The evils of shared databases. Benjamin Day also has this to say on the subject.
Simon, just a thought: if you're having issues with FakeTable, it might be worth starting a new thread. FakeTable itself is pretty reliable in my experience, but schema-level tests can introduce some interesting questions around how we do certain types of test.
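For anyone following along, here is a minimal sketch of what a FakeTable call looks like inside a test (the schema, table and column names are made up purely for illustration):

-- 'Reporting.Orders' and its columns are illustrative placeholders
EXEC tSQLt.FakeTable 'Reporting.Orders';

-- The fake is an empty, fully nullable copy of the real table, so the test
-- only needs to populate the columns it actually cares about. The original
-- table comes back when the test's transaction is rolled back.
INSERT INTO Reporting.Orders (OrderId, OrderDate)
VALUES (1, '2014-01-01');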
Simon,

OK, so we know that [SetUp] is being called. And, if you've corrected the schema references, we need to find out whether the table is actually created and, if it is, at what point it disappears again.

Try adding this to the end of your SetUp procedure:

IF OBJECT_ID(N'BoardSummary.Actual') > 0
    RAISERROR('SetUp: [Actual] table exists in BoardSummary schema', 0, 1) WITH NOWAIT;
ELSE IF OBJECT_ID(N'dbo.Actual') > 0
    RAISERROR('SetUp: [Actual] table not found in BoardSummary schema but does exist in dbo schema', 0, 1) WITH NOWAIT;
ELSE
    RAISERROR('SetUp: [Actual] table not found in BoardSummary schema', 0, 1) WITH NOWAIT;

Then add this at the top of the test procedure:

IF OBJECT_ID(N'BoardSummary.Actual') > 0
    RAISERROR('In Test: [Actual] table exists in BoardSummary schema', 0, 1) WITH NOWAIT;
ELSE
    RAISERROR('In Test: [Actual] table not found in BoardSummary schema', 0, 1) WITH NOWAIT;

And then add this just before you try to insert into the [Actual] table:

IF OBJECT_ID(N'BoardSummary.Actual') > 0
    RAISERROR('Before Assert: [Actual] table exists in BoardSummary schema', 0, 1) WITH NOWAIT;
ELSE
    RAISERROR('Before Assert: [Actual] table not found in BoardSummary schema', 0, 1) WITH NOWAIT;

Then just run the one test with tSQLt.Run and see what the output is.

Also, you were correct in naming the set-up procedure [SetUp]; tSQLt looks for this before running each test. If you'd left it as [test SetUp], it would have been called only once, because it would have been treated as just a regular test.

Also, in case it wasn't clear earlier, the reason I recommend putting the [Actual] and [Expected] tables in the test schema rather than dbo is that one day you might actually be testing a database that has a table named like that.
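To make the placement concrete, here is a rough skeleton showing where the first diagnostic sits and how to run the single test (the table definitions are placeholders; only the diagnostic and the tSQLt.Run call come from the suggestion above):

-- Skeleton only: replace the column lists with your real [Actual]/[Expected] definitions
ALTER PROCEDURE [BoardSummary].[SetUp]
AS
BEGIN
    CREATE TABLE [BoardSummary].[Actual]   (SomeColumn INT); -- placeholder definition
    CREATE TABLE [BoardSummary].[Expected] (SomeColumn INT); -- placeholder definition

    -- Diagnostic goes right at the end of SetUp
    IF OBJECT_ID(N'BoardSummary.Actual') > 0
        RAISERROR('SetUp: [Actual] table exists in BoardSummary schema', 0, 1) WITH NOWAIT;
    ELSE
        RAISERROR('SetUp: [Actual] table not found in BoardSummary schema', 0, 1) WITH NOWAIT;
END;
GO

-- Run just the one test so the messages are easy to follow
EXEC tSQLt.Run '[BoardSummary].[test GetBoardSummary_2ScorecardsAdded_MaxMinUsedForRanges]';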
Simon,

In terms of achieving DRY, you do seem to have a lot of setup here, and that setup seems to be calling production procedures. Obviously I don't know about the underlying logic, but if it were me, I would ask myself whether it really is necessary to call all of those, or even any of them.

If I understand this correctly, the procedure under test is apPROMsBoardSummaryReport. I am presuming that this manipulates data from a number of tables (presumably populated by the previous procedure calls). If I wanted to isolate this test from everything else, I would endeavour to populate just the tables and columns that are required for this one test. As a simple example, if I had an [OrderDetail] table with [Price] and [Quantity] columns, and the procedure was supposed to generate a [Cost] column, then at the simplest level these are the only columns that need to be populated to fulfil that test.

Only populate the tables and columns that are essential to the test, and use tSQLt.FakeTable to achieve this where necessary. If you do use tSQLt.FakeTable, all columns on the mock table are defined as nullable, precisely to allow you to populate only the columns required for the test. Where this really helps is when another NOT NULL column is added to [OrderDetail]: your existing tests won't break, because they're not populating that column.

Test Driven Development requires a different mindset; look for ways to break functionality down to the lowest common denominator. If you need three or four tables to contain data in order to test apPROMsBoardSummaryReport, then add rows to those directly. As it stands, if the test you demonstrate here fails, how do you know where the failure is? Looking at your test, it could just as easily be in apSaveUploadedScoreCardDetailsNew, apSaveUploadedScoreCardResults or apAddUploadedScorecardHeader. Those should be subject to their own tests that assert they do everything they should. Otherwise, what you have here is an integration test - which is still useful, but it makes it much harder to identify the cause of any failure.

There is a pattern called Test Data Builder, used in the .NET and Java world, that can be used to simplify this kind of setup - see, for example, Adapting the Test Data Builder Pattern for T-SQL. Using a test data builder, you could write a procedure that takes all the inputs from the production procedure calls you are making and puts the correct values in the appropriate tables and columns. Then you have just one procedure call which can be reused in other tests, passing different values. Even if the production procedures you are calling in this test are doing complex calculations, the end result is probably a set of numbers in some rows in some tables - it is those simple numbers you need to set up for this test, not all the intervening logic.

If you're familiar with DRY, you will also know about SRP - the Single Responsibility Principle. Sorry if this reply goes on a bit, but this is a complex subject. TDD really does change how you look at code - resulting in more robust, better designed code. We just have to get through that learning curve first :-)
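As an illustration of the test data builder idea, here is a rough sketch of such a builder procedure (the table, column and parameter names are invented placeholders and would need to match your actual schema):

-- Hypothetical builder: inserts one scorecard result row with just the
-- values a test cares about, defaulting everything else.
CREATE PROCEDURE [BoardSummary].[BuildScorecardResult]
    @BoardId INT = 1,    -- placeholder parameter
    @Score   INT = 0     -- placeholder parameter
AS
BEGIN
    -- dbo.ScorecardResults and its columns are made up for the example
    INSERT INTO dbo.ScorecardResults (BoardId, Score)
    VALUES (@BoardId, @Score);
END;
GO

-- Usage inside a test: fake the table first so only these columns matter,
-- then build exactly the rows the test needs.
-- EXEC tSQLt.FakeTable 'dbo.ScorecardResults';
-- EXEC [BoardSummary].[BuildScorecardResult] @BoardId = 1, @Score = 10;
-- EXEC [BoardSummary].[BuildScorecardResult] @BoardId = 1, @Score = 20;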
Simon,

I'm glad my tutorials have inspired you. The difficulty is that once you start using TDD, you'll struggle to work without it.

NB: Odd, I didn't see the refactored code when I first looked. Looking at the example code you posted, I'm assuming that this is before refactoring creation of the actual and expected tables out into the setup method. The most obvious thing to check is that you also remove the DROP TABLE statements at the beginning of [BoardSummary].[test GetBoardSummary_2ScorecardsAdded_MaxMinUsedForRanges]. In fact, arguably the DROPs aren't really required, since the CREATEs get rolled back at the end of each test. It's my OCD that makes me do it in my code :-)

If that's not the problem, try creating a [BoardSummary].[SetUp] that just does

RAISERROR('Setup called!', 0, 1) WITH NOWAIT;

then run just one test with

EXEC tSQLt.Run '[BoardSummary].[test GetBoardSummary_2ScorecardsAdded_MaxMinUsedForRanges]';

in another window to see if SetUp does actually get called. Then build things up from there.

Another thing to check (although a long shot), if you happen to be running on a case-sensitive database, is that you are running the latest version of the tSQLt framework:

SELECT * FROM tSQLt.Info();

should yield 1.0.4504.21220. There was a small issue in an earlier version with regard to case-sensitivity of the SetUp call. Mind you, if you've installed it via SQL Test this shouldn't be a problem.

If these suggestions don't work, let me know and I'll see what else I can come up with.
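For completeness, the stub suggested above wrapped up as a procedure might look like this (a sketch only; the schema name comes from the thread, and the stub would later be replaced with the real set-up code):

-- Minimal stub just to prove that tSQLt invokes SetUp before each test
CREATE PROCEDURE [BoardSummary].[SetUp]
AS
BEGIN
    RAISERROR('Setup called!', 0, 1) WITH NOWAIT;
END;
GO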