Hi Kean,
I'm assuming you are using git, because you referred to "local repositories". If you aren't using git, please let me know: centralised source control and distributed source control work very differently.
I think you are overcomplicating things. The management of the individual developers' local repos should be up to the individual developers. They can put their local repos wherever they like. It shouldn't need to be centrally managed. It doesn't matter.
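For example, one developer might keep a working copy under C:\work while a colleague keeps theirs under D:\repos, and both are equally valid (the repo URL and paths below are hypothetical):

    # Each developer clones to whatever local path suits them
    # (repo URL and paths here are made up for illustration):
    git clone https://example.com/git/database-repo.git C:\work\databases

    # A colleague can put their copy somewhere completely different;
    # git doesn't care where a working copy lives:
    git clone https://example.com/git/database-repo.git D:\repos\databases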
As for the structure of the repo, perhaps it would be simpler to split this out into multiple repos? If I'm working on database 7 from server 3, why do I need to waste time and effort including database 2 from server 6? A git repo with between 200 and 600 databases in it sounds terrifying. How long is it going to take to clone the repo? All the branches for all those databases are going to create one hell of a spaghetti soup.
In an ideal world, each database would be in its own little repo. However, in tightly coupled systems, where sets of databases are littered with cross-database dependencies, it can be easier to treat each set as a single component in a single repo. Even in this scenario, it would probably make life easier to split the 200-600 databases into groups of tightly coupled databases.
Probably with a structure a bit like this:
    Server1
        Database1
            state          << Link SQL Source Control to this directory
            other_stuff    (security scripts, test data, build/deployment scripts, etc.)
        Database2
            state
            test_data
            build_scripts
    Server2
        Database1
            state
            test_data
    etc.
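Setting up one of these group repos is only a few commands. Here is a minimal sketch, assuming the group name "finance-databases" and all the paths are hypothetical:

    # One repo per group of tightly coupled databases.
    mkdir finance-databases; cd finance-databases
    git init

    # Folder-per-database layout as above. Git only tracks files, so put a
    # placeholder file in each empty directory before the first commit.
    mkdir Server1\Database1\state
    New-Item -ItemType File Server1\Database1\state\.gitkeep
    mkdir Server1\Database1\other_stuff
    New-Item -ItemType File Server1\Database1\other_stuff\.gitkeep

    git add .
    git commit -m "Initial folder structure for the finance group"

Each developer then clones only the group repos they actually work on, which keeps every clone small and fast, and SQL Source Control is linked to each database's state folder from within SSMS.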
One final question: are some of your databases the dev/test/prod versions of the same database? If so, you should treat the dev/test/prod versions as a single database in source control. All your deployments, to all the databases in the pipeline, should come from source control. That way you enforce that dev, test, prod, etc. remain in sync with each other.
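As a rough sketch of what that can look like (the tag, script name, and parameters below are all hypothetical, and in practice something like SQL Compare would perform the actual deployment):

    # Hypothetical tag-based flow: every environment deploys the same commit,
    # so dev/test/prod cannot drift apart.
    git tag v1.4.0                    # mark the state that passed testing in dev
    git push origin v1.4.0

    git checkout v1.4.0
    # deploy.ps1 is a hypothetical wrapper around your deployment tooling
    .\deploy.ps1 -Target Test -Source Server1\Database1\state
    # ...after sign-off on Test, deploy the identical source to Prod...
    .\deploy.ps1 -Target Prod -Source Server1\Database1\state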
Scenario:
My team comprises 10 SQL developers and DBAs. We have 20 SQL Server instances, each containing anywhere from 10 to 30 databases. Everyone on the team has SSMS installed on their own desktop/workstation. Most team members also RDP onto our servers and use SSMS directly on the box they are connected to. In most cases, this is because a developer is working with an SSIS package and needs to update a stored procedure that the package calls.
I have identified a few different approaches to creating the local repositories and I'm looking to get some insight from more experienced minds.
Approach A:
Every user has local repositories created on their individual desktops for each instance, for each database. The local repositories would be structured as ...\Instance1-20\DatabaseRepo1-30\.
On every server, there would also be local repositories created for each person, for each instance, for each database. The local repositories on each server would be structured as ...\Person1-10\Instance1-20\DatabaseRepo1-30\.
Pros: This approach covers every combination of a person connecting to any database on any instance from any location (their individual desktop plus any of the 20 servers they could be RDP'd to).
Cons: Very complex, and the local repositories for an individual are not in sync, meaning that the local repository ...\Person1\Instance1\Database1\ on Server A could be different from the one on Server B.
There is also a very good chance that Approaches D to Z are better than anything I've identified, hence this question. I'm open to any ideas/suggestions you have.
Thanks,
Kean