Bill,
I've done a little digging and the new VS database projects look very nice. They provide intellisense and validation (ensuring that referenced objects exist, i.e. that all the tables/columns exist, that called sprocs exist, etc.). They also have some very nice deployment tools, including handling some tough situations like renaming of columns (apparently, if you refactor the name of a column/table, it tracks this and then uses that info to not drop the renamed item, but rather calls the rename sproc instead...nice). They can also generate an update script by comparing the current schema to an existing database. If you are really comfortable with SQL scripts and you want to treat your scripts like source code, checking them into source control to get diffing and history, the VS db project looks very nice.
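To show why that rename tracking matters, here's the kind of thing I'd expect the deployment to emit (table/column names are made up; sp_rename is the standard SQL Server rename sproc):

```sql
-- A pure schema compare can't tell a rename from a drop + add, so it would
-- generate something like this and lose the data:
--   ALTER TABLE dbo.Customer DROP COLUMN CustName;
--   ALTER TABLE dbo.Customer ADD CustomerName varchar(100) NULL;

-- A rename-aware deployment should instead preserve the data by calling:
EXEC sp_rename 'dbo.Customer.CustName', 'CustomerName', 'COLUMN';
```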
Based on what I've read, it would work well for a typical internal app, where you have development db(s), a test db and a production db. You could just use VS to connect to each of these, generate the update script and you're good. However, if you have customers using your app, and thus many installations of your app/db, then the DDT is way more flexible: you can deploy the current schema as a package to
any installation and it will figure out how to update that db to the current version. Doing the same with the VS tools would mean shipping every possible update script, determining which version each installation is on, then looking up the correct script. Really, if you have to manage a set of deployments (not a single production db), the DDT is still the way to go. If you have a single production db to deal with, then the VS db project might work.
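To make that concrete, here's the kind of bookkeeping the per-customer script approach would force on us (the SchemaVersion table and script names are made up; the DDT does all of this for us automatically):

```sql
-- Each hand-written update script would have to bump a version row...
UPDATE dbo.SchemaVersion SET Version = 42;

-- ...and at deploy time you'd read it to figure out which scripts to run:
DECLARE @ver int;
SELECT @ver = Version FROM dbo.SchemaVersion;
IF @ver < 42
    PRINT 'Run Update_41_to_42.sql';  -- and so on, one check per intermediate version
```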
I have only done a bit of reading, so I'd do some testing first. I'd test the following for VS projects:
- Use an existing project to load the scripts, then take a look and see how that works. I'd want to know what it does with sprocs/functions/views, etc. Does it include comments before a sproc or table? (I oftentimes add comments so I remember what the heck a table is for or what a sproc is supposed to be doing.)
- Check how intellisense works. Does it require a live connection to the db? How does it handle new columns/tables/etc. that haven't been deployed yet?
- Rename a table (using refactoring) and see if I can deploy it without dropping the table (test this on a dev db, in case it does drop the table).
- Rename a column (using refactoring) and see if it can deploy the change without dropping the table.
- Test deployment to a dummy test and dummy prod db, i.e. if I have an up-to-date dev db, how hard is it to update a test db that is a bit out of synch and a prod db that is quite out of synch? I'd start with three copies of the db, all in synch. I'd make a couple of changes to dev, deploy them, then deploy to test. Repeat that three times, so the prod db is quite out of synch, and then try to synch it.
- Test how to deal with data deployment, i.e. for static data/configuration data. Apparently there are pre- and post-deployment scripts within the project. How would these be used to get configuration data/default users loaded into a db? How would you handle situations where you only want to deploy data if no data exists yet? The DDT really handles this stuff nicely. I'm guessing it would be a lot more work using db projects.
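For that last test, my guess is the post-deployment script would have to be written to be safely re-runnable on every deploy, something like this (table and values are made up for illustration):

```sql
-- Post-deployment script: only seed the default admin user if it isn't
-- there yet, so redeploying doesn't duplicate or clobber existing data.
IF NOT EXISTS (SELECT 1 FROM dbo.AppUser WHERE UserName = 'admin')
    INSERT INTO dbo.AppUser (UserName, IsAdmin) VALUES ('admin', 1);
```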
I hope this helps you in your investigation. As I get time I'll try to do some of these tests myself. When I do, I'll report back.
Have a great holiday season!