DDT Questions


Greg McGuffey
Strategic Support Team Member (3.5K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
I finally got around to looking at the DDT and I have some questions/comments:



1. I see that you can now install a package and set the db name at runtime. It mentions a new update to do this. My downloads area only has one from 11/20/2006. Does the 11/20/2006 release have this functionality?



2. Is this new feature explained in the new help? If not, how does it work? I see that there is now dynamic naming, e.g. $DbName$. Where can it be used and what else is available?



3. I would strongly encourage you to improve your support for migrating existing dbs into the DDT. While it makes sense to move business logic into the BOs, thus avoiding the need for triggers, there are also strong reasons to keep the triggers (at least initially). First, in my case, I am migrating an application to SF. The db has 91 tables, 100 views, 156 sprocs, 16 UDFs, and 56 triggers, plus assorted constraints. Importing this db will mean that MOST of the work is by hand, and I can't even deal with the triggers, UDFs, or non-index/default constraints within the DDT. It does make sense that I could get rid of most/all of the triggers and UDFs eventually, but I don't have time now; I need to do this gradually. The second reason is that not all apps will have their own reporting. There could be an enterprise reporting tool that is required to be used, and it might not be .NET and/or the developer might have NO control over building reports in it (both my case). In this case, sprocs and views are definitely required. So, I'd like to strongly suggest that you include UDFs, triggers, and other types of constraints as items defined by the DDT, that you import ALL of these SQL objects when importing a database, and that this be done sooner rather than later. BigGrin



4. The comment area for sprocs is quite small, and I'm sure it is designed for use within the DDT so users can find a sproc. However, if the sproc is non-trivial, there will be input/output parameters and business logic to describe. If I put the comments in the pre-create/environment code area, will it correctly build the sproc? E.g. if I put



/* MyProc

Description....



IN:

@id - description



OUT:

@status - description

*/



in the pre-create area, will this be inserted before the sproc, so I would get something like:



/* MyProc

Description....



IN:

@id - description



OUT:

@status - description

*/

Create Procedure MyProc

@id int,

@status int OUTPUT

AS

-- sproc code here



GO



5. Related to 4., how would I set comments that would be associated with a view in the DDT? I want something like:



/* Viewname

Description

*/

Create View viewname

AS



Any way to do this? (This is so users of a view don't have to have the DDT to understand what the view is intended to do, which is especially helpful with report views used by other reporting tools.)



6. How does the DDT handle objects that exist in a db but aren't in the DDT? I.e. let's say I have table1 and table2 defined in the DDT, but not table3. However, table3 does exist in the database. When I deploy the package, what happens to table3? (What about views, sprocs, etc.?)



7. How does the DDT handle data migration issues? I.e. I have a table, table1. In a later version of the db, it is replaced with table2 and table3. The data in table1 needs to be moved into table2 and table3 (according to some SQL statements). Just to spice things up, I can't set FKs or indexes on table2 or table3 until the migration is done. So, if I were scripting it, I would create table2 and table3, run the SQL migration scripts, set indexes/FKs on table2 and table3, then delete table1. How is this sort of thing handled (if at all)?
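For context, the scripted sequence described above might look roughly like this (a sketch only; the table and column names are made up for illustration):

```sql
-- Illustrative sketch: table1 is split into table2 and table3.
-- 1. Create the new tables without FKs or indexes.
CREATE TABLE table2 (id INT NOT NULL, name VARCHAR(50));
CREATE TABLE table3 (id INT NOT NULL, table2_id INT, detail VARCHAR(50));

-- 2. Run the data migration while the tables are unconstrained.
INSERT INTO table2 (id, name)
SELECT id, name FROM table1;

INSERT INTO table3 (id, table2_id, detail)
SELECT id, id, detail FROM table1;

-- 3. Only now add keys and indexes.
ALTER TABLE table2 ADD CONSTRAINT PK_table2 PRIMARY KEY (id);
ALTER TABLE table3 ADD CONSTRAINT FK_table3_table2
    FOREIGN KEY (table2_id) REFERENCES table2 (id);

-- 4. Drop the old table.
DROP TABLE table1;
```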



8. I saw mention of using the DbEngine.dll to run scripts, which is apparently how triggers/udfs are deployed. How does this work?



9. Is there any way to just deploy data? I.e. maybe I don't mess with the DDT for schema-related stuff, just to deploy data. How would that work?



10. I realize that the version in the DDT is just for DDT use, but I like to have schema version info stored in the database. Is there any way to access this and plug it into a table when installing a package? That way, a quick glance at the appropriate table lets me know which version a database is on.



OK, that's enough for now BigGrin



StrataFrame Team
StrataFrame Developer (4.7K reputation)
Group: StrataFrame Developers
Posts: 3K, Visits: 2.5K
I'm not positive, but I don't believe the 11/20/2006 release has the option to rename the database at deploy-time; I think that feature came after it.  It will be included in the next release.  It looks like you have a copy of the help documentation that has been updated to cover the new features in the next release, but the release isn't out yet.

Basically, the functionality works like this:

When you deploy through the DDT, a new panel has been added to the wizard that collects the deployment information (like the SQL Server info and such); it shows a grid with two columns: the left column shows the name of each database within the profile, and the right column contains the name of the database as it will be deployed on the server.

When you deploy programmatically, you can specify an array of DatabaseRename objects that specify the profile name and the deployment name of the databases.

As for the $DbName$ wildcard, it's only used within the filegroups and data files.  It's used to prevent the error that occurs when you deploy the same database to the same server more than once using the same .mdf name; this way, the actual .mdf file follows the name of the database.
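For illustration (the path and database names here are hypothetical), a data file defined in the profile as:

```
C:\Data\$DbName$_Data.mdf
```

would be created as `C:\Data\CustomerDB_Data.mdf` when the database is deployed under the name CustomerDB, so a second deployment of the same profile under a different name gets its own .mdf file instead of colliding with the first.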

Greg McGuffey
Strategic Support Team Member (3.5K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
Thanks for answering the first couple of questions. I was asking about the new functionality based on posts about problems with those features; it seemed like they had been released already (otherwise, how could anyone be having problems with them?). The posts were:



http://forum.strataframe.net/Topic5671-11-1.aspx

http://forum.strataframe.net/Topic5336-13-1.aspx



which is how I knew this was available/about to be available.



When you get time, I'd appreciate comments/answers to the other questions too Hehe
StrataFrame Team
StrataFrame Developer (4.7K reputation)
Group: StrataFrame Developers
Posts: 3K, Visits: 2.5K
There are a few people who have the newest build; Tim Dol is one of them.  He needed some specific functionality added to the BusinessLayer class to allow certain fields to be excluded from UPDATEs, so when we fixed the problem, we gave him a "beta" build of the latest release.  So, I don't think the download within the MyAccount area has the functionality within the DDT.

As for the case-sensitive check of the data files, yes, it has been resolved and will be available when the features are posted within the next release.

Greg McGuffey
Strategic Support Team Member (3.5K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
Thanks for the update, that explains it.



Here are the other questions/comments which haven't been responded to yet:



3. I would strongly encourage you to improve your support for migrating existing dbs into the DDT. While it makes sense to move business logic into the BOs, thus avoiding the need for triggers, there are also strong reasons to keep the triggers (at least initially). First, in my case, I am migrating an application to SF. The db has 91 tables, 100 views, 156 sprocs, 16 UDFs, and 56 triggers, plus assorted constraints. Importing this db will mean that MOST of the work is by hand, and I can't even deal with the triggers, UDFs, or non-index/default constraints within the DDT. It does make sense that I could get rid of most/all of the triggers and UDFs eventually, but I don't have time now; I need to do this gradually. The second reason is that not all apps will have their own reporting. There could be an enterprise reporting tool that is required to be used, and it might not be .NET and/or the developer might have NO control over building reports in it (both my case). In this case, sprocs and views are definitely required. So, I'd like to strongly suggest that you include UDFs, triggers, and other types of constraints as items defined by the DDT, that you import ALL of these SQL objects when importing a database, and that this be done sooner rather than later. BigGrin



4. The comment area for sprocs is quite small, and I'm sure it is designed for use within the DDT so users can find a sproc. However, if the sproc is non-trivial, there will be input/output parameters and business logic to describe. If I put the comments in the pre-create/environment code area, will it correctly build the sproc? E.g. if I put



/* MyProc

Description....



IN:

@id - description



OUT:

@status - description

*/





in the pre-create area, will this be inserted before the sproc, so I would get something like:



/* MyProc

Description....



IN:

@id - description



OUT:

@status - description

*/

Create Procedure MyProc

@id int,

@status int OUTPUT

AS

-- sproc code here



GO



5. Related to 4., how would I set comments that would be associated with a view in the DDT? I want something like:



/* Viewname

Description

*/

Create View viewname

AS



Any way to do this? (This is so users of a view don't have to have the DDT to understand what the view is intended to do, which is especially helpful with report views used by other reporting tools.)



6. How does the DDT handle objects that exist in a db but aren't in the DDT? I.e. let's say I have table1 and table2 defined in the DDT, but not table3. However, table3 does exist in the database. When I deploy the package, what happens to table3? (What about views, sprocs, etc.?)



7. How does the DDT handle data migration issues? I.e. I have a table, table1. In a later version of the db, it is replaced with table2 and table3. The data in table1 needs to be moved into table2 and table3 (according to some SQL statements). Just to spice things up, I can't set FKs or indexes on table2 or table3 until the migration is done. So, if I were scripting it, I would create table2 and table3, run the SQL migration scripts, set indexes/FKs on table2 and table3, then delete table1. How is this sort of thing handled (if at all)?



8. I saw mention of using the DbEngine.dll to run scripts, which is apparently how triggers/udfs are deployed. How does this work?



9. Is there any way to just deploy data? I.e. maybe I don't mess with the DDT for schema-related stuff, just to deploy data. How would that work?



10. I realize that the version in the DDT is just for DDT use, but I like to have schema version info stored in the database. Is there any way to access this and plug it into a table when installing a package? That way, a quick glance at the appropriate table lets me know which version a database is on.

StrataFrame Team
StrataFrame Developer (4.7K reputation)
Group: StrataFrame Developers
Posts: 3K, Visits: 2.5K
3)  The database import tool is not meant to be a complete import of all objects within a database.  It is meant to remove some of the grunt work involved in creating the profile from scratch.  When the DDT is updated to support triggers and the other not-currently-supported database objects, the import will likely be modified to import them.

4)  Yes, anything placed within the pre-create/environment code is copied directly into the sproc.  So, if you edit the sproc on SQL Server, it will contain your comments.

5)  There is not currently a way to add comments outside of the CREATE VIEW statement.  You can, however, place comments within the definition of the view.
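For example (a sketch; the view name, columns, and table are hypothetical), the comment block can live inside the view's definition itself, so it survives in the deployed object:

```sql
-- Illustrative only: the comment sits inside the view body, after AS,
-- so it is part of the definition the DDT deploys.
CREATE VIEW MyReportView
AS
/* MyReportView
   Description: summary data consumed by the external reporting tool.
*/
SELECT id, name
FROM   MyTable
```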

6)  Tables that exist in the database but not in the DDT will be left alone (assuming there is not an object name history record for that table).  All sprocs and views are dropped and then recreated.

7)  That functionality is not handled.  The only time the DDT moves data between tables is when a table must be moved from one filegroup to another and a new, temporary table must be created to house the data while the table is moved to the new filegroup.

8)  The DBEngine.dll does not deploy triggers or UDFs; those must be scripted and deployed manually.  The DBEngine is the heart of the DDT's deployment.  The DDT itself does not actually deploy a profile to the server; that is handled by the DatabaseMigrator, a class within the DBEngine.  To programmatically deploy a profile, you create an instance of the DatabaseMigrator and call either DeployMetaData or DeployData, supplying the necessary parameters.

9)  You can do it programmatically by creating an instance of the DatabaseMigrator class and only calling the DeployData method.

10)  You would have to manually script that and deploy the version from the profile.xml file within the DDT deployment package to the server.
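A rough sketch of what such a manual versioning script might look like (the table name, columns, and version value are illustrative; the actual version would be read from the profile.xml in the deployment package):

```sql
-- Illustrative sketch: store the deployed schema version in the database.
IF OBJECT_ID('dbo.SchemaVersion') IS NULL
    CREATE TABLE dbo.SchemaVersion
    (
        VersionNumber VARCHAR(20) NOT NULL,
        DeployedOn    DATETIME    NOT NULL DEFAULT (GETDATE())
    );

-- '1.0.4' stands in for the version read from profile.xml at deploy time.
INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES ('1.0.4');
```

A quick `SELECT TOP 1 * FROM dbo.SchemaVersion ORDER BY DeployedOn DESC` would then show which version a given database is on.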

Greg McGuffey
Strategic Support Team Member (3.5K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
Thanks for the reply, Ben. I have some comments/clarifications (when haven't I Wink )



3) The database import tool is not meant to be a complete import of all objects within a database. It is meant to remove some of the grunt work involved in creating the profile from scratch. When the DDT is updated to support triggers and the other not-currently-supported database objects, the import will likely be modified to import them.




Just to reiterate, I'm saying that as it currently stands, it is daunting to migrate to using the DDT if the database(s) is non-trivial. I'm glad these features are on the way (they can't come too soon). Of course, using the DDT with a new db would likely be sweeeet.





5) There is not currently a way to add comments outside of the CREATE VIEW statement. You can, however, place comments within the definition of the view.




May I make an enhancement request then..... Smile





6) Tables that exist in the database but not in the DDT will be left alone (assuming there is not an object name history record for that table). All sprocs and views are dropped and then recreated.




Any reason for not treating sprocs and views the same? I.e. only delete and recreate them if they are defined in the DDT.





8) The DBEngine.dll does not deploy triggers or UDFs; those must be scripted and deployed manually. The DBEngine is the heart of the DDT's deployment. The DDT itself does not actually deploy a profile to the server; that is handled by the DatabaseMigrator, a class within the DBEngine. To programmatically deploy a profile, you create an instance of the DatabaseMigrator and call either DeployMetaData or DeployData, supplying the necessary parameters.




I could have sworn I read a post about how to deploy scripts to the database. It was addressing the need to handle things the DDT doesn't. Thus, you would deploy schema changes, then data, then run your own arbitrary scripts. Is this ringing a bell for anyone?





10) You would have to manually script that and deploy the version from the profile.xml file within the DDT deployment package to the server.




So, the DDT version is included in the XML file, and in the code that does the deployment, I'd retrieve this info and then update the data in the database within that code as well?
Trent Taylor
StrataFrame Developer (10K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 6.9K
Any reason for not treating sprocs and views the same? I.e. only delete and recreate them if they are defined in the DDT.

Taking this approach defies the nature of meta-data.  If you are using meta-data to deploy your views and sprocs then the meta-data should contain this information to be deployed.

Thus, you would deploy schema changes, then data, then run your own arbitrary scripts. This ringing a bell for anyone?

It sounds like you already have the general idea Smile

So, the DDT version is included in the xml file, and in the code that does the deployment, I'd retrieve this info, then update the data in the database within the code also?

You can use the version in the DDT information, but it is not used by StrataFrame or the deployment engine.  This is a hold-over from the initial version of the DDT; before the DDT was even released to the public, it had quit using this version.  We just left it in for those users who may need to reference it.  The DDT is "smart" and verifies the data structures in real time to check for changes.

Greg McGuffey
Strategic Support Team Member (3.5K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
Taking this approach defies the nature of meta-data. If you are using meta-data to deploy your views and sprocs then the meta-data should contain this information to be deployed.




I'm not following you. I'm talking about defining ViewA in the DDT, but the db has ViewC and ViewD already defined (for a Java-based reporting tool, say Smile ). If I understand how the DDT works, it deletes all views and then adds back the ones it knows about, so after deployment the db has ViewA, but ViewC and ViewD are missing (and the reporting tool is broken). Why not just delete ViewA if found, ignore ViewC and ViewD, then add ViewA back in? I'm not sure how this approach defies the nature of meta-data.





It sounds like you already have the general idea Smile




I did some Googling and realized ADO.NET does this sort of thing in a few lines of code! I keep forgetting how easy it is to use ADO.NET to run a script. As I get time, I'll create a generic script runner. I'm currently using a cmd.exe-based script runner. It works well, but it is a pain to maintain or change.





You can use the version in the DDT information, but this is not used by StrataFrame or the deployment engine.




Yep, I'm clear on that





We just left it in for those users who may need to reference it.




Ah, users like me! BigGrin I often have two or three versions of the db in test, plus my dev version and the production version(s) (don't ask, it's a mess). I find it incredibly helpful to be able to see what version a database is at by looking at the database, so I can keep the mess (sort of) straight.





The DDT is "smart" and verifies the data structures real-time to check for changes.




This is good. While I like the version info available, I like even better that the DDT is smart BigGrin
Trent Taylor
StrataFrame Developer (10K reputation)
Group: StrataFrame Developers
Posts: 6.6K, Visits: 6.9K
Why not just delete ViewA if found, ignore ViewC and ViewD, then add ViewA back in.

Because this would require major changes to our engine.  We use SMO and have to follow certain rules, and it would require much more work to ignore certain views.  Again, this goes back to my previous post: it defies the nature of meta-data anyway.  We should be able to accurately rely on the contents of the meta-data when deploying.
