Using DDT on more than 1 computer


Greg McGuffey
Strategic Support Team Member (3.4K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
I appreciate your patience with my ignorance...
No worries.  Check out some of my early posts...been there, done that....

Before I discuss being disconnected, remember that if you can set up a VPN, you often don't need to be disconnected at all, as you can just VPN in and connect that way. Depending on your VPN, you might have to set up another connection to the StrataFrame db (e.g. you may need to access the server via IP rather than name).

I often go remote and am not connected to the StrataFrame database, so this is very doable. As to what you lose, let's start by talking about what the StrataFrame database is used for:

  • Metadata used by the BO Mapper
  • Metadata used by localization
  • Metadata used by the DDT
Thus, if you are not connected to the StrataFrame database, you won't be able to use the BO Mapper to update/add BOs (there are ways around this, but they aren't fun). And you won't be able to work in the DDT either. Usually, with a smidge of planning, these aren't horrible limitations.

Localization actually causes the biggest issue. While it is obvious that you can't work in the localization tool to enter localizations while disconnected (as the definitions are stored in the StrataFrame db), it is less obvious that any time you open a designer, any localized SF controls will attempt to connect to the StrataFrame db to get the localized value for each localized property. This happens even if you don't use localization at all. The result is that working in the UI can be very, very, very slow (because the designer keeps trying to connect to a database that isn't there...and each attempt has to time out). No errors are shown; it's just slow.

There are two solutions: don't work on any forms while disconnected, or install a StrataFrame database on the local laptop. The StrataFrame db on the laptop will not have any of your actual metadata in it! When you go remote, you change the connection to point at the local copy, and then change it back when you can connect again. This allows you to edit forms with no delay.

Note that if you do this (this is what I do), even though the BO mapper/DDT/Localization tools will all work, none of your projects will be there (because the data is actually in another StrataFrame database that you all share). 

What I actually do is have my desktop set up to connect to the shared StrataFrame db and my laptop set up to connect locally. That way I'm all set to go remote just by using my laptop. Then all I have to do is update either machine using source control (I use SVN) to keep them in sync.

Hope that explains how to work remotely!
William Fields
StrataFrame User (252 reputation)
Group: StrataFrame Users
Posts: 80, Visits: 239
That is a big help, Greg. I appreciate your patience with my ignorance...

Beyond the DDT, does the StrataFrame Framework itself require a connection to the StrataFrame database? Meaning, if we have a central database, can a developer go mobile and still develop applications without a connection to the central StrataFrame database?

I guess a better question would be, "What do we lose, or what are we unable to do, when we don't have a connection to the central StrataFrame database?"

(I know this may be off topic, feel free to move the post if warranted).

Thanks.
Greg McGuffey
Strategic Support Team Member (3.4K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
William,

In multi-developer environments, there should be only one StrataFrame database. All of you should connect to the same StrataFrame database. Typically only one developer at a time will work on a DDT project. Once they have made changes, each developer is contacted; they open the DDT and update their local development database with the changes (if needed). I.e. each developer can have his own application-specific databases, but there is only one StrataFrame database.

I hope that clears things up.
William Fields
StrataFrame User (252 reputation)
Group: StrataFrame Users
Posts: 80, Visits: 239
OK, we're starting to work out the workflow as I described (in the post dated 12/6/2010 4:15:10 PM). But I've run into something unexpected...

The DDT stores all its data in the StrataFrame database, which we've all installed locally on our developer workstations. How do we keep the DDT profile database in sync across developer workstations? I was envisioning the package file as the "master" dataset that could be opened in the DDT... but I now understand that the package file is only for deployment use. Correct? Is there a way to take a package file and update a profile with it?

Suggestions?

Random thoughts/questions:

Will we need to have a central StrataFrame database that houses the DDT database so all developers use the same profile? Is there a way to "export" a profile that can be opened/used in a local StrataFrame database by other developers? We'd rather be able to work "offline" when we're mobile and not have to maintain a connection to a central StrataFrame repository.

Thanks, and Happy New Year!

UPDATE:

I've found out how to import a package file into a profile, but instead of updating the database in the profile, it creates a new database with "_1" appended to the name. That, and the object name history is empty.

Is there a way to import/apply a package file against a profile and have it update the matching database as well as bring in the object name history?
Greg McGuffey
Strategic Support Team Member (3.4K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
This seems like a good solution to control db changes, by simply allowing only one developer at a time to make changes. I assume you'll use SSMS to generate the script for the database after you've updated it via the DDT.

The size of a structure-only package is nice and small. I created a package from a database with 125 tables, 212 sprocs, 163 views and 20 functions. The package was 260K. I assume it would grow as it has to keep track of all the changes, but it is not a huge file at all.

So, it appears that you are on the right track, and while you'll have to track the script separately so you can compare versions, you're going to love deploying the db with the DDT.
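
To make the "compare versions" part concrete, here is a minimal sketch of the kind of plain-text create script SSMS produces (right-click the database, Tasks > Generate Scripts) that you could commit next to the .pkg file so two revisions can be diffed as text. The table and sproc below are made-up examples, not anything from your schema:

-- Hypothetical fragment of an SSMS-generated create script kept under
-- source control alongside the DDT .pkg file. Object names are invented;
-- the real generated file would contain every table/view/sproc in the db.
CREATE TABLE dbo.Customers
(
    CustomerID   INT          IDENTITY(1,1) NOT NULL,
    CustomerName VARCHAR(100) NOT NULL,
    CreatedOn    DATETIME     NOT NULL CONSTRAINT DF_Customers_CreatedOn DEFAULT (GETDATE()),
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)
);
GO

CREATE PROCEDURE dbo.Customers_GetByID
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerID, CustomerName, CreatedOn
    FROM dbo.Customers
    WHERE CustomerID = @CustomerID;
END;
GO

The .pkg stays the actual deployment artifact; the script is only there so source control can show what changed between check-ins.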
William Fields
StrataFrame User (252 reputation)
Group: StrataFrame Users
Posts: 80, Visits: 239
Thanks for the feedback. What do you think of the following workflow:

  1. The pkg file and create script(s) are placed under source control. The pkg file is the mechanism for deploying updates to the other developers, and the script is used as an easy way to compare what happened between versions.

    Question, how big can the pkg file get? (our VFP app has 60 tables and about 150 views)
  2. If the pkg/script is checked out, you cannot make changes.
  3. Before you start making changes, you check out the pkg file and "deploy" it to your local dev database. You also check out the script file.
  4. After making changes, you update the pkg and script files and check them in.
  5. When deploying to a central test server, or when building a new packaged release, we just use the latest pkg file.
This for the most part follows our current VFP database change workflow (which has worked well for many years). Does it sound like a feasible way to work with the DDT and SQL Server? Am I completely off base, or am I not asking the right questions?
Greg McGuffey
Strategic Support Team Member (3.4K reputation)
Group: Forum Members
Posts: 2K, Visits: 6.6K
William,

Ultimately, database changes of almost any kind require some level of coordination between team members. If two developers are working on the same table/view/sproc, etc., then there are likely going to be issues no matter how you do it. So, assuming there are clear areas of responsibility, so developers aren't stepping on each other, then we are talking scale: how much is the db changing, by how many developers at the same time, and how far-reaching are the changes?

If the developers are each working in fairly well-defined areas, then it is OK to save the changes in a central area, because even if the other developers end up deploying each other's changes, it doesn't impact them. In this case, using a common repository is a big plus because you find out about issues immediately, rather than having two developers make changes in the same area, delay the merge, and then realize that both their work is messed up. Now you've got the tangled mess Ivan was referring to.

However, as the changes get more extensive and could affect more areas (like updating a common auditing sproc), this doesn't work as well. If this level of change is infrequent, then you just communicate the needed change to the team, the other members stop making changes to the db for a bit, one developer handles the tricky stuff, tests it, and then tells everyone to update their local development dbs.

If you have several changes like this, or often can't control when others are making changes, then you might need to move to scripts. They can be checked into source control and you have a bit more control over what others are seeing (especially if you use a distributed source control system like Git or Mercurial). However, managing the update scripts and creation scripts can be a big challenge (and one of the reasons the DDT is so nice). I've used a tool called DbGhost. It is very nice and manages creating update and/or creation scripts from source SQL scripts. It has some limitations, like handling column renames (which the DDT does very nicely), and you have to create specific scripts based on the state of the target db (the DDT just updates any db to the current version). And this could still easily lead to a big mess if developers are walking over each other. Lots of coordination is needed here, no matter the tool. Databases are more brittle than other forms of source code (where merging works better in many cases).
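
To illustrate what "specific scripts based on the state of the target db" means in practice, here is a minimal, hypothetical hand-written update script; every change has to guard itself so the script can be run against a db in an unknown state. The table, column and sproc names are invented, and this is exactly the bookkeeping the DDT (or DbGhost) automates for you:

-- Hypothetical hand-maintained update script: each change checks the current
-- state of the target db before applying itself, so the script can be rerun
-- against any version without erroring out. Names are invented.

-- Add a column only if it isn't there yet.
IF COL_LENGTH('dbo.Customers', 'EmailAddress') IS NULL
BEGIN
    ALTER TABLE dbo.Customers ADD EmailAddress VARCHAR(255) NULL;
END;
GO

-- Re-create a sproc whether or not it already exists.
IF OBJECT_ID('dbo.Customers_GetByID', 'P') IS NOT NULL
    DROP PROCEDURE dbo.Customers_GetByID;
GO
CREATE PROCEDURE dbo.Customers_GetByID
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerID, CustomerName, EmailAddress
    FROM dbo.Customers
    WHERE CustomerID = @CustomerID;
END;
GO

Note that a column rename can't be expressed safely this way (the renamed column just looks "missing" to the guard), which is one of the cases the DDT handles for you.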

In extreme cases, if there are hundreds of developers working on a db, you may need one or more DBAs to control changes to the database.

In most cases the DDT is a great tool that really, really, really eases the deployment of db changes. As with any environment where multiple developers are working, though, coordination is required.
Edhy Rijo
StrataFrame VIP (4.6K reputation)
Group: StrataFrame Users
Posts: 2.4K, Visits: 23K
Hi William,

I am a single developer and of course do not have those multi-developer issues, but I can tell you a couple of things about the DDT, which I use with all my projects (and I was a big fan of the SDT).

  1. With the DDT the repository is in an MS-SQL database, so you cannot use TFS or any other source control for it.
  2. The DDT keeps a record of previous changes, so it will automatically update any version of the database to the current one.
  3. When deploying with the DDT you can choose which database or databases to update. So in your case you may have Data1_Production, Data1_Dev, etc.; update the Dev database, which may be local on your computer or remote on the main server, and when you are done with your testing, just run the same package against the production database (a hand-rolled equivalent of this change tracking is sketched below).
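For contrast, here is a rough, hypothetical sketch of what points 2 and 3 would require if done by hand in plain T-SQL: a change-log table that records which changes have already been applied, so the same script can be run against Data1_Dev and later against Data1_Production without reapplying anything. The table, column and change names are invented; the DDT package does this bookkeeping for you.

-- Hypothetical hand-rolled change log (the DDT tracks applied changes for you).
-- Running the same script against Data1_Dev and later Data1_Production only
-- applies the changes that particular database hasn't seen yet.
IF OBJECT_ID('dbo.SchemaChangeLog', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaChangeLog
    (
        ChangeName VARCHAR(100) NOT NULL PRIMARY KEY,
        AppliedOn  DATETIME     NOT NULL DEFAULT (GETDATE())
    );
END;
GO

IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
               WHERE ChangeName = 'Add Customers.Phone')
BEGIN
    ALTER TABLE dbo.Customers ADD Phone VARCHAR(25) NULL;
    INSERT INTO dbo.SchemaChangeLog (ChangeName) VALUES ('Add Customers.Phone');
END;
GO
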
Probably others here will jump in with more experience in group/networked environments; I know that Greg McGuffey uses scripts instead of the DDT for some of his projects.

Edhy Rijo

William Fields
StrataFrame User (252 reputation)
Group: StrataFrame Users
Posts: 80, Visits: 239
> So if I make structure changes in the DDT, everyone would see them. However, I deploy, while coding, to my local machine which only affects me.

So, you make your changes to a central DDT database/profile, then create a Deployment Data Package and apply that package to your local dev database?

I'm trying to wrap my head around the fact that you've essentially committed your schema changes to the central DDT database prior to developing and testing those changes. What if you're working on some complicated series of stored procedures, or other changes? How does that work?

Do you use source code control with your DDT profiles? Would you mind describing how that works?

FYI - I've just installed StrataFrame, have never used it before, and am working with a colleague on setting up our multi-developer environment with TFS. I work remotely over a VPN WAN connection and our 3 other developers work together on a LAN. I'd really appreciate a white paper along the lines of "best practices for setting up the DDT and TFS source code control in a multi-developer environment."

Coming from a VFP environment, we used the Stonefield Database Toolkit (SDT), and our workflow was as follows:

1. Check out the DBC "schema" and SDT meta data from TFS source control.

2. Work on any changes on our local "dev" copy of the database. While working on those changes, no one else is able to check out or modify the DBC.

3. Check in the DBC and SDT meta data once our local modifications have been completed, developer tested, and are ready for deployment to the other developers.

4. Tell everyone to "get latest version" from TFS source control, and have them run the SDT "Database Update" process on their local databases.

This is very natural to me and I'm having a hard time rethinking this process in a StrataFrame DDT context. How do you keep revision history to refer to in TFS source control? For us, we output a text document which is a text equivalent of everything in the database at that time and check that file into TFS SCC along with the DBC and SDT metadata.

Any help you could give would be appreciated.

Thanks again.
Ivan George Borges
Strategic Support Team Member (3.5K reputation)
Group: StrataFrame MVPs
Posts: 1.9K, Visits: 21K
You are welcome, Melvin.