Ron Cates Posted July 27, 2010

I would like to set up a development version of my database so that I can continue developing in it instead of working on the live file. I am hoping that someone will be willing to walk me through the process of setting this up. I know I need to save a separate copy of the file to do my development on. Then, when I am ready to upgrade the live file, I should delete all records in the dev file, import the current records from the live file, and replace the live file with the dev file. My first question is how I should go about deleting all records from the dev file. Is there a way to do it all at once, or does it have to be done table by table, in which case how would I write a script to do this? The next question is, of course, how I would go about importing all records into the empty dev file.
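For the delete-everything part, I'm picturing something like this, repeated for each table (the layout names here are just examples):

Go to Layout [ "Tickets_Form" (Tickets) ]
Show All Records
Delete All Records [ No dialog ]
Go to Layout [ "Customers_Form" (Customers) ]
Show All Records
Delete All Records [ No dialog ]
# ...and so on, one layout per table

Is that the right approach, or is there something simpler?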
ejpvi Posted July 27, 2010

It may not be the way others do it, but I personally like to do all my development in a copy of my most recent backup. I copy it to my desktop, work in it, and just use the records that are in it to perform tests. Once it's ready and debugged (I know it is double work), I begin systematically moving the pieces into the live file. For instance, in FileMaker Advanced you can copy and paste most of your new fields from the database manager into the live one. Then copy whole layouts: I create an empty layout in the live database and paste in all of the elements, once the fields have been created in the database manager. Create the scripts empty, and copy and paste their steps in.

It is pretty easy to keep both databases open and flip back and forth. A little tedious, but I usually document all of my changes along the way. Most of my projects are pretty self-contained, so I don't modify other scripts too much; if I do, I just make a note of what I changed. Sorry, nothing special, but I find it works for me. I would love to hear how others tackle the development environment.
Ron Cates Posted July 28, 2010

Thanks for the response ejpvi. I am, however, still hopeful that someone can help me with the approach I described.
bruceR Posted July 28, 2010

First of all, read up on the separation model: one file holds the interface, one file holds the data. A substantial part of the changes you make will be in the interface file, in which case you just replace the old interface file with the new one. Sooner or later it is likely that you'll need to make changes to the data file, and in that case you'll need to deal with imports, etc. I'm not going to address that part, at least not at this time.
Fenton Posted July 28, 2010

Definitely use the separation model. It will save you tons of time. You do not want to have to import the data every time you change or add scripts or operational relationships (i.e., those not needed for field calculations).

There is one catch with any "working on a separate copy" method, which is user accounts, which people could add to the production file in the meantime. Hopefully there are not all that many, and you can manually add them. Otherwise it is possible to create a script to go through them, test, and create the missing ones. I had to do that, and it was one of the trickiest things I've ever done (it involves lots of relogging in, etc.).

Regarding the data file and importing, which you will need to do sometimes, especially if the production files ever crash: if there are a lot of tables, it would be worth your while to create an entirely automated series of scripts to handle it. These are the operations I recommend (most are a subscript of their own):

1. Save a clone of the development DATA file. This is what you'll import into. Remove " Clone" from its name. Put it where the real DATA file should go.

2. Rename the production DATA file with a suffix, like "_old". THIS is what you use to set up the imports. Bring it into the same folder as the cloned DATA file.

3. Set up the Import steps, one for each table. Do not allow auto-enter options. Update these Import steps for any table that has had fields added. Do NOT EVER delete a field, unless you're prepared to readjust. It's better to just change it to a global and mark it as "obsolete, can be reused for something else"; these can be useful to have around if you ever need to add a field to both the production and development files right away.*

4. Include a step to set the next serial value (the Set Next Serial Value script step). There are a couple of methods: (1) last record + 1, or (2) a GetNextSerialNumber script. Method 2 is more difficult. Method 1 requires that you not have "orphaned" child records after the last "parent" record (or they'll end up tied to a new parent).

5. [Optional] Actually, this would come before the import. Have a loop that goes through the tables, making sure they have no records (see the sketch after this post's list). This will stop you from importing into a DATA file which still has records. (I've been stopped because an automated routine had created report table records, for example.)

6. [Optional] Have a "post-import" routine that goes through the tables again, checking the "next serial id" of one file against that of the other and capturing the names of any tables which have a problem. This is useful for troubleshooting problems with step 4, and saves having to go through the tables one at a time checking manually.

7. After the imports are done and all of the above passes, I open Manage > Database > Tables and take a screenshot of the tables with their record counts. I do this in each file, then put the screenshots side by side, overlaid to match, and compare record counts. They should be exactly the same.

8. I go to important tables with $ amounts and compare a summary field totaling all records in each file. The totals should be exactly the same.

I know this all sounds extreme. But consider the implications of failing to update the next serial id, or of a mismatch between import orders. And don't ask how I know :-]
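A minimal sketch of the step 5 check, assuming you keep a return-separated list of layout names, one layout per table (the names here are just examples):

Set Variable [ $layouts ; Value: "Tickets¶Customers¶Invoices" ]
Set Variable [ $i ; Value: 1 ]
Loop
# Each layout is based on exactly one table, so the found count reflects that table
Go to Layout [ GetValue ( $layouts ; $i ) ]
Show All Records
If [ Get ( FoundCount ) > 0 ]
Show Custom Dialog [ "Not empty" ; Get ( LayoutTableName ) & " still has records." ]
Halt Script
End If
Exit Loop If [ $i ≥ ValueCount ( $layouts ) ]
Set Variable [ $i ; Value: $i + 1 ]
End Loop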
*If you ever need to add a new field to both the production and development data files (and you will, possibly with no "obsolete" fields left to cannibalize) without doing a full Import (which is fine for new fields, as they have no data to import anyway), there is a safe way to do so. Create a new layout; it does not matter which table it is based on. When you want to add a field, change this layout's TO to that table. Then look at the table's fields in creation order. Put the last field (and only that field) on the layout, in both files. Put a button on the layout, showing a dialog with the following calculation (a one-step version appears at the end of this post):

FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) & ": " & FieldIDs ( Get ( FileName ) ; Get ( LayoutName ) )

Since there's only one field on the layout, it will tell you its name and its field ID. The ID is what FileMaker uses, and what determines whether a new field will line up in scripts, etc. The last field in the relevant table should have the same field ID in both files. It is possible to adjust (when desperate) by creating and then deleting a field in the file whose ID is too low. But you can never go back and make an ID smaller.

[P.S. A clone deletes all globals also. I put things like graphics, constants, etc. into a single-record table, and Import that. And/or put them in the Interface file, which is not imported.]
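As a script step, that layout button could simply be the following one-step sketch (the dialog title is arbitrary):

Show Custom Dialog [ "Last field" ; FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) & ": " & FieldIDs ( Get ( FileName ) ; Get ( LayoutName ) ) ]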
Ron Cates Posted July 29, 2010

Thanks Bruce and Fenton. I would definitely like to move to a data separation model at some point, but I am not quite ready to tackle splitting my DB into separate files yet. It makes my head spin to think about what that would entail.
Fenton Posted July 29, 2010

Splitting a file into an "interface" file and a "data" file is actually not very difficult. You use Save a Copy As (clone) to get the "interface" file. It is exactly the same as your current file at that point, but with no data. Then, in the interface file, go to File > Manage > External Data Sources and create a new source pointing to the original file (which is now the "data" file). It should be a simple relative "file:filename" reference, with both files in the same folder.

Then go to the relationship graph and point EVERY table occurrence (TO) to the correct table in the Data file. Fix the fields in each relationship as you go (after switching both sides to the Data file). It is likely that they will just line up already, even the complex ones. When you are done with the above, you can delete ALL the tables in the Interface file. You now have a functioning Interface file: all your layouts will work as is, and all your scripts will work as is.* FileMaker has had this capability of abstraction since FileMaker 7; you use it unconsciously whenever you put a table occurrence from another file on your graph.

I'd recommend at this point that you create one table in the Interface file for globals and constants (graphics, etc.; maybe a table for each, as they are used quite differently, which cuts down on confusion). These tables would have one record only, and would not need to be part of an Import. From now on, all scripts and layouts for additional functionality are added to the Interface file only.

You can at this point remove unnecessary stuff from the Data file. In fact, that is why it's better to switch to the data separation model sooner rather than later: less stuff to clean up in the Data file. It does not hurt to leave it, but it's best to remove "interface only" stuff.

*There may be a few scripts (or parts of scripts) you still need to call in the Data file: scripts which require full access to the data. This is one drawback of the separation. The [x] Run script with full access privileges option only applies to the current file, so sometimes you need to have a (very) few of these subscripts in the Data file. I also leave the "import all tables into a clone" scripts in the Data file, as the only time you use them the files are not hosted, and only the Data file is involved.

If you have multiple files, it's best to make them all Data files and add their file references to the Interface file. And, if possible, move their tables into the Data file. The only files which typically need to be on their own are for such things as large graphics in containers, because of size (affecting backups and file transfers).
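For example (a sketch; the file and script names here are made up), an Interface-file script can hand the privileged work to a Data-file subscript that has [x] Run script with full access privileges checked:

# In the Interface file:
Perform Script [ "Purge Records (Full Access)" from file: "CAM_Data" ]

# In the Data file, the subscript "Purge Records (Full Access)", with full access enabled:
Show All Records
Delete All Records [ No dialog ]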
Ron Cates Posted July 29, 2010

Follow-up question (and I'm sure there will be more): I have written the script to import one table and am now repeating the steps for each subsequent table. The script for each table looks something like this:

Go to Layout [ “Tickets_Form” (Tickets) ]
Import Records [ Source: “file:CAM.fp7”; etc.... ]
Sort Records [ Specified Sort Order: Tickets::_pk_ticket_id; ascending ] [ Restore ]
Go to Record/Request/Page [ Last ]
Set Next Serial Value [ Tickets::_pk_ticket_id; SerialIncrement ( Tickets::_pk_ticket_id ; 1 ) ]

As I repeat these steps for each table, I am asked each time to provide the user name and password for the source file. Is there a way to script the login so I don't have to log in for each of my 30 tables in turn during the process?
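For context, I'm chaining the per-table scripts from one master script, something like this sketch (the subscript names are mine):

Perform Script [ "Import Tickets" ]
Perform Script [ "Import Customers" ]
# ...one Perform Script step per table, 30 in all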