
Database design - move records from one table to another for archive purposes



Hello all - 

 

This may not be the correct subforum, but interestingly enough, there seems to be none for Database Design. 

I have a rather complex FileMaker solution, grown historically and with large-ish file sizes (around 10 GB each). I say each because we use 360Works MirrorSync to keep several (5 at present) FMS instances in sync. We need to do that because FM clients access the FMS from all 5 continents, where latency becomes an issue in the way the FMP client handles list requests: basically, every line item compounds two round-trip ping latencies. So a user in Australia looking at a list served by our New York FMS containing 100 records is looking at 100 line items x 2 round trips x 150 ms ping latency = about 30 seconds just to have that list page load. It's a very different scenario if you have a server within 5 ms ping times - locally - so the solution actually remains workable.

 

None of the files are stored in FileMaker containers - they are all external on the NFS - so the FM database size is not due to stored files; there is just a lot of text data in that database.

 

My issue is that of the roughly 300,000 records, only 200 or so are 'live' data; the remaining ~299,800 are merely archived records that need to be accessible somehow, but should really not be part of the main table.

 

I am well aware of the pros and cons of maintaining ONE table that holds all like records. A table with 300k+ records should not be an issue. And it's not. UNLESS: 

- Changes are made to calculated fields. With 200 records, the update takes about 2 seconds; with 300k+ records, it may mean over 5 minutes of database locking until all changes are calculated.

- Same thing applies to re-indexing. 

- What is really the show-stopper for us is MirrorSync. If any operation ever changes one flag in too many records, MirrorSync is then unable to sync the hundreds of thousands of changed records. We only need the live cases synced, not the rest of it.

 

So all these things considered, my easy out would be to split my main table, MAIN, into two: one for active records; another one for archived ones. 

So I would have MAIN_LIVE and MAIN_ARCHIVE. 

I could then MirrorSync only MAIN_LIVE across the 5 FMS instances - a mere 200 records. MAIN_ARCHIVE would live on one hub server only, and the other FMS instances would plug into that hub as an external data source. The archived cases will never need to be displayed in a list view and only one at a time will be parsed, so ping latency becomes a non-issue.

 

My issue now with that approach is how to move a record from MAIN_LIVE to MAIN_ARCHIVE once it's done; or back for revisions. 

MAIN_LIVE, like MAIN_ARCHIVE, would have about 300 fields defined. It would be a tremendous burden to script a field-by-field record-moving script from MAIN_LIVE to MAIN_ARCHIVE, and the other way, particularly because that script would need to be updated every time a field definition is changed or a field gets added.

 

Is there no way in FileMaker to simply copy an existing table and move an entire record from the original table to the copy? It sounds straightforward enough, but I don't think this function exists?

 

Thank you -

 

Data Cruncher  


Update: 

Of course, I attempted the most obvious solution, scripting a copy step: 

Copy Record/Request
Go to Layout [ "MAIN_ARCHIVE" ]
New Record/Request
Paste [ Select ]

This yields nothing useful: it pastes the contents of all fields of the copied record into the first available field of the new record. There is no field-by-field mirror paste.


The most obvious solution, IMHO, is to import the records into the archive - either directly from the "live" file, or from an interim file exported from the "live" file. Then delete the archived records from the "live" file. The tricky part here is making sure that you do not delete records that for whatever reason did not successfully make it to the archive.

If you prefer to transfer the records one-by-one, as shown in your attempt, then load the source field values into variables and use those variables to set the target fields individually. You could also store all field values in a single JSON string; this can be convenient if you want to pass the data as a parameter to a script in the other file. 
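For example, a rough sketch of the JSON variant (assuming FileMaker 18 or later for the While() function; layout, field and script names here are placeholders, and the layouts are assumed to contain only the table's own fields):

# In the "live" file, on a MAIN_LIVE layout: collect all fields into a JSON object
Set Variable [ $fields ; Value: FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) ]
Set Variable [ $json ; Value:
    While ( [ i = 1 ; json = "{}" ] ;
        i ≤ ValueCount ( $fields ) ;
        [ f = GetValue ( $fields ; i ) ;
          json = JSONSetElement ( json ; f ; GetField ( f ) ; JSONString ) ;
          i = i + 1 ] ;
        json ) ]
Perform Script [ "Archive Record (from JSON)" ; Parameter: $json ]

# In the archive file, on a MAIN_ARCHIVE layout: rebuild the record field by field
Set Variable [ $json ; Value: Get ( ScriptParameter ) ]
New Record/Request
Set Variable [ $keys ; Value: JSONListKeys ( $json ; "" ) ]
Loop
    Exit Loop If [ ValueCount ( $keys ) = 0 ]
    Set Variable [ $key ; Value: GetValue ( $keys ; 1 ) ]
    Set Field By Name [ "MAIN_ARCHIVE::" & $key ; JSONGetElement ( $json ; $key ) ]
    Set Variable [ $keys ; Value: RightValues ( $keys ; ValueCount ( $keys ) - 1 ) ]
End Loop
Commit Records/Requests [ With dialog: Off ]

Because the field list is read from the layout at runtime, the script does not need to be touched when fields are added or renamed, as long as both layouts carry matching fields. Note that all values travel as text here, so number, date and container fields may need special handling.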

Here too the major concern is to prevent accidental loss of data (as it is with any scenario of moving data).

 


Import Records is a great idea. I somehow forgot that one may script imports. I guess the import would have to be two separate ones - to the archive file and to the live file.

This does not even have to be real-time; I could probably run it overnight and have a delete script in place that only deletes the live record once the archive record is confirmed to be consistent.
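Roughly, I am picturing something like this for the overnight routine (ArchiveFlag and the file name are placeholders, and in practice the counts would be passed between the two files as script results rather than shared variables):

# Sub-script in the live file: isolate the records to be archived
Go to Layout [ "MAIN_LIVE" ]
Perform Find [ Restore ]                              # e.g. MAIN_LIVE::ArchiveFlag = 1
Set Variable [ $expected ; Value: Get ( FoundCount ) ]

# Script in the archive file: import the live file's found set
Go to Layout [ "MAIN_ARCHIVE" ]
Import Records [ With dialog: Off ; Source: "MAIN_LIVE.fmp12" ; Add ; Matching field names ]
Set Variable [ $imported ; Value: Get ( FoundCount ) ]  # the imported records become the found set

# Back in the live file: delete only if everything arrived
If [ $imported = $expected ]
    Delete All Records [ With dialog: Off ]             # deletes only the found set of flagged records
End If

Comparing counts is the bare minimum; comparing the list of primary keys or UUIDs on both sides before deleting would be safer still.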

 

Thank you.

