

Posted

I'm beginning the migration process to FM7 and, untrue to form, I'm actually reading the documentation.

I produce a solution that I sell to multiple users at multiple sites around the USA. Updating the solution is a real pain as it requires lots of imports, and/or the use of a "reports" file that draws data from all the other files to create reports.

So in upgrading to FM7, my plan has been to divide my solution into two files. One file with the data, and the other with the layouts and scripts. I really love this idea because, as noted, the workaround to produce anything like this structure in FM6 is tedious at best.

Reading the migration documentation, there seems to be quite a lot of emphasis on dividing data, layouts and scripts, and the "business logic" into separate files. The business logic is essentially the complex calculations performed on the raw data.

Theoretically, this makes a great deal of sense. But practically, I can't really see how to implement it in a FileMaker context. In Access I can divide the data (tables) from the layouts (forms) from the business logic (queries). Calculations can be built into queries because a query doesn't have a fixed number of records.

But the query doesn't really have a direct counterpart in FileMaker -- calculations are built right into the data table.

So how would one go about separating data from calculations?

One thought is to make sure that every table in the raw data file has a serial number field, and then create a dummy table in a 2nd "business logic" file that also has serial numbers. Relate the serial numbers to one another, and build the calculations into the dummy table.

But you still need to make sure that the dummy table has enough records so that any serial number in the raw data file/table has a counterpart. How big is big enough? Just for fun I created a file with 1 table and 1 field -- a sequential serial number. A million records produces a file about 18MB in size, which isn't really too bad.

But what if a data table has more records than that? The record with serial number 1,000,001 in the data table won't have a counterpart in the dummy table. The creation of more records in the dummy table can be automated so that it will always have at least as many records as a raw data table. This would, I think, require the use of the Max() function, which can be slow.
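Just to make the idea concrete, here is the sort of update script I'm imagining (untested, and the names are placeholders -- "AllData" would be a relationship from the dummy table to the data table over a constant key so Max() can see every data record, and the dummy table's serial is assumed to be a gap-free auto-enter serial starting at 1):

Go to Layout [ "Dummy" ]
Show All Records
Set Field [ Globals::g_MaxID; Max( AllData::SerialNumber ) ]   # highest serial in the data table
Loop
    Exit Loop If [ Get( FoundCount ) >= Globals::g_MaxID ]     # dummy table already covers every serial
    New Record/Request                                          # auto-enter serial supplies the next number
End Loop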

So is anyone else wrestling with this issue? Am I missing something obvious? Just hoping to get the basic architecture right the first time.

Thanks for any thoughts and insight,

Dan

Posted

I think you are close. I have been working on a set of files with the separation model. I have followed the lead in Migration Foundations and Methodologies. I have 3 files, Data, Business Logic & Reports (BR), and Interface.

I try to keep data just that. I use one-to-one relationships to tables in BR. All calculations are in the BR tables. I have some relationships as required by the calculations and reports. All report layouts are in this file. When I create a new record in Data, I create one in BR. I have scripts in BR that will create a set of records to match those in the data file when I do an update. The interface file has no tables and therefore no fields. It does have relationships as required for the interface.
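For example (table and field names made up), the one-to-one match and a typical BR calculation look something like this:

Relationship:   Data_Invoices::InvoiceID  =  BR_Invoices::InvoiceID

Calculation fields defined in BR_Invoices, reaching across that relationship:
    cTotal    =  Data_Invoices::Subtotal + Data_Invoices::Tax
    cOverdue  =  If ( Data_Invoices::DueDate < Get ( CurrentDate ) and Data_Invoices::Balance > 0 ; "Overdue" ; "" )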

Posted

Thanks, Ralph. I think the part about this that I don't like is "when I do an update". That is, the user has to trigger a script to make sure there are matching records in the Data file and the Logic file. If additional records are created by another user in the Data file, the Logic file will be out of date until the update script is run. This will require knowing the maximum ID number in the Data file. The Max() function is a bit slow. Would it help at all to sort the Data file by ID, go to the last record, and grab the ID number? Or, internally speaking, is Max really just the same as a sort?
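Something along these lines is what I'm picturing (untested; assumes a layout based on the Data table and a sort order of ID ascending stored in the Sort Records step):

Go to Layout [ "Data" ]
Show All Records
Sort Records [ Restore; No dialog ]          # by Data::ID, ascending
Go to Record/Request/Page [ Last ]
Set Field [ Globals::g_MaxID; Data::ID ]     # highest ID now in hand
Unsort Records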

Also, is there any advantage to putting every single calc in the Logic file? If I have two fields, FirstName and LastName, and I want a calc:

Name = LastName & ", " & FirstName

Is there really any sense in moving this out of the Data file and into the Logic file?

Also, in the Logic file, are you creating one table to correspond with every table in the Data file, or are you putting all of the calcs, no matter the related Data table, into one table in the Logic file?

Thanks,

Dan

Posted

I think my script that makes a record to match every record in the data file is very simple compared to what was necessary pre-7.

I tried to keep it pure data in the data file. If you're going to have a bunch of calculations in BR, what's one more? In this case (Name = LastName & ", " & FirstName) you could make it an auto-entered calculation.
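That is, define Name in the data table as a plain text field with an auto-entered calculated value, something like:

Name  (Text; auto-enter calculation; "Do not replace existing value" unchecked, so it updates when the name fields change)
    =  LastName & ", " & FirstName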

I made a table in BR for each data table that needs calculations, with a one-to-one relationship. Since you can have many tables in one file, I don't see an advantage to trying to make this one table.

We are still learning. I started over several times.

Posted

# Rebuilds the Target (BR) table so it has one record to match every record in the Source (Data) table
Go to Layout [ Target ]
Show All Records
Delete All Records
Go to Layout [ Source ]
Show All Records                                          # make sure every Source record is in the found set
Go to Record/Request/Page [ First ]
Set Field [ Globals::g_Counter; 1 ]
Set Field [ Globals::g_ID; Source::ID ]
Loop
    Go to Layout [ Target ]
    New Record/Request
    Set Field [ Target::ID; Globals::g_ID ]               # copy the matching key into the new BR record
    Set Field [ Globals::g_Counter; Globals::g_Counter + 1 ]
    Go to Layout [ Source ]
    Go to Record/Request/Page [ Next; Exit after last ]
    Set Field [ Globals::g_ID; Source::ID ]
End Loop
