
Something to Watch Out For...


This topic is 6627 days old. Please don't post here. Open a new topic instead.


At the risk of being redundant, repetitive, and repetitious:

Given a two-file Separation Model, with a layout in the UI file that displays fields from a table in the Data file.

When adding fields to different copies of the Data file, the creation order and the number of fields created must be exactly identical for the fields to function/display properly in the UI file.

Why? Internally, FileMaker's field references are unique serial #s, much like the serial #s used for primary keys. When a field is put on a layout, FileMaker internally stores the field's serial # with the layout's info. In a multi-file solution, these serial #s may end up referring to different fields if one copy of a file is modified differently than another.

Here's an example of what can go wrong:

1) The developer creates fields t_Foo and t_Goo in order in the developer's copy of Data, then places them on a layout in the developer's copy of UI. Everything is tested & it's time to update the client.

2) The client's UI file is replaced with the developer's UI file, and the client files (UI and Data) are opened.

3) The client calls with the inevitable change order: two more fields are needed.

4) The developer creates t_Boo, t_Hoo, t_Foo, and t_Goo in that order in the client's Data file.

5) The fields on the client's UI layout do NOT refer to t_Foo and t_Goo, they refer to t_Boo and t_Hoo!

Why? Because when t_Boo and t_Hoo were created in the client's copy of Data, they got the next serial # available, which were the same two serial #s t_Foo and t_Goo got when created in the developer's copy of Data.
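The mechanism described above can be sketched as a toy model (plain Python, purely illustrative; the class and IDs are not FileMaker's actual internals). Each table hands out a monotonically increasing serial # to every new field, and a layout stores the serial #, not the field name:

```python
# Toy model of internal field serial #s (illustrative only, not
# FileMaker's real storage format).
class Table:
    def __init__(self):
        self.next_id = 1
        self.fields = {}          # serial # -> field name

    def create_field(self, name):
        fid = self.next_id        # next available serial #
        self.next_id += 1         # highest # used is never reused
        self.fields[fid] = name
        return fid

# Developer's copy of Data: t_Foo and t_Goo get serial #s 1 and 2.
dev = Table()
foo_id = dev.create_field("t_Foo")   # serial # 1
goo_id = dev.create_field("t_Goo")   # serial # 2

# The UI layout (built against the developer's copy) stores the #s.
layout = [foo_id, goo_id]            # [1, 2]

# Client's copy of Data: fields created in a different order.
client = Table()
for name in ("t_Boo", "t_Hoo", "t_Foo", "t_Goo"):
    client.create_field(name)        # serial #s 1, 2, 3, 4

# The layout's stored serial #s now resolve to the wrong fields.
print([client.fields[fid] for fid in layout])  # ['t_Boo', 't_Hoo']
```

The layout still holds serial #s 1 and 2, so against the client's copy it displays t_Boo and t_Hoo, exactly as in the example above.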


I was thinking (usually my first mistake..) about ways the Separation Model could result in screwed up files. I knew FileMaker must store a reference to fields in different files/tables, and I assumed it would be a unique serial # (since names can be duplicates). If so, synchronization of fields' ref #s in different files is an issue, so I tested this and voila!

IIRC, I read that FileMaker uses a serial # sequence when creating fields, and keeps track of the highest # ever used (so a # can't be reused).


My two cents on this: the separation model really only works if the backend structure is moderately stable. If you're changing the backend data structure regularly, the whole point of this model is lost, because you have to (or *should*) find a way to distribute the new structure to the client.

While this example raises valid concerns, I would *never* change my backend data structure and *not* propagate that new structure to my clients, since I can never be sure that somewhere down the line I might make use of the new fields in some UI relationship, script or layout.

David


The point you made is right on target, but has not really been adequately discussed (at least that I have read). I did consider changing my structure to a separation model, but I always felt what you just expressed: changes to the data structure kill the utility of this model.

Steve


I keep looking at the Separation Model, and IMHO it isn't worth it in most instances. I can see it being valuable if some combination of the following are important:

* Data is 2 or more orders of magnitude larger than UI

* Slow network speed between developer & database when there's no VNC/RDC/Timbuktu to import data on site.

* Downtime has to be minimal 24/7 (no time to import)

* Multiple servers necessary for speed or location, so the UI is on one server and the data on another.

I'm an anally-retentive type-A personality perfectionist (a bit too close to OCD...), but I'm not a purist. Adapting to reality is more important to me than conforming to ideals. FileMaker is a tool, and I prefer to use the tool in the best way possible, which often shortcuts purist ideals.

Good idea to keep in mind as we all debate: there's more than one right way to do a job (right being defined as conforming to computer science theory, being deterministic, clean design with no spaghetti code, and as bug-free as possible). I'll stop now...


Actually, one of the most important advantages of this model is when you have a single solution that is deployed with multiple clients.

I have a package that I built for one client... they mentioned it to their neighbors, who also wanted it, but with a few changes... who mentioned it to some of THEIR friends, who wanted it--but, with a few changes.

Most of these changes have to do with business rules and output--not the data structure.

I would keep copies of the different client packages, but then if I made a change in one, I'd have to change all three. And keep track.

Then, Customer A would call with a bug, and we'd work out the fix over the phone. And I'd have to figure out how to get the update to the others.

The data structures were just complicated enough (in FM6, 12 files) that creating scripts to clone, copy, update and manage the upgrades got very messy and very touchy. The files were big enough that emailing the files wasn't an option, and my clients (farmers) were not high-tech types who could do it some other way.

In the end, I'd end up having each client send me their files, I'd do the migration, and send them all back, with associated down time on their part.

Then I'd find out I'd made some stupid mistake, and I'd have to re-do the whole process.

Now, using the separation model in FM7, I can quickly send out a replacement interface file to my clients (under 2Mb), which gives them new functionality without mucking with their data. It makes small maintenance updates MUCH more viable, which makes me look much more responsive to my clients' needs (not that I wasn't, mind you, just that it looked like I wasn't), which is good for everyone.

Cheers,

David


  • 1 month later...

The Separation Model seems a disaster waiting to happen.

Consider the following scenario:

A 2 File solution

Data file stays at version 1 (if you will)

Interface/function version gets upgraded until version 20

The Problem

Experience tells me that improving the functions of the database usually requires modifications and additions to the data structure. Tinkering with the interface is never enough. Implementing these changes in either the interface or an additional data file is merely asking for trouble, perhaps not immediately, but certainly when the data file itself needs upgrading - and this will eventually be the case.

While interface changes are easier for you and more visible to the client, I can only see a point further on when a data upgrade becomes necessary and all those functions need to be made resident in the data file and removed from elsewhere. The data sep model only seems to delay the headache of a major upgrade, and makes it slightly worse by doing so.

Add to this the pain of tracking which versions of which files the client is using. This is easy enough until the data file needs to be updated - and this will eventually happen. Then do you reset the interface version to 1, or code so the v21 interface expects to find a v2 data file (and vice versa)? Do you keep those files accessible to the client in the event of an upgrade problem? Oh God, the madness of all that file tracking!

I'm finding it hard to see the logic for the data sep model as it seems a giant step backwards. Trying to reduce the agony of data upgrades from fussy clients is avoiding the problem: the fussy clients!


PJ--

Your experience and mine are different. I have found that I am able to make substantial changes to the interface without resorting to data structure modification.

I think your issue about datafile versions is interesting, and will probably look into adding (eventually) a datafile field for the version of the datafile (as opposed to the version of the interface), as well as some sort of version testing in the interface file to check for frontend/backend compatibility.
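The frontend/backend compatibility test mentioned above might look something like this. It is a Python sketch of the logic only; in FileMaker it would be a startup script comparing a version field in each file, and the version numbers and pairing table below are invented for illustration:

```python
# Hypothetical pairing table: which data-file versions each
# interface version can open (numbers are made up).
COMPATIBLE = {
    20: {1},        # v20 interface only knows the v1 data file
    21: {1, 2},     # v21 interface handles old and new data files
    22: {2},
}

def check_compatibility(ui_version, data_version):
    """Raise if this interface/data file pairing is unsupported."""
    supported = COMPATIBLE.get(ui_version, set())
    if data_version not in supported:
        raise RuntimeError(
            f"Interface v{ui_version} cannot open data file v{data_version}"
        )
    return True

check_compatibility(21, 1)   # ok: v21 supports v1 and v2 data files
check_compatibility(22, 2)   # ok
# check_compatibility(20, 2) would raise RuntimeError
```

Letting one interface version span two data-file versions (as v21 does here) also answers pj's question about whether to reset the version numbering: the pairing table, not the numbering, carries the compatibility information.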

Your concerns about the difficulty of managing changes to the datafile are valid, and other writers in this thread have discussed precisely that question. I'm not clear how keeping your data and interface intermingled in one file simplifies the upgrade process, except that it reduces the number of operating system files by one.

That said, there are three areas in FileMaker where I've had trouble that warrant further discussion here: global fields, auto-enter calculations, and calculated fields.

Global Fields:

In earlier versions of FM, I had globals in each file to allow me to hand data from one place to another. In many of these files, I had built relationships, calculations, auto-enter calcs and scripts on these globals. With version 7, most of these globals are unnecessary and can be eliminated because I have all the scripting and relationship structure in the interface file. In my current SM app, I have a globals table in the interface file, and am trying to use those variables in all scripts and relationships.

Eliminating legacy globals in the data file is a long-term process; I will wait until I am sure all these globals are redundant before removing them (which will force my users to migrate their datafiles).

Auto-enter Calculations:

FM makes it possible to auto-enter values for fields, and I've made use of this to set the content of new records to values stored in global fields. This approach made my scripted creation of records shorter and more readable and allowed information from one file to be pushed into another before record creation, but has had unfortunate repercussions, in that the global fields in the datafile are slated to go.

I am now of the opinion that auto-enter calculations are not a good idea, especially since I create most of my records via scripts. My interim solution will be to put the set field entries into my scripts so that when the next datafile version is pushed out, I can eliminate the globals from the auto-calculations without repercussions.

Calculated Fields:

This is the big one. I have found no simple way to eliminate calculated fields from the datafile; others have suggested a "Business Rules" file, but I haven't been able to get my head around it. I have tried to build a Calcs table in the interface file, but have not figured out how it could be implemented, since calculation fields have to be evaluated from the right context.

David


Hello again T-square.

My main point regarding the separation model is this: the data structure will EVENTUALLY need upgrading. Period. End of. Can't avoid it. Clients are fussy, your solution will contain errors, new operating systems will poo on your database in unexpected ways, and so on.

As there is no easy upgrade option in a one-file solution, what benefit does the database programmer get from having to upgrade many files, albeit the interface file more regularly than the data file? I just don't get it, as it seems like more work for you.

In my solution the "upgrade" option (using that term very loosely) amounts to a colossal export script, followed by a bit of jiggery-pokery (possibly to be replaced with a Troi plug-in; AppleScript seems up to it for now), followed by a different colossal import script, lastly followed by an update script that checks and corrects certain variables. (On which note, I would like to lead the call NOT to use serial numbers.)

With regards to global fields... I tried using a global field table in my solution and it just got in the way and became a bit redundant. As most scripts are table-dependent, my solution, like yours, has globals in the major tables, and I haven't had a global table as such for quite a while.

With regards to auto-enter calcs... I think they are very useful for entering default values (if the field is empty, say) while still allowing an overwrite. It depends whether you hardcode the variables yourself - in which case the auto-enter isn't needed and the value entry should occur via the script - or, as in some parts of my solution, the client determines the default value, which is therefore a global that does change.

Lastly, with calc fields - can you separate context-dependent calculations from their context and expect them to work? Do fish breathe well out of water? Again, I just don't get what the point is. It's more work, for what benefit? Is there a contest going on to see who can separate out their data to the most extreme?


pj--

You don't seem to be getting what I'm saying. Your points about the data structure changes are on the mark. My point is that *in my experience* the interface changes at a much more rapid rate than does the data structure, and since it does, the SM makes it easier to distribute such changes.

As I explained in this thread back in October, it's my experience (I've been using the SM on a major app with 3 clients for a year now) that managing interface changes is light-years easier now than before. I no longer have to use the kind of nightmarish scripts you describe (and I used to have) to create an upgrade for my clients.

I no longer have marathon tech support calls with my clients where I have to talk them through changing the contents of a script to fix a problem they're having. They call with a problem, I look into it and make necessary changes, and I email them the replacement interface file. They copy that file into their database folder, and they have the fix. I focus on the problem; they focus on their actual work.

I no longer have to hold back interface fixes until there are enough of them to warrant the headache of putting out a new version of the software.

At this point, I don't have a lot more to offer on the topic. You've clearly got your own approach to the situation, and I'm glad it works for you. I will gladly keep on down my primrose SM path, as it really has made my development life easier.

Cheers!

David


  • 2 weeks later...

In response to the original post of this topic, it's a non-issue if you know what to expect. I.e., there is a certain order to making schema changes, and that is "bottom up": define your fields first, and your value lists, and your relationships; create your blank layouts; bring in your scripts; and finally, paste your layout elements in place. You'll find all scripts attached to buttons automatically, and everything else in place. FileMaker does associate elements by name if those elements exist.


  • 2 months later...

I am also still experimenting with the sep model, but have found it incredibly useful so far.

As T-Square said, the interface tends to change much more frequently than the data. It is true that an update to the Data file is often needed at some point, but the sep model allows you to do many easy interface modifications between data file updates.

I have also gotten into the practice of including some placeholder fields in the data file like:

text1

text2

text3

num1

num2

num3

date1

date2

date3

container1

container2

container3

This allows you to do some expansions to the datafile functionality without having to update the original datafile (and avoids the issue brought up in the original post). Then, during a major update, you can just rename the fields in the datafile. It requires that you are diligent about recording which values you associate with which placeholder fields, but this can be done easily with the comments in Define Fields.
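The bookkeeping for repurposed placeholders can be kept very simple. Here is a Python sketch of such a record (the assignments and version notes are hypothetical examples, mirroring what you would keep in the field comments in Define Fields):

```python
# Hypothetical map of placeholder fields to their assigned meanings.
# None means the placeholder is still free.
PLACEHOLDER_MAP = {
    "text1": "customer_nickname",   # example: assigned in UI v12
    "text2": None,
    "num1":  "discount_percent",    # example: assigned in UI v14
    "num2":  None,
    "date1": None,
}

def next_free(prefix):
    """Return the first unassigned placeholder of the given type."""
    for name in sorted(PLACEHOLDER_MAP):
        if name.startswith(prefix) and PLACEHOLDER_MAP[name] is None:
            return name
    return None                     # all placeholders of that type used

print(next_free("text"))            # text2
print(next_free("date"))            # date1
```

When a major update finally lands, the map doubles as the rename list for the real fields in the datafile.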

I am still trying to work out how to elegantly get all calcs in a logic file though...

-Raz

