
This topic is 5727 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

Ok, I think I have worked out the ins and outs of moving data from one table to another on the same server. I used the SetField method and am happy.

Next hurdle on my project is moving data from one FMServer to another FMServer.

We have a WAN with multiple sub-nets. To avoid downtime, each sub-net has a "mini-server" that acts as a local server, caching records, and periodically sending the data upstream to our "main-server". Some may call this polling. Some call it replication.

The solution I had was to use a looping script for each local (mini) record and SetField[] with the related (main) record for data exchange. The script works great when testing/debugging from my workstation. The table from the mini-server passes the data to the main-server. Nice!
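For readers outside FileMaker, the looping push described above is essentially a per-record store-and-forward pass. A minimal Python sketch of the pattern (the function names here are hypothetical illustrations, not FileMaker API; failed records are kept for the next scheduled run, mirroring the retry-later behavior):

```python
def sync_pending(local_records, push_to_main):
    """Push each locally cached record upstream; return the records that failed.

    `push_to_main` stands in for the SetField-into-related-record step and is
    assumed to raise ConnectionError when the main server is unreachable.
    """
    failed = []
    for rec in local_records:
        try:
            push_to_main(rec)      # analogous to SetField on the related main record
        except ConnectionError:
            failed.append(rec)     # keep for the next scheduled run
    return failed
```

Records that fail stay queued rather than being lost, which is what makes the scheduled re-run safe.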

The problem comes when I create a schedule on the mini-server to process on its own. I get: Schedule "Upload Details" scripting error (100) at "Details : Send Record To Server v10 : Set Field"

I'm diagnosing this as the FM-Mini-Server not being able to relate the records to the FM-Main-Server.

These two tables are nearly identical in schema. But again, they reside on two different servers. I do not host both on any one server.

Could FMServer not reference related tables residing on a second server?

Have I configured the FMServer wrong?

Thanks again folks. You are the FM brain trust. We appreciate your solutions and input. You guys have saved many a project for strangers, I'm sure.

Posted

My understanding is that the engine that runs scripts within FM Server can only see locally hosted files. It cannot see files on other FM Servers.

You need to run the script from a robot computer running a full version of FM Pro.

However, I have to wonder why you are doing the multiple-server thing in the first place. The whole point of sharing databases is to avoid multiple copies, because synchronising databases is always harder than it looks.

Posted (edited)

Ok gang, here is the gig....

In this project, I have 5 "retail" stores (actually carwashes, but the software doesn't know that). Each store has a full POS, handheld data capture over WiFi, and various other Mac/Win computers, all generating and requesting data from shared data sources. Managers and executives all need to share data seamlessly between stores and from anywhere (i.e. the beach in Puerto Vallarta). These are all mission-critical chores and mission-critical devices (except the beach, unless you're the one on the beach).

To the point... I cannot depend on the Internet to be up. From my experience, even the best ISP goes down. And my experience also tells me that it will go down at peak time, for a longer duration than I can afford for my customers, both retail and internally.

So, how can I have data on a central server at corporate HQ and still ensure that the 5 stores can get and send data under all circumstances? The best solution I can come up with is to have local servers at each store that are always there and always up.

The "mini-server's" job is to cache the data that needs to go upstream and send it when the Internet is up (which is not quite 24/7, more like 23.99/7). It also downloads semi-static data daily, again as an onsite, always-live version of the data.
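The mini-server role described here is a classic store-and-forward cache: capture always succeeds locally, and the queue drains upstream whenever the link is up. A minimal sketch in Python, assuming a `send_upstream` callable that raises `ConnectionError` while the link is down (all names are illustrative, not part of any FileMaker API):

```python
import queue

class StoreAndForward:
    """Minimal store-and-forward cache: hold records locally, drain them
    upstream when the link is up. A hypothetical sketch of the mini-server."""

    def __init__(self, send_upstream):
        self.pending = queue.Queue()
        self.send_upstream = send_upstream  # raises ConnectionError when link is down

    def capture(self, record):
        self.pending.put(record)            # always succeeds, even offline

    def drain(self):
        """Try to push everything queued; keep the remainder on failure."""
        sent = 0
        while not self.pending.empty():
            record = self.pending.queue[0]  # peek without removing
            try:
                self.send_upstream(record)
            except ConnectionError:
                break                       # link down; retry on the next drain
            self.pending.get()              # remove only after a successful send
            sent += 1
        return sent
```

Note the record is removed from the queue only after the send succeeds, so a mid-drain outage loses nothing.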

I've tried single server. It was not consistent. Stores are dead if the net or server is down. And it was slow when we hit peak data traffic.

I've tried RAIC à la FileMaker Inc. That is, multiple CPUs with databases distributed or shared among them. Still ran into the Internet-down roadblock.

The best solution I've created is that a store device, ie POS, can access an onsite server to request customer info and/or send transaction data upstream. The onsite server then caches the data and burps it upstream to the main, collective server.

The store devices are smart enough to fail-over to an alternative server if the local server is down.
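The fail-over behavior described above can be sketched as walking an ordered preference list (local mini-server first) until one server accepts the record. Again a hypothetical sketch of the pattern, not the actual POS logic:

```python
def send_with_failover(record, servers, send):
    """Try each server in preference order; return the one that accepted.

    `servers` is an ordered list (local mini-server first, then fallbacks);
    `send` is assumed to raise ConnectionError when a server is unreachable.
    """
    for server in servers:
        try:
            send(server, record)
            return server
        except ConnectionError:
            continue                 # that server is down; try the next
    raise ConnectionError("no server reachable")
```

Keeping the local server first in the list means the normal path never touches the WAN.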

All of this is done in FileMaker 6, 6 Unlimited, CDML, VPN and C++. Running on Windows, Palm OS, and MacOS platforms in a mixed environment. A really mixed environment!

It works darn well, mostly. There are some limitations in FMP 6 that are annoying (the 2 GB file size limit, and sporadic/spurious crashes). And that's not counting that it's high-tech circa 2003. Hence the drive to convert to a modern FileMaker.

We skipped FMP 7 & 8 because those versions were not FileMaker Inc.'s best products and were constantly in flux. I finally got my corporate approval during FMP 9 last summer. I got the POS and other devices all working with PHP in lieu of CDML (quite a rewrite). That's when they released 10.

Now I get to the onsite/mini-server part of the project. I'm finding out that there are some really subtle but profound limitations and quirks in the FMP Server 10 scripting that previous versions were oblivious to (read: version 6). Like relating data with almost any other shared FileMaker installation. I know that some of my issues have been here since 9. But nowhere is there a collective knowledge base about the evolution/devolution of FMP Server. Or, if there is, it is not readily accessible or easy to find.

Please, if someone has alternate ideas speak up. I will entertain anything. I know this post is long, but maybe I can spur some needed knowledge exchange.

- Rant Warning -

It's frustrating! You can only spec out so much of a project before eventually jumping in and starting to develop. Unfortunately, we developers need to be light on our feet when unexpected roadblocks get in our way.

But who would have expected that one FileMaker Server could not access or relate data on a second FileMaker Server? That is the reality I guess. But what's the reasoning? I mean, there is no further advanced product path? FileMaker Server Advanced still has this limitation? And there is no FMS Advanced-Advanced? So it's not a money thing. Is it really a technical issue?

I can't be the only developer in the world wanting to do this. Have I outgrown FileMaker? What do other installations like mine do to handle data flow?

Edited by Guest
Spelling & grammar
Posted (edited)

Perhaps you want to investigate synching multiple servers. The answer to that is Syncdek.

However, "I cannot depend on the Internet to be up," to me means that any interoffice communication will fail (one server, synched servers, ...) and you have no way to guarantee that each office has exactly the same "live" data.

Edited by Guest
Posted (edited)

Perhaps you want to investigate synching multiple servers. The answer to that is Syncdek.

Thanks! I will check it out.

However, "I cannot depend on the Internet to be up," to me means that any interoffice communication will fail (one server, synched servers, ...) and you have no way to guarantee that each office has exactly the same "live" data.

In my situation, during working hours, the data flows upstream from the remote servers to the main server. Any data that flows downstream can be batched at night. The downstream batch is retried several times during the night if not successful.

I know it's not "true, live" synching, but it has been tweaked for our situation. For example, a new customer/member at store #1 is registered at that store immediately, his data is uploaded to the main server within minutes, and all other stores have his data by the next morning. An acceptable policy in this situation.

The executives and managers can still check "live" data on the main server to get aggregate sales totals, etc.

Edited by Guest

