
This topic is 5722 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

Hi

I have created a runtime solution that works just fine. When you export the data, you can delete everything in the runtime, import that data back in again, and everything is as it was. However, when another user of the same runtime on a different machine tries to import the same file, the data imports inaccurately and is unusable. This happens regardless of the file type (tab-separated, CSV, etc.).

Anybody know what might be causing this?

Posted

Auto-update serial numbers need to be reset.

Take this example: your copy of the solution has been used a lot and the serial numbers are up to 600. The set of records contains serial numbers in the 500-600 range. When you make new records the serial numbers will be 601+.

The other solution has not been used as much and the serial numbers are only up to 500. When the records are imported into this solution they are in the 500-600 range, but when new records are made they'll have *duplicate serial numbers* because the next serial number will be 501+.

You'll need to script the process to update the serial numbers to ensure they don't duplicate.
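The collision described above can be sketched in plain Python (this is a simulation for illustration, not FileMaker code; the numbers mirror the example in the post):

```python
# Simulation of the duplicate-serial trap: two copies of the same
# solution, each with its own auto-enter serial counter.

def next_serial(counter):
    """Stand-in for FileMaker's auto-enter 'next serial value'."""
    value = counter["next"]
    counter["next"] += 1
    return value

# Source copy: heavily used; records with serials 500-600 were exported.
exported_ids = set(range(500, 601))

# Target copy: lightly used, its counter is still at 501.
target = {"next": 501}

# After importing, a brand-new record collides with an imported one:
new_id = next_serial(target)          # 501 -- already used by an import
assert new_id in exported_ids

# The scripted fix: bump the target's counter past the highest imported
# serial (the job of the Set Next Serial Value script step).
target["next"] = max(exported_ids) + 1
assert next_serial(target) not in exported_ids   # 601 -- no collision
```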

BTW this is a very common issue, and one of the "traps" of using auto-entered serial numbers. People have at various times created serial numbering schemes that include date, time, ip number and random strings to prevent this problem, but they inevitably introduce other problems themselves.

Posted

Hi Vaughan

Many thanks for your reply. I specifically didn't export any keys to avoid this problem but am obviously going wrong somewhere or misunderstanding the process. The problems I have are consistent with serial numbers not tallying up so you're obviously right.

Do I need to reset any auto enter serial numbers on the import script and to what value? I get the same problem even if the target runtime is a fresh install. Matters aren't made any simpler by the fact I've had to use multiple tables with one to one relationships

Posted

If you don't import keys, how are all of the existing relationships going to work? You've gotta import everything.

This is the basic process: the "source" refers to the records that you are importing from; "target" refers to the table that the records are being imported into.

1) In the source table, open the field definitions and make a note of all of the "next serial number" values for all auto-entered serial number fields.

2) Import the source records into the target database. Import all fields including the auto-entered serial numbers.

3) In the target database, open the field definitions and set all of the next serial numbers to those noted in step 1.

Much of this can be automated with scripts - for instance, there are functions to calculate the next serial number values directly -- but if it's an infrequent task I would not bother.
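As a sketch of how the three steps could be automated, here is the same process with plain Python dicts standing in for the tables (in a real FileMaker script, GetNextSerialValue would read the counter and the Set Next Serial Value script step would write it back):

```python
# Sketch of the three-step import process using dicts in place of tables.

def import_records(source, target):
    # Step 1: note the source's "next serial number".
    noted = source["next_serial"]
    # Step 2: import everything, serial keys included, matching on the key.
    for rec in source["records"]:
        target["records"][rec["id"]] = dict(rec)
    # Step 3: set the target's next serial to the value noted in step 1
    # (guarding against the target's own counter already being higher).
    target["next_serial"] = max(target["next_serial"], noted)

source = {"next_serial": 601,
          "records": [{"id": 500, "name": "A"}, {"id": 600, "name": "B"}]}
target = {"next_serial": 501, "records": {}}

import_records(source, target)
assert target["next_serial"] == 601
assert sorted(target["records"]) == [500, 600]
```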

Posted

Hi

I've played around with this for a couple of days and made some progress. It turns out I had forgotten to reset the serial numbers before I created the runtime, so I fixed that, and also wrote scripts to set all the serials to the same value as the highest one in the source database after every import. This has led to far fewer errors, but hasn't eradicated them completely.

The solution is for a teacher assessing children in their school. There are a large number of different assessment criteria over three different subjects. So, to make things more manageable I created 4 tables - Children for name, gender etc., Reading Results, Writing Results and Maths Results - all with one-to-one relationships. Whenever a new child is entered, scripts create related subject records, link them to the Children table and copy the child data into identical fields within each record. Then, when the teacher does the assessment, he fills in whichever related records are relevant. The user has the choice to import or export reading, writing or maths.

To import, the user selects which set of data to use, and that is imported into the relevant table (reading into Reading Results, etc.), updating matching records based on name and adding any others as new. If the child whose data is imported is already in the solution, then that record in the found set will have a key. If it's new, then the key field will be blank. Yet more scripts loop through the imported records to check for this: if there is a key, the new data overwrites the old; if there is no key, a new record is created in the Children table, the child data is copied across, and a new key is created and copied back to the imported record, linking them.
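That key-checking loop can be sketched as follows (a minimal simulation with made-up field names, not the actual FileMaker script):

```python
# Imported result records either carry a key (child already exists:
# new data overwrites the old result) or a blank key (create the child,
# mint a new key, and copy it back to link the two records).

children = {1: {"key": 1, "name": "Alice"}}      # existing Children table
results = {1: {"key": 1, "score": 3}}            # existing results table
next_key = 2                                      # next auto-enter serial

imported = [
    {"key": 1,  "name": "Alice", "score": 7},     # matches existing child
    {"key": "", "name": "Bob",   "score": 5},     # new child: blank key
]

for rec in imported:
    if rec["key"]:                # key present: overwrite the old data
        results[rec["key"]] = rec
    else:                         # no key: create and link a new child
        rec["key"] = next_key
        children[next_key] = {"key": next_key, "name": rec["name"]}
        results[next_key] = rec
        next_key += 1

assert len(children) == 2          # Alice untouched, Bob created
assert results[1]["score"] == 7    # Alice's old score overwritten
```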

As convoluted as this is, it works, apart from the fact that every now and again FileMaker behaves strangely when it imports the records. Say I have 6 children in the database, 3 with a complete set of reading, writing and maths data, and 3 with just maths data. When I go to import the writing data for the second 3 kids, i.e. a CSV file with 3 records from the writing table, FileMaker occasionally throws up a found set of 4, and it's overwritten one of the records that was already there with one of the imported sets of data. So, when you go back to the Children table, one child has vanished and two copies of the same child appear.

It's baffling, as I can't consistently recreate it. I'm not sure if it's to do with the way the records are related, but despite now having over 5,500 fields, I'm considering moving it all back into one big table.

Anyone able to shed any light on this?

Posted

"... despite having over 5500 fields..."

I don't think I've defined that many fields in my entire life.

Posted (edited)

"also wrote scripts to set all the serials to the [color:red]same value as the highest one in the source database after every import"

Are you sorting this table after import, then going to the highest ID and using Set Next Serial Value (keyfieldID + 1)?

Edited by Guest

Posted

However, when another user of the same runtime on a different machine tries to import the same file, the data imports inaccurately and is unusable.

I have a hard time seeing why runtimes should be used for anything but demoing a solution to a prospective client ... your solution seems way into server/client territory ... it's almost penny-wise, pound-foolish!

Please see if you can get a discount, as the kind of user you are, on say a 5-user package:

http://www.amazon.com/FileMaker-Pro-User-License-Pack/dp/B001ONSXOC

... and your hourly salary must be included in the equation. My take here is that the very thought of syncing runtimes is a quixotic endeavour.

--sd

Posted

I believe that you don't need as many fields in Australia as we do in the top half of the world!

Truly, that's a lot of fields. Bet they're named Name1, Name2, etc.

Posted

that's a lot of fields. Bet they're named Name1, Name2, etc.

I'll take that bet.

The solution is for a teacher assessing children in their school. There are a large amount of different assessment criteria over three different subjects.

Posted

lol, bcooney's the closest, guys. Just to give you an idea, a small sample would be:

Reading L1 AF1a Detail 1

Reading L1 AF1a Detail 2

Reading L1 AF1a Detail 3

I used the GetNextSerialValue function to set a variable to the highest value, then used the Set Next Serial Value script step to copy it to the other tables.

In light of this experience, I think quixotic is probably the best way to describe syncing runtimes, as Søren says, although I'm sure it could be done eventually with these one-to-one relationships. I would never attempt anything else, for fear of my sanity.

Unfortunately, because teachers do a lot of marking at home and classrooms are poorly networked, a runtime installed on a laptop is the only viable option, with the feature to import/export data between colleagues. I'd have finished by now if I could have used a server/client model. Ho hum...

Posted

I am not sure I follow this fully, but it seems that several teachers are creating new records, all starting from the same serial number. A possible solution to this would be to assign a unique prefix to each user.
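The prefix idea can be sketched like this (the prefix values are made up for illustration):

```python
# Each user mints serials with their own unique prefix, so keys created
# independently on two machines can never collide, even though both
# counters start from 1.

def make_key(prefix, counter):
    key = f"{prefix}-{counter['next']}"
    counter["next"] += 1
    return key

teacher_a = {"next": 1}
teacher_b = {"next": 1}

a_keys = [make_key("A", teacher_a) for _ in range(3)]   # A-1, A-2, A-3
b_keys = [make_key("B", teacher_b) for _ in range(3)]   # B-1, B-2, B-3

assert set(a_keys).isdisjoint(b_keys)
```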

BTW, no matter who won the bet, you should still have a separate RECORD for each assessment.

Posted

Each pupil SHOULD have their own record - but assessment results shouldn't be a part of it. I'd suggest you have a look at the following threads:

http://fmforums.com/forum/showtopic.php?tid/180113/

http://fmforums.com/forum/showtopic.php?tid/183639/

http://fmforums.com/forum/showtopic.php?tid/195086/

Posted

Hi Comment.

I agree. I originally built it with four tables: Children, Reading Results, Writing Results and Maths Results. Everything worked perfectly when I tested it on my machine. However, when I created the runtime (for the reasons stated above), installed it on another machine and tried to import some data exported from my original, it never linked the tables up properly, due to a problem either with my scripts or with the serial numbers not incrementing properly. Either way, it had become too complex to work through, so I'm now putting everything together in one big table. The tables had one-to-one relationships anyway, as each child always has one set of each of the results, so despite making the whole thing cumbersome and unwieldy it should ultimately work.

Posted

Well, the question is what you are recording all this data for. I don't think there's much you can do with it, in terms of getting meaningful results, the way you have it now.

BTW, if each individual result is a record (as it should be), then one child has many results. And they (the results) should be all in the same table.
