
This topic is 7127 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

Hello,

I have been having problems with the find feature in my database. I hope my problem is simple (and the solution as well).

To duplicate a set of Line Items, I have a summary field, "Job ID# to be matched" (the value is pasted into one record and then shows up in all of them).

From that, a calculation field, "Correct Job", checks whether the value in the "Job ID# to be matched" field is the same as the "Item Job ID#" for that record; if so, it returns 1, otherwise it returns 0.
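For reference, the calculation described here could be written something like this (field names taken from the post; in older FileMaker versions the parameter separator is a comma rather than a semicolon):

If ( Job ID# to be matched = Item Job ID# ; 1 ; 0 )

Since the equality comparison itself already returns 1 or 0, the If() wrapper could even be dropped and the calculation reduced to just "Job ID# to be matched = Item Job ID#".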

So, when I perform a find for "Correct Job" = 1, it returns the right records... sometimes. Usually, when I first start up FileMaker, it performs the find correctly once. If I then change the universal value in "Job ID# to be matched" to another ID#, the "Correct Job" calculation updates and sets 1 for the new set of Line Items that match. However, when the find for "Correct Job" = 1 is performed again, it returns the records that were found the previous time, even though that field now has a value of 0 in them. If I quit FileMaker and re-open it, the find works correctly again, returning the records with a value of 1.

Is this something to do with finds from calculation fields, a bug, or what?

I'm sorry if this is unclear, but it is difficult to describe. Any help would be great.

Posted

The problem probably is that the summary field is not updating.

Try this: after you make the change to the data, enter preview mode, then return to browse mode. Then perform the find.
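If the preview-mode trick works when done manually, it can also be scripted. A minimal sketch (the restored find request on "Correct Job" = 1 is assumed from the first post, not something stated here):

Enter Preview Mode
Enter Browse Mode
Perform Find [ Restore ]    (stored request: Correct Job = 1)

Entering preview mode forces the summary field to evaluate, so the calculation is up to date when the find runs.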

I'm not quite sure what you're doing, but your use of the summary field seems unusual, and not necessary for the duplication of related records.

Posted

You mention an easier way to duplicate related records. If you have a different way, it might make this a lot easier.

Basic problem is this:

"Job" record with around 20 related "Job Item" records in another table. I want to be able to create a new Job record (different dates, different clients, etc.) and also duplicate the 20 related records, so that they can be changed without affecting the original 20 related records.

If you have an easier way, or any suggestions, I would love to hear them. The way I came up with, frankly, bites.

Posted

Generally the process is to create the new record and put its key in a global; then (one table at a time) use the Go to Related Record script step to "find" all the related records, and use a looping script to duplicate each record and change the duplicate's foreign key to the value in the new master record's global.

The script in the related table is:

Sort [ key field ]
Go to Record/Request [ First ]
Loop
  Duplicate Record/Request
  Set Field [ keyfield, master::gKeyfield ]
  Go to Record/Request [ Next, exit after last ]
End Loop

It is important that the records are sorted before starting the looped duplication (the actual field they are sorted by is immaterial): in a sorted found set, new records are inserted straight after the current record, whereas in an unsorted set each new record is created at the end of the set, where the loop would eventually reach it and start duplicating the duplicates.
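On the master side, a calling script could look something like this. This is only a sketch under the two-file setup the thread implies: gKeyfield and the script and table names are illustrative, "Job ID" is assumed to be an auto-enter serial number, and the exact behaviour of these steps varies by FileMaker version.

Go to Related Record [ Show only related, "Job Items" ]
Duplicate Record/Request
Set Field [ gKeyfield, Job ID ]
Perform Script [ External: "Duplicate related items" ]

The Go to Related Record step isolates the original job's line items as the found set in the related table; Duplicate/Set Field then create the new Job record and capture its new serial key in the global, and the external script is the loop shown above, running on the found set that Go to Related Record left behind.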

No summary fields required, nothing too complicated.

But I have to question any data model that requires duplicating so many records...

