Ocean West


Ocean West last won the day on September 10



About Ocean West

  • Birthday 11/26/1971

Profile Information

  • Slogan
    I have an idea!
  • Location
    San Diego



Platform Environment

  • OS Version
    Big Sur

FileMaker Partner

  • Membership
    FileMaker TechNet
    FileMaker Business Alliance



Community Answers

  1. If you have the time, patience, and plenty of backups, you can try a multi-file approach: make a sandbox of files, link them with the most basic connections, and you will quickly see how hard it is to manage that many files.

     I have a customer with over 125 files - yes, FILES. It is a legacy solution that started back in the v6 days or earlier. The reason for so many files: back then, 1 table = 1 file. There have been efforts to migrate and consolidate the data, but not without a lot of patience and work. And since it's working right now and running the business, great care must be taken when working on the system - it's like changing a car tire while driving 80 down the freeway. There are challenges, and I wouldn't recommend starting out in this manner.

     I do have other solutions where the core of all logic resides in an interface file, and you can maintain a set of data tables based on context, such as Attachments - although that's not as necessary these days, because I don't store attachments internally in the file but externally on the server. I also have some files for logs or other data-warehouse purposes, where there are millions of records, narrow and deep, with little in the way of other calculations or summary schemas.

     Clutter can be mitigated by a well-maintained naming convention and good organization skills. If you're considering a data model such as Anchor-Buoy or Spider, each has its pros and cons, but most developers use a variation of each; it's knowing when to break the rules to get the desired results. If you plan out the Anchor-Buoy method up front, you may end up with table occurrences that are never touched, just because you 'think' you may need them in the future for a lookup or such.

     From what you mention in your profile and other threads, you are not developing in a current version of FileMaker, which provides a much wider array of tools to build a solid solution on: Button Bars, Popover Buttons, Slide Panels, JSON, Card Windows, etc.
  2. There is a consideration to be made for separation of concerns, both for business logic and for audience, but for core features - unless you have millions of records in each table, which is when you might consider a separate file - I do not see the need to split this set of data. You would need to maintain logic in multiple places and recreate the relationship graph as well; there you must be diligent about where cascading deletes are enabled, or you could potentially have database genocide. In addition, you would need to manually replicate and maintain credentials in all files. If this solution will be hosted, you would ideally employ external authentication and establish user accounts and credentials on the host OS.
  3. Give Claris a call; they should still have your serial number.
  4. This went away, but on the M1 it seems to be back: you cannot go read email or surf the web while uploading hundreds of PDFs - FileMaker is forced to stay the frontmost application.
  5. I have to remember to turn on Rosetta in order to run some reports.
  6. We had a severe botnet attack on our server; the only recourse was to severely restrict which countries can access our servers. If you send me a private message with a list of IP addresses, I may be able to open things up.
  7. I have a process where I am storing aggregated data into a table, consisting of the following data points:

     date | staff | data | value

     The process creates two records per day for each staff person, one for each of the two data types. The value is Get ( FoundCount ) after a find request in a Quotes table for the day / staff and a few other criteria, including an omit; I store this in a $quotes variable. I then perform a constrain on this found set to get another Get ( FoundCount ), which is $approved. The process then creates records in the data warehouse:

     date | staff | data     | value
     8/28 | Jim   | quote    | 32
     8/28 | Jim   | approved | 3

     (Note: the data actually has the full date and staff name - the brevity is for this example.)

     However, not all jobs are approved the same day they are quoted, so when I run the update process later I need to be able to update the approved count. But there isn't a 1:1 relationship between this found set and the aggregate record.

     The update technique: to start, in the warehouse table I created a new field, MD5, which is exactly what it sounds like:

     GetContainerAttribute ( date & staff & data ; "md5" )

     which becomes this value for the approved record: 959C39DA78749FDEF00AC1EEE4501550

     I then put a global text field in the Quotes table, called UPDATE, and related it to the MD5 field in the warehouse table.

     The update run goes back a few weeks and establishes the exact found set of records for the quotes. It also does the constrain for the approved jobs, and by now there are most likely more approved jobs - in this scenario there are now 13. To update the value, the script sets the UPDATE field to the result of

     GetContainerAttribute ( $date & $staff & "approved" ; "md5" )

     based on the current found set, this time using variables that are set in the running script, plus hard-coding the data type.

     This creates a 1:1 relationship between Quotes and the warehouse record, where I then set the value across the relationship to 13, based on the current Get ( FoundCount ). And that is how I update the values without having to change context while the script is running. Hopefully you find it a useful technique.
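     The keying idea above can be sketched outside FileMaker. The Python sketch below uses hashlib to build the same kind of deterministic MD5 key from the concatenated date, staff, and data-type values; the field values ("8/28/2023", "Jim") are made-up examples, and whether the hex digest matches FileMaker's GetContainerAttribute byte-for-byte depends on text encoding, so treat this as an illustration of the stable-key idea, not a drop-in replacement.

     ```python
     import hashlib

     def warehouse_key(date: str, staff: str, data: str) -> str:
         """Deterministic key for a warehouse record, in the spirit of
         GetContainerAttribute ( date & staff & data ; "md5" ):
         MD5 of the concatenated fields, as uppercase hex."""
         return hashlib.md5((date + staff + data).encode("utf-8")).hexdigest().upper()

     # The same inputs always produce the same key, so a later update run
     # can recompute the key and match the existing warehouse record,
     # while distinct data types land on distinct records.
     quote_key = warehouse_key("8/28/2023", "Jim", "quote")
     approved_key = warehouse_key("8/28/2023", "Jim", "approved")
     print(quote_key)
     print(quote_key != approved_key)
     ```

     Because the key is derived purely from the record's identifying fields, no stored foreign key is needed: any script that knows the date, staff, and data type can reconstruct the key and reach the matching warehouse record.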
  8. Why don't you search in a specific field? How many fields would you expect to have this string in? You said you can't find that value in your spreadsheet either - perhaps the string doesn't exist in your data set?
  9. Oh, that is strange - for a week I thought someone's site was offline because it just would never resolve. I finally rebooted my computer and the site worked. Go figure.
  10. test.txt Strange - will dig deeper. Did you drag and drop, or did you choose the file?
  11. What kind of file was it, and what was its size?
  12. Perhaps this is the link: (sorry, we have some broken links from a site conversion years ago - I have a conversion table and can manually look them up)
