SarahS

Everything posted by SarahS

  1. I'm embarrassed to discover I never acknowledged your response - thank you very much for your suggestions & help. I currently use Actual Installer for the PC platform (but I think there are probably others equally good) and a zip file for the Mac platform; I can't seem to get a .dmg file to upload to or download from my website properly.
  2. I have a Mac runtime solution that multiple people around the world are using. It was originally created using 13.0.5. Some users tried running it on El Capitan and rightly got error messages that files were missing, as 13.0.5 is not compatible with 10.11. I created a new runtime of the same distributed files using 14.0.4 and tested simply replacing the runtime engine file only (& none of the data/interface files). The solution seems to run perfectly on El Capitan with this runtime engine upgrade. I see this as the same as upgrading the FileMaker Pro application version, where no changes need to be made to the .fmp12 files. Am I making a bad assumption if I simply have the users replace the runtime engine file (& none of the other files) to be compatible with El Capitan?
  3. I just updated all the IDs in my solution to use UUIDs due to issues I was having with duplicate IDs. I am confused about how auto-enter calculations (that replace existing values) work with MirrorSync. My question: how is the value in a field with an auto-enter calculation (that replaces an existing value) handled differently when the field is specified as the primary key field in the MirrorSync configuration versus when it is not? Does the value get changed during synchronization depending on whether a global field is referenced? Does having the option to evaluate the calculation even if the referenced fields are empty also create a difference in behavior? Thank you very much for helping clarify this!!!

    This is how I used to create the IDs, now abandoned for the UUID method:
    - serialNumber (number) field: number field that auto-enters a serial number; the value of this field gets reset in each table by script using "GetNextSerialValue" each time a user logs in and their device prefix is identified
    - ID (text) field: auto-enter calculation (replaces existing value) that combines the device prefix (stored in a global field) & the record serialNumber; this field was used as the primary key as well as the user-visible record number
    - devicePrefix (text) field: auto-enter calculation (replaces existing value) of the left two characters of the ID field

    This is how I am now setting the UUID & user-visible IDs (see the sketch below):
    - no change to field definition - serialNumber (number) field: number field that auto-enters a serial number with modifications prohibited; the value of this field gets reset in each table by script using "GetNextSerialValue" each time a user logs in and their device prefix is identified
    - new field - ID_OLD (text) field: auto-enter calculation (replaces existing value) that combines the device prefix (stored in a global field) & the record serialNumber; this field was set with the existing IDs prior to converting to UUIDs and is used as the user-visible ID
    - changed field definition - ID (text) field: auto-enter calculation (replaces existing value) of Get(UUID); this field is still the primary key field
    - changed field definition to reference the ID_OLD field vs. the ID field - devicePrefix (text) field: auto-enter calculation (replaces existing value) of the left two characters of the ID_OLD field
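    For reference, a minimal sketch of the auto-enter calculations described above, in FileMaker calculation syntax (the global field name and table names are illustrative assumptions, not my exact schema):

        // old ID (text) - auto-enter calculation, set to replace existing values
        // combines the two-character device prefix (global) with the record's serial number
        Globals::g_devicePrefix & serialNumber

        // new ID (text) - auto-enter calculation, set to replace existing values
        // the primary key is now simply a UUID
        Get ( UUID )

        // devicePrefix (text) - auto-enter calculation, now referencing ID_OLD instead of ID
        Left ( ID_OLD ; 2 )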
  4. Since making my original post, I have changed the new ID_OLD and devicePrefix fields to have auto-enter calculations from the (same) device prefix global field, but they do not replace existing values. There are only a few scripts in my solution that duplicate records, so I am manually resetting the fields by script when a new record is created by duplication (roughly as sketched below). This seems to be working, but it doesn't seem like the best solution....
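    A minimal sketch of that manual reset after a duplicate, assuming a hypothetical Orders table and the global prefix field from the earlier sketch:

        Duplicate Record/Request
        # the duplicated record gets a fresh auto-entered serial number; rebuild the IDs from it
        Set Field [ Orders::ID ; Get ( UUID ) ]
        Set Field [ Orders::ID_OLD ; Globals::g_devicePrefix & Orders::serialNumber ]
        Set Field [ Orders::devicePrefix ; Left ( Orders::ID_OLD ; 2 ) ]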
  5. I have a solution that is hosted by FMS14 and remotely accessed by my client. My development files have file access protection enabled, and I have authorized the opener file to open the UI file, but they do not have encryption at rest (EAR) enabled. Prior to uploading the files to the server, I add EAR protection. I am confused because after the encrypted files are uploaded and I go to open them with the opener file (that I thought was authorized), I get the error message that the opener file is not authorized and I am required to input my full access credentials. I am willing to do this, but it means that every time I update the files I have to re-send my client an opener file that has been authorized for the updated files, and they have to download and replace the opener file for all users; I would like to eliminate this hassle for them. Is there a way to allow the opener file to remain authorized when uploading updated files? Do I need to add authorization for the opener file while the UI and data files have EAR enabled? Thank you for your guidance!
  6. Thank you for your response, Steven. Would a simple copy & paste of the file affect the file access protection (FAP)? I am using copies of the UI and data files (made with copy & paste), but not of the opener file. I didn't think copy & paste would affect FAP, since when I try to add authorization for the opener file again, it says the opener file is already authorized.

    Below is what I tried with the snapshot link suggestion:
    - I logged in to the file using my full access account & created a snapshot link.
    - I closed the program and tried opening the program again with the snapshot link, and got the error message below:

    "Cannot restore the snapshot link file "open UI_File.fmpsl". Make sure that the status toolbar is accessible in at least one window of the referenced database."

    I do have the status toolbar enabled in my full access account, but not in the other accounts. I plan to do a file upgrade soon and can test how the snapshot link behaves with newly uploaded files.
  7. My client has not been able to update the files on her devices recently. The MirrorSync script currently on her devices was created with version 2.40510 (last updated 6/1/2015). The MirrorSync version on the server is 2.505 (running FileMaker Server 14). She is currently running the files in FileMaker 12 & the Go 12 app due to an issue last year with synchronizing from FileMaker 13. I see the current MirrorSync version is 2.6. What upgrade sequence would be best to avoid compatibility issues and get her onto at least the Go 14 app (or Go 15), the server onto MirrorSync 2.6, and the files on her devices onto a MirrorSync script from the current version? Thank you!
  8. Thank you very much for the helpful & timely response. I'm really thankful the upgrade can be very simple for my users.
  9. I have an old solution that started out in FileMaker 5 with 35 files, and I am working to consolidate it down to 5 files. The user interface will be two files and the data will be three files (general data, more sensitive data, & images/containers data). There are currently 106 tables in the general data file and quite a few fewer in the other two files. 1) Is it unwise to have so many tables in one file if there should be data corruption that requires recovery? 2) I am re-creating the field definitions and just noticed, after duplicating a table occurrence, that the name does not appear in "Occurrences in Graph". I have tried re-doing the action, opening & closing the file, and duplicating other tables' TOs, and the same problem occurs. Is there a maximum number of TOs that are displayed in the "Occurrences in Graph" column of the Tables tab? (Seeing this happen is what made me wonder if I was doing the wrong thing by consolidating so many tables into the same file...) Is this an issue? Thank you, Sarah
  10. MAC

    TSGal from FileMaker found the solution to my problem, as detailed below. I had an external data source with the same name as the current file, and the TOs that had an issue all pointed to that external data source. I noticed in the Relationships graph that the tables in question are in italics, which means you are linking to another file's table, not the current file. If you hover over the top left corner of each italic table, you will see you are linked to a file, even if it is the current file. If you hover over the top left corner of a non-italic table, you will see the Data Source set to <current table>. Changing these externally linked tables to current tables will then display the table occurrence in the "Occurrences in Graph" column of the Tables tab.
  11. MAC

    I did some more testing and have created an Issue Report in the FM Community, which can be viewed at: https://community.filemaker.com/message/539877#539877 The initial posting in that issue report is copied below:

    Product and version: FileMaker 14.0.4 (but I was able to replicate the issue in 13.0.9 as well)
    OS and version: Mac 10.10.5
    Browser and version (for WebDirect only): N/A
    Hardware: MacBook
    Description: I have a file with 107 tables, and when you duplicate a TO in the relationship graph, that TO is not listed in the "Occurrences in Graph" column of the Tables tab in Manage Database. If you add a new TO using the "Add New" button (instead of duplicating an existing one), then the new TO DOES show up in the "Occurrences in Graph" column of the Tables tab in Manage Database. Note: This does not seem to be a problem when there are fewer data tables (& TOs on the graph).
    How to replicate: From a file with many data tables, duplicate an existing TO from the relationship graph.
    Workaround (if any): Use the "Add New" button vs. duplicating existing TOs.

    The consistency check, as well as running Recover on a copy of the file, did not indicate that there are any issues with the file. This is a brand new, empty file that I began from scratch recently. It has not crashed at any point. I created the data tables by importing .xls spreadsheets (each with only one row, for the field names). I would like to know from FileMaker whether this bug affects the integrity of my file at all, or if it is okay to continue developing the files in which I duplicated TOs vs. adding them as new TOs.
  12. I have a solution with a text field that has a varying amount of text stored in it for each record. I am working on a print layout that uses two columns to print this text field for the found set of records. As it is currently designed, if the text in the last record in a column is too long to be displayed in the column, it gets chopped off instead of wrapping. I have found multiple postings regarding this topic from the 2008-2009 time frame, but don't see anything current. If I compiled the information into a variable and then split it up based on a character count (roughly as sketched below), would that work, or does anyone have suggestions on how to print multiple-column layouts for large text blocks across a found set of records? I could also send the records to a temporary table & split the large text blocks up there for printing, but I think that would make the printing operation/script take a lot of time, especially considering usage on iOS. Thank you for your suggestions!
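    A minimal sketch of the character-count split idea, assuming a hypothetical $text variable that already holds the compiled text and an arbitrary 2500-character cutoff:

        Set Variable [ $col1 ; Value: Left ( $text ; 2500 ) ]
        Set Variable [ $col2 ; Value: Middle ( $text ; 2501 ; Length ( $text ) ) ]
        # a refinement would use Position ( $text ; " " ; 2500 ; 1 ) to cut at a word boundary instead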
  13. Thank you for your response Mike. I really need to learn more about building installers for both the Mac & PC platforms. Do you have any suggestions regarding where to learn or what installers work best for both platforms? Several users have manually replaced the runtime engine and it is working fine, so hopefully it truly is this time... I will study up for next time.
  14. I am really confused trying to get Send Mail via SMTP working with a Google Apps email address (i.e. office@mydomain.com as a Gmail-hosted address). I have been doing a lot of research on Google and the FileMaker forums and can't understand why I can't get it working (so many others have gotten it to work with settings I have tried). I was hopeful when I found Savvy Data's amazing "SMTP Send Mail Interrogation" tool, but the results it produced (on two runs) are copied below. Does Send Mail via SMTP have to be run on a hosted file? That doesn't seem to be my issue, but I'm not sure. Does anyone have Send Mail via SMTP working with a Google Apps email address? If so, can you please share what settings are working for you? Thank you very much in advance!

    Elapsed time: 0:08:16
    Out of 189 iterations, 189 failed and 0 succeeded.

    Unsuccessful count by Error#:
    =============================
    Error# 1501: 2
    Error# 1502: 171
    Error# 1503: 3
    Error# 1504: 1
    Error# 1505: 0
    Error# 1506: 12
    Error# 1507: 0
    Other (not tracked): 0

    Error# Legend:
    -----------------------------------
    1501 SMTP authentication failed
    1502 Connection refused by SMTP server
    1503 Error with SSL
    1504 SMTP server requires the connection to be encrypted
    1505 Specified authentication is not supported by SMTP server
    1506 Email(s) could not be sent successfully
    1507 Unable to log in to the SMTP server
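    For context, the settings commonly cited for Gmail/Google Apps addresses (listed here only as an assumption for comparison, not confirmed in this thread) are roughly:

        Outgoing SMTP server: smtp.gmail.com
        Port: 465 with SSL, or 587 with TLS/STARTTLS
        Authentication: user name is the full email address (e.g. office@mydomain.com) with its password
        Note: the Google account may also need an app-specific password or "less secure apps" access enabled

    The results above are dominated by error 1502 (connection refused), which suggests the server/port/encryption combination rather than the credentials.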
  15. Thank you Josh, that seems to have been my problem as well. After I changed that setting & re-ran the "SMTP Send Mail Interrogation", I found the correct settings and have it working! For the reference of those that may read this in the future, the tool is available at: http://www.savvydata.com/blog/2009/11/fixing-smtp-send-mail-error-1506/
  16. I have a (separation model) runtime solution that was bound with FileMaker version 10 and is used by 20-30 people around the world. They (& I) consider the data they have input into the program to be private. I am currently upgrading the solution to use FileMaker 13 (which is obviously necessary to support users on newer OS versions!). The two options I have thought of for upgrading the users and preserving their data are:

    1) They send me the data files, and I manually convert the files, import the data into the upgraded solution, and rebind the solution to send back to them. I have promised them that I will maintain the confidentiality of their data and only analyze it to the extent necessary to ensure it is completely & accurately converted & imported; however, several of them are still not comfortable with that.

    2) They sign up for a trial version of FileMaker 13 so they can convert the runtime data files themselves & then use those converted files to import the data into an upgraded runtime version (by script - see the sketch below). This is not my preference, as it would require a lot of hand-holding & is error-prone (and would sign them up for e-mail communications that they probably don't need/want).

    Are there any other options to upgrade a runtime solution in which the user can transfer their data from .fp7 runtime files to .fmp12 runtime files? If not, does anyone have any suggestions on how to make option #2 above a smooth & accurate transition process for the user? Thank you!
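    The by-script import in option #2 would look roughly like this, repeated once per table (the file name is an assumption; the user would place their converted data file next to the new runtime before running the script):

        # import each table from the user's converted data file into the new runtime's data file
        Import Records [ With dialog: Off ; "OldData Converted.fmp12" ; Add ]
        # ...repeat for the remaining tables, then verify record counts before removing the old files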
  17. That was in fact the issue, thank you Jesse. I hadn't changed my layout, so I'm not sure what happened!
  18. Note: ID_Inv_DataEntry shows up during the mapping process, but ID_Inventory does not. ID_Inv_DataEntry is a standard (indexed) text field.
  19. I am using a developer-managed schema & have an auto-enter text field that is not showing up during configuration. The auto-enter value is not at all close to what a UUID would look like. To give you a little more info: the field that does not show up during mapping is in my sales order line items table and is named ID_Inventory. There is another field, named ID_Inv_DataEntry, that is visible to the user to input the product identifying info (UPC, ISBN, ID_Inventory, etc.) during data entry. Based on what is input, the auto-enter calculation on the ID_Inventory field pulls in the ID_Inventory from the Inventory Items table (roughly as sketched below). I don't understand why this text field is not showing up... Thanks for your help!
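    A simplified sketch of that auto-enter calculation (replace existing value) on ID_Inventory, assuming a relationship that matches ID_Inv_DataEntry to the Inventory Items table; my real calculation is more involved:

        If ( not IsEmpty ( ID_Inv_DataEntry ) ;
             Inventory Items::ID_Inventory ;   // pull the ID from the matched inventory record
             ID_Inventory )                    // otherwise keep the current value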
  20. I am considering using your solution, but one of the questions in my mind is what will be involved in implementing the new version once it is released? Will it require going through the initial setup stages again with the new files? Thank you!
  21. Great. Thank you!
  22. I am working to implement MirrorSync in an existing solution. The solution is currently being used locally on a MacBook. The changes I am making will require the solution to be hosted so that it can be accessed by several MacBooks and devices, as well as over the internet. I am working on the hosted files to develop the solution & implement the MirrorSync tool. These files currently contain test data. The current data still resides on the user's MacBook, and she still needs to use the program while I am implementing the changes. Will I run into problems by implementing & debugging MirrorSync with the test data, then later importing the current data and rerunning the sync tool? If so, what procedure do you recommend?
  23. Thank you Jesse. I didn't think there would be an issue, based on my question #2 above when I asked about doing the initial sync after making changes... I'm sorry to hear that it was an issue. I uploaded two log files for you: one from Sunday 6/22/14, which will perhaps reveal why the initial sync seemed to be stalling out after downloading the new files, and one from yesterday, 7/2/14, when the initial sync was completed. I had the user save a copy of the iPad files to a folder on her MacBook yesterday PRIOR TO doing the initial sync, so perhaps those files could be used to revert back to, since they shouldn't be device-specific yet. I don't know how it all works, so you can let me know if that is the best option (or even an option!)
  24. Hello Jesse, The user in Mexico did the initial sync last evening (she hadn't been using it a lot since the go-live weekend) & ran into a couple of problems. At first there were some conflicts she had to manually resolve, and she was able to do that successfully. After completing the manual conflict resolution, she synchronized the iPad to the server, which seemed to go fine. When she synchronized her laptop to the server, however, all the sales from Sunday came through in duplicate. This was only true on the laptop. The ID numbers have a two-digit device prefix, and she informed me that the duplicate records each have a different prefix. She wrote: "I checked Saturday’s sales and they are fine in both the MacBook and the iPad. As for Sunday’s sales, the numbers are different. On the iPad all start with 08 except for one that starts with 05. On the MacBook, some start with 08 and some with 18." I did a little more research this morning... When I opened the remote files on the server, I see Sunday's sales orders all have the prefix 05 (which is the correct prefix for records created on the server, but these records were created on the devices with the prefixes 08 & 18), so I'm not sure what is going wrong... I have the ID Serial & ID fields both defined with the "not empty" validation unchecked. I would appreciate a little guidance and hope that you understand my description!