Visionjcv

Members
  • Content count

    18
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Visionjcv

  • Rank
    novice

Profile Information

  • Industry
    Technology Consulting
  • Gender
    Male
  • Location
    London, United Kingdom

FileMaker Experience

  • Skill Level
    Intermediate
  • FM Application
    16 Advanced

Platform Environment

  • OS Platform
    Mac

FileMaker Partner

  • Certification
    Not Certified
  1. Hi, I've just upgraded to FileMaker Server 16 and installed an SSL certificate for client/server communications. However, I'm confused by the documentation when it comes to communication between the server and the FileMaker XML API. We currently make these calls from another server over HTTP and would like to ensure they are secure. I've tried changing the requests to HTTPS, but that seems to fail - I haven't yet investigated exactly where (whether it's a limitation of the PyFileMaker Python library we're using, or whether the connection simply isn't secure). Would enabling SSL for clients also provide security on the API side? Could anyone point me to where I can find information on this? Thanks in advance!
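A minimal sketch of the kind of call being described, using only the Python standard library rather than PyFileMaker. The `/fmi/xml/fmresultset.xml` endpoint and the `-db`/`-lay`/`-findall` query commands are the standard FileMaker XML Web Publishing interface; the host, database, and layout names below are hypothetical. One common reason a switch from `http://` to `https://` "fails" is that the client library validates the server certificate and rejects a self-signed or mismatched one, which is worth ruling out before suspecting the library itself:

```python
import base64
import urllib.parse
import urllib.request

def fmresultset_url(host, db, layout):
    """Build the HTTPS URL for a -findall against the fmresultset XML grammar."""
    query = urllib.parse.urlencode({"-db": db, "-lay": layout}) + "&-findall"
    return f"https://{host}/fmi/xml/fmresultset.xml?{query}"

def fetch_records(host, db, layout, user, password):
    """Fetch all records on a layout over HTTPS with HTTP Basic auth."""
    req = urllib.request.Request(fmresultset_url(host, db, layout))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    # urlopen validates the server certificate by default; a self-signed or
    # mismatched certificate raises ssl.SSLCertVerificationError here, which
    # would make an otherwise-working HTTP call appear to "fail" over HTTPS.
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")  # raw XML result set
```

If this raises a certificate-verification error while FileMaker Pro clients connect fine, the certificate installed on the server likely doesn't cover the hostname the API calls are using.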
  2. Hi bcooney, What I mean is: if I open the file with FileMaker Pro, it works perfectly. If I host it on FileMaker Server and then access it with FileMaker Pro, it no longer displays the results - it shows an empty set. I can't get rid of the global variable, because it is set to the current user's ID when the user logs in. I can't really think of a better way of doing this, unless I create a set of relationships that uses Get(AccountName) to identify the user and, from there, resolves the user ID through a relationship?
  3. Thank you for the replies. bcooney: Because users are allocated to discrete categories (namely Senior and Junior), there are two relationships enforced by the 'flag' fields 'senior' and 'junior'. In this example, therefore, the field cClientAllocatedTo_n always returns a single result. The motive behind the join table is that, whilst each client can have ONE active 'junior' and ONE active 'senior', they could in theory have multiple 'inactive' juniors and seniors from previous allocations, which are disregarded at the relational level. The join table is therefore used as a historic trail of the client >> user allocation. Agnes Riley: I have indeed debugged, and since $$userID is used throughout the entire database it is most definitely set. It appears to be a bug in FileMaker itself, based on the way hosted files are handled on the server versus on a client... I was just hoping someone else would have experience finding a workaround for this. I'd be extremely grateful for any additional insights. Best regards, Jason
  4. Hi everyone, I've been scratching my head with this one for days. I have a database with the following tables, each with a primary key, related by those keys:

      • Users
      • Clients
      • User Client Join

    The problem begins when looking at the 'Clients' table (or any instance of it). I have set the record-level privilege definition based on the user's primary key, which a startup script stores in the global variable $$userID. The privilege set then says a user should only be able to view or edit a client if they have a record in User Client Join with their user ID. It looks something like this: cClientAllocatedTo_n = $$userID. The client data is then displayed in a dashboard portal in the database.

    This works perfectly when the file is opened locally: it shows only the records each user is authorised to view. However, it simply does NOT work on FileMaker Server. The same file works perfectly when opened directly in a client without being hosted, and displays no data at all under that privilege definition as soon as it is hosted by FileMaker Server. Does anyone have any ideas?

    One thing that concerns me is that cClientAllocatedTo_n is actually an unstored calculation field (in the Clients table) that pulls the user ID from the User Client Join table. However, I cannot think of another way of handling this: different levels of user will each have a join record, so it would be unrealistic to store this in the Clients table itself. Any advice would be extremely useful! The system is currently used by 20 people and this bug is causing me infinite grief. Thanks in advance.
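For readers outside FileMaker, the access rule the post describes can be restated as a plain predicate: a user may see a client only if an *active* row in the join table links the two. This is an illustrative sketch in Python, not FileMaker code, and the field names are hypothetical:

```python
def can_view_client(client_id, user_id, join_rows):
    """Illustrative restatement of the record-level privilege rule:
    a user may view a client only if an *active* User Client Join row
    links the two. Field names here are hypothetical."""
    return any(row["client_id"] == client_id
               and row["user_id"] == user_id
               and row["active"]
               for row in join_rows)
```

One frequently cited explanation for exactly this symptom (works locally, empty set when hosted) is that record-level access calculations for a hosted file are evaluated on the host, where a global variable set by a client-side startup script may not hold the value the client set; basing the rule on Get(AccountName) instead removes that dependency. That is an assumption worth verifying against the FileMaker Server security documentation, not a confirmed diagnosis.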
  5. Exiting to PHP with FileMaker Errors

    Actually, after a bit more thought I came up with a solution - I thought I'd document it in case anyone else ever has this problem. The way I got around it: when I write my internal error log in FileMaker for the errors I'm trying to capture, I pass a second parameter that determines whether the error is 'fatal' or not. Only fatal errors get returned to PHP. If an error IS fatal, the script constrains the current found set (in the error table) to the current log record and aborts (with a Halt Script step). In doing so, PHP receives a record object containing the error message. Hope this helps someone one day!
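The calling side of this pattern reduces to a shape check on the returned result: a fatal error leaves the found set constrained to exactly one log record carrying the message, while any other shape means the script completed normally. Sketched here in Python rather than PHP, with a hypothetical field name:

```python
def script_error(records, message_field="ErrorMessage"):
    """Return the fatal-error message if the FM script halted on one,
    else None.

    Under the logging pattern described above, a fatal error constrains
    the found set to a single log record containing the message. The
    field name is hypothetical; a normal single-record result without
    that field is treated as success."""
    if len(records) == 1 and records[0].get(message_field):
        return records[0][message_field]
    return None
```

The nice property of the halt-and-constrain approach is that it needs no session identifier: the error travels back inside the same result object as the call that caused it.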
  6. Hi everyone, I find myself in a bit of a predicament with the FileMaker PHP API. I'm running some rather complex FileMaker scripts via the PHP API and need to communicate a message back to PHP depending on various circumstances. It's imperative that I actually retrieve this 'string', since it's not necessarily a FileMaker error, just a set of conditions that have not been satisfied. I'm aware that you can't use an Exit Script step to send a script result back, and I've tried a few alternatives mentioned on the forums, such as a global field whose value is passed back... However, in many cases the script checks up to 5 different layouts for data, and even if I handle each layout individually at each stage, there are times when an empty found set is returned (count = 0 in the PHP object), so the global field obviously doesn't get returned. I've looked at having an 'error' table that I somehow write to, but that means I would need to pass some form of identifier on every single call to FileMaker in order to pull back any errors that occurred for that particular user, in that particular session... Basically, the question is: how can I get a message back from FileMaker to PHP in a way that accounts for conditions where no found set is returned? I'd be very grateful for any help on this. Best regards, Jason
  7. Hi everyone, I've run into somewhat of a roadblock using ESS with FM Server 11 Advanced (clients using FM 10 or FM 11). The issue concerns a server schedule that executes a script from one of the hosted files. The solution is a back-office application that handles data for the servicing of equipment by field-based engineers, who access data on their PDAs. The PDAs write to a MySQL database, which is accessed from FM via ESS. All reading/writing is done from the FM side, i.e. one script pulls information from the MySQL database and another pushes information to it. The solution has been running for a couple of years, though over the last few months the run time of the 'pulling' script has grown from about 2-3 minutes to well over an hour. All operations are logged in an error log within an FM table, and it seems the script hangs for 30+ minutes on very basic searches on MySQL fields that act as 'flags' indicating that an action is required. For instance, a search on a table with 100,000+ records for field = 1 can take over 30 minutes. I've tried searching but found little information online. If the solution were restructured to enforce an 'archiving' system, whereby old data is filtered out relationally, would that improve performance, or does relational filtering deliver no efficiency improvement over a good old search? To illustrate: if data over one month old were flagged with a '1' in an 'archive' field, the relationship between FM and MySQL were based on archive=0 between the two tables, and we then ran individual searches in this restricted table occurrence, would that be significantly more efficient than searching all the data to begin with? If I've not provided enough information, please let me know. I'd be very grateful for any insight into how FM works here so that I can plan a solution accordingly. Thanks, Jason V.
  8. OK, I have just taken the compacted file online to replace the old one. Whilst it is 'working', the problem is clearly still there. Any commit on this table takes about 5 seconds to go through, whilst any other table is instant - just long enough to get a glimpse of the coffee cup. I'm trying to run a DDR on this new file and it's failing again. So the next step is to delete the table in a copy and see what happens...
  9. Thanks for your prompt reply. I've only been working here for a couple of months, and it was already an issue when I arrived (together with a million other problems); since a restart seemed to solve it for over a month the first time, the problem was simply 'ignored'. I am saving a compacted copy at the moment and suspect it will take a little while; I'll be sure to let you know the outcome. Regarding the recovery route: how 'bad' could it be to recover the file and then run the recovered copy? What are the potential pitfalls, even if it 'seems' to work OK? I've only ever recovered a file to extract the data from it, but that was mainly in systems I had been a part of building, so backups were not an issue. Here, however, there is no backup I can use that goes back far enough to ensure the corruption doesn't exist in the file. Thanks again for your help.
  10. Hi everyone, it's been a while since I last posted here, but I am up against a wall with this problem. I have been working at a new company which has possibly the most intricate and complex FM system I have seen to date. I took over supervising the FileMaker system a couple of months ago, and from the very beginning there was a very odd problem where, periodically, one table in one file would apparently become corrupt. Let me define 'corrupt' in this context: one table starts causing problems insofar as, if you edit (or create) a record and then commit the changes, the client shows the deadly coffee cup and stays that way indefinitely until you force-quit the application. Once this happens, it happens to users across the network consistently, i.e. every single time any user tries to change anything, the client hangs. There are no script triggers or scripts executed on commit or change. I have tried this on several layouts, from external files and from the data file itself (which contains the table), and the problem is consistent throughout.

    How we solve it: so far, all I do is literally reboot the server itself (restarting FileMaker Server alone doesn't fix it). When it comes back up, the files all pass the verification process, open normally, and voila - no more problem. However, this seems to be happening more and more often lately; today alone it has happened twice. Since the files are several GB in size, and there are over 100,000 connections to the database per day (through clients and XML Web Publishing), this causes massive inconvenience and brings the entire company to a standstill. There is no backup that goes back far enough to be sure this problem wouldn't recur. I've been tempted to perform a recover and run the recovered file as the main data file, but have been strongly advised not to. I have also checked and disabled any server scripts that could cause this problem, e.g. a script that makes records unavailable, but that isn't it either. I'm starting to seriously worry about this problem... Any and all advice/suggestions would be very welcome.

    EDIT: One other thing I noticed and forgot to mention: when I try to run a database report it crashes (no matter how long you let it run), and you end up with a bunch of empty files that contain no information. Could it be that the database is corrupt as a whole? If so, how would I go about fixing this? Keep in mind this is a database with hundreds of layouts, dozens of tables, and possibly thousands of relationships. Cheers, Jason V.
  11. True - but try explaining that to a FTSE 100 company! They think they always know best, haha.
  12. That wouldn't really work... The whole purpose of recording the Current Level and the Final Level is to increase accountability and reduce theft. Since the engineer visiting the site doesn't know how much was left by the previous engineer, it becomes very difficult to keep back some of the sales revenue without this being noticed and flagged. But I do of course see your point!
  13. Thanks for your reply. This is something I had explored, but I ran into a wall that I simply couldn't get around without scripting. Essentially, this works. However, if the previous level changes (due to an admin error, etc.), the next value will not update, since it references a related field via an auto-enter calculation. How could I get around this?
  14. Thanks for that improvement... I think originally it wasn't retrieving the first related record - hence the GetNthRecord calculation step - but yes, it's now most definitely pointless. I had a feeling the solution would probably involve a method like this... To be fair, all input is done via one layout, so perhaps it wouldn't be all that bad to just perform a relookup on the later services... Thanks a lot for having a think about this. I'll let you know how it goes. Cheers!
  15. Essentially, yes. The Current Level is the level found on site when an engineer arrives, for instance, and the Final Level is the stock left on site when he leaves.