Peter Wagemans

Community Reputation

0 Neutral

About Peter Wagemans

  • Rank
    just passing through

FileMaker Profile

  • FM Application
    15 Advanced
  • Platform
    Cross Platform
  • Membership
    FileMaker Business Alliance
    FileMaker Platinum Member
  1. Hi forum, I am upgrading a customer from an FMS 13 setup with Zulu 1 on port 8080 to FMS 15 with Zulu 2. It was already a problematic setup, as the FileMaker Server runs concurrently with an OS X Server — hence the 8080 port, to keep away from OS X Server hijacking port 80 all the time. The new FMS 15 is installed on ports 85/2443 to keep clear of the OS X Server web ports. Great. We wanted to keep Zulu on 8080 though, because then we wouldn't have to change the deployment on every machine. Installing Zulu 2, there is no option to choose the ports; it just configures itself on port 80. I would really like to have Zulu listen on port 8080 and talk to FMS over port 85. I tried modifying the context.xml, but that didn't do anything. For the moment, I have done a custom install using the old Tomcat 7 instance listening on 8080, plus a ProxyPass instruction on the OS X Server, which is very ugly. But it works. What is the best way to do this?
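For anyone hitting the same wall, the stopgap described above looks roughly like this in the OS X Server Apache configuration — a minimal sketch, assuming Zulu is served under a `/zulu` context path and the legacy Tomcat 7 instance listens on localhost:8080 (both are assumptions, adjust to your deployment):

```apache
# Forward Zulu traffic arriving on the OS X Server web instance
# to the legacy Tomcat 7 instance still listening on port 8080.
ProxyPass        /zulu http://localhost:8080/zulu
ProxyPassReverse /zulu http://localhost:8080/zulu
```

This keeps the existing client deployments pointing at the old port untouched, at the cost of routing every Zulu request through the OS X Server web server.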
  2. Hi Steven, Thank you for your answer. If the CLI does not work remotely, why does it need credentials for certain operations? You can stop and restart server processes without any password, but you cannot list any open files without providing superadmin credentials. Maybe it is fmsadmin that has to enter the modern server era. It is still part of FileMaker Server and is not marked as obsolete, so it should work correctly, or at least consistently. Maybe you should have a look at the latest OS X and Windows servers to see how much IT administration has to be done through the CLI :-) — things we were able to do through a GUI in previous versions. I am not in favour of CLI-only administration either, but please do not judge so easily.
  3. Hi, I have a simple question about the fmsadmin command: does it recognise the credentials of an administrator group? I could not find this in the documentation. It would be a real bummer if fmsadmin ignored those, and my first impression is that it does NOT work.
  4. Yes, you can't ( no pun intended ). A calculation has a datatype as described above. But there is currently no way to create a calculation field through SQL.
  5. Let's face it: if you have a security-conscious customer and a large development with different security groups, external authentication, iOS, WebDirect, XML and PHP access, encryption, SSL, firewall setup and whatever I'm forgetting here, you kinda lose track. FileMaker has no convenient way of letting me assign security to an object ( a field, a layout or a script ) the moment I create it, so there's an additional danger of creating security holes if you do not submit yourself to the regular ritual of reviewing security after a chunk of development. FileMaker's security interface is not bad, but sometimes a bit awkward to use, leaving room for errors. One has to systematically review every security group for layout access, field access and script access, instead of doing this from the objects themselves. A pessimistic approach to security will not solve this either; it just results in bug reports from people ( if they care to file them ) who cannot access newly made objects. Your security holes are plugged, but your development quality suffers. It looks to me like regularly reviewing the security properties of a development is a required ritual, and some kind of database is needed that can update itself from a feed out of the solution and flag newly created objects, so I can systematically assign the correct security settings and apply them in the development itself. If some security group's privileges change over time, I should be able to get a checklist of what to change in the FileMaker security dialogs. This database would document how security should be set in the solution. I'm not looking to reinvent the wheel, so I DO NOT want an alternative security system in FileMaker; I just want a FileMaker database where I can systematically document security on all levels. I'm wondering if anybody has ever made such a thing.
I'm faced with developing it, and if there is a product available, I could probably cut development cost.
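The core of the idea above — documented access per privilege set, diffed against a fresh feed of objects from the solution, producing a checklist — can be sketched in a few lines. This is a minimal illustration, not a product; all names and the data model are hypothetical:

```python
# Sketch of the "security documentation database" idea: compare the
# documented (intended) access per privilege set against the objects
# that currently exist in the solution; the difference is the checklist
# of objects whose security was never reviewed. All names hypothetical.

def security_checklist(documented, current_objects):
    """Return, per privilege set, the objects that exist in the solution
    but have no documented access setting yet."""
    checklist = {}
    for priv_set, access_map in documented.items():
        missing = [obj for obj in current_objects if obj not in access_map]
        if missing:
            checklist[priv_set] = missing
    return checklist

documented = {
    "[Data Entry Only]": {"Customers": "view", "Invoices": "modify"},
    "[Read-Only Access]": {"Customers": "view"},
}
# Feed of objects as they exist in the file right now:
current = ["Customers", "Invoices", "NewReport"]

print(security_checklist(documented, current))
# {'[Data Entry Only]': ['NewReport'], '[Read-Only Access]': ['Invoices', 'NewReport']}
```

The real work is of course the feed itself (e.g. parsing the Database Design Report), but the checklist logic stays this simple.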
  6. Had the same issue recently, and don't know what caused it to function again. I fiddled here and there, would like to know the scientific approach.
  7. I'm getting curious as well. ScriptMaster works great on the server, but only for FMSE and IWP. If FileMaker continues its move to 64-bit on the server, my solutions will soon break.
  8. bump
  9. BTW, I just tested the imports with the 360Works JDBC plug-in. When I use the queries with the jdbcXmlImportUrl function, the imports are blazing fast, but I ran into a few problems: 1. I have to hardcode all the imports, but this can be overcome; it's just a hassle. 2. The MSSQL JDBC driver is not working correctly on OS X, but this can be overcome too, as the imports have to run on the server side, and that's a Windows machine. 3. The JDBC plug-in does not activate on the server. This is a show stopper. I posted a question on the 360Works subforum; maybe they can fix it.
  10. Hi, I'm trying to activate the JDBC plug-in on a FileMaker 12 server, but the checkbox keeps deactivating after I perform a save. Is it server compatible? I'm using the ScriptMaster plug-in on the server side without any problems; it would be nice to have this one working as well.
  11. Maybe not a good idea to continue in this thread, but it seems closely related. It is now April 2013, the Java JRE is 7, my server is MSSQL Server 12 and the driver is sqljdbc4.jar, which seems to be the recommended driver if you're on Java 7. My testing environment is the AdventureWorks2012 database. jdbcLoadDriver( "" ; AW::JDBCdriver ) returns 1 -- OK. jdbcOpenDatabase( "jdbc:sqlserver://;databaseName=AdventureWorks2012" ; "sa" ; "test" ; True ) returns 1 -- OK as well; trying an invalid credential nicely returns the reason in jdbcLastError. Let ( myResult = jdbcPerformQuery ( "SELECT * FROM dbo.AWBuildVersion" ; "timeout=" & 5*10^3 ) ; Case ( myResult = "ERROR" ; jdbcLastError ; myResult ) ) returns java.lang.NullPointerException. It seems something is broken, since I'm supposed to get a driver error in case of improper use, not a Java error, right? My client is a Mountain Lion Mac, and the driver is untested by MS on OS X, so it seems to be a bug in the driver. Trying the same thing from Windows 7 gives no problem. The jTDS JDBC driver is not an option anymore, since it does not support Java 7 and returns a java.lang.UnsupportedClassVersionError, which seems to clearly indicate a problem with the JRE. I really want to use JDBC with the MSSQL server, since I want to compare the performance of inserts using the DoSQL 2 plug-in with ESS vs the scripted import through the JDBC plug-in, which I hope will run well in FMSE ( regular imports from ESS and ODBC imports don't work server side ). I know it's not 360Works' task to provide us with functioning database drivers; they just make the plug-in. But I was hoping somebody would know which JDBC driver to use, or how to work around this problem. Is it possible, e.g., to have Java 6 installed on the Mac as well and have the JDBC plug-in use it? Java really gives me major headaches these days. I guess I'll just do my testing from Windows then.
  12. Hi Wim, Thanks! I have been testing some more today. I downloaded the free SQL Express 2012 server and put it together with a Server 12 on a Windows 7 x64 virtual machine. I also downloaded the AdventureWorks database, attached it, and made an ESS connection to it. From my Mac, I then put all 142 tables in my FRG, and made a few scripts and tables to support listing tables and fields. Finally I wrote some code to generate local tables on demand automatically, and do the inserts. The finishing touch was to put some timers on it. Here are my results: it took the script about 1 hour and 9 minutes to create and populate the tables when I ran it from my Mac. As the VM is on the same machine, this is not an optimal setup for speed, but it's quite a fancy i7 PowerBook with an SSD, so the setup is not so unrealistic. The CREATE TABLE command also sets some defaults on fields that I do not like, as I presumed ( above ) that all those extra field checks slow down the INSERTs. So I created a global option to skip table creation and work with existing tables, copied the created FileMaker tables in my other favourite tool :-), stripped the options out of the XML, then replaced the tables again. Since I did not have much time left today, I immediately went into testing the script server side, after installing the DoSQL 2 plug-in on the server. I was very, very surprised that leaving the options out of the fields did not give me any speed improvement. The script is almost done now, and I see I'm going to finish at around the same benchmark time as when it created the tables automatically. The AdventureWorks ESS setup is a nice testing environment, and easy to reproduce. If you want, I can mail you the FileMaker file with the create and insert scripts. Can I enclose stuff here? I don't dare make a backup now while the script is running; I guess it's not a good idea.
Oh yes, a bit off topic: the Production.Document table and the Production.ProductDocument table contain some fields that are not compatible with ESS, so the INSERT does not work for those tables, of course.
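The pattern being benchmarked above — populating a local copy of a remote table with a single INSERT ... SELECT instead of row-by-row scripted imports — can be shown generically. This sketch uses Python's sqlite3 as a stand-in for the FileMaker/ESS/MSSQL stack, which is obviously not available here; table names and data are illustrative:

```python
import sqlite3

# Stand-in for the ESS bulk-import pattern: copy all rows from a
# "remote" table into an identically structured local table with a
# single INSERT ... SELECT statement, rather than looping per record.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ess_products (id INTEGER, name TEXT)")
con.execute("CREATE TABLE local_products (id INTEGER, name TEXT)")
con.executemany("INSERT INTO ess_products VALUES (?, ?)",
                [(1, "Chain"), (2, "Sprocket"), (3, "Fork")])

# The one-statement bulk copy:
con.execute("INSERT INTO local_products SELECT * FROM ess_products")

print(con.execute("SELECT COUNT(*) FROM local_products").fetchone()[0])  # 3
```

The speed win comes from pushing the whole copy into one SQL statement, so the per-row overhead (field validation, script-step dispatch) is paid once rather than per record.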
  13. Had the same problem with a development that does a nightly update of mirror FileMaker tables from an MS SQL database. Although ESS is possible and runs in server-side scripts, you all of a sudden discover that FileMaker did not go all the way in supporting it server side. Thanks, FileMaker, for making me look bad in front of my customer. ODBC doesn't cut it either; not compatible. Which left me with "set field" scripts for a large number of tables. Oh my gosh. But then I discovered that the DoSQL 2 plug-in is server-side compatible with ESS tables. I used it to turn my import scripts into INSERT SELECT statements, and they worked flawlessly on the server side. I installed the plug-in using a server-side script, but noticed I still had to enable it using the admin console. No problem though; after enabling it there, it just worked. For most tables, I had an identical structure for the Microsoft SQL tables and the FileMaker tables, so an INSERT INTO <filemakerTable> ( SELECT * FROM <ESS MSSQL TABLE> ) worked nicely. For some other tables I had defined some extra fields in the FileMaker tables, and the DoSQL 2 plug-in correctly alerted me that the number of fields did not match. But since the field names matched, a custom function that creates the SQL for all fields present in both the source and the target table was quickly constructed. Kinda cool: you are actually talking SQL to FileMaker, which translates the SQL into other SQL to talk to the ESS table. I had already tested DoSQL 2 with ESS tables on the client side, but was happily surprised it also worked very well in server-side scripts. One thing though, regardless of the technology used to get the data into the tables — regular import, ODBC import or the DoSQL way:
It's easy to make the following mistake. I copied the ESS tables in the field definitions window and just pasted them again, creating local FileMaker tables, and considered myself pretty smart for having found a very fast way to create the local tables. Until I started importing data. Glacial. I then removed all extra bells and whistles from the fields, all the verification stuff, and set indexing back to automatic, because locally I'm using these tables for relational reports. So much for my time saving: it took more time to remove all this than if I had just created the tables from scratch. After modifying them, importing is much, much faster. Not lightning fast, but fast enough, since the server-side script has all night to do the job. So I was finally able to deliver what I promised to the customer. In my initial analysis I did not count in the additional cost of the plug-in though, not to mention the hours I lost trying to solve the problem I unexpectedly ran into.
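The custom function mentioned above — building the INSERT from the intersection of field names when source and target tables don't match exactly — amounts to a few lines of string assembly. A minimal sketch in Python (the original would be a FileMaker custom function; table and field names here are illustrative):

```python
# Sketch of the "matching fields" helper: given the field lists of a
# source (ESS) table and a target (local) table, build an INSERT that
# copies only the fields present in both. Names are hypothetical.

def build_insert(target, target_fields, source, source_fields):
    common = [f for f in source_fields if f in target_fields]
    cols = ", ".join(common)
    return f"INSERT INTO {target} ({cols}) SELECT {cols} FROM {source}"

sql = build_insert(
    "Customers_local", ["id", "name", "city", "localNote"],
    "Customers_ESS",   ["id", "name", "city", "legacyCode"],
)
print(sql)
# INSERT INTO Customers_local (id, name, city) SELECT id, name, city FROM Customers_ESS
```

Because the column list is generated rather than typed, adding a field to both tables automatically includes it in the next nightly run, while fields unique to one side are silently skipped.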
  14. Hi Laurent, More awake now. I fiddled with the records to see when and where things go wrong. The first record of the grouping is not counted by COUNT ( DISTINCT .. ). I tried mFMb_DoSQL ( " SELECT T1.Teacher, COUNT(DISTINCT T1.Student) FROM T1 WHERE T1.Teacher = 'TEACHER X' GROUP BY T1.Teacher" ) and of course this returns a blank. Then I changed the first teacher to "TEACHER X" and it gives me TEACHER X,0. When I remove the DISTINCT part, it returns 1 instead of 0. The problem seems to be in the COUNT() function. Confirmed on my part.
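For reference, here is what a standards-conforming SQL engine returns for the same shape of query, checked against SQLite as a baseline (the plug-in under discussion is not available here, and the rows are made up):

```python
import sqlite3

# Baseline behaviour of COUNT(DISTINCT ...) with GROUP BY, to compare
# against the off-by-one DoSQL result described above. Illustrative data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T1 (Teacher TEXT, Student TEXT)")
con.executemany("INSERT INTO T1 VALUES (?, ?)", [
    ("TEACHER X", "alice"),
    ("TEACHER X", "alice"),   # duplicate student, counted once
    ("TEACHER X", "bob"),
])
row = con.execute(
    "SELECT Teacher, COUNT(DISTINCT Student) FROM T1 "
    "WHERE Teacher = 'TEACHER X' GROUP BY Teacher"
).fetchone()
print(row)  # ('TEACHER X', 2)
```

With three matching rows covering two distinct students, the correct answer is 2; a result that omits the first record of the group would return 1, which matches the symptom reported above.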
  15. An interesting loop that places the called script on top of the stack. I've put this in an XML clip. I tried this with version 10. Run this in the debugger and let it loop a few times; you'll see it switches to the called script. Now put the debug execution pointer next to the "exit script" step of the script on top of the stack; you'll see it winds down again, coming back to the calling scripts. Rob, I know this is an old thread, but still, I read it and thought about this solution. As for the mechanics of DoScript, ZippScript, EventScript or any other script-triggering plug-in out there: they all have to call the internal API function that FileMaker provides. When FileMaker changes the rules there ( like in 7->8 ), there's nothing that can be done; it's the FileMaker side that decides where the script will be placed in the execution stack. Here's the CM clip: Let ( myScript = mFMb_DoScript ( "DoScript Loop" ; Get ( FileName ) ; Random ; "resume" ) ; 1 )