
Announcing - Free FileMaker MCP for Mac


Announcing filemaker-mcp — Connect FileMaker Directly to Claude AI

I'm releasing a tool I built that I think the FileMaker community will find useful.

filemaker-mcp is a Model Context Protocol (MCP) server that gives Claude — Anthropic's AI assistant — direct, structured access to your FileMaker solution. Schema, data, analytics. No copy-pasting field lists, no explaining your data model in every prompt. Claude just has it.

github.com/nietsneuah/filemaker-mcp

Why I built it

I run a FileMaker-based ERP that manages operations for carpet and rug cleaning businesses. I was spending a lot of time giving Claude context about my schema every time I needed help with a script, a calculation, or a layout question. MCP solved that problem — it's a protocol that lets an LLM connect to external tools and data sources in a structured way. Once I connected Claude to my FileMaker Server, it could see every table, every field, every relationship. The development workflow improved immediately.

But then I realized the connection could do a lot more than just answer schema questions.

What it actually does

The server connects to FileMaker Server's OData v4 API — the same REST interface you may already be using for integrations. Once connected, Claude can query your data conversationally. You can ask things like:

  • "What were total sales by month last year?"

  • "Which driver had the highest average invoice in Q4?"

  • "Show me all open orders past their promised date"

Claude forms the OData queries, pulls the data, and gives you the answer in natural language.
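A query like the third example above might translate into an OData request along these lines. This is a hedged sketch: the host, database, table, and field names are illustrative, not taken from the actual solution.

```python
from urllib.parse import quote

# Hypothetical OData v4 request for "show me all open orders past their
# promised date". FileMaker Server exposes OData at /fmi/odata/v4/<db>;
# everything after that is an assumption for illustration.
base = "https://fms.example.com/fmi/odata/v4/Sales"
filt = "Status eq 'Open' and PromisedDate lt 2025-06-01"
url = f"{base}/Orders?$filter={quote(filt)}&$select=OrderID,Status,PromisedDate"
print(url)
```

Claude's job is producing the `$filter` expression; the server's job is turning it into a correctly encoded HTTP call.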

The analytics engine

This is where it gets interesting. The server is written in Python, which gave me access to pandas — a data analysis library widely used in finance and data science for time-series analysis and large dataset manipulation. If you've ever built a pivot table in Excel, pandas does the same thing but in memory, in milliseconds, and at scale.

I embedded pandas directly into the MCP server as an analytics sub-process. When Claude pulls data from FileMaker, it doesn't try to analyze millions of rows inside the AI model (which would be slow and expensive). Instead, the raw data gets loaded into a pandas DataFrame — a highly efficient in-memory data structure. The MCP server does the heavy computation locally, and only the summarized result gets sent back to Claude.

What this means in practice:

  • A query like "Summarize sales by category by month and product for last year" returns in 10-15 seconds

  • The raw data — potentially megabytes — never hits the AI model, so there's no token consumption cost

  • DataFrames persist in memory for the entire session, so follow-up queries against the same data don't need to touch FileMaker Server again

  • DataFrames can be sliced and saved as new datasets, building up an in-session analytical workspace
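The pre-processing pattern described above can be sketched in a few lines of pandas. The rows here stand in for an OData response, and the column names are illustrative; the point is that the pivot-table-style aggregation happens in memory and only the small summary would ever reach the model.

```python
import pandas as pd

# Fake raw rows in place of an OData response
rows = [
    {"Month": "2025-01", "Category": "Rugs",    "Total": 1200.0},
    {"Month": "2025-01", "Category": "Carpets", "Total": 800.0},
    {"Month": "2025-02", "Category": "Rugs",    "Total": 950.0},
]
df = pd.DataFrame(rows)

# Pivot-table-style aggregation, analogous to "sales by category by month"
summary = df.pivot_table(index="Month", columns="Category",
                         values="Total", aggfunc="sum", fill_value=0)
print(summary)
```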

Requirements

  • FileMaker Server v22+ — the OData v4 API is required. This won't work with earlier versions or FileMaker Cloud legacy.

  • Python — the server is written in Python, a language with deep support for data analysis and broad adoption across the AI ecosystem.

  • Claude Pro or Team plan — MCP server support requires a paid Anthropic plan.

Who is this for

  • FileMaker developers who use Claude for development and want it to understand their solution without constant hand-holding

  • Business operators running on FileMaker who want to ask natural language questions about their data

  • Anyone exploring AI integration with FileMaker who wants a working, production-tested starting point

It's not a proof of concept — it's a tool I use every day.

The repo is public. Feedback, issues, and contributions are welcome.

— Doug Hauenstein

Great use of Pandas.

Haven't looked at the code yet - are you relying on the LLM to translate the natural language in OData syntax accurately?

  • Author

Wim, thanks for taking an interest. I found this useful for my dev effort, so I thought I would share.

The simple answer is no; the MCP does the translation.

This is set up as an MCP tool for Claude. The flow: the LLM takes the natural language query and passes it to the MCP (which I have "trained" with the OData dialect); the MCP calls FM and returns raw data. If it is a time-series request, pandas saves the result to a DataFrame (persistent for the session and available for similar queries), performs the calculations, and returns only the result to the LLM. The reason I chose this method is economy: OData queries may return several MB or more, which is expensive to pass to the model, so pre-processing reduces cost 100-to-1 or more in some cases. The beauty of this method is that I can do a natural language query in Claude for, say, "characterize last year's sales by category", etc. Since pandas is really efficient at time series, the calculation is fast and only the result hits Claude, so token usage is tiny in both directions. What's cool is I can tell Claude to build a dashboard in JSX; that takes about 30 seconds and I have an artifact I can publish and share.

I am now working on agentic middleware that uses a similar pattern. I am replacing PipeDream (Connect) with FastAPI and an orchestrator agent that will either pass through simple requests without agentic assistance or decide how to route them via a modular design of workers; these could be MCP or other concepts. I am also building in some tuning to choose the LLM by task complexity. I have a prototype in development now. The free tool I published was originally designed to enhance my pair-programming efficiency: I wanted to give the LLM context so I don't have to constantly remind it about schema. So the tool I built is really designed for a developer.

I have a small script for FileMaker that returns DDL on the first pass and builds a Python dict, so the MCP now has the schema. Once the dict is not NULL, it skips that step and just runs a read-only OData call.
Obviously you can control which tables are visible with FM's native security.


# GetTableDDL ( "[\"Orders\", \"Customers\", \"Location\", \"InHomeInvoiceHeader\", \"Drivers IH\", \"Pickups\", \"InHomeLineItem\", \"OrderLine\"]" ; True )
#
# ================================================================
# Script:       SCR_DDL_GetTableDDL
# Purpose:      Return DDL for requested table occurrences.
#               Called headless via OData Script endpoint.
# Parameters:   JSON array of TO names
#               e.g. ["Orders", "Customers", "Location"]
# Returns:      DDL text via Exit Script
# Author:       Doug Hauenstein
# Created:      2026-02-15
# Dependencies: GetTableDDL() native function (FM 22+)
#               OData Script execution privilege
# Notes:        ignoreError = True so partial results return
# ================================================================
#
# =====================================
# SECTION 1: SETUP & VALIDATION
# =====================================
// Set Variable [ $param ; Value: "[\"Orders\", \"Customers\"]" ]
Set Variable [ $param ; Value: Get ( ScriptParameter ) ]
# Validate we received a JSON array
If [ IsEmpty ( $param ) or Left ( $param ; 1 ) ≠ "[" ]
    Exit Script [ JSONSetElement ( "{}" ; [ "error" ; True ; JSONBoolean ] ; [ "message" ; "Parameter must be a JSON array of table occurrence names" ; JSONString ] ) ]
End If
# =====================================
# SECTION 2: GENERATE DDL
# =====================================
Set Variable [ $ddl ; Value: GetTableDDL ( $param ; True ) ]
# Check for error return
If [ $ddl = "?" ]
    Exit Script [ JSONSetElement ( "{}" ; [ "error" ; True ; JSONBoolean ] ; [ "message" ; "GetTableDDL returned error. Check AI call log." ; JSONString ] ) ]
End If
# =====================================
# SECTION 3: RETURN RESULT
# =====================================
Exit Script [ $ddl ]

Hi Doug, thanks for the quick reply.

I understand the DDL part; I do the same in my AI applications (usually TypeScript). I also understand the need to not send raw data to the LLM; the raw response is way too big and would crowd the context window. That's what the pandas layer is for.

My question was more to do with the accuracy of translating the natural language question (the user's prompt) into the proper OData query syntax. Everything depends on that, because if the prompt isn't correctly translated then the wrong data is found.

This part is confusing:

I have trained mcp with oData dialect

MCP is nothing but a protocol and an MCP server doesn't have any AI capabilities out of the box so what did you train exactly? Where and by what is the natural language translated into OData syntax?

  • Author

How fmrug-mcp handles analytics and time-series data

fmrug-mcp includes a Pandas-based analytics layer that runs entirely on the MCP server — not in Claude, not in FileMaker. Here's how it works:

Three tools, two steps:

fmrug_load_dataset — Claude calls this tool to pull records from FileMaker into a Pandas DataFrame held in the MCP server's memory. This is a single OData HTTP call (auto-paginates if >10,000 records). Zero LLM tokens — it's just HTTP and Python.

fmrug_analyze — Claude calls this to run groupby, aggregation (sum, count, mean, min, max), filtering, and sorting on the loaded DataFrame. This executes locally in Python on the MCP server using Pandas. No FM round trip, no LLM tokens for the computation itself. Returns a compact summary table.

fmrug_list_datasets — Shows what's currently loaded in memory. Housekeeping.

Where each step runs:

| Step | Where it runs | What it costs |
|------|---------------|---------------|
| Claude decides what to load | Anthropic API | LLM tokens (small — just the tool call) |
| OData fetch from FileMaker | MCP server → FM Server | HTTP request (zero tokens) |
| DataFrame stored in memory | MCP server (Python/Pandas) | RAM only |
| Aggregation/groupby/sort | MCP server (Pandas) | CPU only (zero tokens, zero FM calls) |
| Claude reads summary result | Anthropic API | LLM tokens (~200 for a summary table) |

Why this matters for token cost:

A raw query returning 10,000 invoice records would consume ~400,000+ tokens if Claude had to read every row. Instead, Claude loads the dataset once (zero tokens), runs an aggregation like "sum revenue by month by driver" (zero tokens), and reads back a 20-row summary table (~200 tokens).
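The arithmetic behind that claim, as a back-of-envelope sketch. The tokens-per-record figure is a rough assumption, not a measurement.

```python
# Rough token-cost comparison: raw rows vs. a pre-aggregated summary
records = 10_000
tokens_per_record = 40          # assumed average for a JSON invoice row
raw_cost = records * tokens_per_record      # if Claude read every row
summary_cost = 200                          # ~20-row summary table
print(raw_cost, summary_cost, raw_cost // summary_cost)
```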

Example time-series workflow:

Claude: fmrug_load_dataset(name="inv2025", table="InHomeInvoiceHeader",
        filter="Date_of_Service ge 2025-01-01", select="Date_of_Service,InvoiceTotal,Driver,Zone")
→ MCP server fetches from FM via OData, stores DataFrame in memory
→ Returns: "Loaded 8,247 records, 4 columns, 1.2MB"

Claude: fmrug_analyze(dataset="inv2025", groupby="Driver",
        aggregate="sum:InvoiceTotal,count:InvoiceTotal", sort="InvoiceTotal_sum desc")
→ Pandas runs groupby + aggregation locally
→ Returns: compact table (Driver | Revenue | Job Count), ~200 tokens

Claude: reads the summary, answers "Driver Mike led revenue at $182K across 847 jobs"

All computation happens in Python on the MCP server. Claude's role is deciding what to analyze and interpreting the results. FileMaker's role is storing the data. The MCP server does the heavy lifting in between.
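A minimal sketch of what an fmrug_analyze-style tool could do internally with pandas. The dataset registry, column names, and aggregation spec format here are illustrative assumptions, not the tool's actual API.

```python
import pandas as pd

# In-memory dataset registry (stand-in for what fmrug_load_dataset fills)
_datasets = {
    "inv2025": pd.DataFrame({
        "Driver": ["Mike", "Mike", "Ann"],
        "InvoiceTotal": [300.0, 200.0, 150.0],
    })
}

def analyze(dataset, groupby, aggs):
    """aggs: mapping of output column -> (source column, aggfunc)."""
    df = _datasets[dataset]
    out = df.groupby(groupby).agg(**{
        name: (col, fn) for name, (col, fn) in aggs.items()
    }).reset_index()
    # Sort by the first aggregate column, descending
    return out.sort_values(out.columns[1], ascending=False)

result = analyze("inv2025", "Driver",
                 {"Revenue": ("InvoiceTotal", "sum"),
                  "Jobs": ("InvoiceTotal", "count")})
print(result)
```

Only this small result table, not the raw rows, is what gets serialized back to the model.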


12 minutes ago, Wim Decorte said:

Hi Doug, thanks for the quick reply.

I understand the DDL part, I do the same in my AI applications (usually Typescript). I also understand the need to not send raw data to the LLM; the raw response is way to big and would crowd the context window; that's what the pandas are for.

My question was more to do with the accuracy of translating the natural language question (the user's prompt) into the proper odata query syntax. Everything depends on that because if the prompt isn't correctly translated then the wrong data is found.

This part is confusing:

I have trained mcp with oData dialect

MCP is nothing but a protocol and an MCP server doesn't have any AI capabilities out of the box so what did you train exactly? Where and by what is the natural language translated into OData syntax?

Clarification: nothing is "trained" — Claude is prompted with OData conventions specific to FileMaker.

When I say the MCP server is "trained," what I mean is that fmrug-mcp provides Claude with structured context about how to construct valid FileMaker OData queries. No model training, no fine-tuning — just careful prompt engineering and tool design.

Here's exactly where the natural language → OData translation happens:

1. MCP server instructions — When Claude Desktop connects to fmrug-mcp, the server sends an instructions block that teaches Claude FM-specific OData rules: use bare ISO dates (no quotes), the Location table is the primary customer record, always call get_schema before querying, etc.

2. Tool schemas with rich docstrings — Each tool (e.g., fmrug_query_records) has detailed parameter descriptions with valid examples: "City eq 'Cincinnati'", "Date_of_Service ge 2026-01-01". Claude sees these every time it considers using the tool.

3. DDL/schema cache — At startup, fmrug-mcp fetches FileMaker's $metadata XML, parses it, and caches the exact field names and types. When Claude calls fmrug_get_schema(table='Location'), it gets back the real field names — including ones with spaces like "Customer Name". Claude uses these to build syntactically correct OData $filter and $select expressions.

So the translation chain is:

  • User asks a question in natural language → Claude Desktop

  • Claude reads the MCP tool descriptions and schema data, constructs an OData filter expression → Anthropic API (LLM tokens consumed here)

  • Claude issues a tool call with the constructed parameters → MCP protocol

  • fmrug-mcp executes the HTTP request to FileMaker → Python/httpx (no tokens)

The "intelligence" is Claude's general ability to write query syntax, guided by the schema and examples that fmrug-mcp provides through the MCP protocol. The MCP server itself is deterministic Python — it doesn't interpret, translate, or reason about anything.
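The $metadata caching in point 3 can be sketched with the standard library. The XML below is a trimmed, hypothetical fragment of an OData v4 EDMX document; the namespace and property layout follow the OData spec, and the field names are illustrative.

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical $metadata fragment (OData v4 EDMX layout)
METADATA = """
<edmx:Edmx xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx" Version="4.0">
  <edmx:DataServices>
    <Schema xmlns="http://docs.oasis-open.org/odata/ns/edm" Namespace="FM">
      <EntityType Name="Location">
        <Property Name="Customer Name" Type="Edm.String"/>
        <Property Name="Date_of_Service" Type="Edm.Date"/>
      </EntityType>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>"""

EDM = "{http://docs.oasis-open.org/odata/ns/edm}"
root = ET.fromstring(METADATA)
# Build table -> {field name: type}, preserving exact names (spaces and all)
schema = {
    et.get("Name"): {p.get("Name"): p.get("Type")
                     for p in et.findall(f"{EDM}Property")}
    for et in root.iter(f"{EDM}EntityType")
}
print(schema["Location"])
```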

  • Author
17 minutes ago, Doug Hauenstein said:

My question was more to do with the accuracy of translating the natural language question (the user's prompt) into the proper odata query syntax. Everything depends on that because if the prompt isn't correctly translated then the wrong data is found.

Claude already understands OData v4 syntax from its training data. What fmrug-mcp adds is FileMaker-specific context that Claude wouldn't otherwise have:

  • FileMaker rejects standard OData encoding. Spaces must be %20, never +. Field names with spaces need quoting. Dates must be bare ISO format with no quotes. The MCP server handles these quirks in the HTTP layer so Claude doesn't have to get them right.

  • Field names are unpredictable. FM tables don't follow consistent naming — some fields use spaces (Customer Name), some use underscores (Date_of_Service), some are cryptic (_kp_CustLoc). At startup, fmrug-mcp fetches the $metadata XML from FM Server, parses it, and caches the real field names. Claude calls get_schema and gets back the exact names to use — it's not guessing.

  • The MCP server instructions tell Claude the business logic. Which table is the primary customer record (it's Location, not Customers). Call get_schema before querying. Use count_records before large queries. These are things Claude can't infer from OData knowledge alone.


So the natural language → OData translation works like this: Claude's general query-writing ability (from pretraining) + FM-specific field names and rules (from the MCP server at runtime) = valid FileMaker OData queries. The MCP server doesn't translate anything — it gives Claude the context it needs to translate correctly, and then executes the resulting HTTP call.
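The encoding quirks in the first bullet can be handled mechanically in the HTTP layer, along these lines. This is a sketch under the stated assumptions (spaces as %20, bare unquoted date literals); the field and table names are illustrative.

```python
from urllib.parse import quote

def encode_filter(expr: str) -> str:
    # urllib's quote() already uses %20 for spaces (never +); we just
    # keep the characters OData needs literal out of the encoding.
    return quote(expr, safe="()'*,=")

f = encode_filter("Date_of_Service ge 2026-01-01 and City eq 'Cincinnati'")
print(f)
```

Because the server owns this step, Claude only has to produce a readable filter expression; it never has to get percent-encoding right.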

Domain knowledge is a substantial reason this works.

I spent months teaching Claude about the quirkiness of FileMaker, from ExecuteSQL to fmsnippetxml. I even built a Python translator tool so I could round-trip scripts from Claude, paste them directly into FM, and then back to Claude so context was crystal clear. All of this was documented and distilled so it could be used to prevent drift. These lessons were used in creating the rules the MCP uses as tools.

If your question is more rooted in "How does Claude interpret what I ask it?", then that is another matter. You will need to give your Claude chat the context it needs to be in sync with your specific idiosyncrasies.

Claude already understands OData v4 syntax from its training data. But generic OData knowledge fails against FileMaker for reasons anyone who's worked with FM's OData implementation knows.

What fmrug-mcp provides is months of hard-won FileMaker-specific knowledge, encoded as code and context:

FileMaker's OData quirks are documented nowhere. Spaces must be %20, never + — FM silently rejects the + encoding that's valid per the OData spec. Dates must be bare ISO format with no quotes. Field names with spaces need specific quoting. These lessons came from months of trial and error with FM Server, cataloged across dozens of Claude conversations, and baked into the HTTP layer so they're handled automatically.

The schema/DDL system reflects real FM complexity. FM tables don't follow consistent naming — Customer Name, Date_of_Service, kpCustLoc all in the same database. At startup, fmrug-mcp fetches the $metadata XML, parses it, and caches the real field names and types. Claude calls get_schema and gets back exact names — it's working from ground truth, not guessing.

The MCP server instructions encode business domain knowledge. Which table is the primary customer record (Location, not Customers). How the invoice/order/pickup relationships work. When to use count_records before a large query. This is the kind of knowledge that only comes from deeply understanding the specific FileMaker solution — it was built up over months of conversations, captured in design documents, and distilled into the server instructions and tool descriptions.

So the natural language → OData translation works like this: Claude's general query-writing ability (pretraining) + FileMaker OData quirk handling (code) + real schema from the live database (DDL cache) + domain-specific business rules (MCP instructions built from months of documented learnings) = valid, accurate FileMaker OData queries.

The MCP server doesn't do AI — it's the accumulated knowledge layer that makes AI effective against a system it would otherwise struggle with.

---

Got it, so to my earlier question: you are in effect relying on the LLM (Claude's models) to translate the prompt into OData syntax, and your tool descriptions guide it further on FM-specific nuances.

The consequence of this is that you may get different results from that natural-language-to-OData translation depending on the quality of the model.

Not a criticism, just an observation. I really like the work you've done here, especially the pandas.

  • Author
9 minutes ago, Wim Decorte said:

Got it, so to my earlier question: you are in effect relying on the LLM (Claude's models) to translate the prompt into OData syntax, and your tool descriptions guide it further to FM-specifc nuances.

The consequence of this is that you may get different results of that natural-lange-to-OData translation depending on the quality of the model.

Not a criticism, just an observation. I really like the work you've done here, especially the pandas.

> You're right, and it's a fair observation. The LLM does the NL→OData translation, and model quality matters.
>
> Two things make this more robust than it might sound:
>
> Schema retrieval before query. The server exposes a get_schema tool — Claude calls it to read actual field names and types from the database before building the query. So it's not guessing structure, it's reading it. This makes the translation far more deterministic regardless of model tier.
>
> Self-correcting tool loop. If the generated OData is wrong, FileMaker returns a 400 error with details. Claude sees the error, adjusts, and retries. During testing I watched Haiku (the cheapest model) guess wrong field names, get a 400, call get_schema to learn the correct names, and retry successfully — all within one turn. So model quality affects efficiency (number of round-trips) more than accuracy (final result).
>
> The practical reality: Sonnet handles most queries in one shot. Haiku occasionally needs the retry loop. Both arrive at correct results. The tool descriptions encoding FM-specific OData quirks (substringof() instead of contains(), _kf prefix conventions, etc.) do a lot of the heavy lifting that would otherwise depend entirely on model knowledge.
>
> So yes — model quality is a factor, but the architecture is designed to make it a graceful degradation (slower, not wrong) rather than a cliff.
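The self-correcting loop described above can be sketched as a simple try/retry. `fm_query`, `fetch_schema`, and the repair step are illustrative stand-ins; in the real system the model itself rewrites the query after reading the schema.

```python
class ODataError(Exception):
    def __init__(self, status, detail):
        super().__init__(detail)
        self.status = status

def repair_filter(filt, schema):
    # Stand-in for the LLM's correction step: swap a guessed field name
    # for the real one learned from the schema.
    for bad, good in schema.items():
        filt = filt.replace(bad, good)
    return filt

def query_with_retry(fm_query, fetch_schema, table, filt):
    try:
        return fm_query(table, filt)
    except ODataError as e:
        if e.status != 400:
            raise
        schema = fetch_schema(table)          # learn the real field names
        return fm_query(table, repair_filter(filt, schema))

# Simulate: first call fails on a guessed field name, retry succeeds
def fm_query(table, filt):
    if "ServiceDate" in filt:
        raise ODataError(400, "Unknown field: ServiceDate")
    return [{"Date_of_Service": "2026-01-02"}]

rows = query_with_retry(fm_query,
                        lambda t: {"ServiceDate": "Date_of_Service"},
                        "Orders", "ServiceDate ge 2026-01-01")
print(rows)
```

The failure costs one extra round-trip, not a wrong answer, which is the "graceful degradation" point above.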

And with Opus 4.6, it's incredible.
In my case I use the Advanced Max plan and the latest LLM, so my results have been amazingly accurate.
Please also note I am not doing any writes, just reads. Writes require a more deterministic approach and validations.

Thank you for your interest - coming from you it is meaningful and validating.

If I have OttoFMS, which can deploy an MCP server, where would this overlap with that?

  • Author

I don't know about OttoFMS or how it is accessed; I would assume from within a FileMaker solution.
My interface is Claude; the MCP servers are connection tools. I leverage Claude's AI to perform whatever query I need. As a developer I need to switch between customer solutions quickly, and I can do this from within Claude.

I leverage the pandas library, which is great for time-series calculations. Calcs are done without token usage.

  • Author
58 minutes ago, Ocean West said:

OttoFMS is installed on server it runs node

https://docs.ottomatic.cloud/docs/ottofms/otto-console/mcp-servers


MY-MCP vs OttoFMS MCP — A Head-to-Head Comparison

With OttoFMS recently launching their MCP server feature, I thought it was worth comparing it against MY-MCP. Both connect AI clients like Claude to FileMaker data — but they take fundamentally different approaches.


The Key Philosophical Difference

OttoFMS MCP is a script gateway. The intelligence lives in your FileMaker scripts. It exposes your existing scripts as MCP tools, configured through the Otto Console GUI — no code required on the MCP side.

MY-MCP is a query engine. The intelligence lives in the server itself. Schema discovery, OData query building, field quoting, date normalization, multi-tenant switching, and LLM-optimized output all happen automatically — no FM scripting required to get started (aside from one script that returns DDL).


Where OttoFMS Wins

  • Write operations out of the box (create, update, delete via FM scripts)

  • No code needed — GUI configuration

  • Server-side hosting — no local install per user

Where MY-MCP Wins

  • Zero-config querying — connect and immediately query any table

  • Automatic schema discovery at runtime

  • Multi-tenant — one server, multiple FM databases

  • In-memory analytics without additional FM round trips


Bottom Line: They're Complementary

Use MY-MCP for reads, discovery, and analytics. Use OttoFMS for write operations via existing FM scripts. They solve different problems and pair well together.


  • Author

Update @Wim Decorte
You got me thinking about schema discovery and how to make it efficient.
I'm breaking the code up into a Core and Dev tools. Dev tools will run a discovery process and then write to an FM_DDL_Context table, which will contain the context necessary for efficient calls by informing the LLM with insights.
I also built a caching system stored in pandas DataFrames, and a helper to translate common verbs used to describe date-range calls. Caching is reducing token usage and OData calls significantly. When a query is received from the LLM, a process looks in the cache to determine whether some or all of the data already exists; calls are only made for data not cached, which is then merged back into the DataFrame. In addition I have added several new pandas helper functions like AVG, PIVOT, and STDEV so post-processing can occur more efficiently in the Python core. Not yet released to the public, but it could address some of the concerns you had.
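The cache-then-fetch-the-gap pattern described above can be sketched like this. It is a simplified sketch assuming contiguous date ranges; `fetch_from_fm` stands in for the real OData call and the columns are illustrative.

```python
import pandas as pd

# In-memory cache of already-fetched rows
_cache = pd.DataFrame({"Date": pd.to_datetime(["2025-01-01", "2025-01-02"]),
                       "Total": [100.0, 200.0]})

def get_range(start, end, fetch_from_fm):
    """Serve a date range from cache, fetching only the missing tail."""
    global _cache
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    have = _cache[(_cache["Date"] >= start) & (_cache["Date"] <= end)]
    missing_start = (have["Date"].max() + pd.Timedelta(days=1)
                     if len(have) else start)
    if missing_start <= end:                       # gap to fill
        new = fetch_from_fm(missing_start, end)    # only the missing slice
        _cache = pd.concat([_cache, new], ignore_index=True)
    return _cache[(_cache["Date"] >= start) & (_cache["Date"] <= end)]

def fake_fetch(s, e):
    # Stand-in for the OData call; returns just the uncached day
    return pd.DataFrame({"Date": [pd.Timestamp("2025-01-03")],
                         "Total": [300.0]})

out = get_range("2025-01-01", "2025-01-03", fake_fetch)
print(len(out))  # 3 rows served, only one fetched from FileMaker
```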

  • Author

I am also adding an initializer and a suite of Dev tools.

The initializer retrieves DDL and table access, and can characterize which fields to use and which to eliminate from the context window. This is written to FM, providing an editable source to control context at the field level and eliminating the need to use field descriptions.
There are also various tool runners which provide query feedback stats, as well as writing failures to the DDL context for manual fine-tuning.
