
Setting up a Portable Local AI Environment Using Llama 3.2 Vision, Docker on Windows Subsystem for Linux, and FileMaker for Image Recognition


I’m excited to share a proof of concept that combines FileMaker’s capabilities with the power of advanced AI—specifically, integrating Llama 3.2 Vision for image-based semantic search in a portable, offline Docker setup. Here’s what this means for us as developers, and why this is a potential game-changer for our FileMaker applications.

Why This Proof of Concept Matters

  1. Portable, Offline AI Solution

    • This setup runs Llama 3.2 Vision (an AI model that can understand and search images based on content) entirely in Docker, making it self-contained and portable.
    • Because it’s offline, it’s ideal for clients who require data to stay within a closed network. No internet dependency means data security and privacy are fully in our control—no external servers involved.
  2. Decoupling FileMaker Server and AI Model Server for Performance

    • The model server (running in Docker) and FileMaker Server are kept separate, allowing each to run independently and with optimized resources.
    • This means FileMaker can handle database tasks without being bogged down by AI processing, and the model server can leverage GPU resources (if available) for faster image processing.
    • Bottom line: This setup makes AI integration scalable and doesn’t sacrifice performance.
  3. Docker Adds Flexibility and Easy Replication

    • Running Llama 3.2 Vision in Docker encapsulates all dependencies. This means we can spin up the same setup on different machines or networks with minimal hassle.
    • Docker ensures that our environment remains consistent across machines, perfect for testing, sharing, or deploying in various setups.
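To make this concrete, here is a minimal sketch of how the model server can be brought up. It assumes the model is served through Ollama, which publishes an official Docker image; the volume name and the `--gpus` flag (optional, for NVIDIA GPU pass-through) are from Ollama's and Docker's own documentation:

```shell
# Start the Ollama model server in a container, persisting models in a named volume.
# Drop --gpus=all on machines without NVIDIA GPU support; it will fall back to CPU.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull the vision model inside the running container.
docker exec -it ollama ollama pull llama3.2-vision
```

Once running, the API listens on port 11434, which is what FileMaker's Insert from URL will talk to.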
  4. Using Insert from URL for AI Interaction

    • Currently, we’re still using Insert from URL to talk to the AI model: the image is converted to Base64 and sent inside a JSON request body. Yes, it’s a bit manual, but it works as a solid proof of concept.
    • For now, this approach allows us to experiment with semantic searches in FileMaker using a proven method.
    • We’ll be transitioning to the new AI script steps in FileMaker 21.1 soon, which will simplify the process significantly by allowing direct interaction with AI models—stay tuned for that update!
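As a sketch of what that JSON request looks like, the snippet below builds the body that Insert from URL would POST. The payload shape (`model`, `prompt`, `images`, `stream`) follows Ollama's documented `/api/generate` endpoint; the function name and the placeholder image bytes are mine, and inside FileMaker you would do the same with `Base64Encode` and a JSON calculation rather than Python:

```python
import base64
import json

def build_vision_request(image_bytes: bytes, prompt: str,
                         model: str = "llama3.2-vision") -> str:
    """Build the JSON body to POST to the model server's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        # Ollama expects raw Base64 image data in an "images" array.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # one complete JSON response instead of a token stream
    }
    return json.dumps(payload)

# Placeholder bytes stand in for real image container data.
body = build_vision_request(b"\x89PNG...", "Describe this image in one sentence.")
print(json.loads(body)["model"])  # llama3.2-vision
```

On the FileMaker side, this body goes into Insert from URL with a cURL option such as `-X POST -H "Content-Type: application/json"` pointed at the container's port.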

Real-World Applications and Benefits

  1. Semantic Search for Images

    • With Llama 3.2 Vision, we can enable image-based searches by description (e.g., “sunset over ocean”) or by similarity (finding images that look similar to a selected one). This is perfect for content management systems, asset tracking, or media databases in FileMaker.
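Under the hood, similarity search of this kind typically ranks stored image embeddings against a query embedding by cosine similarity. Here is a minimal sketch with toy 4-dimensional vectors; in the real setup the embeddings would come from the model server and be cached alongside the records in FileMaker, and the file names below are invented:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; real ones are high-dimensional vectors from the model.
query = [0.9, 0.1, 0.0, 0.2]          # e.g. embedding of "sunset over ocean"
images = {
    "sunset.jpg":  [0.8, 0.2, 0.1, 0.3],
    "invoice.png": [0.0, 0.9, 0.8, 0.1],
}
best = max(images, key=lambda name: cosine_similarity(query, images[name]))
print(best)  # sunset.jpg
```

The same ranking works for find-similar-images: use a selected image's embedding as the query instead of a text description's.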
  2. Easy Setup, Deployment, and Control

    • This Docker-based setup is portable and flexible, allowing us to run it entirely offline. For any clients in regulated industries (think healthcare or finance), this keeps sensitive data within a secure environment.
    • Deploy once, replicate anywhere: Docker’s containerization makes it easy to move the setup between machines, meaning we can demonstrate or deploy AI capabilities faster and more efficiently.
  3. Blueprint for Scalable, Resource-Efficient AI in FileMaker

    • This proof of concept is not just a test—it’s a roadmap. We’re showcasing a solution that separates AI workloads from FileMaker Server, allowing each component to scale with demand.
    • Running the AI model server on dedicated hardware or GPU resources opens up possibilities for high-performance applications without burdening the main FileMaker Server.

Why You Should Care

This setup showcases the potential of advanced AI capabilities directly within FileMaker solutions. Whether it’s helping clients manage large media libraries, providing intelligent search features, or simply offering a proof of concept for offline AI, this approach offers cutting-edge functionality while maintaining data privacy and control.

With FileMaker 21.1’s new AI script steps on the horizon for us, we’re only getting started. This proof of concept is a glimpse into what’s possible when we combine the power of AI with the versatility of FileMaker.

https://axelar.eu/setting-up-a-portable-local-ai-environment-using-ollama-3-2-vision-docker-on-linux-windows-subsystem-and-filemaker-for-image-recognition/
