Overview

How it works

Every haagsman.ai product follows the same pattern:

  1. Input — You send data (documents, emails, transcripts, invoices) via REST API
  2. Process — The product uses AI to analyze, extract, or generate
  3. Output — You get structured results (answers, categories, action items, drafts)

All products share a common core library that handles authentication, encryption, monitoring, and LLM communication. This means consistent behavior, security, and management across your entire deployment.
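The Input → Process → Output flow above maps onto a single authenticated HTTP call. A minimal Python sketch, assuming a hypothetical base URL and `/v1/analyze` endpoint — the real paths depend on the product you deploy:

```python
import json
from urllib import request

API_BASE = "https://api.example.com/v1"  # placeholder; use your deployment's URL

def build_analyze_request(api_key: str, text: str) -> request.Request:
    """Step 1 (Input): package a document for the REST API."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return request.Request(
        f"{API_BASE}/analyze",  # endpoint name is illustrative
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # JWT / API key auth
            "Content-Type": "application/json",
        },
        method="POST",
    )

def parse_result(body: bytes) -> dict:
    """Step 3 (Output): results come back as structured JSON."""
    return json.loads(body)

# Step 2 (Process) happens server-side; a call would look like:
# with request.urlopen(build_analyze_request(key, doc)) as resp:
#     result = parse_result(resp.read())
```

The same request shape works for every product because the shared core library handles authentication and transport the same way across the suite.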

Deployment options

Option               Best for                               Complexity
Docker Compose       SMBs, single-server deployments        Simple
Kubernetes / Helm    Enterprise, multi-node, auto-scaling   Moderate
Cloud templates      AWS, GCP, Azure managed deployments    Moderate
Air-gapped (Ollama)  Regulated industries, no internet      Simple
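For the Docker Compose option, a single-server deployment is typically a short compose file. The sketch below is illustrative only — the image name, port mapping, and environment variables are placeholders, not the shipped configuration:

```yaml
# docker-compose.yml — illustrative layout, not the shipped file
services:
  app:
    image: haagsman/example-product:latest   # placeholder image name
    ports:
      - "443:8443"
    environment:
      LLM_PROVIDER: "claude"                 # or "openai" / "ollama"
      API_KEY_FILE: /run/secrets/api_key
    read_only: true                          # matches the read-only filesystem posture
    user: "1000:1000"                        # non-root container, per the security notes
    restart: unless-stopped
```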

System requirements

Minimum (single product)

  • 2 CPU cores
  • 2 GB RAM
  • 10 GB disk
  • Docker 24+

Recommended

  • 8 CPU cores
  • 16 GB RAM
  • 100 GB SSD
  • Docker 24+ with Compose v2

LLM requirements

  • Claude or OpenAI: Internet access to API endpoints
  • Ollama (air-gapped): GPU recommended (NVIDIA with 8GB+ VRAM), 16GB+ system RAM

Security at a glance

  • All data encrypted at rest (AES-256-GCM)
  • All traffic encrypted in transit (TLS 1.3)
  • JWT + API key authentication with role-based access control
  • GDPR-compliant: built-in data export and deletion endpoints
  • Container hardening follows NIST SP 800-190 guidelines
  • Non-root containers, read-only filesystems, no shell in production images
  • Full audit logging of every API call
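The GDPR export and deletion endpoints follow the same authenticated REST pattern as everything else. A hedged Python sketch — the `/v1/gdpr/...` routes and subject-ID parameter are illustrative assumptions, not the documented paths:

```python
from urllib import request

API_BASE = "https://api.example.com/v1"  # placeholder base URL

def gdpr_request(api_key: str, subject_id: str, action: str) -> request.Request:
    """Build a data-export or data-deletion request for one data subject.

    Route shape is hypothetical; consult your product's API reference.
    """
    if action not in ("export", "delete"):
        raise ValueError("action must be 'export' or 'delete'")
    return request.Request(
        f"{API_BASE}/gdpr/{action}/{subject_id}",  # illustrative route
        headers={"Authorization": f"Bearer {api_key}"},
        # Deletion is destructive, so DELETE; export is a read, so GET.
        method="DELETE" if action == "delete" else "GET",
    )
```

Because every API call is audit-logged, export and deletion requests leave a verifiable trail for compliance reviews.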