Shopfloor Copilot

AI-Powered Manufacturing Execution System (MES)

Prototype · Technology Preview
Connect · Contextualize · Analyze
Request a Guided Demo → Try Live Demo ↗ Discover Features ↓

Platform Overview

From Monitoring to
Intelligent Decision Support

Shopfloor Copilot is an AI-enhanced MES that modernizes industrial monitoring through intelligent decision support and real-time data analysis. It combines OPC UA connectivity with a semantic engine to turn raw machine signals into actionable insights: OEE metrics, standardized loss classifications, and AI diagnostics grounded in your own documentation rather than hallucinated.
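The OEE figure at the heart of these metrics is the standard product of Availability, Performance, and Quality. A minimal sketch of the textbook calculation from raw shift counters (the numbers below are illustrative, not from the product):

```python
def oee_from_counts(planned_minutes: float, downtime_minutes: float,
                    ideal_cycle_minutes: float,
                    total_count: int, good_count: int) -> float:
    """Textbook OEE: Availability x Performance x Quality, each in [0, 1]."""
    run_time = planned_minutes - downtime_minutes
    availability = run_time / planned_minutes                    # uptime share
    performance = (ideal_cycle_minutes * total_count) / run_time # speed share
    quality = good_count / total_count                           # good-part share
    return availability * performance * quality

# 8 h shift, 1 h down, 0.5 min ideal cycle, 700 parts, 665 good:
print(round(oee_from_counts(480, 60, 0.5, 700, 665), 4))  # 0.6927
```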

30+
Manufacturing KPIs
11
Production Lines
200+
Stations Supported
100%
On-Premise · Zero PII Leakage
🔌
OPC UA Connectivity
Native OPC UA integration with semantic engine. Raw machine signals become OEE metrics and standardized loss classifications automatically.
🤖
RAG-Grounded AI Diagnostics
Evidence-based troubleshooting using Retrieval-Augmented Generation (RAG). Answers are grounded in your SOPs and Work Instructions, minimizing hallucinations.
🤝
Digital Collaboration
Structured digital shift handovers, team chat, operator logbooks, and NCR management — replacing paper logs and informal phone calls.
🔒
Local-First Privacy
All AI runs on-premise via Ollama local LLMs and ChromaDB vector store. Production data never leaves your server.

Platform Capabilities

Everything Your Shopfloor Needs

12 integrated modules — from real-time OEE monitoring to AI-powered Q&A, semantic signal mapping, and predictive maintenance.

📊
Real-Time OEE Dashboard
Live Availability, Performance, and Quality metrics per line and station. Color-coded status tiles and shift downtime events.
Learn more →
🧠
AI Diagnostics & Copilot
The AI Copilot analyses station data in real time, providing root-cause analysis with checklist steps grounded in your documentation.
Learn more →
💬
Operator Q&A Knowledge Assistant
Ask natural-language questions. Answers from your WIs, SOPs, and Safety Guides via RAG + ChromaDB.
Open Platform →
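The RAG flow the assistant describes (retrieve relevant document chunks from ChromaDB, then ask a local Ollama model to answer only from them) can be sketched as follows. This is an illustrative sketch, not the product's code: the collection name, store path, and model name are assumptions.

```python
def build_grounded_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a prompt that restricts the model to the retrieved excerpts."""
    context = "\n---\n".join(chunks)
    return (
        "Answer ONLY from the excerpts below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def ask(question: str) -> str:
    # Third-party imports deferred so the sketch loads without the packages
    # installed (pip install chromadb ollama).
    import chromadb
    import ollama
    client = chromadb.PersistentClient(path="./vector_store")   # path is an assumption
    collection = client.get_or_create_collection("work_instructions")
    hits = collection.query(query_texts=[question], n_results=3)
    prompt = build_grounded_prompt(question, hits["documents"][0])
    reply = ollama.chat(model="llama3",                          # model name is an assumption
                        messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]
```

Because both the vector store and the LLM run locally, the question and the retrieved document text never leave the server.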
🔌
OPC UA Explorer
Browse live OPC UA node trees, read real-time values, and add nodes to a persistent watchlist.
Learn more →
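Reading a watchlist of OPC UA nodes is a few lines with a standard client library. A sketch using the open-source `asyncua` package; the endpoint URL and NodeIds below are placeholders, not values from the product:

```python
import asyncio

async def read_watchlist(endpoint: str, node_ids: list[str]) -> dict[str, object]:
    """Connect to an OPC UA server and read the current value of each watched node."""
    from asyncua import Client  # third-party import deferred; pip install asyncua
    async with Client(url=endpoint) as client:
        values = {}
        for node_id in node_ids:
            node = client.get_node(node_id)
            values[node_id] = await node.read_value()
        return values

# Example (placeholder endpoint and NodeId):
# asyncio.run(read_watchlist("opc.tcp://plc.local:4840",
#                            ["ns=2;s=Line1.Station4.CycleTime"]))
```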
🧩
Semantic Signal Mapping
YAML-driven engine maps raw OPC signals to standardized loss categories — availability, performance, quality. No hardcoding.
Open Platform →
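A YAML-driven mapping file of this kind might look as follows. The structure and field names here are illustrative assumptions, not the product's actual schema; the point is that signal-to-loss rules live in configuration, not code:

```yaml
# Illustrative mapping file; signal paths and field names are assumptions.
signals:
  - node: "ns=2;s=Line1.Station4.AlarmCode"
    map:
      - when: {equals: 101}          # infeed jam
        loss_category: availability
        reason: "Unplanned stop: infeed jam"
      - when: {equals: 207}          # speed derate
        loss_category: performance
        reason: "Reduced speed"
  - node: "ns=2;s=Line1.Station4.RejectFlag"
    map:
      - when: {equals: true}
        loss_category: quality
        reason: "Rejected part"
```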
🏭
OPC Studio KPI Engine
30+ manufacturing KPIs computed live via an AST formula parser: Availability Rate, MTBF, MTTR, Throughput Efficiency, Energy per Unit.
Learn more →
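An AST formula parser evaluates KPI formulas over named signal values without `eval`'s security risks. A minimal sketch using Python's standard `ast` module (the formula and signal names are illustrative):

```python
import ast
import operator

# Arithmetic operators the formula language permits.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_formula(formula: str, signals: dict[str, float]) -> float:
    """Safely evaluate a KPI formula over named signal values.
    Only arithmetic and known signal names are allowed; no arbitrary code."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            return signals[node.id]          # unknown names raise KeyError
        raise ValueError(f"Disallowed expression: {ast.dump(node)}")
    return walk(ast.parse(formula, mode="eval"))

# Availability Rate (%) from raw runtime counters:
print(eval_formula("run_time / (run_time + downtime) * 100",
                   {"run_time": 420, "downtime": 60}))  # 87.5
```

Because only whitelisted node types are walked, a formula like `__import__('os')` is rejected rather than executed.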
🔴
Andon Board
Color-coded station × line grid. Green ≥85% OEE, red <50%. One-click alert acknowledgment. Kiosk mode for displays.
Learn more →
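The tile colouring reduces to a threshold function. The ≥85% (green) and <50% (red) cut-offs come from the description above; amber for the band in between is an assumption:

```python
def andon_color(oee_percent: float) -> str:
    """Tile colour for the Andon station x line grid."""
    if oee_percent >= 85:   # threshold from the product description
        return "green"
    if oee_percent < 50:    # threshold from the product description
        return "red"
    return "amber"          # middle band; colour name is an assumption
```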
🔧
Predictive Maintenance
Equipment health scores (0–100), failure probability, and predicted failure dates. 48-hour Prophet time-series forecast per asset.
Learn more →
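A 0–100 health score of this kind is typically a weighted penalty over normalized stress indicators. The weighting below is purely illustrative, not the product's model; each input is the asset's reading divided by its alarm limit, so 1.0 means "at the limit":

```python
def health_score(vibration_ratio: float, temp_ratio: float,
                 error_rate_ratio: float) -> float:
    """Illustrative 0-100 health score. Weights (0.4 / 0.35 / 0.25) are
    assumptions for the sketch, not the product's actual model."""
    stress = (0.4 * vibration_ratio
              + 0.35 * temp_ratio
              + 0.25 * error_rate_ratio)
    # Clamp so an asset far past its limits bottoms out at 0.
    return max(0.0, min(100.0, 100.0 * (1.0 - stress)))
```

The forecasting side then projects this score forward (the description above cites a 48-hour Prophet time-series forecast per asset) to estimate a failure date.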
📈
Advanced Analytics
OEE trends, bottleneck detection (TOC), root cause Pareto, comparative cross-line benchmarking, and defect risk predictions.
Open Platform →
🤝
Digital Shift Handover
Structured handover with auto-populated open issues, operator notes, and OEE summaries. Email delivery to incoming shift.
Learn more →
⚠️
Violations Management
Signal threshold violations with SLA timers per severity. Acknowledgment workflow, comment trail, and per-station compliance stats.
Open Platform →
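A per-severity SLA timer is a deadline computed from the violation's raise time. A sketch of the idea; the SLA windows below are illustrative defaults, not the product's configuration:

```python
from datetime import datetime, timedelta

# Illustrative acknowledgment SLA windows per severity (assumed values).
SLA_MINUTES = {"critical": 15, "high": 60, "medium": 240, "low": 1440}

def sla_status(severity: str, raised_at: datetime, now: datetime) -> dict:
    """Deadline, breach flag, and remaining minutes for an open violation."""
    deadline = raised_at + timedelta(minutes=SLA_MINUTES[severity])
    return {
        "deadline": deadline,
        "breached": now > deadline,
        "remaining_minutes": max(0.0, (deadline - now).total_seconds() / 60),
    }
```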
🌐
Visual Line Designer
BPMN drag-and-drop production line editor. Define station types, cycle times, and critical paths. Syncs to live plant model instantly.
Learn more →

System Architecture

Tech Stack & Infrastructure

Modern 3-tier containerised architecture, local-first by design: no cloud dependency, no PII leakage, and AI inference runs entirely on-premise.

Frontend
🖥️ NiceGUI 2.5 · ⚡ Vue.js / Quasar · 🎨 Tailwind CSS · 📱 PWA / Mobile
Backend & AI
🐍 Python 3.11 · ⚡ FastAPI · 🗄️ PostgreSQL 16 · 🧠 Ollama (Local LLM) · 🔍 ChromaDB · 🔗 OPC UA
Infrastructure
🐳 Docker 24.0+ · 📦 Docker Compose · 🔐 JWT Auth · 🌐 Nginx Reverse Proxy · 🔒 On-Premise Only
🏠
Local-First Architecture
No PII leakage. AI runs entirely on-premise. Your production data stays on your server — no cloud API calls for inference.

Deployment

Server Requirements

Runs on standard hardware. Deploy on-premise or on a private VPS. Docker + Linux — no special cloud infrastructure required.

⚡ Minimal Setup
🖥️ 4+ CPU Cores
💾 16 GB RAM
📀 100 GB Storage
🐳 Docker / Linux

Suitable for pilot deployments and evaluation environments.

🏆 Recommended — Production
🖥️ 8+ CPU Cores
💾 32–64 GB RAM
📀 200 GB Storage
🐳 Docker / Linux

Supports 200+ stations and full AI inference workload.

Request a Guided Demo → Try Live Demo ↗ Development Roadmap

Learn & Benchmark

Guides, Glossary & Comparisons

Deep-dive resources for manufacturing engineers and operations leaders — from OEE calculation to MES deployment strategy.