Shopfloor Copilot delivers AI manufacturing diagnostics using Retrieval-Augmented Generation (RAG) and a local large language model. Root cause analysis is grounded in your own SOPs, work instructions, and maintenance manuals — with zero cloud dependency and zero AI hallucinations.
RAG (Retrieval-Augmented Generation) diagnostics is an AI approach where the system answers questions by retrieving relevant content from your documents first, then generating an answer grounded in those documents. Unlike a general-purpose LLM, which answers from training data alone, a RAG system searches your specific knowledge base — SOPs, work instructions, maintenance records, safety data sheets — before generating any response.
The result: AI answers that reference your procedures, not generic internet knowledge. An operator asking "Why is line A02 showing reduced cycle time on station ST05?" receives a checklist derived from the maintenance manual for that specific machine type, not a generic suggestion.
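Conceptually, the retrieve-then-generate loop can be sketched in a few lines. This is an illustrative stand-in, not the product's implementation: simple keyword overlap replaces the embedding search a production vector store would use, and all document names and contents are hypothetical.

```python
# Minimal retrieve-then-generate sketch. Keyword (Jaccard) overlap stands
# in for embedding similarity so the flow is visible end to end.

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?").lower() for w in text.split()}

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[tuple[str, float]]:
    """Score each document by token overlap with the query."""
    q = tokenize(query)
    scored = []
    for name, text in docs.items():
        t = tokenize(text)
        score = len(q & t) / len(q | t) if q | t else 0.0
        scored.append((name, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

def answer(query: str, docs: dict[str, str]) -> str:
    """Generate an answer grounded only in retrieved documents."""
    hits = [(n, s) for n, s in retrieve(query, docs) if s > 0]
    if not hits:
        return "No relevant procedure found in the knowledge base."
    top, _ = hits[0]
    return f"Per {top}: {docs[top]}"

docs = {
    "WI-ST05-Assembly.pdf": "Torque spec for station ST05 assembly is 12 Nm.",
    "MM-A02-Cycle.pdf": "Reduced cycle time on A02: check conveyor belt tension.",
}
print(answer("What is the torque spec for station ST05 assembly?", docs))
```

The key property is in `answer`: the generation step only ever sees text that retrieval returned, which is what keeps responses tied to your procedures rather than the model's training data.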
The diagnostic pipeline runs in six steps, entirely on your server:
1. Root Cause Analysis: Analyses current station OEE, active alerts, and downtime history, then proposes ranked root-cause hypotheses with supporting evidence from your SOPs.
2. Operator Q&A: Natural-language question answering. Ask "What is the torque spec for station ST05 assembly?" and get a cited answer from your work instructions.
3. Document Ingestion: Ingest an unlimited number of documents (PDF, DOCX, TXT). ChromaDB stores and indexes all content locally.
4. Local LLM Inference: Powered by Ollama running locally. No API key needed, and no production data leaves your server. Compatible with Llama 3, Mistral, Gemma, and others.
5. Source Citations: Every AI answer includes document name, relevant excerpt, and similarity score, so operators can verify the source before acting.
6. Grounded Answers: Answers are restricted to retrieved document content. If no relevant document exists, the system says so; it does not fabricate procedures.
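The citation-and-refusal behaviour just described can be sketched as a small contract: every answer carries a document name, excerpt, and similarity score, and a query whose best match falls below a similarity floor is refused rather than guessed at. The vectors, names, and threshold below are illustrative, not the product's actual values.

```python
# Sketch of the citation + refusal contract. Toy 3-dimensional vectors
# stand in for real document embeddings; threshold is illustrative.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cite_or_refuse(query_vec, index, threshold=0.75):
    """Return a citation dict for the best match, or an explicit refusal."""
    best = max(index, key=lambda d: cosine(query_vec, d["vec"]))
    score = cosine(query_vec, best["vec"])
    if score < threshold:
        return {"answer": None,
                "note": "No relevant document found; not fabricating a procedure."}
    return {"answer": best["excerpt"],
            "citation": {"document": best["name"],
                         "excerpt": best["excerpt"],
                         "similarity": round(score, 3)}}

index = [
    {"name": "MM-ST05.pdf", "excerpt": "Check spindle lubrication weekly.",
     "vec": [0.9, 0.1, 0.0]},
    {"name": "SOP-QA-12.pdf", "excerpt": "Record torque values per shift.",
     "vec": [0.1, 0.9, 0.2]},
]
print(cite_or_refuse([0.95, 0.05, 0.0], index))  # cites MM-ST05.pdf
print(cite_or_refuse([0.0, 0.0, 1.0], index))    # refuses: below threshold
```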
Most AI diagnostic tools rely on cloud LLMs (GPT-4, Claude, Gemini). This means your production data — machine fault codes, OEE values, operator notes — is transmitted to external servers. For manufacturers in automotive (IATF 16949), aerospace (AS9100D), pharmaceutical (GxP), and defence (ITAR) sectors, this is a compliance and security problem.
Shopfloor Copilot solves this with a local-first AI architecture.
Any document in your existing quality management system (QMS) or maintenance system can be indexed:
Supported file formats: PDF, DOCX, TXT, Markdown. Files are ingested via the platform's document management interface.
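Before indexing, a document's extracted text is typically split into overlapping chunks sized for retrieval. The sketch below shows one simple chunking strategy; the chunk size, overlap, and function names are assumptions for illustration, not the platform's actual parameters.

```python
# Illustrative ingestion step: split extracted document text into
# overlapping chunks suitable for indexing in a local vector store.

def chunk_text(text: str, max_chars: int = 200, overlap: int = 40) -> list[str]:
    """Greedy fixed-size chunking with character overlap between chunks."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap preserves context across boundaries
    return chunks

manual = ("Station ST05 torque procedure. " * 20).strip()
chunks = chunk_text(manual)
print(len(chunks), "chunks; first 50 chars:", chunks[0][:50])
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighbouring chunk, so retrieval does not miss it.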
What is RAG (Retrieval-Augmented Generation) diagnostics?
RAG diagnostics means the AI's answers are grounded in your actual documents: SOPs, work instructions, maintenance manuals. When a machine fault occurs, the AI searches your document knowledge base for relevant procedures, then generates a step-by-step checklist based on those real documents instead of inventing one from generic training data. This eliminates AI hallucinations.
Does Shopfloor Copilot send production data to the cloud?
No. Shopfloor Copilot runs local LLM inference via Ollama on your own server. No data leaves your facility. No API calls to OpenAI, Anthropic, or any external service. Suitable for air-gapped environments and OT security requirements.
Which LLM models are supported?
Any model supported by Ollama: Llama 3 (8B, 70B), Mistral, Mixtral, Gemma, Phi-3, and others. The default configuration uses Llama 3 8B, which runs well on a server with 32 GB of RAM and no dedicated GPU.
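As an illustration, a grounded query to a locally running Ollama server needs nothing beyond the standard library. The endpoint and payload shape below follow Ollama's REST API (`/api/generate`); the model tag, prompt wording, and helper names are assumptions for the sketch.

```python
# Sketch of a grounded query against a local Ollama server.
# Requests go to localhost only; no data leaves the machine.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str, context_docs: list[str]) -> dict:
    """Assemble a grounded prompt: retrieved excerpts first, question last."""
    grounding = "\n\n".join(context_docs)
    return {
        "model": model,
        "prompt": f"Answer only from these documents:\n{grounding}\n\nQuestion: {prompt}",
        "stream": False,  # ask for one JSON object instead of a token stream
    }

def ask(model: str, prompt: str, context_docs: list[str]) -> str:
    """Send the payload to the local Ollama server and return its answer."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt, context_docs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

payload = build_payload("llama3", "Torque spec for ST05?",
                        ["WI-ST05: torque is 12 Nm."])
print(payload["model"], payload["stream"])
```

Note that the retrieved excerpts are injected into the prompt itself, which is how the grounding step and the local model connect.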
Request a live demo showing real-time fault analysis, operator Q&A with document citations, and local LLM inference — all running within a simulated factory environment.
Explore Platform →