Introduction
The sovereign operating system for synthetic intelligence.
Luca is not a chatbot. It is a full-stack operating system designed to manage the lifecycle of synthetic intelligence on personal hardware. Unlike hosted SaaS offerings, where your data lives in a provider's cloud, Luca runs local-first: your data stays on your machine.
Local Execution
Runs Llama 3, Mistral, and other state-of-the-art models directly on your GPU or NPU.
Memory Graph
A persistent, encrypted vector database that grows across sessions.
Installation
Luca installs as a system daemon that manages the underlying inference runtime and exposes a standard local port for client connections.
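Because the daemon listens on a local port, any HTTP-capable client can talk to it. As a minimal sketch of the client side (the payload fields, the default model name, and the `build_completion_request` helper are illustrative assumptions, not Luca's documented API), a request body might be assembled like this:

```python
import json

# Hypothetical payload shape: the field names and default model are
# assumptions for illustration, not part of Luca's documented protocol.
def build_completion_request(prompt: str, model: str = "llama3") -> dict:
    """Assemble a JSON body a client might POST to the local daemon."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single response instead of a token stream
    }

payload = build_completion_request("Hello, Luca")
print(json.dumps(payload))
```

A real client would then POST this body to whatever port the daemon advertises.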
Prerequisites
- macOS 14.0+ (Apple Silicon) or Windows 11 (WSL2)
- Minimum 16 GB of RAM (unified memory on Apple Silicon)
Quick Start
Get up and running with the Luca Kernel in seconds using the installer script.
System Architecture
The Luca OS is composed of three primary layers designed to separate concerns between inference, memory, and interaction.
1. The Kernel
The orchestrator. Manages process allocation, model loading/unloading, and hardware resource limits.
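The load/unload behavior can be pictured as a small resource manager. The sketch below is purely illustrative (the class, the two-slot limit, and the model names are invented; Luca's actual Kernel is not shown here): it keeps a fixed number of models resident and evicts the least recently used one when a new model is requested.

```python
from collections import OrderedDict

# Illustrative sketch only: the 2-slot limit and model names are invented
# for the example, not taken from Luca's Kernel.
class KernelSketch:
    """Keeps at most `max_loaded` models resident, evicting the LRU one."""

    def __init__(self, max_loaded: int = 2):
        self.max_loaded = max_loaded
        self.loaded: "OrderedDict[str, bool]" = OrderedDict()

    def load(self, model: str) -> list:
        """Ensure `model` is resident; return the models evicted to fit it."""
        evicted = []
        if model in self.loaded:
            self.loaded.move_to_end(model)  # mark as most recently used
            return evicted
        while len(self.loaded) >= self.max_loaded:
            victim, _ = self.loaded.popitem(last=False)  # evict the LRU model
            evicted.append(victim)
        self.loaded[model] = True
        return evicted

kernel = KernelSketch(max_loaded=2)
kernel.load("llama3")
kernel.load("mistral")
evicted = kernel.load("phi3")  # exceeds the slot limit
print(evicted)                 # → ['llama3']
```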
2. The Cortex
The inference engine wrapper around `llama.cpp` and `MLX`.
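A wrapper like this typically routes requests to the backend best suited to the host. How the real Cortex chooses is not documented here, so the `pick_backend` function below is an assumption that simply illustrates the idea: MLX on Apple Silicon, `llama.cpp` everywhere else.

```python
import platform

# Hypothetical routing logic: the real Cortex's selection rules are not
# documented; this only illustrates the "wrapper" concept.
def pick_backend(system: str, machine: str) -> str:
    """Choose an inference backend for the given platform."""
    if system == "Darwin" and machine == "arm64":
        return "MLX"        # Apple Silicon: use the MLX runtime
    return "llama.cpp"      # everywhere else: portable GGUF inference

backend = pick_backend(platform.system(), platform.machine())
print(backend)
```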
Skill Building
Skills are sandboxed functions that Luca can "call" when needed.
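One common pattern for this kind of function calling is a registry: skills register themselves by name, and the runtime will only dispatch calls to names it knows. The decorator, registry, and `call_skill` dispatcher below are assumptions for illustration, not Luca's actual skill API.

```python
# Illustrative sketch: the `skill` decorator, SKILLS registry, and
# `call_skill` dispatcher are invented names, not Luca's real API.
SKILLS = {}

def skill(fn):
    """Register a function so the runtime can call it by name."""
    SKILLS[fn.__name__] = fn
    return fn

@skill
def add(a: int, b: int) -> int:
    return a + b

def call_skill(name: str, **kwargs):
    """Dispatch a model-requested call to a registered skill only."""
    if name not in SKILLS:
        raise KeyError(f"unknown skill: {name}")
    return SKILLS[name](**kwargs)

print(call_skill("add", a=2, b=3))  # → 5
```

Restricting dispatch to the registry is what makes the call boundary enforceable: the model can only name skills, never arbitrary code.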