Getting Started

New to local AI? Start here.

Platform Guides

Windows, macOS, Linux, Android, iOS, Docker.

Use Cases

RAG, chatbots, code assistants, voice AI, and more.

Enterprise Local AI: Deploying LLMs for Your Organization (advanced, 50 minutes)

Deploy local LLMs for enterprise use. Covers architecture patterns, vLLM with NVIDIA GPUs, multi-user interfaces with LibreChat, security hardening, compliance considerations, and cost analysis.

Fine-Tuning Your Own Local Model: From Data to Deployment (advanced, 60 minutes)

Learn when and how to fine-tune a local LLM. Covers dataset preparation, QLoRA training with Unsloth, evaluation, GGUF export, and deployment with Ollama.

Local AI Code Assistant: Setting Up Copilot Without the Cloud (intermediate, 30 minutes)

Set up a fully local AI code assistant using Continue with Ollama in VS Code, Tabby for self-hosted completions, and Aider for terminal-based coding. Includes model benchmarks and configuration.

Local Image Generation: Stable Diffusion, FLUX, and ComfyUI Guide (intermediate, 45 minutes)

Generate images locally with Stable Diffusion, FLUX, and ComfyUI. Covers setup, ControlNet, LoRAs, VRAM management, prompt engineering, and workflow optimization.

Building a Local RAG Chatbot: Documents, Embeddings, and Retrieval (intermediate, 45 minutes)

Build a fully local RAG (Retrieval-Augmented Generation) chatbot that answers questions about your documents. Covers architecture, chunking strategies, embedding models, vector databases, and prompt engineering.
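The chunking step that guide covers can be sketched in a few lines. This is a minimal sliding-window splitter (fixed chunk size with overlap), a common baseline strategy rather than the guide's exact implementation; the sizes are illustrative defaults:

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into overlapping chunks for embedding.

    Overlap preserves context that would otherwise be cut off
    at chunk boundaries, at the cost of some duplicated text.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # last window already reached the end of the text
    return chunks

# Example: a 1200-character document with 500-char chunks, 100-char overlap
doc = "".join(str(i % 10) for i in range(1200))
parts = chunk_text(doc)
print(len(parts))  # -> 3 chunks: two of 500 chars, one of 400
```

Each chunk would then be embedded and stored in the vector database; smaller chunks retrieve more precisely, while larger ones carry more context into the prompt.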

Building a Local Voice Assistant: Whisper + LLM + TTS (advanced, 60 minutes)

Build a fully local voice assistant pipeline with speech-to-text (Whisper.cpp), an LLM for processing (Ollama), and text-to-speech (Piper/Kokoro). Includes latency optimization and wake word detection.
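The three-stage pipeline that card describes (speech-to-text, then an LLM, then text-to-speech) reduces to function composition. A minimal sketch with the engines stubbed out as placeholder callables; in the real setup, Whisper.cpp, Ollama, and Piper/Kokoro would sit behind the same interfaces:

```python
def make_pipeline(stt, llm, tts):
    """Compose speech-to-text, LLM, and text-to-speech stages.

    Each stage is a plain callable, so any engine can be swapped
    in without changing the surrounding pipeline code.
    """
    def assistant(audio: bytes) -> bytes:
        text = stt(audio)   # audio in  -> transcript
        reply = llm(text)   # transcript -> response text
        return tts(reply)   # response   -> audio out
    return assistant

# Placeholder stages standing in for the real engines
stt = lambda audio: "what time is it"
llm = lambda text: f"You asked: {text}"
tts = lambda text: text.encode("utf-8")

assistant = make_pipeline(stt, llm, tts)
print(assistant(b"\x00\x01"))  # b'You asked: what time is it'
```

Because each stage runs sequentially, end-to-end latency is the sum of the three stages, which is why the guide's latency optimization matters.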

Tool Guides

Deep dives into specific tools.