wrenai
by
canner

Description: ⚡️ GenBI (Generative BI) queries any database in natural language, generates accurate SQL (Text-to-SQL), charts (Text-to-Chart), and AI-powered business intelligence in seconds.

View canner/wrenai on GitHub ↗

Summary Information

Updated 2 hours ago
Added to GitGenius on August 31st, 2025
Created on March 13th, 2024
Open Issues/Pull Requests: 285 (+0)
Number of forks: 1,543
Total Stargazers: 14,464 (+1)
Total Subscribers: 90 (+0)
Detailed Description

WrenAI, developed by Canner, is an open-source GenBI (Generative BI) agent that lets users query their databases in plain natural language. Instead of writing SQL by hand, users ask questions and WrenAI generates the corresponding SQL (Text-to-SQL), renders charts from the results (Text-to-Chart), and produces AI-generated summaries and insights. The core philosophy centers on making data accessible to non-technical users while keeping the generated queries accurate and auditable.

The repository contains the components needed to run the full stack: a web UI, an AI service that orchestrates the LLM pipelines, and a semantic engine. The semantic layer is a crucial aspect of WrenAI: users model their schema (tables, relationships, and business terminology) once, and that context is retrieved and injected into the LLM's prompt so the generated SQL references real tables and columns rather than hallucinated ones. WrenAI connects to a wide range of data sources, including PostgreSQL, MySQL, BigQuery, and DuckDB, and supports multiple LLM providers, from hosted APIs such as OpenAI to locally served models via Ollama. The whole stack is typically deployed with Docker Compose, streamlining setup for users unfamiliar with the intricacies of LLM deployment.
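The schema-grounded Text-to-SQL idea behind WrenAI can be pictured as a simple prompt-assembly step. The sketch below is illustrative only (it is not WrenAI's actual code, and the prompt wording and helper name are assumptions): retrieved schema context is injected into the LLM prompt so the model can only reference real tables and columns.

```python
# Illustrative sketch of schema-grounded Text-to-SQL prompting (not WrenAI's
# actual implementation): semantic context retrieved for a question is
# injected into the LLM prompt ahead of the question itself.

def build_text_to_sql_prompt(question: str, schema_snippets: list[str]) -> str:
    """Assemble an LLM prompt from retrieved schema context and a question."""
    context = "\n".join(f"- {snippet}" for snippet in schema_snippets)
    return (
        "You are a SQL generator. Use only the tables and columns below.\n"
        f"Schema context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer with a single SQL statement."
    )

prompt = build_text_to_sql_prompt(
    "What were total sales per region last quarter?",
    ["orders(id, region, amount, ordered_at)",
     "regions(id, name)"],
)
print(prompt)
```

The prompt string would then be sent to whichever LLM provider is configured; grounding the model in the modeled schema is what pushes Text-to-SQL accuracy up.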

WrenAI isn't just a query generator; it offers a user-friendly web interface. The UI provides a chat-style experience in which users ask questions in natural language, inspect the generated SQL alongside an explanation of how it answers the question, and refine results with follow-up questions. Results can be visualized as charts, and useful queries can be saved for reuse, letting a one-off question grow into a shared, repeatable piece of business intelligence.

Beyond the web interface, WrenAI exposes an API, allowing developers to embed GenBI capabilities into their own applications and workflows. This provides programmatic access to question answering and SQL generation, opening up possibilities for building custom AI-powered data tools on top of databases an organization already runs. The API is designed to be relatively simple to use, facilitating integration for developers with varying levels of experience.
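As a hedged sketch of what such an integration might look like, the snippet below builds (but does not send) an HTTP request that submits a natural-language question to a locally running service. The `/v1/asks` path, port `5555`, and `query` field are assumptions made for illustration, not a documented contract; consult the project's API documentation for the real endpoint names.

```python
# Hypothetical sketch of calling a locally running AI service over HTTP.
# Endpoint path, port, and payload field names are illustrative assumptions.
import json
from urllib import request

def ask_payload(question: str) -> bytes:
    """Serialize a natural-language question as a JSON request body."""
    return json.dumps({"query": question}).encode("utf-8")

def build_ask_request(base_url: str, question: str) -> request.Request:
    """Build (without sending) a POST to a hypothetical /v1/asks endpoint."""
    return request.Request(
        url=f"{base_url}/v1/asks",  # assumed endpoint path
        data=ask_payload(question),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ask_request("http://localhost:5555",
                        "Top 10 customers by revenue this year")
print(req.full_url)
```

Because the service runs locally, a client like this keeps both the question and the underlying data inside the organization's own infrastructure.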

The repository also includes extensive documentation and examples to help users get started, covering installation, connecting data sources, configuring LLM providers, and using the API. The project actively encourages community contributions, with clear guidelines for submitting bug reports, feature requests, and pull requests. Ongoing development focuses on improving SQL accuracy, expanding data source and LLM provider support, and enhancing the user interface based on community feedback. WrenAI represents a significant step toward democratizing access to data, letting anyone ask questions of a database without writing SQL.
