Flexible GraphRAG initial version

Flexible GraphRAG on GitHub

Flexible GraphRAG is an open-source Python platform supporting document processing, automatic knowledge graph building, schema support, RAG and GraphRAG setup, hybrid search (full-text, vector, graph), and AI Q&A query capabilities.

X.com: Steve Reiner (@stevereiner) | LinkedIn: Steve Reiner

Has an MCP server, a FastAPI backend, Docker support, and Angular, React, and Vue UI clients.

Built with LlamaIndex, which provides abstractions that allow multiple vector databases, search engines, graph databases, and LLMs to be supported.

Currently supports:

Graph Databases: Neo4j, Kuzu

Vector Databases: Neo4j, Qdrant, Elasticsearch, OpenSearch

Search Databases/Engines: Elasticsearch, OpenSearch, LlamaIndex built-in BM25

LLMs: OpenAI, Ollama

Data Sources: File System, Hyland Alfresco, CMIS
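The pluggable-backend idea behind the list above can be sketched in plain Python. This is an illustrative pattern only (the class and function names are hypothetical, not Flexible GraphRAG's actual code): the pipeline codes against one interface, and configuration decides which concrete backend (Neo4j, Qdrant, Elasticsearch, etc.) is constructed, which is essentially what the LlamaIndex adapter abstractions provide.

```python
from abc import ABC, abstractmethod

class VectorStore(ABC):
    """Minimal stand-in for a LlamaIndex-style vector store interface."""
    @abstractmethod
    def add(self, doc_id: str, embedding: list[float]) -> None: ...
    @abstractmethod
    def count(self) -> int: ...

class InMemoryStore(VectorStore):
    """Toy backend; a real registry would map to Qdrant, Neo4j, etc."""
    def __init__(self) -> None:
        self._rows: dict[str, list[float]] = {}
    def add(self, doc_id: str, embedding: list[float]) -> None:
        self._rows[doc_id] = embedding
    def count(self) -> int:
        return len(self._rows)

def build_store(backend: str) -> VectorStore:
    # Hypothetical factory: in the real system the backend name would
    # come from configuration and select a LlamaIndex adapter.
    registry = {"memory": InMemoryStore}
    return registry[backend]()

store = build_store("memory")
store.add("doc-1", [0.1, 0.2])
print(store.count())  # → 1
```

Swapping databases then becomes a configuration change rather than a code change, which is why adding new backends (see the 8/15/25 check-in below) is relatively cheap.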

A configurable hybrid search system that optionally combines vector similarity search, full-text search, and knowledge graph GraphRAG on documents processed (with Docling) from multiple data sources (filesystem, Alfresco, CMIS, etc.). It has both a FastAPI backend with REST endpoints and a Model Context Protocol (MCP) server for MCP clients like Claude Desktop. It also has simple Angular, React, and Vue UI clients (which use the REST APIs of the FastAPI backend) for interacting with the system.

  • Hybrid Search: Combines vector embeddings, BM25 full-text search, and graph traversal for comprehensive document retrieval

  • Knowledge Graph GraphRAG: Extracts entities and relationships from documents to create graphs in graph databases for graph-based reasoning

  • Configurable Architecture: LlamaIndex provides abstractions for vector databases, graph databases, search engines, and LLM providers
  • Multi-Source Ingestion: Processes documents from filesystems, CMIS repositories, and Alfresco systems
  • FastAPI Server with REST API: REST endpoints for document ingesting, hybrid search, and AI Q&A queries
  • MCP Server: Provides MCP clients like Claude Desktop with tools for document and text ingesting, hybrid search, and AI Q&A queries
  • UI Clients: Angular, React, and Vue UI clients support choosing the data source (filesystem, Alfresco, CMIS, etc.), ingesting documents, performing hybrid searches and AI Q&A Queries.
  • Deployment Flexibility: Supports both standalone and Docker deployment modes. Docker infrastructure provides modular database selection via docker-compose includes – vector, graph, and search databases can be included or excluded with a single comment. Choose between hybrid deployment (databases in Docker, backend and UIs standalone) or full containerization.
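One common way to merge results from the separate retrievers (vector, BM25 full-text, graph) into a single ranking is reciprocal rank fusion. The sketch below is illustrative of hybrid search in general, not a claim about the specific fusion method Flexible GraphRAG uses:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each retriever contributes 1/(k + rank)
    per document; documents ranked well by several retrievers win."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from two of the retrievers:
vector_hits = ["doc-a", "doc-b", "doc-c"]
bm25_hits = ["doc-b", "doc-a", "doc-d"]
print(rrf_fuse([vector_hits, bm25_hits]))
```

Documents that appear near the top of multiple result lists (here `doc-a` and `doc-b`) are promoted above documents found by only one retriever, which is the core benefit of combining retrieval modes.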

Check-ins 8/5/25 through 8/9/25 provided:
1. Added LlamaIndex support, configurability, KG building, GraphRAG, hybrid search, AI Q&A query, and Angular, React, and Vue UIs. Based on CMIS GraphRAG UI and CMIS GraphRAG, which didn't use LlamaIndex (they used the neo4j-graphrag Python package)
2. Also added a FastMCP based MCP Server that uses the FastAPI server.

Check-in today 8/15/25 provided:

Added: multiple-database support, Docker support, schema support, and Ollama support

  1. Leveraging LlamaIndex abstractions, added support for more search, vector, and graph databases (beyond the previous Neo4j and built-in BM25). Now supported:
    Neo4j graph database, or Neo4j graph and vectors (also Neo4j browser / console)
    Elasticsearch search, or search and separate vector (also Kibana dashboard)
    OpenSearch search, or search+vector hybrid search (also OpenSearch Dashboards)
    Qdrant vector database (also its dashboard)
    Kuzu graph database support (also Kuzu explorer)
    LlamaIndex built-in local BM25 full text search
    (Note: LlamaIndex supports additional vector and graph databases which we could support)
  2. Added composable Docker support
    a. As a way to run search, graph, and vector databases, plus dashboards and Alfresco
    (comment out the includes for what you have externally or don't use)
    b. Databases together with Flexible GraphRAG backend, and Angular, React, and Vue UIs
  3. Added schema support for Neo4j (optional) and Kuzu (required). Supports default and custom
    schemas that you configure in your environment (.env file, etc.)
  4. Added Ollama support in addition to OpenAI. Tested through Ollama with gpt-oss:20b, llama3.1, and llama3.2.
    (Note: LlamaIndex supports additional LLMs which we could support)
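The composable Docker setup in item 2 works by toggling top-level Compose `include` entries. A hedged sketch of the idea (the include file names and service names here are illustrative, not the repository's actual paths):

```yaml
# docker-compose.yml -- illustrative only; actual file names may differ.
# Comment an include line out to drop that database (e.g. when you run
# it externally); leave it in to have Compose start it.
include:
  - docker-compose.neo4j.yml        # graph (and optionally vectors) + browser
  # - docker-compose.qdrant.yml     # excluded: using an external Qdrant
  - docker-compose.opensearch.yml   # search + vector hybrid + dashboards

services:
  backend:
    build: ./flexible-graphrag      # hypothetical build context
    env_file: .env                  # schema/LLM configuration lives here
```

This matches the two modes described above: include only the databases for a hybrid deployment (backend and UIs standalone), or also define the backend and UI services for full containerization.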