ArcadeDB (Apache 2.0) is a next-generation Multi-Model Database for Graphs, Documents, Key/Value, and Time-Series. It supports SQL, Cypher, Gremlin, and MongoDB queries.
Flexible GraphRAG is an open-source Python platform supporting Docling document processing, knowledge graph auto-building, schemas, 13 data sources, 10 vector databases, 7 graph databases, Elasticsearch and OpenSearch search engines, RAG, GraphRAG, hybrid search, and AI query/chat. It has React, Vue, and Angular frontends and a FastAPI backend, plus a FastMCP MCP server.
Added performance testing results to readme.md (6 docs with OpenAI for each graph database: Neo4j, Kuzu, FalkorDB).
Added docs/performance.md, which has performance testing results for each graph database with 2, 4, and 6 docs using OpenAI and 2 and 4 docs using Ollama.
Added support for the FalkorDB graph database (https://www.falkordb.com/, https://github.com/FalkorDB/falkordb). The abstractions of LlamaIndex, LlamaIndex's existing FalkorDB support, and the configurability of flexible-graphrag made this a relatively straightforward process; a sketch of the LlamaIndex side follows.
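A minimal sketch of plugging FalkorDB into a LlamaIndex property graph (an illustration, not flexible-graphrag's actual wiring; the URL scheme, port, and input folder are placeholders):

```python
# Sketch only: FalkorDB as a LlamaIndex property graph store.
# URL/port are placeholders; flexible-graphrag configures this via .env.
from llama_index.core import PropertyGraphIndex, SimpleDirectoryReader
from llama_index.graph_stores.falkordb import FalkorDBPropertyGraphStore

graph_store = FalkorDBPropertyGraphStore(url="falkor://localhost:6379")
documents = SimpleDirectoryReader("./docs").load_data()
index = PropertyGraphIndex.from_documents(
    documents,
    property_graph_store=graph_store,  # LlamaIndex handles the graph writes
)
```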
Added LlamaIndex DynamicLLMPathExtractor support (works with OpenAI, but not with Ollama currently).
Added config of the KG extractor type (simple, schema, or dynamic) to set which LlamaIndex extractor to use (SimpleLLMPathExtractor, SchemaLLMPathExtractor, or DynamicLLMPathExtractor).
Added config of MAX_TRIPLETS_PER_CHUNK and MAX_PATHS_PER_CHUNK; a sketch of these settings follows.
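For illustration, a hypothetical .env fragment for these settings (the extractor variable name and the values are assumptions; env-sample.txt has the actual names and defaults):

```
# Hypothetical .env fragment; check env-sample.txt for the real names/defaults.
KG_EXTRACTOR_TYPE=dynamic   # simple | schema | dynamic
MAX_TRIPLETS_PER_CHUNK=10   # LlamaIndex max_triplets_per_chunk (schema/dynamic)
MAX_PATHS_PER_CHUNK=10      # LlamaIndex max_paths_per_chunk (simple)
```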
Added readme.md info on system environment setup of Ollama for performance and parallelism (OLLAMA_CONTEXT_LENGTH, OLLAMA_NUM_PARALLEL, etc.); an example follows.
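For example (values are illustrative only; these must be set in the environment of the Ollama server process before it starts, and OLLAMA_CONTEXT_LENGTH requires a recent Ollama release):

```
# Illustrative values; set where the Ollama server runs, then restart it.
export OLLAMA_CONTEXT_LENGTH=8192   # larger context window for extraction prompts
export OLLAMA_NUM_PARALLEL=4        # number of requests served in parallel
```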
Added a new default schema with 35+ relationship combinations, more relations, and these entity types: PERSON, ORGANIZATION, TECHNOLOGY, PROJECT, LOCATION. The sketch below shows the general shape of such a schema.
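The shape follows LlamaIndex's SchemaLLMPathExtractor; the relations and allowed combinations shown are an illustrative subset, not the project's actual 35+ default combinations:

```python
# Illustrative schema shape for SchemaLLMPathExtractor; not the real default.
from typing import Literal
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor
from llama_index.llms.openai import OpenAI

entities = Literal["PERSON", "ORGANIZATION", "TECHNOLOGY", "PROJECT", "LOCATION"]
relations = Literal["WORKS_AT", "USES", "WORKS_ON", "LOCATED_IN"]
# Which relations are allowed from each entity type (illustrative subset).
validation_schema = {
    "PERSON": ["WORKS_AT", "USES", "WORKS_ON"],
    "ORGANIZATION": ["USES", "LOCATED_IN"],
    "PROJECT": ["USES"],
}

extractor = SchemaLLMPathExtractor(
    llm=OpenAI(model="gpt-4o-mini"),
    possible_entities=entities,
    possible_relations=relations,
    kg_validation_schema=validation_schema,
    strict=True,  # drop extracted triplets that fall outside the schema
)
```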
Fixed file upload dialog performance in all 3 frontends: React, Angular, and Vue (chosen files now display quickly after the dialog's OK).
The Angular, React, and Vue frontend clients now have the different stages organized into separate tabs so each has room. They can all be switched between a dark and a light theme using the slider at the top right corner. New functionality beyond the old UI includes a file upload dialog, drag-and-drop upload, a table with file processing progress bars, and a new Chat UI. Note that the GitHub readme.md page has collapse/expand sections for viewing screenshots of the dark and light themes for React, and only shows the light theme for Angular and Vue.
Sources Tab
Allows you to choose files to upload from the file system, or file or folder paths in Alfresco or CMIS repositories. For filesystem files you can now use a file upload dialog or drag and drop files onto the drop area in the Sources tab view.
Here you can modify which files get processed by selecting/unselecting file checkboxes, remove a file from the processing list using the x on its row, or use the Remove Selected button. Then click Start Processing to process the selected files. There is an overall progress bar as well as per-file progress bars. Note that currently all files are processed as one batch in the backend, so the per-file progress bars will show the same status. You can cancel processing with the Cancel button.
Search Tab
Here you can do a Hybrid Search (Fulltext + Vector RAG + GraphRAG) or (Fulltext + Vector RAG), depending on configuration. This gives you a traditional results list. For now, ignore the scores and extra results; just check the order of the results.
The Q&A Query lets you ask a question in a conversational style. This is an AI query using the configured LLM and the information submitted for processing (held in full-text, vector, and graph “memory”).
Chat Tab
This is a traditional chat-style UI allowing you to enter multiple conversational Q&A queries (AI queries like the one-at-a-time query in the Search tab). You hit Enter or click the arrow button to submit a query. You can also use Shift+Enter to add an extra new line to your question. The chat view area displays a history of questions and answers, which you can clear with the Clear History button.
Flexible RAG
I used Flexible RAG in the title to indicate that Flexible GraphRAG can be configured to be just a RAG system. This still has the flexibility that the LlamaIndex abstractions provide to plug in different search engines/databases, vector databases, and LLMs. You still get the Angular, React, and Vue frontends, MCP server support, a FastAPI backend, and Docker support. You could configure just a search engine. You could configure just a graph database for auto-building knowledge graphs using the configurable schema support.
For RAG configuration: Flexible GraphRAG can be set up to do RAG only, without the GraphRAG (see env-sample.txt and set up your environment in .env, etc.; a sketch follows the list):
Have SEARCH_DB and SEARCH_DB_CONFIG set for elasticsearch, opensearch, or bm25
Have VECTOR_DB and VECTOR_DB_CONFIG set up for neo4j, qdrant, elasticsearch, or opensearch
Have GRAPH_DB set to none and ENABLE_KNOWLEDGE_GRAPH=false.
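A minimal RAG-only .env sketch, assuming the variable names above (the config values shown are illustrative placeholders; copy the real entries and format from env-sample.txt):

```
# RAG-only sketch: full-text + vector search, no knowledge graph.
# Values/format are illustrative; see env-sample.txt for the real entries.
SEARCH_DB=elasticsearch
SEARCH_DB_CONFIG={"url": "http://localhost:9200"}
VECTOR_DB=qdrant
VECTOR_DB_CONFIG={"url": "http://localhost:6333"}
GRAPH_DB=none
ENABLE_KNOWLEDGE_GRAPH=false
```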
Server Monitoring and Management UI
Basically, you can use the Docker setup and get a docker compose that runs all of the following at the same time (or a subset, by commenting out a compose include; a sketch of that mechanism follows) without having to bring these up individually: the Alfresco docker compose (which has Share and ACA), Neo4j docker (which has a console URL), the Kuzu API server (not used; Kuzu is used embedded), Kuzu Explorer, Qdrant (which has a dashboard), Elasticsearch, the Elasticsearch Kibana dashboard, and OpenSearch (which has an OpenSearch Dashboards URL).
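As an illustration of the include mechanism (Docker Compose 2.20+), a hypothetical top-level compose file; the paths are placeholders, not the project's actual layout:

```yaml
# Hypothetical top-level compose file; paths are placeholders.
# Comment out an include to skip bringing up that stack.
include:
  - path: ./neo4j/compose.yaml
  - path: ./qdrant/compose.yaml
  - path: ./elasticsearch/compose.yaml
  # - path: ./opensearch/compose.yaml   # commented out: not started
```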
So you can set up a browser window with tabs for all these dashboards, Alfresco Share/ACA, and the Neo4j console. This is your monitoring and management UI.
You can use the Neo4j console, Kibana, the Qdrant dashboard, and OpenSearch Dashboards to delete full-text indexes (Elasticsearch, OpenSearch), delete vector indexes (Qdrant, Neo4j, Elasticsearch, OpenSearch), and delete nodes and relationships (Neo4j and Kuzu consoles).
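For instance, wiping all Neo4j nodes and relationships between test runs is a one-line Cypher statement, shown here via the Python driver (the URI and credentials are placeholders for your own setup):

```python
# Sketch: clear the Neo4j graph between runs. URI/credentials are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.run("MATCH (n) DETACH DELETE n")  # delete every node and relationship
driver.close()
```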