New Tabbed UI for Flexible GraphRAG (and Flexible RAG)

See Flexible GraphRAG Initial Version Blog Post

Flexible GraphRAG on GitHub

X.com Steve Reiner @stevereiner LinkedIn Steve Reiner LinkedIn

The Angular, React, and Vue frontend clients now organize the different stages into separate tabs so each has room. All three can be switched between dark and light themes using the slider in the top right corner. New functionality beyond the old UI includes a file upload dialog, drag-and-drop upload, a table with file processing progress bars, and a new chat UI. Note that the GitHub readme.md page has collapse/expand sections with screenshots of both dark and light themes for React, and shows only the light theme for Angular and Vue.

Sources Tab


Allows you to choose files to upload from the file system, or to specify a file or folder path in Alfresco or CMIS repositories. For filesystem files you can now use a file upload dialog or drag and drop files onto the drop area in the Sources tab view.

For Alfresco and CMIS there is currently no file picker UI (only a field for a folder or file path). Note that the file path is a basic CMIS-style path like /Shared/GraphRAG/cmispress.txt. You also specify a username, password, and base URL; the prefilled defaults are http://localhost:8080/alfresco for Alfresco and http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom for CMIS.

You then click on “Configure Processing”.


Processing Tab

Here you can modify which files get processed by selecting or unselecting file checkboxes, remove a file from the processing list using the x on its row, or use the Remove Selected button.
Then click on Start Processing to process the selected files.
There is an overall progress bar as well as per-file progress bars. Note that currently all files are processed as one batch in the backend, so the per-file progress bars will show the same status.
You can cancel processing with the Cancel button.

Search Tab

Here you can do a hybrid search (full-text + vector RAG + GraphRAG, or full-text + vector RAG, depending on configuration). This gives you a traditional results list. For now, ignore the scores and extra results and just check the result order.
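Hybrid search systems commonly merge the per-retriever rankings with a fusion step such as Reciprocal Rank Fusion (RRF). The sketch below is illustrative of that general technique, not Flexible GraphRAG's actual fusion code; the document IDs and per-retriever rankings are made up.

```python
from collections import defaultdict

def rrf_fuse(ranked_lists, k=60):
    """Combine several best-first ranked lists with Reciprocal Rank Fusion:
    a document's fused score is the sum of 1 / (k + rank) over every list
    it appears in."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-retriever rankings for the same query
fulltext = ["doc2", "doc1", "doc5"]
vector   = ["doc1", "doc3", "doc2"]
graph    = ["doc1", "doc2", "doc4"]

print(rrf_fuse([fulltext, vector, graph]))  # doc1 and doc2 lead the fused list
```

Documents that rank well across several retrievers rise to the top, which is why results from a single retriever can look out of order in the fused list.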

The Q&A Query lets you ask a question in a conversational style. This is an AI query using the configured LLM and the information submitted in the Processing tab (held in full-text, vector, and graph “memory”).

Chat Tab

This is a traditional chat-style UI allowing you to enter multiple conversational Q&A queries (AI queries like the one-at-a-time query in the Search tab). You hit Enter or click the arrow button to submit a query. You can also use Shift+Enter to get an extra new line in your question. The chat view area displays a history of questions and answers, which you can clear with the Clear History button.

Flexible RAG

I used Flexible RAG in the title to indicate that Flexible GraphRAG can be configured to be just a RAG system. This would still have the flexibility that the LlamaIndex abstractions provide for plugging in different search engines/databases, vector databases, and LLMs. You still get the Angular, React, and Vue frontends, MCP server support, a FastAPI backend, and Docker support. You could configure just a search engine. Or you could configure just a graph database for auto-building knowledge graphs using the configurable schema support.

For RAG configuration:
Flexible GraphRAG can be set up to do RAG only, without the GraphRAG (see env-sample.txt and set up your environment in .env, etc.):

  • Set SEARCH_DB and SEARCH_DB_CONFIG for elasticsearch, opensearch, or bm25
  • Set VECTOR_DB and VECTOR_DB_CONFIG for neo4j, qdrant, elasticsearch, or opensearch
  • Set GRAPH_DB to none and ENABLE_KNOWLEDGE_GRAPH=false.
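As a rough sketch of how a backend might interpret those environment variables, the helper below checks for the RAG-only combination described above. The function name and defaults are illustrative assumptions; only the variable names and values come from the settings listed above.

```python
import os

def rag_only_mode() -> bool:
    """Illustrative check: RAG-only means a search and/or vector backend is
    configured while the graph side is disabled (hypothetical helper, not
    Flexible GraphRAG's actual code)."""
    graph_db = os.environ.get("GRAPH_DB", "none").lower()
    kg_enabled = os.environ.get("ENABLE_KNOWLEDGE_GRAPH", "false").lower() == "true"
    has_search = os.environ.get("SEARCH_DB", "").lower() in {"elasticsearch", "opensearch", "bm25"}
    has_vector = os.environ.get("VECTOR_DB", "").lower() in {"neo4j", "qdrant", "elasticsearch", "opensearch"}
    return graph_db == "none" and not kg_enabled and (has_search or has_vector)

# Example .env-style settings for a RAG-only setup
os.environ.update({"SEARCH_DB": "bm25", "VECTOR_DB": "qdrant",
                   "GRAPH_DB": "none", "ENABLE_KNOWLEDGE_GRAPH": "false"})
print(rag_only_mode())  # True
```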

Server Monitoring and Management UI

Basically, you can use the Docker setup and get a docker compose that runs all of the following at the same time (or a subset, by commenting out a compose include) without having to bring these up individually: the Alfresco docker compose (which has Share and ACA), Neo4j docker (which has a console URL), the Kuzu API server (not used; Kuzu is used embedded), Kuzu Explorer, Qdrant (which has a dashboard), Elasticsearch, the Elasticsearch Kibana dashboard, and OpenSearch (which has an OpenSearch Dashboards URL).

So you can set up a browser window with tabs for all these dashboards, Alfresco Share / ACA, and the Neo4j console. This is your monitoring and management UI.

You can use the Neo4j console, Elasticsearch Kibana, the Qdrant dashboard, and OpenSearch Dashboards to delete full-text indexes (Elasticsearch, OpenSearch), delete vector indexes (Qdrant, Neo4j, Elasticsearch, OpenSearch), and delete nodes and relationships (Neo4j and Kuzu consoles).
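The same index cleanup can also be scripted: Elasticsearch and OpenSearch both expose index deletion as a `DELETE /<index-name>` REST call. The sketch below only builds the request without sending it; the index name is a hypothetical example, not one the project necessarily creates.

```python
import urllib.request

def delete_index_request(base_url: str, index: str) -> urllib.request.Request:
    """Build (but do not send) an index-deletion request; Elasticsearch and
    OpenSearch both delete an index via DELETE /<index-name>."""
    return urllib.request.Request(f"{base_url}/{index}", method="DELETE")

# "flexible-graphrag" is a hypothetical index name for illustration
req = delete_index_request("http://localhost:9200", "flexible-graphrag")
print(req.get_method(), req.full_url)  # DELETE http://localhost:9200/flexible-graphrag
# To actually send it (requires a running server):
# urllib.request.urlopen(req)
```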

Flexible GraphRAG initial version

Flexible GraphRAG on GitHub

Flexible GraphRAG is an open source python platform supporting document processing, Knowledge Graph auto-building, Schema support, RAG and GraphRAG setup, hybrid search (fulltext, vector, graph), and AI Q&A query capabilities.

X.com Steve Reiner @stevereiner LinkedIn Steve Reiner LinkedIn

Has an MCP server, FastAPI backend, Docker support, and Angular, React, and Vue UI clients.

Built with LlamaIndex, which provides abstractions that allow multiple vector, search, and graph databases, and multiple LLMs, to be supported.

Currently supports:

Graph Databases: Neo4j, Kuzu

Vector Databases: Neo4j, Qdrant, Elasticsearch, OpenSearch

Search Databases/Engines: Elasticsearch, OpenSearch, LlamaIndex built-in BM25

LLMs: OpenAI, Ollama

Data Sources: File System, Hyland Alfresco, CMIS

A configurable hybrid search system that optionally combines vector similarity search, full-text search, and knowledge graph GraphRAG on documents processed (with Docling) from multiple data sources (filesystem, Alfresco, CMIS, etc.). It has both a FastAPI backend with REST endpoints and a Model Context Protocol (MCP) server for MCP clients like Claude Desktop. It also has simple Angular, React, and Vue UI clients (which use the REST APIs of the FastAPI backend) for interacting with the system.

  • Hybrid Search: Combines vector embeddings, BM25 full-text search, and graph traversal for comprehensive document retrieval
  • Knowledge Graph GraphRAG: Extracts entities and relationships from documents to create graphs in graph databases for graph-based reasoning
  • Configurable Architecture: LlamaIndex provides abstractions for vector databases, graph databases, search engines, and LLM providers
  • Multi-Source Ingestion: Processes documents from filesystems, CMIS repositories, and Alfresco systems
  • FastAPI Server with REST API: FastAPI server with REST API for document ingesting, hybrid search, and AI Q&A query
  • MCP Server: MCP server that provides MCP clients like Claude Desktop with tools for document and text ingesting, hybrid search, and AI Q&A query.
  • UI Clients: Angular, React, and Vue UI clients support choosing the data source (filesystem, Alfresco, CMIS, etc.), ingesting documents, performing hybrid searches and AI Q&A Queries.
  • Deployment Flexibility: Supports both standalone and Docker deployment modes. Docker infrastructure provides modular database selection via docker-compose includes – vector, graph, and search databases can be included or excluded with a single comment. Choose between hybrid deployment (databases in Docker, backend and UIs standalone) or full containerization.

Check-ins 8/5/25 through 8/9/25 provided:
1. Added LlamaIndex support, configurability, KG building, GraphRAG, hybrid search, AI Q&A query, and Angular, React, and Vue UIs. Based on CMIS GraphRAG UI and CMIS GraphRAG, which didn’t use LlamaIndex (they used the neo4j-graphrag Python package)
2. Also added a FastMCP based MCP Server that uses the FastAPI server.

Check-in today 8/15/25 provided:

Added: Multiple Databases Support, Docker, Schemas, and Ollama support

  1. Leveraging LlamaIndex abstractions, added support for more search, vector and graph databases (beyond previous Neo4j, built-in BM25). Now support:
    Neo4j graph database, or Neo4j graph and vectors (also Neo4j browser / console)
    Elasticsearch search, or search and separate vector (also Kibana dashboard)
    OpenSearch search, or search+vector hybrid search (also OpenSearch Dashboards)
    Qdrant vector database (also its dashboard)
    Kuzu graph database support (also Kuzu explorer)
    LlamaIndex built-in local BM25 full text search
    (Note: LlamaIndex supports additional vector and graph databases which we could support)
  2. Added composable Docker support
    a. As a way to run search, graph, and vector databases, plus dashboards, and Alfresco
    (comment out the includes for what you have externally or don’t use)
    b. Databases together with Flexible GraphRAG backend, and Angular, React, and Vue UIs
  3. Added Schema support for Neo4j (optional), and Kuzu (needed). Support default and custom
    schemas you configure in your environment (.env file, etc.)
  4. Added Ollama support in addition to OpenAI. Tested through Ollama with gpt-oss:20b, llama3.1, and llama3.2.
    (Note: LlamaIndex supports additional LLMs which we could support)

Python-Alfresco-MCP-Server 1.1.0 released

Video: Python-Alfresco-MCP-Server with Claude Desktop and MCP Inspector
https://x.com/stevereiner/status/1950418564562706655

Model Context Protocol Server (MCP) for Alfresco Content Services (Community and Enterprise)

This uses FastMCP 2.0 and Python-Alfresco-API

A full-featured MCP server for Alfresco covering search and content management. Features complete documentation, tests, examples,
and config samples for various MCP clients (Claude Desktop, MCP Inspector, plus references to configuring others).

Python-Alfresco-MCP-Server on Github
https://github.com/stevereiner/python-alfresco-mcp-server

Tools:
Basic search, advanced search, metadata search, and cmis query,
upload, download, check-in, checkout, cancel checkout,
create folder, folder browse, delete node,
get/set properties, repository info.

(With python-alfresco-api having full coverage of the 7 Alfresco REST APIs
you could customize with what tools you want from 191 in core, 29 in workflow,
3 in authentication, 1 in search, 1 in discovery, 18 in model, 1 search sql for solr)

Resources: repository info repeated

Prompts: search and analyze

Latest on Github 7/29/25

  • readme.md focuses on install with uv and uvx
  • docs\install_with_pip_pipx.md covers install with pip and pipx
  • sample configs for Claude Desktop (stdio) with uv, uvx, and pipx for Windows and Mac
  • sample configs for mcp-inspector with uv, uvx, pipx for both http and stdio
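Claude Desktop stdio configs follow the standard claude_desktop_config.json `mcpServers` format; a minimal uvx-based entry might look like the sketch below (the `"alfresco"` server name key is an arbitrary choice, and the repo's actual samples may include additional settings):

```json
{
  "mcpServers": {
    "alfresco": {
      "command": "uvx",
      "args": ["python-alfresco-mcp-server"]
    }
  }
}
```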

Python-Alfresco-MCP-Server v1.1.0 7/25/25

  • Refactored code into a single file per tool (organized in tools/search/,
    tools/core/, resources/, prompts/, utils/)
  • Changes for python-alfresco-api 1.1.1
  • Much better testing (143/143 passing)
  • Added uv support (latest readme and config samples also have uvx)
  • First version on PyPI.org

Python-Alfresco-MCP-Server v1.0 6/24/25
Changed to use FastMCP vs original code

Python-Alfresco-MCP-Server on PyPI
https://pypi.org/project/python-alfresco-mcp-server/
(On PyPI, so you don’t need the source; you still need Python and, optionally, the fast uv tool installed)

These can be used to test the install or run it once:
# Tests that installation worked
uv tool run python-alfresco-mcp-server --help
uvx python-alfresco-mcp-server --help # uvx is an alias for uv tool run

This install step may not be needed:
uv tool install python-alfresco-mcp-server

Python-Alfresco-API on Github
https://github.com/stevereiner/python-alfresco-api

Python-Alfresco-API on PyPI
https://pypi.org/project/python-alfresco-api/

X.com
https://x.com/stevereiner

LinkedIn
https://www.linkedin.com/in/steve-reiner-abbb5320/

Python-Alfresco-API Updated

This is a complete Python client package for developing Python code and apps for Alfresco. It supports all 7 Alfresco REST APIs: Core, Search, Authentication, Discovery, Model, Workflow, and Search SQL (Solr admin). It has event support (ActiveMQ or Event Gateway). The project has extensive documentation, examples, and tests.

See Python-Alfresco-MCP-Server, a Model Context Protocol (MCP) server that uses Python-Alfresco-API.

https://github.com/stevereiner/python-alfresco-api

https://pypi.org/project/python-alfresco-api

You need Python 3.10+ installed.

This can be used to install:

pip install python-alfresco-api

The released v1.1.1 version goes well beyond the previous 1.0.x version.

It has a generated, well-organized hierarchical structure for the higher-level clients (1.0.x only had 7 wrapper files). It’s generated from the low-level “raw clients” produced by openapi-python-client.

Pydantic v2 models are now used in the high-level clients. Hopefully in v1.2 the low-level clients will use them too. This can be done by configuring the openapi-python-client generator with templates. Some things need to be worked out, so no guarantees. This would simplify things and avoid model conversions.

Added utilities for upload, download, versioning, searching, etc. Using the utilities reduces the amount of code you need for these operations.

A well-organized hierarchical structure of linked md docs for the high-level client APIs and models is also generated.

Documentation now has diagrams for overall architecture, model levels, and client type.

The readme now covers how to install an Alfresco Community docker from GitHub, in case you don’t already have an Enterprise or Community version of Alfresco Content Services. Also see Hyland Alfresco.

Alfresco GenAI Semantic project updated: now adds regular Alfresco tags, uses local Wikidata and DBpedia entity recognizers

The Alfresco GenAI Semantic GitHub project now adds regular Alfresco tags when performing auto-tagging while enhancing documents with links to Wikidata and DBpedia. Semantic entity linking info is kept in 3 parallel multi-value properties (labels, links, super type lists) in the Wikidata and DBpedia custom aspects. The label values are used for the tag labels.

I switched to a local, private Wikidata recognizer. The spaCy-entity-linker Python library is used for getting Wikidata entity links without having to call a public service API. It was created before spaCy had its own entity linking system; it still has the advantage of not needing training. I had previously used the spaCyOpenTapioca library, which calls a public OpenTapioca web service API URL. Note that the URLs in the links properties do go to the public website wikidata.org if used in your application.

I also switched to a local, private DBpedia Spotlight entity recognizer running in a composed-in docker. The local URL of this docker is given to the DBpedia Spotlight for SpaCy library, which previously used a public Spotlight web service API URL by default. Note that the URLs in the links properties do go to the public website dbpedia.org if used in your application.

For documents with the Wikidata or DBpedia aspects added to them, tags will show up in the Alfresco clients (ACA, ADW, Share) after PDF rendition creation and after the alfresco-genai-semantic AI Listener gets responses from the REST APIs in the genai-stack. Shown below are tags in the ACA community content app:

Multi-value Wikidata aspect properties of a document in the ACA client are shown below with the view details expanded. The labels property repeats the labels of the tags. The links properties have URLs to wikidata.org. The super types property has zero (“”), one, or multiple comma-separated super types in Wikidata for each entity. These supertypes are Wikidata IDs (they become links once you add “http://www.wikidata.org/wiki/” in front of the IDs).
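The parallel-lists layout described above can be sketched in a few lines: one entry per entity in each of the labels, links, and super-types lists, with the supertype IDs joined into a comma-separated value. The function name, property layout, and entity data here are illustrative assumptions, not the project's actual code.

```python
def build_wikidata_properties(entities):
    """Build the three parallel multi-value lists from
    (label, wikidata_id, [supertype_ids]) tuples (hypothetical helper)."""
    base = "http://www.wikidata.org/wiki/"
    labels, links, supertypes = [], [], []
    for label, qid, super_ids in entities:
        labels.append(label)
        links.append(base + qid)
        # zero, one, or multiple supertype IDs as one comma-separated value
        supertypes.append(",".join(super_ids))
    return labels, links, supertypes

# Hypothetical entity data for illustration
entities = [
    ("Alfresco", "Q4723078", ["Q7397"]),
    ("Python", "Q28865", ["Q9143", "Q188860"]),
]
labels, links, supertypes = build_wikidata_properties(entities)
print(links[0])       # http://www.wikidata.org/wiki/Q4723078
print(supertypes[1])  # Q9143,Q188860
```

Keeping the three lists index-aligned is what lets the labels, links, and super-types columns in the ACA property view line up entity by entity.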

The same style DBpedia aspect multivalue properties are shown below in the ACA client. Note that the super types can be from Wikidata, DBpedia, Schema (schema.org), foaf, or DUL (ontologydesignpatterns.org DUL.owl), etc.