📚 MCP Documentation Server (Version 1)

This project is an MCP (Model Context Protocol) Server designed to help fetch and search the latest documentation for libraries such as LangChain, OpenAI, and LlamaIndex.
It allows you to query documentation sources and retrieve clean, plain text using a combination of Google Serper API for search and BeautifulSoup for scraping.

This is Version 1 (In Progress) — future versions will expand with more features and libraries.


🚀 Features

  • 🔍 Search official docs for supported libraries (LangChain, OpenAI, LlamaIndex).
  • 🌐 Fetch and clean documentation text directly from web pages.
  • ⚙️ Exposed as an MCP Tool via server.py.
  • ⏱️ Handles timeouts gracefully to avoid breaking workflows.
  • 🔑 Configurable using .env with your Serper API Key.

📂 Project Structure

documentation/
├── services/
│   ├── scrape.py
│   └── search.py
├── tools/
│   └── get_docs.py
├── config.py
├── server.py
├── pyproject.toml
├── .env.example
└── .gitignore

📦 Setup & Installation

This project uses uv for dependency management.

1. Clone the repository

git clone <your-repo-url>
cd documentation

2. Create a virtual environment

uv venv

3. Activate the virtual environment

Linux/macOS:

source .venv/bin/activate

Windows (PowerShell):

.venv\Scripts\Activate

4. Install dependencies

uv sync

🔑 Environment Variables

Copy .env.example → .env and set your Serper API key:

SERPER_API_KEY="your_api_key_here"

You can get an API key from Serper.dev.
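
Inside the project, config.py is the natural place to read this variable. The snippet below is a minimal sketch assuming python-dotenv; the actual config.py may load or validate the key differently:

# config.py (sketch): load the Serper API key from .env into the process environment
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env in the project root

SERPER_API_KEY = os.getenv("SERPER_API_KEY")
if not SERPER_API_KEY:
    raise RuntimeError("SERPER_API_KEY is missing; copy .env.example to .env and set it")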


▶️ Running the MCP Server

Start the server with:

uv run --with mcp mcp run server.py

This runs the MCP server over stdio, making it available as a tool.
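
Under the hood, server.py registers get_docs with the MCP Python SDK. The snippet below is a minimal sketch of that wiring, assuming the FastMCP helper from the mcp package; the real server.py in this repo may be structured differently:

# server.py (sketch): register get_docs as an MCP tool and serve it over stdio
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("documentation")

@mcp.tool()
async def get_docs(query: str, library: str) -> str:
    """Search the docs of a supported library and return cleaned plain text."""
    # The real logic lives in tools/get_docs.py, which calls services/search.py
    # (Serper) and services/scrape.py (BeautifulSoup); see the sketch in "How It Works".
    ...

if __name__ == "__main__":
    mcp.run(transport="stdio")

Because the transport is stdio, MCP clients launch the server process themselves and talk to it over standard input/output, which is why no port or URL is needed.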


🛠 How It Works

  1. get_docs MCP tool
    • Input: query + library
    • Example: query="embedding models", library="openai"
    • Process:
      • Builds a site-specific query (site:platform.openai.com/docs embedding models)
      • Fetches results using Serper API
      • Scrapes and cleans text from found pages
    • Output: plain text documentation snippets (see the sketch after this list)
  2. Can be integrated into other MCP-compatible apps for automated documentation search & retrieval.
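
A minimal sketch of that flow is shown below. It assumes the requests and beautifulsoup4 packages and Serper's https://google.serper.dev/search endpoint; the function names, the site map, and the result count are illustrative, so the real services/search.py and services/scrape.py may differ:

import os
import requests
from bs4 import BeautifulSoup

# Illustrative mapping of library names to their documentation sites.
DOCS_SITES = {
    "langchain": "python.langchain.com/docs",
    "openai": "platform.openai.com/docs",
    "llama-index": "docs.llamaindex.ai",
}

def search_docs(query: str, library: str) -> list[str]:
    """Ask Serper for pages matching a site-restricted query and return result URLs."""
    site_query = f"site:{DOCS_SITES[library]} {query}"
    resp = requests.post(
        "https://google.serper.dev/search",
        headers={"X-API-KEY": os.environ["SERPER_API_KEY"]},
        json={"q": site_query, "num": 3},
        timeout=10,
    )
    resp.raise_for_status()
    return [hit["link"] for hit in resp.json().get("organic", [])]

def scrape_page(url: str) -> str:
    """Fetch a page and return its visible text, skipping pages that time out."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.Timeout:
        return ""  # tolerate slow pages instead of breaking the workflow
    return BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

def get_docs(query: str, library: str) -> str:
    """Combine search + scrape into the plain-text output the MCP tool returns."""
    pages = [scrape_page(url) for url in search_docs(query, library)]
    return "\n\n".join(text for text in pages if text)

# Example: get_docs("embedding models", "openai")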

📌 Notes

  • This is Version 1 — focused on core functionality.
  • Future improvements:
    • Support for more libraries/frameworks.
    • Improved result filtering & ranking.
    • Error handling & logging improvements.
    • Possible caching layer for repeated queries.

🐳 Docker Support

This project also provides a Dockerfile so you can containerize the MCP Documentation Server.

  • Why Docker?

    • Consistency: run the server with the same environment on any machine.
    • Isolation: dependencies are fully contained, no need to pollute your local Python environment.
    • Portability: easy to share or deploy on servers/CI pipelines.
    • Caching: faster rebuilds thanks to Docker layer caching.
  • When do you need it?

    • If you want to run the MCP server without installing uv or Python locally.
    • If you plan to deploy on a server or inside CI/CD pipelines.
    • If you want a reproducible environment across your team.
  • How to use it?

    1. Build the image:

      docker build -t docs-mcp:dev -f Dockerfile .
      
    2. Run with your .env file (containing SERPER_API_KEY):

      docker run --rm -it --env-file .env docs-mcp:dev
      

    That's it: the server will start inside the container and be available to any MCP-compatible client (e.g. Claude Desktop).


💻 Running Locally with Claude Desktop

You can also connect this MCP Documentation Server directly to Claude Desktop on your machine.

Steps:

  1. Install Claude Desktop

    • Make sure Claude Desktop is already installed on your system.
  2. Enable Developer Mode

    • Open Claude Desktop.

    • Go to Developer Mode.

    • Click Add Configuration. This will generate a local file called:

      claude_desktop_config.json
      
  3. Edit the Configuration File. Open claude_desktop_config.json and add a new entry like this (adjust the paths to match your machine):

    {
        "mcpServers": {
            "documentation": {
                "command": "C:\\Users\\Alawakey\\Desktop\\MCP Server\\documentation\\.venv\\Scripts\\python.exe",
                "args": ["C:\\Users\\Alawakey\\Desktop\\MCP Server\\documentation\\server.py"],
                "cwd": "C:\\Users\\Alawakey\\Desktop\\MCP Server\\documentation"
            }
        }
    }
    
    • command: path to your Python executable inside the virtual environment.
    • args: path to the server.py file.
    • cwd: working directory of your project.
  4. Restart Claude Desktop

    • Fully close Claude Desktop and start it again.
  5. Verify the MCP Server

    • In Claude Desktop, go to Search and Tools (near the text input).
    • You should see your MCP server listed under the name you provided (documentation in the example above).
  6. Try some queries

    • Example prompts:
      how do i implement a chroma db in langchain?
      
      Use get_docs with query="ChatOpenAI" and library="openai"
      

    Claude will ask for permission the first time you use the tool. After granting it, the get_docs tool will fetch results from the documentation.

