
Search Fetch Server

MCP Server

A lightweight MCP server for notes, web fetching and DuckDuckGo search

Updated Dec 25, 2024

About

This TypeScript-based MCP server provides a simple notes system with tools to create notes, fetch URLs (with optional Puppeteer rendering and Markdown conversion), and perform DuckDuckGo searches. It also offers a prompt for summarizing stored notes, making it ideal for quick content aggregation and LLM prompt generation.

Capabilities

- Resources: access data sources
- Tools: execute functions
- Prompts: pre-built templates
- Sampling: AI model interactions

Overview

The Nexon33 Search Fetch Server MCP is a lightweight, TypeScript‑based Model Context Protocol server that turns web content and search queries into structured, editable notes. It solves the common developer pain point of manually ingesting and managing external information for AI assistants: instead of copying text into a local notebook, the server fetches URLs or search results, stores them as first‑class resources, and exposes simple tools for creation, retrieval, and summarization. This streamlines the workflow of building AI‑augmented knowledge bases or conversational agents that need up‑to‑date references without leaving the assistant’s environment.

At its core, the server offers three key capabilities. First, it implements a notes resource system in which each note is identified by a URI and contains a title, plain-text content, and optional metadata. Developers can list all notes or fetch a specific one through the MCP protocol, treating each note like any other data source. Second, the server provides tools that extend this resource layer: one lets users programmatically add new notes; another retrieves a web page's content, optionally rendering it with Puppeteer and converting the result to Markdown; and a third performs a live DuckDuckGo query, returning structured JSON results. These tools eliminate the need for separate scraping or search scripts and keep all interactions within a single MCP contract. Third, a built-in prompt aggregates all stored notes and produces an LLM-friendly prompt for generating concise summaries, enabling quick overviews of a growing knowledge base.
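The note-and-prompt flow described above can be sketched in plain TypeScript. This is a minimal illustration, not the server's actual implementation: the `Note` shape, `createNote`, and `buildSummaryPrompt` names are assumptions for the sake of the example.

```typescript
// Hypothetical shape of a stored note: a title plus plain-text content.
type Note = { title: string; content: string };

// Notes are keyed by URI, so each one can be addressed as an MCP resource.
const notes: Record<string, Note> = {};

// Create a note and return its URI (the "add new notes" tool, in spirit).
function createNote(id: string, title: string, content: string): string {
  const uri = `note:///${id}`;
  notes[uri] = { title, content };
  return uri;
}

// Aggregate every stored note into one LLM-friendly summarization prompt,
// mirroring the server's summarization prompt capability.
function buildSummaryPrompt(): string {
  const body = Object.entries(notes)
    .map(([uri, n]) => `## ${n.title} (${uri})\n${n.content}`)
    .join("\n\n");
  return `Summarize the following notes concisely:\n\n${body}`;
}

const uri = createNote("1", "MCP overview", "MCP standardizes tool access for LLMs.");
// uri === "note:///1"
console.log(buildSummaryPrompt());
```

Keying notes by URI is what lets the same data double as an MCP resource: a client can list the store, then dereference any single note on demand.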

The server’s design is deliberately simple yet powerful for real‑world scenarios. In a documentation bot, developers can fetch API docs or tutorials and turn them into searchable notes that the assistant can cite. In a research workflow, a scientist could pull recent papers or search results, store them as notes, and ask the assistant to summarize findings across multiple sources. Because every note is a URI‑based resource, it can be referenced directly in conversations or workflows, allowing the assistant to retrieve or modify content on demand. The optional Puppeteer rendering is especially useful for pages that rely heavily on JavaScript, ensuring that the assistant works with fully rendered content rather than raw HTML.

Integration into existing AI pipelines is straightforward: add the server to a client’s MCP configuration, and expose its tools and resources in prompts or as part of the assistant’s tool set. The server communicates over stdio, so it can run on any platform that supports Node.js, and its minimal dependencies make it easy to embed in containerized or serverless environments. The built‑in MCP Inspector further simplifies debugging, providing a browser‑based interface to trace requests and responses.
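For a stdio-based MCP client, registration might look like the following configuration fragment (the pattern used by, for example, Claude Desktop's `claude_desktop_config.json`). The server key and build path are placeholders, not values taken from this project:

```json
{
  "mcpServers": {
    "search-fetch-server": {
      "command": "node",
      "args": ["/path/to/search-fetch-server/build/index.js"]
    }
  }
}
```

Once registered, the client launches the server as a child process and speaks MCP over its stdin/stdout, which is why no network configuration is needed.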

Overall, the Nexon33 Search Fetch Server MCP delivers a cohesive solution for ingesting, managing, and summarizing external information. By treating web content and search results as first‑class notes, it gives developers a single, protocol‑compliant channel to enrich AI assistants with fresh data and structured knowledge, all without leaving the MCP ecosystem.