Web Search MCP Server
by mrkrsl

Multi‑engine web search without API keys

205 stars · Updated 12 days ago

About

A TypeScript MCP server that performs comprehensive web searches using Bing, Brave, and DuckDuckGo, extracts full page content or snippets, and supports concurrent processing with browser isolation.

Capabilities

  • Resources: Access data sources
  • Tools: Execute functions
  • Prompts: Pre-built templates
  • Sampling: AI model interactions

Web Search MCP Server Overview

The Web Search MCP Server bridges local large‑language models (LLMs) with the dynamic web by providing a robust, API‑key‑free search capability. Developers can embed real‑time information retrieval into AI assistants without relying on external services, thereby avoiding rate limits, privacy concerns, or vendor lock‑in. This server is especially valuable for teams building knowledge‑intensive applications—such as research assistants, customer support bots, or data‑driven decision tools—that need up‑to‑date content without compromising on performance.

At its core, the server exposes three purpose‑built tools that cover a spectrum of search needs. The flagship tool performs an optimised multi‑engine query, prioritising Bing for breadth, Brave for privacy, and DuckDuckGo as a fallback. It then harvests full page content using a hybrid strategy: lightweight Axios HTTP requests for quick retrieval, and headless browser instances (Playwright Chromium or Firefox) when richer rendering is required. By isolating each engine in its own browser context, the server guarantees clean state and automatic cleanup, while concurrent processing of multiple URLs keeps latency low even for bulk queries. A second tool offers a leaner alternative that returns only snippet‑level results, ideal for quick fact checks or when bandwidth is constrained. A third tool lets callers pull the main body of a specific URL, stripping navigation, ads, and other clutter, which makes it well suited to focused content extraction.
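To make the hybrid strategy concrete, here is a minimal TypeScript sketch of the Axios‑first, browser‑fallback pattern described above. It is illustrative rather than the server's actual code: the 2,000‑character threshold, the timeout, and the user‑agent header are assumptions standing in for whatever heuristics the real implementation uses.

```typescript
import axios from "axios";
import { chromium } from "playwright";

// Hybrid fetch: cheap HTTP request first, headless browser only when needed.
async function fetchPage(url: string): Promise<string> {
  try {
    const res = await axios.get<string>(url, {
      timeout: 10_000, // assumed timeout
      headers: { "User-Agent": "Mozilla/5.0" }, // assumed UA string
      responseType: "text",
    });
    // Assumed heuristic: a very short body usually means a JS-rendered shell.
    if (res.data.length > 2_000) {
      return res.data;
    }
  } catch {
    // Blocked or failed request: fall through to the browser path.
  }

  // Fallback: render the page in headless Chromium and return the final HTML.
  const browser = await chromium.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle" });
    return await page.content();
  } finally {
    await browser.close();
  }
}
```

The appeal of this pattern is cost: most pages are served fully rendered and never pay the price of launching a browser, while JS‑heavy pages still come back complete.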

Developers integrate these tools into their AI workflows by adding the server to the MCP configuration of clients such as LM Studio or LibreChat. Once registered, a model can invoke any tool by name, passing the required parameters (a search query or URL). The server then returns a structured JSON payload that the LLM can consume, transform, or present to users. Because the server runs locally, latency stays low and privacy is improved: queries go straight to the search engines rather than through a third‑party API provider. The modular design also allows swapping out search engines or adding new tools without touching client code, giving teams flexibility to adapt as their needs evolve.
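For example, registering the server with an MCP‑capable client usually amounts to a small configuration entry. The snippet below is a sketch of the common mcpServers convention used by clients such as LM Studio; the file name and location vary by client, and the dist/index.js entry‑point path is an assumption, so check the client's documentation and the project README for specifics.

```json
{
  "mcpServers": {
    "web-search": {
      "command": "node",
      "args": ["/path/to/web-search-mcp/dist/index.js"]
    }
  }
}
```

Most clients pick up the new entry on restart and expose the server's tools to the model automatically.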

Key advantages of this MCP server include:

  • Zero‑API‑key dependency: Direct HTTP and browser scraping eliminate external billing or quota constraints.
  • Multi‑engine resilience: Automatic fallback ensures reliable results even when one provider is down or blocked.
  • Concurrent, isolated browsing: Parallel extraction with per‑engine isolation reduces interference and improves speed (see the sketch after this list).
  • Model compatibility focus: Optimised for recent tool‑friendly models (Qwen3, Gemma 3, Llama 3.x, DeepSeek R1), giving developers confidence in stable operation.
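The resilience and isolation points can be sketched with Playwright. In the minimal illustration below, each engine gets its own browser context so cookies and storage never leak between engines, a failing engine simply drops out of the result set, and all three fetches run in parallel. The search URLs, query handling, and timeout are placeholders, not the server's implementation.

```typescript
import { chromium, type Browser } from "playwright";

// One isolated context per engine: no shared cookies or storage, and
// cleanup is guaranteed even if navigation throws.
async function fetchIsolated(browser: Browser, url: string): Promise<string | null> {
  const context = await browser.newContext();
  try {
    const page = await context.newPage();
    await page.goto(url, { waitUntil: "domcontentloaded", timeout: 15_000 });
    return await page.content();
  } catch {
    return null; // engine down or blocked: the others still deliver results
  } finally {
    await context.close();
  }
}

async function searchAllEngines(query: string): Promise<string[]> {
  const q = encodeURIComponent(query);
  const engineUrls = [
    `https://www.bing.com/search?q=${q}`,
    `https://search.brave.com/search?q=${q}`,
    `https://html.duckduckgo.com/html/?q=${q}`,
  ];
  const browser = await chromium.launch({ headless: true });
  try {
    // Concurrent fetches, one isolated context each.
    const results = await Promise.all(
      engineUrls.map((url) => fetchIsolated(browser, url))
    );
    return results.filter((html): html is string => html !== null);
  } finally {
    await browser.close();
  }
}
```

Sharing a single browser process while splitting contexts keeps memory overhead low; contexts are far cheaper to create and destroy than whole browser instances.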

In practice, the server empowers use cases such as real‑time research assistants that pull the latest academic articles, customer support bots that fetch product documentation from vendor sites, or knowledge‑base builders that automatically harvest and summarise web content. By seamlessly integrating into existing MCP workflows, it turns a local LLM into a fully fledged, web‑aware conversational agent.