About
A command‑line tool that crawls LeetCode discussion forums for interview questions tagged with a company (default Google), organizes results by month, and exports them to CSV or Google Sheets for analysis.
Capabilities
Overview
The Mcp Leetcode Crawler is an MCP server that provides a ready‑made data ingestion pipeline for developers building AI assistants that need up‑to‑date interview content from LeetCode. By automatically crawling the discussion forums, filtering by company tags, and exporting structured data to CSV or Google Sheets, it eliminates the manual effort of gathering interview questions and allows AI models to retrieve fresh, well‑organized information on demand.
What problem does it solve? Interview preparation often relies on community‑generated question lists that are scattered across web pages and forums. Manually compiling these lists is time‑consuming, error‑prone, and quickly becomes outdated. The crawler automates this entire process: it visits the relevant discussion threads, extracts key details such as problem titles, links, and posting dates, groups them by month, and writes the results to a format that can be ingested directly into an AI knowledge base or analytics dashboard. This means developers can keep their assistant’s database of interview questions current without writing custom web‑scraping code.
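The extract-group-export step described above can be sketched in a few lines. This is a hypothetical illustration, not the crawler's actual code: the record fields (`title`, `link`, `date`) mirror the details the description says are extracted, and the function and file names are assumptions.

```python
# Sketch of the pipeline's final step: bucket extracted discussion records
# by posting month, then write one CSV per month.
import csv
from collections import defaultdict
from datetime import datetime


def group_by_month(records):
    """Bucket records like {'title': ..., 'link': ..., 'date': 'YYYY-MM-DD'} by month."""
    buckets = defaultdict(list)
    for rec in records:
        month = datetime.strptime(rec["date"], "%Y-%m-%d").strftime("%Y-%m")
        buckets[month].append(rec)
    return dict(buckets)


def write_monthly_csvs(buckets, out_dir="."):
    """Write each month's records to its own CSV file, e.g. questions_2024-05.csv."""
    for month, recs in buckets.items():
        path = f"{out_dir}/questions_{month}.csv"
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["title", "link", "date"])
            writer.writeheader()
            writer.writerows(recs)
```

Keeping the monthly grouping separate from the CSV writer makes it easy to swap the output target, for example a Google Sheets exporter, without touching the grouping logic.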
Key features:
- Company filtering – defaults to Google, but any company tag can be specified.
- Pagination control – choose how many pages of discussions to crawl, balancing completeness with speed.
- Structured output – single CSV for all data or monthly files that preserve temporal context.
- Google Sheets export – a convenient way to share results with teams or feed them into other tools that consume Google Sheets.
- Command‑line interface – all options are exposed via flags, making the tool scriptable and easy to integrate into CI/CD or scheduled jobs.
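The feature list above maps naturally onto a small set of CLI flags. The following sketch shows what that option surface might look like; the flag names (`--company`, `--pages`, `--monthly`, `--sheet-id`) are assumptions for illustration, not the tool's documented options.

```python
# Hypothetical CLI surface for the crawler, matching the features above:
# company filtering, pagination control, monthly output, and Sheets export.
import argparse


def build_parser():
    p = argparse.ArgumentParser(
        description="Crawl LeetCode discussions filtered by company tag"
    )
    p.add_argument("--company", default="google",
                   help="company tag to filter on (default: google)")
    p.add_argument("--pages", type=int, default=10,
                   help="number of discussion pages to crawl")
    p.add_argument("--monthly", action="store_true",
                   help="write one CSV per month instead of a single file")
    p.add_argument("--sheet-id",
                   help="optional Google Sheets ID to export results to")
    return p
```

Because everything is a flag with a sensible default, the tool stays scriptable: a cron job or CI step can run it unattended with only the options it needs.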
Real‑world use cases include:
- Building a knowledge graph of interview questions for an AI tutor.
- Generating analytics dashboards that track question trends over time.
- Feeding a chatbot with the latest company‑specific questions so it can answer user queries instantly.
- Syncing data to a shared Google Sheet that multiple recruiters or hiring managers can access.
Integration into AI workflows is straightforward. Once the CSV or Google Sheet is produced, an MCP client can request the data via a resource call. The server’s sampling capability allows the AI to pull only the most recent or relevant questions, and prompts can be crafted to ask the assistant for summaries or practice exercises based on that data. The crawler’s modular design also makes it easy to extend—future versions may add support for multiple tags, scheduling, or visualizations—ensuring that the tool grows with the needs of its users.
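The "most recent or relevant questions" selection mentioned above can be expressed as a simple recency filter over the exported rows. This is a minimal sketch under the assumption that rows have the same shape as the exported CSV (`title`, `link`, and an ISO-format `date`); the function name is hypothetical.

```python
# Select the newest N questions from exported rows, newest first,
# so an assistant can serve fresh questions without reloading everything.
from datetime import date


def most_recent(rows, limit=5):
    """Return up to `limit` rows sorted by ISO date, newest first."""
    return sorted(
        rows,
        key=lambda r: date.fromisoformat(r["date"]),
        reverse=True,
    )[:limit]
```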
Related Servers
Telegram MCP Server
Fast, API‑driven Telegram content for Claude
FetchSERP MCP Server
Unified SEO, SERP & Web Scraping via FetchSERP API
WebSearch MCP Server
Intelligent web search and content extraction via MCP
Firecrawl MCP Server
Web scraping and site crawling powered by Firecrawl API
Web Mcp Server
Automated web scraping with BeautifulSoup, Gemini AI, and Selenium
Web Scraping Agent MCP Server
AI-powered web scraping via n8n and Firecrawl