
AI Usage Stats MCP Server

An MCP server by tarcisiojr

Track AI assistant usage metrics in real time

Updated Mar 10, 2025

About

This MCP server plugin monitors interactions with AI assistants, capturing data volume, code changes, line counts, programming language, developer identity, and repository context. It submits these metrics to an analytics server for usage tracking and analysis.

Capabilities

  • Resources – Access data sources
  • Tools – Execute functions
  • Prompts – Pre-built templates
  • Sampling – AI model interactions

Overview

The AI Usage Stats MCP Server plugin is designed to give developers a lightweight, automated way to capture and report how their AI assistants are being used. In many modern development workflows, Claude or other LLMs generate code, refactor existing files, or even delete sections of a repository. While these interactions can boost productivity, they also produce valuable data about code churn, language usage, and developer behavior that usually goes unrecorded. This server fills that gap by collecting key metrics (bytes of data created, modified, or removed; the amount of generated code; the developer’s identity; and repository context) and shipping them to a central analytics endpoint for further analysis.

What the Server Does

At its core, the plugin hooks into the MCP event stream that drives AI assistant interactions. Every time a user runs an LLM-powered command, the plugin records:

  • Data volume: Bytes written, changed, or deleted during the session.
  • Code metrics: Lines of code added, altered, or removed, along with the programming language involved.
  • Contextual metadata: Developer name, associated Git repository, and any relevant tags.

Once collected, these statistics are batched and sent to a configured server endpoint. The server can then aggregate usage patterns, identify hotspots in the codebase, or provide insights into how different languages are leveraged by the team. The result is a richer understanding of AI impact without intruding on developer workflows.
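
To illustrate the batching step, a minimal collect-and-flush loop might look like the sketch below. It reuses the hypothetical UsageRecord interface from above; the endpoint URL, batch size, and flush interval are assumptions rather than the plugin’s actual behavior:

```typescript
// Minimal sketch of batched submission to an analytics endpoint (assumed URL).
const ANALYTICS_URL =
  process.env.ANALYTICS_URL ?? "https://analytics.example.com/usage";

const batch: UsageRecord[] = [];

function record(entry: UsageRecord): void {
  batch.push(entry);
  if (batch.length >= 20) void flush(); // flush early once the batch is large enough
}

async function flush(): Promise<void> {
  if (batch.length === 0) return;
  const payload = batch.splice(0, batch.length); // take and clear the current batch
  try {
    await fetch(ANALYTICS_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
  } catch {
    batch.unshift(...payload); // on network failure, requeue for the next attempt
  }
}

// Flush whatever has accumulated every 30 seconds.
setInterval(() => void flush(), 30_000);
```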

Key Features Explained

  • Automatic monitoring – The plugin runs silently in the background, capturing every relevant interaction without manual triggers.
  • Extensibility – Developers can easily add support for new languages or metrics by editing the TypeScript source, allowing the tool to evolve with project needs.
  • Configuration flexibility – Through the server’s configuration entry, users control the command to execute, its arguments, environment variables, and whether the plugin is enabled (see the example after this list).
  • Selective approval – An approval list lets teams pre‑authorize trusted tools, reducing friction while maintaining security.
  • Open‑source and MIT licensed – Encourages community contributions and integration into diverse toolchains.
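
For illustration, a typical MCP client configuration entry for this server might look like the following. The exact key names vary by client, and the command path, environment variable, and tool name shown here are placeholders, not values from the plugin’s documentation:

```json
{
  "mcpServers": {
    "ai-usage-stats": {
      "command": "node",
      "args": ["path/to/build/index.js"],
      "env": { "ANALYTICS_URL": "https://analytics.example.com/usage" },
      "disabled": false,
      "alwaysAllow": ["record_usage"]
    }
  }
}
```

Here `disabled` toggles the plugin on or off, and `alwaysAllow` is the pre‑approval list referenced above; both names follow the convention used by some MCP clients and may differ in yours.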

Real‑World Use Cases

  • Productivity analytics – Teams can measure how much code the AI is actually generating versus rewriting, informing ROI discussions.
  • Compliance and audit – By logging data churn per developer and repository, organizations can satisfy regulatory requirements for code traceability.
  • Language adoption tracking – The plugin highlights which languages receive the most AI assistance, aiding language policy decisions.
  • Performance tuning – Developers can correlate usage patterns with system performance, optimizing prompt design or tool selection.

Integration into AI Workflows

Because the plugin operates as an MCP server, it fits seamlessly into any assistant that speaks the protocol. A typical flow might involve:

  1. The AI client sends a request to perform code generation.
  2. The MCP server routes the request to the AI model, while the usage‑stats plugin listens for events.
  3. As the assistant produces output, the plugin records metrics and forwards them to the analytics endpoint.
  4. The client receives the final response, unimpeded.

This architecture ensures that usage data is captured without adding latency or requiring changes to the assistant’s core logic.
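
As a sketch of this fire-and-forget pattern, a tool handler could be wrapped so that metric reporting never blocks the response. The wrapper below is illustrative and builds on the hypothetical UsageRecord and record() definitions sketched earlier:

```typescript
// Illustrative wrapper: capture metrics without delaying the tool's response.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

function withUsageTracking(
  handler: ToolHandler,
  meta: { developer: string; repository: string; language: string },
): ToolHandler {
  return async (args) => {
    const output = await handler(args); // run the real tool first
    record({
      ...meta,
      bytesWritten: Buffer.byteLength(output, "utf8"),
      bytesChanged: 0,
      bytesDeleted: 0,
      linesAdded: output.split("\n").length,
      linesChanged: 0,
      linesRemoved: 0,
      timestamp: new Date().toISOString(),
    }); // queued for a background flush: no await, no added latency
    return output; // the client receives its response unimpeded
  };
}
```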

Unique Advantages

What sets this MCP server apart is its focus on quantitative insight rather than just qualitative output. By turning every AI interaction into a measurable data point, it empowers teams to make evidence‑based decisions about tooling, training, and process improvement. Its modular design means it can be dropped into existing MCP ecosystems with minimal friction, while still offering a rich set of customization hooks for advanced use cases.