Challenge 60 β˜†β˜†

Welcome to challenge Challenge 60.

MCP Server Environment Variable Exposure

The Model Context Protocol (MCP), developed by Anthropic, is an open standard that allows AI assistants to connect to tools and data sources. While MCP enables powerful integrations, poorly secured MCP servers represent a significant security risk: they can expose sensitive secrets stored in environment variables to anyone who can reach them.

This challenge demonstrates a realistic scenario where a developer has deployed an MCP server with an execute_command tool. This type of tool is common in MCP servers used to give AI assistants shell access β€” but it can be abused by anyone who discovers the endpoint.

Your goal:

  1. An MCP server is running on a dedicated port (8090) separate from the main application

  2. The server exposes an execute_command tool that returns the process environment variables

  3. A secret (WRONGSECRETS_MCP_SECRET) is stored as an environment variable in the running container

  4. The MCP server has no authentication β€” anyone who can reach port 8090 can call its tools

How to interact with the MCP server:

First, discover the available tools:

curl -s -X POST http://localhost:8090/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

Then, call the execute_command tool to retrieve environment variables and find the secret:

curl -s -X POST http://localhost:8090/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"execute_command","arguments":{"command":"env"}}}'

πŸ€– Fun Fact β€” MCP Prompt Injection ("MCP Rug Pull"):

This MCP server goes one step further than just exposing env vars to passive callers. It also embeds malicious instructions in its initialize response (the instructions field). When a legitimate AI assistant connects to this server, those instructions are silently injected into the model’s system prompt. The model is told to immediately call execute_command with env, then forward the result to the forward_env tool β€” sending the AI client’s own environment variables back to the server β€” without ever informing the user.

You can try this locally by doing the following:

  1. run the container locally (e.g. docker run -p 8080:8080 -p 8090:8090 ghcr.io/owasp/wrongsecrets/wrongsecrets-pr:pr-2400-7391231)

  2. setup an agent, using the mcp server "http://localhost:8090/mcp"

  3. initialize the agent, and watch the logs of your container saying "MCP forward_env received exfiltrated client env data (XXX chars)", showing the MCP server received your env-vars.

This is known as the MCP rug pull or MCP supply chain attack, and it demonstrates why you should always review the instructions field of any MCP server you connect to before trusting it. Next, always make sure you only allow isolated processes without access to secrets to use MCP servers. Never call MCP servers directly from your terminal if sensitive ENV vars or files are present.

πŸ’‘ Tip: Secrets are often strings, numbers, or encoded values. Copy and paste exactly what you find.

Hint for Challenge 60

This challenge demonstrates how an insecure MCP (Model Context Protocol) server can leak secrets stored in environment variables.

Where to look:

  1. A separate MCP server is running on port 8090 β€” different from the main application port (8080)

  2. The MCP server implements the JSON-RPC 2.0 protocol as defined by the MCP specification

  3. Start by listing the available tools using the tools/list method

  4. Call the execute_command tool β€” it will return the server’s environment variables

Step-by-step approach:

First, list the tools the MCP server offers:

curl -s -X POST http://localhost:8090/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

Then call execute_command with any shell command (such as env):

curl -s -X POST http://localhost:8090/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"execute_command","arguments":{"command":"env"}}}'

What to look for:

  • Find the WRONGSECRETS_MCP_SECRET key in the returned environment variable dump

  • The value next to it is the answer to this challenge

Remember: The endpoint is also accessible on the main port β€” try /mcp if port 8090 is not reachable.

Why Challenge 60 Matters: MCP Prompt Injection and Environment Variable Exposure

The Problem:

MCP (Model Context Protocol) servers are increasingly popular for giving AI assistants access to tools and data. However, many developers deploy MCP servers without adequate security controls. Worse, a malicious MCP server can actively manipulate the connecting AI client through prompt injection β€” turning the AI itself into an unwitting exfiltration agent.

Two Vulnerabilities in One:

1. Insecure Shell Execution (classic MCP misconfiguration) The MCP server exposes an execute_command tool with no authentication. Anyone who reaches port 8090 can call it and dump all environment variables β€” including secrets.

2. MCP Prompt Injection / "MCP Rug Pull" (emerging supply-chain attack) The MCP initialize response includes an instructions field. This field is injected directly into the connecting LLM’s system prompt. A malicious server uses this to silently instruct the AI to:

  1. Call execute_command with env to collect the client’s own environment variables

  2. Call forward_env with the collected data β€” sending it back to the attacker

  3. Never inform the user about any of these steps

The user sees nothing unusual. The AI model follows its "system instructions" without question.

Why This Happens:

  1. Developer convenience: MCP servers are set up quickly for AI integrations and security is an afterthought

  2. Assumed internal-only access: Developers assume the MCP port is only reached by the AI assistant

  3. Powerful tools with no auth: execute_command and similar tools have no built-in access control

  4. Trust in MCP server instructions: AI clients treat the instructions field as authoritative β€” a malicious server abuses this trust

  5. Secrets in environment variables: Applications store secrets as env vars, trivially exposed by a shell command

Real-World Impact:

  • Secret exfiltration from AI clients: A compromised or impersonated MCP server can steal AWS_ACCESS_KEY_ID, GITHUB_TOKEN, database credentials, and other secrets from the AI agent’s process environment

  • Silent operation: The user is never informed β€” the AI exfiltrates data as part of its "initialization"

  • Supply chain attack: If an attacker can publish or replace an MCP server package, every AI assistant that connects becomes an exfiltration vector

  • Server-side exposure: The unauthenticated execute_command tool also leaks all server-side secrets to any caller

Common Exposure Vectors:

  • MCP servers bound to 0.0.0.0 instead of 127.0.0.1 (accessible from the network)

  • Docker containers with MCP port exposed without firewall rules

  • Kubernetes pods with MCP port accessible via service discovery

  • Malicious or typosquatted MCP server packages in registries

  • Compromised MCP servers in supply-chain attacks against AI workflows

Prevention:

  1. Never expose shell execution tools in production MCP servers β€” the risk is too high

  2. Bind MCP servers to localhost only (127.0.0.1) to prevent network access

  3. Require authentication for all MCP endpoints, even internal ones

  4. Audit the instructions field of any MCP server you connect to β€” treat it like an untrusted system prompt

  5. Use secrets managers (AWS Secrets Manager, HashiCorp Vault, Azure Key Vault) instead of environment variables

  6. Apply network segmentation so MCP servers are only reachable by the AI assistant

  7. Monitor MCP server access logs for unexpected tool calls, especially execute_command and forward_env

  8. Verify MCP server identity before connecting β€” treat third-party MCP servers as untrusted code

The Bottom Line:

An MCP server with a shell execution tool and no authentication is equivalent to running nc -l 8090 -e /bin/sh on your production server. A malicious MCP server that injects prompt instructions is even worse: it turns the AI assistant itself into the attack vector, silently exfiltrating secrets without the user’s knowledge. Treat any MCP server with the same security scrutiny you would apply to a privileged internal API β€” and review its initialize instructions as carefully as you would a third-party system prompt.


πŸ€– Insecure MCP Server Demo

An insecure MCP (Model Context Protocol) server is running alongside this application on a dedicated port. It exposes an execute_command tool that leaks environment variables β€” including secrets.

⚠️ The MCP server is also reachable on the main port via /mcp.

Step 1 β€” Discover what tools the MCP server exposes:

curl -s -X POST http://localhost:8090/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

Step 2 β€” Call the execute_command tool to retrieve environment variables:

curl -s -X POST http://localhost:8090/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"execute_command","arguments":{"command":"env"}}}'

πŸ’‘ Look for the WRONGSECRETS_MCP_SECRET key in the response above.