[MCP] Getting started with CAST Highlight MCP
The CAST Highlight MCP Server allows AI agents (Claude, Gemini, GPT, Codex, etc.) to interact with your Highlight portfolio using the Model Context Protocol (MCP).
This enables natural-language querying of Highlight data—applications, technologies, vulnerabilities, segmentations, scans, etc.—without exposing any sensitive information.
This guide explains how to install, configure, and validate the Highlight MCP Server using the official hl-ai-hub project structure.
Table of Contents
- Overview
- Prerequisites
- Environment Setup
- Launching the MCP Server
- Verifying MCP Connectivity
- Connecting Your AI Client
- (Optional) Add context to Agents
- Testing the MCP Server
- Tools Exposed by the MCP Server
- Data Safety & Security
- Limitations
- FAQ
1. Overview
The Highlight MCP Server provides a unified interface that translates AI requests into secure Highlight API calls.
It is delivered as a Docker-based service, with pre-configured support for multiple AI agents (Gemini, Codex, etc.).
Typical capabilities include:
- Listing and exploring Highlight applications
- Retrieving metrics, technologies, frameworks and vulnerabilities
- Fetching portfolio recommendations and segmentation files
- Monitoring or triggering scans
- Enriching LLM responses using shared resources
All data exchanged comes only from the Highlight REST API and never includes source code or sensitive files.
2. Prerequisites
Before starting, ensure you have:
✔ Docker installed
The server runs fully in containers.
✔ Internet access
Required for pulling docker images and accessing Highlight SaaS.
✔ Access to the CAST Highlight API
Used by the MCP backend to retrieve information.
✔ A CAST Highlight API key
Used by the MCP backend to authenticate with Highlight.
✔ Port 5185 available
The MCP server runs on this port.
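Before launching, you can confirm the port is actually free. A minimal Python sketch (assuming the default port 5185; any tool that checks listening sockets works equally well):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 only when something accepts the connection
        return s.connect_ex((host, port)) != 0

if port_is_free(5185):
    print("Port 5185 is available")
else:
    print("Port 5185 is already in use")
```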
3. Environment Setup
The project centralises all configuration parameters inside .env.
Start by creating your .env from the provided template:
cp .env.example .env
Edit the .env file:
# === Highlight / MCP ===
HIGHLIGHT_API_TOKEN=__put_your_token_here__
HIGHLIGHT_DOMAIN=__your_highlight_domain__
# MCP transport configuration (optional)
MCP_TRANSPORT=streamable-http
BACK_HOST=0.0.0.0
BACK_PORT=5185
All services—MCP server and AI agents—will automatically load these values.
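As a quick sanity check before launching, you can verify the required keys are filled in and no template placeholder was left behind. A hedged Python sketch (the key names and the `__...__` placeholder convention are taken from the template above):

```python
REQUIRED = ("HIGHLIGHT_API_TOKEN", "HIGHLIGHT_DOMAIN")

def missing_env_keys(env_text: str) -> list:
    """Return required keys that are absent, empty, or still a placeholder."""
    values = {}
    for line in env_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    # a value starting with "__" is still the unedited template placeholder
    return [k for k in REQUIRED
            if not values.get(k) or values[k].startswith("__")]

sample = "HIGHLIGHT_API_TOKEN=abc123\nHIGHLIGHT_DOMAIN=__your_highlight_domain__\n"
print(missing_env_keys(sample))  # → ['HIGHLIGHT_DOMAIN']
```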
4. Launching the MCP Server
In the hl-ai-hub project, the MCP server resides in /mcp-server.
Start it using:
cd mcp-server
docker compose up -d
This will:
- Start the mcp-server-highlight container
- Bind the MCP API to the port defined in .env (default 5185)
- Mount persistent data under mcp-server/data/
Confirm that the container is running:
docker ps
Expected output:
mcp-server-highlight 0.0.0.0:5185->5185/tcp Up highlight-mcp
5. Verifying MCP Connectivity
Test that the MCP transport endpoint is responding:
curl -i http://127.0.0.1:5185/mcp/ -H "Accept: application/json"
Expected response:
HTTP/1.1 200 OK
...
If this works, your MCP server is running properly.
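The same check can be scripted, for example in a CI pipeline. A minimal Python sketch that treats any HTTP answer as "reachable" and connection failures as "unreachable" (the URL assumes the default port from .env):

```python
import urllib.request
import urllib.error

def mcp_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the MCP endpoint answers at all (any HTTP status)."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server answered, just not with a 2xx status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, timeout...

print(mcp_reachable("http://127.0.0.1:5185/mcp/"))
```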
6. Connecting Your AI Client
Any MCP-compatible AI client can interact with the Highlight MCP Server.
Below is the standard configuration structure used by AI agents, shown here for Codex (configurations for other agents can be found in hl-mcp-package/agents). Place it in your agent's root folder, e.g. User/.codex/config.toml:
[mcp_servers."mcp-server-highlight"]
url = "http://YourServerIp:Port/mcp" #Port by default 5185
trust = true
startup_timeout_sec = 30
tool_timeout_sec = 30
http_headers = { Accept = "application/json, */*;q=0.1", highlight_domain = "HIGHLIGHT_DOMAIN", highlight_api_key = "HIGHLIGHT_API_TOKEN" }
The MCP server is multi-domain: any domain and token pair passed through the headers will override the global parameters:
- HIGHLIGHT_API_TOKEN
- HIGHLIGHT_DOMAIN
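The override behaviour can be pictured as a simple merge where per-request headers win over the global .env defaults. An illustrative Python sketch (not the server's actual code; the domain values are placeholders):

```python
def resolve_credentials(env: dict, headers: dict) -> dict:
    """Per-request headers override the global .env configuration."""
    return {
        "domain": headers.get("highlight_domain") or env.get("HIGHLIGHT_DOMAIN"),
        "token": headers.get("highlight_api_key") or env.get("HIGHLIGHT_API_TOKEN"),
    }

env = {"HIGHLIGHT_DOMAIN": "global.example.com", "HIGHLIGHT_API_TOKEN": "global-token"}
headers = {"highlight_domain": "other.example.com"}
# The domain comes from the header; the token falls back to .env
print(resolve_credentials(env, headers))
```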
Once the client restarts, you should see:
🟢 mcp-server-highlight - Ready (32 tools)
7. (Optional) Add context to Agents
To ensure AI agents correctly understand the Highlight MCP business domain, a structured context must be provided.
All context files are located in:
agents/context/
1️⃣ OpenAI Codex CLI
📁 Required File
Codex reads persistent context from a Markdown file.
agents/context/AGENTS.md
You can copy the file directly into your Codex root folder (User/.codex/).
Restart Codex normally and the Highlight context will be taken into account.
2️⃣ Gemini CLI
Gemini supports structured system instruction injection via JSON.
📁 Required File
agents/context/AIAssistant-context.json
Option A — Direct Flag
Pass the instruction file explicitly each time Gemini starts:
gemini --system-instruction-file=hl-mcp-package/agents/ressources/AIAssistant-context.json
Option B — Environment Variable (CI/CD Friendly)
Set the global variable in your CI
export GEMINI_SYSTEM_PROMPT="$(cat hl-mcp-package/agents/ressources/AIAssistant-context.json)"
8. Testing the MCP Server
You can now interact with Highlight using natural language:
Test 1 — List all applications
“List all Highlight applications available for my account.”
Uses: list_applications
Test 2 — Retrieve application details
“Show me key metrics, technologies, and vulnerabilities for the application X.”
Uses: get_application_details, get_technology_info, get_vulnerability_info
Test 3 — Portfolio insights
“What are the portfolio recommendations and why are they important?”
Uses: list_portfolio_recommendations
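Under the hood, MCP clients invoke these tools through JSON-RPC 2.0 `tools/call` requests sent to the transport endpoint. A hedged sketch of what such a request body looks like (the payload shape follows the MCP specification; the tool name comes from the list in section 9, and the arguments are illustrative):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# The AI client generates a request like this when you ask to list applications
print(build_tool_call("list_applications", {}))
```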
9. Tools Exposed by the MCP Server (32 total)
🔹 Portfolio Tools
- list_applications – Retrieve all applications available in the Highlight portfolio.
- find_applications – Search applications by name or partial match.
- list_platforms – List all detected technology platforms across the portfolio.
- list_licenses – Retrieve all open-source licenses identified in scans.
- list_frameworks – List frameworks detected in portfolio applications.
- list_tags – Retrieve all tags defined in the Highlight portfolio.
- list_technologies – List all technologies identified in the scanned applications.
- list_third_parties – Retrieve all third-party components detected across the portfolio.
- list_vulnerabilities – List all vulnerabilities detected across Highlight applications.
- list_portfolio_segmentations – Retrieve available segmentation models for the portfolio.
- list_portfolio_recommendations – List portfolio-level recommendations generated by Highlight.
- list_survey_questions – Retrieve available survey questions or assessment items.
🔹 Application Tools
- get_application_details – Retrieve global application details including metrics and detection summaries.
- get_application_info – Retrieve basic metadata for a specific application.
- refresh_application_results – Refresh Highlight analysis results for an application.
- get_segmentation_details – Retrieve segmentation hierarchy and metadata for an application.
- get_segmentation_json_file – Download the full segmentation file (JSON) for a selected application.
- get_recommendation_info – Retrieve detailed information about a specific Highlight recommendation.
- get_license_info – Retrieve details about a specific detected license.
- get_framework_info – Retrieve detailed information about a detected framework.
- get_platform_info – Retrieve metadata about a detected platform.
- get_tag_info – Retrieve detailed information about a specific tag.
- get_technology_info – Retrieve detailed information about a detected technology.
- get_third_party_info – Retrieve metadata about a specific third-party component.
- get_vulnerability_info – Retrieve details for a specific vulnerability ID.
- get_survey_or_question_info – Retrieve full details for a survey or question item.
- get_benchmark – Returns benchmark values used to compare portfolio/application scores.
- get_benchmark_alerts – Returns benchmark alert details.
🔹 Scanning Tools
- scan_volume_as_application – Trigger a scan by mounting a local directory as an application.
- scan_zip_as_application – Trigger a scan from a ZIP archive representing an application.
- monitor_scan_by_pid – Monitor the status of a scan using its process ID.
- monitor_scan_by_container_id – Monitor scan progress using a container ID.
- monitor_scan_stdout – Stream and return scan console output for monitoring purposes.
🔹 Utility Tools
- load_server_cache – Load or refresh cached Highlight API data to improve response times.
10. Data Safety & Security
The Highlight MCP Server strictly respects data-privacy constraints.
✔ The MCP returns only Highlight API data:
- Application metadata
- Technology detections
- Vulnerability summaries
- Segmentation files
- Scanning logs
✔ It never returns:
- Source code
- Credentials
- User data
- Repository contents
- Local file system access
✔ Authentication is handled via your .env (HIGHLIGHT_API_TOKEN and HIGHLIGHT_DOMAIN) or via the highlight_api_key and highlight_domain request headers
No authentication is ever forwarded to AI agents or stored in clear text outside the container.
11. Limitations
⚠ No access to source code or repository content
The MCP only exposes REST API metadata.
It does not read or return file contents.
⚠ No write operations on portfolio data
The MCP cannot modify or delete applications, metrics, or metadata.
⚠ Dependent on API key permissions
The MCP can only return data permitted by the associated API key.
Permissions are inherited from the Highlight account.
⚠ Limited by API performance and rate limits
Large portfolios or heavy operations may cause slower responses or temporary latency.
⚠ JSON may be large
Depending on your portfolio size, some JSON results may be large; the MCP server currently enforces a maximum of 512 MB per API response.
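If you post-process MCP responses in your own tooling, a simple size guard avoids silently working on truncated or oversized payloads. An illustrative Python sketch (the constant mirrors the server-side 512 MB cap; the helper itself is hypothetical, not part of the MCP server):

```python
MAX_RESPONSE_BYTES = 512 * 1024 * 1024  # mirrors the server-side 512 MB cap

def check_payload_size(payload: bytes, limit: int = MAX_RESPONSE_BYTES) -> bytes:
    """Raise instead of silently processing an oversized response."""
    if len(payload) > limit:
        raise ValueError(
            f"Response of {len(payload)} bytes exceeds the {limit}-byte limit"
        )
    return payload

check_payload_size(b'{"applications": []}')  # small payloads pass through
```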
⚠ AI models may hallucinate interpretations
Although tools deliver structured data, the AI’s explanations or conclusions may vary depending on the model’s reasoning abilities.
12. FAQ
General
Q: What is the Highlight MCP Server?
It is an HTTP-based service that allows AI assistants to interact with portfolio and application metadata through a set of predefined tools.
Q: Do I need to install anything on my machine?
Only Docker is required. The MCP server runs as a container via Docker Compose.
Q: Which AI assistants are compatible?
Any assistant that supports the Model Context Protocol (MCP), including tools integrating Gemini, Codex, and others.
Authentication
Q: How do I authenticate with the MCP server?
You provide your platform API key and domain in the .env file.
AI clients forward these values as headers when calling the MCP.
Q: Is my API key exposed to the AI model?
No.
The API key is passed as an HTTP header to the MCP backend, not to the language model.
Data Privacy
Q: What information is sent to the AI model?
Only REST API metadata: application names, metrics, technologies, vulnerabilities, segmentation summaries, and scan logs.
Q: Is source code ever sent to the AI?
Never.
The MCP server does not access or return source files.
Q: Can the MCP modify or delete portfolio data?
No.
All MCP tools are read-only except scan operations, which only create new scan results.
Technical
Q: My MCP server does not start. What should I check?
- Verify the .env file is present and correctly filled in.
- Make sure no other service uses port 5185.
- Check container logs using: docker logs -f mcp-server-highlight.
Q: The AI assistant says the MCP server is unreachable.
Verify that:
- The container is running
- The MCP URL matches http://127.0.0.1:5185/mcp/
- Your AI client was restarted after configuration changes
Q: Why do some responses take several seconds?
- Large portfolios
- Heavy queries (segmentation, vulnerabilities, scan logs)
- API latency
Q: Why do I get an authentication error?
- Invalid API key
- Domain mismatch
- Missing or incorrect environment variables