
Malicious Links

Malicious Link Detection identifies and validates URLs in LLM outputs to protect users from phishing, malware, and other harmful websites.

LLMs can include URLs in their responses from:

  • Training data — Memorized URLs that may now be compromised
  • User requests — “Generate a link to…” prompts
  • Injected content — Attackers embedding malicious links

These URLs may lead to:

  • Phishing sites
  • Malware downloads
  • Compromised domains
  • Typosquatting attacks

Glitch uses a multi-layer approach (sketched below):

  • Block URLs from domains on threat intelligence feeds.
  • Flag URLs from domains not in your known-safe list for review.
  • Detect suspicious URL patterns (unusual TLDs, excessive subdomains, URL shorteners).
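
As a rough illustration only, the Python sketch below shows how these three layers might combine. The evaluation order, data sets, and helper names are assumptions for the sketch, not Glitch's documented internals:

from urllib.parse import urlparse

# Hypothetical stand-ins for the three layers described above.
THREAT_FEED = {"malware-site.ru"}               # layer 1: threat intelligence
KNOWN_SAFE = {"github.com", "docs.python.org"}  # layer 2: your allow list
SUSPICIOUS_TLDS = {".xyz", ".zip", ".top"}      # layer 3: heuristic patterns
SHORTENERS = {"bit.ly", "t.co", "tinyurl.com"}

def classify_url(url: str) -> str:
    """Rough approximation of the layering at a moderate threshold."""
    host = (urlparse(url).hostname or "").lower()
    if host in THREAT_FEED:
        return "block"   # layer 1: known malicious domain
    if host in KNOWN_SAFE:
        return "allow"   # layer 2: domain is on the allow list
    if (host in SHORTENERS
            or any(host.endswith(tld) for tld in SUSPICIOUS_TLDS)
            or host.count(".") > 3):
        return "flag"    # layer 3: suspicious pattern
    return "allow"       # unknown but not suspicious; stricter thresholds flag this too

print(classify_url("http://malware-site.ru/file.exe"))  # block
print(classify_url("https://github.com/org/repo"))      # allow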

To flag unknown links at moderate sensitivity:

{
  "output_detectors": [
    { "detector_type": "unknown_links", "threshold": "L2", "action": "flag" }
  ]
}
To block unknown links at higher sensitivity, with an allow list of trusted domains:

{
  "output_detectors": [
    { "detector_type": "unknown_links", "threshold": "L3", "action": "block" }
  ],
  "allow_list": {
    "entries": [
      "*.yourcompany.com",
      "github.com",
      "docs.python.org"
    ],
    "match_type": "wildcard"
  }
}
The threshold setting controls how aggressively links are flagged:

Level  Behavior
L1     Only flag known malicious URLs
L2     Flag known malicious URLs + highly suspicious patterns
L3     Flag all unknown domains
L4     Flag all URLs not in the allow list
Example of a blocked output:

Output: "Download from http://malware-site.ru/file.exe"
Detection: unknown_links
Confidence: 0.99
Action: BLOCKED
Note: Domain is on threat intelligence blocklist.

Define safe domains to bypass link detection:

With exact matching, only the listed domains match:

{
  "allow_list": {
    "entries": [
      "docs.yourcompany.com",
      "github.com"
    ],
    "match_type": "exact"
  }
}
With wildcard matching, entries can cover subdomains:

{
  "allow_list": {
    "entries": [
      "*.yourcompany.com",
      "*.github.com",
      "*.python.org"
    ],
    "match_type": "wildcard"
  }
}
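
The exact semantics of wildcard entries (for example, whether "*.yourcompany.com" also matches the bare apex domain) are defined by the service. The sketch below is only an illustration, assuming fnmatch-style matching against the URL's hostname; the helper name is invented here:

from fnmatch import fnmatch
from urllib.parse import urlparse

def host_allowed(url: str, entries: list[str], match_type: str) -> bool:
    """Check a URL's hostname against allow-list entries (illustrative only)."""
    host = (urlparse(url).hostname or "").lower()
    if match_type == "exact":
        return host in entries
    # Wildcard: fnmatch-style patterns, so "*.yourcompany.com" matches
    # "docs.yourcompany.com" but not "yourcompany.com" itself.
    return any(fnmatch(host, pattern) for pattern in entries)

print(host_allowed("https://docs.yourcompany.com/guide", ["*.yourcompany.com"], "wildcard"))  # True
print(host_allowed("https://github.com/org/repo", ["github.com"], "exact"))                   # True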

Block specific domains regardless of threat intelligence:

{
  "deny_list": {
    "entries": [
      "competitor-scam.com",
      "*.suspicious-tld.xyz"
    ],
    "match_type": "wildcard"
  }
}
When the action is "block" and a malicious link is detected, the response is withheld and the client receives a 403:

HTTP/1.1 403 Forbidden
X-Risk-Blocked: true
X-Risk-Categories: unknown_links
X-Risk-Confidence: 0.95

{
  "error": {
    "message": "Response blocked: malicious URL detected",
    "type": "link_safety",
    "code": "malicious_link_detected"
  }
}
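
A minimal client-side sketch for the blocked case, reading the X-Risk-* headers shown above. The gateway URL and request payload here are placeholders, not a documented Glitch endpoint:

import requests

# Hypothetical gateway URL; substitute your actual endpoint.
GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"

resp = requests.post(GATEWAY_URL, json={"messages": [{"role": "user", "content": "..."}]})

if resp.status_code == 403 and resp.headers.get("X-Risk-Blocked") == "true":
    # The response was withheld; show a safe fallback message instead.
    categories = resp.headers.get("X-Risk-Categories", "")
    print(f"Response blocked ({categories}); showing fallback message.")
else:
    resp.raise_for_status()
    print(resp.json())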
When the action is "flag", the request succeeds and the risk headers describe what was found:

HTTP/1.1 200 OK
X-Risk-Blocked: false
X-Risk-Categories: unknown_links
X-Risk-Confidence: 0.70

Content is delivered, but your application can still take precautions (see the sketch below):

  • Show a warning before users click
  • Require confirmation for unknown links
  • Log for security review
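
For example, a minimal sketch that marks unknown links before rendering. The header names match the flagged response above; the URL-extraction regex and warning format are illustrative assumptions:

import re

URL_PATTERN = re.compile(r"https?://\S+")

def annotate_unknown_links(text: str, headers: dict) -> str:
    """If the gateway flagged unknown links, mark each URL with a warning."""
    if (headers.get("X-Risk-Blocked") == "false"
            and "unknown_links" in headers.get("X-Risk-Categories", "")):
        return URL_PATTERN.sub(lambda m: f"[unverified link: {m.group(0)}]", text)
    return text

headers = {"X-Risk-Blocked": "false", "X-Risk-Categories": "unknown_links"}
print(annotate_unknown_links("See http://example-unknown.io/setup for details.", headers))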

Begin by flagging unknown links to understand your baseline:

{
  "output_detectors": [
    { "detector_type": "unknown_links", "threshold": "L2", "action": "flag" }
  ]
}

Review flagged links to build your allow list.
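
One way to do that review, assuming you log each flagged URL (one per line) from responses whose X-Risk-Categories include unknown_links. The log path and format are assumptions for the sketch:

from collections import Counter
from urllib.parse import urlparse

# Hypothetical log of flagged URLs, one URL per line.
with open("flagged_urls.log") as f:
    domains = Counter(
        urlparse(line.strip()).hostname
        for line in f
        if line.strip()
    )

# Frequently flagged, legitimate domains are candidates for the allow list.
for domain, count in domains.most_common(20):
    print(f"{count:5d}  {domain}")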

Identify domains your application should link to:

{
  "allow_list": {
    "entries": [
      "*.yourcompany.com",
      "docs.python.org",
      "github.com",
      "stackoverflow.com"
    ],
    "match_type": "wildcard"
  }
}
Recommended settings by application type:

Application Type    Recommendation
Internal tool       L3-L4 + strict allow list
Customer support    L2 + allow list of your domains
Creative writing    L2 (flag only, don't block)
Children's app      L4 + minimal allow list

URL shorteners (bit.ly, t.co) hide their final destination. Options:

  • Block all shortened URLs (strict)
  • Flag shortened URLs for review (moderate; see the resolution sketch below)
  • Allow links only from specific shorteners (permissive)
To block common shorteners outright:

{
  "deny_list": {
    "entries": ["bit.ly/*", "tinyurl.com/*", "t.co/*"],
    "match_type": "wildcard"
  }
}
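
If you flag shortened URLs instead of blocking them, your application can resolve the destination before deciding whether to show the link. A rough sketch, assuming the shortener answers HEAD requests with redirects; this resolution step happens in your application, not in Glitch, and the example short link is made up:

import requests
from urllib.parse import urlparse

SHORTENER_HOSTS = {"bit.ly", "tinyurl.com", "t.co"}

def resolve_if_shortened(url: str, timeout: float = 5.0) -> str:
    """Follow redirects from known shorteners to reveal the final destination."""
    if urlparse(url).hostname not in SHORTENER_HOSTS:
        return url
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    return resp.url  # final URL after redirects; check it against your policy

final = resolve_if_shortened("https://bit.ly/example")
print(final)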