Galah: An LLM-powered web honeypot.

0x4D31/galah


TL;DR: Galah (/ɡəˈlɑː/ - pronounced ‘guh-laa’) is an LLM-powered web honeypot designed to mimic various applications and dynamically respond to arbitrary HTTP requests. Galah supports major LLM providers, including OpenAI, GoogleAI, GCP's Vertex AI, Anthropic, Cohere, and Ollama.

Unlike traditional web honeypots that manually emulate specific web applications or vulnerabilities, Galah dynamically crafts relevant responses—including HTTP headers and body content—to any HTTP request. Responses generated by the LLM are cached for a configurable period to prevent repetitive generation for identical requests, reducing API costs. The caching is port-specific, ensuring that responses generated for a particular port will not be reused for the same request on a different port.
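The port-scoped cache can be thought of as keying responses on (port, request) pairs, so the same request hashed on two ports yields two cache entries. A toy illustration of such a key (this is not Galah's actual keying scheme, just the idea):

```shell
# Toy cache key: identical requests on different ports hash to different keys.
key() { printf '%s|%s|%s' "$1" "$2" "$3" | sha256sum | cut -d' ' -f1; }
key 8080 GET /admin
key 8443 GET /admin   # same request, different port -> different key
```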

Galah can optionally inspect incoming HTTP requests against a set of Suricata rules, matching on various HTTP buffers including method, URI, headers, cookies, and request body (the current implementation doesn't support all Suricata keywords, and PCRE handling is limited). To enable and configure rule matching, see Suricata HTTP Rule Matching.
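For reference, a Suricata HTTP rule of the kind this matching targets looks like the following. This is an illustrative local rule (local SID range), not one shipped with Galah, and it deliberately sticks to the basic `http.method`/`http.uri` content keywords given the limited keyword support noted above:

```
alert http any any -> any any (msg:"HNAP1 endpoint probe (example)"; \
  http.method; content:"POST"; http.uri; content:"/HNAP1/"; \
  sid:1000001; rev:1;)
```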

The prompt configuration is key in this honeypot. While you can update the prompt in the configuration file, it is crucial to maintain the segment directing the LLM to produce responses in the specified JSON format.
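The prompt text itself is not reproduced here, but judging from the httpResponse fields in the event log example further down, the LLM is directed to return a JSON object carrying at least the response headers and body, roughly of this shape (header values here are made up; the authoritative schema is the prompt segment in config.yaml):

```json
{
  "headers": {
    "Server": "Apache/2.4.54 (Debian)",
    "Content-Type": "text/html"
  },
  "body": "<html>...</html>"
}
```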

Note: Galah was developed as a fun weekend project to explore the capabilities of LLMs in crafting HTTP messages. The honeypot may be identifiable through various methods such as network fingerprinting techniques, prolonged response times depending on the LLM provider and model, and non-standard responses. To protect against Denial of Wallet attacks, be sure to set usage limits on your LLM API.

Getting Started

Local Deployment

  • Ensure you have Go version 1.22+ installed.
  • Depending on your LLM provider, create an API key (e.g., from here for OpenAI and here for GoogleAI Studio) or set up authentication credentials (e.g., Application Default Credentials for GCP's Vertex AI).
  • If you want to serve HTTPS ports, generate TLS certificates.
  • Clone the repo and install the dependencies.
  • Update the config.yaml file if needed.
  • Build and run the Go binary!
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build -o galah ./cmd/galah
% export LLM_API_KEY=your-api-key
% ./galah --help

 ██████   █████  ██       █████  ██   ██
██       ██   ██ ██      ██   ██ ██   ██
██   ███ ███████ ██      ███████ ███████
██    ██ ██   ██ ██      ██   ██ ██   ██
 ██████  ██   ██ ███████ ██   ██ ██   ██

  llm-based web honeypot // version 1.0
       author: Adel "0x4D31" Karimi

Usage: galah --provider PROVIDER --model MODEL [--server-url SERVER-URL] [--temperature TEMPERATURE] [--api-key API-KEY] [--cloud-location CLOUD-LOCATION] [--cloud-project CLOUD-PROJECT] [--interface INTERFACE] [--config-file CONFIG-FILE] [--rules-config-file RULES-CONFIG-FILE] [--event-log-file EVENT-LOG-FILE] [--cache-db-file CACHE-DB-FILE] [--cache-duration CACHE-DURATION] [--log-level LOG-LEVEL] [--suricata-enabled] [--suricata-rules-dir SURICATA-RULES-DIR]

Options:
  --provider PROVIDER, -p PROVIDER
                         LLM provider (openai, googleai, gcp-vertex, anthropic, cohere, ollama) [env: LLM_PROVIDER]
  --model MODEL, -m MODEL
                         LLM model (e.g. gpt-3.5-turbo-1106, gemini-1.5-pro-preview-0409) [env: LLM_MODEL]
  --server-url SERVER-URL, -u SERVER-URL
                         LLM Server URL (required for Ollama) [env: LLM_SERVER_URL]
  --temperature TEMPERATURE, -t TEMPERATURE
                         LLM sampling temperature (0-2). Higher values make the output more random [default: 1, env: LLM_TEMPERATURE]
  --api-key API-KEY, -k API-KEY
                         LLM API Key [env: LLM_API_KEY]
  --cloud-location CLOUD-LOCATION
                         LLM cloud location region (required for GCP's Vertex AI) [env: LLM_CLOUD_LOCATION]
  --cloud-project CLOUD-PROJECT
                         LLM cloud project ID (required for GCP's Vertex AI) [env: LLM_CLOUD_PROJECT]
  --interface INTERFACE, -i INTERFACE
                         interface to serve on
  --config-file CONFIG-FILE, -c CONFIG-FILE
                         Path to config file [default: config/config.yaml]
  --event-log-file EVENT-LOG-FILE, -o EVENT-LOG-FILE
                         Path to event log file [default: event_log.json]
  --cache-db-file CACHE-DB-FILE, -f CACHE-DB-FILE
                         Path to database file for response caching [default: cache.db]
  --cache-duration CACHE-DURATION, -d CACHE-DURATION
                         Cache duration for generated responses (in hours). Use 0 to disable caching, and -1 for unlimited caching (no expiration). [default: 24]
  --log-level LOG-LEVEL, -l LOG-LEVEL
                         Log level (debug, info, error, fatal) [default: info]
  --suricata-enabled     Enable Suricata HTTP rule checking (default: false)
  --suricata-rules-dir SURICATA-RULES-DIR
                         Directory containing Suricata .rules files to check HTTP requests against
  --help, -h             display this help and exit
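For the HTTPS step above, a self-signed certificate is enough for local testing. The file names and subject below are placeholders; point the TLS entries in config.yaml at whatever paths you choose:

```shell
# Generate a self-signed RSA key and certificate valid for one year.
# (Placeholder file names; reference them from Galah's config.yaml.)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=localhost"
```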

Run in Docker

  • Ensure you have Docker CE or EE installed locally.
  • Clone the repo and build the docker image.
  • You can mount a local directory to the container to store the logs.
  • Run the docker container.
% git clone git@github.com:0x4D31/galah.git
% cd galah
% mkdir logs
% export LLM_API_KEY=your-api-key
% docker build -t galah-image .
% docker run -d --name galah-container -p 8080:8080 -v $(pwd)/logs:/galah/logs -e LLM_API_KEY galah-image -o logs/galah.json -p openai -m gpt-3.5-turbo-1106

Example Usage

./galah -p openai -m gpt-4.1-mini --suricata-enabled --suricata-rules-dir rules

Test:

curl --http1.1 --path-as-is -X POST \
  -H 'SOAPAction: "http://purenetworks.com/HNAP1/GetGuestNetworkSettings"' \
  -H 'Content-Type: text/xml' \
  --data '<GetGuestNetworkSettings xmlns="http://purenetworks.com/HNAP1/">' \
  http://127.0.0.1:8888/HNAP1/ -v

Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:8888...
* Connected to 127.0.0.1 (127.0.0.1) port 8888
> POST /HNAP1/ HTTP/1.1
> Host: 127.0.0.1:8888
> User-Agent: curl/8.7.1
> Accept: */*
> SOAPAction: "http://purenetworks.com/HNAP1/GetGuestNetworkSettings"
> Content-Type: text/xml
> Content-Length: 64
>
* upload completely sent off: 64 bytes
< HTTP/1.1 200 OK
< Server: TP-LINK HTTP Server/1.0
< Date: Mon, 21 Apr 2025 01:28:43 GMT
< Content-Length: 545
< Content-Type: text/xml; charset=utf-8
<
<?xml version="1.0" encoding="utf-8"?>
<GetGuestNetworkSettingsResponse xmlns="http://purenetworks.com/HNAP1/">
  <GetGuestNetworkSettingsResult>OK</GetGuestNetworkSettingsResult>
  <GuestNetworkEnabled>true</GuestNetworkEnabled>
  <GuestNetworkSSID>TPLink_Guest</GuestNetworkSSID>
  <GuestNetworkSecurity>WPA2-PSK</GuestNetworkSecurity>
  <GuestNetworkPassword>guest1234</GuestNetworkPassword>
  <GuestNetworkIsolation>true</GuestNetworkIsolation>
  <GuestNetworkSSIDBroadcast>true</GuestNetworkSSIDBroadcast>
</GetGuestNetworkSettingsResponse>

JSON event log:

{
  "eventTime": "2025-04-21T02:28:43.583386+01:00",
  "httpRequest": {
    "body": "<GetGuestNetworkSettings xmlns=\"http://purenetworks.com/HNAP1/\">",
    "bodySha256": "836c42168ebbad0b7192daa70ad8e4ea8d5930097162f513045f5ecb6ae9d5bd",
    "headers": {
      "Accept": "*/*",
      "Content-Length": "64",
      "Content-Type": "text/xml",
      "Soapaction": "\"http://purenetworks.com/HNAP1/GetGuestNetworkSettings\"",
      "User-Agent": "curl/8.7.1"
    },
    "headersSorted": "Accept,Content-Length,Content-Type,Soapaction,User-Agent",
    "headersSortedSha256": "3a44fecf9284eca3947c45ffeb2301ce6d9b3d0a3cc5a7491ccea2b6ed61edaa",
    "method": "POST",
    "protocolVersion": "HTTP/1.1",
    "request": "/HNAP1/",
    "sessionID": "1745198923587092000_qHLNEPBNPrH6qw==",
    "userAgent": "curl/8.7.1"
  },
  "httpResponse": {
    "headers": {
      "Content-Length": "454",
      "Content-Type": "text/xml; charset=utf-8",
      "Server": "TP-LINK HTTP Server/1.0"
    },
    "body": "<?xml version=\"1.0\" encoding=\"utf-8\"?> <GetGuestNetworkSettingsResponse xmlns=\"http://purenetworks.com/HNAP1/\">   <GetGuestNetworkSettingsResult>OK</GetGuestNetworkSettingsResult>   <GuestNetworkEnabled>true</GuestNetworkEnabled>   <GuestNetworkSSID>TPLink_Guest</GuestNetworkSSID>   <GuestNetworkSecurity>WPA2-PSK</GuestNetworkSecurity>   <GuestNetworkPassword>guest1234</GuestNetworkPassword>   <GuestNetworkIsolation>true</GuestNetworkIsolation>   <GuestNetworkSSIDBroadcast>true</GuestNetworkSSIDBroadcast> </GetGuestNetworkSettingsResponse>"
  },
  "level": "info",
  "msg": "successfulResponse",
  "port": "8888",
  "responseMetadata": {
    "generationSource": "llm",
    "info": {
      "model": "gpt-4.1-mini",
      "provider": "openai",
      "temperature": 1
    }
  },
  "sensorName": "mbp",
  "srcHost": "localhost",
  "srcIP": "127.0.0.1",
  "srcPort": "62418",
  "suricataMatches": [
    {
      "msg": "ET WEB_SPECIFIC_APPS D-Link DIR-823G Multiple HNAP SOAPAction Endpoints Authentication Bypass",
      "sid": "2061623"
    }
  ],
  "tags": null,
  "time": "2025-04-21T02:28:43.587152+01:00"
}
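Since each event is emitted as a single-line JSON object, the log is easy to post-process with standard tooling. Assuming jq is available, a quick summary of source IPs, request paths, and matched Suricata signature IDs might look like this (event_log.json is the default log path; adjust if you changed it):

```shell
# One line per event: source IP, request path, matched Suricata SIDs (if any).
jq -r '[.srcIP, .httpRequest.request,
        ([.suricataMatches[]?.sid] | join(","))] | @tsv' event_log.json
```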

See more examples here.
