Expert-level Postgres monitoring tool designed for humans and AI systems
Built for senior DBAs, SREs, and AI systems who need rapid root cause analysis and deep performance insights. This isn't a tool for beginners — it's designed for Postgres experts who need to understand complex performance issues in minutes, not hours.
Part of Self-Driving Postgres - postgres_ai monitoring is a foundational component of PostgresAI's open-source Self-Driving Postgres (SDP) initiative, providing the advanced monitoring and intelligent root cause analysis capabilities essential for achieving higher levels of database automation.
- Top-down troubleshooting methodology: Follows the Four Golden Signals approach (Latency, Traffic, Errors, Saturation)
- Expert-focused design: Assumes deep Postgres knowledge and performance troubleshooting experience
- Dual-purpose architecture: Built for both human experts and AI systems requiring structured performance data
- Comprehensive query analysis: complete pg_stat_statements metrics with historical trends and plan variations
- Active Session History: Postgres's answer to Oracle ASH and AWS RDS Performance Insights
- Hybrid storage: Victoria Metrics (Prometheus-compatible) for metrics, Postgres for query texts — best of both worlds
📖 Read more: postgres_ai monitoring v0.7 announcement - detailed technical overview and architecture decisions.
This tool is NOT for beginners. It requires extensive Postgres knowledge and assumes familiarity with:
- Advanced Postgres internals and performance concepts
- Query plan analysis and optimization techniques
- Wait event analysis and system-level troubleshooting
- Production database operations and incident response
If you're new to Postgres, consider starting with simpler monitoring solutions before using postgres_ai.
Experience the full monitoring solution: https://demo.postgres.ai (login: demo / password: demo)
- Troubleshooting dashboard - Four Golden Signals with immediate incident response insights
- Query performance analysis - Top-N query workload analysis with resource consumption breakdowns
- Single query analysis - Deep dive into individual query performance and plan variations
- Wait event analysis - Active Session History for session-level troubleshooting
- Backups and DR - WAL archiving monitoring with RPO measurements
- Collection: pgwatch v3 (by Cybertec) for metrics gathering
- Storage: Victoria Metrics for time-series data + Postgres for query texts
- Visualization: Grafana with expert-designed dashboards
- Analysis: Structured data output for AI system integration
Infrastructure:
- Linux machine with Docker installed (separate from your database server)
- Docker access - the user running postgres_ai must have Docker permissions
- Access (network and pg_hba) to the Postgres database(s) you want to monitor
Database:
- Supports Postgres versions 14-18
- The pg_stat_statements extension must be created in the database used for the monitoring connection
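If you are unsure whether the extension is enabled, a minimal sketch for preparing it (the DSN below is a placeholder, not something the tool provides; pg_stat_statements also requires `shared_preload_libraries` to include it, which needs a server restart):

```shell
# Placeholder DSN - replace with a superuser connection string for your database
DSN="postgresql://postgres@localhost:5432/target_database"
SQL="create extension if not exists pg_stat_statements;"

# Print the statement so the step is auditable before running it
echo "$SQL"
# psql "$DSN" -c "$SQL"   # uncomment once DSN points at your real database
```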
WARNING: Security is your responsibility!
This monitoring solution exposes several ports that MUST be properly firewalled:
- Port 3000 (Grafana) - Contains sensitive database metrics and dashboards
- Port 58080 (PGWatch Postgres) - Database monitoring interface
- Port 58089 (PGWatch Prometheus) - Database monitoring interface
- Port 59090 (Victoria Metrics) - Metrics storage and queries
- Port 59091 (PGWatch Prometheus endpoint) - Metrics collection
- Port 55000 (Flask API) - Backend API service
- Port 55432 (Demo DB) - when using the --demo option
- Port 55433 (Metrics DB) - Postgres metrics storage
Configure your firewall to:
- Block public access to all monitoring ports
- Allow access only from trusted networks/IPs
- Use VPN or SSH tunnels for remote access
Failure to secure these ports may expose sensitive database information!
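As one way to approach this, the port list above can be turned into firewall rules. A sketch assuming ufw (adapt for iptables/nftables or cloud security groups); TRUSTED_NET is a hypothetical value you must replace:

```shell
TRUSTED_NET="10.0.0.0/24"   # hypothetical trusted network - adjust to your environment
PORTS="3000 58080 58089 59090 59091 55000 55432 55433"

# Print the rules rather than applying them; review, then run as root to apply
for port in $PORTS; do
  echo "ufw allow from ${TRUSTED_NET} to any port ${port} proto tcp"
  echo "ufw deny ${port}/tcp"
done
```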
Create a new DB user in the database to be monitored (skip this if you just want to check out postgres_ai monitoring with a synthetic demo database):
```sql
-- Create a user for postgres_ai monitoring
begin;

create user postgres_ai_mon with password '<password>';
grant connect on database <database_name> to postgres_ai_mon;
grant pg_monitor to postgres_ai_mon;
grant select on pg_index to postgres_ai_mon;

-- Create a public view for pg_statistic access (optional, for bloat analysis)
create view public.pg_statistic as
select
  n.nspname as schemaname,
  c.relname as tablename,
  a.attname,
  s.stanullfrac as null_frac,
  s.stawidth as avg_width,
  false as inherited
from pg_statistic s
join pg_class c on c.oid = s.starelid
join pg_namespace n on n.oid = c.relnamespace
join pg_attribute a on a.attrelid = s.starelid and a.attnum = s.staattnum
where a.attnum > 0 and not a.attisdropped;

grant select on public.pg_statistic to postgres_ai_mon;

alter user postgres_ai_mon set search_path = "$user", public, pg_catalog;

commit;
```
For RDS Postgres and Aurora:
```sql
create extension if not exists rds_tools;
grant execute on function rds_tools.pg_ls_multixactdir() to postgres_ai_mon;
```
For self-managed Postgres:
```sql
grant execute on function pg_stat_file(text) to postgres_ai_mon;
grant execute on function pg_stat_file(text, boolean) to postgres_ai_mon;
grant execute on function pg_ls_dir(text) to postgres_ai_mon;
grant execute on function pg_ls_dir(text, boolean, boolean) to postgres_ai_mon;
```
One-command setup:
```shell
# Download the CLI
curl -o postgres_ai https://gitlab.com/postgres-ai/postgres_ai/-/raw/main/postgres_ai \
  && chmod +x postgres_ai
```
Now, start it and wait a few minutes. To obtain a PostgresAI access token for your organization, visit https://console.postgres.ai (Your org name → Manage → Access tokens):
```shell
# Production setup with your access token
./postgres_ai quickstart --api-key=your_access_token
```

Note: You can also add your database instance in the same command:

```shell
./postgres_ai quickstart --api-key=your_access_token --add-instance="postgresql://user:pass@host:port/DB"
```

Or if you want to just check out how it works:

```shell
# Complete setup with demo database
./postgres_ai quickstart --demo
```

That's it! Everything is installed, configured, and running.
- Grafana Dashboards - Visual monitoring at http://localhost:3000
- Postgres Monitoring - PGWatch with comprehensive metrics
- Automated Reports - Daily performance analysis
- API Integration - Automatic upload to PostgresAI
- Demo Database - Ready-to-use test environment
For developers:
```shell
./postgres_ai quickstart --demo
```
Get a complete monitoring setup with demo data in under 2 minutes.
For production:
```shell
./postgres_ai quickstart --api-key=your_key

# Then add your databases
./postgres_ai add-instance "postgresql://user:pass@host:port/DB"
```
```shell
# Instance management
./postgres_ai add-instance "postgresql://user:pass@host:port/DB"
./postgres_ai list-instances
./postgres_ai test-instance my-DB

# Service management
./postgres_ai status
./postgres_ai logs
./postgres_ai restart

# Health check
./postgres_ai health
```
After running quickstart:
- 🚀 MAIN: Grafana Dashboard: http://localhost:3000 (login: monitoring; password is shown at the end of quickstart)
Technical URLs (for advanced users):
- Demo DB: postgresql://postgres:postgres@localhost:55432/target_database
- Monitoring: http://localhost:58080 (PGWatch)
- Metrics: http://localhost:59090 (Victoria Metrics)
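To confirm the stack came up, a quick reachability sketch against the local defaults listed above (assumes curl is installed; adjust ports if you changed them):

```shell
# Probe the main local endpoints; each line reports OK or DOWN
for url in http://localhost:3000 http://localhost:58080 http://localhost:59090; do
  if curl -fsS -o /dev/null --max-time 3 "$url" 2>/dev/null; then
    echo "OK   $url"
  else
    echo "DOWN $url"
  fi
done
```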
```shell
./postgres_ai help

# run without install
node ./cli/bin/postgres-ai.js --help

# local dev: install aliases into PATH
npm --prefix cli install --no-audit --no-fund
npm link ./cli
postgres-ai --help
pgai --help

# or install globally after publish (planned)
# npm i -g @postgresai/cli
# postgres-ai --help
# pgai --help
```
Get your access token at PostgresAI for automated report uploads and advanced analysis.
- Host stats for on-premise and managed Postgres setups
- pg_wait_sampling and pg_stat_kcache extension support
- Additional expert dashboards: autovacuum, checkpointer, lock analysis
- Query plan analysis and automated recommendations
- Enhanced AI integration capabilities
Python-based report generation lives under reporter/ and now ships with a pytest suite.
Install dev dependencies (includes pytest, pytest-postgresql, psycopg, etc.):
```shell
python3 -m pip install -r reporter/requirements-dev.txt
```
Run only unit tests with mocked Prometheus interactions:
```shell
pytest tests/reporter
```
This automatically skips integration tests. Or run specific test files:
```shell
pytest tests/reporter/test_generators_unit.py -v
pytest tests/reporter/test_formatters.py -v
```
Run the complete test suite (both unit and integration tests):
```shell
pytest tests/reporter --run-integration
```
Integration tests require PostgreSQL binaries (initdb, postgres) on your PATH. No manual database setup or environment variables are needed - the tests create and destroy their own temporary PostgreSQL instances automatically.
Summary:
- pytest tests/reporter → unit tests only (integration tests skipped)
- pytest tests/reporter --run-integration → both unit and integration tests
Generate coverage report:
```shell
pytest tests/reporter -m unit --cov=reporter --cov-report=html
```
View the coverage report by opening htmlcov/index.html in your browser.
We welcome contributions from Postgres experts! Please check our GitLab repository for:
- Code standards and review process
- Dashboard design principles
- Testing requirements for monitoring components
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
postgres_ai monitoring is developed by PostgresAI, bringing years of Postgres expertise into automated monitoring and analysis tools. We provide enterprise consulting and advanced Postgres solutions for fast-growing companies.