Expert-level Postgres monitoring tool designed for humans and AI systems
Built for senior DBAs, SREs, and AI systems who need rapid root cause analysis and deep performance insights. This isn't a tool for beginners — it's designed for Postgres experts who need to understand complex performance issues in minutes, not hours.
Part of Self-Driving Postgres - postgres_ai monitoring is a foundational component of PostgresAI's open-source Self-Driving Postgres (SDP) initiative, providing the advanced monitoring and intelligent root cause analysis capabilities essential for achieving higher levels of database automation.
- Top-down troubleshooting methodology: Follows the Four Golden Signals approach (Latency, Traffic, Errors, Saturation)
- Expert-focused design: Assumes deep Postgres knowledge and performance troubleshooting experience
- Dual-purpose architecture: Built for both human experts and AI systems requiring structured performance data
- Comprehensive query analysis: Complete pg_stat_statements metrics with historical trends and plan variations
- Active Session History: Postgres's answer to Oracle ASH and AWS RDS Performance Insights
- Hybrid storage: Prometheus for metrics, Postgres for query texts — best of both worlds
📖 Read more: postgres_ai monitoring v0.7 announcement - detailed technical overview and architecture decisions.
This tool is NOT for beginners. It requires extensive Postgres knowledge and assumes familiarity with:
- Advanced Postgres internals and performance concepts
- Query plan analysis and optimization techniques
- Wait event analysis and system-level troubleshooting
- Production database operations and incident response
If you're new to Postgres, consider starting with simpler monitoring solutions before using postgres_ai.
Experience the full monitoring solution: https://demo.postgres.ai (login: demo / password: demo)
- Troubleshooting dashboard - Four Golden Signals with immediate incident response insights
- Query performance analysis - Top-N query workload analysis with resource consumption breakdowns
- Single query analysis - Deep dive into individual query performance and plan variations
- Wait event analysis - Active Session History for session-level troubleshooting
- Backups and DR - WAL archiving monitoring with RPO measurements
- Collection: pgwatch v3 (by Cybertec) for metrics gathering
- Storage: Prometheus for time-series data + Postgres for query texts
- Visualization: Grafana with expert-designed dashboards
- Analysis: Structured data output for AI system integration
Infrastructure:
- Linux machine with Docker installed (separate from your database server)
- Docker access - the user running postgres_ai must have Docker permissions (see the check below)
- Access (network and pg_hba) to the Postgres database(s) you want to monitor
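One way to verify Docker access before running the CLI - a sketch assuming a standard Linux setup with the usual docker group; adjust the user name to the account that will run postgres_ai:

```bash
# Add the current user to the docker group (requires re-login or newgrp to take effect)
sudo usermod -aG docker "$USER"
newgrp docker

# If this prints "Docker access OK", the user can talk to the Docker daemon
docker info > /dev/null && echo "Docker access OK"
```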
Database:
- Supports Postgres versions 14-18
- pg_stat_statements extension must be created in the database used for the monitoring connection (see the sketch below)
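If pg_stat_statements isn't enabled yet, the setup looks roughly like this - a sketch with placeholder connection strings; on managed services (RDS, Cloud SQL, etc.) the preload setting is changed through the provider's parameter group instead of ALTER SYSTEM:

```bash
# pg_stat_statements must be listed in shared_preload_libraries (append rather than
# overwrite if other libraries are already configured); this requires a Postgres restart.
psql "postgresql://postgres@your-db-host:5432/postgres" \
  -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';"

# After the restart, create the extension in the database you plan to monitor:
psql "postgresql://postgres@your-db-host:5432/<database_name>" \
  -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
```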
WARNING: Security is your responsibility!
This monitoring solution exposes several ports that MUST be properly firewalled:
- Port 3000 (Grafana) - Contains sensitive database metrics and dashboards
- Port 58080 (PGWatch Postgres) - Database monitoring interface
- Port 58089 (PGWatch Prometheus) - Database monitoring interface
- Port 59090 (Prometheus) - Metrics storage and queries
- Port 59091 (PGWatch Prometheus endpoint) - Metrics collection
- Port 55000 (Flask API) - Backend API service
- Port 55432 (Demo DB) - When using the --demo option
- Port 55433 (Metrics DB) - Postgres metrics storage
Configure your firewall to:
- Block public access to all monitoring ports
- Allow access only from trusted networks/IPs
- Use VPN or SSH tunnels for remote access
Failure to secure these ports may expose sensitive database information!
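As an illustration only, assuming ufw is your firewall and 203.0.113.10 stands in for a trusted admin host, the rules for the ports listed above could look like this:

```bash
# Allow a trusted IP first, then deny everyone else (ufw matches rules in order)
for port in 3000 58080 58089 59090 59091 55000 55432 55433; do
  sudo ufw allow from 203.0.113.10 to any port "$port" proto tcp
  sudo ufw deny "$port/tcp"
done

# Or keep the ports closed entirely and reach Grafana through an SSH tunnel:
ssh -N -L 3000:localhost:3000 user@monitoring-host   # then open http://localhost:3000
```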
Create a new DB user in the database to be monitored (skip this if you just want to check out postgres_ai monitoring with a synthetic demo database):
```sql
-- Create a user for postgres_ai monitoring
begin;

create user postgres_ai_mon with password '<password>';
grant connect on database <database_name> to postgres_ai_mon;
grant pg_monitor to postgres_ai_mon;
grant select on pg_stat_statements to postgres_ai_mon;
grant select on pg_stat_database to postgres_ai_mon;
grant select on pg_stat_user_tables to postgres_ai_mon;

-- Create a public view for pg_statistic access (required for bloat metrics on user schemas)
create view public.pg_statistic as
select
  n.nspname as schemaname,
  c.relname as tablename,
  a.attname,
  s.stanullfrac as null_frac,
  s.stawidth as avg_width,
  false as inherited
from pg_statistic s
join pg_class c on c.oid = s.starelid
join pg_namespace n on n.oid = c.relnamespace
join pg_attribute a on a.attrelid = s.starelid and a.attnum = s.staattnum
where a.attnum > 0 and not a.attisdropped;

grant select on public.pg_statistic to pg_monitor;

alter user postgres_ai_mon set search_path = "$user", public, pg_catalog;

commit;
```
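Before wiring the new role into the collector, it may help to confirm it can actually read the statistics views. A minimal check, substituting your own host, port, and password:

```bash
# Should return a row count rather than a permission error
psql "postgresql://postgres_ai_mon:<password>@your-db-host:5432/<database_name>" \
  -c "select count(*) from pg_stat_statements;"
```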
One command setup:
```bash
# Download the CLI
curl -o postgres_ai https://gitlab.com/postgres-ai/postgres_ai/-/raw/main/postgres_ai \
  && chmod +x postgres_ai
```
Now, start it and wait for a few minutes. To obtain a PostgresAI access token for your organization, visit https://console.postgres.ai (Your org name → Manage → Access tokens):
```bash
# Production setup with your access token
./postgres_ai quickstart --api-key=your_access_token
```
Note: You can also add your database instance in the same command:
```bash
./postgres_ai quickstart --api-key=your_access_token --add-instance="postgresql://user:pass@host:port/DB"
```
Or if you want to just check out how it works:
```bash
# Complete setup with demo database
./postgres_ai quickstart --demo
```
That's it! Everything is installed, configured, and running.
- Grafana Dashboards - Visual monitoring at http://localhost:3000
- Postgres Monitoring - PGWatch with comprehensive metrics
- Automated Reports - Daily performance analysis
- API Integration - Automatic upload to PostgresAI
- Demo Database - Ready-to-use test environment
For developers:
```bash
./postgres_ai quickstart --demo
```
Get a complete monitoring setup with demo data in under 2 minutes.
For production:
```bash
./postgres_ai quickstart --api-key=your_key

# Then add your databases
./postgres_ai add-instance "postgresql://user:pass@host:port/DB"
```
```bash
# Instance management
./postgres_ai add-instance "postgresql://user:pass@host:port/DB"
./postgres_ai list-instances
./postgres_ai test-instance my-DB

# Service management
./postgres_ai status
./postgres_ai logs
./postgres_ai restart

# Health check
./postgres_ai health
```
After running quickstart:
- 🚀 MAIN: Grafana Dashboard: http://localhost:3000 (login: monitoring; password is shown at the end of quickstart)
Technical URLs (for advanced users):
- Demo DB: postgresql://postgres:postgres@localhost:55432/target_database
- Monitoring: http://localhost:58080 (PGWatch)
- Metrics: http://localhost:59090 (Prometheus)
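A couple of optional sanity checks against the demo setup, assuming the default ports above (the Prometheus call uses its standard HTTP API to ask whether scrape targets are up):

```bash
# Demo database (only present when quickstart was run with --demo)
psql "postgresql://postgres:postgres@localhost:55432/target_database" -c "select version();"

# Prometheus: are the scrape targets up?
curl -s "http://localhost:59090/api/v1/query?query=up"
```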
```bash
./postgres_ai help
```
Get your access token at PostgresAI for automated report uploads and advanced analysis.
- Host stats for on-premise and managed Postgres setups
- pg_wait_sampling and pg_stat_kcache extension support
- Additional expert dashboards: autovacuum, checkpointer, lock analysis
- Query plan analysis and automated recommendations
- Enhanced AI integration capabilities
We welcome contributions from Postgres experts! Please check our GitLab repository for:
- Code standards and review process
- Dashboard design principles
- Testing requirements for monitoring components
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
postgres_ai monitoring is developed by PostgresAI, bringing years of Postgres expertise into automated monitoring and analysis tools. We provide enterprise consulting and advanced Postgres solutions for fast-growing companies.