RedHatInsights/insights-host-inventory
You've arrived at the repo for the backend of the Host Based Inventory (HBI). If you're looking for API, integration, or user documentation for HBI, please see the Inventory section in our Platform Docs site.
- Getting started
- Running the webserver locally
- Running all services locally
- Legacy Support
- Identity
- Payload Tracker integration
- Database migrations
- Schema dumps (for replication subscribers)
- Docker builds
- Metrics
- Release process
- Rollback process
- Updating the System Profile
- Logging System Profile fields
- Running ad hoc jobs using a different image
- Debugging local code with services deployed into Kubernetes namespaces
- Contributing
Before starting, ensure you have the following installed on your system:
- Docker: For running containers and services.
- Python 3.9.x: The recommended version for this project.
- pipenv: For managing Python dependencies.
Local development also requires the pg_config binary, which is installed with the PostgreSQL developer library. To install it, use the command appropriate for your system:
```bash
# Fedora / RHEL
sudo dnf install libpq-devel postgresql

# Debian / Ubuntu
sudo apt-get install libpq-dev postgresql

# macOS (Homebrew)
brew install postgresql@16
```
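You can then confirm that pg_config is available on your PATH:

```bash
pg_config --version
```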
Create a .env file in your project root with the following content, replacing placeholders with appropriate values for your environment:
```bash
cat > ${PWD}/.env <<EOF
# RUNNING HBI Locally
PROMETHEUS_MULTIPROC_DIR=/tmp
BYPASS_RBAC="true"
BYPASS_UNLEASH="true"
# Optional legacy prefix configuration
# PATH_PREFIX="/r/insights/platform"
APP_NAME="inventory"
INVENTORY_DB_USER="insights"
INVENTORY_DB_PASS="insights"
INVENTORY_DB_HOST="localhost"
INVENTORY_DB_NAME="insights"
INVENTORY_DB_POOL_TIMEOUT="5"
INVENTORY_DB_POOL_SIZE="5"
INVENTORY_DB_SSL_MODE=""
INVENTORY_DB_SSL_CERT=""
UNLEASH_TOKEN='*:*.dbffffc83b1f92eeaf133a7eb878d4c58231acc159b5e1478ce53cfc'
UNLEASH_CACHE_DIR=./.unleash
UNLEASH_URL="http://localhost:4242/api"
# Kafka Export Service Configuration
KAFKA_EXPORT_SERVICE_TOPIC="platform.export.requests"
EOF
```
After creating the file, source it to set the environment variables:
source .env
- Install dependencies:
pipenv install --dev
- Activate virtual environment:
pipenv shell
Provide a local directory for database persistence:

mkdir ~/.pg_data

If using a different directory, update the volumes section in dev.yml.
All dependent services are managed by Docker Compose and are listed in the dev.yml file. Start them with the following command:
docker compose -f dev.yml up -d
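To verify that everything came up, you can list the running services:

```bash
docker compose -f dev.yml ps
```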
By default, the database container will use a bit of local storage so that data you enter will persist across multiple starts of the container. If you want to destroy that data, do the following:
```bash
docker compose -f dev.yml down
rm -r ~/.pg_data  # or another directory you defined in volumes
```
Once the services are up, apply the database migrations:

make upgrade_db
- Run the MQ Service:
make run_inv_mq_service
- Note: You may need to add a host entry for Kafka:
echo"127.0.0.1 kafka"| sudo tee -a /etc/hosts
- Create Hosts Data:
make run_inv_mq_service_test_producer NUM_HOSTS=800
- By default, it creates one host if NUM_HOSTS is not specified.
- Run the Export Service:
```bash
pipenv shell
make run_inv_export_service
```
In another terminal, generate events for the export service with:
make sample-request-create-export
By default, it sends the request in JSON format. To change the data format, use:
make sample-request-create-export format=[json|csv]
You can run the tests using pytest:
pytest --cov=.
Or run individual tests:
```bash
# To run all tests in a specific file:
pytest tests/test_api_auth.py

# To run a specific test:
pytest tests/test_api_auth.py::test_validate_valid_identity
```
- Note: Ensure DB-related environment variables are set before running tests.
Prometheus was designed to run in a multithreaded environment, whereas Gunicorn uses a multiprocess architecture. As a result, some work is needed to make Prometheus integrate with Gunicorn.
A temp directory for Prometheus needs to be created before the server starts, and the PROMETHEUS_MULTIPROC_DIR environment variable needs to point to it. The contents of this directory need to be removed between runs.
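For manual local runs, a minimal sketch of that setup (the directory path here is arbitrary) could be:

```bash
# Point Prometheus at a fresh multiprocess directory, clearing any stale state.
export PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus_multiproc
rm -rf "$PROMETHEUS_MULTIPROC_DIR"
mkdir -p "$PROMETHEUS_MULTIPROC_DIR"
```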
If running the server in a cluster, you can use this command:
gunicorn -c gunicorn.conf.py run
When running the server locally for development, the Prometheus configuration is done automatically. You can run the server locally using this command:
python3 run_gunicorn.py
Use Honcho to run MQ and web services at once:
honcho start
Some apps still need to use the legacy API path, which by default is /r/insights/platform/inventory/v1/. In case legacy apps require this prefix to be changed, it can be modified using this environment variable:
export INVENTORY_LEGACY_API_URL="/r/insights/platform/inventory/api/v1"
When testing the API, set the identity header in curl:
x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiJ0ZXN0IiwidHlwZSI6IlVzZXIiLCJhdXRoX3R5cGUiOiJiYXNpYy1hdXRoIiwidXNlciI6eyJ1c2VybmFtZSI6InR1c2VyQHJlZGhhdC5jb20iLCJlbWFpbCI6InR1c2VyQHJlZGhhdC5jb20iLCJmaXJzdF9uYW1lIjoidGVzdCIsImxhc3RfbmFtZSI6InVzZXIiLCJpc19hY3RpdmUiOnRydWUsImlzX29yZ19hZG1pbiI6ZmFsc2UsImlzX2ludGVybmFsIjp0cnVlLCJsb2NhbGUiOiJlbl9VUyJ9fX0=
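For example, a request to a locally running webserver might look like this (the port and API path are assumptions for a typical local setup; adjust them to your environment):

```bash
# Hypothetical local call; substitute the full Base64 header value from above.
curl -H "x-rh-identity: <base64 identity from above>" \
  http://localhost:8080/api/inventory/v1/hosts
```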
This is the Base64 encoding of:
{"identity": {"org_id":"test","type":"User","auth_type":"basic-auth","user": {"username":"tuser@redhat.com","email":"tuser@redhat.com","first_name":"test","last_name":"user","is_active":true,"is_org_admin":false,"is_internal":true,"locale":"en_US" } }}
The above header has the "User" identity type, but it's possible to use a "System" type header as well.
x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiAidGVzdCIsICJhdXRoX3R5cGUiOiAiY2VydC1hdXRoIiwgInN5c3RlbSI6IHsiY2VydF90eXBlIjogInN5c3RlbSIsICJjbiI6ICJwbHhpMTN5MS05OXV0LTNyZGYtYmMxMC04NG9wZjkwNGxmYWQifSwidHlwZSI6ICJTeXN0ZW0ifX0=
This is the Base64 encoding of:
{"identity": {"org_id":"test","auth_type":"cert-auth","system": {"cert_type":"system","cn":"plxi13y1-99ut-3rdf-bc10-84opf904lfad" },"type":"System" }}
If you want to encode other JSON documents, you can use the following command:
echo -n'{"identity": {"org_id": "0000001", "type": "System"}}'| base64 -w0
For Kafka messages, the Identity must be set in the platform_metadata.b64_identity field.
The provided Identity limits access to specific hosts. For API requests, the user can only access Hosts which have the same Org ID as the provided Identity. For Host updates via Kafka messages, a Host can only be updated if the Org ID matches and, additionally, Host.system_profile.owner_id matches the provided identity.system.cn value.
The inventory service integrates with the Payload Tracker service. Configure it using these environment variables:
```bash
KAFKA_BOOTSTRAP_SERVERS=localhost:29092
PAYLOAD_TRACKER_KAFKA_TOPIC=platform.payload-status
PAYLOAD_TRACKER_SERVICE_NAME=inventory
PAYLOAD_TRACKER_ENABLED=true
```
- Enabled: Set PAYLOAD_TRACKER_ENABLED=false to disable the tracker.
- Usage: The tracker logs success or errors for each payload operation. For example, if a payload contains multiple hosts and one fails, it's logged as a "processing_error" but doesn't mark the entire payload as failed.
Generate new migration scripts with:
make migrate_db message="Description of your changes"
- Replicated Tables: If your migration affects replicated tables, ensure you create and apply migrations for them first. See app_migrations/README.md for details.
Capture the current HBI schema state with:
make gen_hbi_schema_dump
- Generates a SQL file in app_migrations named hbi_schema_<YYYY-MM-dd>.sql.
- Creates a symbolic link hbi_schema_latest.sql pointing to the latest dump.

Note: Use the optional SCHEMA_VERSION variable to customize the filename.
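For example, assuming SCHEMA_VERSION is substituted into the generated filename:

```bash
make gen_hbi_schema_dump SCHEMA_VERSION=my-custom-version
```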
Build local development containers with:
docker build . -f dev.dockerfile -t inventory:dev
- Note: Some packages require a subscription. Ensure your host has access to valid RHEL content.
Prometheus integration provides monitoring endpoints:
- /health: Liveness probe endpoint.
- /metrics: Prometheus metrics endpoint.
- /version: Returns build version info.
Cron jobs (reaper, sp-validator) push metrics to a Prometheus Pushgateway at PROMETHEUS_PUSHGATEWAY (default: localhost:9091).
This section describes the process of getting a code change from a pull request all the way to production.
It all starts with a pull request. When a new pull request is opened, some jobs are run automatically. These jobs are defined in app-interface here.
- host-inventory pr-checker runs the following:
  - database migrations
  - code style checks
  - unit tests
- ci.ext.devshift.net PR build - All tests runs all the IQE tests on the PR's code.
- host-inventory build-master builds the container image and pushes it to Quay, where it is scanned for vulnerabilities.
Should any of these checks fail, it is indicated directly on the pull request.
When all of these checks pass and a reviewer approves the changes, the pull request can be merged by someone from the @RedHatInsights/host-based-inventory-committers team.
When a pull request is merged to master, a new container image is built and tagged as insights-inventory:latest. This image is then automatically deployed to the Stage environment.
Once the image lands in the Stage environment, QE testing can begin. People in @team-inventory-dev run the full IQE test suite against Stage, and then report the results in the #team-insights-inventory channel.
In order to promote a new image to the production environment, it is necessary to update the deploy-clowder.yml file. The ref parameter on the prod-host-inventory-prod namespace needs to be updated to the SHA of the validated image.
Once the change has been made, submit a merge request to app-interface. For the CI pipeline to run tests on your fork, you'll need to add @devtools-bot as a Maintainer. See this guide on how to do that.
After the MR has been opened, somebody from AppSRE/insights-host-inventory will review and approve it by adding a /lgtm comment. Afterward, the MR is merged automatically and the changes are deployed to the production environment. The engineer who approved the MR is then responsible for monitoring the rollout of the new image.
Once that happens, contact @team-inventory-dev and request the image to be re-tested in the production environment. The new image will also be tested automatically when the Full Prod Check pipeline is run (twice daily).
It is essential to monitor the health of the service during and after the production deployment. A non-exhaustive list of things to watch includes:
- Monitor the deployment in:
  - the OpenShift namespace host-inventory-prod: primarily ensure that the new pods are spun up properly
  - the Inventory Slack Channel: for any inventory-related Prometheus alerts
  - the Grafana Inventory Dashboard: for any anomalies such as error rate or consumer lag
  - the Kibana logs here: for any error-level log entries
Should unexpected problems occur during the deployment, it is possible to do a rollback. This is done by updating the ref parameter in deploy-clowder.yml to point to the previous commit SHA, or by reverting the MR that triggered the production deployment.
In order to add or update a field on the System Profile, first follow the instructions in the inventory-schemas repo. After an inventory-schemas PR has been accepted and merged, HBI must be updated to keep its own schema in sync. To do this, simply run this command:
make update-schema
This will pull the latest version of the System Profile schema from inventory-schemas and update files as necessary. Open a PR with these changes, and it will be reviewed and merged as per the standard process.
Use the environment variable SP_FIELDS_TO_LOG to log the System Profile fields of a host. These fields are logged when adding, updating, or deleting a host from inventory.
SP_FIELDS_TO_LOG="cpu_model,disk_devices"
This logging helps with debugging hosts in Kibana.
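A one-off way to set it for a single local run (using the MQ service target shown earlier) might be:

```bash
# Environment variables set on the make command line are inherited by the service process.
SP_FIELDS_TO_LOG="cpu_model,disk_devices" make run_inv_mq_service
```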
There may be a job (ClowdJobInvocation) which requires using a special image that is different from the one used by the parent application, i.e. host-inventory. Clowder does not allow this out of the box. Running a Special Job describes how to accomplish it.
Making local code work with the services running in Kubernetes requires some actions, provided here.
The repository uses pre-commit to enforce code style. Install the pre-commit hooks:
pre-commit install
If inside the Red Hat network, also ensure rh-pre-commit is installed, as per the instructions here.