Configuration for the application is in config/default.js and config/production.js. The following parameters can be set in the config files or as environment variables:
LOG_LEVEL: the log level
PORT: the server port
CASCADE_PAUSE_MS: how many milliseconds to pause between deleting records during a cascade delete (defaults to 1000)
AUTH_SECRET: TC Authentication secret
VALID_ISSUERS: valid issuers for TC authentication
PAGE_SIZE: the default pagination limit
MAX_PAGE_SIZE: the maximum pagination size
API_VERSION: the API version
DB_NAME: the database name
DB_USERNAME: the database username
DB_PASSWORD: the database password
DB_HOST: the database host
DB_PORT: the database port
AUTH0_URL: Auth0 URL, used to get TC M2M token
AUTH0_AUDIENCE: Auth0 audience, used to get TC M2M token
TOKEN_CACHE_TIME: Auth0 token cache time, used to get TC M2M token
AUTH0_CLIENT_ID: Auth0 client id, used to get TC M2M token
AUTH0_CLIENT_SECRET: Auth0 client secret, used to get TC M2M token
AUTH0_PROXY_SERVER_URL: Proxy Auth0 URL, used to get TC M2M token
BUSAPI_URL: Topcoder Bus API URL
KAFKA_ERROR_TOPIC: The error topic at which bus api will publish any errors
KAFKA_MESSAGE_ORIGINATOR: The originator value for the kafka messages
UBAHN_CREATE_TOPIC: Kafka topic for create message
UBAHN_UPDATE_TOPIC: Kafka topic for update message
UBAHN_DELETE_TOPIC: Kafka topic for delete message
UBAHN_AGGREGATE_TOPIC: Kafka topic that is used to combine all create, update and delete message(s)
ES_HOST: Elasticsearch host
ES.DOCUMENTS: Elasticsearch index, type and id mapping for resources.
ATTRIBUTE_GROUP_PIPELINE_ID: The pipeline id for enrichment with attribute group. Default is attributegroup-pipeline
SKILL_PROVIDER_PIPELINE_ID: The pipeline id for enrichment with skill provider. Default is skillprovider-pipeline
USER_PIPELINE_ID: The pipeline id for enrichment of user details. Default is user-pipeline
ATTRIBUTE_GROUP_ENRICH_POLICYNAME: The enrich policy for attribute group. Default is attributegroup-policy
SKILL_PROVIDER_ENRICH_POLICYNAME: The enrich policy for skill provider. Default is skillprovider-policy
ROLE_ENRICH_POLICYNAME: The enrich policy for role. Default is role-policy
ACHIEVEMENT_PROVIDER_ENRICH_POLICYNAME: The enrich policy for achievement provider. Default is achievementprovider-policy
SKILL_ENRICH_POLICYNAME: The enrich policy for skill. Default is skill-policy
ATTRIBUTE_ENRICH_POLICYNAME: The enrich policy for attribute. Default is attribute-policy
ELASTICCLOUD_ID: The Elastic Cloud id, if your Elasticsearch instance is hosted on Elastic Cloud. DO NOT provide a value for ES_HOST if you are using this
ELASTICCLOUD_USERNAME: The Elastic Cloud username for basic authentication. Provide this only if your Elasticsearch instance is hosted on Elastic Cloud
ELASTICCLOUD_PASSWORD: The Elastic Cloud password for basic authentication. Provide this only if your Elasticsearch instance is hosted on Elastic Cloud
MAX_BATCH_SIZE: Restricts the number of records held in memory during bulk insert (used by the db-to-es migration script)
MAX_RESULT_SIZE: The maximum number of results per query. Default is 1000 (used by the db-to-es migration script)
MAX_BULK_SIZE: The maximum size of a bulk indexing request. Default is 100 (used by the db-to-es migration script)
For the ES.DOCUMENTS configuration, you will find several nested settings below it in the config file. Each has default values that you can override using environment variables.
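As a rough illustration, the parameters above map to the config file through the usual node-config pattern of reading environment variables with fallbacks. This is a minimal sketch, not the actual file: the defaults shown for PORT, DB_PORT and ES_HOST are assumptions, so check config/default.js for the authoritative keys and values.

// Sketch of config/default.js (illustrative, not the real file)
module.exports = {
  LOG_LEVEL: process.env.LOG_LEVEL || 'debug',
  PORT: process.env.PORT || 3000, // assumed default
  CASCADE_PAUSE_MS: process.env.CASCADE_PAUSE_MS || 1000,
  AUTH_SECRET: process.env.AUTH_SECRET,
  VALID_ISSUERS: process.env.VALID_ISSUERS,
  DB_NAME: process.env.DB_NAME,
  DB_USERNAME: process.env.DB_USERNAME,
  DB_PASSWORD: process.env.DB_PASSWORD,
  DB_HOST: process.env.DB_HOST,
  DB_PORT: process.env.DB_PORT || 5432, // assumed Postgres default
  ES_HOST: process.env.ES_HOST || 'http://localhost:9200' // assumed default
}

Note that values read from process.env arrive as strings; the real config file may parse numeric settings before use.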
Local deployment
Set up your Elasticsearch instance and ensure that it is up and running.
Follow the Configuration section above to update config values, such as the database settings.
Go to the UBahn-api directory and run npm i and npm run lint
Run the migrations: npm run migrations up. This will create the database and its tables.
Then run npm run insert-data to insert mock data into the database.
Next, run npm run migrate-db-to-es to migrate the data from the database to Elasticsearch.
Start the server with node app.js or npm run start
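Taken together, a first-time local setup (assuming Elasticsearch and the database are already running and the config values are set) boils down to:

npm i
npm run lint
npm run migrations up
npm run insert-data
npm run migrate-db-to-es
npm run start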
Migrations
Migrations are located under the ./scripts/db/ folder. Run npm run migrations up to execute the migrations and npm run migrations down to remove the earlier ones.
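For example, to apply and then undo the migrations:

# apply all pending migrations (creates the database tables)
npm run migrations up
# remove the previously applied migrations
npm run migrations down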
Import data from QLDB
Make sure QLDB_NAME, AWS_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set in your environment
Run npm run migrate-qldb-to-pg to import data from QLDB.
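For example (the variable names are the ones listed above; every value below is a placeholder):

export QLDB_NAME=<your ledger name>
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID=<your access key>
export AWS_SECRET_ACCESS_KEY=<your secret key>
npm run migrate-qldb-to-pg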
Import data from S3 to QLDB
Make sure BUCKET_NAME, QLDB_NAME, AWS_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set in your environment
Run npm run import-s3-data to import data from S3 to QLDB.
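This uses the same QLDB and AWS variables as the import above, plus the bucket name (placeholder value):

export BUCKET_NAME=<your s3 bucket>
npm run import-s3-data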
Local Deployment with Docker
Make sure all config values are correct and that you can run the application locally, then run the commands below:
Navigate to the docker directory
Rename the file sample.env to .env
Set the required AUTH0 configuration, DB configuration and Elasticsearch host in the file .env (see the sketch after these steps)
Once that is done, run the following command
docker-compose up
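A sketch of what .env might look like: the variable names come from the Configuration section, every value below is a placeholder, and sample.env remains the authoritative template.

AUTH0_URL=<auth0 url>
AUTH0_AUDIENCE=<auth0 audience>
AUTH0_CLIENT_ID=<auth0 client id>
AUTH0_CLIENT_SECRET=<auth0 client secret>
DB_NAME=<database name>
DB_USERNAME=<database username>
DB_PASSWORD=<database password>
DB_HOST=<database host>
DB_PORT=<database port>
ES_HOST=<elasticsearch host>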
When you run the application for the first time, it will take a while to download the image and install the dependencies
You can also head into the docker-pgsql-es folder and run docker-compose up -d to get Docker instances of PostgreSQL and Elasticsearch to use with the API
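For example, assuming the compose file exposes the services on their default ports (an assumption; check docker-pgsql-es/docker-compose.yml):

cd docker-pgsql-es
docker-compose up -d
# verify the services are up (ports assume the compose defaults)
curl http://localhost:9200          # Elasticsearch should return cluster info
pg_isready -h localhost -p 5432     # Postgres should report "accepting connections"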