# clear-sparql-cache-endpoint

Clear a cached SPARQL endpoint based on cubes.
This assumes that your cached endpoint is using Varnish and has the xkey module enabled. You can take a look at our custom varnish-post repository, which comes with all the required configuration for the cached endpoint.
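As a rough illustration of what clearing a tagged cache entry involves: with the xkey module, Varnish can invalidate every cached response tagged with a given key by receiving that key in a configured header (`CACHE_TAG_HEADER`, `xkey` by default). The sketch below builds such a request; the `PURGE` method, function and parameter names are assumptions for illustration, and the exact behavior depends on your Varnish VCL (see the varnish-post repository).

```javascript
// Hypothetical sketch: build a cache-invalidation request for entries
// tagged with a dataset URI. Not the script's actual API.
function buildPurgeRequest(cacheEndpoint, tagHeader, datasetUri, supportUrlEncoded) {
  // xkey accepts multiple space-separated keys in one header.
  const keys = [datasetUri];
  if (supportUrlEncoded) {
    // Also clear the URL-encoded variant of the dataset URI
    // (see SUPPORT_URL_ENCODED below).
    keys.push(encodeURIComponent(datasetUri));
  }
  return {
    method: 'PURGE', // assumed method; depends on the Varnish VCL
    url: cacheEndpoint,
    headers: { [tagHeader]: keys.join(' ') },
  };
}

const req = buildPurgeRequest(
  'https://cache.example.org/query', // illustrative endpoint
  'xkey',
  'https://example.org/cube/my-cube', // illustrative dataset URI
  true
);
console.log(req.headers.xkey);
```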
```sh
npm install          # Install dependencies
cp example.env .env  # Copy environment variables file
# Open your editor, and fill in the environment variables in the `.env` file
npm run start        # Start the script
```
| Name | Description | Default Value |
|---|---|---|
| CACHE_ENDPOINT | The URL of the cache endpoint | "" |
| CACHE_ENDPOINT_USERNAME | The username for the cache endpoint | "" |
| CACHE_ENDPOINT_PASSWORD | The password for the cache endpoint | "" |
| CACHE_DEFAULT_ENTRY_NAME | The default entry name for the cache | "default" |
| CACHE_TAG_HEADER | The header name for the cache tag | "xkey" |
| SUPPORT_URL_ENCODED | Whether to clear the cache for the URL-encoded version of the dataset URI | "true" |
| SPARQL_ENDPOINT_URL | The URL of the SPARQL endpoint | "" |
| SPARQL_USERNAME | The username for the SPARQL endpoint | "" |
| SPARQL_PASSWORD | The password for the SPARQL endpoint | "" |
| S3_ENABLED | Whether to use S3 for caching | "false" |
| S3_LAST_TIMESTAMP_KEY | The key for the last timestamp file in S3 | "last_timestamp.txt" |
| S3_SIMPLE_DATE_WORKAROUND_KEY | The key for the simple date workaround file in S3 | "simple_date_workaround.txt" |
| S3_BUCKET | The S3 bucket name | "default" |
| S3_ACCESS_KEY_ID | The S3 access key ID | "" |
| S3_SECRET_ACCESS_KEY | The S3 secret access key | "" |
| S3_REGION | The S3 region | "default" |
| S3_ENDPOINT | The S3 endpoint | "" |
| S3_SSL_ENABLED | Whether to use SSL for S3 | "false" |
| S3_FORCE_PATH_STYLE | Whether to force path style for S3 | "false" |
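The defaults in the table above can be sketched as a small config reader; the environment variable names match the table, but the helper itself is hypothetical and not part of the script's actual code.

```javascript
// Illustrative sketch: read a subset of the documented variables,
// falling back to the defaults from the table when unset or empty.
function readConfig(env) {
  const get = (name, fallback) => {
    const value = env[name];
    return value === undefined || value === '' ? fallback : value;
  };
  return {
    cacheEndpoint: get('CACHE_ENDPOINT', ''),
    cacheTagHeader: get('CACHE_TAG_HEADER', 'xkey'),
    supportUrlEncoded: get('SUPPORT_URL_ENCODED', 'true') === 'true',
    s3Enabled: get('S3_ENABLED', 'false') === 'true',
    s3Bucket: get('S3_BUCKET', 'default'),
  };
}

const config = readConfig({ CACHE_ENDPOINT: 'https://cache.example.org' });
console.log(config.cacheTagHeader); // "xkey"
```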
If `S3_ENABLED` is set to `true`, the first time you run the script you might see an error message saying that the last timestamp file does not exist. This is expected: the script creates the file automatically at the end of the first run and updates it on every subsequent run, so the error will not appear again.
You might also get a similar error about the simple date workaround file. This is also expected; the script creates and updates that file in the same way.
Using S3 lets us work around cases where the `dateModified` returned by the SPARQL query is a `date` rather than a `dateTime`. The workaround ensures the cache for such an entry is invalidated only the first time the entry is seen, and then once more on the day after the `dateModified` date. Without it, the cache would be invalidated on every run until the day after that value.
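The decision behind this workaround can be sketched as follows. Here `seenBefore` stands in for the state persisted in the S3 workaround file; the function and parameter names are illustrative, not the script's actual implementation.

```javascript
// Hedged sketch: decide whether to purge an entry whose `dateModified`
// is a plain date (e.g. "2024-01-15") with no time component.
function shouldInvalidateSimpleDate(dateModified, seenBefore, now) {
  if (!seenBefore) {
    return true; // first time this entry is seen: purge once
  }
  // After the first purge, wait until the day after `dateModified`:
  // only then can the entry genuinely have been modified again.
  const dayAfter = new Date(`${dateModified}T00:00:00Z`);
  dayAfter.setUTCDate(dayAfter.getUTCDate() + 1);
  return now >= dayAfter;
}

// Same day as dateModified, already seen: no purge.
console.log(shouldInvalidateSimpleDate('2024-01-15', true, new Date('2024-01-15T12:00:00Z')));
```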
This project is licensed under the MIT License - see the LICENSE file for details.