Beppe

The Home Server Journey - 5b: A Bridge Too Far?

Hi all. This is a late addendum to my last post

As I have found out, much like the Allies did with those river crossings at Operation Market Garden, stateful sets are not as trivial as they initially appear: most guides will just tell you what their purpose is and how to get them running, which leaves the false impression that synchronized data replication across pods happens automagically (sic)

Well, it doesn't

Crossing that River, No Matter the Cost

That special type of deployment will only give you guarantees regarding the order of pod creation and deletion, their naming scheme and which persistent volume each one will be bound to. Everything else is up to the application logic. You may even violate the convention of using only the first pod for writing and the other ones for reading
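
In concrete terms (a quick illustration using the namespace and names that will appear later in this post), those guarantees show up directly in the object names Kubernetes creates:

$ kubectl get pods,pvc -n choppa -l app=postgres
# Pods get predictable ordinal names (postgres-state-0, postgres-state-1, ...) and each one
# gets its own claim named <claim template>-<statefulset>-<ordinal>, e.g. postgres-db-postgres-state-0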

When it comes to more niche applications like Conduit, I will probably have to code my own replication solution at some point, but for more widely used software like PostgreSQL there are thankfully solutions already available

I came across articles by Bibin Wilson & Shishir Khandelwal and Albert Weng (we've seen him here before) detailing how to use a special variant of the database image to get replication working. Although a bit outdated, judging by the Docker registry used I'm pretty sure they're based on the PostgreSQL High Availability Helm chart

I don't plan on covering Helm here, as I think it adds complexity on top of already quite complex K8s manifests. Surely it might be useful for large-scale stuff, but let's keep things simple here. I have combined knowledge from the articles with the updated charts in order to create a trimmed-down version of the required manifests (it would be good to add liveness and readiness probes, though):

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  BITNAMI_DEBUG: "false"                                # Set to "true" for more debug information
  POSTGRESQL_VOLUME_DIR: /bitnami/postgresql
  PGDATA: /bitnami/postgresql/data
  POSTGRESQL_LOG_HOSTNAME: "true"                       # Set to "false" for less debug information
  POSTGRESQL_LOG_CONNECTIONS: "false"                   # Set to "true" for more debug information
  POSTGRESQL_CLIENT_MIN_MESSAGES: "error"
  POSTGRESQL_SHARED_PRELOAD_LIBRARIES: "pgaudit,repmgr" # Modules being used for replication
  REPMGR_LOG_LEVEL: "NOTICE"
  REPMGR_USERNAME: repmgr                               # Replication user
  REPMGR_DATABASE: repmgr                               # Replication information database
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-scripts-config
  labels:
    app: postgres
data:
  # Script for pod termination
  pre-stop.sh: |-
    #!/bin/bash
    set -o errexit
    set -o pipefail
    set -o nounset

    # Debug section
    exec 3>&1
    exec 4>&2

    # Process input parameters
    MIN_DELAY_AFTER_PG_STOP_SECONDS=$1

    # Load Libraries
    . /opt/bitnami/scripts/liblog.sh
    . /opt/bitnami/scripts/libpostgresql.sh
    . /opt/bitnami/scripts/librepmgr.sh

    # Load PostgreSQL & repmgr environment variables
    . /opt/bitnami/scripts/postgresql-env.sh

    # Auxiliary functions
    is_new_primary_ready() {
        return_value=1
        currenty_primary_node="$(repmgr_get_primary_node)"
        currenty_primary_host="$(echo $currenty_primary_node | awk '{print $1}')"

        info "$currenty_primary_host != $REPMGR_NODE_NETWORK_NAME"
        if [[ $(echo $currenty_primary_node | wc -w) -eq 2 ]] && [[ "$currenty_primary_host" != "$REPMGR_NODE_NETWORK_NAME" ]]; then
            info "New primary detected, leaving the cluster..."
            return_value=0
        else
            info "Waiting for a new primary to be available..."
        fi
        return $return_value
    }

    export MODULE="pre-stop-hook"

    if [[ "${BITNAMI_DEBUG}" == "true" ]]; then
        info "Bash debug is on"
    else
        info "Bash debug is off"
        exec 1>/dev/null
        exec 2>/dev/null
    fi

    postgresql_enable_nss_wrapper

    # Prepare env vars for managing roles
    readarray -t primary_node < <(repmgr_get_upstream_node)
    primary_host="${primary_node[0]}"

    # Stop postgresql for graceful exit.
    PG_STOP_TIME=$EPOCHSECONDS
    postgresql_stop

    if [[ -z "$primary_host" ]] || [[ "$primary_host" == "$REPMGR_NODE_NETWORK_NAME" ]]; then
        info "Primary node need to wait for a new primary node before leaving the cluster"
        retry_while is_new_primary_ready 10 5
    else
        info "Standby node doesn't need to wait for a new primary switchover. Leaving the cluster"
    fi

    # Make sure pre-stop hook waits at least 25 seconds after stop of PG to make sure PGPOOL detects node is down.
    # default terminationGracePeriodSeconds=30 seconds
    PG_STOP_DURATION=$(($EPOCHSECONDS - $PG_STOP_TIME))
    if (( $PG_STOP_DURATION < $MIN_DELAY_AFTER_PG_STOP_SECONDS )); then
        WAIT_TO_PG_POOL_TIME=$(($MIN_DELAY_AFTER_PG_STOP_SECONDS - $PG_STOP_DURATION))
        info "PG stopped including primary switchover in $PG_STOP_DURATION. Waiting additional $WAIT_TO_PG_POOL_TIME seconds for PG pool"
        sleep $WAIT_TO_PG_POOL_TIME
    fi
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
data:
  POSTGRES_PASSWORD: cG9zdGdyZXM= # Default user(postgres)'s password
  REPMGR_PASSWORD: cmVwbWdy       # Replication user's password
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-state
spec:
  serviceName: postgres-service
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:   # Container is not run as root
        fsGroup: 1001
        runAsUser: 1001
        runAsGroup: 1001
      containers:
        - name: postgres
          lifecycle:
            preStop:     # Routines to run before pod termination
              exec:
                command:
                  - /pre-stop.sh
                  - "25"
          image: docker.io/bitnami/postgresql-repmgr:16.2.0
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
              name: postgres-port
          envFrom:
            - configMapRef:
                name: postgres-config
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            - name: REPMGR_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: REPMGR_PASSWORD
            # Write the pod name (from metadata field) to an environment variable in order to automatically generate replication addresses
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # Repmgr configuration
            - name: REPMGR_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: REPMGR_PARTNER_NODES   # All pods being synchronized (has to reflect the number of replicas)
              value: postgres-state-0.postgres-service.$(REPMGR_NAMESPACE).svc.cluster.local,postgres-state-1.postgres-service.$(REPMGR_NAMESPACE).svc.cluster.local
            - name: REPMGR_PRIMARY_HOST    # Pod with write access. Everybody else replicates it
              value: postgres-state-0.postgres-service.$(REPMGR_NAMESPACE).svc.cluster.local
            - name: REPMGR_NODE_NAME       # Current pod name
              value: $(POD_NAME)
            - name: REPMGR_NODE_NETWORK_NAME
              value: $(POD_NAME).postgres-service.$(REPMGR_NAMESPACE).svc.cluster.local
          volumeMounts:
            - name: postgres-db
              mountPath: /bitnami/postgresql
            - name: postgres-scripts
              mountPath: /pre-stop.sh
              subPath: pre-stop.sh
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/postgresql/conf
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/postgresql/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/repmgr/conf
              subPath: repmgr-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/repmgr/tmp
              subPath: repmgr-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/repmgr/logs
              subPath: repmgr-logs-dir
      volumes:
        - name: postgres-scripts
          configMap:
            name: postgres-scripts-config
            defaultMode: 0755   # Access permissions (owner can execute processes)
        - name: empty-dir       # Use a fake directory for mounting unused but required paths
          emptyDir: {}
  volumeClaimTemplates:   # Description of volume claim created for each replica
    - metadata:
        name: postgres-db
      spec:
        storageClassName: nfs-small
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    app: postgres
spec:
  type: ClusterIP   # Default service type
  clusterIP: None   # Do not get a service-wide address
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: postgres-port

As the Bitnami container runs as a non-root user for security reasons, requires "postgres" as the database administrator name and uses a different path structure, you won't be able to mount the data from the original PostgreSQL deployment without messing around with some configuration. So, unless you absolutely need the data, just start from scratch by deleting the volumes, maybe taking a backup first
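
If you do want to carry the old data over, one route (just a sketch; the pod name and proxy address are placeholders, and it assumes the old single-instance database is still running) is dumping everything first and replaying it later:

# Dump the old single-instance database before touching anything
$ kubectl exec -n choppa <old-postgres-pod> -- pg_dumpall -U postgres > backup.sql
# Remove the old claims so the Bitnami image can initialize its own directory layout
$ kubectl delete pvc -n choppa -l app=postgres
# Once the new StatefulSet and proxy are up, restore through the proxy address
$ PGPASSWORD=postgres psql -h <proxy address> -U postgres -f backup.sql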

Notice how our ClusterIP service is set to not have a shared address, making it a headless service: it only serves the purpose of exposing the target port of each individual pod, still accessible via <pod name>.<service name>:<container port number>. We do that because our containers here are not meant to be accessed in a random or load-balanced manner
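
That means each replica can be reached at its own stable DNS name. For example (a hedged sketch; the client image tag is an assumption, and the password is the decoded value from postgres-secret):

# Query the primary (pod 0) directly, bypassing any kind of load balancing
$ kubectl run psql-client --rm -it --restart=Never -n choppa \
    --image=docker.io/bitnami/postgresql:16 --env=PGPASSWORD=postgres -- \
    psql -h postgres-state-0.postgres-service -U postgres -c "SELECT pg_is_in_recovery();"
# The standby answers at postgres-state-1.postgres-service and should return 't' (read-only)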

If you're writing your own application, it's easy to define different addresses for writing to and reading from a replicated database, respecting the role of each copy. But a lot of useful software already around assumes a single connection is all that's needed, and there's no simple way to get around that. That's why you need specific intermediaries or proxies like Pgpool-II for PostgreSQL, which can appear to applications as a single entity, redirecting each query to the appropriate backend database depending on its contents:

Postgres vs Postgres-HA architecture diagram (from the PostgreSQL-HA documentation)
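
From the application's perspective, the whole arrangement then collapses back into a single, ordinary connection, as if there were just one database (illustrative credentials; the service name is the one defined in the manifests below, so this only resolves from inside the cluster):

# Clients only ever see the proxy service
$ psql "postgres://myuser:mypassword@postgres-proxy-service:5432/mydatabase?sslmode=disable"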

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-proxy-config
  labels:
    app: postgres-proxy
data:
  BITNAMI_DEBUG: "true"
  PGPOOL_BACKEND_NODES: 0:postgres-state-0.postgres-service:5432,1:postgres-state-1.postgres-service:5432
  PGPOOL_SR_CHECK_USER: repmgr
  PGPOOL_SR_CHECK_DATABASE: repmgr
  PGPOOL_POSTGRES_USERNAME: postgres
  PGPOOL_ADMIN_USERNAME: pgpool
  PGPOOL_AUTHENTICATION_METHOD: scram-sha-256
  PGPOOL_ENABLE_LOAD_BALANCING: "yes"
  PGPOOL_DISABLE_LOAD_BALANCE_ON_WRITE: "transaction"
  PGPOOL_ENABLE_LOG_CONNECTIONS: "no"
  PGPOOL_ENABLE_LOG_HOSTNAME: "yes"
  PGPOOL_NUM_INIT_CHILDREN: "25"
  PGPOOL_MAX_POOL: "8"
  PGPOOL_RESERVED_CONNECTIONS: "3"
  PGPOOL_HEALTH_CHECK_PSQL_TIMEOUT: "6"
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-proxy-secret
data:
  PGPOOL_ADMIN_PASSWORD: cGdwb29s
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-users-secret
data:
  usernames: dXNlcjEsdXNlcjIsdXNlcjM=
  passwords: cHN3ZDEscHN3ZDIscHN3ZDM=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-proxy-deploy
  labels:
    app: postgres-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-proxy
  template:
    metadata:
      labels:
        app: postgres-proxy
    spec:
      securityContext:
        fsGroup: 1001
        runAsGroup: 1001
        runAsUser: 1001
      containers:
        - name: postgres-proxy
          image: docker.io/bitnami/pgpool:4
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-proxy-config
          env:
            - name: PGPOOL_POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            - name: PGPOOL_SR_CHECK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: REPMGR_PASSWORD
            - name: PGPOOL_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-proxy-secret
                  key: PGPOOL_ADMIN_PASSWORD
            - name: PGPOOL_POSTGRES_CUSTOM_USERS
              valueFrom:
                secretKeyRef:
                  name: postgres-users-secret
                  key: usernames
            - name: PGPOOL_POSTGRES_CUSTOM_PASSWORDS
              valueFrom:
                secretKeyRef:
                  name: postgres-users-secret
                  key: passwords
          ports:
            - name: pg-proxy-port
              containerPort: 5432
          volumeMounts:
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/pgpool/etc
              subPath: app-etc-dir
            - name: empty-dir
              mountPath: /opt/bitnami/pgpool/conf
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/pgpool/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/pgpool/logs
              subPath: app-logs-dir
      volumes:
        - name: empty-dir
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-proxy-service
  labels:
    app: postgres-proxy
spec:
  type: LoadBalancer   # Let it be accessible inside the local network
  selector:
    app: postgres-proxy
  ports:
    - protocol: TCP
      port: 5432
      targetPort: pg-proxy-port

I bet most of it is self-explanatory by now. Just pay extra attention to the NUM_INIT_CHILDREN, MAX_POOL and RESERVED_CONNECTIONS variables and the relationship between them, as their default values may not be appropriate at all for your application and may result in too many connection refusals (been there, done that). Moreover, users other than the administrator and the replicator are blocked from access unless you add them to the custom lists of usernames and passwords, in the format user1,user2,user3,... and pswd1,pswd2,pswd3,..., here provided as base64-encoded secrets
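
For reference, Pgpool serves at most NUM_INIT_CHILDREN clients at once and starts refusing new ones once only RESERVED_CONNECTIONS slots remain, so the values above leave roughly 25 - 3 = 22 usable concurrent connections, with up to MAX_POOL cached backend connections per child. The user and password lists themselves are just comma-separated strings run through base64 (sketch below reproducing the placeholder values used in the secret):

$ echo -n "user1,user2,user3" | base64   # dXNlcjEsdXNlcjIsdXNlcjM=
$ echo -n "pswd1,pswd2,pswd3" | base64   # cHN3ZDEscHN3ZDIscHN3ZDM=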

With all that configured, we can finally (this time I really mean it) deploy a useful, stateful and replicated application:

$ kubectl get all -n choppa -l app=postgres
NAME                   READY   STATUS    RESTARTS   AGE
pod/postgres-state-0   1/1     Running   0          40h
pod/postgres-state-1   1/1     Running   0          40h

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/postgres-service   ClusterIP   None         <none>        5432/TCP   40h

$ kubectl get all -n choppa -l app=postgres-proxy
NAME                                         READY   STATUS    RESTARTS   AGE
pod/postgres-proxy-deploy-74bbdd9b9d-j2tsn   1/1     Running   0          40h

NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP                 PORT(S)          AGE
service/postgres-proxy-service   LoadBalancer   10.43.217.63   192.168.3.10,192.168.3.12   5432:30217/TCP   40h

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/postgres-proxy-deploy   1/1     1            1           40h

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/postgres-proxy-deploy-74bbdd9b9d   1         1         1       40h

(Some nice usage of component labels for selection)
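
As a sanity check (hedged commands; the password is the decoded value from postgres-secret and the address is the LoadBalancer IP shown above), you can ask the primary about its standbys and Pgpool about its backends:

# Streaming replication as seen by the primary pod
$ kubectl exec -it postgres-state-0 -n choppa -- \
    env PGPASSWORD=postgres psql -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
# Pgpool's view of both backends, through the proxy service
$ PGPASSWORD=postgres psql -h 192.168.3.10 -U postgres -c "SHOW pool_nodes;"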

Into the Breach!

You should be able to view your database with the postgres user the same way we did last time. After informing Pgpool of the necessary custom users, not only can I get my Telegram bridge back up and running (using the proxy address in the connection string), but I can also install the WhatsApp and Discord ones. Although they're written in Go rather than Python, configuration is very similar, with the relevant parts below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: whatsapp-config
  labels:
    app: whatsapp
data:
  config.yaml: |
    # Homeserver details.
    homeserver:
        # The address that this appservice can use to connect to the homeserver.
        address: https://talk.choppa.xyz
        # The domain of the homeserver (also known as server_name, used for MXIDs, etc).
        domain: choppa.xyz
        # ...

    # Application service host/registration related details.
    # Changing these values requires regeneration of the registration.
    appservice:
        # The address that the homeserver can use to connect to this appservice.
        address: http://whatsapp-service:29318
        # The hostname and port where this appservice should listen.
        hostname: 0.0.0.0
        port: 29318
        # Database config.
        database:
            # The database type. "sqlite3-fk-wal" and "postgres" are supported.
            type: postgres
            # The database URI.
            #   SQLite: A raw file path is supported, but `file:<path>?_txlock=immediate` is recommended.
            #           https://github.com/mattn/go-sqlite3#connection-string
            #   Postgres: Connection string. For example, postgres://user:password@host/database?sslmode=disable
            #             To connect via Unix socket, use something like postgres:///dbname?host=/var/run/postgresql
            uri: postgres://whatsapp:mautrix@postgres-proxy-service/matrix_whatsapp?sslmode=disable
            # Maximum number of connections. Mostly relevant for Postgres.
            max_open_conns: 20
            max_idle_conns: 2
            # Maximum connection idle time and lifetime before they're closed. Disabled if null.
            # Parsed with https://pkg.go.dev/time#ParseDuration
            max_conn_idle_time: null
            max_conn_lifetime: null

        # The unique ID of this appservice.
        id: whatsapp
        # Appservice bot details.
        bot:
            # Username of the appservice bot.
            username: whatsappbot
            # Display name and avatar for bot. Set to "remove" to remove display name/avatar, leave empty
            # to leave display name/avatar as-is.
            displayname: WhatsApp bridge bot
            avatar: mxc://maunium.net/NeXNQarUbrlYBiPCpprYsRqr

        # Whether or not to receive ephemeral events via appservice transactions.
        # Requires MSC2409 support (i.e. Synapse 1.22+).
        ephemeral_events: true

        # Should incoming events be handled asynchronously?
        # This may be necessary for large public instances with lots of messages going through.
        # However, messages will not be guaranteed to be bridged in the same order they were sent in.
        async_transactions: false

        # Authentication tokens for AS <-> HS communication. Autogenerated; do not modify.
        as_token: <same as token as in registration.yaml>
        hs_token: <same hs token as in registration.yaml>

    # ...

    # Config for things that are directly sent to WhatsApp.
    whatsapp:
        # Device name that's shown in the "WhatsApp Web" section in the mobile app.
        os_name: Mautrix-WhatsApp bridge
        # Browser name that determines the logo shown in the mobile app.
        # Must be "unknown" for a generic icon or a valid browser name if you want a specific icon.
        # List of valid browser names: https://github.com/tulir/whatsmeow/blob/efc632c008604016ddde63bfcfca8de4e5304da9/binary/proto/def.proto#L43-L64
        browser_name: unknown

        # Proxy to use for all WhatsApp connections.
        proxy: null
        # Alternative to proxy: an HTTP endpoint that returns the proxy URL to use for WhatsApp connections.
        get_proxy_url: null
        # Whether the proxy options should only apply to the login websocket and not to authenticated connections.
        proxy_only_login: false

    # Bridge config
    bridge:
        # ...

        # Settings for handling history sync payloads.
        history_sync:
            # Enable backfilling history sync payloads from WhatsApp?
            backfill: true
        # ...

        # Shared secret for authentication. If set to "generate", a random secret will be generated,
        # or if set to "disable", the provisioning API will be disabled.
        shared_secret: generate
        # Enable debug API at /debug with provisioning authentication.
        debug_endpoints: false

        # Permissions for using the bridge.
        # Permitted values:
        #    relay - Talk through the relaybot (if enabled), no access otherwise
        #     user - Access to use the bridge to chat with a WhatsApp account.
        #    admin - User level and some additional administration tools
        # Permitted keys:
        #        * - All Matrix users
        #   domain - All users on that homeserver
        #     mxid - Specific user
        permissions:
            "*": relay
            "@ancapepe:choppa.xyz": admin
            "@ancompepe:choppa.xyz": user

        # Settings for relay mode
        relay:
            # Whether relay mode should be allowed. If allowed, `!wa set-relay` can be used to turn any
            # authenticated user into a relaybot for that chat.
            enabled: false
            # Should only admins be allowed to set themselves as relay users?
            admin_only: true
        # ...

    # Logging config. See https://github.com/tulir/zeroconfig for details.
    logging:
        min_level: debug
        writers:
        - type: stdout
          format: pretty-colored
  registration.yaml: |
    id: whatsapp
    url: http://whatsapp-service:29318
    as_token: <same as token as in config.yaml>
    hs_token: <same hs token as in config.yaml>
    sender_localpart: SH98XxA4xvgFtlbx1NxJm9VYW6q3BdYg
    rate_limited: false
    namespaces:
        users:
        - regex: ^@whatsappbot:choppa\.xyz$
          exclusive: true
        - regex: ^@whatsapp_.*:choppa\.xyz$
          exclusive: true
    de.sorunome.msc2409.push_ephemeral: true
    push_ephemeral: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whatsapp-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whatsapp
  template:
    metadata:
      labels:
        app: whatsapp
    spec:
      containers:
        - name: whatsapp
          image: dock.mau.dev/mautrix/whatsapp:latest
          imagePullPolicy: "IfNotPresent"
          command: ["/usr/bin/mautrix-whatsapp", "-c", "/data/config.yaml", "-r", "/data/registration.yaml", "--no-update"]
          ports:
            - containerPort: 29318
              name: whatsapp-port
          volumeMounts:
            - name: whatsapp-volume
              mountPath: /data/config.yaml
              subPath: config.yaml
            - name: whatsapp-volume
              mountPath: /data/registration.yaml
              subPath: registration.yaml
      volumes:
        - name: whatsapp-volume
          configMap:
            name: whatsapp-config
---
apiVersion: v1
kind: Service
metadata:
  name: whatsapp-service
spec:
  publishNotReadyAddresses: true
  selector:
    app: whatsapp
  ports:
    - protocol: TCP
      port: 29318
      targetPort: whatsapp-port
apiVersion: v1
kind: ConfigMap
metadata:
  name: discord-config
  labels:
    app: discord
data:
  config.yaml: |
    # Homeserver details.
    homeserver:
        # The address that this appservice can use to connect to the homeserver.
        address: https://talk.choppa.xyz
        # The domain of the homeserver (also known as server_name, used for MXIDs, etc).
        domain: choppa.xyz
        # What software is the homeserver running?
        # Standard Matrix homeservers like Synapse, Dendrite and Conduit should just use "standard" here.
        software: standard
        # The URL to push real-time bridge status to.
        # If set, the bridge will make POST requests to this URL whenever a user's discord connection state changes.
        # The bridge will use the appservice as_token to authorize requests.
        status_endpoint: null
        # Endpoint for reporting per-message status.
        message_send_checkpoint_endpoint: null
        # Does the homeserver support https://github.com/matrix-org/matrix-spec-proposals/pull/2246?
        async_media: false
        # Should the bridge use a websocket for connecting to the homeserver?
        # The server side is currently not documented anywhere and is only implemented by mautrix-wsproxy,
        # mautrix-asmux (deprecated), and hungryserv (proprietary).
        websocket: false
        # How often should the websocket be pinged? Pinging will be disabled if this is zero.
        ping_interval_seconds: 0

    # Application service host/registration related details.
    # Changing these values requires regeneration of the registration.
    appservice:
        # The address that the homeserver can use to connect to this appservice.
        address: http://discord-service:29334
        # The hostname and port where this appservice should listen.
        hostname: 0.0.0.0
        port: 29334
        # Database config.
        database:
            # The database type. "sqlite3-fk-wal" and "postgres" are supported.
            type: postgres
            # The database URI.
            #   SQLite: A raw file path is supported, but `file:<path>?_txlock=immediate` is recommended.
            #           https://github.com/mattn/go-sqlite3#connection-string
            #   Postgres: Connection string. For example, postgres://user:password@host/database?sslmode=disable
            #             To connect via Unix socket, use something like postgres:///dbname?host=/var/run/postgresql
            uri: postgres://discord:mautrix@postgres-proxy-service/matrix_discord?sslmode=disable
            # Maximum number of connections. Mostly relevant for Postgres.
            max_open_conns: 20
            max_idle_conns: 2
            # Maximum connection idle time and lifetime before they're closed. Disabled if null.
            # Parsed with https://pkg.go.dev/time#ParseDuration
            max_conn_idle_time: null
            max_conn_lifetime: null

        # The unique ID of this appservice.
        id: discord
        # Appservice bot details.
        bot:
            # Username of the appservice bot.
            username: discordbot
            # Display name and avatar for bot. Set to "remove" to remove display name/avatar, leave empty
            # to leave display name/avatar as-is.
            displayname: Discord bridge bot
            avatar: mxc://maunium.net/nIdEykemnwdisvHbpxflpDlC

        # Whether or not to receive ephemeral events via appservice transactions.
        # Requires MSC2409 support (i.e. Synapse 1.22+).
        ephemeral_events: true

        # Should incoming events be handled asynchronously?
        # This may be necessary for large public instances with lots of messages going through.
        # However, messages will not be guaranteed to be bridged in the same order they were sent in.
        async_transactions: false

        # Authentication tokens for AS <-> HS communication. Autogenerated; do not modify.
        as_token: <same as token as in registration.yaml>
        hs_token: <same hs token as in registration.yaml>

    # Bridge config
    bridge:
        # ...

        # The prefix for commands. Only required in non-management rooms.
        command_prefix: '!discord'

        # Permissions for using the bridge.
        # Permitted values:
        #    relay - Talk through the relaybot (if enabled), no access otherwise
        #     user - Access to use the bridge to chat with a Discord account.
        #    admin - User level and some additional administration tools
        # Permitted keys:
        #        * - All Matrix users
        #   domain - All users on that homeserver
        #     mxid - Specific user
        permissions:
            "*": relay
            "@ancapepe:choppa.xyz": admin
            "@ancompepe:choppa.xyz": user

    # Logging config. See https://github.com/tulir/zeroconfig for details.
    logging:
        min_level: debug
        writers:
        - type: stdout
          format: pretty-colored
  registration.yaml: |
    id: discord
    url: http://discord-service:29334
    as_token: <same as token as in config.yaml>
    hs_token: <same hs token as in config.yaml>
    sender_localpart: KYmI12PCMJuHvD9VZw1cUzMlV7nUezH2
    rate_limited: false
    namespaces:
        users:
        - regex: ^@discordbot:choppa\.xyz$
          exclusive: true
        - regex: ^@discord_.*:choppa\.xyz$
          exclusive: true
    de.sorunome.msc2409.push_ephemeral: true
    push_ephemeral: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: discord-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: discord
  template:
    metadata:
      labels:
        app: discord
    spec:
      containers:
        - name: discord
          image: dock.mau.dev/mautrix/discord:latest
          imagePullPolicy: "IfNotPresent"
          command: ["/usr/bin/mautrix-discord", "-c", "/data/config.yaml", "-r", "/data/registration.yaml", "--no-update"]
          ports:
            - containerPort: 29334
              name: discord-port
          volumeMounts:
            - name: discord-volume
              mountPath: /data/config.yaml
              subPath: config.yaml
            - name: discord-volume
              mountPath: /data/registration.yaml
              subPath: registration.yaml
      volumes:
        - name: discord-volume
          configMap:
            name: discord-config
---
apiVersion: v1
kind: Service
metadata:
  name: discord-service
spec:
  publishNotReadyAddresses: true
  selector:
    app: discord
  ports:
    - protocol: TCP
      port: 29334
      targetPort: discord-port
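
One thing the bridge manifests take for granted: the databases and roles in those connection URIs must already exist, and the same usernames have to be present in the custom user list handed to Pgpool. A hedged sketch of creating them through the proxy, with credentials mirroring the URIs above:

$ PGPASSWORD=postgres psql -h 192.168.3.10 -U postgres <<'SQL'
CREATE USER whatsapp WITH PASSWORD 'mautrix';
CREATE DATABASE matrix_whatsapp OWNER whatsapp;
CREATE USER discord WITH PASSWORD 'mautrix';
CREATE DATABASE matrix_discord OWNER discord;
SQL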

Logging into your account will still vary depending on the service being bridged. As always, consult the official documentation

We may now forget, for a while, about keeping multiple messaging windows open just to communicate with our peers. That river has been crossed!

Nheko communities
