
samply/beam

🌈 Federated, end-to-end-encrypted, efficient communication among strict network environments.

Samply.Beam is a distributed task broker designed for efficient communication across strict network environments. It provides most commonly used communication patterns across strict network boundaries, end-to-end encryption and signatures, as well as certificate management and validation on top of an easy to use REST API. In addition to task/response semantics, Samply.Beam supports high-performance applications with encrypted low-level direct socket connections.

Latest version: Samply.Beam 0.8.0 – 2024-07-26

This new major version includes some bugfixes, dependency upgrades, and improvements to beam-lib. Please check the Changelog for details.

Find info on all previous versions in the Changelog.


Why use Samply.Beam?

Samply.Beam was developed to solve a principal difficulty of interconnecting federated applications across restrictive network boundaries. Any federated data computation requires some form of communication among the nodes, often in a reliable and high-performance manner. However, in high-security environments such as internal hospital networks, this communication is severely restricted, e.g., by strict firewall rules forbidding inbound connections and/or by exotic combinations of HTTP proxy servers. Many currently employed solutions place high technical and organizational burdens on each participating site (e.g., message queues requiring servers in a DMZ) or are even considered harmful to the network's security (e.g., VPN overlay networks); they suffer from performance issues and introduce additional complexity to the system.

We developed Samply.Beam as a reusable, easy to maintain, secure, high-performance communication layer allowing us to handle most common communication patterns in distributed computation in an efficient and reusable way, while removing complexity from the applications. Samply.Beam handles all "plumbing", such as the negotiation of communication parameters, target discovery, and helps with routinely performed tasks such as authentication and authorization, end-to-end encryption and signatures, and certificate management and validation. This way your application can focus on its main purpose, without getting bogged down by integration tasks. Samply.Beam was created as the latest iteration of the Bridgehead's communication layer, but the software is fully content-agnostic: Only your applications have to understand the communication payload. This allows the integration of arbitrary applications in a Samply.Beam federation.

System Architecture

Architecture Schema

Samply.Beam consists of two centrally run components and one proxy at each distributed node. The Samply.Broker is the central component responsible for facilitating connections, storing and forwarding tasks and messages, and communication with the central Certificate Authority, a Hashicorp Vault instance managing all certificates required for signing and encrypting the payload. The local Samply.Proxy handles all communication with the broker, as well as authentication, encryption and signatures.

Each component in the system is uniquely identified by its hierarchical BeamId:

```
app3.proxy2.broker1.samply.de
<--------------------------->   AppId
     <---------------------->   ProxyId
            <--------------->   BrokerId
```

Although all IDs may look like fully-qualified domain names:

  • Only the BrokerId has to be a DNS-resolvable FQDN reachable via the network (Proxies will communicate with https://broker1.samply.de/...)
  • The ProxyId (proxy2...) is not represented in DNS but via the Proxy's certificate, which states CN=proxy2.broker1.samply.de
  • Finally, the AppId (app3...) results from using the correct API key in communication with the Proxy (header Authorization: ApiKey app3.proxy2.broker1.samply.de <app3's API key>)

In practice,

  • there is one Broker per research network (broker1.samply.de)
  • each site has one Bridgehead with one Proxy instance (proxy2 for site #2)
  • many apps use proxy2 to communicate within the network (app1, app2, app3, ...)
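The hierarchy above can be made concrete in a few lines of code. The following sketch (the helper name is our own; this is not part of Samply.Beam) splits a BeamId into its component IDs:

```python
def split_beam_id(beam_id: str):
    """Split a hierarchical BeamId into (app name, ProxyId, BrokerId).

    Hypothetical helper for illustration only: everything after the first
    dot is the ProxyId, everything after the second dot the BrokerId.
    """
    app, proxy, broker = beam_id.split(".", 2)
    return app, f"{proxy}.{broker}", broker

print(split_beam_id("app3.proxy2.broker1.samply.de"))
# → ('app3', 'proxy2.broker1.samply.de', 'broker1.samply.de')
```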

This design ensures that each component, mainly applications but Proxies and Brokers as well, can be addressed in tasks. Should the need arise in the future, this network could be federated by federating the brokers (not unlike e-mail/SMTP, XMPP, etc.).

The Proxies have to fetch certificates from the central Certificate Authority; however, this communication is relayed by the Broker. This ensures that no external access to the CA is required.

Getting started

Using Docker, you can run a small demo beam network by checking out the git repository (use the main or develop branch) and running the following command:

./dev/beamdev demo

This will launch your own beam demo network, which consists of one broker (listening on localhost:8080) and two connected proxies (listening on localhost:8081 and localhost:8082).

The following paragraphs simulate the creation and the completion of a task using cURL calls. Two parties (and their Samply.Proxies) are connected via a central broker. Each party has one registered application. In the next sections we simulate the communication between these applications over the beam network.

Note: cURL versions before 7.82 do not support the --json option. In this case, please use --data together with -H "Content-Type: application/json" instead.

The used BeamIds are the following:

| System             | BeamID             |
|--------------------|--------------------|
| Broker             | broker             |
| Proxy 1            | proxy1.broker      |
| App behind Proxy 1 | app1.proxy1.broker |
| Proxy 2            | proxy2.broker      |
| App behind Proxy 2 | app2.proxy2.broker |

To simplify this example, we use the same API key App1Secret for both apps. Also, the Broker has a short name (broker), whereas in a real setup it would be required to have a fully-qualified domain name such as broker1.samply.de (see System Architecture).
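Every request to a Proxy carries the Authorization header shown in the examples below. A tiny helper to build it might look like this (the function is our own invention; Beam only prescribes the header format):

```python
def api_key_header(app_id: str, api_key: str) -> dict:
    # Illustrative helper only; Beam simply expects the header
    # "Authorization: ApiKey <AppId> <secret>" on every Proxy request.
    return {"Authorization": f"ApiKey {app_id} {api_key}"}

print(api_key_header("app1.proxy1.broker", "App1Secret"))
# → {'Authorization': 'ApiKey app1.proxy1.broker App1Secret'}
```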

Creating a task

app1 at party 1 has some important work to distribute. It knows that app2 at party 2 is capable of solving it, so it asks proxy1.broker to create that new task:

```shell
curl -v --json '{"body":"What is the answer to the ultimate question of life, the universe, and everything?","failure_strategy":{"retry":{"backoff_millisecs":1000,"max_tries":5}},"from":"app1.proxy1.broker","id":"70c0aa90-bfcf-4312-a6af-42cbd57dc0b8","metadata":"The broker can read and use this field e.g., to apply filters on behalf of an app","to":["app2.proxy2.broker"],"ttl":"60s"}' -H "Authorization: ApiKey app1.proxy1.broker App1Secret" http://localhost:8081/v1/tasks
```

Proxy1 replies:

```
HTTP/1.1 201 Created
Location: /tasks/70c0aa90-bfcf-4312-a6af-42cbd57dc0b8
Content-Length: 0
Date: Mon, 27 Jun 2022 13:58:35 GMT
```

where the Location header field contains the id of the newly created task. With that, the task is registered and will be distributed to the appropriate locations.
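The same request can of course be issued from application code instead of cURL. The following sketch builds a Task payload like the one above (the helper name and its defaults are our own; the field names follow the Task object documented below); sending it would be a plain requests.post(..., json=task) against the Proxy's /v1/tasks endpoint:

```python
import uuid

def make_task(from_id, to_ids, body, ttl="60s",
              backoff_millisecs=1000, max_tries=5, metadata=None):
    """Build a Beam Task payload (see 'Data objects' below). Sketch only."""
    return {
        "id": str(uuid.uuid4()),
        "from": from_id,
        "to": list(to_ids),
        "body": body,
        "failure_strategy": {
            "retry": {"backoff_millisecs": backoff_millisecs,
                      "max_tries": max_tries}
        },
        "ttl": ttl,
        "metadata": metadata,
    }

task = make_task("app1.proxy1.broker", ["app2.proxy2.broker"],
                 "What is the answer to the ultimate question?")
print(task["to"])  # → ['app2.proxy2.broker']
```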

Listening for relevant tasks

app2 at party 2 is now able to fetch all tasks addressed to it, in particular the task created before:

```shell
curl -X GET -v -H "Authorization: ApiKey app2.proxy2.broker App1Secret" "http://localhost:8082/v1/tasks?filter=todo"
```

The filter=todo parameter instructs the Broker to only send unfinished tasks addressed to the querying party. The query returns the task, and as app2 at Proxy 2, we inform the broker that we are working on this important task by creating a preliminary "result" with "status": "claimed":

```shell
curl -X PUT -v --json '{"from":"app2.proxy2.broker","id":"8db76400-e2d9-4d9d-881f-f073336338c1","metadata":["Arbitrary","types","are","possible"],"status":"claimed","task":"70c0aa90-bfcf-4312-a6af-42cbd57dc0b8","to":["app1.proxy1.broker"]}' -H "Authorization: ApiKey app2.proxy2.broker App1Secret" http://localhost:8082/v1/tasks/70c0aa90-bfcf-4312-a6af-42cbd57dc0b8/results/app2.proxy2.broker
```

Returning a Result

Party 2 processes the received task. After succeeding, app2 returns the result to party 1:

```shell
curl -X PUT -v --json '{"from":"app2.proxy2.broker","metadata":["Arbitrary","types","are","possible"],"status":"succeeded","body":"The answer is 42","task":"70c0aa90-bfcf-4312-a6af-42cbd57dc0b8","to":["app1.proxy1.broker"]}' -H "Authorization: ApiKey app2.proxy2.broker App1Secret" http://localhost:8082/v1/tasks/70c0aa90-bfcf-4312-a6af-42cbd57dc0b8/results/app2.proxy2.broker
```

Waiting for tasks to complete

Meanwhile, app1 waits for the completion of its task. Not wanting to check for results every couple of seconds, it asks Proxy 1 to be informed as soon as the expected number of results (here: 1) is present:

```shell
curl -X GET -v -H "Authorization: ApiKey app1.proxy1.broker App1Secret" "http://localhost:8081/v1/tasks/70c0aa90-bfcf-4312-a6af-42cbd57dc0b8/results?wait_count=1"
```

This long polling opens the connection and sleeps until a reply is received. For more information, see the API documentation.

Using direct socket connections

Only available on builds of beam with the sockets feature

Establishing direct socket connections via Beam requires a negotiation phase prior to using the sockets. One application sends a socket request to the other application via their respective Beam.Proxy. The receiving application, upon receipt of the request, upgrades the connection to an encrypted TCP socket connection.

While Beam sockets can be initiated using command line tools such as curl, netcat, or socat, they are intended to be used from application code. Thus, we show the usage in the following short Python application exemplifying both sides of the communication, the initiating and the receiving one. Both run concurrently.

```python
import requests
import threading
import socket

data = b"Hello beam sockets!"

# App running in a beam network with proxy1 available at localhost:8081
def app1():
    # Post socket request to client
    res = requests.post("http://localhost:8081/v1/sockets/app2.proxy2.broker", headers={
        "Upgrade": "tcp",
        "Authorization": "ApiKey app1.proxy1.broker App1Secret"
    }, stream=True)
    # Get the underlying socket connection
    stream = socket.fromfd(res.raw.fileno(), socket.AF_INET, socket.SOCK_STREAM)
    # Send some data
    stream.send(data)

# App running in a beam network with proxy2 available at localhost:8082
def app2():
    # Poll for incoming socket requests
    socket_task_id = requests.get("http://localhost:8082/v1/sockets", headers={
        "Authorization": "ApiKey app2.proxy2.broker App1Secret"
    }).json()[0]["id"]
    # Connect to the given id of the socket request
    res = requests.get(f"http://localhost:8082/v1/sockets/{socket_task_id}", headers={
        "Authorization": "ApiKey app2.proxy2.broker App1Secret",
        "Upgrade": "tcp",
    }, stream=True)
    # Get the underlying socket connection
    stream = socket.fromfd(res.raw.fileno(), socket.AF_INET, socket.SOCK_STREAM)
    # Receive the data sent by the other client
    assert stream.recv(len(data)) == data

threading.Thread(target=app1).start()
threading.Thread(target=app2).start()
```

Data objects (JSON)

Task

Tasks are represented in the following structure:

```json
{
  "id": "70c0aa90-bfcf-4312-a6af-42cbd57dc0b8",
  "from": "app7.proxy-hd.broker-project1.samply.de",
  "to": [
    "app1.proxy-hd.broker-project1.samply.de",
    "app5.proxy-ma.broker-project1.samply.de"
  ],
  "body": "Much work to do",
  "failure_strategy": {
    "retry": {
      "backoff_millisecs": 1000,
      "max_tries": 5
    }
  },
  "ttl": "30s",
  "metadata": "The broker can read and use this field e.g., to apply filters on behalf of an app"
}
```

  • id: UUID to identify the task. Note that when the task is initially submitted, the server is not required to use the submitted ID but may auto-generate its own one. Callers must assume the submission's id property is ignored and check the reply's Location header for the actual URL of the task.
  • from: BeamID of the submitting application. Is automatically set by the Proxy according to the authentication info.
  • to: BeamIDs of workers allowed to retrieve the task and submit results.
  • body: Description of work to be done. Not interpreted by the Broker.
  • failure_strategy: Advises each client how to handle failures. Possible values: discard, retry.
  • failure_strategy.retry: How often to retry a failed task (max_tries) and how long to wait between tries (backoff_millisecs).
  • ttl: Time-to-live. If not stated differently (by adding 'm', 'h', 'ms', etc.), this value is interpreted as seconds. Once it reaches zero, the broker will expunge the task along with its results.
  • metadata: Associated data readable by the broker. Can be of arbitrary type (see Result for more examples) and can be handled by the broker (thus intentionally not encrypted).
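As a small illustration of the ttl semantics described above, a client-side parser (a hypothetical helper, not part of Beam) could normalize ttl strings to milliseconds:

```python
import re

_TTL_UNIT_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000}

def ttl_to_millisecs(ttl: str) -> int:
    """Parse a ttl like '30s', '5m' or '100ms'; a bare number means seconds."""
    m = re.fullmatch(r"(\d+)(ms|s|m|h)?", ttl.strip())
    if not m:
        raise ValueError(f"invalid ttl: {ttl!r}")
    return int(m.group(1)) * _TTL_UNIT_MS[m.group(2) or "s"]

print(ttl_to_millisecs("30s"))  # → 30000
```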

Result

Each task can hold 0...n results by eachworker defined in the task'sto field.

A succeeded result for the above task:

```json
{
  "from": "app1.proxy-hd.broker-project1.samply.de",
  "to": [
    "app7.proxy-hd.broker-project1.samply.de"
  ],
  "task": "70c0aa90-bfcf-4312-a6af-42cbd57dc0b8",
  "status": "succeeded",
  "body": "Successfully quenched 1.43e14 flux pulse devices",
  "metadata": ["Arbitrary", "types", "are", "possible"]
}
```

A result for a permanently failed task:

```json
{
  "from": "app5.proxy-ma.broker-project1.samply.de",
  "to": [
    "app7.proxy-hd.broker-project1.samply.de"
  ],
  "task": "70c0aa90-bfcf-4312-a6af-42cbd57dc0b8",
  "status": "permfailed",
  "body": "Unable to decrypt quantum state",
  "metadata": {
    "complex": "A map (key 'complex') is possible, too"
  }
}
```

  • from: BeamID identifying the client submitting this result. This needs to match an entry in the task's to field.
  • to: BeamIDs of the intended recipients of the result. Used for encrypted payloads.
  • task: UUID identifying the task this result belongs to.
  • status: Defines the status of this work result. Allowed values: claimed, tempfailed, permfailed, succeeded. It is up to the application how these statuses are used. For example, some applications might require workers to acknowledge the receipt of tasks by setting status=claimed, whereas others have only short-running tasks and skip this step.
  • body: Supported and required for all statuses except claimed. Either carries the actual result payload of the task (if the status is succeeded) or an error message.
  • metadata: Associated data readable by the broker. Can be of arbitrary type (see Task) and is not encrypted.

Socket Task

Only available on builds of beam with the sockets feature

While "regular" Beam Tasks transport application data, Socket Tasks initiate direct socket connections between two Beam.Proxies.

```json
{
  "from": "app1.proxy1.broker",
  "to": ["app2.proxy2.broker"],
  "id": "<socket_uuid>",
  "ttl": "60s",
  "metadata": "some custom json value"
}
```

  • from: BeamID of the client requesting the socket connection
  • to: BeamIDs of the intended recipients. Due to the nature of socket connections, the array has to be of exact length 1.
  • id: A UUID v4 which identifies the socket connection and is used by the recipient to connect to this socket (see here).
  • ttl: The time-to-live of this socket task. After this time has elapsed, the recipient can no longer connect to the socket. Already established connections are not affected.
  • metadata: Associated unencrypted data. Can be of arbitrary type, same as in Task.

API

Create task

Create a new task to be worked on by defined workers. Currently, the body is restricted to 10MB in size.

Method: POST
URL: /v1/tasks
Body: see Task
Parameters: none

Returns:

```
HTTP/1.1 201 Created
Location: /tasks/b999cf15-3c31-408f-a3e6-a47502308799
Content-Length: 0
Date: Mon, 27 Jun 2022 13:58:35 GMT
```

In subsequent requests, use the URL defined in the Location header to refer to the task (NOT the one you supplied in your POST body).

If the task contains recipients (to field, see Beam Task) with invalid certificates (i.e., no certificate exists or it has expired), Beam does not create the task but returns HTTP status code 424 Failed Dependency with a JSON array of the "offending" BeamIDs in the body, e.g.:

```
HTTP/1.1 424 Failed Dependency
Content-Length: 34
Date: Thu, 28 Sep 2023 07:16:24 GMT

["proxy4.broker", "proxy6.broker"]
```

In this case, remove or correct these BeamIDs in the to field of your task and re-send.
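Programmatically, recovering from a 424 amounts to filtering the reported BeamIDs out of the task's to field before re-sending. The sketch below is our own illustration; in particular, the suffix-matching rule (an entry is dropped if it belongs to an offending Proxy) is an assumption, not documented Beam behavior:

```python
def drop_invalid_recipients(task: dict, offending: list) -> dict:
    """Return a copy of `task` whose "to" field no longer contains the
    BeamIDs reported in a 424 Failed Dependency body. Sketch only: we
    assume an entry is invalid if it equals an offending ID or belongs
    to an offending Proxy (suffix match)."""
    def bad(recipient):
        return any(recipient == o or recipient.endswith("." + o)
                   for o in offending)
    fixed = dict(task)
    fixed["to"] = [t for t in task["to"] if not bad(t)]
    return fixed

task = {"to": ["app1.proxy4.broker", "app2.proxy6.broker", "app3.proxy5.broker"]}
print(drop_invalid_recipients(task, ["proxy4.broker", "proxy6.broker"])["to"])
# → ['app3.proxy5.broker']
```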

Retrieve tasks

Workers regularly call this endpoint to retrieve submitted tasks.

Method: GET
URL: /v1/tasks
Parameters:

  • from (optional): Fetch only tasks created by this ID.
  • to (optional): Fetch only tasks directed to this ID.
  • filter (optional): Fetch only tasks fulfilling the specified filter criterion. Generic queries are not yet implemented, but the following "convenience filters" reflecting common use cases exist:
    • filter=todo: Matches unfinished tasks to be worked on by the asking client. Is a combination of:
      • to contains me and
      • results do not contain a result from me (except results with status values of claimed or tempfailed, to allow resuming those tasks).
  • Long polling is supported.
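The filter=todo rule above can be restated as a small predicate. This is our own sketch of the documented semantics, not broker code:

```python
def is_todo(task: dict, results: list, me: str) -> bool:
    """filter=todo: the task addresses `me` and `me` has no finished
    result for it yet (claimed/tempfailed results may be resumed)."""
    if me not in task["to"]:
        return False
    my_results = [r for r in results if r["from"] == me]
    return all(r["status"] in ("claimed", "tempfailed") for r in my_results)

task = {"to": ["app2.proxy2.broker"]}
print(is_todo(task, [], "app2.proxy2.broker"))  # → True
```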

Returns an array of tasks, cf. here:

```
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 220
Date: Mon, 27 Jun 2022 14:05:59 GMT

[
  {
    "id": ...
  }
]
```

Create a result

Create or update a result of a task. Currently, the body is restricted to 10MB in size.

Method: PUT
URL: /v1/tasks/<task_id>/results/<app_id>
Body: see Result
Parameters: none

Returns:

```
HTTP/1.1 204 No Content
Content-Length: 0
Date: Mon, 27 Jun 2022 13:58:35 GMT
```

Retrieve results

The submitter of the task (see Create Task) calls this endpoint to retrieve the results.

Method: GET
URL: /v1/tasks/<task_id>/results
Parameters:

  • Long polling is supported.

Returns an array of results, cf. here:

```
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 179
Date: Mon, 27 Jun 2022 14:26:45 GMT

[
  {
    "id": ...
  }
]
```

Long-polling API access

As part of making this API performant, all reading endpoints support long-polling as an efficient alternative to regular (repeated) polling. Using this function requires the following parameters:

  • wait_count: The API call will block until at least this many results are available. If more matching tasks/results are available, all of them will be returned.
  • wait_time: ... or until this time has passed (if not stated differently, e.g., by adding 'm', 'h', 'ms', ..., this is interpreted as seconds), whichever comes first.

For example, retrieving a task's results:

  • GET /v1/tasks/<task_id>/results will return immediately with however many results are available,
  • GET /v1/tasks/<task_id>/results?wait_count=5 will block until at least 5 results are available,
  • GET /v1/tasks/<task_id>/results?wait_count=5&wait_time=30s will block until 5 results are available or 30 seconds have passed, whichever comes first. In the latter case, HTTP code 206 (Partial Content) is returned to indicate that the result is incomplete.
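A client can wrap these calls in a simple loop that re-polls while the server keeps answering 206 Partial Content. The sketch below abstracts the HTTP call behind a fetch callable so the control flow is visible; the function names are our own:

```python
def poll_until(fetch, wait_count: int, max_polls: int = 5):
    """Call `fetch(wait_count)` -> (http_status, results) until a 200 with
    enough results arrives or `max_polls` attempts are used up. Sketch only;
    in real code `fetch` would issue
    GET /v1/tasks/<task_id>/results?wait_count=...&wait_time=...
    """
    results = []
    for _ in range(max_polls):
        status, results = fetch(wait_count)
        if status == 200 and len(results) >= wait_count:
            break
    return results

# Fake server: first answer is partial (206), second is complete (200).
answers = iter([(206, ["result1"]), (200, ["result1", "result2"])])
print(poll_until(lambda n: next(answers), 2))  # → ['result1', 'result2']
```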

Server-sent Events (SSE) API (experimental)

To better support asynchronous use cases, such as web-based user interfaces streaming results, this development version supports a first implementation of Server-Sent Events (SSE) for Result retrieval. This allows Beam.Proxies to "subscribe" to tasks and get notified of every new result without explicit polling. Similar to WebSockets, SSE is supported natively by JavaScript in web browsers. However, in contrast to WebSockets, an SSE stream is a standard long-lived HTTP request and is thus likely to pass even strict firewalls.

Please note: This feature is experimental and subject to changes.

Method: GET
URL: /v1/tasks/<task_id>/results?wait_count=3
Header: Accept: text/event-stream
Parameters:

  • The same parameters as for long-polling, i.e. to, from, filter=todo, wait_count, and wait_time are supported.

Returns a stream of results, cf. here:

```
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
Transfer-Encoding: chunked
Date: Thu, 09 Mar 2023 16:28:47 GMT

event: new_result
data: {"body":"Unable to decrypt quantum state","from":"app2.proxy1.broker","metadata":{"complex":"A map (key complex) is possible, too"},"status":"permfailed","task":"70c0aa90-bfcf-4312-a6af-42cbd57dc0b8","to":["app1.proxy1.broker"]}

event: new_result
data: {"body":"Successfully quenched 1.43e14 flux pulse devices","from":"app1.proxy1.broker","metadata":["Arbitrary","types","are","possible"],"status":"succeeded","task":"70c0aa90-bfcf-4312-a6af-42cbd57dc0b8","to":["app1.proxy1.broker"]}

[...]
```

You can consume this output natively within many settings, including web browsers. For more information, see Mozilla's developer documentation.
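In settings without a native EventSource, the stream format above is also easy to parse by hand. A minimal, illustrative parser for the event/data lines (our own sketch, ignoring SSE features Beam does not use here):

```python
import json

def parse_sse(stream_text: str):
    """Parse a text/event-stream payload into (event, data) tuples.
    Minimal sketch: ignores ids, retry fields and multi-line data."""
    events, current = [], None
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            current = line[len("event:"):].strip()
        elif line.startswith("data:"):
            events.append((current, json.loads(line[len("data:"):].strip())))
    return events

sample = (
    'event: new_result\n'
    'data: {"status": "succeeded", "body": "The answer is 42"}\n'
)
print(parse_sse(sample))
```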

Health Check

To monitor the operational status of Samply.Beam, each component implements a specific health check endpoint.

Method: GET
URL: /v1/health
Parameters: none

In the current version, the Beam.Proxy only returns an appropriate status code once/if initialization has succeeded. However, in the future more detailed health information might be returned in the reply body.

```
HTTP/1.1 200 OK
Content-Length: 0
Date: Mon, 27 Jun 2022 14:26:45 GMT
```

The Beam.Broker implements a more informative health endpoint and returns a health summary and additional system details:

```
HTTP/1.1 200

{
  "summary": "healthy",
  "vault": {
    "status": "ok"
  }
}
```

or in case of an issue, e.g.:

```
HTTP/1.1 503

{
  "summary": "unhealthy",
  "vault": {
    "status": "unavailable"
  }
}
```

Additionally, the broker health endpoint publishes the connection status of the proxies:

Method: GET
URL: /v1/health/proxies/<proxy-id>
Authorization:

  • Basic Auth with an empty user and the configured MONITORING_API_KEY as password, so the header looks like Authorization: Basic <base64 of ':<MONITORING_API_KEY>'>.

In case of a successful connection between proxy and broker, the call returns HTTP status code 200 OK, otherwise 404 Not Found.
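The expected Authorization header can be produced like this (the helper is hypothetical; only the header format described above is prescribed by Beam):

```python
import base64

def monitoring_auth(monitoring_api_key: str) -> dict:
    # Basic Auth with an empty user name: base64(":<MONITORING_API_KEY>")
    token = base64.b64encode(f":{monitoring_api_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(monitoring_auth("MySecret"))
# → {'Authorization': 'Basic Ok15U2VjcmV0'}
```

With curl, the equivalent is simply curl -u ':MySecret' ..., which builds the same header.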

Querying the endpoint without specifying a ProxyId returns a JSON array of all proxies that have ever connected to this broker:

Method: GET
URL: /v1/health/proxies
Authorization:

  • Basic Auth with an empty user and the configured MONITORING_API_KEY as password, so the header looks like Authorization: Basic <base64 of ':<MONITORING_API_KEY>'>.

yields, for example,

```
HTTP/1.1 200

[
  "proxy1.broker.example",
  "proxy2.broker.example",
  "proxy3.broker.example",
  "proxy4.broker.example"
]
```

Socket connections

Note: Only available on builds with the sockets feature enabled. Both proxy and broker need to be built with this flag. There are also prebuilt docker images available with this feature.

All API requests require the usual authentication header (see the Getting started section).

Initialize a socket connection

Initialize a socket connection with a Beam application, e.g. with AppId app2.proxy2.broker:

Method: POST
URL: /v1/sockets/<app_id>
Header: The Upgrade header is required, e.g. 'Upgrade: tcp'. Optionally takes a metadata header, which is expected to be a serialized JSON value. This corresponds to the metadata field on Socket Task.

This request will automatically lead to a connection to the other app, after it answers this request.

Receive and answer a socket request

To receive socket connections, the Beam.Proxy needs to be polled for incoming connections. This endpoint also supports the long polling query string semantics.

Method: GET
URL: /v1/sockets
Parameters:

  • The same parameters as for long-polling, i.e. to, from, filter=todo, wait_count, and wait_time are supported.

Returns an array of JSON objects:

```json
[
  {
    "from": "app1.proxy1.broker",
    "to": ["app2.proxy2.broker"],
    "id": "<socket_uuid>",
    "ttl": "60s",
    "metadata": "Some json value"
  }
]
```

Connecting to a socket request

After the connection negotiation above, the App can proceed to connect to the socket:

Method: GET
URL: /v1/sockets/<socket_uuid>

Development Environment

A dev environment is provided, consisting of one broker and two proxies as well as an optional MITM proxy (listening on localhost:9090) for debugging. To use it, remove the comment signs for the MITM service and the ALL_PROXY environment variables in dev/docker-compose.yml. Note that the MITM proxy interferes with SSE.

NOTE: The commands in this section will build the beam proxy and broker locally. To build beam, you need to install libssl-dev.

To start the dev setup:

./dev/beamdev start

Steps may fail and ask you to install tools. In particular, note that you need a current (>= 7.7.0) curl version.

Alternatively, you can run the services in the background and get the logs as follows:

./dev/beamdev start_bg
docker compose logs -f

Confirm that your setup works by running ./dev/test noci, which runs the tests against your instances.

To work with the environment, you may run./dev/beamdev defaults to see some helpful values, including the dev default URLs and a working authentication header.

To run the dev setup with additional cargo flags, such as feature flags or the release flag, you may run dev/beamdev start <cargo flags>, e.g. dev/beamdev start --features sockets.

Production Environment & Certificate Infrastructure

A production system needs to operate a production-hardened central Hashicorp Vault and requires a slightly more involved secret management process to ensure that no secret is accidentally leaked. We can give no support regarding the vault setup; please see the official documentation. However, our deployment repositories have a basic vault cookbook section, describing a basic setup and the most common operations.

While the development system generates all secrets and certificates locally at startup time, the production system should a) persist the Beam.Proxy certificates at the central CA and b) allow easy private key generation and certificate enrollment. As the central components and the Beam.Proxies could be operated by different institutions, (private) key generation must be performed at the sites without involvement of the central CA operators.

Beam.Broker and Beam.Proxy expect the private key as well as the CA root certificate to be present at startup (the locations can be changed via the --rootcert-file and --privkey-file command line parameters, as well as the corresponding environment variables). Furthermore, the certificates for the Beam.Proxy common names corresponding to those private keys must be available in the central CA. That means that the Proxy sites must generate a) a private key and b) a certificate signing request (CSR) before operation can commence. There are two possible ways to do that:

Method 1: Using the Beam Enrollment Companion Tool

We created acertificate enrollment companion tool, assisting the enrollment process. Please run the docker image via:

```shell
docker run --rm -v <output-directory>:/pki samply/beam-enroll:latest --output-file /pki/<proxy_name>.priv.pem --proxy-id <full_proxy_id>
```

and follow the instructions on the screen. The tool generates the private key file in the given directory and prints the CSR to the console -- ready to be copied into an email to the central CA's administrator without the risk of accidentally sending the wrong (i.e., private) file.

Method 2: Using OpenSSL (manual)

The manual method requires openssl to be installed on the system.

To generate both required files simultaneously using the openssl command line tool, enter:

```shell
openssl req -nodes -new -newkey rsa:4096 -sha256 -out <proxy_name>.csr.pem
```

This generates both the private key and the CSR with the given names. Please note that the private key must remain confidential and at your site!

Next, send the CSR to the central CA's administrator for signing and enrolling the proxy certificate.

Logging

Both the Broker and the Proxy respect the log level in the RUST_LOG environment variable. E.g., RUST_LOG=debug enables debug output. Warning: the trace log level is very noisy.

Technical Background Information

End-to-End Encryption

Samply.Beam encrypts all information in the body fields of both Tasks and Results. The data is encrypted in the Samply.Proxy before being forwarded to the Beam.Broker; likewise, decryption takes place in the receiving Beam.Proxy. This is in addition to the transport encryption (TLS) and differs in that even the broker is unable to decipher the messages' content fields.

The data is symmetrically encrypted using the Authenticated Encryption with Associated Data (AEAD) algorithm XChaCha20Poly1305, a widespread algorithm (e.g., mandatory for the TLS protocol), regarded as highly secure by experts. The used chacha20poly1305 library was subjected to a security audit, with no significant findings. The randomly generated symmetric keys are encapsulated in an RSA-encrypted ciphertext using OAEP padding. This ensures that only the intended recipients can decrypt the key and subsequently the transferred data.

Health check connection

The beam proxy tries to keep a permanent connection to the broker to make it possible to see which sites are currently connected. This also allows us to detect invalid connection states, such as multiple proxies with the same proxy id connecting simultaneously. In that case, the second proxy trying to connect will receive a 409 status code and shut down.

Roadmap

  • API Key authentication of local applications
  • Certificate management
  • Certificate enrollment process
  • End-to-End signatures
  • End-to-End encryption
  • Docker deployment packages: CI/CD
  • Broker-side filtering using pre-defined criteria
  • Helpful dev environment
  • Expiration of tasks and results
  • Support TLS-terminating proxies
  • Transport direct socket connections
  • Crate to support the development of Rust Beam client applications
  • File transfers (with efficient support for large files)
  • Broker-side filtering of tasks using the unencrypted metadata fields (probably using JSON queries)
  • Integration of OAuth2 (in discussion)
  • Deliver usage metrics

Cryptography Notice

This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See http://www.wassenaar.org/ for more information.
