    Synapse

      Contributing

      This document aims to get you started with contributing to Synapse!

      1. Who can contribute to Synapse?

      Everyone is welcome to contribute code to Synapse, provided that they are willing to license their contributions to Element under a Contributor License Agreement (CLA). This ensures that their contribution will be made available under an OSI-approved open-source license, currently the Affero General Public License v3 (AGPLv3).

      Please see the Element blog post for the full rationale.

      2. What do I need?

      If you are running Windows, the Windows Subsystem for Linux (WSL) is strongly recommended for development. More information about WSL can be found at https://docs.microsoft.com/en-us/windows/wsl/install. Running Synapse natively on Windows is not officially supported.

      The code of Synapse is written in Python 3. To do pretty much anything, you'll need a recent version of Python 3. Your Python also needs support for virtual environments. This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running sudo apt install python3-venv should be enough.
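
      If you are unsure whether your setup is ready, a quick sanity check (the temporary directory name below is arbitrary) is:

      python3 --version                                            # should report a recent Python 3
      python3 -m venv /tmp/venv-check && rm -r /tmp/venv-check     # only succeeds if virtual environment support is installed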

      A recent version of the Rust compiler is needed to build the native modules. The easiest way of installing the latest version is to use rustup.
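
      For example, rustup's documented installer can be run and verified as follows:

      curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
      rustc --version     # confirm the toolchain is on your PATH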

      Synapse can connect to PostgreSQL via the psycopg2 Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with sudo apt install libpq-dev.

      The source code of Synapse is hosted on GitHub. You will also need a recent version of git.

      For some tests, you will need a recent version of Docker.

      3. Get the source.

      The preferred and easiest way to contribute changes is to fork the relevant project on GitHub, and then create a pull request to ask us to pull your changes into our repo.

      Please base your changes on the develop branch.

      git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
      git checkout develop

      Getting started with git is beyond the scope of this document, but you can find many good git tutorials on the web.

      4. Install the dependencies

      Before installing the Python dependencies, make sure you have installed a recent version of Rust (see the "What do I need?" section above). The easiest way of installing the latest version is to use rustup.

      Synapse uses the poetry project to manage its dependencies and development environment. Once you have installed Python 3 and added the source, you should install poetry. Of their installation methods, we recommend installing poetry using pipx,

      pip install --user pipx
      pipx install poetry

      but see poetry's installation instructions for other installation methods.

      Developing Synapse requires Poetry version 1.3.2 or later.
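
      To confirm which version you have (and, if you installed poetry via pipx, to upgrade it), you can run:

      poetry --version        # should report 1.3.2 or later
      pipx upgrade poetry     # only needed if you installed poetry via pipx and it is too old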

      Next, open a terminal and install dependencies as follows:

      cd path/where/you/have/cloned/the/repository
      poetry install --extras all

      This will install the runtime and developer dependencies for the project. Be sure to check that the poetry install step completed cleanly.

      For OSX users, be sure to set PKG_CONFIG_PATH to support icu4c. Run brew info icu4c for more details.
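
      As a minimal sketch on macOS with Homebrew (check the output of brew info icu4c rather than relying on this exact path):

      export PKG_CONFIG_PATH="$(brew --prefix icu4c)/lib/pkgconfig"   # path may differ; brew info icu4c prints the right one
      poetry install --extras all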

      Running Synapse via poetry

      To start a local instance of Synapse in the locked poetry environment, create a config file:

      cp docs/sample_config.yaml homeserver.yaml
      cp docs/sample_log_config.yaml log_config.yaml

      Now edit homeserver.yaml and change any settings you need for your local instance.

      And then run Synapse with the following command:

      poetry run python -m synapse.app.homeserver -c homeserver.yaml

      If you get an error like the following:

      importlib.metadata.PackageNotFoundError: matrix-synapse

      this probably indicates that the poetry install step did not complete cleanly - go back and resolve any issues and re-run until successful.

      5. Get in touch.

      Join our developer community on Matrix: #synapse-dev:matrix.org!

      6. Pick an issue.

      Fix your favorite problem or perhaps find a Good First Issue to work on.

      7. Turn coffee into code and documentation!

      There is a growing amount of documentation located in the docs directory, with a rendered version available online. This documentation is intended primarily for sysadmins running their own Synapse instance, as well as developers interacting externally with Synapse. docs/development exists primarily to house documentation for Synapse developers. docs/admin_api houses documentation regarding Synapse's Admin API, which is used mostly by sysadmins and external service developers.

      Synapse's code style is documented here. Please follow it, including the conventions for configuration options and documentation.

      We welcome improvements and additions to our documentation itself! When writing new pages, please build docs to a book to check that your contributions render correctly. The docs are written in GitHub-Flavoured Markdown.
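
      As a rough sketch, assuming the book is built with mdbook (the tool the linked guide uses), you can preview your changes locally with:

      mdbook serve    # rebuilds on change and serves the rendered book on http://localhost:3000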

      When changes are made to any Rust code then you must call either poetry install or maturin develop (if installed) to rebuild the Rust code. Using maturin is quicker than poetry install, so is recommended when making frequent changes to the Rust code.
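
      For example, after editing Rust code you might rebuild the native extension inside the poetry environment like this (assuming maturin is installed there):

      poetry run maturin develop      # quick incremental rebuild of the Rust extension
      poetry install --extras all     # slower alternative that always works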

      8. Test, test, test!

      While you're developing and before submitting a patch, you'll want to test your code.

      Run the linters.

      The linters look at your code and do two things:

      • ensure that your code follows the coding style adopted by the project;
      • catch a number of errors in your code.

      The linter takes no time at all to run as soon as you've downloaded the dependencies.

      poetry run ./scripts-dev/lint.sh

      Note that this script will modify your files to fix styling errors. Make sure that you have saved all your files.

      If you wish to restrict the linters to only the files changed since the last commit (much faster!), you can instead run:

      poetry run ./scripts-dev/lint.sh -d

      Or if you know exactly which files you wish to lint, you can instead run:

      poetry run ./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder

      Run the unit tests (Twisted trial).

      The unit tests run parts of Synapse, including your changes, to see if anything was broken. They are slower than the linters but will typically catch more errors.

      poetry run trial tests

      You can run unit tests in parallel by specifying the -jX argument to trial, where X is the number of parallel runners you want. To use 4 CPU cores, you would run them like:

      poetry run trial -j4 tests

      If you wish to only run some unit tests, you may specify another module instead of tests - or a test class or a method:

      poetry run trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite

      If your tests fail, you may wish to look at the logs (the default log level is ERROR):

      less _trial_temp/test.log

      To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL:

      SYNAPSE_TEST_LOG_LEVEL=DEBUG poetry run trial tests

      By default, tests will use an in-memory SQLite database for test data. For additional help with debugging, one can use an on-disk SQLite database file instead, in order to review database state during and after running tests. This can be done by setting the SYNAPSE_TEST_PERSIST_SQLITE_DB environment variable. Doing so will cause the database state to be stored in a file named test.db under the trial process' working directory. Typically, this ends up being _trial_temp/test.db. For example:

      SYNAPSE_TEST_PERSIST_SQLITE_DB=1 poetry run trial tests

      The database file can then be inspected with:

      sqlite3 _trial_temp/test.db

      Note that the database file is cleared at the beginning of each test run. Thus it will only ever contain the data generated by the most recently run test. Generally, though, one is only running a single test when debugging anyway.
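
      For example, to take a quick look at the persisted database without opening an interactive shell (these are standard sqlite3 invocations, not Synapse-specific):

      sqlite3 _trial_temp/test.db ".tables"
      sqlite3 _trial_temp/test.db "SELECT name FROM sqlite_master WHERE type='table';"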

      Running tests under PostgreSQL

      Invoking trial as above will use an in-memory SQLite database. This is great for quick development and testing. However, we recommend using a PostgreSQL database in production (and indeed, we have some code paths specific to each database). This means that we need to run our unit tests against PostgreSQL too. Our CI does this automatically for pull requests and release candidates, but it's sometimes useful to reproduce this locally.

      Using Docker

      The easiest way to do so is to run Postgres via a docker container. In one terminal:

      docker run --rm -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres -p 5432:5432 postgres:14

      If you see an error like

      docker: Error response from daemon: driver failed programming external connectivity on endpoint nice_ride (b57bbe2e251b70015518d00c9981e8cb8346b5c785250341a6c53e3c899875f1): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use.

      then something is already bound to port 5432. You're probably already running postgres locally.

      Once you have a postgres server running, invoke trial in a second terminal:

      SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_HOST=127.0.0.1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests

      Using an existing Postgres installation

      If you have postgres already installed on your system, you can run trial with the following environment variables matching your configuration:

      • SYNAPSE_POSTGRES to anything nonempty
      • SYNAPSE_POSTGRES_HOST (optional if it's the default: UNIX socket)
      • SYNAPSE_POSTGRES_PORT (optional if it's the default: 5432)
      • SYNAPSE_POSTGRES_USER (optional if using a UNIX socket)
      • SYNAPSE_POSTGRES_PASSWORD (optional if using a UNIX socket)

      For example:

      export SYNAPSE_POSTGRES=1
      export SYNAPSE_POSTGRES_HOST=localhost
      export SYNAPSE_POSTGRES_USER=postgres
      export SYNAPSE_POSTGRES_PASSWORD=mydevenvpassword
      trial

      You don't need to specify the host, user, port or password if your Postgres server is set to authenticate you over the UNIX socket (i.e. if the psql command works without further arguments).

      Your Postgres account needs to be able to create databases; see the postgres docs for ALTER ROLE.
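
      For example, granting the CREATEDB attribute to a hypothetical development role named synapse_dev might look like this (run it as a Postgres superuser):

      psql -c 'ALTER ROLE synapse_dev CREATEDB;'    # "synapse_dev" is a placeholder role name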

      Run the integration tests (Sytest).

      The integration tests are a more comprehensive suite of tests. They run a full version of Synapse, including your changes, to check if anything was broken. They are slower than the unit tests but will typically catch more errors.

      The following command will let you run the integration test with the most common configuration:

      $ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:bullseye

      (Note that the paths must be full paths! You could also write $(realpath relative/path) if needed.)

      This configuration should generally cover your needs.

      • To run with Postgres, supply the -e POSTGRES=1 -e MULTI_POSTGRES=1 environment flags.
      • To run with Synapse in worker mode, supply the -e WORKERS=1 -e REDIS=1 environment flags (in addition to the Postgres flags). A combined example is shown below.
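
      For instance, combining both options with the base command above (the host paths are placeholders, exactly as in that command):

      $ docker run --rm -it -e POSTGRES=1 -e MULTI_POSTGRES=1 -e WORKERS=1 -e REDIS=1 -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:bullseye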

      For more details about other configurations, see the Docker-specific documentation in the SyTest repo.

      Run the integration tests (Complement).

      Complement is a suite of black box tests that can be run on any homeserver implementation. It can also be thought of as end-to-end (e2e) tests.

      It's often nice to develop on Synapse and write Complement tests at the same time. Here is how to run your local Synapse checkout against your local Complement checkout.

      (check out complement alongside your synapse checkout)

      COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh

      To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:

      COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages

      To run a specific test, you can specify the whole name structure:

      COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages/parallel/Historical_events_resolve_in_the_correct_order

      The above will run a monolithic (single-process) Synapse with SQLite as the database. For other configurations, try the options below; a combined example follows the list:

      • Passing POSTGRES=1 as an environment variable to use the Postgres database instead.
      • Passing WORKERS=1 as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
        • If setting WORKERS=1, optionally set WORKER_TYPES= to declare which worker types you wish to test. A simple comma-delimited string containing the worker types defined from the WORKERS_CONFIG template in here. A safe example would be WORKER_TYPES="federation_inbound, federation_sender, synchrotron". See the worker documentation for additional information on workers.
      • Passing ASYNCIO_REACTOR=1 as an environment variable to use the Twisted asyncio reactor instead of the default one.
      • Passing PODMAN=1 will use the podman container runtime, instead of docker.
      • Passing UNIX_SOCKETS=1 will utilise Unix socket functionality for Synapse, Redis, and Postgres (when applicable).
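
      For instance, a workerised, Postgres-backed run (options combined from the list above) could be invoked as:

      POSTGRES=1 WORKERS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh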

      To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL, e.g.:

      SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages

      Prettier formatting with gotestfmt

      If you want to format the output of the tests the same way as it looks in CI, install gotestfmt.

      You can then use this incantation to format the tests appropriately:

      COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -json | gotestfmt -hide successful-tests

      (Remove -hide successful-tests if you don't want to hide successful tests.)

      Access the homeserver database after Complement test runs.

      If you're curious what the database looks like after you run some tests, here are some steps to get you going in Synapse:

      1. In your Complement test, comment out defer deployment.Destroy(t) and replace it with defer time.Sleep(2 * time.Hour) to keep the homeserver running after the tests complete
      2. Start the Complement tests
      3. Find the name of the container with docker ps -f name=complement_ (this will filter for just the Complement-related Docker containers)
      4. Access the container, replacing the name with what you found in the previous step: docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash
      5. Install sqlite (database driver): apt-get update && apt-get install -y sqlite3
      6. Then run sqlite3 and open the database with .open /conf/homeserver.db (this db path comes from the Synapse homeserver.yaml). These commands are collected in the sketch below.
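
      Collected into one sequence (the container name is the example from step 4; substitute the name that docker ps reports for your run):

      docker ps -f name=complement_                      # find your container's name
      docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash
      # inside the container:
      apt-get update && apt-get install -y sqlite3
      sqlite3 /conf/homeserver.db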

      9. Submit your patch.

      Once you're happy with your patch, it's time to prepare a Pull Request.

      To prepare a Pull Request, please:

      1. verify that all the tests pass, including the coding style;
      2. sign off your contribution;
      3. git push your commit to your fork of Synapse;
      4. on GitHub, create the Pull Request;
      5. add a changelog entry and push it to your Pull Request;
      6. that's it for now, a non-draft pull request will automatically request review from the team;
      7. if you need to update your PR, please avoid rebasing and just add new commits to your branch.

      Changelog

      All changes, even minor ones, need a corresponding changelog / newsfragment entry. These are managed by Towncrier.

      To create a changelog entry, make a new file in the changelog.d directory named in the format of PRnumber.type. The type can be one of the following:

      • feature
      • bugfix
      • docker (for updates to the Docker image)
      • doc (for updates to the documentation)
      • removal (also used for deprecations)
      • misc (for internal-only changes)

      This file will become part of our changelog at the next release, so the content of the file should be a short description of your change in the same style as the rest of the changelog. The file can contain Markdown formatting, and must end with a full stop (.) or an exclamation mark (!) for consistency.

      Adding credits to the changelog is encouraged; we value your contributions and would like to have you shouted out in the release notes!

      For example, a fix in PR #1234 would have its changelog entry in changelog.d/1234.bugfix, and contain content like:

      The security levels of Florbs are now validated when received via the /federation/florb endpoint. Contributed by Jane Matrix.
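
      Putting the two together, that entry could be created with a command like:

      echo "The security levels of Florbs are now validated when received via the /federation/florb endpoint. Contributed by Jane Matrix." > changelog.d/1234.bugfix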

      If there are multiple pull requests involved in a single bugfix/feature/etc, then the content for each changelog.d file and file extension should be the same. Towncrier will merge the matching files together into a single changelog entry when we come to release.

      How do I know what to call the changelog file before I create the PR?

      Obviously, you don't know if you should call your newsfile 1234.bugfix or 5678.bugfix until you create the PR, which leads to a chicken-and-egg problem.

      There are two options for solving this:

      1. Open the PR without a changelog file, see what number you got, and then add the changelog file to your branch, or:

      2. Look at the list of all issues/PRs, add one to the highest number you see, and quickly open the PR before somebody else claims your number.

        This script might be helpful if you find yourself doing this a lot.

      Sorry, we know it's a bit fiddly, but it's really helpful for us when we come to put together a release!

      Debian changelog

      Changes which affect the debian packaging files (in debian) are an exception to the rule that all changes require a changelog.d file.

      In this case, you will need to add an entry to the debian changelog for thenext release. For this, run the following command:

      dch

      This will make up a new version number (if there isn't already an unreleased version in flight), and open an editor where you can add a new changelog entry. (Our release process will ensure that the version number and maintainer name are corrected for the release.)

      If your change affects both the debian packaging and files outside the debian directory, you will need both a regular newsfragment and an entry in the debian changelog. (Though typically such changes should be submitted as two separate pull requests.)

      Sign off

      After you make a PR, a comment from @CLAassistant will appear asking you to sign the CLA. This will link to a page allowing you to confirm that you have read and agreed to the CLA by signing in with GitHub.

      Alternatively, you can sign off before opening a PR by going to https://cla-assistant.io/element-hq/synapse.

      We accept contributions under a legally identifiable name, such asyour name on government documentation or common-law names (namesclaimed by legitimate usage or repute). Unfortunately, we cannotaccept anonymous contributions at this time.

      10. Turn feedback into better code.

      Once the Pull Request is opened, you will see a few things:

      1. our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;
      2. one or more of the developers will take a look at your Pull Request and offer feedback.

      From this point, you should:

      1. Look at the results of the CI pipeline.
        • If there is any error, fix the error.
      2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
        • A pull request is a conversation; if you disagree with the suggestions, please respond and discuss it.
      3. Create a new commit with the changes.
        • Please do NOT overwrite the history. New commits make the reviewer's life easier.
        • Push these commits to your Pull Request.
      4. Back to 1.
      5. Once the pull request is ready for review again, please re-request review from whichever developer did your initial review (or leave a comment in the pull request that you believe all required changes have been done).

      Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!

      11. Find a new issue.

      By now, you know the drill!

      Notes for maintainers on merging PRs etc

      There are some notes for those with commit access to the project on how we manage git here.

      Conclusion

      That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!

