💻 Microservice lib designed to ease service building using Python and asyncio, with ready to use support for HTTP + WS, AWS SNS+SQS, RabbitMQ / AMQP, middlewares, envelopes, logging, lifecycles. Extend to GraphQL, protobuf, etc.


kalaspuff/tomodachi


tomodachi [友達] means friends — 🦊🐶🐻🐯🐮🐸🐍 — a suitable name for microservices working together. ✨✨



tomodachi is a library designed to make it easy for devs to build microservices using asyncio on Python.

Includes ready implementations to support handlers built for HTTP requests, websockets, AWS SNS+SQS and RabbitMQ / AMQP for 🚀 event based messaging, 🔗 intra-service communication and 🐶 watchdog handlers.

  • HTTP request handlers (API endpoints) are sent requests via the aiohttp server library. 🪢
  • Event and message handlers are hooked into a message bus, such as a queue, from, for example, AWS (Amazon Web Services) SNS+SQS (aiobotocore), RabbitMQ / AMQP (aioamqp), etc. 📡

Using the provided handler managers, the need for devs to interface with low-level libs directly should be lower, making it more of a breeze to focus on building the business logic. 🪄


tomodachi has a feature set to meet most basic needs, for example...

  • 🦸 ⋯ Graceful termination of consumers, listeners and tasks to ensure smooth deployments.
  • ⋯ Scheduled function execution (cron notation / time interval) for building watchdog handlers.
  • 🍔 ⋯ Execution middleware interface for incoming HTTP requests and received messages.
  • 💌 ⋯ Simple envelope building and parsing for both receiving and publishing messages.
  • 📚 ⋯ Logging support via structlog with template loggers for both "dev console" and JSON output.
  • ⛑️ ⋯ Loggers and handler managers built to support exception tracing, for example from Sentry.
  • 📡 ⋯ SQS queues with filter policies for SNS topic subscriptions filtering messages on message attributes.
  • 📦 ⋯ Supports SQS dead-letter queues via redrive policy -- infra orchestration from service optional.
  • 🌱 ⋯ Designed to be extendable -- most kinds of transport layers or event sources can be added.

Quicklinks to the documentation 📖

This documentation README includes information on how to get started with services, what built-in functionality exists in this library, lists of available configuration parameters and a few examples of service code.

Visit https://tomodachi.dev/ for additional documentation. 📔

Handler types / endpoint built-ins. 🛍️

Service options to tweak handler managers. 🛠️

Use the features you need. 🌮

Recommendations and examples. 🧘


Please note -- this library is a work in progress. 🐣

Consider tomodachi as beta software. This library follows an irregular release schedule. There may be breaking changes between 0.x versions.

Usage

tomodachi is used to execute service code via the command line interface or within container images. It will be installed automatically when the package is installed in the environment.

The CLI endpoint tomodachi is then used to run services defined as tomodachi service classes.

Start a service with its class definition defined in ./service/app.py by running tomodachi run service/app.py. Finally stop the service with the keyboard interrupt <ctrl+c>.

The run command has some options available that can be specified with arguments to the CLI.

Most options can also be set as an environment variable value.

For example setting the environment value TOMODACHI_LOGGER=json will yield the same change to the logger as if running the service using the argument --logger json.
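The usual convention for such options is that an explicit CLI argument wins over the environment variable, which in turn wins over the built-in default. A minimal sketch of that precedence (illustrative only, not tomodachi's actual implementation):

```python
import os


def resolve_option(cli_value, env_name, default):
    # CLI argument takes precedence, then the environment variable, then the default.
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_name, default)


# Simulate running with TOMODACHI_LOGGER=json and no --logger argument.
os.environ["TOMODACHI_LOGGER"] = "json"
print(resolve_option(None, "TOMODACHI_LOGGER", "console"))     # json
print(resolve_option("python", "TOMODACHI_LOGGER", "console"))  # python
```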


🧩 --loop [auto|asyncio|uvloop]
🖥️ TOMODACHI_LOOP=...

The value for --loop can either be set to asyncio, uvloop or auto. The uvloop value can only be used if uvloop is installed in the execution environment. Note that the default auto value will currently end up using the event loop implementation that is preferred by the Python interpreter, which in most cases will be asyncio.

🧩 --production
🖥️ TOMODACHI_PRODUCTION=1

Use --production to disable the file watcher that restarts the service on file changes and to hide the startup info banner.

recommendation ✨👀
It's highly recommended to enable this option for built Docker images and for builds of services that are to be released to any environment. The only time you should run without the --production option is during development and in the local development environment.

🧩 --log-level [debug|info|warning|error|critical]
🖥️ TOMODACHI_LOG_LEVEL=...

Set the minimum log level for which the loggers will emit logs to their handlers with the --log-level option. By default the minimum log level is set to info (which includes info, warning, error and critical, resulting in only the debug log records being filtered out).
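The effect of a minimum log level can be illustrated with Python's built-in logging module (a standalone sketch, not tomodachi's structlog-based loggers): with the level set to INFO, debug records are filtered out while info and above pass through.

```python
import logging

records = []


class ListHandler(logging.Handler):
    # Collects the level names of emitted records so we can
    # inspect what passed the minimum-level filter.
    def emit(self, record):
        records.append(record.levelname)


logger = logging.getLogger("example")
logger.setLevel(logging.INFO)  # same effect as --log-level info
logger.addHandler(ListHandler())
logger.propagate = False

logger.debug("filtered out")
logger.info("kept")
logger.warning("kept")
logger.error("kept")
logger.critical("kept")

print(records)  # ['INFO', 'WARNING', 'ERROR', 'CRITICAL']
```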

🧩 --logger [console|json|python|disabled]
🖥️ TOMODACHI_LOGGER=...

Apply the --logger option to change the log formatter that is used by the library. The default value console is mostly suited for local development environments as it provides a structured and colorized view of log records. The console colors can be disabled by setting the env value NO_COLOR=1.

recommendation ✨👀
For released services / images it's recommended to use the json option so that you can set up structured log collection via for example Logstash, Fluentd, Fluent Bit, Vector, etc.

If you prefer to disable log output from the library you can use disabled (and presumably add a log handler with another implementation).

The python option isn't recommended, but is available if required to use the loggers from Python's built-in logging module. Note that the built-in logging module will be used either way, as the library's loggers are both added as handlers to logging.root and propagate records through to logging as well.

🧩 --custom-logger <module.attribute|module>
🖥️ TOMODACHI_CUSTOM_LOGGER=...

If the template loggers from the option above don't cut it, or if you already have your own logger (preferably a structlog logger) and processor chain set up, you can specify a --custom-logger which will make tomodachi use your logger setup as well. This is suitable also if your app is using a custom logging setup that would differ in output from what the tomodachi loggers output.

If your logger is initialized in, for example, the module yourapp.logging and the initialized (structlog) logger is aptly named logger, then use --custom-logger yourapp.logging.logger (or set as an env value TOMODACHI_CUSTOM_LOGGER=yourapp.logging.logger).

The path to the logger attribute in the module you're specifying must implement debug, info, warning, error, exception, critical and preferably also new(context: Dict[str, Any]) -> Logger (as that is what primarily will be called to create (or get) a logger).

Although non-native structlog loggers can be used as custom loggers, it's highly recommended to specify a path that has been assigned a value from structlog.wrap_logger or structlog.get_logger.
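The required surface can be sketched with a plain Python class (all names here are hypothetical; a real setup would typically wrap a structlog logger via structlog.wrap_logger or structlog.get_logger instead):

```python
from typing import Any, Dict, Optional


class MyLogger:
    # Minimal interface a --custom-logger target is expected to expose:
    # debug, info, warning, error, exception, critical, and preferably new().
    def __init__(self, context: Optional[Dict[str, Any]] = None) -> None:
        self.context: Dict[str, Any] = dict(context or {})

    def _log(self, level: str, msg: str, **kw: Any) -> None:
        print(f"[{level}] {msg} {dict(self.context, **kw)}")

    def debug(self, msg: str, **kw: Any) -> None: self._log("debug", msg, **kw)
    def info(self, msg: str, **kw: Any) -> None: self._log("info", msg, **kw)
    def warning(self, msg: str, **kw: Any) -> None: self._log("warning", msg, **kw)
    def error(self, msg: str, **kw: Any) -> None: self._log("error", msg, **kw)
    def exception(self, msg: str, **kw: Any) -> None: self._log("exception", msg, **kw)
    def critical(self, msg: str, **kw: Any) -> None: self._log("critical", msg, **kw)

    def new(self, context: Dict[str, Any]) -> "MyLogger":
        # Called to create (or get) a logger with bound context.
        return MyLogger({**self.context, **context})


logger = MyLogger()
```

If this lived in a (hypothetical) module yourapp.logging, it would be referenced as --custom-logger yourapp.logging.logger.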

🧩 --opentelemetry-instrument
🖥️ TOMODACHI_OPENTELEMETRY_INSTRUMENT=1

Use --opentelemetry-instrument to enable OpenTelemetry auto instrumentation of the service and of libraries for which the environment has installed instrumentors.

If tomodachi is installed in the environment, using the argument --opentelemetry-instrument (or setting the TOMODACHI_OPENTELEMETRY_INSTRUMENT=1 env variable value) is mostly equivalent to starting the service using the opentelemetry-instrument CLI -- OTEL distros, configurators and instrumentors will be loaded automatically and OTEL_* environment values will be processed in the same way.


Getting started 🏃

First off -- installation using poetry is fully supported and battle-tested (pip works just as well).

Install tomodachi in your preferred way, whether it be poetry, pip, pipenv, etc. Installing the distribution will give your environment access to the tomodachi package for imports as well as a shortcut to the CLI alias, which later is used to run the microservices you build.

```shell
local ~$ pip install tomodachi
> ...
> Installing collected packages: ..., ..., ..., tomodachi
> Successfully installed ... ... ... tomodachi-x.x.xx

local ~$ tomodachi --version
> tomodachi x.xx.xx
```

tomodachi can be installed together with a set of "extras" that will install a set of dependencies that are useful for different purposes. The extras are:

  • uvloop: for the possibility to start services with the --loop uvloop option.
  • protobuf: for protobuf support in envelope transformation and message serialization.
  • aiodns: to use aiodns as the DNS resolver for aiohttp.
  • brotli: to use brotli compression in aiohttp.
  • opentelemetry: for OpenTelemetry instrumentation support.
  • opentelemetry-exporter-prometheus: to use the experimental OTEL meter provider for Prometheus.

Services and their dependencies, together with runtime utilities like tomodachi, should preferably always be installed and run in isolated environments like Docker containers or virtual environments.

Building blocks for a service class and microservice entrypoint

  1. import tomodachi and create a class that inherits tomodachi.Service; it can be called anything... or just Service to keep it simple.
  2. Add a name attribute to the class and give it a string value. Having a name attribute isn't required, but is good practice.
  3. Define an awaitable function in the service class -- in this example we'll use it as an entrypoint to trigger code in the service by decorating it with one of the available invoker decorators. Note that a service class must have at least one decorated function available to even be recognized as a service by tomodachi run.
  4. Decide on how to trigger the function -- for example using HTTP, pub/sub or a timed interval -- then decorate your function with one of these trigger / subscription decorators, which also determines what capabilities the service initially has.

Further down you'll find a description of how each of the built-in invoker decorators works and which keywords and parameters you can use to change their behaviour.

Note: Publishing and subscribing to events and messages may require user credentials or hosting configuration to be able to access queues and topics.

For simplicity, let's do HTTP:

  • On each POST request to /sheep, the service will wait for up to one whole second (pretend that it's performing I/O -- waiting for a response on a slow sheep-counting database modification, for example) and then issue a 200 OK with some data.
  • It's also possible to query the number of times the POST handler has run by doing a GET request to the same url, /sheep.
  • By using @tomodachi.http an HTTP server backed by aiohttp will be started on service start. tomodachi will act as a middleware to route requests to the correct handlers, upgrade websocket connections and then also gracefully await connections with still-executing tasks when the service is asked to stop -- up until a configurable amount of time has passed.
```python
import asyncio
import random

import tomodachi


class Service(tomodachi.Service):
    name = "sleepy-sheep-counter"

    _sheep_count = 0

    @tomodachi.http("POST", r"/sheep")
    async def add_to_sheep_count(self, request):
        await asyncio.sleep(random.random())
        self._sheep_count += 1
        return 200, str(self._sheep_count)

    @tomodachi.http("GET", r"/sheep")
    async def return_sheep_count(self, request):
        return 200, str(self._sheep_count)
```

Run services with:

```shell
local ~/code/service$ tomodachi run service.py
```

Beside the currently existing built-in ways of interfacing with a service, it's possible to build additional function decorators to suit the use-cases one may have.

To give a few possible examples / ideas of functionality that could be coded to call functions with data in similar ways:

  • Using Redis as a task queue with configurable keys to push or pop onto.
  • Subscribing to Kinesis or Kafka event streams and acting on the data received.
  • An abstraction around otherwise complex functionality or to unify API design.
  • As an example of the previous point: GraphQL resolver functionality with built-in traceability and authentication management, with a unified API towards application devs.

Additional examples will follow with different ways to trigger functions in the service.

Of course the different ways can be used within the same class, for example the very common use-case of having a service listening on HTTP while also performing some kind of async pub/sub tasks.

Basic HTTP based service 🌟

Code for a simple service which serves data over HTTP, pretty similar to the previous example, but with a few more concepts added.

```python
import tomodachi


class Service(tomodachi.Service):
    name = "http-example"

    # Request paths are specified as regex for full flexibility
    @tomodachi.http("GET", r"/resource/(?P<id>[^/]+?)/?")
    async def resource(self, request, id):
        # Returning a string value normally means 200 OK
        return f"id = {id}"

    @tomodachi.http("GET", r"/health")
    async def health_check(self, request):
        # Return can also be a tuple, dict or even an aiohttp.web.Response
        # object for more complex responses - for example if you need to
        # send byte data, set your own status code or define own headers
        return {
            "body": "Healthy",
            "status": 200,
        }

    # Specify custom 404 catch-all response
    @tomodachi.http_error(status_code=404)
    async def error_404(self, request):
        return "error 404"
```

RabbitMQ or AWS SNS+SQS event based messaging service 🐰

Example of a service that calls a function when messages are published on an AMQP topic exchange.

```python
import tomodachi


class Service(tomodachi.Service):
    name = "amqp-example"

    # The "message_envelope" attribute can be set on the service class to build / parse data.
    # message_envelope = ...

    # A route / topic on which the service will subscribe to via RabbitMQ / AMQP
    @tomodachi.amqp("example.topic")
    async def example_func(self, message):
        # Received message, forwarding the same message as response on another route / topic
        await tomodachi.amqp_publish(self, message, routing_key="example.response")
```

AMQP – Publish to exchange / routing key – tomodachi.amqp_publish

```python
await tomodachi.amqp_publish(service, message, routing_key=routing_key, exchange_name=...)
```
  • service is the instance of the service class (from within a handler, use self)
  • message is the message to publish before any potential envelope transformation
  • routing_key is the routing key to use when publishing the message
  • exchange_name is the exchange name for publishing the message (default: "amq.topic")

For more advanced workflows, it's also possible to specify overrides for the routing key prefix or message enveloping class.

AWS SNS+SQS event based messaging service 📡

Example of a service using AWS SNS+SQS managed pub/sub messaging. AWS SNS and AWS SQS together bring managed message queues for microservices, distributed systems, and serverless applications hosted on AWS. tomodachi services can customize their enveloping functionality to both unwrap incoming messages and/or to produce enveloped messages for published events / messages. Pub/sub patterns are great for scalability in distributed architectures, for example when hosted in Docker on Kubernetes.

```python
import tomodachi


class Service(tomodachi.Service):
    name = "aws-example"

    # The "message_envelope" attribute can be set on the service class to build / parse data.
    # message_envelope = ...

    # Using the @tomodachi.aws_sns_sqs decorator to make the service create an AWS SNS topic,
    # an AWS SQS queue and to make a subscription from the topic to the queue as well as start
    # receiving messages from the queue using SQS.ReceiveMessages.
    @tomodachi.aws_sns_sqs("example-topic", queue_name="example-queue")
    async def example_func(self, message):
        # Received message, forwarding the same message as response on another topic
        await tomodachi.aws_sns_sqs_publish(self, message, topic="another-example-topic")
```

AWS – Publish message to SNS – tomodachi.aws_sns_sqs_publish

```python
await tomodachi.aws_sns_sqs_publish(service, message, topic=topic)
```
  • service is the instance of the service class (from within a handler, use self)
  • message is the message to publish before any potential envelope transformation
  • topic is the non-prefixed name of the SNS topic used to publish the message

Additional function arguments can be supplied to also include message_attributes, and / or group_id + deduplication_id.

For more advanced workflows, it's also possible to specify overrides for the SNS topic name prefix or message enveloping class.

AWS – Send message to SQS – tomodachi.sqs_send_message

```python
await tomodachi.sqs_send_message(service, message, queue_name=queue_name)
```
  • service is the instance of the service class (from within a handler, use self)
  • message is the message to publish before any potential envelope transformation
  • queue_name is the SQS queue url, queue ARN or non-prefixed queue name to be used

Additional function arguments can be supplied to also include message_attributes, and / or group_id + deduplication_id.

For more advanced workflows, it's also possible to set delay seconds, define a custom message body formatter, or to specify overrides for the SNS topic name prefix or message enveloping class.

Scheduling, inter-communication between services, etc. ⚡️

There are other examples available with code of how to use services with self-invoking methods called on a specified interval or at specific times / days, as well as additional examples for inter-communication pub/sub between different services on both AMQP or AWS SNS+SQS as shown above. See more in the examples folder.


Run the service 😎

```shell
# cli alias is set up automatically on installation
local ~/code/service$ tomodachi run service.py

# alternatively using the tomodachi.run module
local ~/code/service$ python -m tomodachi.run service.py
```

By default, the startup banner is output on stdout and log output on stderr.


An HTTP service acts like a normal web server.

```shell
local ~$ curl -v "http://127.0.0.1:9700/resource/1234"
# > HTTP/1.1 200 OK
# > Content-Type: text/plain; charset=utf-8
# > Server: tomodachi
# > Content-Length: 9
# > Date: Sun, 16 Oct 2022 13:38:02 GMT
# >
# > id = 1234
```

Getting an instance of a service

If a Service instance is needed outside the Service class itself, it can be acquired with tomodachi.get_service. If multiple Service instances exist within the same event loop, the name of the Service can be used to get the correct one.

```python
import tomodachi

# Get the instance of the active Service.
service = tomodachi.get_service()

# Get the instance of the Service by service name.
service = tomodachi.get_service(service_name)
```

Stopping the service

Stopping a service can be achieved by either sending a SIGINT <ctrl+c> or SIGTERM signal to the tomodachi Python process, or by invoking the tomodachi.exit() function, which will initiate the termination processing flow. The tomodachi.exit() call can additionally take an optional exit code as an argument, which otherwise will default to exit code 0.

  • SIGINT signal (equivalent to using <ctrl+c>)
  • SIGTERM signal
  • tomodachi.exit() or tomodachi.exit(exit_code)

The process' exit code can also be altered by changing the value of tomodachi.SERVICE_EXIT_CODE, however using tomodachi.exit with an integer argument will override any previous value set to tomodachi.SERVICE_EXIT_CODE.

All above mentioned ways of initiating the termination flow of the service will perform a graceful shutdown of the service, which will try to await open HTTP handlers and await currently running tasks using tomodachi's scheduling functionality, as well as await tasks processing messages from queues such as AWS SQS or RabbitMQ.

Some tasks may time out during termination according to the used configuration (see options such as http.termination_grace_period_seconds) if they are long-running tasks. Additionally, container handlers may impose additional timeouts for how long termination is allowed to take. If no ongoing tasks are to be awaited and the service lifecycle can be cleanly terminated, the shutdown usually happens within milliseconds.

Function hooks for service lifecycle changes

To be able to initialize connections to external resources or to performgraceful shutdown of connections made by a service, there's a fewfunctions a service can specify to hook into lifecycle changes of aservice.

| Magic function name | When is the function called? | What is suitable to put here |
| --- | --- | --- |
| _start_service | Called before invokers / servers have started. | Initialize connections to databases, etc. |
| _started_service | Called after invokers / servers have started. | Start reporting or start tasks to run once. |
| _stopping_service | Called on termination signal. | Cancel eventual internal long-running tasks. |
| _stop_service | Called after tasks have gracefully finished. | Close connections to databases, etc. |

Changes to a service's settings / configuration (by for example modifying the options values) should be done in the __init__ function instead of in any of the lifecycle function hooks.

Good practice -- in general, make use of the _start_service (for setting up connections) in addition to the _stop_service (to close connections) lifecycle hooks. The other hooks may be used for more uncommon use-cases.

Lifecycle functions are defined as class functions and will be calledby the tomodachi process on lifecycle changes:

```python
import tomodachi


class Service(tomodachi.Service):
    name = "example"

    async def _start_service(self):
        # The _start_service function is called during initialization,
        # before consumers or an eventual HTTP server has started.
        # It's suitable to setup or connect to external resources here.
        return

    async def _started_service(self):
        # The _started_service function is called after invoker
        # functions have been set up and the service is up and running.
        # The service is ready to process messages and requests.
        return

    async def _stopping_service(self):
        # The _stopping_service function is called the moment the
        # service is instructed to terminate - usually this happens
        # when a termination signal is received by the service.
        # This hook can be used to cancel ongoing tasks or similar.
        # Note that some tasks may be processing during this time.
        return

    async def _stop_service(self):
        # Finally the _stop_service function is called after the HTTP server,
        # scheduled functions and consumers have gracefully stopped.
        # Previously ongoing tasks have been awaited for completion.
        # This is the place to close connections to external services and
        # clean up eventual tasks you may have started previously.
        return
```

Exceptions raised in _start_service or _started_service will gracefully terminate the service.

Graceful termination of a service (SIGINT / SIGTERM)

When the service process receives a SIGINT or SIGTERM signal (or tomodachi.exit() is called) the service begins the process for graceful termination, which in practice means:

  • The service's _stopping_service method, if implemented, is called immediately upon the received signal.
  • The service stops accepting new HTTP connections and closes keep-alive HTTP connections at the earliest.
  • Already established HTTP connections for which a handler call is currently awaited are allowed to finish their work before the service stops (up to options.http.termination_grace_period_seconds seconds, after which the open TCP connections for those HTTP connections will be forcefully closed if still not completed).
  • Any AWS SQS / AMQP handlers (decorated with @aws_sns_sqs or @amqp) will stop receiving new messages. However, handlers already processing a received message will be awaited to return their result. Unlike the HTTP handler connections there is no grace period for these queue consuming handlers.
  • Currently running scheduled handlers will also be awaited to fully complete their execution before the service terminates. No new scheduled handlers will be started.
  • When all HTTP connections are closed, all scheduled handlers have completed and all pub/sub handlers have been awaited, the service's _stop_service method is finally called (if implemented), where for example database connections can be closed. When the _stop_service method returns (or immediately after completion of handler invocations if a _stop_service isn't implemented), the service will finally terminate.

It's recommended to use a http.termination_grace_period_seconds options value of around 30 seconds to allow for the graceful termination of HTTP connections. This value can be adjusted based on the expected time it takes for the service to complete the processing of incoming requests.

Make sure that the orchestration engine (such as Kubernetes) waits at least 30 seconds from sending the SIGTERM to remove the pod. For extra compatibility when operating services in k8s, and to get around most kinds of edge cases of intermittent timeouts and problems with ingress connections (and unless your setup includes long-running queue consuming handler calls which require an even longer grace period), set the pod spec terminationGracePeriodSeconds to 90 seconds and use a preStop lifecycle hook of 20 seconds.

Keep the http.termination_grace_period_seconds options value lower than the pod spec's terminationGracePeriodSeconds value, as the latter is a hard limit for how long the pod will be allowed to run after receiving a SIGTERM signal.
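As a sanity check, the recommended numbers above leave headroom: the pod's hard limit should exceed the preStop hook duration plus the service's own HTTP grace period. A small illustrative helper (not part of tomodachi) capturing that constraint:

```python
def k8s_termination_budget_ok(
    termination_grace_period_seconds: int,
    pre_stop_seconds: int,
    http_termination_grace_period_seconds: int,
) -> bool:
    # The pod's hard limit must leave headroom for the preStop hook plus
    # the time the service itself may spend awaiting open HTTP connections.
    return (
        termination_grace_period_seconds
        > pre_stop_seconds + http_termination_grace_period_seconds
    )


# Recommended setup: pod limit 90s, preStop hook 20s, http grace period 30s.
print(k8s_termination_budget_ok(90, 20, 30))  # True
```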

In a setup where long-running queue consuming handler calls commonly occur, any grace period the orchestration engine uses will have to take that into account. It's generally advised to split work up into sizeable chunks that can quickly complete or, if handlers are idempotent, apply the possibility to cancel long-running handlers as part of the _stopping_service implementation.

Example of a microservice containerized in Docker 🐳

A great way to distribute and operate microservices is usually to run them in containers, or even more interestingly, in clusters of compute nodes. Here follows an example of getting a tomodachi based service up and running in Docker.

We're building the service's container image using just two small files, the Dockerfile and the actual code for the microservice, service.py. In reality a service would probably not be quite this small, but it works as a template to get started.

Dockerfile

```dockerfile
FROM python:3.10-bullseye
RUN pip install tomodachi
RUN mkdir /app
WORKDIR /app
COPY service.py .
ENV PYTHONUNBUFFERED=1
CMD ["tomodachi", "run", "service.py"]
```

service.py

```python
import json

import tomodachi


class Service(tomodachi.Service):
    name = "example"
    options = tomodachi.Options(
        http=tomodachi.Options.HTTP(
            port=80,
            content_type="application/json; charset=utf-8",
        ),
    )

    _healthy = True

    @tomodachi.http("GET", r"/")
    async def index_endpoint(self, request):
        # tomodachi.get_execution_context() can be used for
        # debugging purposes or to add additional service context
        # in logs or alerts.
        execution_context = tomodachi.get_execution_context()
        return json.dumps({
            "data": "hello world!",
            "execution_context": execution_context,
        })

    @tomodachi.http("GET", r"/health/?", ignore_logging=True)
    async def health_check(self, request):
        if self._healthy:
            return 200, json.dumps({"status": "healthy"})
        else:
            return 503, json.dumps({"status": "not healthy"})

    @tomodachi.http_error(status_code=400)
    async def error_400(self, request):
        return json.dumps({"error": "bad-request"})

    @tomodachi.http_error(status_code=404)
    async def error_404(self, request):
        return json.dumps({"error": "not-found"})

    @tomodachi.http_error(status_code=405)
    async def error_405(self, request):
        return json.dumps({"error": "method-not-allowed"})
```

Building and running the container, forwarding the host's port 31337 to port 80:

```shell
local ~/code/service$ docker build . -t tomodachi-microservice
# > Sending build context to Docker daemon  9.216kB
# > Step 1/7 : FROM python:3.10-bullseye
# > 3.10-bullseye: Pulling from library/python
# > ...
# >  ---> 3f7f3ab065d4
# > Step 7/7 : CMD ["tomodachi", "run", "service.py"]
# >  ---> Running in b8dfa9deb243
# > Removing intermediate container b8dfa9deb243
# >  ---> 8f09a3614da3
# > Successfully built 8f09a3614da3
# > Successfully tagged tomodachi-microservice:latest

local ~/code/service$ docker run -ti -p 31337:80 tomodachi-microservice
```


Making requests to the running container

```shell
local ~$ curl http://127.0.0.1:31337/ | jq
# {
#   "data": "hello world!",
#   "execution_context": {
#     "tomodachi_version": "x.x.xx",
#     "python_version": "3.x.x",
#     "system_platform": "Linux",
#     "process_id": 1,
#     "init_timestamp": "2022-10-16T13:38:01.201509Z",
#     "event_loop": "asyncio",
#     "http_enabled": true,
#     "http_current_tasks": 1,
#     "http_total_tasks": 1,
#     "aiohttp_version": "x.x.xx"
#   }
# }

local ~$ curl http://127.0.0.1:31337/health -i
# > HTTP/1.1 200 OK
# > Content-Type: application/json; charset=utf-8
# > Server: tomodachi
# > Content-Length: 21
# > Date: Sun, 16 Oct 2022 13:40:44 GMT
# >
# > {"status": "healthy"}

local ~$ curl http://127.0.0.1:31337/no-route -i
# > HTTP/1.1 404 Not Found
# > Content-Type: application/json; charset=utf-8
# > Server: tomodachi
# > Content-Length: 22
# > Date: Sun, 16 Oct 2022 13:41:18 GMT
# >
# > {"error": "not-found"}
```

It's actually as easy as that to get something spinning. The hard part is usually to figure out (or decide) what to build next.

Other popular ways of running microservices are of course to use them as serverless functions, with an ability of scaling to zero (Lambda, Cloud Functions, Knative, etc. may come to mind). Currently tomodachi works best in a container setup, and until a proper serverless supporting execution context is available in the library, it's advised to hold off and use other tech for those kinds of deployments.


Available built-ins used as endpoints 🚀

As shown, there's different ways to trigger your microservice function, of which the most common ones are either directly via HTTP or via event based messaging (for example AMQP or AWS SNS+SQS). Here's a list of the currently available built-ins you may use to decorate your service functions.

HTTP endpoints

@tomodachi.http

```python
@tomodachi.http(method, url, ignore_logging=[200])
def handler(self, request, *args, **kwargs):
    ...
```

Sets up an HTTP endpoint for the specified method (GET, PUT, POST, DELETE) on the regexp url. Optionally specify ignore_logging as a dict or tuple containing the status codes you do not wish to log the access of.

Can also be set to True to ignore everything except status code 500.
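The described ignore_logging semantics can be sketched as a small predicate (illustrative of the behaviour above, not tomodachi's internal code):

```python
from typing import Sequence, Union


def should_log_access(status_code: int, ignore_logging: Union[bool, Sequence[int]] = False) -> bool:
    # ignore_logging=True silences everything except status code 500.
    if ignore_logging is True:
        return status_code == 500
    if ignore_logging is False:
        return True
    # Otherwise a tuple / dict of status codes whose access should not be logged.
    return status_code not in ignore_logging


print(should_log_access(200, ignore_logging=[200]))  # False
print(should_log_access(404, ignore_logging=[200]))  # True
print(should_log_access(500, ignore_logging=True))   # True
```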


@tomodachi.http_static

```python
@tomodachi.http_static(path, url)
def handler(self, request, *args, **kwargs):
    # noop
    pass
```

Sets up an HTTP endpoint for static content available as GET / HEAD from the path on disk on the base regexp url.


@tomodachi.websocket

```python
from typing import Union


@tomodachi.websocket(url)
def handler(self, request, *args, **kwargs):
    async def _receive(data: Union[str, bytes]) -> None:
        ...

    async def _close() -> None:
        ...

    return _receive, _close
```

Sets up a websocket endpoint on the regexp url. The invoked function is called upon websocket connection and should return a two value tuple containing callables for a function receiving frames (first callable) and a function called on websocket close (second callable).

The passed arguments to the function beside the class object are first the websocket response connection, which can be used to send frames to the client, and optionally also the request object.


@tomodachi.http_error

```python
@tomodachi.http_error(status_code)
def handler(self, request, *args, **kwargs):
    ...
```

A function which will be called if the HTTP request would result in a 4XX status_code. You may use this for example to set up a custom handler on "404 Not Found" or "403 Forbidden" responses.


AWS SNS+SQS messaging

@tomodachi.aws_sns_sqs

```python
@tomodachi.aws_sns_sqs(
    topic=None,
    competing=True,
    queue_name=None,
    filter_policy=FILTER_POLICY_DEFAULT,
    visibility_timeout=VISIBILITY_TIMEOUT_DEFAULT,
    dead_letter_queue_name=DEAD_LETTER_QUEUE_DEFAULT,
    max_receive_count=MAX_RECEIVE_COUNT_DEFAULT,
    fifo=False,
    max_number_of_consumed_messages=MAX_NUMBER_OF_CONSUMED_MESSAGES,
    **kwargs,
)
def handler(self, data, *args, **kwargs):
    ...
```

Topic and Queue

This would set up an AWS SQS queue, subscribing to messages on the AWS SNS topic topic (if a topic is specified), whereafter it will start consuming messages from the queue. The topic value can be omitted in order to make the service consume messages from an existing queue, without setting up an SNS topic subscription.

Thecompeting value is used when the same queue name should beused for several services of the same type and thus "compete" forwho should consume the message. Sincetomodachi version 0.19.xthis value has a changed default value and will now default toTrue as this is the most likely use-case for pub/sub indistributed architectures.

Unlessqueue_name is specified an auto generated queue name willbe used. Additional prefixes to bothtopic andqueue_name can beassigned by setting theoptions.aws_sns_sqs.topic_prefix andoptions.aws_sns_sqs.queue_name_prefix dict values.

FIFO queues + max number of consumed messages

AWS supports two types of queues and topics, namely standard and FIFO. The major difference between these is that the latter guarantees correct ordering and exactly-once processing. By default, tomodachi creates standard queues and topics. To create them as FIFO instead, set fifo to True.

The max_number_of_consumed_messages setting determines how many messages should be pulled from the queue at once. This is useful if you have a resource-intensive task that you don't want other messages to compete with. The default value is 10 for standard queues and 1 for FIFO queues. The minimum value is 1 and the maximum value is 10.
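The defaults and bounds above can be sketched as a small helper (illustrative only; the function name and the clamping are assumptions for the example, not tomodachi API):

```python
def effective_max_messages(fifo, requested=None):
    # Illustrative sketch of the documented defaults: 10 messages per
    # receive call for standard queues, 1 for FIFO queues, within the
    # allowed range of 1..10.
    value = requested if requested is not None else (1 if fifo else 10)
    return max(1, min(10, value))

effective_max_messages(fifo=False)               # 10 (standard queue default)
effective_max_messages(fifo=True)                # 1 (FIFO queue default)
effective_max_messages(fifo=True, requested=5)   # 5
```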

Filter policy

The filter_policy value, specified as a keyword argument, will be applied on the SNS subscription (for the specified topic and queue) as the "FilterPolicy" attribute. This will apply a filter on SNS messages using the chosen "message attributes" and/or their values specified in the filter. Note that the filter policy dict structure differs somewhat from the actual message attributes, as values for the keys in the filter policy must be a dict (object) or list (array).

Example: A filter policy value of {"event": ["order_paid"], "currency": ["EUR", "USD"]} would set up the SNS subscription to receive messages on the topic only where the message attribute "event" is "order_paid" and the "currency" value is either "EUR" or "USD".

If filter_policy is not specified as an argument (the default), the queue will receive messages on the topic as per the already existing subscription's specification, or receive all messages on the topic if a new subscription is set up (the default). Changing the filter_policy on an existing subscription may take several minutes to propagate.
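To make the matching semantics concrete, here is a minimal sketch of how such a filter policy is evaluated against message attributes (illustrative only - AWS evaluates the policy server-side, and real filter policies support additional operators):

```python
def matches_filter_policy(filter_policy, message_attributes):
    # Illustrative sketch (not AWS's implementation): every key in the
    # filter policy must exist among the message attributes, and the
    # attribute value must be one of the allowed values for that key.
    return all(
        key in message_attributes and message_attributes[key] in allowed
        for key, allowed in filter_policy.items()
    )

policy = {"event": ["order_paid"], "currency": ["EUR", "USD"]}
matches_filter_policy(policy, {"event": "order_paid", "currency": "EUR"})     # True
matches_filter_policy(policy, {"event": "order_created", "currency": "EUR"})  # False
matches_filter_policy(policy, {"event": "order_paid"})                        # False: "currency" missing
```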

Read more about the filter policy format in the AWS SNS documentation.

Related to the above mentioned filter policy, the tomodachi.aws_sns_sqs_publish function (which is used for publishing messages to SNS) and the tomodachi.sqs_send_message function (which sends messages directly to SQS) can specify "message attributes" using the message_attributes keyword argument. Values should be specified as a simple dict with keys and values.

Example: {"event": "order_paid", "paid_amount": 100, "currency": "EUR"}.

Visibility timeout

The visibility_timeout value will set the queue attribute VisibilityTimeout if specified. To use already defined values for a queue (the default), do not supply any value to the visibility_timeout keyword - tomodachi will then not modify the visibility timeout.

DLQ: Dead-letter queue

Similarly, the value for dead_letter_queue_name, in tandem with the max_receive_count value, will modify the queue attribute RedrivePolicy in regards to the potential use of a dead-letter queue to which messages will be delivered if they have been picked up by consumers max_receive_count number of times but haven't been deleted from the queue.

The value for dead_letter_queue_name should either be an ARN for an SQS queue, in which case the queue must have been created in advance, or an alphanumeric queue name, in which case it will be set up similarly to the queue name you specify in regards to prefixes, etc.

Both dead_letter_queue_name and max_receive_count need to be specified together, as they both affect the redrive policy. To disable the use of a DLQ, use a None value for the dead_letter_queue_name keyword and the RedrivePolicy will be removed from the queue attributes.

To use the already defined values for a queue, do not supply any values to the keyword arguments in the decorator. tomodachi will then not modify the queue attribute and leave it as is.
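For reference, the RedrivePolicy queue attribute that these keywords control is a small JSON document pointing at the DLQ. A sketch of its shape (illustrative - tomodachi manages this attribute for you, and the ARN below is a made-up example):

```python
import json


def build_redrive_policy(dead_letter_queue_arn, max_receive_count):
    # The SQS "RedrivePolicy" attribute names the DLQ target ARN and the
    # receive count after which undeleted messages move there.
    return json.dumps({
        "deadLetterTargetArn": dead_letter_queue_arn,
        "maxReceiveCount": max_receive_count,
    })


build_redrive_policy("arn:aws:sqs:us-east-1:123456789012:example-dlq", 5)
```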

Message envelope

Depending on the service's message_envelope (previously named message_protocol) attribute, if used, parts of the enveloped data will be distributed to different keyword arguments of the decorated function. It's usually safe to just use data as an argument. You can also specify a message_envelope value as a keyword argument to the decorator, to use a specific enveloping method instead of the global one set for the service.

If you're utilizing from tomodachi.envelope import ProtobufBase and using ProtobufBase as the specified service message_envelope, you may also pass a keyword argument proto_class into the decorator, describing the protobuf (Protocol Buffers) generated Python class to use for decoding incoming messages. Custom enveloping classes can be built to fit your existing architecture or for even more control of tracing and shared metadata between services.
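As an illustration of what an envelope does, here is a hypothetical minimal JSON envelope. The build / parse names are made up for the example and are not tomodachi's envelope interface - the point is only that an envelope wraps outgoing payloads with shared metadata and unwraps incoming ones:

```python
import json
import time
import uuid


class JsonEnvelope:
    # Hypothetical minimal envelope: wraps payloads with shared metadata
    # on build, and extracts the payload again on parse, so handlers can
    # receive just the "data" part as an argument.
    @staticmethod
    def build(data):
        return json.dumps({
            "message_id": uuid.uuid4().hex,
            "timestamp": time.time(),
            "data": data,
        })

    @staticmethod
    def parse(payload):
        return json.loads(payload)["data"]
```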

Encryption at rest via AWS KMS

Encryption at rest for AWS SNS and/or AWS SQS can optionally be configured by specifying the KMS key alias or KMS key id as the tomodachi service options options.aws_sns_sqs.sns_kms_master_key_id (to configure encryption at rest on the SNS topics for which the tomodachi service handles the SNS -> SQS subscriptions) and options.aws_sns_sqs.sqs_kms_master_key_id (to configure encryption at rest for the SQS queues which the service is consuming).

Note that an option value set to an empty string ("") or False will unset the KMS master key id and thus disable encryption at rest. If instead an option is completely unset or set to a None value, no changes will be made to the KMS related attributes on an existing topic or queue.

It's generally not advised to change the KMS master key id / alias values for resources currently in use.

If it's expected that the services themselves, via their IAM credentials or assumed role, are responsible for creating queues and topics, these options could be desirable to use.

Do not use these options if you instead are using IaC tooling to manage the topics, queues and subscriptions, or if they for example are created / updated as part of deployments.

See further details about AWS KMS for AWS SNS+SQS in the AWS documentation.


AMQP messaging (RabbitMQ)

@tomodachi.amqp

```python
@tomodachi.amqp(
    routing_key,
    exchange_name="amq.topic",
    competing=True,
    queue_name=None,
    **kwargs,
)
def handler(self, data, *args, **kwargs):
    ...
```

Routing key, Exchange and Queue

Sets up the method to be called whenever an AMQP / RabbitMQ message is received for the specified routing_key. By default the 'amq.topic' topic exchange will be used; it may also be overridden by setting the options.amqp.exchange_name dict value on the service class.

The competing value is used when the same queue name should be used for several services of the same type, which thus "compete" for who should consume the message. Since tomodachi version 0.19.x this value has a changed default and now defaults to True, as this is the most likely use-case for pub/sub in distributed architectures.

Unless queue_name is specified, an auto generated queue name will be used. Additional prefixes to both routing_key and queue_name can be assigned by setting the options.amqp.routing_key_prefix and options.amqp.queue_name_prefix dict values.

Message envelope

Depending on the service's message_envelope (previously named message_protocol) attribute, if used, parts of the enveloped data will be distributed to different keyword arguments of the decorated function. It's usually safe to just use data as an argument. You can also specify a message_envelope value as a keyword argument to the decorator, to use a specific enveloping method instead of the global one set for the service.

If you're utilizing from tomodachi.envelope import ProtobufBase and using ProtobufBase as the specified service message_envelope, you may also pass a keyword argument proto_class into the decorator, describing the protobuf (Protocol Buffers) generated Python class to use for decoding incoming messages. Custom enveloping classes can be built to fit your existing architecture or for even more control of tracing and shared metadata between services.


Scheduled functions / cron / triggered on time interval

@tomodachi.schedule

```python
@tomodachi.schedule(
    interval=None,
    timestamp=None,
    timezone=None,
    immediately=False,
)
def handler(self, *args, **kwargs):
    ...
```

A scheduled function invoked on either a specified interval (you may use the popular cron notation as a str for fine-grained intervals, or specify an integer value of seconds) or at a specific timestamp. The timezone will default to your local time unless explicitly stated.

When using an integer interval you may also specify whether the function should be called immediately on service start or wait the full interval seconds before its first invocation.


@tomodachi.heartbeat

```python
@tomodachi.heartbeat
def handler(self, *args, **kwargs):
    ...
```

A function which will be invoked every second.


@tomodachi.minutely / @tomodachi.hourly / @tomodachi.daily / @tomodachi.monthly

```python
@tomodachi.minutely
@tomodachi.hourly
@tomodachi.daily
@tomodachi.monthly
def handler(self, *args, **kwargs):
    ...
```

A scheduled function which will be invoked once every minute / hour / day / month.


Scheduled tasks in distributed contexts

Consider your use-case before scheduling function triggers or functions that trigger on an interval. These types of scheduling may not be optimal in clusters with many pods in the same replication set, as all the services running the same code will very likely execute at the same timestamp / interval (which in some cases may correlate with exactly when they were last deployed). As such these functions are quite naive and should only be used with some care, so that triggering the functions several times doesn't incur unnecessary costs or come as a bad surprise if the functions aren't completely idempotent.

Performing a task at a specific timestamp or on an interval where only one of the available services of the same type in a cluster should trigger it is a common thing to solve, and to do so some kind of distributed consensus needs to be reached. Tooling exists, but what you need may differ depending on your use-case. There are algorithms for distributed consensus and leader election, such as Paxos or Raft, that luckily have already been implemented in solutions like the strongly consistent and distributed key-value stores etcd and TiKV.

Even primitive solutions such as Redis SETNX commands would work, but could be costly or hard to manage access levels around. If you're on k8s there's even a simple "leader election" API available that just creates a 15 second lease. Solutions are many, and if you are in need, go hunting and find one that suits your use-case; there's probably tooling and libraries available to call it from your service functions.

Implementing proper consensus mechanisms, and in turn leader election, can be complicated. In distributed environments the architecture around these solutions needs to account for leases, decision making when consensus is not reached, how to handle crashed executors, quick recovery on master node(s) disruptions, etc.


To extend the functionality by building your own trigger decorators for your endpoints, studying the built-in invoker classes should be the first step of action. All invoker classes should extend the class for a common developer experience: tomodachi.invoker.Invoker.


Function signatures - keywords with transport centric values 🪄

Function handlers, middlewares and envelopes can specify additional keyword arguments in their signatures and receive transport centric values.

The following keywords can be used across all kinds of handler functions, envelopes and envelopes parsing messages. These can be used to structure apps, logging, tracing, authentication, building more advanced messaging logic, etc.

AWS SNS+SQS related values - function signature keyword arguments

Use the following keyword arguments in function signatures (for handlers, middlewares and envelopes used for AWS SNS+SQS messages).

  • message_attributes - Values specified as message attributes that accompany the message body and that are, among other things, used for SNS subscription filter policies and for distributed tracing.
  • queue_url - Can be used to modify visibility of messages, provide exponential backoffs, move messages to DLQs, etc.
  • receipt_handle - Can be used to modify visibility of messages, provide exponential backoffs, move messages to DLQs, etc.
  • approximate_receive_count - A value that specifies approximately how many times this message has been received from consumers on SQS.ReceiveMessage calls. Handlers that receive a message but don't delete it from the queue (for example in order to make it visible for other consumers, or in case of errors) will add to this count each time they receive it.
  • topic - Simply the name of the SNS topic. For messages sent directly to the queue (for example via SQS.SendMessage API calls), instead of via SNS topic subscriptions (SNS.Publish), the value of topic will be an empty string.
  • sns_message_id - The message identifier for the SNS message (which is usually embedded in the body of an SQS message). The SNS message identifier is the same as the one returned in the response when publishing a message with SNS.Publish. The sns_message_id is read from within the "Body" of SQS messages, if the message body contains a message that comes from an SNS topic subscription. If the SQS message doesn't originate from SNS (i.e. the message isn't of type "Notification" with a "TopicArn" value), then sns_message_id will be an empty string.
  • sqs_message_id - The SQS message identifier, which naturally will differ from the SNS message identifier, as one SNS message can be propagated to several SQS queues. The sqs_message_id is read from the "MessageId" value at the top of the SQS message.
  • message_type - Returns the "Type" value from the message body. For messages consumed from a queue that were sent there from an SNS topic, the message_type will be "Notification".
  • raw_message_body - Returns the full contents (as a string) of "Body", which can be used to implement custom listeners, tailored for more advanced workflows where more flexibility is needed.
  • message_timestamp - A timestamp of when the original SNS message was published.
  • message_deduplication_id - The deduplication id for messages in FIFO queues (or None for messages in non-FIFO queues).
  • message_group_id - The group id for messages in FIFO queues (or None for messages in non-FIFO queues).
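As an example of how queue_url, receipt_handle and approximate_receive_count can combine, here is a hypothetical helper that computes an exponential backoff to use as a new visibility timeout when changing message visibility (via SQS.ChangeMessageVisibility, using queue_url and receipt_handle). The helper and its parameters are illustrative, not part of tomodachi:

```python
def backoff_visibility_timeout(receive_count, base=5, cap=300):
    # Illustrative exponential backoff in seconds, derived from the
    # approximate receive count of a message: 5, 10, 20, 40, ... capped
    # at 5 minutes. Usable as a VisibilityTimeout value for retries.
    return min(cap, base * 2 ** (receive_count - 1))

backoff_visibility_timeout(1)   # 5
backoff_visibility_timeout(4)   # 40
backoff_visibility_timeout(10)  # 300 (capped)
```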

HTTP related values - function signature keyword arguments

Use the following keyword arguments in function signatures (for handlers and middlewares used for HTTP requests).

  • request - The aiohttp request object, which holds functionality for all things HTTP requests.
  • status_code - Specified when predefined error handlers are run. When using the keyword in handlers and middlewares for requests that don't invoke error handlers, it should preferably be specified with a default value, to ensure it works for both error handlers and request router handlers.
  • websocket - Will be added to websocket requests if used.

Middlewares for HTTP and messaging (AWS SNS+SQS, AMQP, etc.) 🧱

Middlewares can be used to add functionality to the service, for example to add logging, authentication, tracing, build more advanced logic for messaging, unpack request queries, modify HTTP responses, handle uncaught errors, add additional context to handlers, etc.

Custom middleware functions, or objects that can be called, are added to the service by specifying them as a list in the http_middleware and message_middleware attributes of the service class.

```python
from .middleware import logger_middleware

class Service(tomodachi.Service):
    name = "middleware-example"
    http_middleware = [logger_middleware]
    ...
```

Middlewares are invoked as a stack, in the order they are specified in http_middleware or message_middleware, with the first callable in the list being called first (and then also returning last).

Provided arguments to middleware functions

  1. The first unbound argument of a middleware function will receive the coroutine function to call next (which is either the handler function or a function for the next middleware in the chain). (recommended name: func)
  2. (optional) The second unbound argument of a middleware function will receive the service class object. (recommended name: service)
  3. (optional) The third unbound argument of a middleware function will receive the request object for HTTP middlewares, or the message (as parsed by the envelope) for message middlewares. (recommended name: request or message)

Use the recommended names to prevent collisions with passed keywords for transport centric values, which are also sent to the middleware if the keyword arguments are defined in the function signature.

Calling the handler or the next middleware in the chain

When calling the next function in the chain, the middleware should call it as an awaitable function (await func()), and for HTTP middlewares the result should most commonly be returned.
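The call order can be illustrated with a self-contained sketch of such a middleware chain (plain asyncio, no tomodachi required - the chain runner below is an illustration of the pattern, not tomodachi's internals):

```python
import asyncio

async def handler():
    return "handled"

async def outer(func):
    # First middleware in the list: invoked first, returns last.
    return "outer(" + await func() + ")"

async def inner(func):
    return "inner(" + await func() + ")"

async def run_chain(middlewares, handler):
    # Each middleware receives a coroutine function that calls the next
    # middleware in the chain (or, at the end of the chain, the handler).
    async def call(index):
        if index >= len(middlewares):
            return await handler()
        return await middlewares[index](lambda: call(index + 1))
    return await call(0)

result = asyncio.run(run_chain([outer, inner], handler))
print(result)  # outer(inner(handled))
```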

Adding custom arguments passed on to the handler

The function can be called with any number of custom keyword arguments, which will then be passed to each following middleware and the handler itself. This pattern works a bit like how contextvars can be set up, but can be useful for passing values and objects instead of keeping them in a global context.

```python
async def logger_middleware(func: Callable[..., Awaitable], *, traceid: str = "") -> Any:
    if not traceid:
        traceid = uuid.uuid4().hex
    logger = Logger(traceid=traceid)
    # Passes the logger and traceid to following middlewares and to the handler
    return await func(logger=logger, traceid=traceid)
```

A middleware can only add new keywords or modify the values of existing keyword arguments (by passing them through again with the new value). The exception is that passed keywords for transport centric values will be ignored - their values cannot be modified - they will retain their original value.

While a middleware can modify the values of custom keyword arguments, there is no way for a middleware to completely remove any keyword that has been added by previous middlewares.

Example of a middleware specified as a function that adds tracing to AWS SQS handlers:

This example portrays a middleware function which adds trace spans around the function, with the trace context populated from a "traceparent" header value collected from an SNS message's message attributes. The topic name and SNS message identifier are also added as attributes to the trace span.

```python
async def trace_middleware(
    func: Callable[..., Awaitable],
    *,
    queue_url: str,
    topic: str,
    message_attributes: dict,
    sns_message_id: str,
    sqs_message_id: str,
) -> None:
    ctx = TraceContextTextMapPropagator().extract(carrier=message_attributes)
    with tracer.start_as_current_span(f"SNSSQS handler '{func.__name__}'", context=ctx) as span:
        span.set_attribute("messaging.system", "aws_sqs")
        span.set_attribute("messaging.operation", "process")
        span.set_attribute("messaging.destination.name", queue_url.rsplit("/")[-1])
        span.set_attribute("messaging.destination_publish.name", topic or queue_url.rsplit("/")[-1])
        span.set_attribute("messaging.message.id", sns_message_id or sqs_message_id)
        try:
            # Calls the handler function (or next middleware in the chain)
            await func()
        except BaseException as exc:
            logging.getLogger("exception").exception(exc)
            span.record_exception(exc, escaped=True)
            span.set_status(StatusCode.ERROR, f"{exc.__class__.__name__}: {exc}")
            raise exc
```
```python
from .middleware import trace_middleware
from .envelope import Event, MessageEnvelope

class Service(tomodachi.Service):
    name = "middleware-example"
    message_envelope = MessageEnvelope(key="event")
    message_middleware = [trace_middleware]

    @tomodachi.aws_sns_sqs("example-topic", queue_name="example-queue")
    async def handler(self, event: Event) -> None:
        ...
```

Example of a middleware specified as a class:

A middleware can also be specified as an object of a class, in which case the __call__ method of the object will be invoked as the middleware function. Note that the bound self argument has to be included in the signature, as __call__ is called as a normal class function.

This class provides a simplistic basic auth implementation, validating credentials in the HTTP Authorization header for HTTP requests to the service.

```python
class BasicAuthMiddleware:
    def __init__(self, username: str, password: str) -> None:
        self.valid_credentials = base64.b64encode(f"{username}:{password}".encode()).decode()

    async def __call__(
        self,
        func: Callable[..., Awaitable[web.Response]],
        *,
        request: web.Request,
    ) -> web.Response:
        try:
            auth = request.headers.get("Authorization", "")
            encoded_credentials = auth.split()[-1] if auth.startswith("Basic ") else ""
            if encoded_credentials == self.valid_credentials:
                username = base64.b64decode(encoded_credentials).decode().split(":")[0]
                # Calls the handler function (or next middleware in the chain).
                # The handler (and following middlewares) can use username in their signature.
                return await func(username=username)
            elif auth:
                return web.json_response({"status": "bad credentials"}, status=401)
            return web.json_response({"status": "auth required"}, status=401)
        except BaseException as exc:
            try:
                logging.getLogger("exception").exception(exc)
                raise exc
            finally:
                return web.json_response({"status": "internal server error"}, status=500)
```
```python
from .middleware import BasicAuthMiddleware

class Service(tomodachi.Service):
    name = "middleware-example"
    http_middleware = [BasicAuthMiddleware(username="example", password="example")]

    @tomodachi.http("GET", r"/")
    async def handler(self, request: web.Request, username: str) -> web.Response:
        ...
```

Logging and log formatting using thetomodachi.logging module 📚

A context aware logger is available from the tomodachi.logging module and can be fetched with tomodachi.logging.get_logger() or just tomodachi.get_logger() for short.

The logger is initiated using the popular structlog package (see the structlog documentation) and can be used in the same way as the standard library logger, with a few additional features, such as holding a context and logging of additional values.

The logger returned from tomodachi.get_logger() will hold the context of the current handler task or request, for rich contextual log records.

To get a logger with another name than the logger set for the current context, use tomodachi.get_logger(name="my-logger").

```python
from typing import Any

import tomodachi

class Service(tomodachi.Service):
    name = "service"

    @tomodachi.aws_sns_sqs("test-topic", queue_name="test-queue")
    async def sqs_handler(self, data: Any, topic: str, sns_message_id: str) -> None:
        tomodachi.get_logger().info("received msg", topic=topic, sns_message_id=sns_message_id)
```

The log record will be enriched with the context of the current handler task or request, and the output should look something like this if the json formatter is used (note that the example output below has been prettified - the JSON that is actually emitted puts the entire log entry on one single line):

```json
{
    "timestamp": "2023-08-13T17:44:09.176295Z",
    "logger": "tomodachi.awssnssqs.handler",
    "level": "info",
    "message": "received msg",
    "handler": "sqs_handler",
    "type": "tomodachi.awssnssqs",
    "topic": "test-topic",
    "sns_message_id": "a1eba63e-8772-4b36-b7e0-b2f524f34bff"
}
```

Interactions with Python's built-inlogging module

Note that the log entries are propagated to the standard library logger (as long as they weren't filtered), in order to allow third party handler hooks to pick up records or act on them. This makes sure that integrations such as Sentry's exception tracing will work out of the box.

Similarly, the tomodachi logger will by default also receive records from the standard library logger, as it adds a logging.root handler, so that the tomodachi logger can be used as a drop-in replacement for the standard library logger. Because of this, third party modules using Python's default logging module will use the same formatter as tomodachi. Note that if logging.basicConfig() is called before the tomodachi logger is initialized, tomodachi may not be able to add its logging.root handler.

Note that when using the standard library logger directly, the contextual logger won't be selected by default.

```python
import logging

from aiohttp.web import Request, Response

import tomodachi

class Service(tomodachi.Service):
    name = "service"

    @tomodachi.http("GET", r"/example")
    async def http_handler(self, request: Request) -> Response:
        # contextual logger
        tomodachi.get_logger().info("http request")

        # these two rows result in similar log records
        logging.getLogger("service.logger").info("with logging module")
        tomodachi.get_logger("service.logger").info("with tomodachi.logging module")

        # extra fields from the built-in logger end up as "extra" in log records
        logging.getLogger("service.logger").info("adding extra", extra={
            "http_request_path": request.path
        })

        return Response(body="hello world")
```

A GET request to /example of this service would result in five log records being emitted (shown here formatted with the json formatter): the four from the example above and a final one from the tomodachi.transport.http module.

```json
{"timestamp": "2023-08-13T19:25:15.923627Z", "logger": "tomodachi.http.handler", "level": "info", "message": "http request", "handler": "http_handler", "type": "tomodachi.http"}
{"timestamp": "2023-08-13T19:25:15.923894Z", "logger": "service.logger", "level": "info", "message": "with logging module"}
{"timestamp": "2023-08-13T19:25:15.924043Z", "logger": "service.logger", "level": "info", "message": "with tomodachi.logging module"}
{"timestamp": "2023-08-13T19:25:15.924172Z", "logger": "service.logger", "level": "info", "message": "adding extra", "extra": {"http_request_path": "/example"}}
{"timestamp": "2023-08-13T19:25:15.924507Z", "logger": "tomodachi.http.response", "level": "info", "message": "", "status_code": 200, "remote_ip": "127.0.0.1", "request_method": "GET", "request_path": "/example", "http_version": "HTTP/1.1", "response_content_length": 11, "user_agent": "curl/7.88.1", "handler_elapsed_time": "0.00135s", "request_time": "0.00143s"}
```

Configuring the logger

Start the service using the --logger json argument (or set the TOMODACHI_LOGGER=json environment value) to change the log formatter to use the json log formatter. The default log formatter, console, is mostly suited for local development environments as it provides a structured and colorized view of log records.

It's also possible to use your own logger implementation by specifying --custom-logger ... (or setting the TOMODACHI_CUSTOM_LOGGER=... environment value).

Read more about how to start the service with another formatter or implementation in the usage section.


Using OpenTelemetry instrumentation

Install tomodachi using the opentelemetry extras to enable instrumentation for OpenTelemetry. In addition, install with the opentelemetry-exporter-prometheus extras to use the Prometheus exporter for metrics.

```shell
local ~$ pip install tomodachi[opentelemetry]
local ~$ pip install tomodachi[opentelemetry,opentelemetry-exporter-prometheus]
```

When added as a Poetry dependency, the opentelemetry extras can be enabled by adding tomodachi = {extras = ["opentelemetry"]} to the pyproject.toml file, and when added to a requirements.txt file the opentelemetry extras can be enabled by adding tomodachi[opentelemetry] to the file.

Auto instrumentation: tomodachi --opentelemetry-instrument

Passing the --opentelemetry-instrument argument to tomodachi run will automatically instrument the service with the appropriate exporters and configuration according to the set OTEL_* environment variables.

If tomodachi is installed in the environment, using tomodachi --opentelemetry-instrument service.py is mostly equivalent to running opentelemetry-instrument tomodachi run service.py and will load distros, configurators and instrumentors automatically in the same way as the opentelemetry-instrument CLI would do.

```shell
local ~$ OTEL_LOGS_EXPORTER=console \
    OTEL_TRACES_EXPORTER=console \
    OTEL_METRICS_EXPORTER=console \
    OTEL_SERVICE_NAME=example-service \
    tomodachi --opentelemetry-instrument run service/app.py
```

The environment variable TOMODACHI_OPENTELEMETRY_INSTRUMENT, if set, will also enable auto instrumentation in the same way.

```shell
local ~$ OTEL_LOGS_EXPORTER=console \
    OTEL_TRACES_EXPORTER=console \
    OTEL_METRICS_EXPORTER=console \
    OTEL_SERVICE_NAME=example-service \
    TOMODACHI_OPENTELEMETRY_INSTRUMENT=1 \
    tomodachi run service/app.py
```

Auto instrumentation using the opentelemetry-instrument CLI

Auto instrumentation using the opentelemetry-instrument CLI can be achieved by starting services using opentelemetry-instrument [otel-options] tomodachi run [options] <service.py ...>.

```shell
# either define the OTEL_* environment variables to specify instrumentation specification
local ~$ OTEL_LOGS_EXPORTER=console \
    OTEL_TRACES_EXPORTER=console \
    OTEL_METRICS_EXPORTER=console \
    OTEL_SERVICE_NAME=example-service \
    opentelemetry-instrument tomodachi run service/app.py

# or use the arguments passed to the opentelemetry-instrument command
local ~$ opentelemetry-instrument \
    --logs_exporter console \
    --traces_exporter console \
    --metrics_exporter console \
    --service_name example-service \
    tomodachi run service/app.py
```

Manual instrumentation

Auto instrumentation - using either tomodachi --opentelemetry-instrument, setting the TOMODACHI_OPENTELEMETRY_INSTRUMENT=1 env value, or using the opentelemetry-instrument CLI - is the recommended way of instrumenting services, as it will automatically instrument the service (and libs with instrumentors installed) with the appropriate exporters and configuration.

However, instrumentation can also be enabled by importing the TomodachiInstrumentor instrumentation class and calling its instrument function.

```python
import tomodachi
from tomodachi.opentelemetry import TomodachiInstrumentor

TomodachiInstrumentor().instrument()

class Service(tomodachi.Service):
    name = "example-service"

    @tomodachi.http("GET", r"/example")
    async def example(self, request):
        return 200, "hello world"
```

Starting such a service with the appropriate OTEL_* environment variables would properly instrument traces, logs and metrics for the service without the need to use the opentelemetry-instrument CLI.

```shell
local ~$ OTEL_LOGS_EXPORTER=console \
    OTEL_TRACES_EXPORTER=console \
    OTEL_METRICS_EXPORTER=console \
    OTEL_SERVICE_NAME=example-service \
    tomodachi run service/app.py
```

Service name dynamically set if the OTEL_SERVICE_NAME value is missing

If the OTEL_SERVICE_NAME environment variable value (or the --service_name argument to opentelemetry-instrument) is not set, the resource's service.name will instead be set to the name attribute of the service class. In case the service class uses one of the default generic names (service or app), the resource's service.name will instead be set to the default as specified in https://github.com/open-telemetry/semantic-conventions/tree/main/docs/resource#service.

In the rare case where there are multiple tomodachi services started within the same Python process, it should be noted that OTEL traces, metrics and logging will primarily use OTEL_SERVICE_NAME, and if it's missing then use the name from the first instrumented service class. The same goes for the service.instance.id resource attribute, which will be set to the first instrumented service class' uuid value (which in most cases is automatically assigned on service start). Multi-service execution won't accurately distinguish the service name of tracers, meters and loggers. The recommended solution, if this is an issue, is to split the services into separate processes instead.

Exclude lists to exclude certain URLs from traces and metrics

To exclude certain URLs from traces and metrics, set the environment variable OTEL_PYTHON_TOMODACHI_EXCLUDED_URLS (or OTEL_PYTHON_EXCLUDED_URLS to cover all instrumentations) to a string of comma delimited regexes that match the URLs.

Regexes from the OTEL_PYTHON_AIOHTTP_EXCLUDED_URLS environment variable will also be excluded.

For example,

```shell
export OTEL_PYTHON_TOMODACHI_EXCLUDED_URLS="client/.*/info,healthcheck"
```

will exclude requests such as https://site/client/123/info and https://site/xyz/healthcheck.
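The matching can be illustrated with a small sketch (assuming re.search semantics, which is consistent with the examples above; the helper itself is illustrative, not part of tomodachi):

```python
import re

excluded_urls = "client/.*/info,healthcheck"
patterns = [re.compile(p) for p in excluded_urls.split(",")]

def is_excluded(url):
    # Illustrative: a URL is excluded when any of the configured regexes
    # matches a part of it (re.search semantics).
    return any(p.search(url) for p in patterns)

is_excluded("https://site/client/123/info")  # True
is_excluded("https://site/xyz/healthcheck")  # True
is_excluded("https://site/orders")           # False
```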

You can also pass comma delimited regexes directly to the instrument method:

```python
TomodachiInstrumentor().instrument(excluded_urls="client/.*/info,healthcheck")
```

Prometheus meter provider (experimental)

The tomodachi.opentelemetry module also provides a Prometheus meter provider that can be used to export metrics to Prometheus. Run opentelemetry-instrument with the --meter_provider tomodachi_prometheus argument (or set the OTEL_PYTHON_METER_PROVIDER=tomodachi_prometheus environment value) to enable the Prometheus meter provider.

Environment variables to configure Prometheus meter provider

  • `OTEL_PYTHON_TOMODACHI_PROMETHEUS_ADDRESS` specifies the host address the Prometheus export server should listen on. (default: `"localhost"`)
  • `OTEL_PYTHON_TOMODACHI_PROMETHEUS_PORT` specifies the port the Prometheus export server should listen on. (default: `9464`)
  • `OTEL_PYTHON_TOMODACHI_PROMETHEUS_INCLUDE_SCOPE_INFO` specifies whether to include scope information as an `otel_scope_info` value. (default: `true`)
  • `OTEL_PYTHON_TOMODACHI_PROMETHEUS_INCLUDE_TARGET_INFO` specifies whether to include resource attributes as a `target_info` value. (default: `true`)
  • `OTEL_PYTHON_TOMODACHI_PROMETHEUS_EXEMPLARS_ENABLED` specifies whether exemplars (experimental) should be collected and used in Prometheus export. (default: `false`)
  • `OTEL_PYTHON_TOMODACHI_PROMETHEUS_NAMESPACE_PREFIX` specifies the namespace prefix for Prometheus metrics. A final underscore is automatically added if a prefix is used. (default: `""`)

Dependency requirement for Prometheus meter provider

The `tomodachi_prometheus` meter provider requires that the `opentelemetry-exporter-prometheus` and `prometheus_client` packages are installed.

Use the `tomodachi` extra `opentelemetry-exporter-prometheus` to automatically include a compatible version of the exporter.

OpenMetrics output from Prometheus with exemplars enabled

With exemplars enabled, make sure to call the Prometheus client with the accept header `application/openmetrics-text` to ensure exemplars are included in the response.

```shell
curl http://localhost:9464/metrics -H "Accept: application/openmetrics-text"
```

💡 Note that if the accept header `application/openmetrics-text` is missing from the request, exemplars will be excluded from the response.

Example: starting a service with instrumentation

This example will start and instrument a service with OTLP exported traces sent to the endpoint `otelcol:4317` and metrics that can be scraped by Prometheus from port `9464`. All metrics except for `target_info` and `otel_scope_info` will be prefixed with `"tomodachi_"`. Additionally, exemplars with `trace_id` and `span_id` labels will be added to the Prometheus collected metrics.

```shell
TOMODACHI_OPENTELEMETRY_INSTRUMENT=1 \
    OTEL_TRACES_EXPORTER=otlp \
    OTEL_EXPORTER_OTLP_ENDPOINT=otelcol:4317 \
    OTEL_PYTHON_METER_PROVIDER=tomodachi_prometheus \
    OTEL_PYTHON_TOMODACHI_PROMETHEUS_EXEMPLARS_ENABLED=true \
    OTEL_PYTHON_TOMODACHI_PROMETHEUS_ADDRESS=0.0.0.0 \
    OTEL_PYTHON_TOMODACHI_PROMETHEUS_PORT=9464 \
    OTEL_PYTHON_TOMODACHI_PROMETHEUS_NAMESPACE_PREFIX=tomodachi \
    tomodachi run service/app.py
```

Additional configuration options 🤩

In the service class, an attribute named `options` (as a `tomodachi.Options` object) can be set for additional configuration.

```python
import json

import tomodachi


class Service(tomodachi.Service):
    name = "http-example"
    options = tomodachi.Options(
        http=tomodachi.Options.HTTP(
            port=80,
            content_type="application/json; charset=utf-8",
            real_ip_from=[
                "127.0.0.1/32",
                "10.0.0.0/8",
                "172.16.0.0/12",
                "192.168.0.0/16",
            ],
            keepalive_timeout=5,
            max_keepalive_requests=20,
        ),
        watcher=tomodachi.Options.Watcher(
            ignored_dirs=["node_modules"],
        ),
    )

    @tomodachi.http("GET", r"/health")
    async def health_check(self, request):
        return 200, json.dumps({"status": "healthy"})

    # Specify custom 404 catch-all response
    @tomodachi.http_error(status_code=404)
    async def error_404(self, request):
        return json.dumps({"error": "not-found"})
```

Options are read or written via the service's `options` attribute

A service option can be accessed via its configuration key in several ways:

  • `options.http.sub_key` (example: `options.http.port`)
  • `options[f"http.{sub_key}"]` (example: `options["http.port"]`)
  • `options["http"][sub_key]` (example: `options["http"]["port"]`)

The service's `options` attribute is an object of the `tomodachi.Options` type.
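As an illustrative sketch of the three equivalent access styles (assuming `tomodachi` is installed; the options object is built standalone here rather than on a service class, purely for demonstration):

```python
import tomodachi

# Build an options object directly, outside a service class, for illustration.
options = tomodachi.Options(http=tomodachi.Options.HTTP(port=8080))

# The three equivalent ways of reading the same configuration key:
assert options.http.port == 8080
assert options["http.port"] == 8080
assert options["http"]["port"] == 8080
```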

HTTP server parameters

| Configuration key | Description | Default |
| --- | --- | --- |
| `http.port` | TCP port (integer value) to listen for incoming connections. | `9700` |
| `http.host` | Network interface to bind TCP server to. `"0.0.0.0"` will bind to all IPv4 interfaces. `None` or `""` will assume all network interfaces. | `"0.0.0.0"` |
| `http.reuse_port` | If set to `True` (which is also the default value on Linux) the HTTP server will bind to the port using the socket option `SO_REUSEPORT`. This will allow several processes to bind to the same port, which could be useful when running services via a process manager such as `supervisord` or when it's desired to run several processes of a service to utilize additional CPU cores, etc. Note that the `reuse_port` option cannot be used on non-Linux platforms. | `True` on Linux, otherwise `False` |
| `http.keepalive_timeout` | Enables connections to use keep-alive if set to an integer value over `0`. Number of seconds to keep idle incoming connections open. | `0` |
| `http.max_keepalive_requests` | An optional number (int) of requests which is allowed for a keep-alive connection. After the specified number of requests has been done, the connection will be closed. A value of `0` or `None` (default) will allow any number of requests over an open keep-alive connection. | `None` |
| `http.max_keepalive_time` | An optional maximum time in seconds (int) for which keep-alive connections are kept open. If a keep-alive connection has been kept open for more than `http.max_keepalive_time` seconds, the following request will be closed upon returning a response. The feature is not used by default and won't be used if the value is `0` or `None`. A keep-alive connection may otherwise stay open unless inactive for more than the keep-alive timeout. | `None` |
| `http.client_max_size` | The client's maximum request size, as an integer, in bytes. | `(1024 ** 2) * 100` |
| `http.termination_grace_period_seconds` | The number of seconds to wait for functions called via HTTP to gracefully finish execution before terminating the service, for example if the service received a `SIGINT` or `SIGTERM` signal while requests were still awaiting response results. | `30` |
| `http.real_ip_header` | Header to read the value of the client's real IP address from, if the service operates behind a reverse proxy. Only used if `http.real_ip_from` is set and the proxy's IP correlates with the value from `http.real_ip_from`. | `"X-Forwarded-For"` |
| `http.real_ip_from` | IP address(es) or IP subnet(s) / CIDR. Allows the `http.real_ip_header` header value to be used as the client's IP address if the connecting reverse proxy's IP equals a value in the list or is within a specified subnet. For example `["127.0.0.1/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]` would permit the header to be used if the closest reverse proxy is `"127.0.0.1"` or within the three common private network IP address ranges. | `[]` |
| `http.content_type` | Default content-type header to use if not specified in the response. | `"text/plain; charset=utf-8"` |
| `http.access_log` | If set to the default value (boolean) `True` the HTTP access log will be output to stdout (logger `tomodachi.http`). If set to a `str` value, the access log will additionally be stored to file using the value as filename. | `True` |
| `http.server_header` | `"Server"` header value in responses. | `"tomodachi"` |

AWS SNS+SQS credentials and prefixes

| Configuration key | Description | Default |
| --- | --- | --- |
| `aws_sns_sqs.region_name` | The AWS region to use for SNS+SQS pub/sub API requests. | `None` |
| `aws_sns_sqs.aws_access_key_id` | The AWS access key to use for SNS+SQS pub/sub API requests. | `None` |
| `aws_sns_sqs.aws_secret_access_key` | The AWS secret to use for SNS+SQS pub/sub API requests. | `None` |
| `aws_sns_sqs.topic_prefix` | A prefix to any SNS topics used. Could be good to differentiate between different dev environments. | `""` |
| `aws_sns_sqs.queue_name_prefix` | A prefix to any SQS queue names used. Could be good to differentiate between different dev environments. | `""` |
| `aws_sns_sqs.sns_kms_master_key_id` | If set, will set the KMS key (alias or id) to use for encryption at rest on the SNS topics created by the service or subscribed to by the service. Note that an option value set to an empty string (`""`) or `False` will unset the KMS master key id and thus disable encryption at rest. If the option is instead completely unset or set to a `None` value, no changes will be done to the KMS related attributes on an existing topic. | `None` (no changes to KMS settings) |
| `aws_sns_sqs.sqs_kms_master_key_id` | If set, will set the KMS key (alias or id) to use for encryption at rest on the SQS queues created by the service or for which the service consumes messages. Note that an option value set to an empty string (`""`) or `False` will unset the KMS master key id and thus disable encryption at rest. If the option is instead completely unset or set to a `None` value, no changes will be done to the KMS related attributes on an existing queue. | `None` (no changes to KMS settings) |
| `aws_sns_sqs.sqs_kms_data_key_reuse_period` | If set, will set the KMS data key reuse period value on the SQS queues created by the service or for which the service consumes messages. If the option is completely unset or set to a `None` value, no change will be done to the `KMSDataKeyReusePeriod` attribute of an existing queue, which can be desired if it's specified during deployment, manually or as part of infra provisioning. Unless changed, SQS queues using KMS use the default value `300` (seconds). | `None` (no changes to KMS settings) |
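A hedged sketch of how these keys fit together in a service class (the region and prefix values below are illustrative placeholders; in production, credentials are usually better supplied via the environment or an instance role than in code):

```python
import tomodachi


class Service(tomodachi.Service):
    name = "pubsub-example"
    options = tomodachi.Options(
        aws_sns_sqs=tomodachi.Options.AWSSNSSQS(
            region_name="eu-west-1",  # placeholder region
            topic_prefix="dev-",  # differentiates dev topics from other environments
            queue_name_prefix="dev-",
        ),
    )
```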

Custom AWS endpoints (for example during development)

| Configuration key | Description | Default |
| --- | --- | --- |
| `aws_endpoint_urls.sns` | Configurable endpoint URL for AWS SNS, primarily used for integration testing during development using fake services / fake endpoints. | `None` |
| `aws_endpoint_urls.sqs` | Configurable endpoint URL for AWS SQS, primarily used for integration testing during development using fake services / fake endpoints. | `None` |
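For example, a sketch pointing both SNS and SQS at a locally running emulator (the `http://localhost:4566` address assumes a default LocalStack setup; adjust it to your environment):

```python
import tomodachi


class Service(tomodachi.Service):
    name = "local-dev-example"
    options = tomodachi.Options(
        aws_endpoint_urls=tomodachi.Options.AWSEndpointURLs(
            sns="http://localhost:4566",  # assumed local emulator endpoint
            sqs="http://localhost:4566",
        ),
    )
```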

AMQP / RabbitMQ pub/sub settings

| Configuration key | Description | Default |
| --- | --- | --- |
| `amqp.host` | Host address / hostname for the RabbitMQ server. | `"127.0.0.1"` |
| `amqp.port` | Host port for the RabbitMQ server. | `5672` |
| `amqp.login` | Login credentials. | `"guest"` |
| `amqp.password` | Login credentials. | `"guest"` |
| `amqp.exchange_name` | The AMQP exchange name to use in the service. | `"amq_topic"` |
| `amqp.routing_key_prefix` | A prefix to add to any AMQP routing keys provided in the service. | `""` |
| `amqp.queue_name_prefix` | A prefix to add to any AMQP queue names provided in the service. | `""` |
| `amqp.virtualhost` | AMQP virtualhost settings. | `"/"` |
| `amqp.ssl` | TLS can be enabled for supported host connections. | `False` |
| `amqp.heartbeat` | The heartbeat timeout value defines after what period of time the peer TCP connection should be considered unreachable (down) by RabbitMQ and client libraries. | `60` |
| `amqp.queue_ttl` | TTL set on newly created queues. | `86400` |
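A minimal sketch combining a few of these keys (the hostname and prefix values are illustrative; the defaults already cover a local RabbitMQ with guest credentials):

```python
import tomodachi


class Service(tomodachi.Service):
    name = "amqp-example"
    options = tomodachi.Options(
        amqp=tomodachi.Options.AMQP(
            host="rabbitmq.internal",  # hypothetical broker hostname
            port=5672,
            login="guest",
            password="guest",
            queue_name_prefix="dev-",
            queue_ttl=86400,
        ),
    )
```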

Code auto reload on file changes (for use in development)

| Configuration key | Description | Default |
| --- | --- | --- |
| `watcher.ignored_dirs` | Directories / folders that the automatic code change watcher should ignore. Could be used during development to save on CPU resources if any project folders contain a large number of file objects that don't need to be watched for code changes. Already ignored directories are `"__pycache__"`, `".git"`, `".svn"`, `"__ignored__"`, `"__temporary__"` and `"__tmp__"`. | `[]` |
| `watcher.watched_file_endings` | Additions to the list of file endings that the watcher should monitor for file changes. Already followed file endings are `".py"`, `".pyi"`, `".json"`, `".yml"`, `".html"` and `".phtml"`. | `[]` |

Default options

If no options are specified, or if an empty `tomodachi.Options` object is instantiated, the default set of options will be applied.

```
>>> import tomodachi
>>> tomodachi.Options()
∴ http <class: "Options.HTTP" -- prefix: "http">:
  | port = 9700
  | host = "0.0.0.0"
  | reuse_port = False
  | content_type = "text/plain; charset=utf-8"
  | charset = "utf-8"
  | client_max_size = 104857600
  | termination_grace_period_seconds = 30
  | access_log = True
  | real_ip_from = []
  | real_ip_header = "X-Forwarded-For"
  | keepalive_timeout = 0
  | keepalive_expiry = 0
  | max_keepalive_time = None
  | max_keepalive_requests = None
  | server_header = "tomodachi"
∴ aws_sns_sqs <class: "Options.AWSSNSSQS" -- prefix: "aws_sns_sqs">:
  | region_name = None
  | aws_access_key_id = None
  | aws_secret_access_key = None
  | topic_prefix = ""
  | queue_name_prefix = ""
  | sns_kms_master_key_id = None
  | sqs_kms_master_key_id = None
  | sqs_kms_data_key_reuse_period = None
  | queue_policy = None
  | wildcard_queue_policy = None
∴ aws_endpoint_urls <class: "Options.AWSEndpointURLs" -- prefix: "aws_endpoint_urls">:
  | sns = None
  | sqs = None
∴ amqp <class: "Options.AMQP" -- prefix: "amqp">:
  | host = "127.0.0.1"
  | port = 5672
  | login = "guest"
  | password = "guest"
  | exchange_name = "amq.topic"
  | routing_key_prefix = ""
  | queue_name_prefix = ""
  | virtualhost = "/"
  | ssl = False
  | heartbeat = 60
  | queue_ttl = 86400
  · qos <class: "Options.AMQP.QOS" -- prefix: "amqp.qos">:
    | queue_prefetch_count = 100
    | global_prefetch_count = 400
∴ watcher <class: "Options.Watcher" -- prefix: "watcher">:
  | ignored_dirs = []
  | watched_file_endings = []
```

Decorated functions using `@tomodachi.decorator` 🎄

Invoker functions can of course be decorated using custom functionality. For ease of use you can in turn decorate your decorator with the built-in `@tomodachi.decorator` to ease development. If the decorator returns anything other than `True` or `None` (or doesn't specify any return statement), the invoked function will *not* be called and the returned value will be used instead, for example as an HTTP response.

```python
import tomodachi


@tomodachi.decorator
async def require_csrf(instance, request):
    token = request.headers.get("X-CSRF-Token")
    if not token or token != request.cookies.get("csrftoken"):
        return {
            "body": "Invalid CSRF token",
            "status": 403,
        }


class Service(tomodachi.Service):
    name = "example"

    @tomodachi.http("POST", r"/create")
    @require_csrf
    async def create_data(self, request):
        # Do magic here!
        return "OK"
```

Good practices for running services in production 🤞

When running a `tomodachi` service in a production environment, it's important to ensure that the service is set up correctly to handle the demands and constraints of a live system. Here are some recommendations for options and operating practices that make running the services a breeze.

  • Go for a Docker 🐳 environment if possible -- preferably orchestrated with, for example, Kubernetes to handle automated scaling events to meet the demand of incoming requests and/or event queues.

  • Make sure that a `SIGTERM` signal is passed to the `python` process when a pod is scheduled for termination, to give it time to gracefully stop listeners, consumers and finish active handler tasks.

    • This should work automatically for services in Docker if the `CMD` statement in your `Dockerfile` starts the `tomodachi` service directly.
    • In case shell scripts are used in `CMD`, you might need to trap signals and forward them to the service process.
  • To give services the time to gracefully complete active handler executions and shut down, make sure that the orchestration engine waits at least 30 seconds from sending the `SIGTERM` to removing the pod.

    • For extra compatibility in k8s, and to get around most kinds of edge cases of intermittent timeouts and problems with ingress connections, set the pod spec `terminationGracePeriodSeconds` to `90` seconds and use a `preStop` lifecycle hook of 20 seconds.

      ```yaml
      spec:
        terminationGracePeriodSeconds: 90
        containers:
          - lifecycle:
              preStop:
                exec:
                  command: ["/bin/sh", "-c", "sleep 20"]
      ```
  • If your service accepts inbound network access to HTTP handlers from users or API clients, it's usually preferred to put some kind of ingress (nginx, haproxy or another type of load balancer) in front to proxy connections to the service pods.

    • Let the ingress handle public TLS, http2 / http3, client-facing keep-alives and WebSocket protocol upgrades, and let the service handler just take care of the business logic.

    • Use HTTP options such as the ones in this service example to have the service rotate keep-alive connections so that ingress connections don't stick to old pods after a scaling event.

      If keep-alive connections from ingresses to services stick for too long, the new replicas added when scaling out won't get their balanced share of the requests and the old pods will continue to receive most of the requests.

      ```python
      import tomodachi


      class Service(tomodachi.Service):
          name = "service"
          options = tomodachi.Options(
              http=tomodachi.Options.HTTP(
                  port=80,
                  content_type="application/json; charset=utf-8",
                  real_ip_from=["127.0.0.1/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"],
                  keepalive_timeout=10,
                  max_keepalive_time=30,
              )
          )
      ```
  • Use a JSON log formatter such as the one enabled via `--logger json` (or the env variable `TOMODACHI_LOGGER=json`) so that the log entries can be picked up by a log collector.

  • Always start the service with the `--production` CLI argument (or set the env variable `TOMODACHI_PRODUCTION=1`) to disable the file watcher that restarts the service on file changes, and to hide the start banner so it doesn't end up in log buffers.

  • Not related to `tomodachi` directly, but always remember to collect the log output and monitor your instances or clusters.

Arguments to `tomodachi run` when running in a production env

```shell
tomodachi run service/app.py --loop uvloop --production --log-level warning --logger json
```

Here's a breakdown of the arguments and why they are well suited for these kinds of environments.

  • `--loop uvloop`: This argument sets the event loop implementation to `uvloop`, which is known to be faster than the default `asyncio` loop. This can help improve the performance of your service. However, you should ensure that `uvloop` is installed in your environment before using this option.

  • `--production`: This argument disables the file watcher that restarts the service on file changes and hides the startup info banner. This is important in a production environment where you don't want your service to restart every time a file changes. It also helps to reduce unnecessary output in your logs.

  • `--log-level warning`: This argument sets the minimum log level to `warning`. In a production environment, you typically don't want to log every single detail of your service's operation. By setting the log level to `warning`, you ensure that only important messages are logged.

    If your infrastructure supports rapid collection of log entries and you see a clear benefit of including logs of log level `info`, it would make sense to use `--log-level info` instead of filtering on at least `warning`.

  • `--logger json`: This argument sets the log formatter to output logs in JSON format. This is useful in a production environment where you might have a log management system that can parse and index JSON logs for easier searching and analysis.

You can also set these options using environment variables. This can be useful if you're deploying your service in a containerized environment like Docker or Kubernetes, where you can set environment variables in your service's configuration. Here's how you would set the same options using environment variables:

```shell
export TOMODACHI_LOOP=uvloop
export TOMODACHI_PRODUCTION=1
export TOMODACHI_LOG_LEVEL=warning
export TOMODACHI_LOGGER=json
tomodachi run service/app.py
```

By using environment variables, you can easily change the configuration of your service without having to modify your code or your command line arguments. This can be especially useful in a CI/CD pipeline where you might want to adjust your service's configuration based on the environment it's being deployed to.


Requirements 👍

  • Python (3.9, 3.10, 3.11, 3.12 or 3.13)
  • aiohttp (the currently supported HTTP server implementation for `tomodachi`)
  • aiobotocore and botocore (used for AWS SNS+SQS pub/sub messaging)
  • aioamqp (used for RabbitMQ / AMQP pub/sub messaging)
  • structlog (used for logging)
  • uvloop (optional: alternative event loop implementation)

Pull requests and bug reports

This library is open source software. Please open a pull request for features that you deem are missing from the lib or for bug fixes that you encounter.

Make sure that the tests and linters are passing. A limited number of tests can be run locally without external services. Use GitHub Actions to run the full test suite and to verify linting and regressions. Read more in the contribution guide.

GitHub repository

The latest developer version of `tomodachi` is always available at GitHub.

Acknowledgements + contributors

🙇 Thank you everyone who has come with ideas, reported issues, built and operated services, helped debug, and made contributions to the library code directly or via libraries that build on the base functionality.

🙏 Many thanks to the amazing contributors that have helped to make `tomodachi` better.

image


Changelog of releases

Changes are recorded in the repo as well as together with the GitHub releases.


LICENSE

`tomodachi` is offered under the MIT license.


Additional questions and information

What is the best way to run a `tomodachi` service?

Docker containers are great and can be scaled out in Kubernetes, Nomad or other orchestration engines. Some may instead run several services in the same environment, on the same machine, if their workloads are smaller or more consistent. Remember to gather your output and monitor your instances or clusters.

See the section on good practices for running services in production environments for more insights.

Are there any more example services?

There are a few examples in the examples folder, including using `tomodachi` in an example Docker environment with or without docker-compose. There are examples to publish events / messages to an AWS SNS topic and subscribe to an AWS SQS queue. There's also similar code available showing how to work with pub/sub for RabbitMQ via the AMQP transport protocol.

What's the recommended setup to run integration tests towards my service?

When unit tests are not enough, you can run integration tests towards your services using the third party library `tomodachi-testcontainers`. This library provides a way to run your service in a Docker container.

Why should I use `tomodachi`?

`tomodachi` is an easy way to start when experimenting with your architecture or trying out a concept for a new service, especially if you're working on services that publish and consume messages (pub-sub messaging), such as events or commands from AWS SQS or AMQP message brokers.

`tomodachi` processes message flows through topics and queues, with enveloping and receiving execution handling.

`tomodachi` may not have all the features you desire out of the box, and it may never do, but I believe it's great for bootstrapping microservices in async Python.

While `tomodachi` provides HTTP handlers, the library may not be the best choice today if you are solely building services that expose a REST or GraphQL API. In that case, you may be better off using, for example, `fastapi` or `litestar`, perhaps in combination with `strawberry`, as your preferred interface.

Note that the HTTP layer on top of `tomodachi` uses `aiohttp`, which provides a more raw interface than libraries such as `fastapi` or `starlette`.

I have some great additions

Sweet! Please open a pull request with your additions. Make sure that the tests and linters are passing. A limited number of tests can be run locally without external services. Use GitHub Actions to run the full test suite and to verify linting and regressions. Get started at the short contribution guide.

Beta software in production?

There are some projects and organizations that are already running services based on `tomodachi` in production. The library is provided as is, with an irregular release schedule, and as with most software, there will be unfortunate bugs or crashes. Consider this currently as beta software (with an ambition to be stable enough for production). It would be great to hear about other use-cases in the wild!

Another good idea is to drop in Sentry or another exception debugging solution. These are great to catch errors if something doesn't work as expected in the internal routing or if your service code raises unhandled exceptions.

Who built this and why?

My name is Carl Oscar Aaro [@kalaspuff] and I'm a coder from Sweden. When I started writing the first few lines of this library back in 2016, my intention was to experiment with Python's asyncio, the event loop, event sourcing and pub-sub message queues.

A lot has happened since -- now running services in bothproduction and development clusters, while also using microservicesfor quick proof of concepts and experimentation. 🎉
