
Full Modular Monolith .NET application with Domain-Driven Design approach.

Announcement

Learn, use and benefit from this project only if:

  • You condemn Russia and its military aggression against Ukraine
  • You recognize that Russia is an occupant that unlawfully invaded a sovereign state
  • You support Ukraine's territorial integrity, including its claims over the temporarily occupied territories of Crimea and Donbas
  • You reject false narratives perpetuated by Russian state propaganda

Otherwise, leave this project immediately and educate yourself.

Putin, idi nachuj.

FrontEnd application: Modular Monolith With DDD - FrontEnd React application

Table of contents

1. Introduction

  1.1 Purpose of this Repository

  1.2 Out of Scope

  1.3 Reason

  1.4 Disclaimer

  1.5 Give a Star

  1.6 Share It

2. Domain

  2.1 Description

  2.2 Conceptual Model

  2.3 Event Storming

3. Architecture

  3.0 C4 Model

  3.1 High Level View

  3.2 Module Level View

  3.3 API and Module Communication

  3.4 Module Requests Processing via CQRS

  3.5 Domain Model Principles and Attributes

  3.6 Cross-Cutting Concerns

  3.7 Modules Integration

  3.8 Internal Processing

  3.9 Security

  3.10 Unit Tests

  3.11 Architecture Decision Log

  3.12 Architecture Unit Tests

  3.13 Integration Tests

  3.14 System Integration Testing

  3.15 Event Sourcing

  3.16 Database change management

  3.17 Continuous Integration

  3.18 Static code analysis

  3.19 System Under Test (SUT)

  3.20 Mutation Testing

4. Technology

5. How to Run

6. Contribution

7. Roadmap

8. Authors

9. License

10. Inspirations and Recommendations

1. Introduction

1.1 Purpose of this Repository

This is a list of the main goals of this repository:

  • Showing how you can implement a monolith application in a modular way
  • Presentation of the full implementation of an application
    • This is not another simple application
    • This is not another proof of concept (PoC)
    • The goal is to present the implementation of an application that would be ready to run in production
  • Showing the application of best practices and object-oriented programming principles
  • Presentation of the use of design patterns: when, how and why they can be used
  • Presentation of some architectural considerations, decisions and approaches
  • Presentation of the implementation using the Domain-Driven Design approach (tactical patterns)
  • Presentation of the implementation of Unit Tests for the Domain Model (Testable Design in mind)
  • Presentation of the implementation of Integration Tests
  • Presentation of the implementation of Event Sourcing
  • Presentation of the C4 Model
  • Presentation of the diagram as text approach

1.2 Out of Scope

This is a list of subjects which are out of scope for this repository:

  • Business requirements gathering and analysis
  • System analysis
  • Domain exploration
  • Domain distillation
  • Domain-Driven Design strategic patterns
  • Architecture evaluation, quality attributes analysis
  • Integration, system tests
  • Project management
  • Infrastructure
  • Containerization
  • Software engineering process
  • Deployment process
  • Maintenance
  • Documentation

1.3 Reason

The reason for creating this repository is the lack of something similar. Most sample applications on GitHub have at least one of the following issues:

  • Very, very simple - few entities and use cases implemented
  • Not finished (for example, there is no authentication, logging, etc.)
  • Poorly designed (in my opinion)
  • Poorly implemented (in my opinion)
  • Not well described
  • Assumptions and decisions are not clearly explained
  • Implements "Orders" domain - yes, everyone knows this domain, but something different is needed
  • Implemented in old technology
  • Not maintained

To sum up, there are some very good examples, but there are far too few of them. This repository aims to fill that gap at some level.

1.4 Disclaimer

Software architecture should always be created to resolve specific business problems. Software architecture always supports some quality attributes and at the same time does not support others. A lot of other factors influence your software architecture - your team, opinions, preferences, experiences, technical constraints, time, budget, etc.

Functional requirements, quality attributes, technical constraints and other factors should always be considered before an architectural decision is made.

Because of the above, the architecture and implementation presented in this repository is one of many ways to solve some problems. Take from this repository as much as you want, use it as you like, but remember to always pick the best solution which is appropriate to the class of problem you have.

1.5 Give a Star

My primary focus in this project is on quality. Creating a good quality product involves a lot of analysis, research and work. It takes a lot of time. If you like this project, learned something or you are using it in your applications, please give it a star ⭐. This is the best motivation for me to continue this work. Thanks!

1.6 Share It

There are very few really good examples of this type of application. If you think this repository makes a difference and is worth it, please share it with your friends and on social networks. I will be extremely grateful.

2. Domain

2.1 Description

Definition:

Domain - A sphere of knowledge, influence, or activity. The subject area to which the user applies a program is the domain of the software. (Domain-Driven Design Reference, Eric Evans)

The Meeting Groups domain was selected for the purposes of this project, based on the Meetup.com system.

Main reasons for selecting this domain:

  • It is common, a lot of people use the Meetup site to organize or attend meetings
  • There is a system for it, so everyone can check this implementation against a working site which supports this domain
  • It is not complex so it is easy to understand
  • It is not trivial - there are some business rules and logic and it is not just CRUD operations
  • You don't need much specific domain knowledge, unlike domains such as finance, banking or medicine
  • It is not big so it is easier to implement

Meetings

The main business entities are Member, Meeting Group and Meeting. A Member can create a Meeting Group, be part of a Meeting Group or attend a Meeting.

A Meeting Group Member can be an Organizer of this group or a normal Member.

Only an Organizer of a Meeting Group can create a new Meeting.

A Meeting has attendees, not attendees (Members who declare they will not attend the Meeting) and Members on the Waitlist.

A Meeting can have an attendee limit. If the limit is reached, Members can only sign up to the Waitlist.

A Meeting Attendee can bring guests to the Meeting. The number of guests allowed is an attribute of the Meeting. Bringing guests can be disallowed.

A Meeting Attendee can have one of two roles: Attendee or Host. A Meeting must have at least one Host. The Host is a special role which grants permission to edit Meeting information or change the attendees list.

A Member can comment on Meetings. A Member can reply to and like other Comments. The Organizer manages commenting on a Meeting via the Meeting Commenting Configuration. The Organizer can delete any Comment.

Each Meeting Group must have an organizer with an active Subscription. One organizer can cover 3 Meeting Groups with his Subscription.

Additionally, the Meeting organizer can set an Event Fee. Each Meeting Attendee is obliged to pay the fee. The Meeting Attendee must pay the fee for their guests as well.

Administration

To create a new Meeting Group, a Member needs to propose the group. A Meeting Group Proposal is sent to Administrators. An Administrator can accept or reject a Meeting Group Proposal. If a Meeting Group Proposal is accepted, a Meeting Group is created.

Payments

Each Member who is the Payer can buy a Subscription. He needs to pay the Subscription Payment. A Subscription can expire, so a Subscription Renewal is required (via a Subscription Renewal Payment to keep the Subscription active).

When the Meeting fee is required, the Payer needs to pay the Meeting Fee (through a Meeting Fee Payment).

Users

Each Administrator, Member and Payer is a User. To become a User, a User Registration is required and must be confirmed.

Each User is assigned one or more User Roles.

Each User Role has a set of Permissions. A Permission defines whether a User can invoke a particular action.

2.2 Conceptual Model

Definition:

Conceptual Model - A conceptual model is a representation of a system, made of the composition of concepts that are used to help people know, understand, or simulate a subject the model represents. (Wikipedia - Conceptual model)

Conceptual Model

PlantUML version:

VisualParadigm version (not maintained, only for demonstration):

Conceptual Model of commenting feature

2.3 Event Storming

While a Conceptual Model focuses on structures and the relationships between them, the behavior and events that occur in our domain are more important.

There are many ways to show behavior and events. One of them is a lightweight technique called Event Storming which is becoming more and more popular. Below, 3 main business processes are presented using this technique: user registration, meeting group creation and meeting organization.

Note: Event Storming is a lightweight, live workshop. One of the possible outputs of this workshop is presented here. Even if you are not doing Event Storming workshops, this type of process presentation can be very valuable to you and your stakeholders.

User Registration process



Meeting Group creation


Meeting organization


Payments (Download high resolution file)


3. Architecture

3.0 C4 Model

The C4 model is a lean graphical notation technique for modelling the architecture of software systems.

As can be found on the website of the author of this model (Simon Brown): The C4 model was created as a way to help software development teams describe and communicate software architecture, both during up-front design sessions and when retrospectively documenting an existing codebase.

The C4 model defines 4 levels (views) of the system architecture: System Context, Container, Component and Code. Below are examples of each of these levels that describe the architecture of this system.

Note: The PlantUML (diagram as text) component was used to describe all C4 model levels. Additionally, for levels C1-C3, a C4-PlantUML plug-in connecting PlantUML with the C4 model was used.

3.0.1 C1 System Context

3.0.2 C2 Container

3.0.3 C3 Component (high-level)

3.0.4 C3 Component (module-level)

3.0.5 C4 Code (meeting group aggregate)

3.1 High Level View

Module descriptions:

  • API - Very thin ASP.NET Core MVC REST API application. Its main responsibilities are:
    1. Accept the request
    2. Authenticate and authorize the request (using the User Access module)
    3. Delegate work to a specific module by sending a Command or Query
    4. Return the response
  • User Access - responsible for user authentication and authorization
  • Registrations - responsible for user registration
  • Meetings - implements the Meetings Bounded Context: creating meeting groups and meetings
  • Administration - implements the Administration Bounded Context: administrative tasks like meeting group proposal verification
  • Payments - implements the Payments Bounded Context: all functionalities associated with payments
  • In Memory Events Bus - a Publish/Subscribe implementation used to asynchronously integrate all modules using events (Event-Driven Architecture); a minimal sketch of such a contract is shown below
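The exact contract of the In Memory Events Bus lives in the shared building blocks of the solution; the sketch below only illustrates what such a publish/subscribe abstraction can look like (the interface and member names are assumptions, not copied from the repository).

public interface IIntegrationEvent
{
    Guid Id { get; }

    DateTime OccurredOn { get; }
}

// Hypothetical minimal contract for the In Memory Events Bus.
public interface IEventsBus
{
    Task Publish<T>(T @event) where T : IIntegrationEvent;

    void Subscribe<T>(IIntegrationEventHandler<T> handler) where T : IIntegrationEvent;
}

public interface IIntegrationEventHandler<in T> where T : IIntegrationEvent
{
    Task Handle(T @event);
}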

Key assumptions:

  1. The API contains no application logic
  2. The API communicates with Modules using a small interface to send Queries and Commands
  3. Each Module has its own interface which is used by the API
  4. Modules communicate with each other only asynchronously using the Events Bus - direct method calls are not allowed
  5. Each Module has its own data in a separate schema - shared data is not allowed
    • Module data could be moved into separate databases if desired
  6. Modules can only have a dependency on the integration events assembly of another Module (see Module level view)
  7. Each Module has its own Composition Root, which implies that each Module has its own Inversion-of-Control container
  8. The API, as a host, needs to initialize each module, and each module has an initialization method
  9. Each Module is highly encapsulated - only required types and members are public, the rest are internal or private

3.2 Module Level View

Each Module has a Clean Architecture and consists of the following submodules (assemblies):

  • Application - the application logic submodule which is responsible for request processing: use cases, domain events, integration events, internal commands.
  • Domain - the Domain Model (in Domain-Driven Design terms) which implements the applicable Bounded Context
  • Infrastructure - infrastructural code responsible for module initialization, background processing, data access, communication with the Events Bus and other external components or systems
  • IntegrationEvents - contracts published to the Events Bus; only this assembly can be referenced by other modules

Note: The Application, Domain and Infrastructure assemblies could be merged into one assembly. Some people like horizontal layering or more decomposition, some don't. Implementing the Domain Model or Infrastructure in a separate assembly allows encapsulation using the internal keyword. Sometimes the Bounded Context logic is too simple to be worth it. As always, be pragmatic and take whatever approach you like.

3.3 API and Module Communication

The API only communicates with Modules in two ways: during module initialization and request processing.

Module initialization

Each module has a static Initialize method which is invoked in the API Startup class. All configuration needed by the module should be provided as arguments to this method. All services are configured during initialization and the Composition Root is created using the Inversion-of-Control Container.

public static void Initialize(
    string connectionString,
    IExecutionContextAccessor executionContextAccessor,
    ILogger logger,
    EmailsConfiguration emailsConfiguration)
{
    var moduleLogger = logger.ForContext("Module", "Meetings");

    ConfigureCompositionRoot(connectionString, executionContextAccessor, moduleLogger, emailsConfiguration);

    QuartzStartup.Initialize(moduleLogger);

    EventsBusStartup.Initialize(moduleLogger);
}
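For illustration, the host side of this initialization could look roughly as follows. This is a hedged sketch: the MeetingsStartup class name, the configuration keys and the way the Serilog logger and execution context accessor are obtained are assumptions made for the example, not exact repository code.

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Hypothetical wiring: the host passes everything the module needs
        // to its static Initialize method.
        var executionContextAccessor = new ExecutionContextAccessor(new HttpContextAccessor());

        MeetingsStartup.Initialize(
            _configuration.GetConnectionString("MeetingsConnectionString"), // assumed key
            executionContextAccessor,
            Log.Logger, // Serilog root logger
            new EmailsConfiguration(_configuration["EmailsConfiguration:FromEmail"])); // assumed key
    }
}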

Request processing

Each module exposes the same interface signature to the API. It contains 3 methods: command with result, command without result, and query.

public interface IMeetingsModule
{
    Task<TResult> ExecuteCommandAsync<TResult>(ICommand<TResult> command);

    Task ExecuteCommandAsync(ICommand command);

    Task<TResult> ExecuteQueryAsync<TResult>(IQuery<TResult> query);
}

Note: Some people say that processing a command should not return a result. This is an understandable approach but sometimes impractical, especially when you want to immediately return the ID of a newly created resource. Sometimes the boundary between Command and Query is blurry. One example is AuthenticateCommand - it returns a token, but it is not a query because it has a side effect.

3.4 Module Requests Processing via CQRS

Processing of Commands and Queries is separated by applying the architectural style/pattern Command Query Responsibility Segregation (CQRS).

Commands are processed using the Write Model, which is implemented using DDD tactical patterns:

internal class CreateNewMeetingGroupCommandHandler : ICommandHandler<CreateNewMeetingGroupCommand>
{
    private readonly IMeetingGroupRepository _meetingGroupRepository;
    private readonly IMeetingGroupProposalRepository _meetingGroupProposalRepository;

    internal CreateNewMeetingGroupCommandHandler(
        IMeetingGroupRepository meetingGroupRepository,
        IMeetingGroupProposalRepository meetingGroupProposalRepository)
    {
        _meetingGroupRepository = meetingGroupRepository;
        _meetingGroupProposalRepository = meetingGroupProposalRepository;
    }

    public async Task Handle(CreateNewMeetingGroupCommand request, CancellationToken cancellationToken)
    {
        var meetingGroupProposal = await _meetingGroupProposalRepository.GetByIdAsync(request.MeetingGroupProposalId);

        var meetingGroup = meetingGroupProposal.CreateMeetingGroup();

        await _meetingGroupRepository.AddAsync(meetingGroup);
    }
}

Queries are processed using the Read Model, which is implemented by executing raw SQL statements on database views:

internal class GetAllMeetingGroupsQueryHandler : IQueryHandler<GetAllMeetingGroupsQuery, List<MeetingGroupDto>>
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    internal GetAllMeetingGroupsQueryHandler(ISqlConnectionFactory sqlConnectionFactory)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public async Task<List<MeetingGroupDto>> Handle(GetAllMeetingGroupsQuery request, CancellationToken cancellationToken)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        const string sql = $"""
                            SELECT
                                [MeetingGroup].[Id] as [{nameof(MeetingGroupDto.Id)}],
                                [MeetingGroup].[Name] as [{nameof(MeetingGroupDto.Name)}],
                                [MeetingGroup].[Description] as [{nameof(MeetingGroupDto.Description)}],
                                [MeetingGroup].[LocationCountryCode] as [{nameof(MeetingGroupDto.LocationCountryCode)}],
                                [MeetingGroup].[LocationCity] as [{nameof(MeetingGroupDto.LocationCity)}]
                            FROM [meetings].[v_MeetingGroups] AS [MeetingGroup]
                            """;

        var meetingGroups = await connection.QueryAsync<MeetingGroupDto>(sql);

        return meetingGroups.AsList();
    }
}

Key advantages:

Disadvantage:

  • The Mediator pattern introduces extra indirection, which makes it harder to reason about which class handles the request

For more information: Simple CQRS implementation with raw SQL and DDD
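For reference, the ICommand/IQuery abstractions used in the module interfaces and handlers above are thin marker interfaces; the Unit type and the Handle signatures in the decorators suggest they sit on top of MediatR requests. A minimal sketch, with the exact shape in the repository possibly differing slightly:

// Marker interfaces for the Write and Read sides (MediatR-based sketch).
public interface ICommand : IRequest
{
    Guid Id { get; }
}

public interface ICommand<TResult> : IRequest<TResult>
{
    Guid Id { get; }
}

public interface IQuery<TResult> : IRequest<TResult>
{
}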

3.5 Domain Model Principles and Attributes

The Domain Model, which is the central and most critical part of the system, should be designed with special attention. Here are some key principles and attributes which are applied to the Domain Model of each module:

  1. High level of encapsulation

    All members are private by default, then internal - only public at the very edge.

  2. High level of PI (Persistence Ignorance)

    No dependencies on infrastructure, databases, etc. All classes are POCOs.

  3. Rich in behavior

    All business logic is located in the Domain Model. No leaks to the application layer or elsewhere.

  4. Low level of Primitive Obsession

    Primitive attributes of Entities are grouped together using Value Objects.

  5. Business language

    All classes, methods and other members are named in business language used in this Bounded Context.

  6. Testable

    The Domain Model is a critical part of the system so it should be easy to test (Testable Design).

public class MeetingGroup : Entity, IAggregateRoot
{
    public MeetingGroupId Id { get; private set; }

    private string _name;

    private string _description;

    private MeetingGroupLocation _location;

    private MemberId _creatorId;

    private List<MeetingGroupMember> _members;

    private DateTime _createDate;

    private DateTime? _paymentDateTo;

    internal static MeetingGroup CreateBasedOnProposal(
        MeetingGroupProposalId meetingGroupProposalId,
        string name,
        string description,
        MeetingGroupLocation location,
        MemberId creatorId)
    {
        return new MeetingGroup(meetingGroupProposalId, name, description, location, creatorId);
    }

    public Meeting CreateMeeting(
        string title,
        MeetingTerm term,
        string description,
        MeetingLocation location,
        int? attendeesLimit,
        int guestsLimit,
        Term rsvpTerm,
        MoneyValue eventFee,
        List<MemberId> hostsMembersIds,
        MemberId creatorId)
    {
        this.CheckRule(new MeetingCanBeOrganizedOnlyByPayedGroupRule(_paymentDateTo));
        this.CheckRule(new MeetingHostMustBeAMeetingGroupMemberRule(creatorId, hostsMembersIds, _members));

        return new Meeting(
            this.Id,
            title,
            term,
            description,
            location,
            attendeesLimit,
            guestsLimit,
            rsvpTerm,
            eventFee,
            hostsMembersIds,
            creatorId);
    }
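The CheckRule calls above rely on small business rule objects. Below is a minimal sketch of the rule abstraction and one illustrative rule implementation; the IBusinessRule members and the rule's internals are assumptions based only on how the rules are used in this snippet.

public interface IBusinessRule
{
    bool IsBroken();

    string Message { get; }
}

// Illustrative implementation of one of the rules used above (simplified).
public class MeetingCanBeOrganizedOnlyByPayedGroupRule : IBusinessRule
{
    private readonly DateTime? _paymentDateTo;

    public MeetingCanBeOrganizedOnlyByPayedGroupRule(DateTime? paymentDateTo)
    {
        _paymentDateTo = paymentDateTo;
    }

    public bool IsBroken() => !_paymentDateTo.HasValue || _paymentDateTo < DateTime.UtcNow;

    public string Message => "A Meeting can be organized only by a group with an active (paid) subscription.";
}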

3.6 Cross-Cutting Concerns

To support the Single Responsibility Principle and the Don't Repeat Yourself principle, the implementation of cross-cutting concerns is done using the Decorator Pattern. Each Command processor is decorated by 3 decorators: logging, validation and unit of work.

Logging

The Logging decorator logs the execution, arguments and processing of each Command. This way, every log entry written inside a processor carries the log context of the command being processed.

internal class LoggingCommandHandlerDecorator<T> : ICommandHandler<T>
    where T : ICommand
{
    private readonly ILogger _logger;
    private readonly IExecutionContextAccessor _executionContextAccessor;
    private readonly ICommandHandler<T> _decorated;

    public LoggingCommandHandlerDecorator(
        ILogger logger,
        IExecutionContextAccessor executionContextAccessor,
        ICommandHandler<T> decorated)
    {
        _logger = logger;
        _executionContextAccessor = executionContextAccessor;
        _decorated = decorated;
    }

    public async Task<Unit> Handle(T command, CancellationToken cancellationToken)
    {
        if (command is IRecurringCommand)
        {
            return await _decorated.Handle(command, cancellationToken);
        }

        using (LogContext.Push(
            new RequestLogEnricher(_executionContextAccessor),
            new CommandLogEnricher(command)))
        {
            try
            {
                this._logger.Information("Executing command {Command}", command.GetType().Name);

                var result = await _decorated.Handle(command, cancellationToken);

                this._logger.Information("Command {Command} processed successful", command.GetType().Name);

                return result;
            }
            catch (Exception exception)
            {
                this._logger.Error(exception, "Command {Command} processing failed", command.GetType().Name);
                throw;
            }
        }
    }

    private class CommandLogEnricher : ILogEventEnricher
    {
        private readonly ICommand _command;

        public CommandLogEnricher(ICommand command)
        {
            _command = command;
        }

        public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
        {
            logEvent.AddOrUpdateProperty(new LogEventProperty("Context", new ScalarValue($"Command:{_command.Id.ToString()}")));
        }
    }

    private class RequestLogEnricher : ILogEventEnricher
    {
        private readonly IExecutionContextAccessor _executionContextAccessor;

        public RequestLogEnricher(IExecutionContextAccessor executionContextAccessor)
        {
            _executionContextAccessor = executionContextAccessor;
        }

        public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
        {
            if (_executionContextAccessor.IsAvailable)
            {
                logEvent.AddOrUpdateProperty(new LogEventProperty("CorrelationId", new ScalarValue(_executionContextAccessor.CorrelationId)));
            }
        }
    }
}

Validation

The Validation decorator performs Command data validation. It checks rules against Command arguments using the FluentValidation library.

internal class ValidationCommandHandlerDecorator<T> : ICommandHandler<T>
    where T : ICommand
{
    private readonly IList<IValidator<T>> _validators;
    private readonly ICommandHandler<T> _decorated;

    public ValidationCommandHandlerDecorator(
        IList<IValidator<T>> validators,
        ICommandHandler<T> decorated)
    {
        this._validators = validators;
        _decorated = decorated;
    }

    public Task<Unit> Handle(T command, CancellationToken cancellationToken)
    {
        var errors = _validators
            .Select(v => v.Validate(command))
            .SelectMany(result => result.Errors)
            .Where(error => error != null)
            .ToList();

        if (errors.Any())
        {
            var errorBuilder = new StringBuilder();

            errorBuilder.AppendLine("Invalid command, reason: ");

            foreach (var error in errors)
            {
                errorBuilder.AppendLine(error.ErrorMessage);
            }

            throw new InvalidCommandException(errorBuilder.ToString(), null);
        }

        return _decorated.Handle(command, cancellationToken);
    }
}

Unit Of Work

All Command processing has side effects. To avoid calling commit in every handler, the UnitOfWorkCommandHandlerDecorator is used. Additionally, it marks an InternalCommand as processed (if the command is an Internal Command) and dispatches all Domain Events (as part of the Unit Of Work).

public class UnitOfWorkCommandHandlerDecorator<T> : ICommandHandler<T>
    where T : ICommand
{
    private readonly ICommandHandler<T> _decorated;
    private readonly IUnitOfWork _unitOfWork;
    private readonly MeetingsContext _meetingContext;

    public UnitOfWorkCommandHandlerDecorator(
        ICommandHandler<T> decorated,
        IUnitOfWork unitOfWork,
        MeetingsContext meetingContext)
    {
        _decorated = decorated;
        _unitOfWork = unitOfWork;
        _meetingContext = meetingContext;
    }

    public async Task Handle(T command, CancellationToken cancellationToken)
    {
        await this._decorated.Handle(command, cancellationToken);

        if (command is InternalCommandBase)
        {
            var internalCommand = await _meetingContext.InternalCommands.FirstOrDefaultAsync(
                x => x.Id == command.Id,
                cancellationToken: cancellationToken);

            if (internalCommand != null)
            {
                internalCommand.ProcessedDate = DateTime.UtcNow;
            }
        }

        await this._unitOfWork.CommitAsync(cancellationToken);
    }
}
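The decorators are wired up in each module's Composition Root. The project uses Autofac, so a registration along the following lines is one way to apply them; the assembly scanning and the decorator order shown here are illustrative assumptions, not the repository's exact registration code.

// A minimal sketch of decorator registration with Autofac, assuming handlers
// are registered as closed ICommandHandler<T> implementations.
var builder = new ContainerBuilder();

builder.RegisterAssemblyTypes(typeof(CreateNewMeetingGroupCommandHandler).Assembly)
    .AsClosedTypesOf(typeof(ICommandHandler<>))
    .InstancePerLifetimeScope();

// Decorators registered later wrap those registered earlier,
// so with this ordering Logging ends up as the outermost decorator.
builder.RegisterGenericDecorator(typeof(UnitOfWorkCommandHandlerDecorator<>), typeof(ICommandHandler<>));
builder.RegisterGenericDecorator(typeof(ValidationCommandHandlerDecorator<>), typeof(ICommandHandler<>));
builder.RegisterGenericDecorator(typeof(LoggingCommandHandlerDecorator<>), typeof(ICommandHandler<>));

var container = builder.Build();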

3.7 Modules Integration

Integration between modules is strictly asynchronous, using Integration Events and the In Memory Events Bus as the broker. In this way, coupling between modules is minimal and exists only on the structure of the Integration Events.

Modules don't share data, so it is neither possible nor desirable to create a transaction which spans more than one module. To ensure maximum reliability, the Outbox / Inbox pattern is used. This pattern provides, respectively, "At-Least-Once delivery" and "At-Least-Once processing".

The Outbox and Inbox are implemented using two SQL tables and a background worker for each module. The background worker is implemented using the Quartz.NET library.

Saving to Outbox:

Processing Outbox:
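Conceptually, messages are saved to the Outbox table in the same transaction as the business change, and a Quartz.NET job periodically reads unprocessed messages, publishes them to the Events Bus and marks them as processed. The sketch below is illustrative only: the table name, the DTO and the helper types are assumptions, not the repository's exact code.

// Hypothetical shape of a row read from the Outbox table.
public class OutboxMessageDto
{
    public Guid Id { get; set; }
    public string Type { get; set; }
    public string Data { get; set; }
}

[DisallowConcurrentExecution]
public class ProcessOutboxJob : IJob
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;
    private readonly IEventsBus _eventsBus;

    public ProcessOutboxJob(ISqlConnectionFactory sqlConnectionFactory, IEventsBus eventsBus)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
        _eventsBus = eventsBus;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        // Read unprocessed Outbox messages (table and column names are illustrative).
        var messages = await connection.QueryAsync<OutboxMessageDto>(
            "SELECT [Id], [Type], [Data] FROM [meetings].[OutboxMessages] WHERE [ProcessedDate] IS NULL");

        foreach (var message in messages)
        {
            // Deserialize the integration event, publish it, then mark the message as processed.
            var type = Assembly.GetAssembly(typeof(MeetingsStartup)).GetType(message.Type);
            var integrationEvent = (IIntegrationEvent)JsonConvert.DeserializeObject(message.Data, type);

            await _eventsBus.Publish(integrationEvent);

            await connection.ExecuteAsync(
                "UPDATE [meetings].[OutboxMessages] SET [ProcessedDate] = @Date WHERE [Id] = @Id",
                new { Date = DateTime.UtcNow, message.Id });
        }
    }
}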

3.8 Internal Processing

The main principle of this system is that you can change its state only by calling a specific Command.

Commands can be called not only by the API, but also by the processing module itself. The main use case for this mechanism is data processing in eventual consistency mode, when we want to process something in a different process and transaction. This applies, for example, to Inbox processing, because we want to do something (call a Command) based on an Integration Event from the Inbox.

This idea is taken from Alberto Brandolini's Event Storming picture called "The picture that explains "almost" everything", which shows that every side effect (domain event) is created by invoking a Command on an Aggregate. See the EventStorming cheat sheet article for more details.

The implementation of internal processing is very similar to the implementation of the Outbox and Inbox: one SQL table and one background worker for processing. Each internally processed Command must inherit from the InternalCommandBase class:

internal abstract class InternalCommandBase : ICommand
{
    public Guid Id { get; }

    protected InternalCommandBase(Guid id)
    {
        this.Id = id;
    }
}

This is important because the UnitOfWorkCommandHandlerDecorator must mark an internal Command as processed when committing:

public async Task Handle(T command, CancellationToken cancellationToken)
{
    await this._decorated.Handle(command, cancellationToken);

    if (command is InternalCommandBase)
    {
        var internalCommand = await _meetingContext.InternalCommands.FirstOrDefaultAsync(
            x => x.Id == command.Id,
            cancellationToken: cancellationToken);

        if (internalCommand != null)
        {
            internalCommand.ProcessedDate = DateTime.UtcNow;
        }
    }

    await this._unitOfWork.CommitAsync(cancellationToken);
}
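On the producing side, an internal Command is typically enqueued by serializing it into the module's InternalCommands table, from which the Quartz.NET background worker later picks it up and executes it through the same command pipeline. A rough, illustrative sketch follows; the class name and table columns are assumptions.

public class CommandsScheduler
{
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public CommandsScheduler(ISqlConnectionFactory sqlConnectionFactory)
    {
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public async Task EnqueueAsync(InternalCommandBase command)
    {
        var connection = _sqlConnectionFactory.GetOpenConnection();

        // Store the serialized command; the background job later loads and executes
        // all commands with ProcessedDate IS NULL (schema is illustrative).
        await connection.ExecuteAsync(
            "INSERT INTO [meetings].[InternalCommands] ([Id], [EnqueueDate], [Type], [Data]) " +
            "VALUES (@Id, @EnqueueDate, @Type, @Data)",
            new
            {
                command.Id,
                EnqueueDate = DateTime.UtcNow,
                Type = command.GetType().FullName,
                Data = JsonConvert.SerializeObject(command)
            });
    }
}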

3.9 Security

Authentication

Authentication is implemented using a JWT Token with the Bearer scheme, using IdentityServer. For now, only one authentication method is implemented: forms-style authentication (username and password) via the OAuth2 Resource Owner Password Grant Type. It requires an implementation of the IResourceOwnerPasswordValidator interface:

public class ResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
{
    private readonly IUserAccessModule _userAccessModule;

    public ResourceOwnerPasswordValidator(IUserAccessModule userAccessModule)
    {
        _userAccessModule = userAccessModule;
    }

    public async Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
    {
        var authenticationResult = await _userAccessModule.ExecuteCommandAsync(
            new AuthenticateCommand(context.UserName, context.Password));

        if (!authenticationResult.IsAuthenticated)
        {
            context.Result = new GrantValidationResult(
                TokenRequestErrors.InvalidGrant,
                authenticationResult.AuthenticationError);

            return;
        }

        context.Result = new GrantValidationResult(
            authenticationResult.User.Id.ToString(),
            "forms",
            authenticationResult.User.Claims);
    }
}

Authorization

Authorization is achieved by implementing RBAC (Role-Based Access Control) using Permissions. Permissions are more granular and a much better way to secure your application than Roles alone. Each User has a set of Roles and each Role contains one or more Permissions. The User's set of Permissions is extracted from all Roles the User belongs to. Permissions are always checked at the Controller level - never Roles:

[HttpPost]
[Route("")]
[HasPermission(MeetingsPermissions.ProposeMeetingGroup)]
public async Task<IActionResult> ProposeMeetingGroup(ProposeMeetingGroupRequest request)
{
    await _meetingsModule.ExecuteCommandAsync(new ProposeMeetingGroupCommand(
        request.Name,
        request.Description,
        request.LocationCity,
        request.LocationCountryCode));

    return Ok();
}
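The HasPermission attribute used above is a thin wrapper that turns a permission code into an ASP.NET Core authorization policy, which is then evaluated against the permissions extracted from the user's roles. A minimal sketch of such an attribute; the policy naming convention is an assumption:

public class HasPermissionAttribute : AuthorizeAttribute
{
    // Each permission becomes its own policy, e.g. "HasPermissionMeetingGroupPropose".
    internal const string PolicyPrefix = "HasPermission";

    public HasPermissionAttribute(string permission)
        : base($"{PolicyPrefix}{permission}")
    {
    }
}

Typically a custom IAuthorizationPolicyProvider then maps each policy name carrying this prefix to a requirement that checks the permission claims attached to the authenticated user.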

3.10 Unit Tests

Definition:

A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It's trustworthy, readable, and maintainable. It's consistent in its results as long as production code hasn't changed. (The Art of Unit Testing, 2nd Edition, Roy Osherove)

Attributes of a good unit test

  • Automated
  • Maintainable
  • Runs very fast (in ms)
  • Consistent, Deterministic (always the same result)
  • Isolated from other tests
  • Readable
  • Can be executed by anyone
  • Tests the public API, not internal behavior (no overspecification)
  • Looks like production code
  • Treated as production code

Implementation

Unit tests should mainly test business logic (domain model):

Each unit test has 3 standard sections: Arrange, Act and Assert:

1. Arrange

The Arrange section is responsible for preparing the Aggregate for testing the public method that we want to test. This public method is often called (from the unit tests perspective) the SUT (System Under Test).

Creating an Aggregate ready for testing involves calling one or more other public constructors/methods on the Domain Model. At first it may seem that we are testing too many things at the same time, but this is not true. We need to be one hundred percent sure that the Aggregate is in exactly the state it will be in production. This can only be ensured when we:

  • Use only the public API of the Domain Model
  • Don't use the InternalsVisibleToAttribute class
    • This exposes the Domain Model to the Unit Tests library and removes encapsulation, so our tests and production code are treated differently - and that is a very bad thing
  • Don't use ConditionalAttribute classes - it reduces readability and increases complexity
  • Don't create any special constructors/factory methods for tests (even with conditional compilation symbols)
    • A special constructor/factory method only for unit tests causes duplication of business logic in the test itself and focuses on state - this kind of approach makes the test very sensitive to changes and hard to maintain
  • Don't remove encapsulation from the Domain Model (for example: don't change keywords from internal/private to public)
  • Don't make methods protected in order to inherit from the tested class and in this way provide access to internal methods/properties

Isolation of external dependencies

There are 2 main concepts - stubs and mocks:

A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly.

A mock object is a fake object in the system that decides whether the unit test has passed or failed. It does so by verifying whether the object under test called the fake object as expected. There's usually no more than one mock per test. (The Art of Unit Testing, 2nd Edition, Roy Osherove)

Good advice: use stubs if you need to, but try to avoid mocks. Mocking causes us to test too many internal things and leads to overspecification.

2. Act

This section is very easy - we execute exactly one public method on the aggregate (the SUT).

3. Assert

In this section we check expectations. There are only 2 possible outcomes:

  • Method completed and Domain Event(s) published
  • Business rule was broken

Simple example:

[Test]
public void NewUserRegistration_WithUniqueLogin_IsSuccessful()
{
    // Arrange
    var usersCounter = Substitute.For<IUsersCounter>();

    // Act
    var userRegistration = UserRegistration.RegisterNewUser(
        "login", "password", "test@email", "firstName", "lastName", usersCounter);

    // Assert
    var newUserRegisteredDomainEvent = AssertPublishedDomainEvent<NewUserRegisteredDomainEvent>(userRegistration);

    Assert.That(newUserRegisteredDomainEvent.UserRegistrationId, Is.EqualTo(userRegistration.Id));
}

[Test]
public void NewUserRegistration_WithoutUniqueLogin_BreaksUserLoginMustBeUniqueRule()
{
    // Arrange
    var usersCounter = Substitute.For<IUsersCounter>();
    usersCounter.CountUsersWithLogin("login").Returns(x => 1);

    // Assert
    AssertBrokenRule<UserLoginMustBeUniqueRule>(() =>
    {
        // Act
        UserRegistration.RegisterNewUser(
            "login", "password", "test@email", "firstName", "lastName", usersCounter);
    });
}

Advanced example:

[Test]
public void AddAttendee_WhenMemberIsAlreadyAttendeeOfMeeting_IsNotPossible()
{
    // Arrange
    var creatorId = new MemberId(Guid.NewGuid());
    var meetingTestData = CreateMeetingTestData(new MeetingTestDataOptions
    {
        CreatorId = creatorId
    });

    var newMemberId = new MemberId(Guid.NewGuid());
    meetingTestData.MeetingGroup.JoinToGroupMember(newMemberId);

    meetingTestData.Meeting.AddAttendee(meetingTestData.MeetingGroup, newMemberId, 0);

    // Assert
    AssertBrokenRule<MemberCannotBeAnAttendeeOfMeetingMoreThanOnceRule>(() =>
    {
        // Act
        meetingTestData.Meeting.AddAttendee(meetingTestData.MeetingGroup, newMemberId, 0);
    });
}

The CreateMeetingTestData method is an implementation of the SUT Factory described by Mark Seemann, which allows keeping common creation logic in one place:

protected MeetingTestData CreateMeetingTestData(MeetingTestDataOptions options)
{
    var proposalMemberId = options.CreatorId ?? new MemberId(Guid.NewGuid());

    var meetingProposal = MeetingGroupProposal.ProposeNew(
        "name", "description", new MeetingGroupLocation("Warsaw", "PL"), proposalMemberId);

    meetingProposal.Accept();

    var meetingGroup = meetingProposal.CreateMeetingGroup();
    meetingGroup.UpdatePaymentInfo(DateTime.Now.AddDays(1));

    var meetingTerm = options.MeetingTerm ??
                      new MeetingTerm(DateTime.UtcNow.AddDays(1), DateTime.UtcNow.AddDays(2));

    var rsvpTerm = options.RvspTerm ?? Term.NoTerm;

    var meeting = meetingGroup.CreateMeeting(
        "title",
        meetingTerm,
        "description",
        new MeetingLocation("Name", "Address", "PostalCode", "City"),
        options.AttendeesLimit,
        options.GuestsLimit,
        rsvpTerm,
        MoneyValue.Zero,
        new List<MemberId>(),
        proposalMemberId);

    DomainEventsTestHelper.ClearAllDomainEvents(meetingGroup);

    return new MeetingTestData(meetingGroup, meeting);
}

3.11 Architecture Decision Log

All Architectural Decisions (AD) are documented in the Architecture Decision Log (ADL).

More information about documenting architecture-related decisions in this way: https://github.com/joelparkerhenderson/architecture_decision_record

3.12 Architecture Unit Tests

In some cases it is not possible to enforce the application architecture, design or established conventions using the compiler (at compile time). For this reason, code implementations can diverge from the original design and architecture. We want to minimize this drift, and not only through code review.

To do this, unit tests of the system architecture, design, major conventions and assumptions have been written. In .NET there is a special library for this task: NetArchTest. This library was written based on the very popular Java architecture unit test library - ArchUnit.

Using this kind of tests we can verify proper layering of our application, dependencies, encapsulation, immutability, correct DDD implementation, naming, conventions and so on - everything we need to test. Example:
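A hedged example of what such a test can look like with NetArchTest; the namespace strings below are assumptions based on the solution naming, not copied verbatim from the repository:

[Test]
public void DomainLayer_DoesNotHaveDependency_OnApplicationLayer()
{
    // Scan the Domain assembly and assert it never references the Application layer.
    var result = Types.InAssembly(typeof(MeetingGroup).Assembly)
        .That()
        .ResideInNamespace("CompanyName.MyMeetings.Modules.Meetings.Domain")
        .ShouldNot()
        .HaveDependencyOn("CompanyName.MyMeetings.Modules.Meetings.Application")
        .GetResult();

    Assert.That(result.IsSuccessful, Is.True);
}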

More information about architecture unit tests here: https://blogs.oracle.com/javamagazine/unit-test-your-architecture-with-archunit

3.13 Integration Tests

Definition

"Integration Test" term is blurred. It can mean test between classes, modules, services, even systems - seethis article (by Martin Fowler).

For this reason, the definition of integration test in this project is as follows:

  • it verifies how the system works in integration with "out-of-process" dependencies - a database, messaging system, file system or external API
  • it tests a particular use case
  • it can be slow (as opposed to a Unit Test)

Approach

  • Do not mock dependencies over which you have full control (like the database). A full-control dependency means you can always revert all changes (remove side effects) and no one can notice it. They are not visible to others. See the next point, please.
  • Use the "production", normal, real database version. Some use e.g. an in-memory repository, others use a lightweight database instead of the "production" version. This is still mocking. Testing makes sense only if we have full confidence in the tests, and you can't trust a test if you know that the infrastructure in the production environment will differ. Always be as close to the production environment as possible.
  • Mock dependencies over which you don't have control. A no-control dependency means you can't remove side effects after interacting with this dependency (external API, messaging system, SMTP server etc.). They can be visible to others.

Implementation

An integration test should test exactly one use case. One use case is represented by one Command/Query processing, so a CommandHandler/QueryHandler in the Application layer is the perfect starting point for running an Integration Test:

For each test, the following preparation steps must be performed:

  1. Clear database
  2. Prepare mocks
  3. Initialize testing module
[SetUp]
public async Task BeforeEachTest()
{
    const string connectionStringEnvironmentVariable =
        "ASPNETCORE_MyMeetings_IntegrationTests_ConnectionString";

    ConnectionString = Environment.GetEnvironmentVariable(connectionStringEnvironmentVariable, EnvironmentVariableTarget.Machine);
    if (ConnectionString == null)
    {
        throw new ApplicationException(
            $"Define connection string to integration tests database using environment variable: {connectionStringEnvironmentVariable}");
    }

    using (var sqlConnection = new SqlConnection(ConnectionString))
    {
        await ClearDatabase(sqlConnection);
    }

    Logger = Substitute.For<ILogger>();
    EmailSender = Substitute.For<IEmailSender>();
    EventsBus = new EventsBusMock();
    ExecutionContext = new ExecutionContextMock(Guid.NewGuid());

    PaymentsStartup.Initialize(
        ConnectionString,
        ExecutionContext,
        Logger,
        EventsBus,
        false);

    PaymentsModule = new PaymentsModule();
}
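ClearDatabase is a small helper that wipes the data written by previous tests so every test starts from a clean state. A simplified, illustrative version is shown below; the exact table list differs per module and is assumed here:

private static async Task ClearDatabase(IDbConnection connection)
{
    // Order matters when foreign keys exist; the table list is illustrative only.
    const string sql =
        "DELETE FROM [payments].[InternalCommands]; " +
        "DELETE FROM [payments].[OutboxMessages]; " +
        "DELETE FROM [payments].[InboxMessages]; " +
        "DELETE FROM [payments].[SubscriptionDetails]; " +
        "DELETE FROM [payments].[Messages];";

    await connection.ExecuteAsync(sql);
}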

After preparation, the test is performed on a clean database. Usually, it is the execution of one (or many) Commands and then:
a) running a Query, and/or
b) verifying mocks
to check the result.

[TestFixture]
public class MeetingPaymentTests : TestBase
{
    [Test]
    public async Task CreateMeetingPayment_Test()
    {
        PayerId payerId = new PayerId(Guid.NewGuid());
        MeetingId meetingId = new MeetingId(Guid.NewGuid());
        decimal value = 100;
        string currency = "EUR";

        await PaymentsModule.ExecuteCommandAsync(
            new CreateMeetingPaymentCommand(Guid.NewGuid(), payerId, meetingId, value, currency));

        var payment = await PaymentsModule.ExecuteQueryAsync(
            new GetMeetingPaymentQuery(meetingId.Value, payerId.Value));

        Assert.That(payment.PayerId, Is.EqualTo(payerId.Value));
        Assert.That(payment.MeetingId, Is.EqualTo(meetingId.Value));
        Assert.That(payment.FeeValue, Is.EqualTo(value));
        Assert.That(payment.FeeCurrency, Is.EqualTo(currency));
    }
}

Each Command/Query processing is a separate execution (with a different object graph resolution, context, database connection etc.) thanks to the Composition Root of each module. This behavior is important and desirable.

3.14 System Integration Testing

Definition

System Integration Testing (SIT) is performed to verify the interactions between the modules of a software system. It involves the overall testing of a complete system of many subsystem components or elements.

Implementation

The implementation of system integration tests is based on the approach of integration testing of modules in isolation (invoking commands and queries) described in the previous section.

The problem is that in this case we are dealing with asynchronous communication. Due to this asynchrony, our test must wait for the result at certain points.

To correctly implement such tests, the Sampling technique and the implementation described in the Growing Object-Oriented Software, Guided by Tests book were used:

An asynchronous test must wait for success and use timeouts to detect failure. This implies that every tested activity must have an observable effect: a test must affect the system so that its observable state becomes different. This sounds obvious but it drives how we think about writing asynchronous tests. If an activity has no observable effect, there is nothing the test can wait for, and therefore no way for the test to synchronize with the system it is testing. There are two ways a test can observe the system: by sampling its observable state or by listening for events that it sends out.

The test below:

  1. Creates a Meeting Group Proposal in the Meetings module
  2. Waits (with a 10 second timeout) until the Meeting Group Proposal is available for verification in the Administration module
  3. Accepts the Meeting Group Proposal in the Administration module
  4. Waits (with a 15 second timeout) until the Meeting Group is created in the Meetings module
public class CreateMeetingGroupTests : TestBase
{
    [Test]
    public async Task CreateMeetingGroupScenario_WhenProposalIsAccepted()
    {
        var meetingGroupId = await MeetingsModule.ExecuteCommandAsync(
            new ProposeMeetingGroupCommand("Name", "Description", "Location", "PL"));

        AssertEventually(
            new GetMeetingGroupProposalFromAdministrationProbe(meetingGroupId, AdministrationModule),
            10000);

        await AdministrationModule.ExecuteCommandAsync(
            new AcceptMeetingGroupProposalCommand(meetingGroupId));

        AssertEventually(
            new GetCreatedMeetingGroupFromMeetingsProbe(meetingGroupId, MeetingsModule),
            15000);
    }

    private class GetCreatedMeetingGroupFromMeetingsProbe : IProbe
    {
        private readonly Guid _expectedMeetingGroupId;
        private readonly IMeetingsModule _meetingsModule;
        private List<MeetingGroupDto> _allMeetingGroups;

        public GetCreatedMeetingGroupFromMeetingsProbe(Guid expectedMeetingGroupId, IMeetingsModule meetingsModule)
        {
            _expectedMeetingGroupId = expectedMeetingGroupId;
            _meetingsModule = meetingsModule;
        }

        public bool IsSatisfied()
        {
            return _allMeetingGroups != null &&
                   _allMeetingGroups.Any(x => x.Id == _expectedMeetingGroupId);
        }

        public async Task SampleAsync()
        {
            _allMeetingGroups = await _meetingsModule.ExecuteQueryAsync(new GetAllMeetingGroupsQuery());
        }

        public string DescribeFailureTo() =>
            $"Meeting group with ID:{_expectedMeetingGroupId} is not created";
    }

    private class GetMeetingGroupProposalFromAdministrationProbe : IProbe
    {
        private readonly Guid _expectedMeetingGroupProposalId;
        private MeetingGroupProposalDto _meetingGroupProposal;
        private readonly IAdministrationModule _administrationModule;

        public GetMeetingGroupProposalFromAdministrationProbe(Guid expectedMeetingGroupProposalId, IAdministrationModule administrationModule)
        {
            _expectedMeetingGroupProposalId = expectedMeetingGroupProposalId;
            _administrationModule = administrationModule;
        }

        public bool IsSatisfied()
        {
            if (_meetingGroupProposal == null)
            {
                return false;
            }

            if (_meetingGroupProposal.Id == _expectedMeetingGroupProposalId &&
                _meetingGroupProposal.StatusCode == MeetingGroupProposalStatus.ToVerify.Value)
            {
                return true;
            }

            return false;
        }

        public async Task SampleAsync()
        {
            try
            {
                _meetingGroupProposal = await _administrationModule.ExecuteQueryAsync(
                    new GetMeetingGroupProposalQuery(_expectedMeetingGroupProposalId));
            }
            catch
            {
                // ignored
            }
        }

        public string DescribeFailureTo() =>
            $"Meeting group proposal with ID:{_expectedMeetingGroupProposalId} to verification not created";
    }
}

Poller class implementation (based on the example in the book):

public class Poller
{
    private readonly int _timeoutMillis;
    private readonly int _pollDelayMillis;

    public Poller(int timeoutMillis)
    {
        _timeoutMillis = timeoutMillis;
        _pollDelayMillis = 1000;
    }

    public void Check(IProbe probe)
    {
        var timeout = new Timeout(_timeoutMillis);
        while (!probe.IsSatisfied())
        {
            if (timeout.HasTimedOut())
            {
                throw new AssertErrorException(DescribeFailureOf(probe));
            }

            Thread.Sleep(_pollDelayMillis);

            // Wait for the sample to complete before the next IsSatisfied() check.
            probe.SampleAsync().GetAwaiter().GetResult();
        }
    }

    private static string DescribeFailureOf(IProbe probe)
    {
        return probe.DescribeFailureTo();
    }
}
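The probes above implement a small IProbe abstraction, and AssertEventually simply runs the Poller against a probe with the given timeout. Below is a sketch consistent with the code above; the member names follow the usage in the test, while the helper's exact home (e.g. the TestBase class) is an assumption.

public interface IProbe
{
    bool IsSatisfied();

    Task SampleAsync();

    string DescribeFailureTo();
}

// Illustrative helper used in the tests above: polls the probe until it is
// satisfied or the timeout (in milliseconds) elapses.
protected static void AssertEventually(IProbe probe, int timeoutMillis)
{
    new Poller(timeoutMillis).Check(probe);
}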

3.15 Event Sourcing

Theory

During the implementation of the Payments module, Event Sourcing was used. Event Sourcing is a way of preserving the state of our system by recording a sequence of events. No less, no more.

It is important here to really restore the state of our application from events. If we collect events only for auditing purposes, it is an Audit Log/Trail - not Event Sourcing.

The main elements of Event Sourcing are as follows:

  • Events Stream
  • Objects that are restored based on events. There are 2 types of such objects, depending on the purpose:
    • Objects responsible for the change of state. In Domain-Driven Design these will be Aggregates.
    • Projections: read models prepared for a specific purpose
  • Subscriptions: a way to receive information about new events
  • Snapshots: from time to time, objects saved in the traditional way for performance purposes. Mainly used if there are many events to restore the object from the entire event history. (Note: there is currently no snapshot implementation in the project)

Tool

In order not to reinvent the wheel, the SQL Stream Store library was used. As the documentation says:

SQL Stream Store is a .NET library to assist with developing applications that use event sourcing or wish to use stream based patterns over a relational database and existing operational infrastructure.

Like every library, it has its limitations and assumptions (I recommend the linked documentation chapter "Things you need to know before adopting"). For me, the 2 most important points from this chapter are:

  1. "Subscriptions (and thus projections) are eventually consistent and always will be." This means that there will always be a window of inconsistency between saving the event to the stream and processing the event by the projector(s).
  2. "No support for ambient System.Transaction scopes enforcing the concept of the stream as the consistency and transactional boundary." This means that if we save an event to an events stream and want to save something else in the same transaction, we must use TransactionScope. If we cannot use TransactionScope for some reason, we must accept Eventual Consistency in this case as well.

Other popular tools:

  • EventStore - "An industrial-strength database solution built from the ground up for event sourcing."
  • Marten - ".NET Transactional Document DB and Event Store on PostgreSQL"

Implementation

There are 2 main "flows" to handle:

  • Command handling: change of state - adding new events to stream (writing)
  • Projection of events to create read models
Command Handling

The whole process looks like this:

  1. We create / update an aggregate by creating an event
  2. We add the changes to the Aggregate Store. This is the class responsible for writing / loading our aggregates. We are not saving the changes yet.
  3. As part of the Unit Of Work: a) the Aggregate Store adds events to the stream, b) messages are added to the Outbox

Command Handler:

public class BuySubscriptionCommandHandler : ICommandHandler<BuySubscriptionCommand, Guid>
{
    private readonly IAggregateStore _aggregateStore;
    private readonly IPayerContext _payerContext;
    private readonly ISqlConnectionFactory _sqlConnectionFactory;

    public BuySubscriptionCommandHandler(
        IAggregateStore aggregateStore,
        IPayerContext payerContext,
        ISqlConnectionFactory sqlConnectionFactory)
    {
        _aggregateStore = aggregateStore;
        _payerContext = payerContext;
        _sqlConnectionFactory = sqlConnectionFactory;
    }

    public async Task<Guid> Handle(BuySubscriptionCommand command, CancellationToken cancellationToken)
    {
        var priceList = await PriceListProvider.GetPriceList(_sqlConnectionFactory.GetOpenConnection());

        var subscriptionPayment = SubscriptionPayment.Buy(
            _payerContext.PayerId,
            SubscriptionPeriod.Of(command.SubscriptionTypeCode),
            command.CountryCode,
            MoneyValue.Of(command.Value, command.Currency),
            priceList);

        _aggregateStore.AppendChanges(subscriptionPayment);

        return subscriptionPayment.Id;
    }
}

SubscriptionPayment Aggregate:

public class SubscriptionPayment : AggregateRoot
{
    private PayerId _payerId;

    private SubscriptionPeriod _subscriptionPeriod;

    private string _countryCode;

    private SubscriptionPaymentStatus _subscriptionPaymentStatus;

    private MoneyValue _value;

    protected override void Apply(IDomainEvent @event)
    {
        this.When((dynamic)@event);
    }

    public static SubscriptionPayment Buy(
        PayerId payerId,
        SubscriptionPeriod period,
        string countryCode,
        MoneyValue priceOffer,
        PriceList priceList)
    {
        var priceInPriceList = priceList.GetPrice(countryCode, period, PriceListItemCategory.New);

        CheckRule(new PriceOfferMustMatchPriceInPriceListRule(priceOffer, priceInPriceList));

        var subscriptionPayment = new SubscriptionPayment();

        var subscriptionPaymentCreated = new SubscriptionPaymentCreatedDomainEvent(
            Guid.NewGuid(),
            payerId.Value,
            period.Code,
            countryCode,
            SubscriptionPaymentStatus.WaitingForPayment.Code,
            priceOffer.Value,
            priceOffer.Currency);

        subscriptionPayment.Apply(subscriptionPaymentCreated);
        subscriptionPayment.AddDomainEvent(subscriptionPaymentCreated);

        return subscriptionPayment;
    }

    private void When(SubscriptionPaymentCreatedDomainEvent @event)
    {
        this.Id = @event.SubscriptionPaymentId;
        _payerId = new PayerId(@event.PayerId);
        _subscriptionPeriod = SubscriptionPeriod.Of(@event.SubscriptionPeriodCode);
        _countryCode = @event.CountryCode;
        _subscriptionPaymentStatus = SubscriptionPaymentStatus.Of(@event.Status);
        _value = MoneyValue.Of(@event.Value, @event.Currency);
    }

AggregateRoot base class:

public abstract class AggregateRoot
{
    public Guid Id { get; protected set; }

    public int Version { get; private set; }

    private readonly List<IDomainEvent> _domainEvents;

    protected AggregateRoot()
    {
        _domainEvents = new List<IDomainEvent>();
        Version = -1;
    }

    protected void AddDomainEvent(IDomainEvent @event)
    {
        _domainEvents.Add(@event);
    }

    public IReadOnlyCollection<IDomainEvent> GetDomainEvents() => _domainEvents.AsReadOnly();

    public void Load(IEnumerable<IDomainEvent> history)
    {
        foreach (var e in history)
        {
            Apply(e);
            Version++;
        }
    }

    protected abstract void Apply(IDomainEvent @event);

    protected static void CheckRule(IBusinessRule rule)
    {
        if (rule.IsBroken())
        {
            throw new BusinessRuleValidationException(rule);
        }
    }
}

Aggregate Store implementation with SQL Stream Store library usage:

public class SqlStreamAggregateStore : IAggregateStore
{
    private readonly IStreamStore _streamStore;
    private readonly List<IDomainEvent> _appendedChanges;
    private readonly List<AggregateToSave> _aggregatesToSave;

    public SqlStreamAggregateStore(ISqlConnectionFactory sqlConnectionFactory)
    {
        _appendedChanges = new List<IDomainEvent>();
        _streamStore = new MsSqlStreamStore(
            new MsSqlStreamStoreSettings(sqlConnectionFactory.GetConnectionString())
            {
                Schema = DatabaseSchema.Name
            });
        _aggregatesToSave = new List<AggregateToSave>();
    }

    public async Task Save()
    {
        foreach (var aggregateToSave in _aggregatesToSave)
        {
            await _streamStore.AppendToStream(
                GetStreamId(aggregateToSave.Aggregate),
                aggregateToSave.Aggregate.Version,
                aggregateToSave.Messages.ToArray());
        }

        _aggregatesToSave.Clear();
    }

    public async Task<T> Load<T>(AggregateId<T> aggregateId)
        where T : AggregateRoot
    {
        var streamId = GetStreamId(aggregateId);

        IList<IDomainEvent> domainEvents = new List<IDomainEvent>();
        ReadStreamPage readStreamPage;
        do
        {
            readStreamPage = await _streamStore.ReadStreamForwards(streamId, StreamVersion.Start, maxCount: 100);
            var messages = readStreamPage.Messages;
            foreach (var streamMessage in messages)
            {
                Type type = DomainEventTypeMappings.Dictionary[streamMessage.Type];
                var jsonData = await streamMessage.GetJsonData();
                var domainEvent = JsonConvert.DeserializeObject(jsonData, type) as IDomainEvent;

                domainEvents.Add(domainEvent);
            }
        }
        while (!readStreamPage.IsEnd);

        var aggregate = (T)Activator.CreateInstance(typeof(T), true);
        aggregate.Load(domainEvents);

        return aggregate;
    }
Events Projection

The whole process looks like this:

  1. A special class, the Subscriptions Manager, subscribes to the Events Store (using the SQL Stream Store library)
  2. The Events Store raises a StreamMessageReceived event
  3. The Subscriptions Manager invokes all projectors
  4. If a projector knows how to handle a given event, it updates the particular read model. In the current implementation it updates a special table in the SQL database.

SubscriptionsManager class implementation:

public class SubscriptionsManager
{
    private readonly IStreamStore _streamStore;

    public SubscriptionsManager(IStreamStore streamStore)
    {
        _streamStore = streamStore;
    }

    public void Start()
    {
        long? actualPosition;

        using (var scope = PaymentsCompositionRoot.BeginLifetimeScope())
        {
            var checkpointStore = scope.Resolve<ICheckpointStore>();
            actualPosition = checkpointStore.GetCheckpoint(SubscriptionCode.All);
        }

        _streamStore.SubscribeToAll(actualPosition, StreamMessageReceived);
    }

    public void Stop()
    {
        _streamStore.Dispose();
    }

    private static async Task StreamMessageReceived(
        IAllStreamSubscription subscription,
        StreamMessage streamMessage,
        CancellationToken cancellationToken)
    {
        var type = DomainEventTypeMappings.Dictionary[streamMessage.Type];
        var jsonData = await streamMessage.GetJsonData(cancellationToken);
        var domainEvent = JsonConvert.DeserializeObject(jsonData, type) as IDomainEvent;

        using var scope = PaymentsCompositionRoot.BeginLifetimeScope();

        var projectors = scope.Resolve<IList<IProjector>>();

        var tasks = projectors.Select(async projector =>
        {
            await projector.Project(domainEvent);
        });

        await Task.WhenAll(tasks);

        var checkpointStore = scope.Resolve<ICheckpointStore>();
        await checkpointStore.StoreCheckpoint(SubscriptionCode.All, streamMessage.Position);
    }
}

Example projector:

internal class SubscriptionDetailsProjector : ProjectorBase, IProjector
{
    private readonly IDbConnection _connection;

    public SubscriptionDetailsProjector(ISqlConnectionFactory sqlConnectionFactory)
    {
        _connection = sqlConnectionFactory.GetOpenConnection();
    }

    public async Task Project(IDomainEvent @event)
    {
        await When((dynamic)@event);
    }

    private async Task When(SubscriptionRenewedDomainEvent subscriptionRenewed)
    {
        var period = SubscriptionPeriod.GetName(subscriptionRenewed.SubscriptionPeriodCode);

        await _connection.ExecuteScalarAsync(
            "UPDATE payments.SubscriptionDetails " +
            "SET " +
            "[Status] = @Status, " +
            "[ExpirationDate] = @ExpirationDate, " +
            "[Period] = @Period " +
            "WHERE [Id] = @SubscriptionId",
            new
            {
                subscriptionRenewed.SubscriptionId,
                subscriptionRenewed.Status,
                subscriptionRenewed.ExpirationDate,
                period
            });
    }

    private async Task When(SubscriptionExpiredDomainEvent subscriptionExpired)
    {
        await _connection.ExecuteScalarAsync(
            "UPDATE payments.SubscriptionDetails " +
            "SET " +
            "[Status] = @Status " +
            "WHERE [Id] = @SubscriptionId",
            new
            {
                subscriptionExpired.SubscriptionId,
                subscriptionExpired.Status
            });
    }

    private async Task When(SubscriptionCreatedDomainEvent subscriptionCreated)
    {
        var period = SubscriptionPeriod.GetName(subscriptionCreated.SubscriptionPeriodCode);

        await _connection.ExecuteScalarAsync(
            "INSERT INTO payments.SubscriptionDetails " +
            "([Id], [Period], [Status], [CountryCode], [ExpirationDate]) " +
            "VALUES (@SubscriptionId, @Period, @Status, @CountryCode, @ExpirationDate)",
            new
            {
                subscriptionCreated.SubscriptionId,
                period,
                subscriptionCreated.Status,
                subscriptionCreated.CountryCode,
                subscriptionCreated.ExpirationDate
            });
    }
}

Sample view of Event Store

A sample Event Store view after execution of the SubscriptionLifecycleTests Integration Test, which includes the following steps:

  1. Creating Price List
  2. Buying Subscription
  3. Renewing Subscription
  4. Expiring Subscription

looks like this (SQL Stream Store table - payments.Messages):

3.16 Database Change Management

Database change management is accomplished by migrations/transitions versioning. Additionally, the current state of the database structure is also versioned.

Migrations are applied using a simple DatabaseMigrator console application that uses the DbUp library. The current state of the database structure is kept in the SSDT Database Project.

The database update is performed by running the following command:

dotnet DatabaseMigrator.dll "connection_string" "scripts_directory_path"
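Internally, such a migrator is a thin wrapper over DbUp. A minimal, illustrative version of what the console application can do (argument handling and error reporting are simplified, and this is not the repository's exact code):

public static class Program
{
    public static int Main(string[] args)
    {
        var connectionString = args[0];
        var scriptsPath = args[1];

        // DbUp journals executed scripts, so only new migration scripts are applied.
        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsFromFileSystem(scriptsPath)
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();

        return result.Successful ? 0 : -1;
    }
}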

The entire solution is described in detail in the following articles:

  1. Database change management (theory)
  2. Using database project and DbUp for database management (implementation)

3.17 Continuous Integration

Definition

As defined on Martin Fowler's website:

Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.

YAML Implementation [OBSOLETE]

Originally the build was implemented using YAML and GitHub Actions functionality. Currently, the build is implemented with NUKE (see the next section). See the buildPipeline.yml file history.

Pipeline description

CI was implemented using GitHub Actions. For this purpose, one workflow was created, which triggers on a Pull Request to the master branch or a Push to the master branch. It contains 2 jobs:

  • build, execute Unit Tests and Architecture Tests
  • execute Integration Tests

Steps description
a) Checkout repository - clean checkout of git repository
b) Setup .NET - install .NET 8.0 SDK
c) Install dependencies - resolve NuGet packages
d) Build - build solution
e) Run Unit Tests - run automated Unit Tests (see section 3.10)
f) Run Architecture Tests - run automated Architecture Tests (see section 3.12)
g) Initialize containers - setup Docker container for MS SQL Server
h) Wait for SQL Server initialization - after the container starts, MS SQL Server itself still needs some time to initialize, so a 30-second timeout is required before the next step executes
i) Create Database - create and initialize database
j) Migrate Database - execute database upgrade using the DatabaseMigrator application (see section 3.16)
k) Run Integration Tests - perform Integration and System Integration Testing (see sections 3.13 and 3.14)

Workflow definition

Workflow definition:buildPipeline.yml

Example workflow execution

Example workflow output:

NUKE

NUKE is a cross-platform build automation solution for .NET with a C# DSL.

The 2 main advantages of using it over pure YAML defined in GitHub Actions are as follows:

  • You run the same code on the local machine and on the build server. See buildPipeline.yml
  • You use C# with all the goodness (debugging, compilation, packages, refactoring and so on)

This is how one of the stage definitions looks (execute Build, Unit Tests, Architecture Tests) in Build.cs:

partial class Build : NukeBuild
{
    /// Support plugins are available for:
    ///   - JetBrains ReSharper        https://nuke.build/resharper
    ///   - JetBrains Rider            https://nuke.build/rider
    ///   - Microsoft VisualStudio     https://nuke.build/visualstudio
    ///   - Microsoft VSCode           https://nuke.build/vscode

    public static int Main() => Execute<Build>(x => x.Compile);

    [Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
    readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    [Solution]
    readonly Solution Solution;

    Target Clean => _ => _
        .Before(Restore)
        .Executes(() =>
        {
            EnsureCleanDirectory(WorkingDirectory);
        });

    Target Restore => _ => _
        .Executes(() =>
        {
            DotNetRestore(s => s
                .SetProjectFile(Solution));
        });

    Target Compile => _ => _
        .DependsOn(Restore)
        .Executes(() =>
        {
            DotNetBuild(s => s
                .SetProjectFile(Solution)
                .SetConfiguration(Configuration)
                .EnableNoRestore());
        });

    Target UnitTests => _ => _
        .DependsOn(Compile)
        .Executes(() =>
        {
            DotNetTest(s => s
                .SetProjectFile(Solution)
                .SetFilter("UnitTests")
                .SetConfiguration(Configuration)
                .EnableNoRestore()
                .EnableNoBuild());
        });

    Target ArchitectureTests => _ => _
        .DependsOn(UnitTests)
        .Executes(() =>
        {
            DotNetTest(s => s
                .SetProjectFile(Solution)
                .SetFilter("ArchTests")
                .SetConfiguration(Configuration)
                .EnableNoRestore()
                .EnableNoBuild());
        });

    Target BuildAndUnitTests => _ => _
        .Triggers(ArchitectureTests)
        .Executes(() =>
        {
        });
}

If you want to see a more complex scenario, in which integration tests are executed (with SQL Server database creation using Docker), see the BuildIntegrationTests.cs file.

SQL Server database project build

Currently, compilation of database projects is not supported by .NET Core and the dotnet tool. For this reason, the MSBuild.Sdk.SqlProj library was used. In order to do that, you need to create a .NET Standard library, change its SDK, and create links to the scripts folders. The final database project looks as follows:

<ProjectSdk="MSBuild.Sdk.SqlProj/1.6.0">    <PropertyGroup>        <TargetFramework>netstandard2.0</TargetFramework>    </PropertyGroup>    <ItemGroup>        <ContentInclude="..\CompanyName.MyMeetings.Database\administration\**\*.sql" />        <ContentInclude="..\CompanyName.MyMeetings.Database\app\**\*.sql" />        <ContentInclude="..\CompanyName.MyMeetings.Database\meetings\**\*.sql" />        <ContentInclude="..\CompanyName.MyMeetings.Database\payments\**\*.sql" />        <ContentInclude="..\CompanyName.MyMeetings.Database\users\**\*.sql" />        <ContentInclude="..\CompanyName.MyMeetings.Database\Security\**\*.sql" />    </ItemGroup></Project>

3.18 Static code analysis

In order to standardize the appearance of the code and increase its readability, the StyleCopAnalyzers library was used. This library implements StyleCop rules using the .NET Compiler Platform and is responsible for static code analysis.

Using this library is trivial - it is just added as a NuGet package to all projects. There are many ways to configure the rules, but currently the best way to do this is to edit the .editorconfig file. More information can be found at the link above.

Note! Static code analysis works best when the following points are met:

  1. Each developer has an IDE that respects the rules and helps to follow them
  2. The rules are checked during the project build process as part of Continuous Integration
  3. The rules are set to help your system grow. Static analysis is not a value in itself. Some rules may not make complete sense and should be turned off. Other rules may have higher priority. It all depends on the project, company standards and the people involved in the project. Be pragmatic.

3.19 System Under Test SUT

There is always a need to prepare the entire system in a specific state, e.g. for manual, exploratory, or UX/UI tests. The fact that the tests are performed manually does not mean that we cannot automate the preparation phase (Given / Arrange). Thanks to the automation of system state preparation (System Under Test), we are able to recreate exactly the same state in any environment. In addition, such automation can be used later to automate the entire test (e.g. by turning it into an Integration Test, see section 3.13).

The implementation of such automation, based on NUKE and the test framework, is presented below. As in the case of integration testing, we use the public API of the modules.

Below is a SUT whose task is to go through the whole process - from setting up a Meeting Group, through its Payment and adding a new Meeting, to another user signing up for it.

public class CreateMeeting : TestBase
{
    protected override bool PerformDatabaseCleanup => true;

    [Test]
    public async Task Prepare()
    {
        await UsersFactory.GivenAdmin(
            UserAccessModule,
            "testAdmin@mail.com",
            "testAdminPass",
            "Jane Doe",
            "Jane",
            "Doe",
            "testAdmin@mail.com");

        var userId = await UsersFactory.GivenUser(
            UserAccessModule,
            ConnectionString,
            "adamSmith@mail.com",
            "adamSmithPass",
            "Adam",
            "Smith",
            "adamSmith@mail.com");

        ExecutionContextAccessor.SetUserId(userId);

        var meetingGroupId = await MeetingGroupsFactory.GivenMeetingGroup(
            MeetingsModule,
            AdministrationModule,
            ConnectionString,
            "Software Craft",
            "Group for software craft passionates",
            "Warsaw",
            "PL");

        await TestPriceListManager.AddPriceListItems(PaymentsModule, ConnectionString);

        await TestPaymentsManager.BuySubscription(PaymentsModule, ExecutionContextAccessor);

        SetDate(new DateTime(2022, 7, 1, 10, 0, 0));

        var meetingId = await TestMeetingFactory.GivenMeeting(
            MeetingsModule,
            meetingGroupId,
            "Tactical DDD",
            new DateTime(2022, 7, 10, 18, 0, 0),
            new DateTime(2022, 7, 10, 20, 0, 0),
            "Meeting about Tactical DDD patterns",
            "Location Name",
            "Location Address",
            "01-755",
            "Warsaw",
            50,
            0,
            null,
            null,
            0,
            null,
            new List<Guid>());

        var attendeeUserId = await UsersFactory.GivenUser(
            UserAccessModule,
            ConnectionString,
            "rickmorty@mail.com",
            "rickmortyPass",
            "Rick",
            "Morty",
            "rickmorty@mail.com");

        ExecutionContextAccessor.SetUserId(attendeeUserId);

        await TestMeetingGroupManager.JoinToGroup(MeetingsModule, meetingGroupId);

        await TestMeetingManager.AddAttendee(MeetingsModule, meetingId, guestsNumber: 1);
    }
}

You can create this SUT using the following NUKE target, providing a connection string and a particular test name:

.\build PrepareSUT --DatabaseConnectionString "connection_string" --SUTTestName CreateMeeting
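
A rough sketch of how such a target could be defined as part of the NUKE Build class shown earlier. The parameter definitions and test filter below are illustrative assumptions, not necessarily the repository's exact implementation (see the Build*.cs files for that):

partial class Build : NukeBuild
{
    // Illustrative parameter definitions (assumptions).
    [Parameter("Connection string of the target database")]
    readonly string DatabaseConnectionString;

    [Parameter("Name of the SUT test class to execute")]
    readonly string SUTTestName;

    Target PrepareSUT => _ => _
        .Requires(() => DatabaseConnectionString)
        .Requires(() => SUTTestName)
        .Executes(() =>
        {
            // Runs only the selected SUT "test", which prepares the system state
            // through the modules' public API. How the connection string reaches
            // the test host (e.g. via an environment variable) is omitted here.
            DotNetTest(s => s
                .SetProjectFile(Solution)
                .SetFilter($"FullyQualifiedName~{SUTTestName}"));
        });
}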

3.20 Mutation Testing

Description

Mutation testing is an approach to testing and evaluating our existing tests. During mutation testing, a special framework modifies pieces of our code and runs our tests. These modifications are called mutations or mutants. If a given mutation does not cause at least one test to fail, it means the mutant has survived, so our tests are probably not sufficient.

Example

In this repository, the Stryker.NET framework was used for mutation testing. In the simplest use case, after installation, all you need to do is enter the directory of the tests that you want to run against the mutated code and run the following command:

dotnet stryker

The result of this command is a mutation report file. Assuming we want to test the unit tests of the Meetings module, such a report has been generated. This is its first page:

Let us analyze one of the places where a mutant survived. This is the AddNotAttendee method of the Meeting class. This method is used to add a Member to the list of people who have decided not to attend the meeting. According to the logic, if the same person previously indicated that they were going to the Meeting and later changed their mind, then if there is someone on the Waiting List, that person should be added to the attendees. Based on the requirements, this should be the person who signed up on the Waiting List first (based on SignUpDate).

As you can see, the mutation framework changed the sorting in the LINQ query (from the default ascending order to descending). However, every test still passed, which means the mutant survived - we do not have a test that checks the correct sort order based on SignUpDate.
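
To kill such a mutant, a test has to assert on the ordering itself. Below is a self-contained sketch of the idea; the types and method here are simplified stand-ins for illustration, not the repository's actual Meeting API:

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public record WaitlistMember(Guid MemberId, DateTime SignUpDate);

public static class WaitingList
{
    // Behaviour under test: the member who signed up first is promoted.
    // A mutant flipping OrderBy to OrderByDescending must make the test below fail.
    public static WaitlistMember PromoteNext(IEnumerable<WaitlistMember> waitingList) =>
        waitingList.OrderBy(m => m.SignUpDate).First();
}

[TestFixture]
public class WaitingListTests
{
    [Test]
    public void PromoteNext_ReturnsMemberWithEarliestSignUpDate()
    {
        var earliest = new WaitlistMember(Guid.NewGuid(), new DateTime(2022, 7, 1));
        var later = new WaitlistMember(Guid.NewGuid(), new DateTime(2022, 7, 5));

        var promoted = WaitingList.PromoteNext(new[] { later, earliest });

        Assert.AreEqual(earliest.MemberId, promoted.MemberId);
    }
}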

From the example above, one more important thing can be deduced - the code coverage metric alone is insufficient. In the given example, this code is covered, but our tests do not check the given requirement, so our code may still have errors. Mutation testing allows us to detect such situations. Of course, as with any tool, we should use it wisely, as not every case requires our attention.

4. Technology

List of technologies, frameworks and libraries used for implementation:

5. How to Run

Install .NET 8.0 SDK

Create database

  • Download and install MS SQL Server Express or other
  • Create an empty database using CreateDatabase_Windows.sql or CreateDatabase_Linux.sql. The script adds the app schema, which is needed for the migrations journal table. Change the database file path if needed.
  • Run database migrations using the MigrateDatabase NUKE target by executing the build.sh script present in the root folder:
.\build MigrateDatabase --DatabaseConnectionString "connection_string"

"connection_string" - connection string to your database

Seed database

  • Execute the SeedDatabase.sql script
  • 2 test users will be created - check the script for usernames and passwords

Configure connection string

Set a database connection string called MeetingsConnectionString in the root of the API project's appsettings.json or use Secrets

Example config setting in appsettings.json for a database called MyMeetings:

{
    "MeetingsConnectionString": "Server=(localdb)\\mssqllocaldb;Database=MyMeetings;Trusted_Connection=True;"
}

Configure startup in IDE

  • Set the Startup Item in your IDE to the API Project, not IIS Express

Authenticate

  • Once it is running, you'll need a token to make API calls. This is done via the OAuth2 Resource Owner Password Grant Type. By default, IdentityServer is configured with the following:
  • client_id = ro.client
  • client_secret = secret (this is literally the value - not a statement that this value is secret!)
  • scope = myMeetingsAPI openid profile
  • grant_type = password

Include the credentials of a test user created in the SeedDatabase.sql script - for example:

  • username = testMember@mail.com
  • password = testMemberPass

Example HTTP Request for an Access Token:

POST /connect/token HTTP/1.1
Host: localhost:5000

grant_type=password&username=testMember@mail.com&password=testMemberPass&client_id=ro.client&client_secret=secret

This will fetch an access token for this user to make authorized API requests using the HTTP request header Authorization: Bearer <access_token>.
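
For example, the token request and an authorized API call could look like this in C#. This is only a minimal sketch; the base address comes from the example above, and the API endpoint path used at the end is an assumption for illustration:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public static class TokenExample
{
    public static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

        // Resource Owner Password Grant request using the values listed above.
        var tokenResponse = await client.PostAsync("/connect/token", new FormUrlEncodedContent(
            new Dictionary<string, string>
            {
                ["grant_type"] = "password",
                ["username"] = "testMember@mail.com",
                ["password"] = "testMemberPass",
                ["client_id"] = "ro.client",
                ["client_secret"] = "secret",
                ["scope"] = "myMeetingsAPI openid profile",
            }));

        tokenResponse.EnsureSuccessStatusCode();

        using var json = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
        var accessToken = json.RootElement.GetProperty("access_token").GetString();

        // Use the token for subsequent API calls (the endpoint path below is only an example).
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        var response = await client.GetAsync("/meetingGroups");

        Console.WriteLine(response.StatusCode);
    }
}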

If you use a tool such as Postman to test your API, the token can be fetched and stored within the tool itself and appended to all API calls. Check your tool documentation for instructions.

Run using Docker Compose

You can run the whole application using docker-compose from the root folder:

docker-compose up

It will create the following services:

  • MS SQL Server Database
  • Database Migrator
  • Application

Run Integration Tests in Docker

You can run all Integration Tests in Docker (exactly the same process is executed on CI) using the RunAllIntegrationTests NUKE target:

.\build RunAllIntegrationTests

6. Contribution

This project is still under analysis and development. I plan to maintain it for a long time and I would appreciate your contribution. Please let me know by creating an Issue or a Pull Request.

7. Roadmap

List of features/tasks/approaches to add:

Name | Status | Release date
---- | ------ | ------------
Domain Model Unit Tests | Completed | 2019-09-10
Architecture Decision Log update | Completed | 2019-11-09
Integration automated tests | Completed | 2020-02-24
Migration to .NET Core 3.1 | Completed | 2020-03-04
System Integration Testing | Completed | 2020-03-28
More advanced Payments module | Completed | 2020-07-11
Event Sourcing implementation | Completed | 2020-07-11
Database Change Management | Completed | 2020-08-23
Continuous Integration | Completed | 2020-09-01
StyleCop Static Code Analysis | Completed | 2020-09-05
FrontEnd SPA application | Completed | 2020-11-08
Docker support | Completed | 2020-11-26
PlantUML Conceptual Model | Completed | 2021-03-22
C4 Model | Completed | 2021-03-29
Meeting comments feature | Completed | 2021-03-30
NUKE build automation | Completed | 2021-06-15
Database project compilation on CI | Completed | 2021-06-15
System Under Test implementation | Completed | 2022-07-17
Mutation Testing | Completed | 2022-08-23
Migration to .NET 8.0 | Completed | 2023-12-09

NOTE: Please don't hesitate to suggest something else or a change to the existing code. All proposals will be considered.

8. Authors

Kamil Grzybek

Blog:https://kamilgrzybek.com

Twitter:https://twitter.com/kamgrzybek

LinkedIn:https://www.linkedin.com/in/kamilgrzybek/

GitHub:https://github.com/kgrzybek

8.1 Main contributors

9. License

The project is under the MIT license.

10. Inspirations and Recommendations

Modular Monolith

Domain-Driven Design

Application Architecture

Software Architecture

System Architecture

Design

Craftsmanship

Testing

UML

Event Storming

Event Sourcing

