
SaaS Architecture for OnPremise Applications on IIS #74

HERCH started this conversation in General

Hello @jezzsantos, I’m exploring the SaaStack template on GitHub, and its subdomain-based architecture for a SaaS model caught my attention. I’m starting with an application that will initially be OnPremise, but I’m considering an architecture that allows me to unify the codebase for both OnPremise and Cloud versions.

In our case, we will work with a desktop application that will consume services locally through an IIS server. We understand that opting for an OnPremise approach might result in losing many features typically available in a cloud environment, but our primary focus is on managing repositories pointing to an MSSQL Server instance.

Is it possible to adapt an architecture like SaaStack’s to support both modes? What recommendations do you have for designing a solution that facilitates this unification without compromising security, scalability, and efficient data management in either environment?


Replies: 6 comments 7 replies


Hi @HERCH

Thanks for getting in touch.

Yes, yes, yes! Good news.
This is precisely what it is designed for: running in different environments, with different technology stacks, be they Local, OnPrem, or Cloud.

You said,

We understand that opting for an OnPremise approach might result in losing many features typically available in a cloud environment

I'm guessing you might mean that in Desktop<->OnPrem systems you don't have the common cloud services we might be used to seeing, especially the messaging components?

In terms of adapting to different environments, that is trivial, really. You simply swap out and plug in the technology adapters for the environment you have - no other code changes.
(All interfaces for all external services, including databases, etc., have been abstracted and well-defined, as you would expect in a Clean/Onion/Hexagonal architecture.)

Essentially, the architecture as it is depends on the following list of infrastructure services to operate correctly as designed, albeit they are all completely technology agnostic:

For any runtime environment - be it OnPrem, Cloud, Local, whatever - you would need to have something for all these persistence services.
(You can see this working on your local machine, after cloning and compiling, with no external infrastructure installed. It is all handled by local adapters to your local disk - only for local development, of course!)

  • An IDataStore. In your case this would be SQL Server (the existing AzureSqlServerStore adapter will work perfectly fine in Cloud or OnPrem - it really just uses SqlClient).

  • An IQueueStore, for reliable single-consumer JSON messages. You need this available to code from your server. It could be any OnPrem technology you want, e.g. RabbitMQ, Redis Queues, MSMQ, etc., or you can simply connect to one of these services online, or to your own instance in any cloud provider (Azure/AWS, etc.).

  • An IMessageBusStore, for reliable FIFO topics (multi-consumer JSON messages). You need this available to code from your server. Same as queues above.

  • An IBlobStore, to store binary files (pictures, etc.). You need this available to code from your server. It could be any OnPrem technology you want, e.g. FileSystem, relational database, NoSQL database, etc., or you can simply connect to an online service like DropBox or OneDrive, or to your own instance of an Azure Storage Account or AWS S3 buckets, etc.

  • An IEventStore, to store the events from event-sourced aggregates. You can choose to have event-sourced aggregates or not (we recommend you do). If you do, you need a robust EventStore for those events. You can do that in any SQL or NoSQL database technology (it won't require any schema); the included AzureSqlServerStore is built to do this for you also.

If memory serves me well, I think everything else that is built in can either be stubbed out, or you can redirect to external 3rd party services, like those for Email, Metrics, etc.
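
To make the swap concrete, here is a minimal sketch (not SaaStack's actual host wiring) of what re-plugging these ports could look like in a standard .NET DI container. The port interfaces are the ones listed above; the OnPrem adapter class names (SqlServerStore, RabbitMqQueueStore, RabbitMqMessageBusStore, FileSystemBlobStore) are hypothetical stand-ins:

using Microsoft.Extensions.DependencyInjection;

public static class OnPremisesPersistenceModule
{
    public static IServiceCollection AddOnPremisesPersistence(this IServiceCollection services)
    {
        // Same port interfaces as the Cloud build; only the adapters change.
        services.AddSingleton<IDataStore, SqlServerStore>();                // MSSQL, like the existing AzureSqlServerStore
        services.AddSingleton<IEventStore, SqlServerStore>();               // the same store can persist event-sourced aggregates
        services.AddSingleton<IQueueStore, RabbitMqQueueStore>();           // hypothetical RabbitMQ adapter
        services.AddSingleton<IMessageBusStore, RabbitMqMessageBusStore>(); // hypothetical RabbitMQ adapter
        services.AddSingleton<IBlobStore, FileSystemBlobStore>();           // hypothetical file-system adapter
        return services;
    }
}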

@HERCH

Hi @jezzsantos,

I have been exploring the architecture and following the steps for implementation. I cloned the repository and started running the application to evaluate its functionality.

Currently, I am registering a user using the PasswordCredentialsApi -> /passwords/register endpoint (success) and confirming the account with the generated token (success). However, when I try to authenticate this new user, the request fails. After investigating, I noticed that the UserProfile is not being registered. Could you guide me on the correct approach to resolve this, or is there something I might be missing?

Additionally, I tried generating an ApiKey using the RegisterMachineCredential endpoint, but I am unable to access the corresponding APIs. Could you provide some guidance on this matter?

Since I am interested in an On-Premises version, I have added a new OnPremises project to the repository. I implemented this version using RabbitMQ, following the same pattern you previously mentioned. You can find my implementation in my repository: https://github.com/HERCH/saastack/tree/feature/on-premises-migration. If you find it relevant, I can submit a PR for integration.


Looking forward to your feedback.
Best regards.


Hey @HERCH

Take a look at the HttpClient scripts in the tools/httpclient folder, specifically: CreateUser.
Or have a look at the integration test for PasswordCredentialsApiSpec.cs.

You will notice that user registration (using credentials) is a multi-step process:
Register -> Email -> Confirm -> Authenticate -> Access

Whereas registration with SSO is slightly different:
Authenticate -> Access

Also, if you are running locally (on your local machine) we DO NOT have AzureServiceBus (or an equivalent) running, so there is no guaranteed delivery of the events from topics to subscribers.
So, sometimes, we have to trigger that process manually - to disseminate the domain_events.
Hence the need to call the endpoint /domain_events/drain manually, in both integration tests and in local debugging.

We actually do have a background service running in the TestingStubApiHost that does try to process the domain_events on the local message bus, but it is a polled thing, not real-time.
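
(As a hedged aside: for local debugging, draining manually can be as simple as the following sketch. The base address, the HTTP method, and any required authentication are assumptions here - check the httpclient scripts for the exact request shape.)

// Sketch: manually drain pending domain_events against a locally running ApiHost1
using var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") }; // assumed address
var response = await http.PostAsync("/domain_events/drain", content: null);          // assumed to be a POST
response.EnsureSuccessStatusCode();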

To your second point.
Your problem with machine APIKeys is probably related to the above point. In other words, if the events have not propagated properly, the data is probably not in place to allow you to create machine keys yet. That is a guess, not knowing specifically what issues you are seeing.

To your third topic.
Yeah, a RabbitMQ QueueStore would be a great contribution. Yes please.
Have you added the necessary Integration.Persistence tests for it - like the others - to make sure it works properly?
If so, then we can probably run it in a Docker container (like the other server components) for testing.
Yes, I would be keen to have that as an adapter in the codebase.


Hello again @jezzsantos,

I am back with some updates. I have now migrated the Azure infrastructure, specifically the ServiceBusStore, QueueStore, and BlobStore. I have adapted the IDataStore and IEventStore implementations practically unchanged for our on-premises version.

Additionally, I have completed the implementation of RabbitMQ and created the integration tests, adding a new emulator for RabbitMQ and replicating the Azure tests. So far, everything is working fine.

However, I encountered a small issue, which I describe below:

When cloning the file practically verbatim and renaming it (from AzureSqlServerStore.IEventStore.cs to SqlServerStore.IEventStore.cs), at lines 54-55:

Common.Resources.EventStore_ConcurrencyVerificationFailed_StreamAlreadyUpdated.Format(streamName, version));

I am unable to access the resource Common.Resources.EventStore_ConcurrencyVerificationFailed_StreamAlreadyUpdated, as Resources is not accessible.

Could you please guide me regarding this point?

Thank you in advance.


Hey @HERCH

We very recently updated the concurrency handling for the store, and moved this particular resource in this commit: 36b7b77.
I would advise updating your code with the changes in SaaStack as of Feb 10th.

I am not sure if you are on a fork or on a clone; the process to update will be different.
For example, we have a clone, and simply use git patches to stay on top of updates. If you are on a fork, you can either use git patches, or you can use standard git tools to fast-forward to the latest updates.


Hi @jezzsantos,

I’ve created two Pull Requests (PRs) for the integration of the On-Premises version:

1️⃣ First PR: Includes the RabbitMQ emulator setup, persistence implementation, and Service Bus & Queue management.
2️⃣ Second PR: Adds the ONPREMISES global variable and makes the necessary adjustments in Infrastructure.Web.Hosting.Common.Extensions to enable execution from ApiHost1.

🔍 Request for Guidance
Now, I need some guidance on how to deploy this in a real (production) environment. I’m considering that the best approach is to develop a WorkerService, and I already have an initial implementation. However, at this stage, I feel a bit lost:

📌 Main Question:

My test worker is currently using localhost:5656, but in my case, I have only implemented the worker with the consumer.
I'm unsure about the best way to execute the workflow:
📌 Option 1: Use an API (5656) to manage interactions.
📌 Option 2: Allow handlers to directly interact with the workflow.
Given your expertise, I would greatly appreciate your guidance on this matter. I’ve been working on microservices development for several years, and this project has been a fantastic learning opportunity for me.

I’ll share my progress on the workers so we can stay aligned and discuss the best approach.

📌 Current Code - EmailMessageHandler
Below is the code I currently have working for handling email messages in the system:

using Application.Persistence.Shared.ReadModels;
using Infrastructure.Web.Api.Interfaces.Clients;
using Infrastructure.Workers.Api;
using Newtonsoft.Json;

namespace OnPremisesWorkerService.Workers.MessageHandlers;

public class EmailMessageHandler : IMessageHandler
{
    private readonly ILogger<EmailMessageHandler> _logger;
    private readonly IServiceClient _serviceClient;
    private readonly IQueueMonitoringApiRelayWorker<EmailMessage> _emailMessage;

    public EmailMessageHandler(
        ILogger<EmailMessageHandler> logger,
        IServiceClient serviceClient,
        IConfiguration configuration,
        IQueueMonitoringApiRelayWorker<EmailMessage> sendEmail)
    {
        _logger = logger;
        _serviceClient = serviceClient;
        _emailMessage = sendEmail;
    }

    public bool CanHandle(string routingKey) => routingKey == "emails";

    public async Task HandleAsync(string message, CancellationToken cancellationToken)
    {
        // Deserialize the queued message into the shared read model
        var emailMessage = JsonConvert.DeserializeObject<EmailMessage>(message);
        if (emailMessage == null)
        {
            _logger.LogError("Invalid email message");
            return;
        }

        try
        {
            // Relay the message to the respective API (as the Azure Functions do)
            await _emailMessage.RelayMessageOrThrowAsync(emailMessage, cancellationToken);
            _logger.LogInformation("Email sent: {To} - ID: {Id}",
                emailMessage.Html.ToEmailAddress, emailMessage.MessageId);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error sending email");
            throw;
        }
    }
}
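
(A hedged sketch of how a handler like this might be wired into the worker host, assuming the standard .NET generic host; RabbitMqConsumerService is a hypothetical hosted service that consumes RabbitMQ and dispatches to the registered IMessageHandler implementations.)

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddSingleton<IMessageHandler, EmailMessageHandler>(); // the handler above
builder.Services.AddHostedService<RabbitMqConsumerService>();          // hypothetical RabbitMQ consumer
await builder.Build().RunAsync();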

Thanks in advance for your support! Looking forward to your insights.

Best regards,

@jezzsantos

Hi @HERCH

Looking at this now, but give me a little time.
There are a few things here to pull apart to answer all your questions properly. It would be better to separate them out.
I will try to address them separately.

@jezzsantos

Okay, I have left various comments in the PRs related to the PRs themselves.
TL;DR: I think we need to restructure some of the code in the PRs, and be more explicit about the intent of each one. There are a couple of things being conflated (environment and technology vendor) that need teasing out from each other.

@jezzsantos

@HERCH

Okay, onto the other questions:

how to deploy this in a real (production) environment.

You would build the ReleaseNoTestDeploy flavor of the build. That will set TESTINGONLY to false, and thus remove all non-production code. In that flavor, you will be injecting the OnPremise adapters. The last piece is configuring your now-injected adapters with the correct configuration. See .github/workflows/deploy-azure.yml for how that is done in Azure.

We have all this detailed in the docs/DEPLOYMENT.md document, specifically for an Azure environment. You can derive what you would need to do in an OnPremise environment.

Regardless of what runtime you have, you need 3 hosts running to make it all work.

In a cloud deployment (Azure, AWS, or Google Cloud), you need these equivalent processes running:

  1. ApiHost1 - is your API, hosted in some webserver (e.g. Kestrel, IIS, an AWS Lambda, or another webserver)
  2. WebsiteHost - is your BEFFE, hosted in some webserver (e.g. Kestrel, IIS, an AWS Lambda, or another webserver)
  3. AzureFunctions - these monitor the various Queues and MessageBus Topics, and when a new message arrives on any of them, these processes relay that message to the respective API of ApiHost1.

The TestingStubApiHost is ONLY for running in local debug/testing environments, and basically replaces all your 3rd party services. In production builds, you would be expecting to connect to real 3rd party services. This is all done by production configuration, which does not use localhost:5656 in the appsettings.json of ApiHost1, WebsiteHost and AzureFunctionsHost.
I would avoid confusing things by reusing localhost:5656 addresses at all!

If you are creating an OnPremise service to effectively replace what the AzureFunctions do (which I assume is what you are showing me in the code example), then that is definitely the way to go.
I don't know of a technology that can do that off the top of my head, but hopefully you can see a way forward there, even if you have to build it yourself.

We don't really have a prototype you can follow to achieve all that, but we do have implementations you can reuse for pulling messages off queues and topics, and calling the respective APIs. We have nothing yet to replace the triggers.

A good idea is to imagine how to trigger these things (with RabbitMQ) and then use the same kind of code that you see being used in the AzureFunctions.Api.WorkerHost.

But yeah, once you have that figured out, an OnPremise version of AzureFunctions.Api.WorkerHost would be a good solution to use, and to contribute to the project.
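
As a rough illustration only (not SaaStack code): such an OnPremise stand-in for the function triggers could be a BackgroundService that consumes a RabbitMQ queue and dispatches to the IMessageHandler implementations shown earlier. This sketch assumes RabbitMQ.Client 6.x and default implicit usings; the queue name and broker address are placeholders:

using System.Text;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class RabbitMqConsumerService : BackgroundService
{
    private readonly IEnumerable<IMessageHandler> _handlers;
    private readonly ILogger<RabbitMqConsumerService> _logger;

    public RabbitMqConsumerService(IEnumerable<IMessageHandler> handlers,
        ILogger<RabbitMqConsumerService> logger)
    {
        _handlers = handlers;
        _logger = logger;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed broker address
        var connection = factory.CreateConnection();
        var channel = connection.CreateModel();
        channel.QueueDeclare("emails", durable: true, exclusive: false, autoDelete: false);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += async (_, ea) =>
        {
            var message = Encoding.UTF8.GetString(ea.Body.ToArray());
            var handler = _handlers.FirstOrDefault(h => h.CanHandle(ea.RoutingKey));
            if (handler is null)
            {
                // No handler for this routing key: discard rather than redeliver forever
                channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: false);
                return;
            }

            try
            {
                await handler.HandleAsync(message, stoppingToken); // relays to the respective ApiHost1 API
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Relay to API failed; requeueing message");
                channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
            }
        };

        channel.BasicConsume("emails", autoAck: false, consumer);
        return Task.CompletedTask; // delivery callbacks run on RabbitMQ's own threads
    }
}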

@HERCH

Since I am contributing to the OnPremise version, I am providing what I believe is necessary for the migration. I have already created a new project called "OnPremiseWorkerService", which is responsible for message consumption and retransmission, based on the "AzureFunctions.Api.WorkerHost" project.

I hope I am on the right path. I consider myself very curious about this architecture. However, I am still learning a lot, and while my contribution may have a somewhat weak integration at this stage, I believe it is a good starting point. If I could get some guidance, that would be great, as I am using this process as a learning opportunity.

@HERCH

The integration for consuming Queues and Topics is already working correctly. However, the issue I am facing is that certain parameters are not being properly provided during the retransmission of notifications.

fail: OnPremises :-: RabbitMq Worker[0]
      Request: de644ea4283d41619e496e6a704432c4, (by xxx_maintenance00000000001)
      Queued message usages_d36e77f8761c48fb8b6914c9dbd9b0b2 of type Application.Persistence.Shared.ReadModels.AuditMessage failed delivery to API
      System.InvalidOperationException: 400: Bad Request, The audit message is missing a 'AuditCode'


In the first PR, you mentioned that the implementation of the new assembly didn’t make sense. However, after reviewing the second PR, you reconsidered and stated that the creation of an additional platform called HOSTEDONPREMISES now made sense.

What I did was clone the pre-existing Azure project and adjust the AzureServiceBusStore and AzureStorageAccountQueueStore files for implementation using RabbitMQ. Additionally, I modified AzureStorageAccountBlobStore so that storage is handled within a database (DB).

After deleting the two previous PRs, new doubts arose due to a comment where you suggested creating a dedicated adapter for RabbitMQ (which I have already implemented). Now, the question is how to handle SQL Server:

Should I reuse the Azure implementation (which I don't think is ideal)?
Or would it be better to create a dedicated SQL Server assembly that can be shared across both Azure and On-Premises?
From my perspective, it seems better to revert to the previous version, continue working with the HOSTEDONPREMISES platform, and create two new assemblies: RabbitMQ and SQL Server, which would be shared between Azure and On-Premises.

@HERCH

We have been working using the Rider IDE, so we should have already addressed many of the points that were mentioned in previous PR reviews.

I want to share that the implementation of the RabbitMQ adapter for OnPremise has been completed.


The data is now correctly stored locally (OnPremise).


I will be pushing the changes to the same PR that was recently submitted so that it can be reviewed within that context.

I would really appreciate your support in reviewing the integration and, most importantly, providing feedback and any necessary corrections to improve this implementation.

Looking forward to your comments and suggestions.

Thank you for your time and support!
