Microservices, here I come

It’s been quite a while since I started gathering knowledge about the microservices architectural pattern that has been getting a lot of hype recently. After reading many articles and some books, such as Microservices in .NET Core, and talking with smart guys on the Devs PL Slack channel, I’ve eventually decided that the time has come to try to make microservices happen in a real-world project. This is the beginning of my journey into distributed programming and architecture, so please keep that in mind while reading this newbie’s post, and remember that I’d be more than happy to hear your opinions and feedback about the approach I’m about to present.

The aforementioned real-world project (or problem) is my open source project called Warden, which aims to simplify resource monitoring. Currently, we’re trying to build a whole new backend that will support a new web panel (and other types of clients, such as mobile, at some point in the future). So, what’s all the fuss about?
Imagine the following scenario: as a user, you can integrate an instance of your Warden (e.g. a simple monitoring application) with the API and send the so-called check results created by the watchers (responsible for monitoring particular resources such as websites, databases, files etc.). The end result is a dashboard (a neat-looking status page) with real-time updates about services or resources being currently (un)available, as well as some additional stuff like historical data.
As you may have already guessed, such data (a single check result) can be pushed to the API quite often, e.g. every second, by many different watchers. Even though the object itself isn’t that big (about 10 properties and 3–5 kB of JSON), if you multiply it by hundreds or thousands of users (one day, hopefully), the amount of data can get really huge, not to mention that the API should process all of these requests quite fast. That’s where microservices come into play.
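
To make it concrete, here’s a rough sketch of what a single check result payload might look like. The property names below are merely my assumptions for illustration, not the actual Warden contract:

    using System;

    // Hypothetical shape of a single check result payload (~10 properties).
    // The property names are illustrative, not the actual Warden contract.
    public class WardenCheckResultDto
    {
        public Guid WardenId { get; set; }
        public string WatcherName { get; set; }
        public string WatcherType { get; set; }   // e.g. "website", "mongodb"
        public bool IsValid { get; set; }
        public string Description { get; set; }
        public DateTime StartedAt { get; set; }
        public DateTime CompletedAt { get; set; }
        public TimeSpan ExecutionTime { get; set; }
    }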

Warden backend overview

Above, you can see a beautiful, corporate drawing created with Visio. Let me explain, step by step, what I’m trying to achieve (and how), and please leave a comment if you (dis)agree with my point of view, as I’m rather a newbie in this matter. So, let’s begin with my humble explanations.
There’s a user using the web panel’s dashboard to find out what’s actually happening with his system. The web interface makes use of the API, which is merely a gateway. Of course, there might be many instances of the API, and the same principle applies to the other services, as scalability is one of the most important features here. Let’s just assume that there’s already some intelligent router in place that can redirect the traffic and the user requests.

The user can send a command request to the API, which will be immediately sent to the service bus (RabbitMQ is used here) and processed by one of the services that subscribes to the particular command or event. On the other hand, the user may also send a query request (e.g. to fetch the historical data), and this one will actually be processed by the Storage service, about which I’ll talk later. Let’s get back to command processing: for example, there might be a CreateUser command handled by the User service, which will then raise a UserCreated event that might be handled by another service, and so on. The goal is to have no dependencies between the services – they shall only react to the incoming commands and events via the service bus and, of course, possess their own local storage (database) for saving and querying the important data. Here are the service responsibilities so far (a minimal code sketch of the command/event flow follows the list):

  • User – handle user registration, authentication, profiles and settings.
  • Organization – manage user organizations (a workspace to which both users and wardens may belong).
  • Real-time – push the incoming check results via sockets (SignalR) so that the web panel gets live updates.
  • Stats – update the statistical information, e.g. about the (in)valid check results or the system uptime/downtime, so that when a user asks for some stats, the data is already prepared.
  • Warden check – store the check results in a database like MongoDB or Azure DocumentDB and also validate whether the user hasn’t reached his monthly usage limit.
  • Spawn – integrate with the Warden Spawn project, which basically translates the JSON configuration into C# and creates a new Warden process (not important for now).
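
As promised, here’s a minimal sketch of the command/event flow described above. The ICommandHandler and IBusClient abstractions are my own illustrative placeholders, not the actual Warden types:

    using System;
    using System.Threading.Tasks;

    // Illustrative abstractions, not the actual Warden types.
    public interface ICommandHandler<in TCommand>
    {
        Task HandleAsync(TCommand command);
    }

    public interface IBusClient
    {
        Task PublishAsync<TMessage>(TMessage message);
        Task SubscribeAsync<TMessage>(Func<TMessage, Task> handle);
    }

    public class CreateUser
    {
        public string Email { get; set; }
    }

    public class UserCreated
    {
        public string Email { get; set; }
    }

    // The User service handles the command, stores the data in its own
    // local database and raises an event for any other interested service.
    public class CreateUserHandler : ICommandHandler<CreateUser>
    {
        private readonly IBusClient _bus;

        public CreateUserHandler(IBusClient bus)
        {
            _bus = bus;
        }

        public async Task HandleAsync(CreateUser command)
        {
            // Save the user to the local storage (omitted here for brevity).
            await _bus.PublishAsync(new UserCreated { Email = command.Email });
        }
    }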

The last remaining piece is the Storage service. Its only job is to provide a read-only storage for fast reads of the data via the API. It will also handle the events like UserCreated or WardenCheckResultSaved in order to save such objects in the most appropriate form (flattening them, for instance). The crucial part here is to select the data that will be most commonly fetched by the users; my guess is that things like organizations (workspaces), some recent stats and the check results from the last 24 hours will be sufficient to cover 90–95% of the scenarios. For the remaining part, it might take a little longer to fetch the data, as the Storage service will first need to make an HTTP request to a service like the Stats service if the user, for example, would like to see what happened 7 days ago.
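
For instance, the Storage service could project such events onto flat, read-optimized documents, roughly like this (again, just a sketch reusing the illustrative types from the previous snippet):

    using System.Threading.Tasks;

    // Illustrative event handler abstraction, not an actual Warden type.
    public interface IEventHandler<in TEvent>
    {
        Task HandleAsync(TEvent @event);
    }

    // A flat, denormalized document kept solely for fast reads.
    public class UserDocument
    {
        public string Email { get; set; }
    }

    public class UserCreatedProjection : IEventHandler<UserCreated>
    {
        public async Task HandleAsync(UserCreated @event)
        {
            // Flatten the event into a document and upsert it into the
            // read-only store (e.g. MongoDB) that serves the API queries.
            var document = new UserDocument { Email = @event.Email };
            await SaveAsync(document);
        }

        // The actual storage call is omitted here.
        private Task SaveAsync(UserDocument document) => Task.CompletedTask;
    }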

In terms of storing the data, for the Storage service it would be more like partitioning (all instances of this service need to have the same data, which means all of them will process the same events and also sync the data from time to time), while for the remaining services performing the actual business logic, the data storage might look more like sharding – e.g. the first instance of the User service processes the incoming CreateUser command and stores the account info in its own local database. However, after giving it some more thought, such a service may produce the UserCreated event that could be handled by the remaining instances of the User service, so at least in theory, all of them will have the same data.

It looks like I’m going to make heavy use of the CQRS pattern, and I’m actually quite happy about that outcome. Speaking of the code and technologies – we’re building everything on top of .NET Core (so it runs cross-platform), using NancyFX for the API, OWIN with Kestrel for hosting and the RabbitMQ service bus with the RawRabbit client, along with some other technologies. Feel free to check the repository, which is currently under heavy development (and refactoring). Here’s a sample of the simple microservice wrapper that I’ve created on my own:
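
The embedded sample itself isn’t reproduced here, but conceptually the wrapper boils down to something like the sketch below – my own rough reconstruction built on the illustrative IBusClient abstraction from earlier, not the exact RawRabbit API:

    using System;
    using System.Threading.Tasks;

    // A rough reconstruction of the idea of a microservice wrapper: a thin
    // host that only wires message handlers to the service bus. This sketch
    // uses the illustrative IBusClient, not the actual RawRabbit client.
    public class MicroService
    {
        private readonly IBusClient _bus;

        public MicroService(IBusClient bus)
        {
            _bus = bus;
        }

        public Task SubscribeToCommandAsync<TCommand>(Func<TCommand, Task> handle)
            => _bus.SubscribeAsync(handle);

        public Task SubscribeToEventAsync<TEvent>(Func<TEvent, Task> handle)
            => _bus.SubscribeAsync(handle);
    }

    // Usage: a service instance only reacts to messages coming from the bus, e.g.
    // var service = new MicroService(bus);
    // await service.SubscribeToCommandAsync<CreateUser>(cmd => handler.HandleAsync(cmd));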

You may wonder why the heck this guy already thinks about scaling multiple instances of services or creating a data storage for fast reads. There are two main factors behind this decision. The first one is that I really want to try to build a system that won’t be difficult to scale and that can handle big traffic almost from its inception. I’d rather spend twice as much time now and build an infrastructure consisting of a set of separate, loosely coupled microservices than a typical monolithic application that will eventually have to be cut into pieces (services) once the performance issues start to occur.
And the second factor, equally important in my opinion, is that I want to learn something new. The paradigm shift of moving into the cloud services has already started, and as a software engineer I have to improve my skills all the time. Plus, it’s a pattern that I’ll probably use in the other projects I’m working on, like the sports & training app Fortitudo and the one that we’ve just started on GitHub within our Noordwind company.

I hope that my first architectural overview makes at least some sense – I’m looking forward to your comments and remarks, and hopefully I’ll be able to present the achieved goals later this year.

21 Comments on “Microservices, here I come”

  1. Pingback: Microservices, here I come - How to Code .NET

    1. Piotr Gankiewicz

      Thanks! I really want to keep the services separate from each other (i.e. no direct HTTP requests between them or the like). The only way to communicate will be through the events produced by the other services, so you are correct :).

      1. MK

        Hello, about communication, how would you handle a user story like this:
        “As a user who forgot his password to the system, I want to perform the forgotten-password procedure and get an email with a link to set a new password.”

        Suppose we have two microservices – ‘User’ (uses LDAP or our DB) and ‘Email’ (wraps our email server). The business decision is that all microservices have their own REST API, so third-party developers can easily use it; we also have RabbitMQ as the event bus.
        There are two ways to do it:

        1. Do all the logic in the ‘User’ service and just raise a ‘UserForgotPassword’ event. This event will be handled by the ‘Email’ service, which sends the actual email.
        Good parts: decoupling.
        Bad parts: it violates the open-closed principle – whenever we raise a new (email-related) event, we have to change the ‘Email’ service. One user story is scattered over several microservices – a future maintenance hell.

        2. Do all the logic in the ‘User’ service and raise a ‘SendEmail’ command with all the necessary values (target email, body, subject, etc.). The ‘Email’ service will be straightforward and just handle that command.
        Good parts: it follows the open-closed principle.
        Bad parts: it could end up as an ESB, and we just want lightweight microservices. We have to send all the values in the command, which could contain some private information – so we would have to add some authorization for commands (more complexity; we already have authorization for the REST API).

        Right now we have ended up with direct calls between our services and simple events for third-party components. In the case of this story: the ‘User’ service does all the logic and calls a REST endpoint on the ‘Email’ service (an authorized call over HTTPS). It also raises a simple ‘UserForgotPassword’ event with just a user id. Anyone can handle this event; if they need more information, they have to call an endpoint on the ‘User’ service with that user id (and they must be authorized).

        Does anybody have an idea how to do this in a better way?

        1. Piotr Gankiewicz

          Hi,
          I’d probably go with something similar to the second solution. Usually, for sending the emails, there’s already some 3rd party service involved, like SendGrid, MailChimp etc. – in the end, it all boils down to making a request to their API and providing the template id along with the parameters that will be put into the body of the message template.
          I’d just create a ResetUserPassword command that would be handled by the UserService, which would generate e.g. a specialized token and then produce e.g. a ResetUserPasswordTokenGenerated event. Next, such an event could be handled either by the EmailService directly or by some other service like an EmailTemplateService that would consume the event, look up the template id and eventually produce its own command like SendEmail with the proper template id + message body, which would be consumed by the EmailService.
          I’m not sure if there’s really an ideal solution, as you always need to somehow provide a mapping between the event/command type and a message template.

        2. Oldrich Kepka

          The second one is the correct one if you are planning to use the email service for any emailing other than the forgotten-password case, because otherwise, any time any service needs to send an email, you will have to touch the ‘Email’ service.

  2. Pingback: “We feel free because we lack the very link to articulate our unfreedom.” - Magnus Udbjørg

  3. Pingback: .NET on Linux – bye, Windows 10. | Piotr Gankiewicz

  4. Pingback: .NET on Linux – bye, Windows 10 – Starter

  5. Eric

    This looks good. I’m taking a similar approach with a mobile application I’m building. In my case I have a private network on the back end that is super fast, and it connects to other virtual machines in the data center. My web server and database server currently live on the same vm, but all of the commands that cause updates to the database are also published to the bus. The other database vms are subscribers and by responding to events their databases are kept up to date. I did this so I would always have local backups of the data in case something bad happens to the main Internet-facing server.

    By the way, what do you ride? I’m on a Kawasaki Ninja 650. 🙂 It helps to just get out and ride if I get stuck on a problem.

    Thanks for the article! Cheers.

    1. Piotr Gankiewicz

      Yeah, for an early version / testing purposes, hosting everything on the same VM makes sense – I’m doing pretty much the same thing :). I’ve managed to put everything into Docker containers (via Docker Hub and Cloud for Continuous Deployment), so it’s getting even better.

      I ride a KTM Duke 690 – love it so far (bought it 3 years ago), but I’m already thinking about getting an MV Agusta Brutale 800, maybe next year :). BTW, you can look me up (and my bike) on Instagram (@spetzu).

      Cheers!

  6. Pingback: 在Linux开发.NET 拜拜了Win10 - 代码哥

  7. Pingback: 在Linux开发.NET 拜拜了Win10 | 同创卓越

  8. Pingback: .NET Core + RabbitMQ = RawRabbit | Piotr Gankiewicz

  9. Pingback: So I’ve been doing microservices | Piotr Gankiewicz

  10. Pingback: 微服务之旅的经验分享 - 莹莹之色

  11. Pingback: 微服务之旅的经验分享 - 范东

  12. Pingback: Warden vNext | Piotr Gankiewicz

  13. Pingback: 微服务之旅的经验分享 - 爱好博客

  14. Pingback: Linux开发.net – Framework应用框架
