Comments:
This is perfect timing. I always thought monoliths were inherently bad until I came across modular monoliths, where you separate functionality into modules and use event sourcing, message queues, event buses, etc. for communication between them. The result is modules that are independent and loosely coupled from the others, which makes migrating a specific module to a microservice much less of a nightmare when performance becomes an issue, since many of the patterns used come from microservice architecture.
CORRECT! In fact, you can often achieve what people intend to achieve with a microservices architecture more successfully by using thin slicing, CQRS, and event sourcing in a monolith.
Great analysis
I find that microservices are more about the journey than the destination. Too often we try to jump straight to the destination, and we lose what the journey teaches us about our design and domain. Great video as always!
One question: if some service inside a monolith needs data (pure querying) from another service, how do you deal with that? Does it mean that your boundaries are not correct?
My view is that the difference between a monolith and microservices is that the physical boundaries place a clear restriction on what developers can and cannot do, especially when working in large teams.
Yeah, in the last 5 years I've worked with monoliths and I've taken ideas from microservices to work with them. Monoliths are great for these smallish teams/companies and with internal decoupling you can go a long way. Introducing microservices in these scenarios only means waaaaay more work for deployment and troubleshooting. The sysadmin/devops guy will hate you even more.
Loved it. Derek, can you post the link to Greg's video? Thanks!
My experience with a monolith is that boundaries have not been an issue, but rather the enormous codebase with a long history. One repository, one database, and over 5 million lines of code. The methods here don't apply, since subdomains are already separated in most of the layers. There are huge basic data dependencies that are too costly to split, and of course that part is the oldest. Overall, development and deployment are manageable; everything just takes more time and resources. Some of us can't handle the slowness, so devs get tired. The refactoring ship has sailed, but fortunately the software still makes money. Which is usually what really matters.
Thank you, great explanation
I think people believe microservices solve the problem because they force the boundaries. If every dev who touched the code were disciplined enough to stick to the logical boundaries, then monoliths would never have been a problem in the first place.
Well said. Although I would also add that a "traditional" message broker triggering changes asynchronously isn't necessarily required. As with anything, it also has its own complexities, pros, and cons. Depending on the requirements of a particular system, sometimes a simple in-process "broker" is sufficient; MediatR's notifications are a good example. Yes, it means more logic executed synchronously within the same execution scope, but again, it all depends, and in certain scenarios such an approach may be the best solution.
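A minimal Python sketch of such an in-process "broker" (MediatR itself is a .NET library; the bus, event, and handler names below are illustrative assumptions, not from the video):

```python
from collections import defaultdict
from dataclasses import dataclass

class EventBus:
    """Minimal in-process event bus: one module publishes events,
    other modules subscribe, and no module references another directly."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        # Handlers run synchronously, in the same scope as the publisher,
        # which is exactly the trade-off the comment describes.
        for handler in self._handlers[type(event)]:
            handler(event)

@dataclass
class OrderPlaced:  # event published by a hypothetical Sales module
    order_id: str

received = []
bus = EventBus()
# A hypothetical Shipping module subscribes; Sales never calls it directly.
bus.subscribe(OrderPlaced, lambda e: received.append(e.order_id))
bus.publish(OrderPlaced(order_id="o-1"))
print(received)  # ['o-1']
```

Swapping this for an out-of-process broker later only changes the `publish` transport, not the module boundaries.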
Highly available and modular monoliths are the real deal in terms of design.
Long live the modulith!
The devil is always in the details. You have many videos about dividing an application into bounded contexts, and that's fine, it's a good thing. But... I am always afraid that what I have divided, I will have to put back together in order to display it on one screen. Unfortunately, clients often want to display the (divided) data all at once. No problem when you want to display the details of one thing. But what if the client wants it in a pageable, sortable list with many different filters? Can you tell us more about solutions for that problem? I know that I can denormalize and duplicate that data, but I have zillions of records; what then?
Great video on the modularity topic!
I especially like how the code areas of each module are divided into Interface, Implementation, and Tests, which is fundamental to achieving modularity: a module needs a clear interface through which it is accessed, but it also needs to be testable in isolation from other modules.
This is what is called "package by feature" source code organisation. It is considered good practice for preparing the ground for breaking the monolith into microservices. However, I think there are a couple of advantages to be gained from actually moving to microservices:
- You can scale them horizontally
- Each microservice can be developed with its own optimal technology, providing a low-risk opportunity to try new technology
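The Interface/Implementation/Tests split described above can be sketched in a few lines. The module and service names below are illustrative assumptions, not from the video:

```python
from abc import ABC, abstractmethod

# --- Interface: the only thing other modules may depend on. ---
class BillingService(ABC):
    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> bool: ...

# --- Implementation: internal to the hypothetical "billing" module. ---
class InMemoryBillingService(BillingService):
    def __init__(self):
        self.charges = []

    def charge(self, customer_id, amount_cents):
        self.charges.append((customer_id, amount_cents))
        return True

# --- Tests: exercise the module in isolation through its interface. ---
svc: BillingService = InMemoryBillingService()
assert svc.charge("c-1", 500)
```

In a package-by-feature layout each of these three parts would sit in its own folder under the feature, so the dependency direction (everyone depends on the interface) is visible in the source tree.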
This one is so important! I've managed to scale an app I work on to thousands of requests/sec using this exact architecture. Add more entry points to the monolith when scalability is needed (web servers, web sockets, queues, cron workers, etc.), and scale the entry points horizontally behind a load balancer.
Scale the single database node up vertically with more CPU cores. Easy peasy.
The way I see it, if you have built these modules so that they communicate asynchronously and don't share a database, using eventual consistency instead of simple ACID transactions, you have solved 90% of the challenges of microservices. I think the single unit of deployment is definitely an advantage for small teams, but on the other hand, splitting these modules into microservices gives you a physical boundary that matches the logical boundary, making it a lot harder to break module boundaries. Personally, I like this separation and think that if you have gone this far in creating async communication between modules, you are better off making your boundaries more rigid, even at the cost of having to deploy more units.
Just weeks ago I was wondering: what about deploying so-called microservices on the same machine but in different processes, using inter-process communication instead of the network? I would still have boundaries and could easily refactor to out-of-process later. Deploying to the same machine in multiple processes, I could still redeploy a single service just by stopping and restarting its process.
Hi Derek. Your videos are excellent. No bloat and right to the point. Can I ask why the emphasis on an out-of-process message broker? Wouldn't an easier option be to use something like MediatR?
We had something like this when we were transitioning to microservices many years ago. There were only a few problems with it that I think some clever CI/CD could have solved. The big problem we had was developing a big feature while also making bug fixes. The feature could take a few weeks to work on, but a bug fix could take a day. Making sure the feature (which was deployed to staging) didn't get deployed to production when the bug fix was deployed to production was a pain.
BTW: there are ways to automate the dependency check in CI in order to ensure that the boundaries are respected.
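As a sketch of such an automated check, a CI step can parse source files and fail on imports that cross module boundaries. A minimal Python version using the standard `ast` module (the module names and the rule itself are illustrative assumptions):

```python
import ast

# Hypothetical rule: code in the "sales" module must not import from "warehouse".
FORBIDDEN = {"sales": {"warehouse"}}

def violations(module: str, source: str):
    """Return dotted names that `module` imports but is not allowed to."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split(".")[0]
            if root in FORBIDDEN.get(module, set()):
                bad.append(node.module)
        elif isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in FORBIDDEN.get(module, set()):
                    bad.append(alias.name)
    return bad

print(violations("sales", "from warehouse.stock import Level"))  # ['warehouse.stock']
```

A CI job would run this over every file in each module and fail the build on any non-empty result; dedicated tools (e.g. architecture test libraries) do the same thing with more polish.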
Hi Derek,
Thanks for this one. Quick question: how would you share/access/reference contracts from one module in another if needed? (Say you have an admin part where you can add/disable countries, and in another module you need the info about a country. Would you reference it via an (aggregate/entity) id and use a repository to fetch it from a command handler to construct a valid domain model, use a materialized view, or some other technique?)
Hi Derek,
Thank you for your wonderful work!
Does monolith mean mono-repo?
If yes, I'm interested in your opinion about releasing and delivering a monolith. Let's imagine we have a perfectly decoupled monolith with two business domains (sales and warehouse).
Developers of both domains (two separate teams) push code to the mono-repo. After some sprints, the "sales" team needs to release, test, and deploy to production because of business requirements, but the "warehouse" team cannot release because of a big issue on their side.
How do you manage this kind of situation?
One of the pros of microservices is that they have their own lifecycle. IMHO, it's more about team organization than technical architecture (but it implies technical complexity).
Great video! It's so true that logical separation and clean interfaces are the important thing, regardless of whether communication happens in a single process or between multiple processes.
Here is an idea for the next video: "Querying data between multiple microservices". Sometimes you need to query data that can't be queried within one boundary and requires knowledge of other domains. There are multiple ways to solve this issue. I would like to hear what you think on this topic and what the best practices are, with all their advantages and disadvantages.
Assuming that I do not want to go with microservices later, what do I gain by making the messaging async?
I feel that I could leave the messaging sync and still have database transactions within the process. With async messaging, I lose the ability to use database transactions.
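One well-known way to keep transactional guarantees while messaging asynchronously is the transactional outbox pattern: the state change and the outgoing event are written in the same database transaction, and a background relay publishes the event afterwards. A minimal sketch using SQLite (all table, column, and event names are illustrative assumptions):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY,"
             " payload TEXT, sent INTEGER DEFAULT 0)")

def place_order(order_id: str):
    # State change and outgoing event are committed atomically:
    # either both rows exist or neither does.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?)", (order_id,))
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"type": "OrderPlaced", "order_id": order_id}),))

def relay(publish):
    # A background relay publishes unsent events and marks them sent;
    # delivery is at-least-once, so consumers should be idempotent.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))
        conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    conn.commit()

place_order("o-1")
sent = []
relay(sent.append)
print(sent)  # [{'type': 'OrderPlaced', 'order_id': 'o-1'}]
```

So the trade-off isn't strictly "async means no transactions"; it means moving from one ACID transaction spanning modules to local transactions plus reliable event delivery.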
One nice thing about decoupled monoliths is that they're ready to become distributed systems if needed: if the architecture is already asynchronous with well-defined boundaries, there is a much clearer path from a module to a microservice than with a heavily integrated monolith. Instead of assuming everything needs to scale, you can address those needs where and when they arise. For most greenfield projects, this is probably the only sensible approach. 🙂👍
Is the correspondence between a logical boundary (bounded context) and an app module one-to-one, or not?
Thank you!
Thanks for this great content. When transitioning to a microservice, for instance, how would you transition the schema of a particular boundary to a separate DB without losing previous data of its schema in the shared DB? Don't know if the question is understood 🤔
As soon as you need high scalability (IoT and similar things), your monolith will fail you epically.
Stateless microservices can be scaled at a very fine-grained level, and if you use MQs and other techniques to decouple everything, you can scale almost indefinitely.
How do you move things to be event-driven under the constraint that they're in a request-response context?
I prefer components with private data and interaction through shared services. A centralized context manages those shared services.
1. Centralized context manages shared services
2. Component interaction occurs through shared services
3. Components do not expose their private data to each other, except through shared services
4. Component-component interaction is done via injected services, not directly.
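The four rules above can be sketched as follows; the context, component, and service names are illustrative assumptions:

```python
class AuditLog:
    """A shared service, owned by the centralized context (rule 1)."""
    def __init__(self):
        self.entries = []

    def record(self, msg):
        self.entries.append(msg)

class InventoryComponent:
    """A component with private data; it interacts only through the
    shared service injected into it (rules 2-4)."""
    def __init__(self, audit: AuditLog):
        self._stock = {}   # private data, never exposed directly (rule 3)
        self._audit = audit

    def receive(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty
        self._audit.record(f"received {qty} x {sku}")

class Context:
    """Centralized context that owns shared services and wires
    components together via injection, so components never
    reference each other directly."""
    def __init__(self):
        self.audit = AuditLog()
        self.inventory = InventoryComponent(self.audit)

ctx = Context()
ctx.inventory.receive("sku-1", 3)
print(ctx.audit.entries)  # ['received 3 x sku-1']
```

Other components would observe inventory activity through the shared `AuditLog` (or similar services), never by reading `InventoryComponent._stock`.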