Comments:
very interesting, thanks
All the problems discussed in this video and more can be automatically solved with the Actor model architecture. It's astonishing that most people are not familiar with this architecture.
thanks for sharing, this is gold.
24 carat gold 🤘
If you move to queuing, have a fallback for your queues too. :)
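A minimal sketch of one way to do that, assuming a hypothetical broker_publish() client and a local spool file as the fallback:

```python
# Sketch of a fallback for the queue itself: if the broker is down,
# spool the message to local durable storage so it can be replayed later.
# broker_publish() is a stand-in for whatever client library is in use.
import json
import time
from pathlib import Path

SPOOL = Path("outbox.jsonl")  # hypothetical local spool file

def broker_publish(message: dict) -> None:
    """Stand-in for a real broker client call; assume it raises on failure."""
    raise ConnectionError("broker unavailable")  # simulate an outage

def publish_with_fallback(message: dict) -> None:
    try:
        broker_publish(message)
    except ConnectionError:
        # Fallback: persist locally, replay once the broker is healthy again.
        with SPOOL.open("a") as f:
            f.write(json.dumps({"ts": time.time(), "msg": message}) + "\n")

publish_with_fallback({"order_id": 42, "event": "payment_requested"})
```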
Would love to see a video on whatever tips you may have on gathering metrics and/or setting up alerts.
While I agree with the asynchronous, queue-based approach to payments in general, it may not work in every case: only a synchronous approach works when the consumer is required to enter an OTP from SMS/email before the payment is authorized, even when the payment is triggered from stored card details.
when you talk about having fallbacks with 3rd party services, could that be for example a cache? that would only work for queries, what about fallbacks for POST endpoints or something that creates data? thanks!
Thanks, this is really a helpful collection of tips. I think I will contact my colleague who wants to define standards for application monitoring and bring it together with my EDA plans and your tips for synchronous fallbacks.
For the competing consumers use case, does this work for messages whose order needs to be preserved? If not, how do you handle this case? I guess we could do some routing in the broker?
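One common way to preserve per-key ordering while still using competing consumers is to route by a partition key in the broker, so all messages for a given key land on the same queue. A minimal sketch, with hypothetical queue names:

```python
# Key-based routing: messages that share an ordering key always go to the
# same queue, so one consumer sees them in order, while unrelated keys still
# spread across competing consumers.
import hashlib

QUEUES = ["orders.0", "orders.1", "orders.2", "orders.3"]

def route(ordering_key: str) -> str:
    # Stable hash -> the same key always maps to the same queue/partition.
    digest = hashlib.sha256(ordering_key.encode()).hexdigest()
    return QUEUES[int(digest, 16) % len(QUEUES)]

# All events for customer-7 land on one queue, preserving their relative order.
print(route("customer-7"), route("customer-7"), route("customer-9"))
```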
The advice in this video is broadly consonant with the Reactive Manifesto, especially with the point that asynchronous message passing supports both elasticity and resilience (which together ensure responsiveness in both the happy and unhappy situations). The more of the system that is based on asynchronous message passing, the more resilient, elastic, and responsive it will tend to be.
When building a system around asynchronous message passing, the metrics around the queues allow backpressure: a component that's not able to keep up with the load effectively communicates that it's not keeping up (from queue metrics alone, this is measurable from the derivative of queue depth). The producers to those queues can slow themselves down or perform other traffic-shaping in response.
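A minimal sketch of that kind of producer-side backpressure, assuming the queue-depth samples come from the broker's metrics API (the sampling itself is left out here):

```python
# Producer-side throttling driven by queue-depth metrics: a positive discrete
# derivative (queue growing) means consumers are falling behind, so the
# producer backs off; a negative or zero one lets it speed back up.

def next_delay(prev_depth: int, depth: int, delay: float,
               base_delay: float = 0.01, max_delay: float = 1.0) -> float:
    growth = depth - prev_depth          # discrete derivative of queue depth
    if growth > 0:                       # backlog increasing -> slow down
        return min(delay * 2, max_delay)
    return max(delay / 2, base_delay)    # draining or stable -> speed up

# Example: queue-depth samples 100 -> 180 -> 150 -> 90
delay = 0.01
for prev, cur in [(100, 180), (180, 150), (150, 90)]:
    delay = next_delay(prev, cur, delay)
    print(f"depth {prev}->{cur}: producer delay now {delay:.3f}s")
```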
Solid info, thank you
Thanks for this valuable content you are producing.
Keep it up! :)
Very good video. Generally speaking, error handling can easily be harder to get right than the happy path.
For example:
Your timeout example of 500 ms might suddenly cause duplicate data downstream if the external system does eventually process your initial request and you retry it too eagerly. Is that a problem? Maybe.
It depends on the external system and how it works. Maybe it rejects duplicates, maybe it overwrites, or maybe it creates duplicates.
Maybe duplicates aren't a problem, maybe they are a critical issue.
As long as you give error scenarios as much thought as the actual work, you should be fine.
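One common way to make such a retry safe is to reuse an idempotency key across attempts, so the downstream system can discard duplicates even if the original request eventually succeeded after our timeout. A minimal sketch, with a hypothetical charge() endpoint and an in-memory dedup store:

```python
# Retry after a timeout becomes harmless when every attempt carries the SAME
# idempotency key and the downstream deduplicates on it.
import uuid

SEEN_KEYS: set[str] = set()   # downstream's dedup store (in-memory for the sketch)
attempt_count = 0             # used only to simulate a slow first response

def charge(amount_cents: int, idempotency_key: str) -> str:
    """Hypothetical downstream endpoint that deduplicates on the key."""
    global attempt_count
    attempt_count += 1
    if idempotency_key in SEEN_KEYS:
        return "duplicate ignored"            # retry is a no-op, not a double charge
    SEEN_KEYS.add(idempotency_key)
    if attempt_count == 1:
        raise TimeoutError("slow response")   # work was done, but we never heard back
    return "charged"

def charge_with_retry(amount_cents: int, attempts: int = 3) -> str:
    key = str(uuid.uuid4())                   # one key reused across ALL attempts
    for _ in range(attempts):
        try:
            return charge(amount_cents, idempotency_key=key)
        except TimeoutError:
            continue                          # retry with the SAME key, not a new one
    return "failed"

print(charge_with_retry(500))                 # -> "duplicate ignored", not a double charge
```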