@sachiemhetre

Hi Wade,

Thank you for the informative video. I have a few questions:

1. Why is this pattern necessary when we already have CDC and DB connectors?
2. If I'm manually handling the Kafka publishing by reading data from the outbox table and publishing it to Kafka, how should I manage the scenario where the Kafka publish is successful but deleting the entry from the outbox table fails?
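
On the second question, a common approach is to delete the outbox row only after the publish is confirmed, and to accept redelivery when the delete fails. A minimal sketch of that relay loop in Python, assuming confluent-kafka, a local broker, and a hypothetical sqlite `outbox` table with `event_id`, `topic`, and `payload` columns:

```python
import sqlite3

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker
db = sqlite3.connect("outbox.db")

def relay_once():
    # read a batch of pending events from the outbox table
    rows = db.execute(
        "SELECT event_id, topic, payload FROM outbox ORDER BY event_id LIMIT 100"
    ).fetchall()
    for event_id, topic, payload in rows:
        # key by event id so consumers can de-duplicate redeliveries
        producer.produce(topic, key=str(event_id), value=payload)
    producer.flush()  # block until the broker acknowledges every publish
    for event_id, _, _ in rows:
        # delete only after publishing is confirmed; if the delete fails,
        # the event is simply published again next cycle (at-least-once)
        db.execute("DELETE FROM outbox WHERE event_id = ?", (event_id,))
    db.commit()
```

If the delete or the commit fails, nothing is lost: the same events go out again on the next pass, which is why consumers of the topic need to be idempotent.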

@mehrdadk.6816

I've been using the Outbox pattern, and it works nicely.
I also used Kafka transactions, and it works even better. Kafka transactions can be linked to database transactions.
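
For reference, a minimal sketch of a transactional producer in Python with confluent-kafka (the broker address, topic names, and transactional id are assumptions). The transaction makes the Kafka writes atomic as a group; coordinating it with a separate database commit is still up to the application:

```python
from confluent_kafka import KafkaException, Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",   # assumed broker
    "transactional.id": "order-service-1",   # stable id required for transactions
})
producer.init_transactions()

def publish_atomically(events):
    # events: iterable of (topic, key, value); all are committed or none are
    producer.begin_transaction()
    try:
        for topic, key, value in events:
            producer.produce(topic, key=key, value=value)
        producer.commit_transaction()
    except KafkaException:
        producer.abort_transaction()
        raise
```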

@saikrishna8429

How does the delivery process update the outbox table after publishing an event to Kafka? I want to understand what happens if the delivery process fails to send the event to Kafka (as publishing the event happens asynchronously) but still deletes the event from the outbox table.

@nithinnambiar

I prefer to keep it simple. In the scenario where the application receives the message from the broker, processes it, and then has to write to both the DB and Kafka, the app wouldn't return until it successfully does both writes. If the DB commit succeeds and the Kafka publish fails, the message would be processed again, but it wouldn't write to the DB since an entry already exists; it would then proceed with the Kafka publish.
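
A sketch of that idempotent flow in Python, using sqlite for brevity (the `orders` table, its unique `order_id` key, and the `publish_to_kafka` helper are all hypothetical):

```python
import sqlite3

db = sqlite3.connect("app.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, payload TEXT)"
)

def handle_message(order_id, payload, publish_to_kafka):
    # INSERT OR IGNORE is a no-op when the row already exists, so a
    # redelivered message doesn't write to the DB twice
    db.execute(
        "INSERT OR IGNORE INTO orders (order_id, payload) VALUES (?, ?)",
        (order_id, payload),
    )
    db.commit()
    # if this publish fails, the whole handler runs again: the insert is
    # skipped on the retry and only the publish is attempted again
    publish_to_kafka(order_id, payload)
```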

@zhoulingyu

Great series. I have some experience with streaming. I find this very easy to understand.

@kevinding0218

Thanks a lot for the explanation of the Outbox Pattern; I think I grasp the workflow, but I'm trying to get a better understanding of what benefit it brings.

Compared to having the original microservice update the state in the database first and, only upon success, publish the event with a retry mechanism, I sense the benefits of using the Outbox Pattern might be:

1) Unblocking the original service, rather than leaving it hanging to perform retries if producing the event fails.
2) Recording the event somewhere durable so it isn't lost if the original service goes down after only persisting the state in the DB.
3) Isolating event production with at-least-once delivery, while accepting eventual consistency.

Is there anything I missed about using the Outbox Pattern?

@schallereqo

Hi Wade, Thank you for the great video. This new series of videos is amazing. I am a customer of Confluent and use the Event Router product to implement the Outbox pattern. The issue is that this product feels incomplete compared to some of your other well-established connectors. We have made a few attempts to contact Confluent about fixing minor things and adding Protobuf support to this connector, but with no luck. I'm glad you are aware of the Outbox pattern, but I wish you gave such a product more emphasis to help your customers make use of this solution.

@paulcalinovici8808

I am not really sure that I understand how the data is stored in the outbox table. Is it some kind of event store from the CQRS pattern, where we store events in a serialized format (JSON, for example)?
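
Typically it's just an ordinary table with a serialized payload column, written in the same transaction as the state change; JSON is a common choice. A minimal sketch in Python with sqlite (the column set here is one common shape, not a standard):

```python
import json
import sqlite3
import uuid

db = sqlite3.connect("app.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, payload TEXT)")
db.execute("""CREATE TABLE IF NOT EXISTS outbox (
    event_id   TEXT PRIMARY KEY,  -- unique id, lets consumers de-duplicate
    topic      TEXT NOT NULL,     -- destination Kafka topic
    event_type TEXT NOT NULL,     -- e.g. OrderPlaced
    payload    TEXT NOT NULL      -- the event, serialized as JSON
)""")

def place_order(order):
    # one transaction: the state change and the event commit or roll back together
    with db:
        db.execute(
            "INSERT INTO orders (order_id, payload) VALUES (?, ?)",
            (order["id"], json.dumps(order)),
        )
        db.execute(
            "INSERT INTO outbox (event_id, topic, event_type, payload) VALUES (?, ?, ?, ?)",
            (str(uuid.uuid4()), "orders", "OrderPlaced", json.dumps(order)),
        )
```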

@CodingHaribo

This sounds great! My problem is I have microservices that use both Postgres and Mongo, depending on the kind of state being changed. Of course, it rarely writes to both on the same command, but it can happen. So in those cases, I don't feel like I can rely on a transaction inside one database, or an outbox that lives in one of those databases. I need an external transaction, and that's where it gets a little messier.

@chengchen9032

Hi Wade, your video is very well made! I have been implementing event-driven architectures for some years, and the way I dealt with the dual-write problem has always been event sourcing + the listen-to-yourself pattern. However, it turns out that whenever we deliver messages through Kafka, there can be an unpredictable delay. In scenarios like user signup and changes of information, that could be quite a problem, since users might panic if they don't see the change applied immediately, so I feel like the outbox pattern could be a solution for this. The reason I intentionally avoided the outbox pattern is that I am worried about periodically scanning the database for events while simultaneously modifying them; I had some bad experiences with situations like this that eventually led to deadlocks, and the whole DB just shut down.
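
On the scanning worry: one common mitigation (not from the video) is to have each poller claim rows with `FOR UPDATE SKIP LOCKED`, so concurrent pollers never block on, or deadlock over, the same rows. A sketch with psycopg2 against Postgres; the connection string, table, and publish callback are assumptions:

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed connection details

def relay_batch(publish):
    # "with conn": one transaction that commits on success, rolls back on error
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT event_id, topic, payload FROM outbox
            ORDER BY event_id
            LIMIT 100
            FOR UPDATE SKIP LOCKED
            """
        )
        rows = cur.fetchall()
        for event_id, topic, payload in rows:
            publish(topic, payload)
        # rows another worker has locked were skipped, not waited for,
        # so two pollers never contend over the same events
        cur.execute(
            "DELETE FROM outbox WHERE event_id = ANY(%s)",
            ([r[0] for r in rows],),
        )
```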

@matveyivanov1310

Great explanation, thank you! But I might be missing something... let's say you have a table with events. When you read events from it and send them to Kafka, you want to mark each event as sent. But isn't that the dual-write problem again? I mean, there is clearly a possibility that an event is marked as sent but the send to Kafka went wrong. It still looks like a safe way to send events, but I don't see a 100% guarantee of sending all events, though.

@asifanwar7942

Would a suitable solution to the dual-write problem be setting appropriate unique constraints on the DB and then either creating, updating, or doing nothing to a record?

E.g. use whether the record already exists in the DB as a way to de-duplicate when writing to the database.

@patronovski

How different is KIP-892 from this?

@dinhthaile8648

Hi Wade. Thanks for the great series. I have a question.
What if the event has been emitted, but the deletion from the outbox table fails? Once again, we still want it to work in a transactional fashion, and we can't ensure consistency.

@deependrachansoliya106

Hi Wade, could the Outbox pattern cause performance issues at high volume? Let's say a system at peak hours can handle 16K QPS, but after implementing the outbox, database performance degrades.

What's better to use in my case, a JDBC connector or CDC like Debezium?

@quanta9236

Hi Wade, thank you for the great video. I have a question: what is the difference between CDC and outbox? I think they are quite similar with respect to knowing what was changed in a database.

@gabrielpiazzalunga5540

Hello, I currently have an implementation of the transactional outbox pattern, but I'm considering moving to an in-memory outbox (like MassTransit's). What do you think about it?

@Pentatonic_Hardcore

Didn't understand, will try to find another video, thanks.

@combatLaCarie

I'm a bit confused. If my code writes to the DB and then to Kafka, it's already the case that I wouldn't write to Kafka if the DB write fails.

```
# write the state change in a database transaction first
success = write_to_db_with_transaction()
if success:
    # publish only after the database commit succeeded
    write_to_kafka()
```

Maybe in this video we're assuming the db write is async?