4 Comments
Russell Brown

I loved reading this. Thanks for publishing it. I have some observations, but overall this is exactly the kind of thing I've long been wanting to read more of: real-world application of CRDT techniques.

First up, I want to point out that this isn't really a direct like-for-like comparison. That doesn't mean it isn't a brilliant take on some of the ideas from https://josephg.com/blog/crdts-go-brrr/, and I love declarative merge policies (but what if policies conflict?!)

How is it not directly comparable? You have a schema. You don't support structural removes or path edits. Nor do you have the same semantics as the Ditto map (add-wins/remove-wins structural, with recorded type conflicts?)

What really comes out of this for me is something I have been saying for years: CRDT _techniques_ are valuable, but implementation is *very* use-case specific. The Ditto map has a great deal more flexibility, but the cost of that is not zero. There are other ways I am sure Ditto could address performance too; "container CRDTs" are not per se the problem. One method worth looking into further is decomposed CRDTs that match storage (see e.g. Bigsets).

Despite the non-comparable comparison, I think this is valuable work, and I love to see actual industry posts on applying CRDTs in production.

I'd love to see a follow up about schema changes and maybe something about how you verified correctness.

The takeaway for me is don't pay for flexibility you don't need, which is often what you do if you use a general purpose solution to a problem in a specific domain. A solution tailor made for the domain will always win.

Adam Share

Thanks for the thoughtful feedback, Russell!

On policy conflicts: Great callout. We can't fully constrain policies within protobuf definitions—you could theoretically add a counter to a non-numeric field or specify conflicting strategies. However, since we have the schema, we build and validate resolvers at runtime and throw exceptions for invalid combinations before writing data. This is far better than container-based solutions where devices define merge strategies at write-time, allowing two devices to write conflicting strategies to the same field.
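To make the runtime validation idea concrete, here is a minimal sketch in Python of checking declared merge policies against field types when a resolver is built, so invalid combinations fail before any data is written. All names here (`build_resolver`, the strategy strings, the type table) are hypothetical illustrations, not the actual library API.

```python
# Illustrative sketch: reject invalid merge-policy/field-type
# combinations at resolver build time, before any write happens.
NUMERIC_TYPES = {"int32", "int64", "uint32", "uint64", "float", "double"}

def build_resolver(schema: dict, policies: dict) -> dict:
    """schema maps field name -> protobuf type name;
    policies maps field name -> declared merge strategy."""
    resolver = {}
    for field, strategy in policies.items():
        field_type = schema.get(field)
        if field_type is None:
            raise ValueError(f"policy declared for unknown field: {field}")
        # A counter strategy only makes sense on numeric fields.
        if strategy == "counter" and field_type not in NUMERIC_TYPES:
            raise ValueError(
                f"counter policy on non-numeric field {field} ({field_type})")
        resolver[field] = strategy
    return resolver
```

Because validation happens where the schema lives rather than at each write site, two devices sharing the schema cannot end up with conflicting strategies for the same field.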

On direct comparability: Fair point—the blog post simplifies for readability. For context: we built an abstraction layer over both our CRDT and Ditto's SDK, enabling A/B testing between implementations. Part of that abstraction necessarily limited operations like direct path edits, which affects the comparison.

On schema: Agreed. Ditto's query language and relational database presentation effectively imply schema, even with runtime flexibility. Most users with poorly-defined schemas would encounter conflicts anyway, defeating the purpose of conflict-free merging.

On path edits: We technically support them—our delta protocol uses path-based changes over the wire. However, our architecture (single protobuf blob + LRU cache) makes whole-object manipulation far more efficient. Unmodified fields use O(1) reference comparison, so even single-field changes merge efficiently. You could argue path edits expose implementation inefficiencies to users—forcing granular edits because whole-object merges perform poorly leaks database concerns into application logic.
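The reference-comparison fast path can be illustrated with a small Python sketch of a three-way field merge. The function name and shape are hypothetical; the point is only that identity checks let unmodified fields skip the real merge logic in O(1).

```python
# Illustrative sketch: a per-field three-way merge where unmodified
# fields are detected by reference identity and skipped cheaply.
def merge_field(base, left, right, merge_fn):
    """base is the common ancestor value; merge_fn handles true conflicts."""
    if left is right:      # same reference: field untouched on both sides
        return left
    if left is base:       # only the right side changed
        return right
    if right is base:      # only the left side changed
        return left
    return merge_fn(left, right)  # both changed: run real merge logic
```

Even a whole-object merge then costs only a reference check per unmodified field, which is why single-field changes stay cheap without exposing path-level edits to the application.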

On merge semantics: Our library is factory-based—each field maps to merge logic when resolvers are built at runtime. Maps could use add-wins or remove-wins. We chose LWW with configurable tombstone policies (TTLs, max counts) for our use case. Low tombstone limits effectively approximate add-wins behavior, since removes lack tombstones to win during merge.

We can support any merge strategy that doesn't require additional runtime metadata beyond what's in the protobuf message.
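A toy Python sketch of the capped-tombstone idea: an LWW map whose removal records are evicted beyond a limit, so old removes lose the metadata they would need to beat a concurrent re-add. The class and its methods are illustrative only, not the production implementation (which carries its metadata inside the protobuf message).

```python
# Illustrative sketch: LWW map with a capped tombstone set. With a low
# cap, evicted removals can no longer "win" over concurrent re-adds,
# approximating add-wins semantics.
class LWWMap:
    def __init__(self, max_tombstones=2):
        self.entries = {}      # key -> (timestamp, value)
        self.tombstones = {}   # key -> removal timestamp
        self.max_tombstones = max_tombstones

    def put(self, key, value, ts):
        cur = self.entries.get(key)
        if cur is None or ts > cur[0]:
            self.entries[key] = (ts, value)

    def remove(self, key, ts):
        self.entries.pop(key, None)
        self.tombstones[key] = ts
        while len(self.tombstones) > self.max_tombstones:
            # Evict the oldest tombstone once over the cap.
            oldest = min(self.tombstones, key=self.tombstones.get)
            del self.tombstones[oldest]

    def merge(self, other):
        # One-directional merge; tombstone exchange omitted for brevity.
        for key, (ts, value) in other.entries.items():
            dead = self.tombstones.get(key)
            if dead is not None and dead > ts:
                continue       # a surviving tombstone beats this write
            self.put(key, value, ts)
```

Once a tombstone is evicted (or its TTL expires), any re-add of that key merges back in, which is the add-wins-like behavior described above.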

On type conflicts: Protobuf's forward compatibility (incremental fields, deprecation/reservation) ensures types never conflict across peers—a key reason we chose protobuf initially. It enforces schema consistency across versions without needing runtime type conflict resolution.

Hopefully this will become open source so you can dig deeper into the implementation details and we can get more feedback from folks like you who understand this space!

CK Engineering

> Hopefully this will become open source so you can dig deeper into the implementation details and we can get more feedback from folks like you who understand this space!

We'll aim to open source it this quarter.

Russell Brown

Thank you for the detailed reply. It is really interesting work. I'd _love_ to see it open sourced one day. I have a little free time in the coming weeks and might have a shot at a "clean room" implementation of what you've described.

Anyway, great work, and a great post, thanks for sharing (and the follow up).