Conversation
@mattmatravers mattmatravers commented Sep 2, 2022

Checklist

mattmatravers commented Sep 9, 2022

This PR describes many of the points raised in #5635

@mattmatravers mattmatravers marked this pull request as ready for review September 9, 2022 16:29
@anshbansal anshbansal added rfc See https://github.com/linkedin/datahub/blob/master/docs/rfc.md for more details docs Issues and Improvements to docs community-contribution PR or Issue raised by member(s) of DataHub Community labels Sep 10, 2022
We wish to avoid silently losing data when two clients make updates to the same aspect. This is quite likely in
an event-driven world where events drive downstream operations on Datasets. We should offer clients the chance to
conditionally update an aspect on the basis that what they recently observed is still the case, and to signal to the
client if the state changed before the update was possible. Essentially, we need a form of "compare-and-swap" operation.
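As a sketch of the compare-and-swap idea (the class and method names here are invented for illustration, not the GMS API), a conditional update carries the version the client last read and fails if that version is no longer current:

```python
import threading

class AspectStore:
    """Toy in-memory stand-in for an aspect store with conditional updates."""

    def __init__(self):
        self._lock = threading.Lock()
        self._aspects = {}  # urn -> (version, value)

    def read(self, urn):
        """Return (version, value); version 0 means the aspect does not exist yet."""
        return self._aspects.get(urn, (0, None))

    def conditional_update(self, urn, new_value, expected_version):
        """Write only if the stored version still matches what the client read."""
        with self._lock:
            current_version, _ = self._aspects.get(urn, (0, None))
            if current_version != expected_version:
                return False, current_version  # conflict: signal back to the client
            self._aspects[urn] = (current_version + 1, new_value)
            return True, current_version + 1
```

A second writer still holding the old version number gets a conflict signal instead of silently clobbering the first write.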
Collaborator:

Makes sense to me. Thanks for the clear description up to this point!

* The client needs to be able to pass that same state back to a GMS "update" endpoint if it wants to ensure the
aspect has not changed between fetching it and mutating it.
* A client could include multiple aspects in its precondition state, in case one update relies on the state of many
other aspects. We need to defend against race conditions for all the aspects given.
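One possible shape for the multi-aspect case (a sketch only; `multi_aspect_cas` and the `(version, value)` store layout are invented for illustration) is to verify every precondition version before applying any write:

```python
def multi_aspect_cas(store, updates, expected_versions):
    """All-or-nothing conditional write across several aspects.

    store: dict mapping aspect key -> (version, value)
    updates: dict of aspect key -> new value to write
    expected_versions: dict of aspect key -> version the client last observed
    """
    # Phase 1: check every precondition aspect, including ones we don't write.
    for key, expected in expected_versions.items():
        current_version, _ = store.get(key, (0, None))
        if current_version != expected:
            return False  # at least one precondition aspect changed underneath us
    # Phase 2: apply all writes, bumping each aspect's version.
    for key, value in updates.items():
        current_version, _ = store.get(key, (0, None))
        store[key] = (current_version + 1, value)
    return True
```

In a real server the two phases would have to run inside a single transaction or lock to close the race window between checking and writing.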
Collaborator:

per-aspect tagging, this makes sense.

an aspect has not changed when making an update to it.

There are some potential spin-offs of this design which could involve writing some kind of PATCH update where a client
supplies a diff instead of a complete new state, but this is out of scope of this particular RFC.
Collaborator:

@RyanHolstien is currently working on PATCH API support on the GMS ingest path. I'm wondering if we should account for it. I'm assuming that by default PATCH would ignore the version tag.


We don't necessarily have to overhaul the versioning approach here, as all we need is a way of uniquely identifying
the aspect instance that is to be changed. We don't need to rely on numerical version ordering for this, but
something will need to be added at the general aspect level to allow for this new functionality.
Collaborator:


This is true

Bad
* No guarantee of state between the read and write
* No guarantee that the plugin code will run
* Specific business logic getting tied up with GMS code
Collaborator:


This is the biggest issue with this. I think most companies using DataHub simply wouldn't bother with this, and thus would not benefit from it!

Contributor Author:


Yeah I re-read this document fully yesterday and I need to remove some ideas from the main section. This is one of them.


### 2: Read and Conditional Update

Read the current aspect and pass it back to the client. The client can propose an update on the basis that what they
Collaborator:


How does the client know what they read is still valid?

Contributor Author:


I guess they'll never know, but they will know the version of the aspect they just read. The clients then have the option of passing that version back to GMS during an update request.


Bad
* Slow, especially under high contention
* Client will need to handle retries
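The retry burden on the client might look something like this sketch (the `cas` helper stands in for a hypothetical conditional-update endpoint; it is not a real GMS call):

```python
import random
import time

def cas(store, key, new_value, expected_version):
    """In-memory stand-in for a conditional-update endpoint."""
    current_version, _ = store.get(key, (0, None))
    if current_version != expected_version:
        return False
    store[key] = (current_version + 1, new_value)
    return True

def update_with_retry(store, key, mutate, max_attempts=5):
    """Read, transform, conditionally write; on conflict, back off and retry."""
    for attempt in range(max_attempts):
        version, value = store.get(key, (0, None))
        if cas(store, key, mutate(value), version):
            return True
        time.sleep(random.uniform(0, 0.05 * 2 ** attempt))  # jittered backoff
    return False  # caller decides whether to fail or escalate, per business requirements
```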
Collaborator:


If a conflict occurs, what should the client do? Just abort or log a warning or something, presumably?

Contributor Author:


The client will have to manage the retry operation or just fail depending on business requirements. GMS won't have enough context to manage this without a suitable patching language that @RyanHolstien is working on.

* Fast, as the retry loop will be confined to GMS

Bad
* GMS will need to understand partial (PATCH) updates
Collaborator:


@RyanHolstien Please have a look here.

With Ryan's changes, we'll be able to support a specific set of PATCH formats, but it may not always suffice for addressing the above use case (if we don't formally support a patch of the given type). Ryan can elaborate.

Collaborator:


As long as the path drills down far enough, JsonPatch semantics are capable of covering this scenario. Basically, you can specify precise fields to modify and it will not impact the rest of the aspect, but if you target the top level of the aspect you may be overwriting things. Example:

JsonPatch for Ownership at the top level:

```
op: ADD
path: /
value: { owners: { "urn:li:corpuser:owner1": {...} } }
```

vs JsonPatch at a lower level:

```
op: ADD
path: /owners/urn:li:corpuser:owner1
value: {...}
```

The first one will wipe out other values in the array, but the second one will not. This is why the SDK implementation is very limited in scope, since JsonPatch is almost too flexible in what it allows. It can very much accommodate this use case, though.
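To make the depth point concrete, here is a tiny hand-rolled subset of a JSON Patch "add" operation on nested dicts (purely illustrative, not the real JsonPatch implementation used by GMS):

```python
def apply_add(doc, path, value):
    """Apply a minimal RFC 6902-style "add" op to a nested dict."""
    parts = [p for p in path.split("/") if p]
    if not parts:               # path "/": replace the whole document
        return value
    target = doc
    for part in parts[:-1]:     # walk down to the parent container
        target = target[part]
    target[parts[-1]] = value   # add/replace the final key
    return doc

# Top-level patch: clobbers the existing owners map.
doc = {"owners": {"urn:li:corpuser:ownerA": {}}}
apply_add(doc, "/owners", {"urn:li:corpuser:owner1": {}})

# Deep patch: leaves ownerA untouched.
doc2 = {"owners": {"urn:li:corpuser:ownerA": {}}}
apply_add(doc2, "/owners/urn:li:corpuser:owner1", {})
```

After the first patch only owner1 remains; after the second, both owners are present.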

Contributor Author:


Makes sense, thanks. I'd be keen to know if JsonPatch can handle idempotent updates like "ensure this set contains these values". But I guess clients will need some form of serialisation guarantees regardless of whether it's a PATCH update or a full replacement.

Collaborator:


Those are two different things in my mind:

  1. Idempotency of updates -> Guarantees that if you add owner1 multiple times, it doesn't add it repeatedly. This is ensured

  2. Verification of particular values -> Guarantees that owner1 is only added if owner2 is present (or something to this effect? Not sure exactly what you mean here). This is not ensured. Maybe you mean:

add owner1 happens at same time as add owner2, owner1 AND owner2 should appear?

This should also be ensured per transactional consistency.

Contributor Author:


  1. That's good news, thanks 👍 .
  2. I'm talking about a scenario where one property of an aspect gets updated using another property as a base state. If the base state is changed then that update is no longer valid, so we will need to offer serialisation for such stateful updates (whether PATCH or UPSERT).

### 4: Serialised Updates Exclusively via Kafka

Here, the client never updates the aspect via HTTP but instead passes it via the MetadataChangeProposal topic in Kafka.
I don't think this is a viable solution due to the possibility of issuing out-of-date updates.
Collaborator:


Still has the overwrite / clobbering problem

* No need for business-specific logic in GMS code

Bad
* Slow, especially under high contention
Collaborator:


Very slow! Also doesn't well support the MCE pathway, as you mentioned.

1. Previous state: The version(s) of aspects required in order for a GMS update to succeed.
2. Preconditions: A wider set of assertions which must be true in order for a GMS update to succeed.

We don't really cover "Preconditions" in this RFC, and in fact I argue we never should, as this leaks business
Collaborator:


Check out the term "etag" -- it's used across other DBs to provide a "version tag" for a document. I'm hoping we can simply add something similar to all of our get / put pathways.

Contributor Author:


I like it. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag seems to cover roughly what we need, though we'd need to think about the multiple aspect use case.
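For reference, the standard HTTP shape of this is an `If-Match` request header carrying the ETag from an earlier read. The endpoint URL below is made up, and this only builds the request; it does not assume any particular GMS support:

```python
import urllib.request

def build_conditional_put(url, body, etag):
    """Build a PUT that a compliant server would reject with 412
    Precondition Failed if the resource's current ETag no longer
    matches the one the client read earlier (RFC 7232 semantics)."""
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("If-Match", etag)
    return req

req = build_conditional_put(
    "https://gms.example.com/aspects/urn:li:dataset:x/status",  # hypothetical URL
    b'{"removed": false}',
    '"v7"',  # ETag value returned by the earlier GET
)
```

On a 412 response the client would re-read the aspect, pick up the new ETag, and decide whether to retry.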


### Aspect Versioning

Currently, [aspect versioning](/docs/advanced/aspect-versioning.md) uses 0 as the latest aspect version. We would
Collaborator:


We can just use the total aspect count instead of the version. It serves the same purpose.

Contributor Author:


As discussed on video chat, this might not work if we limit the retention of older versions. We need something that is deterministic and not subject to race conditions.

> If we implemented this proposal, how will existing users / developers adopt it? Is it a breaking change? Can we write
> automatic refactoring / migration tools? Can we provide a runtime adapter library for the original API it replaces?

This rollout would be done as either a new API endpoint or an optional additional parameter to an existing API, so
Collaborator:


Should be backwards compatible to simply add totalAspect count somewhere

@mattmatravers (Contributor Author):

I'm going to re-organise a lot of this document to be clear what designs are viable and which are not.

If a patch language were available, it might be possible to update collections in such a way that it's not
necessary for clients to know about the previous state of an aspect.

MongoDB offers an update mode which allows clients to [add items to a set](https://mongodb.github.io/mongo-java-driver/4.7/apidocs/mongodb-driver-core/com/mongodb/client/model/Updates.html#addToSet(java.lang.String,TItem)),
Collaborator:


The challenge here is that we'd need to write our own operation DSL. This is what MongoDB has done!
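The idempotent "add items to a set" semantics can be sketched in a few lines (this emulates the behaviour of MongoDB's `$addToSet` operator on a plain dict; it is not DataHub code):

```python
def add_to_set(doc, field, item):
    """Append `item` to the array at `field` only if it is not already there,
    mirroring the idempotent semantics of MongoDB's $addToSet operator."""
    arr = doc.setdefault(field, [])
    if item not in arr:
        arr.append(item)
    return doc
```

Applying the same update twice leaves the document unchanged the second time, so the client needs no knowledge of the aspect's previous state.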


### Serialised Updates Exclusively via Kafka

Here, the client never updates the aspect via HTTP but instead passes it via the MetadataChangeProposal topic in Kafka.
Collaborator:


I think it's possible that we do provide an optional version tag in the Kafka ingest path as well, but there is a higher likelihood of out of date updates. Also, it's obviously not possible for a client to perform a retry since they won't know what failed...

@jjoyce0510 (Collaborator):

Merging this RFC to keep for history. Thanks for the awesome contrib and hoping to get this implementation in shortly!

@jjoyce0510 jjoyce0510 merged commit 54153ea into datahub-project:master Jan 10, 2023
@mattmatravers mattmatravers deleted the rfc-serial-updates branch January 10, 2023 05:59
ericyomi pushed a commit to ericyomi/datahub that referenced this pull request Jan 18, 2023