
core: introduce serde_v8 #9722

Merged
merged 24 commits into from
Mar 26, 2021

Conversation

AaronO
Contributor

@AaronO AaronO commented Mar 8, 2021

This PR introduces serde_v8 into core and uses it to reduce verbosity (making code arguably more readable and rusty).

It mainly serves to showcase serde_v8 as an early alpha and gather feedback before using it for a new op-ABI.

Subtasks

  • Implement serde_v8
  • Implement serde_v8::Value for passthrough (implementation is pretty "hacky", uses transmute)
  • Showcase verbosity improvements (on get_proxy_details, get_promise_details, eval_context)
  • Bring serde_v8 into repo's tree (best mid-term solution IMO)
  • Flesh out README with: quickstart, best-practices, etc...
  • Add tests for json!() compat
  • Implement serialize_map and deserialize_map (mainly to support serde_json::Value, HashMaps, etc...)

Follow-ups

In future PRs:

  1. Use serde_v8 to reduce verbosity of JsStackFrame decoding
  2. Restore micro-benches from the original repo (using bencher or criterion.rs)
  3. Use serde_v8 to optimize op-ABI (refactor(core): new optimized op-layer using serde_v8 #9843)

Related to #9540

@AaronO AaronO changed the title core: introduce serde_v8 refactor(core): introduce serde_v8 Mar 8, 2021
Contributor

@bnoordhuis bnoordhuis left a comment

I'll need to look at serde_v8 in more detail but about this PR:

  1. On the one hand, it removes quite a bit of code, which is definitely nice.

  2. On the other hand, the serialization feels a little magical.

Things like tuple structs getting marshalled into JS arrays is kind of surprising to me but I also kind of see why it makes sense. Maybe I just need to get used to it.

@lucacasonato
Member

Things like tuple structs getting marshalled into JS arrays is kind of surprising to me but I also kind of see why it makes sense.

For reference, serde_json has the same behavior.

@AaronO
Contributor Author

AaronO commented Mar 9, 2021

@bnoordhuis I completely understand the sentiment; however, I don't think "feeling magical" is a negative here.

I think serialization layers can be broadly scored on: expressiveness, consistency/reliability and efficiency.

serde_v8 is substantially more expressive than the "manual" v8-encoding, and it's equally efficient, if not more so in some cases (since it uses internalized strings for struct/obj keys, etc.). I'd also argue it's more consistent/reliable than the manual method: the data is now type-checked, which prevents things such as typos in obj keys, and it follows the already adopted patterns and conventions of serde/serde_json.
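To illustrate the typo-prevention point outside of v8, here is a plain-Rust analogy (not serde_v8 itself; `StackFrame` and the key names are made up for the example). Manual encoding builds objects from bare string keys, where a typo compiles fine and only misbehaves at runtime; typed encoding makes the same typo a compile error:

```rust
use std::collections::HashMap;

// Manual encoding: keys are bare strings, so a typo like "lineNumer"
// compiles fine and only fails (silently) at runtime.
fn manual_encode(line_number: u32) -> HashMap<String, u32> {
    let mut obj = HashMap::new();
    obj.insert("lineNumer".to_string(), line_number); // typo goes unnoticed
    obj
}

// Typed encoding: field names live in the struct definition, so a typo
// at a use site is a compile error instead of a runtime surprise.
#[derive(Debug, PartialEq)]
struct StackFrame {
    line_number: u32,
}

fn typed_encode(line_number: u32) -> StackFrame {
    // `StackFrame { line_numer: .. }` would not compile at all.
    StackFrame { line_number }
}

fn main() {
    let obj = manual_encode(7);
    // The reader expected "lineNumber"; the lookup quietly misses.
    assert!(obj.get("lineNumber").is_none());
    assert_eq!(typed_encode(7), StackFrame { line_number: 7 });
}
```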

serde_v8 was originally built/prototyped for #9540, to improve the efficiency of the op-serialization layer whilst preserving its expressiveness (the original/current op-serialization [json] trades efficiency for expressiveness), mainly to drive overall performance gains and bridge the gap between json-ops & bin-ops (PR to come).

Whilst applying it to the core functions wasn't the original intention, it's a natural fit anywhere we're mapping v8<>rust. For example in my heapstats PR, serde_v8 will replace these ~40-lines with a one-liner to_v8() call.

I personally found parts of core/bindings hard to grok at first, get_proxy_details/get_promise_details/eval_context are conceptually very simple but they were somewhat "drowned" in a lot of encoding complexity that made it hard(er) to discern what they're really doing. I believe reducing that conceptual overhead provides substantial gains in clarity, so we can focus on the essence of what we're doing and delegate 99% of the v8<>rust bijections to serde_v8.

This PR does make specific changes to core/bindings but I also wanted to use it to bootstrap the conversation on serde_v8 and adopting it in a new op-ABI.

Context on op-encoding performance: the latest benches show that using serde_v8 would reduce our op-encoding overhead by up to ~15x, likely ~10x in most cases, and at least ~3x in the "worst cases" (dealing with string values, which cause extra allocs; string keys are decently fast since they are deduped by v8's internalized strings and can be further improved with a KeyCache).

@AaronO
Contributor Author

AaronO commented Mar 9, 2021

I think it could also be neat to have serde_v8::assign_to(scope, obj, v) to "merge" structs with existing v8::Objects.

We could also add serde_v8::Function similar to Value (also Accessor ...), but that might be a bit too magic ...

With assign_to or merge_v8 we could use structs to encode the global context via CoreGlobal, CoreFunctions etc...

The upside is that it would be "typed": you could grasp the structure at a glance, and serde would handle camelCase-ification and other details.
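As a rough, standalone sketch of what such a merge could look like (plain Rust, with a `HashMap` standing in for a `v8::Object`; `assign_to` and `camel_case` here are hypothetical illustrations, not serde_v8 API):

```rust
use std::collections::HashMap;

// snake_case -> camelCase, roughly what serde's rename_all = "camelCase" does.
fn camel_case(s: &str) -> String {
    let mut out = String::new();
    let mut upper_next = false;
    for c in s.chars() {
        if c == '_' {
            upper_next = true;
        } else if upper_next {
            out.extend(c.to_uppercase());
            upper_next = false;
        } else {
            out.push(c);
        }
    }
    out
}

// Stand-in for merging into an existing v8::Object: add/overwrite the new
// entries without clearing what's already there.
fn assign_to(obj: &mut HashMap<String, String>, fields: &[(&str, &str)]) {
    for (k, v) in fields {
        obj.insert(camel_case(k), v.to_string());
    }
}

fn main() {
    let mut global = HashMap::new();
    global.insert("existing".to_string(), "kept".to_string());
    assign_to(&mut global, &[("core_version", "1.0"), ("op_count", "42")]);
    assert_eq!(global["coreVersion"], "1.0");
    assert_eq!(global["opCount"], "42");
    assert_eq!(global["existing"], "kept"); // merge, not replace
}
```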

@bnoordhuis
Contributor

string keys are decently fast since they are deduped by v8's internalized strings

I left a comment on the relevant commit in serde_v8 but for posterity: the problem with interned strings is that they leak memory unless you know in advance that the set of strings is bounded.

I don't know if that's true for deno, but even if it's true now, it might not stay true going forward.

@AaronO
Contributor Author

AaronO commented Mar 10, 2021

@bnoordhuis Thanks for the feedback!
(Just a heads up, the current code is a bit rough since I mainly focused on getting it working, benches etc... will do a second pass on it and some cleanup/polish)

Ultimately the plan is to use our own KeyCache to dedupe and use onebyte-static-external-strings for all struct keys. I don't have data on it, but I assume the size/number of struct keys in op-ABI payloads is relatively small & bounded for most programs in practice, since it's essentially dependent on the struct keys used by internal deno ops. However, with our own KeyCache we could easily use a bounded LRU.
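A bounded LRU KeyCache along those lines could be sketched roughly like this (plain std Rust; a `u32` id stands in for a cached v8 string handle, and this is an illustration, not serde_v8's implementation):

```rust
use std::collections::HashMap;

// A tiny bounded key cache: dedupes struct-key strings and evicts the
// least-recently-used entry once `capacity` is reached.
struct KeyCache {
    capacity: usize,
    map: HashMap<String, u32>,
    order: Vec<String>, // front = least recently used
    next_id: u32,
}

impl KeyCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: Vec::new(), next_id: 0 }
    }

    fn get(&mut self, key: &str) -> u32 {
        if let Some(&id) = self.map.get(key) {
            // Cache hit: move the key to the most-recently-used position.
            self.order.retain(|k| k != key);
            self.order.push(key.to_string());
            return id;
        }
        if self.map.len() == self.capacity {
            // Evict the least-recently-used key to stay bounded.
            let lru = self.order.remove(0);
            self.map.remove(&lru);
        }
        let id = self.next_id;
        self.next_id += 1;
        self.map.insert(key.to_string(), id);
        self.order.push(key.to_string());
        id
    }
}

fn main() {
    let mut cache = KeyCache::new(2);
    let a = cache.get("promise_id");
    assert_eq!(cache.get("promise_id"), a); // hit: same cached handle
    cache.get("line_number");
    cache.get("column_number"); // capacity reached: "promise_id" evicted
    assert_ne!(cache.get("promise_id"), a); // miss: fresh handle
}
```

An O(1) LRU would use a linked order structure instead of `Vec::retain`, but for a small, bounded set of struct keys the simple version is fine.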

I haven't implemented Maps in serde_v8 yet, but I would exclude their keys from the KeyCache and use regular v8::Strings or use a separate cache to avoid cross-pollution since there should be much more variability in map-keys vs struct-keys and thus cache churn.

We could also use regular strings for this first release if you're concerned about the (potentially) unbounded mem-usage. They are naturally slower and will cause higher GC-pressure, but overall it's still faster than the current implementation.

Quick benches

Internalized Strings

test abi_add_json     ... bench:     279,130 ns/iter (+/- 53,193)
test abi_add_v8       ... bench:      19,628 ns/iter (+/- 949)
test abi_add_void     ... bench:       5,399 ns/iter (+/- 1,221)
test abi_promote_json ... bench:     329,163 ns/iter (+/- 55,322)
test abi_promote_v8   ... bench:     108,140 ns/iter (+/- 8,793)
test abi_promote_void ... bench:      25,326 ns/iter (+/- 4,033)
test abi_sum_json     ... bench:     282,355 ns/iter (+/- 57,741)
test abi_sum_v8       ... bench:      35,914 ns/iter (+/- 3,013)
test abi_sum_void     ... bench:       5,654 ns/iter (+/- 1,445)

Regular Strings

test abi_add_json     ... bench:     282,485 ns/iter (+/- 47,440)
test abi_add_v8       ... bench:      27,178 ns/iter (+/- 688)
test abi_add_void     ... bench:       7,079 ns/iter (+/- 2,267)
test abi_promote_json ... bench:     346,326 ns/iter (+/- 37,595)
test abi_promote_v8   ... bench:     128,139 ns/iter (+/- 5,183)
test abi_promote_void ... bench:      24,808 ns/iter (+/- 494)
test abi_sum_json     ... bench:     297,426 ns/iter (+/- 47,036)
test abi_sum_v8       ... bench:      33,818 ns/iter (+/- 1,184)
test abi_sum_void     ... bench:       5,571 ns/iter (+/- 1,253)

Bench conclusions

Regular v8 strings are ~20-40% slower on these benches (decoding struct keys), but it's still an order of magnitude faster than the current implementation. So I think we can ship regular strings in the first release and then dedupe with our own bounded KeyCache in the future.
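As a sanity check, the headline ratios quoted earlier ("up-to ~15x", "at least ~3x") can be recomputed from the internalized-strings numbers above; this standalone snippet just does the division:

```rust
fn main() {
    // (name, json ns/iter, serde_v8 ns/iter), copied from the
    // internalized-strings bench run above.
    let benches = [
        ("abi_add", 279_130.0_f64, 19_628.0_f64),
        ("abi_promote", 329_163.0, 108_140.0),
        ("abi_sum", 282_355.0, 35_914.0),
    ];
    for (name, json, v8) in benches {
        println!("{name}: {:.1}x faster than json", json / v8);
    }
    // abi_add is the best case (~14x); abi_promote the worst (~3x).
    assert!((279_130.0_f64 / 19_628.0 - 14.2).abs() < 0.1);
    assert!(329_163.0_f64 / 108_140.0 > 3.0);
}
```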

@bartlomieju
Member

@AaronO what's your idea for further integration? I.e., which parts of the codebase could benefit the most from introducing serde_v8 in its current form?

I wonder if you could bring serde_v8 into the tree for this PR so we could run it properly on CI.

@AaronO
Contributor Author

AaronO commented Mar 18, 2021

@bartlomieju serde_v8 is now in tree, CI pending.

what's your idea for further integration? I.e., which parts of the codebase could benefit the most from introducing serde_v8 in its current form?

serde_v8's main use-case is to optimize the op-ABI. This PR aimed to kickstart the conversation on serde_v8 starting with some small and scoped changes.

Beyond the op-ABI, it could further be used in core/bindings.rs to strongly-type our core-scope and replace set_func() stuff, see my previous comment: #9722 (comment)

@AaronO AaronO marked this pull request as ready for review March 18, 2021 22:55
Reduce encoding verbosity of some core functions
Mainly removed ABI stuff and benches (used nightly toolchain)
@AaronO AaronO changed the title refactor(core): introduce serde_v8 core: introduce serde_v8 Mar 24, 2021
Member

@bartlomieju bartlomieju left a comment

@AaronO could you please add copyrights to the files?

Comment on lines +229 to +230
Magic(MagicSerializer<'a, 'b>),
Regular(ObjectSerializer<'a, 'b>),
Member

What's the difference between those two? Maybe a comment here could help

Comment on lines +1 to +33
use rusty_v8 as v8;
use std::sync::Once;

/// Compiles and runs a JS source string, returning the result value.
pub fn js_exec<'s>(
  scope: &mut v8::HandleScope<'s>,
  src: &str,
) -> v8::Local<'s, v8::Value> {
  let code = v8::String::new(scope, src).unwrap();
  let script = v8::Script::compile(scope, code, None).unwrap();
  script.run(scope).unwrap()
}

pub fn v8_init() {
  let platform = v8::new_default_platform().unwrap();
  v8::V8::initialize_platform(platform);
  v8::V8::initialize();
}

pub fn v8_shutdown() {
  unsafe {
    v8::V8::dispose();
  }
  v8::V8::shutdown_platform();
}

/// Runs `f`, initializing v8 exactly once per process.
pub fn v8_do(f: impl FnOnce()) {
  static V8_INIT: Once = Once::new();
  V8_INIT.call_once(|| {
    v8_init();
  });
  f();
  // v8_shutdown();
}
Member

Move contents of this file to serde_v8/tests

Comment on lines +491 to +497
macro_rules! not_reachable {
($($name:ident($ty:ty, $lt:lifetime);)*) => {
$(fn $name(self, _v: $ty) -> JsResult<$lt> {
unreachable!();
})*
};
}
Member

I believe this macro is superfluous, it would be more readable (although repetitive) to spell out those methods explicitly

use std::convert::TryFrom;

#[derive(Deserialize)]
struct MagicOp<'s> {
Member

Does magic have some special meaning in this file?

type SerializeStructVariant =
VariantSerializer<'a, 'b, StructSerializers<'a, 'b>>;

forward_to! {
Member

Ditto, could you please remove this macro in favor of repeating method definitions?

Member

@ry ry left a comment

LGTM!

Aaron, this is an amazing piece of code artistry. We've talked about this idea before, but never took it seriously. We thought it would be too hard - thought it wouldn't pay off. We were wrong on both counts, as you have demonstrated here.

I have benchmarked http_bench_json_ops on main and #9843 and see pretty massive improvements: https://gist.github.com/ry/c4008dc9d7a8a08bce1fdbad7d70e8e0

Once we replace JSON ops with serde_v8, this will be the single biggest performance improvement in Deno in two years. I suspect it will vastly simplify a lot of the juggling in deno_core.

It may not be very visible to users immediately, but this really changes the game.

}
}

// from_v8 deserializes a v8::Value into a Deserializable / rust struct
Member

Use triple slash comments here and elsewhere.

@ry ry merged commit 3d2e05d into denoland:main Mar 26, 2021
@AaronO AaronO deleted the core/intro-serde-v8 branch March 26, 2021 11:42
hardfist pushed a commit to hardfist/deno that referenced this pull request Aug 7, 2024