Attendees:
Name | Abbreviation | Organization |
---|---|---|
Chris de Almeida | CDA | IBM |
Ujjwal Sharma | USA | Igalia |
Waldemar Horwat | WH | Invited Expert |
Ben Allen | BAN | Igalia |
Jesse Alama | JMN | Igalia |
Linus Groh | LGH | Bloomberg |
Ron Buckton | RBN | Microsoft |
Daniel Minor | DLM | Mozilla |
Chip Morningstar | CM | Consensys |
Philip Chimento | PFC | Igalia |
Michael Saboff | MLS | Apple |
Mikhail Barash | MBH | Uni. Bergen |
Samina Husain | SHN | Ecma |
Keith Miller | KM | Apple |
Richard Gibson | RGN | Agoric |
Justin Ridgewell | JRL | |
Aki Braun | AKI | Ecma Secretariat |
Jordan Harband | JHD | HeroDevs |
Istvan Sebestyen | IS | Ecma |
Dan Gohman | DGN | Invited Expert |
Josh Blaney | JPB | Apple |
Dmitry Makhnev | DJM | JetBrains |
Chengzhong Wu | CZW | Bloomberg |
Ashley Claymore | ACE | Bloomberg |
Presenter: Rob Palmer (RPR)
RPR: Welcome everyone to the 103rd meeting. We are here in Los Angeles today, or at least pretending it is Los Angeles; it is the West Coast at least, and close to home for many people here. Looking at the people we have in attendance, I think most people know who we are, but I am Rob, one of your chair group, and along with Ujjwal and Chris we are your co-chairs. We also have some facilitators, including Justin [INAUDIBLE]. Two of our facilitators are not with us, but they are part of our facilitator group. If you are dialed in through WebEx today, you should have reached it via the entry form. If you received the link by any other means, go back to the reflector invite and sign in there; it is of the utmost importance that we record all entries. So if you received the URL via other means, please tell the person who distributed it not to do so in the future. We have a Code of Conduct, which you can find on our main web page. Please do your best to respect it in the spirit in which it is written, not just the letter of the law, and we should all try to be excellent to each other. If anything happens that makes you uncomfortable, please report it; reporting is anonymous.
RPR: Okay, that is fine; my connection is currently on 4G, as I can see. We have two hours in the morning before a break, and two hours in the afternoon or evening. We also have the most important tool, which is TCQ; you will find the link on the reflector invite. There you can view the queue of all items, click on an item to get details, and see the current agenda item, and there are some buttons to help you enter the conversation. Generally you should be using the leftmost button, which starts a new topic; that keeps the conversation distributed and organized. If you want to reply to the current topic, that is the second, lighter blue button. If you want to briefly interject, you can ask a clarifying question, and if you really need to interrupt, for example if you cannot hear me, you can raise a point of order. When you are holding the mic and speaking, you will see an "I'm done speaking" button; please use it, as it helps the chairs when you actively end your time by clicking it. For text-based communication we have Matrix, and most people are already in there. If you are not, you can consult the admin and business channel to get yourself added, and message one of the chairs if you have any trouble.
- no proposal
- slides
SHN: Thank you, and welcome everybody to the TC39 meeting; as always, it is a pleasure to be here. As you see, it is myself and Aki Rose who are the secretaries for TC39. I would like to thank István for supporting TC39 for many years, and I want to thank you, István, for the continued guidance that you are offering not only to me but also to TG5.
SHN: Thank you to the notetakers. I also want to recognize Aki, who has been supporting me as the TC39 co-secretary; I had not made a slide on this for the previous meeting, and I want everybody to be clearly aware of Aki’s role. I also thank Aki for the efforts already made to investigate, find a solution, and implement what really needed to be done for the ES documents in PDF form; that is one of many steps in which she will be supporting us, so thank you.
SHN: Just a couple of things I want to cover today, as usual, from the Secretariat. We will talk about some approvals, projects, and new things that are happening at the GA, and in the annex slides you have the Code of Conduct and some of the next meetings for the General Assembly and ExeCom. The highlight is that the Executive Committee meeting coming up in October has shifted to the 22nd and 23rd of October.
SHN: First of all, congratulations on the approval of the two standards: at the last General Assembly both ECMA-262 and ECMA-402 were approved, so thank you for your efforts in making that happen. This is an active committee, and it is great that we have the next editions; they have been noted on the website and added to the news, so if anything is missing regarding these two, please let me know. I think we got it all highlighted on the Ecma website, so thank you for these efforts. I also want to highlight another new standard, which may be of interest to some TC39 members who would like to participate. As you may remember from previous discussions, we have talked about the CycloneDX Bill of Materials, which comes to us through the membership of OWASP; we now have ECMA-424, 1st edition. The work of that committee is minuted on YouTube and its agenda is open on GitHub, so if you would like to look at it, please do. The meeting cycle for the next edition has already begun; the first meeting was on the 25th of July, last week. In addition to the technical committee, we have two task groups, on the Transparency Exchange API and on Package URL. Both are active, their scopes and charters are published, so have a look at them on our website, and if you wish to participate, please just reach out. It is a very active group.
SHN: We have the new proposal for TC55. This concerns runtimes, in conjunction with W3C, and it is up for vote. We have discussed it throughout this year at a couple of different Executive Committee meetings, and there have been a number of discussions regarding the process, objectives, and scope of the work; it has been quite fine-tuned, and DE has been drawing it together with LCA and others. I have received some issues regarding the TC, and as we proceed with this new proposal we want to make sure we are covering the correct group and that we don’t have any risks; a few comments have come up, they will be worked out, and I wanted to highlight to the committee that we have this potential new work proposal. We also have some new members that have been approved; they were provisional members earlier in the year, and many of them were already active as invited experts, so I think it has worked out very well. Thank you for the membership, and we welcome these four new members to Ecma and TC39: Replay.io, HeroDevs, Functional Software, and the Open Source Business Alliance.
SHN: Our General Assembly took place June 26-27 in Geneva. As usual, the chairs give a complete presentation on their technical committees, with some highlights and the direction the work is going. We had two members of TC39 present at the last GA; SFC was there and presented on TG2, and both presentations received very positive feedback. It was quite interesting to see some of the questions that came up from the General Assembly attendees. One request was that the report from the technical chairs could add a bit more detail on what the TGs are doing, because it is of interest to the GA and to the Executive Committee. So this is positive feedback: if we can add more information on the chairs' work and the future work of the TGs, that would be important for the members to know what is going on in the technical committees.
SHN: And that is the extent of my presentation; it was quite short today, and specific to bringing you the feedback on the positive approval of the standards, the work moving forward with TC54, and highlighting to all members that we have a new proposal for TC55. I will move to the annex. On document licensing, Aki and I are working on getting approval, and I do not have that approval for today. I invite all of the invited experts on the call, and thank them for their support; those whose organizations could eventually consider becoming a member of Ecma, that is always appreciated. The Code of Conduct was already mentioned by RPR in his presentation. Summaries and conclusions are very important; thank you very much for the last meeting, we had quite a few, and I hope we can continue that, as it makes for a strong commitment to the committee. Last but not least, a reminder of the list of documents; I have listed them there, and you may be able to access them through your TC chairs. On meetings: the one taking place in Tokyo will be the next in-person meeting, and note the General Assembly and committee dates, and that the meeting date for the Executive Committee has been pushed back. If there are discussions needing approval, I would like the chairs to keep this in mind; I will give you a heads-up when we are close to the 60-day mark before the GA, and if you need to bring anything up, you may do so then. Good, I think that is my last slide; thank you, I will stop sharing, and I am open to any questions or comments if there are any on the queue.
RPR: Thank you SHN. Very good report. Any comments or questions on this? CDA?
CDA: Can you talk more about what the concerns were regarding the IPR policy for the proposed TC?
SHN: For the proposed TC55, the concern is about the IPR policy, given that the work is being done predominantly in a group together with W3C. We want it to be clear that the work at Ecma is under IPR terms well aligned with the W3C community group, which is not fully chartered by W3C; that Ecma's policy covers the work, including any historical work; and to make sure the policies are properly covered. This is not a showstopper; what it requires is clear verbiage in our proposal, and our verbiage was not clear enough, which is why the concerns came up. I need to look at it further; the concerns came up late Friday night, and I have just received some feedback, which will allow me some time to go through it. But our intention is that the IPR of this work is governed by Ecma's IPR policies. CDA, that is the best I can answer at this point in time.
CDA: Thanks.
SHN: If there are no further questions, I do have one comment, and I apologize: Aki, if there are any comments you would like to add to the presentation, please take the time to do that now.
CDA: Thank you, sorry Rob, I just assumed there was nothing more in the queue since I heard nothing.
RPR: You are correct.
RPR: We will work diligently on the key points and the conclusions, and anyone here is free to intervene and remind us if we forget, and ask us to stop to write things down.
SHN: Thank you very much. For TC55, if anybody has any further comments, feel free to share them with me by email; even if you are not a voting member, I am open to hearing any voice regarding these concerns. Thank you.
The Secretary's Report, presented by Samina Husain, covers several key updates and developments related to various technical committees and ongoing projects. The summary encapsulates the key updates and issues discussed in the report, including acknowledgments, updates on standards and committee work, new memberships, IPR policy concerns, and upcoming meetings.
Below are the main points:
- Expressed gratitude to István for his long-term support of TC39 and acknowledged Aki Rose for her role as the TC39 co-secretary.
- The General Assembly (GA) and Executive Committee meetings were highlighted, with changes noted for the upcoming Executive Committee meeting, now scheduled for October 22-23.
- The committee was congratulated on the approval of two standards, ECMA-262 15th edition and ECMA-402 11th edition, which have been published on the ECMA website.
- A new standard related to CycloneDX Bill of Materials was noted, and TC54 is active with meetings and task groups already established.
- A new proposal for TC55 (WinterCG), a collaboration with W3C, was discussed. The proposal has undergone significant review and refinement, and a vote on it is upcoming.
- Concerns raised about the Intellectual Property Rights (IPR) policy related to the proposed TC55 were noted and under review.
- Four new members, Replay.io, HeroDevs, Functional Software, and the Open Source Business Alliance, have been approved to join ECMA and will be participating in TC39.
A reminder about the technical notes was provided: summaries and conclusions should use a format which ensures that anyone reading them can quickly understand the discussion, main points, and resulting actions. The summary captures the objective or main topic of the discussion; the conclusion includes the agreements, resolutions, and next steps.
Further comments or questions were invited regarding the TC55 proposal and other topics discussed in the report.
- no proposal
- no slides
SYG: There is a normative change that got consensus a while back, where async generators were not handling promises that could be broken by adding a sneaky getter on some property; I forget which one, I think `return` or something like that. It got consensus and it has shipped in Chrome and Safari; I don’t know about the Firefox implementation status, but in any case it got two implementations, and the fix was merged.
SYG: There is another normative fix that is not yet merged, though there is no additional presentation about it in this meeting. There is an obvious spec bug in the module loading machinery, involving a module graph with `await`, where there is the possibility for the spec algorithm to misbehave. There is a fix, and the PR is ready to go; this is an FYI for folks who care about the space to take a look, because it is an obvious spec bug, following the precedent that we set with previous spec bug issues. We are not asking for consensus here; this is not a thing that we have a choice about. If you do care, please read it, and if there are no objections we will try to merge it by the end of this meeting; it says the end of the day, but the end of the meeting is more reasonable.
SYG: As for noticeable editorial changes since the last meeting, the only one to call out is that the @@ notation has changed to the percent-sign notation, to align with other intrinsics; the @@ notation is generally less understandable to readers, so for proposal authors, please prefer the percent-sign notation. This list is mostly the same as before, but just to show it again here. And that is it. Any questions?
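As an illustration of the notation change (these are generic examples, not quotations from the spec text):

```
Before: @@iterator, @@asyncIterator, @@toStringTag
After:  %Symbol.iterator%, %Symbol.asyncIterator%, %Symbol.toStringTag%
```

The percent-sign forms align with other intrinsic references such as %Array.prototype%.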
Please look at PR #3357, for folks who are well versed in the module machinery, to review the spec fix there.
Presenter: Ben Allen (BAN)
- no proposal
- slides
BAN: Okay, looks like we are visible. Before I get going on this: there are two normative changes that are up for discussion at this meeting. They are small in terms of text, but large in terms of potential need for debate and discussion. The first is that we discovered (I forget who discovered this, apologies) that the way ECMA-262 and ECMA-402 validate ranges is inconsistent. 402, when it is determining, for example, the number of integer digits to include, takes those values as Numbers, tests whether they are in range, and then truncates them; 262 will truncate first and then range-check. It is a small inconsistency, and in the PR that we will discuss, 402 is changed to match 262, but maybe that is not the right way around.
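A sketch of the difference in validation order described above (hypothetical helper names, not actual spec text):

```javascript
// Two orders for validating a numeric option against a range, e.g. a
// digits-style option bounded to [0, 100].

// 402-style, as described above: range-check the raw Number, then truncate.
function rangeCheckThenTruncate(value, min, max) {
  if (Number.isNaN(value) || value < min || value > max) {
    throw new RangeError(`${value} is out of [${min}, ${max}]`);
  }
  return Math.trunc(value);
}

// 262-style: truncate first, then range-check the truncated value.
function truncateThenRangeCheck(value, min, max) {
  const truncated = Math.trunc(value);
  if (Number.isNaN(truncated) || truncated < min || truncated > max) {
    throw new RangeError(`${value} is out of [${min}, ${max}]`);
  }
  return truncated;
}

// The two orders disagree only at the boundaries:
truncateThenRangeCheck(100.5, 0, 100); // 100, since 100.5 truncates into range
// rangeCheckThenTruncate(100.5, 0, 100) throws a RangeError
```

For in-range inputs such as 99.9 both orders agree, which is why the inconsistency went unnoticed.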
BAN: The other one: if you look at the comment thread on the associated issue, it is very, very long. Currently no browser implementation allows for dynamic updates to locales, but JS engines outside the browser have done this for some time. If you allow this in the context of the browser specifically, it becomes a new fingerprinting surface if those updates are observable, and there has been a tremendous amount of discussion on the issue going on for months and months. So this might require a fairly large amount of discussion. In terms of editorial changes, what we have is a bit of matching 262 and a bit of matching Temporal: we have changed formatting to take the time in nanoseconds, which makes the format-pattern operation infallible, so it is moving to a common abstract operation. We have also reduced use of the “take” macro; we now use it in one place, and in that one place the description of the operation is clearer. This reflects changes to be made in 262, and there is one PR where this type will become a full abstract operation instead of a macro.
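The fingerprinting concern with observable locale data can be seen in miniature: any script can read resolved locale information without a permission prompt, so additional observable variation (such as dynamically installed locales) would add entropy for tracking. A minimal sketch:

```javascript
// Resolved locale information is freely observable by any script.
const { locale, timeZone } = new Intl.DateTimeFormat().resolvedOptions();
console.log(locale, timeZone); // e.g. "en-US" and an IANA time zone name

// The set of supported locales is observable too; if that set could change
// dynamically, those changes would also be observable, hence the concern.
const supported = Intl.DateTimeFormat.supportedLocalesOf(["en", "de", "ja"]);
```

Every additional observable bit narrows down which user is behind the browser, which is why dynamic locale updates are contentious for the web.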
BAN: And we moved away from the @@ notation to match 262; I believe that was in the works for over five years. All right, that is it for the 402 editor’s update.
402 has been updated to match 262 and Temporal in several ways.
Presenter: Chris de Almeida (CDA)
- no proposal
- no slides
Private discussion
Addition of BAN to editors group by acclamation.
BAN: I am back.
CDA: Welcome Ben, you are now one of the editors of the ECMA402 specification.
BAN: Thank you so much! It reminds me of getting a PhD, when you step outside of the room for the committee to deliberate.
CDA: Unfortunately, I cannot open the door and say come back, Doctor.
BAN: All right, but nevertheless thank you so much.
Presenter: Chip Morningstar (CM)
- no proposal
- no slides
CM: So every plenary I get up and have to find a way to declare the eternal immutability of JSON and the stability and compatibility of the ECMA-404 spec. This time it is exactly the same.
RPR: One day, CM, one day. Thank you very much. All right, moving on; we have the test262 status update with PFC.
Presenter: Philip Chimento (PFC)
- no proposal
- no slides
Prepared statement:
- RegExp.escape, Atomics.pause, Math.sumPrecise, base64, and source phase imports now have full or almost-full test coverage. Thanks to the proposal champions for pitching in with tests.
- Large PRs: we are working through resizable ArrayBuffer, landing it piece by piece, and have started to tackle explicit resource management.
- We'll continue to work towards the goal of having a testing plan for each proposal, to make it easier for people to write tests.
- Igalia's grant is ending in September or October and is not renewable. We are working on applying for other grants so that we can keep the current level of staffing.
PFC: A short status update for test262; I don’t have slides, but I will paste the points into the notes afterward. Happy to report that RegExp.escape, Atomics.pause, Math.sumPrecise, base64, and source phase imports now have full or almost-full test coverage, thanks to the proposal champions and everyone else who pitched in with tests.
PFC: You may know that we have some very large pull requests pending in test262, which we are steadily working through; we are currently landing resizable ArrayBuffer piece by piece, and we have started to tackle explicit resource management. We will continue to work towards the goal of having a testing plan for each proposal, to make it easier for people to write tests and to know when a proposal is fully covered. And then some less good news: the grant that is funding Igalia’s work on test262 is ending in September or October, and it is not renewable. We are working on applying for other grants so that we can keep the current level of staffing on test262; if you have any information on that topic, let me know.
RPR: Any questions for Philip on test262? No questions on the queue, so I will put myself on there: is there a ballpark minimum amount that helps with test262 funding, and guidance on what people and their companies should look into?
PFC: I did cover that at one point, but I don’t have the figure off the top of my head, I will take a look and get back.
RPR: Okay, thank you. All right, there is still nothing in the queue, so that is all. Thank you, PFC.
Presenter: Chris de Almeida (CDA)
- no proposal
- no slides
CDA: Sure, a brief update: TG3 meetings are weekly, and content has lately focused pretty much exclusively on discussing the security impact of proposals that are in various stages. So that is pretty much it. Please join us at TG3 if you are interested in security, specifically the security of proposals. There is another item we have discussed recently, but it has its own topic scheduled for tomorrow, if I am not mistaken, so we will wait until then to discuss it. Thank you.
RPR: Thank you, Chris. All right, continuing onwards: we have TG4. Jon?
Presenter: Jon Kuperman (JKP)
- no proposal
- no slides
JKP: I am going to share my screen. I am representing TG4, the source maps task group. Last month we had our second hackathon with our friends at Google; we had people from Google, Igalia, and Mozilla, and probably some others I am missing, but we had a good turnout, and we spent two full days working together on some of these proposals, getting demos working, going through the specification, and working out all things source maps. The scopes proposal is the biggest new work: it essentially adds to the specification a way for variable names and function names to persist through source maps, despite what compilers do, so that devtools can show the original names. We got a fully working prototype of the scopes proposal, which is nice; we built both the authoring side, where web tooling adds these new fields to the source map, and the reading side, where devtools read these fields out of the source map and use them to change the panel.
JKP: We went through the existing specification in hopes of getting it ready for the October plenary, where we will be seeking consensus on it. We also started exploring new spaces, which is exciting; reach out if you are interested in talking about them, as these are at an early, exploratory stage. They include the idea of embedding dependency information in source maps, ways we can improve Wasm support and show some entities better, and a debugging API for manipulating Wasm. We got a demo working with webpack, Babel, and gen-mapping for the dependency map implementation; this would allow web tools to give you suggestions for tree shaking. For Wasm debugging, we had a big meeting with Mozilla and Google folks beforehand, and we hope that it will lead to work on some underspecified APIs, so that you can better control how your language displays in devtools; these are debugging or expression languages. And then strict validation: we discussed whether there is any way we can tighten up the existing spec, which has often been underspecified, for example what happens if a field is missing or if there are errors; all of the tools try to fail as gracefully as possible, so we discussed new fields that could correspond to stricter validation.
JKP: The last thing I will cover is the testing repository, which is supported by Igalia. We have the entire specification covered by unit tests; there are additional areas that need to be fleshed out, but it is near completion, and it has already been integrated into Mozilla’s source map library and devtools, and I think it is being integrated into Chrome. Our work now continues with adding test coverage for the upcoming scopes proposal. Cool; and one last heads-up: we are planning to come to the October plenary with the cleaned-up specification and, hopefully, the scopes proposal for TG4, and we will make sure our plenary time is centered around that proposal. Thank you so much.
Presenter: Chris de Almeida (CDA)
- no proposal
- no slides
Private discussion
CDA: Welcome back convener NRO, you have been elected by acclamation.
Presenter: Mikhail Barash (MBH)
- no proposal
- no slides
MBH: Hello, so just a brief update this time. So, we still have our meetings every month. We will arrange the TG5 Workshop co-located with the plenary in Tokyo, this will be on Friday the 11th of October, the first half of the day. It will be hosted by the computing software group at the University of Tokyo. So we will talk about formalization of the grammar models that are used in the ECMA262 spec. This is a topic relevant both to that research group, and to the TC39 committee. And also, KAIST promised to talk about their current work in Wasm SpecTec. So everyone is welcome to attend the workshop, and the registration link is in the Reflector post with the information about the next plenary in Tokyo. So yes, are there any questions?
RPR: There is no one in the queue. I will say thank you for advertising this; last time we had it only on the reflector and not everyone saw it, so thank you, and you can all attend if you wish, on the Tokyo trip. All right, we are doing well on the agenda. Onward we go to updates from the Code of Conduct committee. Chris?
Presenter: Chris de Almeida (CDA)
- no proposal
- no slides
CDA: All goes well for the most part: we have not had any new reports, and nothing notable in our spaces, Discourse, or GitHub.
CDA: We prefer not to be busy on the Code of Conduct committee. As always, I will repeat my pitch: folks who would like to join the Code of Conduct committee, please reach out to one of us. The main ask is that you are available for a period of one hour every two weeks; often we do not have to meet during that hour because we have nothing to talk about, but it is important that it is blocked out on your calendar so we can function as a committee when we need to. So thank you.
RPR: All right, thanks Chris. That is an open call for anyone who wants to volunteer; please consider it. Next up, we have (I clicked the button) Ben Allen, with Intl.DurationFormat display of the negative sign.
Presenter: Ben Allen (BAN)
- no proposal
- no slides
BAN: There is one small normative change; it is essentially a bugfix, and I wanted to bring it here out of an abundance of caution. One thing about DurationFormat is that we have two fundamentally different ways of displaying durations: one in prose, and one in this sort of digital-clock style. A couple of meetings ago we changed how we handle negative signs, such that the negative sign on negative durations is only displayed on the largest unit. We have an embarrassing bug: the negative sign was dropped in the following case: the duration is negative, digital-clock style is requested, the largest unit displayed is hours, and its value is 0. I don’t know if the example is large enough to be legible, but we embarrassingly dropped the negative sign in that particular combination of cases. There is a PR up to fix it, and, like I said, out of an abundance of caution, I would like to ask the group for consensus to change the current behavior, where in that specific case we drop the negative sign, to instead display it.
RPR: before we go to that call for consensus, is there any clarifying questions?
WH: Just curious what happens if the signs of duration components are heterogeneous?
BAN: Sorry, those are rejected; since the start of DurationFormat, mixed-sign durations have not been allowed.
WH: Okay thank you.
CDA: I cannot read the example.
BAN: My apologies; how about if I copy it into Matrix, will that work?
new Intl.DurationFormat('en', {hours: "numeric"}).format({minutes: -1, seconds: -2});
// currently: 0:01:02
// should be -0:01:02
BAN: I will call for consensus.
RPR: Support on the queue from DLM. Is there any more support, or any objections? All right, no objections, so congratulations, this has consensus.
BAN: Fantastic thank you very much.
RPR: The emojis on the call help make it feel like a real-life meeting. Next up we have Luca with the source phase imports update.
- Consensus to fix the Intl.DurationFormat spec bug that erroneously drops the negative sign
Presenter: Luca Casonato (LCA)
- no proposal
- no slides
LCA: Very quick update today; I just wanted to quickly share what we have been up to. As you may remember (I’m not going to do a full recap, since it’s only a Stage 3 update), the main thing here was to officially support Wasm on the web, and in particular `import source`, which gives you a `WebAssembly.Module` rather than a `WebAssembly.Instance`; you can then create a new instance from it, passing additional options to the constructor that you could not pass otherwise. That’s what we’re going for.
LCA: Wasm imports are now approved; this reached Stage 3 in April. The HTML integration is well under way; it is essentially complete and has been reviewed by Mozilla and Apple, and we are waiting on the final approval from V8/Chromium. If anyone here from V8 would like to weigh in, that would be great. A quick recap of the integration: there is no import attribute required, because we consider Wasm to be at the same privilege level as JavaScript; Wasm can import other JavaScript, and it doesn’t need import attributes to be imported on the web. For those in the know about the JS string builtins for Wasm: the defaults make it possible to import the JS string builtins when you do the Wasm import.
SYG: Just wanted to make sure: did you mean phase 3, as in Wasm phase 3?
LCA: Yes.
LCA: Okay, that’s the update. I don’t know if there are any questions to answer; otherwise, excited to hopefully get this done by one of the next meetings. I will not say the next meeting.
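The module/instance split that the proposal surfaces in syntax can be sketched with today's API (the bytes below are a hand-assembled trivial module exporting a function `answer` that returns 42; with source phase imports, the `WebAssembly.Module` would instead come from an `import source` declaration):

```javascript
// A minimal Wasm module: (module (func (export "answer") (result i32) i32.const 42))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,       // type section: () -> i32
  0x03, 0x02, 0x01, 0x00,                         // one function of that type
  0x07, 0x0a, 0x01, 0x06, 0x61, 0x6e, 0x73, 0x77, 0x65, 0x72, 0x00, 0x00, // export "answer"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x41, 0x2a, 0x0b, // body: i32.const 42; end
]);

// The "source" phase yields a compiled Module, not an instance...
const mod = new WebAssembly.Module(bytes);

// ...so the embedder can choose the imports at instantiation time,
// which is exactly what import attributes cannot express today.
const instance = new WebAssembly.Instance(mod, { /* imports would go here */ });
console.log(instance.exports.answer()); // 42
```

This is the two-phase shape the proposal exposes directly in module syntax.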
RPR: All right. Any more questions or comments for Luca? No? All right, thank you. We’re doing very well on time, so I think we’ve slightly reordered the agenda now to bring forward a larger item. Chris, are we supposed to be going to Ben? TCQ has not taken me there.
Presenter: Michael Ficarra (MF)
- no proposal
- slides
MF: So this proposal is an addition of a normative convention to how-we-work. This is a document that is non-binding, but that generally guides our design around various aspects of the language, based on what we have learned over many years. This particular addition is a continuation of a discussion that we had at the last plenary; I believe it was during joint iteration, when we were talking about iterating strings, and we seemed to have full agreement from everybody in the room that we just never want to accept string primitives in positions where we are expecting iterators or iterables. This codifies that and extends it to all primitives. Strings have a built-in Symbol.iterator, and someone could make Numbers iterable by adding Symbol.iterator to Number.prototype; in those positions where we expect an iterable, we will reject those primitives.
MF: I don’t know that I need to read out the full text of it. Basically, the first paragraph says what I just said, and the second paragraph gives some context: we now understand that strings should not have been iterable by default, because there are many different abstractions that a string can provide, and we don’t really know which one you actually want. That context is there so that people understand our reasoning. And we note that it is new, just like all of the other normative conventions. Happy to have any discussion or clarifying questions; we can go to the queue now.
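The ambiguity behind that second paragraph can be demonstrated directly: a single string has at least three plausible iteration granularities, and the default silently picks one of them.

```javascript
const s = "a👍b"; // the emoji is one code point encoded as two UTF-16 code units

// Granularity 1: code units (what .length and indexing see)
console.log(s.length); // 4

// Granularity 2: code points (what default String iteration yields)
console.log([...s]); // ["a", "👍", "b"]

// Granularity 3: grapheme clusters (what a user perceives; needs Intl.Segmenter)
const graphemes = Array.from(
  new Intl.Segmenter("en", { granularity: "grapheme" }).segment(s),
  (seg) => seg.segment
);
console.log(graphemes.length); // 3
```

Because any of the three could be what the caller intends, rejecting string primitives in iterable positions forces the caller to choose explicitly.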
SYG: I support making strings not iterable in new APIs, but we can’t take back that they are iterable. Any thoughts on education? Because there will be new APIs that don’t accept them, and from a less advanced programmer’s point of view it may seem like iterables are inconsistent.
MF: Yes, I mean, that is the problem with a lot of these normative conventions. Where we no longer do coercing in certain areas, it looks like the built-ins are inconsistent about whether you are supposed to pass an undefined value to things that accept objects or something like that. With all these normative conventions – we don’t document the kind of things that we already follow. We document things that are a break from the norm and I think that risk exists for all of them. Hopefully the community also uses these normative conventions as a guide so that the language going forward and the ecosystem going forward kind of align. But I don’t have a better answer than that for you. There will be inconsistency in the language for any of the things that we have in this document.
SYG: But it seems like the set of APIs that accept strings as iterables will no longer grow if we follow this. Can we at least enumerate the existing ones, and also send the memo out to other web APIs, HTML, and so on?
MF: Yeah, I could try to enumerate the set. It wouldn’t be just built-ins; it would also be syntax, like for-of, which will still accept a string. I can try to enumerate all of the positions in the language that would still accept primitives, and we could include that in the document as well if you would like.
SYG: Yeah, that would be good, thanks.
MF: Sure. I can update it like that.
DLM: We support this and are quite happy to see more design principles being made explicit.
MF: I didn’t quite hear the last part of it.
DLM: I know this is an ongoing process. We’re happy to see things being written down.
MF: Yeah, me too.
LCA: Just wanted to mention real quick something we were just dealing with in the web API world: Web IDL is getting a new async iterable type, and the thought we had was that it should not iterate strings, which aligns with this. Because it’s new, the web APIs using it will align with this convention, and Web IDL sequences already do not support strings.
MF: Then I would like to ask for consensus on this request, with the addition of an enumeration of the built-in positions that will still accept primitives being included in the document. I’ll have to get a review of that as well. I will do my best, but someone has to double-check that work.
RPR: Support from Shu. Are there any objections to going forward with this and adding the list of existing cases? No objections. You have consensus. MF, would you like to read out what you think are the key points that have been discussed here?
MF: Key points are the committee agrees that strings in particular and primitives in general should not be treated as iterables in APIs and other language constructs going forward.
RPR: Thank you. Would you like to ask for a review?
MF: Would somebody like to do a review of those positions? I’m pretty sure I can get them right. I'm pretty good at finding all occurrences of something in the spec at this point. But I would like somebody to do a review afterwards. So if we could appoint somebody and after that person approves it, we merge it, that would be great.
SYG: I will volunteer since I requested it.
MF: Okay, thank you.
RPR: Thank you Shu for volunteering. All right, I think that wraps us up, MF. Thank you. The speed run continues: we are going to move on to the item that we brought forward from Ben, which is “Normative: make DefaultNumberOption in ECMA-402 truncate options before validating range”.
- Consensus on the normative convention, but with all existing violations documented alongside.
Normative: Make DefaultNumberOption in ECMA-402 truncate options before validating range
Presenter: Ben Allen (BAN)
BAN: There’s a decision to be made between making a normative change to 402 so that it matches 262, or leaving them inconsistent. It’s a discussion that, how to put it, concerns things from well before I joined the committee, so my plan is largely to get out of the way of that discussion once it starts. While reviewing a PR in Temporal, we discovered that the way 402 handles range validation differs from how 262 does it.
BAN: This is a corner case of a corner case, because it involves people using fractional values where integers are expected. DefaultNumberOption in 402 takes the value, converts it to a Number, compares it to a pair of integer range bounds, and either returns the integer truncation or throws depending on whether it is in bounds. This is used in 402 to process options like minimumIntegerDigits and minimumFractionDigits, which are simply the number of digits to display in the formatted version of numbers. So, let’s see, I will flip back over to the non-formatted version for this, and I apologize for the indenting here. When DefaultNumberOption is checking whether the number of digits given is within these two integers, it converts the Number to a mathematical value, tests whether it’s in range, that is, whether it’s between these two integers, and then returns the mathematical floor of the mathematical value version of the number.
BAN: But there are several methods from 262 that do a similar thing, checking whether the number is within a (in this case hard-coded) range. They behave as follows. Again, I will flip over to the version that is hopefully more legible but might be otherwise zany in formatting. For example, Number.prototype.toFixed, though all of these methods behave the same way: it first makes a call to the ToIntegerOrInfinity AO, which converts the number to an integer, and then it checks whether it’s within range. So 402 validates whether it’s in range and then converts, and 262 does it the other way around. So here are some usage examples.
BAN: Again, this is something that I would say no one ever does, but there are more things done in JavaScript than are dreamt of in one’s philosophy. If someone converts a number to a fixed-point string and requests that negative 0.5 digits be displayed (I’m not quite sure what that means), the number of digits after the decimal point is treated as zero. Likewise, if you’re above the maximum, which is 100 digits, it first truncates to 100 and then displays 100 digits. 402 works the other way around. If someone specifies negative 0.5 fractional digits, it will call that out of range: the minimum is zero, negative 0.5 is out of range, we can’t do that, we throw. Likewise, if the requested number of digits is above the maximum by a fractional value, it says that value is out of range.
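The difference is directly observable (the 402 behaviour shown is the validate-then-truncate order in place at the time of this discussion):

```javascript
// ECMA-262: coerce first (ToIntegerOrInfinity truncates), then validate.
console.log((1.23456).toFixed(-0.5));     // "1": -0.5 truncates to 0, in range
console.log((1.5).toFixed(100.5).length); // 102: 100.5 truncates to 100 digits

// In-range fractional values behave the same under either order:
const nf = new Intl.NumberFormat("en", { minimumFractionDigits: 2.5 });
console.log(nf.resolvedOptions().minimumFractionDigits); // 2 (floored)

// ECMA-402 (validate-then-truncate): an edge value like -0.5 is checked
// against [0, 100] before truncation, so it throws:
// new Intl.NumberFormat("en", { minimumFractionDigits: -0.5 }); // RangeError
```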
BAN: Here are the relevant discussions; again, this was discovered by RGN from a PR in Temporal. So we have options. The PR that I have up changes 402’s behaviour to match 262’s: first truncate to an integer, then check whether the integer is in range. Another option is to make 262’s behaviour more strict, that is, have 262 behave like 402 currently does: if someone gives a value that is below the minimum or above the maximum by a fractional amount, say negative 0.5 or positive 100.5, reject it as out of range. 100.5 is more than 100, that’s out of range, we’re going to throw a RangeError. The final option is to leave them inconsistent, so that in this very odd edge case 402 and 262 behave differently. From our discussions in TG2, we have a mild preference for making ECMA-262’s behaviour more strict, so that a value out of range by a fraction is rejected with a RangeError. But as I understand from looking back over the notes, it’s a relatively mild preference. So actually our preference is to reject this PR and instead make a PR against 262. That said, this is a matter for discussion, so I am going to hand it over to the group; like I said, I’m not necessarily the best person to answer questions about this. Again: 402 currently validates and then truncates; 262 does it the other way around, truncates and then validates. I would like to open it up for discussion now.
MF: Okay, so I see at least two more options here: the first being that 402 can just reject non-integral inputs in the first place, and the second that 402 rejects non-integral inputs and we see if we can get the 262 APIs to also reject non-integral inputs. I think SYG will touch on this later, but I think option 2 and that latter one I said probably aren’t worth doing. I personally prefer the former alternative I just recommended: 402 rejecting non-integral inputs.
BAN: So just to make clear, that option is: 402 rejects non-integral inputs, and not necessarily 262.
MF: Yes, just because I think it’s going to be harder for 262.
WH: I originally wrote the toFixed, toPrecision, and toExponential spec text in 262. Back then the committee consensus was that operations were supposed to be as permissive as possible, which is why these accept fractions and truncate them. Nowadays I think we should just reject non-integral inputs, but trying to retrofit toFixed, toPrecision, and toExponential is not worth the effort.
SYG: I agree with both MF and WH here. Given that we already have consensus that future APIs should be less permissive, I think the right path forward, maybe not obviously to everyone, is to reject non-integral inputs in ranges that are supposed to be integral. toFixed and toPrecision are very old, so I feel more strongly than what I said on TCQ: it seems like it would be a lot of effort to figure out whether we can change them, and I’m not excited to find out. I don’t know who is signing up for that on the 402 side, if 402 wants to change this behaviour. But I feel the most fruitful path forward is for new APIs, including perhaps Temporal, to reject non-integral inputs, and to leave the existing things as is. To be clear, that’s a question to the Intl experts in the room: do you feel there is enough momentum to see if it’s possible to do this change?
BAN: Yes, I believe so; I’ll defer to others in TG2 if people disagree. Currently, this PR doesn’t reject non-integral values and instead tests the range after coercing. I don’t want to speak for other members of TG2, but I would be perfectly fine with investigating whether we can just reject non-integral inputs.
SYG: Specifically it’s an ask to the browsers, if they’re willing to check this.
WH: If we do accept fractional inputs and truncate, then I think the behaviour should be as it is for toFixed, toPrecision, and toExponential: do the coercion first (the coercion in this case being truncation) and then do the range check. But, notwithstanding that, my position is also that whenever possible we should not do coercions or truncations.
RGN: I agree with WH and thank him for the feedback. If implementers are willing, then I think there’s at least broad consensus that the best outcome is to be strict and not truncate at all. But even if we can’t get that, I would love to see alignment on truncation rather than this gratuitous discrepancy.
PFC: So SYG, you mentioned new APIs including Temporal. Generally we haven’t gone back and applied new design conventions to existing proposals, but I guess if we want to go back and apply the convention to Temporal, we should have consensus that there ought to be a normative change for that.
SYG: First, to respond to the previous point (I think it was RGN’s): I don’t think the browsers, well, I can only speak for V8 obviously, but I’m not going to sign up for the work of figuring out whether toFixed and toPrecision can change. So the inconsistency is here to stay for that part. For PFC’s question about Temporal: if we all agree this is the right design, and nobody has shipped Temporal or fully implemented it yet at this point, this is the best time to do it, right? So why not?
MF: I just want to give a counterexample to that. I know for at least RegExp.escape, we were paying attention to new normative conventions and we removed a coercion that was happening in the earlier form of the proposal. I don’t recall us ever discussing explicitly whether we would go through proposals and apply the conventions. I think we just address them as we see appropriate. So I would be in favour of applying normative conventions to Temporal as much as implementers are willing to.
KM: I guess on the similar comment, I don’t think WebKit or Safari has any automated way to see if these things would be compatible. I think it would be pretty hard. It seems like the work would be basically just trying to ship it and then most likely it would fail. It would be a lot of effort for something that probably doesn’t have a high success chance any way.
PFC: So I might be misremembering, but I think in at least one of the stop-coercing-things topics from previous meetings, we explicitly said it was a non-goal to go apply this to all of the in-flight proposals. I could be remembering that wrong, but that’s what stuck in my memory.
RGN: KM said that it would be a lot of effort for likely failure. I wanted clarification on whether that was regarding the ECMA-262 operations or the ECMA-402 ones.
KM: I was talking about the 262 version, yeah.
RGN: Okay. Do you share that opinion for the ECMA-402 operations?
KM: That I just don’t have enough knowledge of the usage of to know. So I guess my answer is just, yeah, I plead the fifth or whatever.
RGN: All right. So, an expectation that it won’t work for ECMA-262 but unknown for ECMA-402.
NRO: I support option 1 here. If we agree that being more strict is good, the value of being more strict and catching more user errors is much higher than the value of being consistent, which is also the reason we’re going with the normative conventions and trying to design better. So here, consistency alone is not a good enough reason.
RPR: There’s a +1 comment for NRO. And then down to RGN.
RGN: Just to get this in the record, one problem with current behaviour is that low-level operations tend to be used by new functionality, and that makes it a risk for spec authors to accidentally inherit behaviour that we really don’t approve of as a committee any longer. If a normative change is not possible, then we should—and I personally will—still pursue an editorial change to clearly mark the misbehaving operations as legacy. We have done that in a couple other cases already in ECMA-402. And it’s important to keep that in mind as we do look forward that even if we can’t fix the old contents, we can put changes in place to discourage new things from using them.
SYG: I think you answered my question.
WH: The previous comment was in regards to the 262 operations or the 402 operations?
RGN: I would advocate for both. But I have more direct control over ECMA-402 and so that’s the one that I would influence personally.
WH: I think that trying to put comments in toFixed, toPrecision, and toExponential would open up a Pandora’s box. There are many things in 262 which coerce, and I would not want to single out toFixed, toPrecision, and toExponential as being the bad guys.
RGN: Right. For clarity, the pattern that we follow in ECMA-402 is to rename the operation rather than adding a bunch of comments to arbitrary operations. So if you think about things in ECMA-262 like ToIntegerOrInfinity, there are similarly low-level operations in ECMA-402 like GetOption, and what we do with those that have behaviour we no longer advocate is rename them to something like “Legacy{VerbPhrase}”, and then we reference that legacy operation from all of the legacy APIs.
WH: What you’re suggesting is, hypothetically, renaming ToInteger to ToLegacyInteger or to ToIntegerLegacy or something like that?
RGN: I don’t know off the top of my head if that’s warranted for ToInteger, but if so, then yes. ToInteger would be renamed LegacyToInteger and we’d introduce a new ToInteger with the good behaviour.
WH: Yeah, this is veering off into a completely different subject area and I think that any proposal like that would need to have some significant committee discussion.
RGN: Sure. I can refer you to parts of the ECMA-402 spec if you want to see what it looks like in practice.
NRO: I like this approach if we decide that we don’t want the behaviour anymore, with the guide proposal out to make sure that everybody is following the behaviour.
SYG: To clarify, I agree with WH that we’re certainly not going to be adding comments to existing built-ins. The editorial thing is basically all in 402, because 402 already has the organization in the spec where basically everything calls out to the same options-bag-handling AOs, and that doesn’t really exist in 262. It may in the future for options bag handling, and we should get that right the first time, “right” meaning reject non-integral input for integral ranges. So the editorial thing doesn’t really apply to 262 today, in that the choice between coercion and no coercion is made at each call site and isn’t something we can fix with AOs, whereas in 402 things do bottom out in some option-handling AO, so it’s easier to do there by calling one of them legacy.
KG: This is in response to something PFC said earlier about whether it was a goal to go back and update existing proposals when we agreed to not do coercion going forward. It’s correct: when I presented, I asked the committee whether we wanted to go update existing proposals, and no one spoke in favour of it, so I said, great, we’ll leave things as they are. That said, we didn’t have an extensive discussion about it; it was a brief item at the end of my presentation. So if someone wants to push for that, go ahead and push for that. But the place we left it last time is that things already in Stage 3 we were going to leave alone. RegExp.escape was not in Stage 3, and for what isn’t in Stage 3 it makes sense to adopt these. Stage 3 proposals we don’t normally make as many changes to.
RPR: Thank you Kevin. So we’ve got about four minutes left on the time box, Ben.
BAN: Let me see if I can summarize. Currently there’s no support for making 402 behave in a less strict way than 262 does. Am I correct in there being a sense that TG2 should investigate whether it’s possible to reject non-integers altogether?
MF: WH and I had the preference that, if we cannot reject non-integers (if that way does not work for some reason), then it would be preferable to take option 1 that you have here over option 2. That would be making 402 less strict: doing the coercion first.
BAN: Okay.
RPR: I think WH is agreeing. WH are you agreeing?
WH: Yes. Also I think calling one of them stricter than the other is somewhat misleading here. It’s just a matter of whether you do the coercion or validation first.
BAN: All right. Okay. So since that seems to be the sense of the committee, I suppose I will ask for consensus on this PR to make 402, like 262, coerce and then validate.
SYG: Wait, sorry. Whether TG2 is interested in that change is a separate question from whether we can do it. KM spoke to this: he doesn’t have an intuition on how widespread usage of the 402 built-ins you’re trying to change is, and I also have no intuition at all. I don’t think we have use counters in Chrome for most built-ins. So I’m not comfortable giving consensus that we will do the work. I don’t know where that leaves you.
BAN: Okay. Then I suppose the consensus is that TG2 should investigate.
BAN: Recap of key points: the behaviour in 262 likely cannot change. For 402, and as I understand it for new proposals going forward, we should consider the more strict behaviour: validate and then coerce. If anyone wants to add to that list of key points, please do.
WH: This is backwards from what we just discussed.
BAN: Backwards in the sense there will be no changes to 262 and that 402 should consider the validate then coerce behaviour.
WH: No. Coerce and then validate behaviour.
BAN: Okay. Yeah, of course.
WH: For future things, we should not coerce.
BAN: All right.
WH: For future things, we should just reject fractions all together if they’re not meaningful.
BAN: So: 262 does not change. 402 investigates whether we should change it to match 262. And future proposals actually reject non-integers.
WH: Yes.
BAN: Okay, fantastic.
RPR: BAN, could you go to the notes now to update that, and then perhaps WH, you could be a reviewer on that? It will only take like five minutes.
NRO: I just had a clarifying question. Is it correct that 402 currently validates and then coerces, and we’re changing it to the other order, but then for the future we’re going to do something else entirely?
BAN: Right now 402 validates and then coerces. And for the future, simply reject all nonintegers.
NRO: Validate first and then reject.
BAN: So “reject” means: if someone gives a non-integer value, whether in range or out of range, it is rejected. “Validate then coerce” accepts fractional values whose integral truncation is in range; currently it rejects fractional values at the edges of the range and accepts the others.
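The three semantics under discussion can be sketched as follows (hypothetical helper names, for illustration, not spec AOs):

```javascript
// Current ECMA-402 order: validate the raw number, then floor.
function validateThenCoerce(v, min, max) {
  const n = Number(v);
  if (Number.isNaN(n) || n < min || n > max) throw new RangeError("out of range");
  return Math.floor(n);
}

// ECMA-262's toFixed-style order: truncate, then validate.
function coerceThenValidate(v, min, max) {
  const n = Math.trunc(Number(v)) || 0; // ToIntegerOrInfinity-ish: NaN/-0 -> 0
  if (n < min || n > max) throw new RangeError("out of range");
  return n;
}

// Proposed convention for new APIs: reject non-integers outright.
function rejectNonIntegral(v, min, max) {
  const n = Number(v);
  if (!Number.isInteger(n) || n < min || n > max) throw new RangeError("bad value");
  return n;
}

validateThenCoerce(2.5, 0, 100);  // 2 (in range, then floored)
coerceThenValidate(100.5, 0, 100); // 100 (truncated, then in range)
// validateThenCoerce(100.5, 0, 100); // RangeError (edge + fraction)
// rejectNonIntegral(2.5, 0, 100);    // RangeError (any non-integer)
```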
- Not worth the effort to change this behaviour in 262 - Value of checking for more user errors going forward is greater than value of consistency
- Future proposals should adopt new normative conventions
- Not necessary to change in-flight proposals (see Stop Coercing Things presentation from earlier meeting)
- ECMA-262 to remain unchanged
- TG2 to investigate whether it should keep the validate-then-coerce behaviour or adopt 262’s coerce-then-validate behaviour
- TG2 to investigate whether the relevant AOs can reject non-integral values
- Proposals going forward should reject non-integral values
Presenter: Jordan Harband (JHD)
- proposal
- no slides
JHD: You can scroll down a little bit. The current status of this proposal, the issues we discussed in a previous meeting have been incorporated. All the escaping that is expected is being done. The spec has sufficient sign off, Test262 tests exist and will be merged soon as the proposal reaches Stage 3 and I would like to ask for Stage 3.
RPR: All right. Are there any questions to JHD on this? MM support with plus one.
RPR: CDA has support. No need to speak. DLM has support and no need to speak.
JHD: Awesome.
RegExp.escape promoted with consensus to stage 3
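The motivation for RegExp.escape can be sketched with a hand-rolled escaper (illustrative only; this is not the proposal’s actual algorithm, which escapes additional characters, such as leading ASCII alphanumerics, to be safe in more syntactic contexts):

```javascript
// Naive escaper in the spirit of RegExp.escape: backslash-escape the
// characters that carry special meaning in regular expression syntax.
function escapeForRegExp(s) {
  return s.replace(/[\\^$.*+?()[\]{}|/-]/g, "\\$&");
}

const input = "1+1=2?";
new RegExp(escapeForRegExp(input)).test("fact: 1+1=2?"); // literal match: true
new RegExp(input).test("11=");  // unescaped: "+" and "?" act as quantifiers
```

Without escaping, user-supplied strings embedded in a RegExp can change the pattern’s meaning, which is the injection hazard the proposal addresses.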
Presenter: Nicolo Ribaudo (NRO)
NRO: So as you might remember, about a year ago we changed the keyword of the proposal from assert to with, because we had to change the semantics, and at the time we were not sure whether it was web compatible to remove assert or not. I’m happy to announce, and thanks a lot to the V8 team for working with me on this, that both Node and Chrome successfully unshipped the keyword, so we can remove it from the proposal. I will ask for consensus to remove the assert keyword; it’s this orange piece of spec text on screen. There’s also an equivalent piece of spec text, if you can go to 13.3.10: this other case checks whether the import options bag has the assert property. So do we have consensus for removing these two normative pieces from the proposal?
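For context, the syntax change looks like this (the module specifier below is hypothetical, for illustration):

```javascript
// Final syntax: import attributes use the `with` keyword.
//   import config from "./config.json" with { type: "json" };
//
// Legacy syntax, now removed from the proposal (previously shipped in
// Chrome and Node.js):
//   import config from "./config.json" assert { type: "json" };

// The dynamic form takes the attributes in an options bag:
async function loadJson(url) {
  const mod = await import(url, { with: { type: "json" } });
  return mod.default;
}
```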
RPR: On the queue, you have mark Miller with a plus one.
RPR: And DLM says thanks to everyone involved. Support. Anything more to say DLM?
DLM: No, that’s fine. Just appreciate the effort. It’s nice to see this. I appreciate the effort everyone made to make that possible.
RPR: WH with another +1. All right. This is sounding positive. Let’s just do one last call, are there any objections to dropping assert from import attributes? No objections. Congratulations. You have consensus.
NRO: Thanks everybody. Thanks again to Shu for working with me removing from Chrome. Just for planning I plan to bring the proposal to Stage 4 for next plenary and shift in Safari in Chrome with the new spec text. Thanks everybody.
RPR: Wow, thank you. Amazing. Really good collaboration and working our way through the journey. So we have six minutes remaining. Do we have any other tiny things to bring in.
When we changed the keyword from assert to with, we didn't know if it was web compatible to remove assert because it was shipped in Node.js and Chrome, so we left it as a "deprecated syntax". Both Chrome and Node.js have now successfully unshipped assert, so we can remove it from the proposal.
The proposal will be presented for Stage 4 at the next plenary.
- The committee has consensus for removing assert from the proposal
Presenter: Shu-yu Guo (SYG)
- proposal
- no slides
CDA: Time to start after the lunch break. Two more people have joined. Let’s repeat the call for notetakers.
USA: Let’s start with the rest of the agenda. First up, we have Atomics.pause for Stage 3. Are you ready?
SYG: Yes. Let me share the thing, one second. As I said before the lunch break, there are no normative changes to the proposal since last time. As a quick recap, it is a new method on the Atomics object that has no observable behaviour, but gives the implementation a chance to emit a pause instruction, which is used in spin loops such as inside mutex implementations. It is currently Stage 2.7, and Test262 tests have landed. With that, let’s go to the queue: before the lunch break, WH had an item that said “bad spec text”, and I’d like to hear more about that.
WH: At the last meeting, when advancing to Stage 2.7, we agreed to a couple of changes. One of them was ±0, which was fixed. The other is that the spec text presented today has the reverse of the meaning of what we had agreed to. The relationship between the iteration number and how long the thing waits was supposed to be linear. Instead we got Note 3, which makes no sense and contradicts the rest of the spec.
SYG: What is supposed to be linear?
WH: The relationship between iteration number and how long the thing waits.
SYG: I don’t recall there being consensus that the time Atomics.pause would wait for an iteration number was linear. It’s probably clearer to show the example; I don’t have an example written, I thought I did, apologies. The idea is that you pause for some bounded number of times. The iteration number is just a simple for-loop counter that bounds the number of times you would pause. The longest amount of time you would wait is for iteration zero, and then subsequent iterations would wait for shorter and shorter times. Usually this would be exponential backoff, but you could have other kinds of backoff.
WH: I don’t understand what you’re saying. This is the same problem that we run into at the previous meeting, in that we were talking past each other. Please be very specific.
SYG: Very specifically, when you call Atomics.pause with iteration number equals zero, suppose that waits some amount of time. If you call then Atomics.pause(1) that would wait a shorter or equal amount of time than zero and so on and so forth.
WH: What? Atomics.pause(1000) waits less than Atomics.pause(5)?
SYG: What was the first thing you said?
WH: Atomics.pause(1000) waits much less than Atomics.pause(5)?
SYG: That’s correct. That’s how it’s always been in the proposal. The iteration number is not a hint for how long Atomics.pause waits; it’s which iteration of your spin loop you are in, how many spins you have done. There’s usually exponential backoff: as you spin longer, you wait for shorter and shorter times, so you wait the most at the beginning. If you look at the example (I will switch tabs for a second; can you see this? I don’t need to switch). Can you see this example?
WH: Yes. We can see it.
SYG: This is a pseudocode version of how the fast path of mutexes is usually written with a spin lock. They spin for a bounded amount of time: each time they go through this outer do-while loop, they spin a little bit before going back to try to acquire the lock again. The best practice, as far as I can tell, is that the initial iteration spins for the longest amount of time, and the wait then backs off (gets shorter) on each iteration until you exhaust the spin count, at which point you go to the slow path and put the thread to sleep. And this API is designed so you pass this counter of spins directly to Atomics.pause. It is not a hint for how long to wait; it is the iteration number of your spin loop. That’s why it’s called iterationNumber.
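The pattern SYG describes can be sketched roughly like this (illustrative only, not any engine’s actual mutex; the lock state is a plain object standing in for shared memory, and the pause call is guarded so the sketch also runs on engines without Atomics.pause):

```javascript
// Prefer the real hint when available; otherwise a no-op.
const pause =
  typeof Atomics !== "undefined" && typeof Atomics.pause === "function"
    ? (n) => Atomics.pause(n)
    : () => {};

// Stand-in for an atomic compare-exchange on shared memory.
function tryAcquire(state) {
  if (state.locked) return false;
  state.locked = true;
  return true;
}

function spinLock(state, spinLimit = 64) {
  for (let i = 0; i < spinLimit; i++) {
    if (tryAcquire(state)) return true; // fast path succeeded
    pause(i); // i is the spin-loop iteration count, NOT a wait duration;
              // implementations may wait less as i grows (backoff)
  }
  return false; // caller falls back to the slow path and parks the thread
}
```

The key point of the discussion: the argument tells the implementation how far into the spin loop you are, so it can choose the pause duration; it does not request a particular duration.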
WH: So that is entirely unclear from the way it’s written now.
SYG: Is it? Note 3 basically says that pause(n) waits at most as long as pause(n+1), and the n+1 wait is equal or shorter. Why is this unclear?
KM: I have a question that might help with this misunderstanding. I guess it’s not super obvious why you would want to wait the longest on the first iteration. Sorry to interrupt; I’m not sure if that’s okay.
SYG: That’s fine. I think it’s because… I’m not really sure. I had figured it was due to empirical evidence. Every spin lock I have gotten my hands on to read, including Safari’s parking lot and the spin lock in the allocator inside Chrome, has exponential backoff: when the lock is contended and you want to spin for a little bit to try to acquire it, you wait the longest in the beginning and then wait progressively shorter. I figured people knowing how architectures work had divined this somehow, or figured it out with empirical evidence, and I was copying that as best practice.
WH: For spin locks, you pause for the shortest amount of time before checking again and then you progressively do longer and longer pauses.
SYG: That is the opposite of every implementation of the fast path of a mutex that I have read.
WH: That’s how exponential backoff works. I think you’re looking at this as spinning and I’m looking at this as pausing.
SYG: Okay. Can this be resolved by looking at an actual implementation to see what is actually done?
WH: Okay. The issue right now is the spec is not intelligible. We’re both looking at it and seeing different meanings.
SYG: It seems like we disagree on whether the first – whether N equals zero should wait the longest or the shortest. My intention was that it waits the longest. But your contention is it should wait the shortest? That is a different issue than the spec being unintelligible that I think this says that zero is the longest as well as step 2 here that says zero is the longest. We may disagree on what it should do, but I’m not seeing why this is confusing Editorially.
JRL: This is on the same topic. I don’t understand what “signal” means, but Note 3 is incorrect; it is the opposite of what you intend.
SYG: How is it the opposite of what I intend?
JRL: “Atomics.pause(n) should wait at most as long as pause(n+1)” means that n’s wait is less than n+1’s.
KM: I think that’s what he intends. But that is unintuitive. May not be what you expect.
JRL: He was saying the semantics you want is that n waits longer than n+1: the larger the iteration number, the shorter it waits. That note is incorrect given that semantic assumption.
SYG: Can you say why? “Waits at most as long as n+1.”
JRL: It means that n+1 is larger, because n can be smaller. “At most as long” means n can be smaller than n+1, which is the opposite of the semantics you just told us.
SYG: That is not my reading, but I can see this sentence is confusing. I would reword it to say pause(n) should wait longer –
JRL: More than N +1.
SYG: Wait longer than, yeah.
JRL: I still don’t like this semantics. I agree with WH that the semantics are incorrect. The note is confusing to me. I have no idea what signal sent means in the spec text.
SYG: Okay. This sentence has no teeth. There is no observable behaviour here. So the signal sent basically – (?) and I wordsmithed a bit in the editor calls and basically says if your CPU has a pause instruction, you can execute the pause instruction. And after wordsmithing it, we decided on support a signal to be more I guess more generic than saying something directly like your CPU has a pause instruction. The signal here means execute the pause instruction. You may want to execute the pause instruction more than once, which again is unobservable because it’s just timing. That’s what “signal sent” means. Are you saying you would prefer more direct wording like execute a pause instruction or something similar, rather than something abstract in case some underlying architecture is – there’s no instruction but some other way to signal that the code is in a spin wait loop?
JRL: I don’t understand what signal is sent means unless I know the implementation that it’s being sent to. I would like it to be worded in a way where the exact behaviour that you want is reflected here. And that could be by taking Note 3 and rewording it honestly, but then having Note 3 be spec text, saying implementation-defined N waits less time than N +1.
SYG: Very well. I see that editorially this is less confusing.
USA: In the queue we have KKL.
KKL: I feel like I might be able to help us get speaking on the same page. I don’t have an iron in this particular fire, so I’m just going to attempt to help connect some dots. I think that perhaps the point of confusion is that the word “backoff” leads to an opposite conclusion if you’re coming from an understanding of exponential backoff and retry loops in a fault-tolerant system. In exponential backoff loops, each subsequent iteration backs off for a longer period in general, whereas this seems to be the opposite: you’re making the assumption that if you’ve already spun a long time, maybe you should spin for a shorter amount of time, so you’re converging on spinning the least amount of time while waiting to acquire the contended lock. And the theory of this is that you should spend less time the closer you get to the point where you might win the lock. Does that sound correct to you, SYG?
SYG: That sounds reasonable. Again, I don’t really know why every lock I have read works this way, but I can imagine that explanation. If I can try to reword it, it’s basically: we know this lock is contended. We want to try for a bounded amount of time, without putting the thread to sleep, to reacquire the lock, because that would result in higher throughput. So we spin the CPU, and we know it’s contended at the very beginning, so we wait for some period of time. But if it remains contended – I think the assumption is that the longer it remains contended, the higher the likelihood it will remain contended to the point where the thread should go to sleep. So because of that assumption, you spin for fewer and fewer cycles each time through this retry loop, until you spin for so little and still can’t acquire the lock, and then you go to sleep, because that’s the slow path. So in that kind of interpretation, yes, I think your explanation makes sense.
KKL: So I propose that something that would help editorially is not using the word “back off”, and maybe there’s another term in literature that’s more close to the meaning.
SYG: My thinking right now is I wouldn’t talk about implementation strategies at all. I would basically only talk about the expected use, which is that you should spin for a bounded amount of time and pass the loop counter to this argument. And then the amount of time paused for smaller loop counters is longer than for larger loop counters. Does that seem clear?
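The expected use SYG describes can be sketched roughly as follows. This is a hypothetical spin-lock fast path: the spin count, lock layout, and helper names are illustrative, and Atomics.pause is guarded because it may not be available in a given engine.

```javascript
// Fall back to a no-op where Atomics.pause is not (yet) available.
const pause =
  typeof Atomics.pause === "function" ? Atomics.pause : () => {};

function tryAcquire(i32, index) {
  // Fast path: attempt to take the lock (0 = unlocked, 1 = locked).
  return Atomics.compareExchange(i32, index, 0, 1) === 0;
}

function acquireWithSpin(i32, index, spinCount = 16) {
  for (let i = 0; i < spinCount; i++) {
    if (tryAcquire(i32, index)) return true;
    // Pass the loop counter as the hint: per SYG's stated intent,
    // smaller i pauses longer and larger i pauses for fewer cycles.
    pause(i);
  }
  return false; // caller would fall back to the slow path (e.g. Atomics.wait)
}

const i32 = new Int32Array(new SharedArrayBuffer(4));
console.log(acquireWithSpin(i32, 0)); // lock is free, so this succeeds
```

A real implementation would follow the failed spin loop with a blocking wait; the sketch only shows the bounded-spin portion under discussion.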
USA: We are at time for this. Shu, do you think there’s a way out?
SYG: I would like to request – let’s say I will be conservative – a ten-minute extension to talk through the editorial change that I just proposed with WH, to see if that’s clear.
USA: We have five minutes overflow. Can you do with five?
SYG: Sure, I can try.
SYG: Okay. Thank you, KKL. WH, did what I just said sound reasonable?
WH: Offhand, no. I think there is a significant issue that we need to discuss, and it’s clear to me that we did not achieve consensus. We thought that we achieved consensus at the last meeting, but we were unknowingly talking past each other.
SYG: Okay. So I’m not sure what the disagreement is. The disagreement is that you think it’s incorrect design that a smaller iteration number pauses for a longer amount of time than a larger iteration number?
WH: I was under the impression from the way it was written is that iteration number determines how long the thing pauses.
SYG: Right. It is the opposite. It does determine how long it pauses, but in the opposite way than you expected.
WH: Yes. So if that’s the behavior you want, then what you should do is: if you want to pause less and less, you should start with a higher Atomics.pause argument and decrease it over time.
SYG: I feel like that is less ergonomic from the implementation point of view, where you have a simple for loop with a spin count: i <= spinCount, i++, and you pass i. In your preference, you would have to do manual subtraction to pass the iteration number.
WH: The issue is that there are different kinds of backoffs you can do. You can do exponential backoffs and quadratic backoffs and linear backoffs. As of now, there’s no way to express —
SYG: That is by design.
WH: If the argument Atomics.pause directly controlled how long the thing waits, you could do any of those algorithms however you like.
SYG: It’s a hint, first of all, it is not a direct control. The intention was for the implementation to choose the best back off strategy given the execution tier that the JavaScript code is currently running in. If this call were to be inlined in the highest tier, then the number of pauses would be different than if you were running unoptimized JavaScript code because for example the JS call overhead is so high, you don’t want to execute multiple pause instructions. Pure implementation hint.
WH: Yeah, the linear factors, but I’m not ok with —
USA: Can we continue with the queue?
USA: KM is next.
KM: I guess I will make a comment on that point before I go to my comment. You could just run the loop backwards: set i equal to the iteration upper bound and subtract on each iteration of the loop, if you wanted to have Waldemar’s suggestion, so it’s not less ergonomic. And as far as I can tell reading our code base, it is the same amount: we do the same pause on every iteration.
SYG: Interesting, okay.
GCL: I just want to say there are a lot of different implementations of spin locks, and I know several, including the Linux kernel, where the delay is shorter or longer depending on different signals as the lock is being acquired. I think it would be best if we just don’t specify how the delay relates to the iteration count.
SYG: I mean, I will respond to that real quickly. Recall that this note basically has no normative force because it can’t be observed, and it’s kind of a compromise that people last time wanted a step to say something, because originally step 2 didn’t exist. But okay let’s try to drain the queue as quickly as possible.
LCA: I also don’t want to add more fuel to the fire. But I just looked through two different repos that both implement spin locks, and both of them do the opposite behaviour, where the wait time increases. I agree with you that we should probably not specify any specific behaviour here and let the implementation decide.
RBN: So just to clarify a couple things. This is very close to, or in agreement with, what KKL was saying as well. But in most implementations of spin locks and user-accessible spin-waiting APIs that are high-level like this one, the design is to avoid the CPU spinning in a tight loop on a single thread, and to avoid contention, because you have multiple threads trying to compete for a resource. So the counter specified or passed to pause is not a backoff; it is to avoid contention. How that generally works is that it’s used as a hint to make certain decisions. Sometimes it will not pause – this all purely depends on implementations. Sometimes it won’t do anything and will be a no-op, and sometimes it will pause, and oftentimes it’s based on how many times the iteration has been seen and how close you are to the context switch or kernel transition, which is expensive. As a result, early on – like, if you’ve already done some expensive work and just entered into the context – you might see shorter or longer wait times. It’s all CPU-dependent: what is most optimal for the CPU architecture determines whether the pause is efficient. As you approach a context switch, you want to check more frequently, to avoid the context switch, because it is expensive. Which is kind of why the spec text is written as it is right now: a higher iteration number doesn’t mean you want to wait a certain amount of time, it means that you are attempting to pause more frequently. So the higher the number gets, the more frequently you are trying to check for contention, which means you want shorter wait times: we’re approaching a context switch, and we want to return faster and faster to see if we can get that lock. Once you get to the context switch, then all bets are off.
You can reset everything you’re concerned about with the count, to have a slower backoff, because now it doesn’t really matter until you start approaching the context switch / kernel transition, with priority on the thread again. So “backoff” is a bad term and we should probably just strike it from the spec. And in a way, we probably should replace the entire spec text of step 2: it’s just a runtime-specific optimization, purely based on CPU architecture, and we should let everybody fill it in, because it will always be dependent on CPU architecture, the specific implementation, and the specific tuning concerns of the person writing the code at the very core level within the implementation; the spec text and what it says isn’t going to matter because of that. All that matters is that the iteration number may not have much to do with how much time you’re waiting, and there is no guarantee it will.
USA: So we’re at time. SYG, do you think that there’s a way to end this?
SYG: I don’t know how to make progress from here because we’re debating something that has no observable behaviour. Like, my current thinking is that I will come back at some point – like, at the next meeting – and, doing exactly what RBN said, remove Note 3 and replace basically these last two sentences with something that says the iteration number may be used by the ECMAScript implementation to determine the amount of time that is paused. But the paused time here is unobservable, and we’re talking about on the order of at most hundreds of CPU cycles, which is a very, very short amount of wall-clock time. I see Waldemar has something on the queue about divergent behaviours. I don’t think you can check that level of divergence given how coarse the actual timers are. Every call to Atomics.pause will look like an immediate return. So I will come back next meeting with the editorial change. But I want to be clear that there’s no actual normative behaviour that we’re discussing. Is there any disagreement on that? If we have disagreement on that, there’s a deeper issue. Okay, WH disagrees. Can you please engage on the repo by filing an issue. I would like to get ahead of this before the next meeting.
USA: Thank you, Shu. So I think we could capture the queue from today and come back to this.
DE: Can we agree, as part of the conclusion, that people who disagree with calling the behavior "implementation-defined" will engage with SYG offline before the next meeting? e.g., WH, MLS.
USA: Shu, did you agree with the conclusion?
SYG: Yes, I agree with the conclusion.
SYG: More a question to address to Michael and WH?
USA: Would you like to speak to that?
WH: Also I think it’s clear that we were talking past each other at the last meeting and did not achieve consensus at the last meeting.
DE: Do you agree to engage between this meeting and next meeting off line with SYG?
WH: Yes. I also wanted it recorded in the notes that the consensus we achieved at the last meeting was an illusion.
DE: All right. MLS, you have a comment in the queue also. Are you able to commit to engaging between the meetings?
MLS: Well, I agree with WH. Yeah, our team can engage. If there’s no normative text for the argument, we’ll probably ignore it; we don’t know what to do with it.
USA: All right. Thanks, everyone. Let’s move on to the next item for today, which is concurrency control. MF, are you ready?
MF: Yes, I can do that.
USA: Just before we move on, Shu, would you like to state some of the key points of the discussion to wrap up the conclusion.
SYG: Key points are that there is confusion about the iteration number argument to the Atomics.pause method, and that this needs to be resolved before the next meeting, prior to stage advancement.
USA: Thank you, Shu.
WH: I think it’s more than that. I don’t think the stage advancement at the last meeting was valid, since we didn’t agree what the behavior actually is.
USA: All right. I believe that has been recorded. MF, you may start.
There is confusion about the iteration number argument to the Atomics.pause method. This needs to be resolved before the next meeting, prior to stage advancement.
MF: Can I get confirmation that LCA is here.
LCA: I’m here.
MF: I wanted to make sure you’re around in case you wanted to add anything. This is a new proposal and we’re looking for Stage 1: Concurrency Control, championed by both LCA and myself. There’s a lot of background to this proposal. A long time ago, we had a very large iterator helpers proposal. We were able to make progress on that by cutting it down to what we called the MVP: the essential methods, mostly mirroring the methods that were available on Array.prototype. It went through Stage 3, including async variants of those MVP methods. At the last minute – I think the meeting following Stage 3 advancement – we realized that our strategy for async iterator helpers was not ideal, and we wanted to revise it. At the time, we wanted the understanding of async iterator helpers to be how you would naively write each helper using an async generator. That has some benefits; that is why we had that strategy at the time. But it also has the drawback of limiting the concurrency of the underlying iterators that you’re applying transforms to, because of the queuing behaviour that is present in async generators. So we pulled async iterator helpers out of iterator helpers to resolve that and allow transforms to preserve the concurrency support of the iterators you’re applying those transforms to. We’ve accomplished that with the async iterator helpers: if you have two outstanding next() calls whose returned promises have not resolved, you will have two outstanding next() calls on the underlying iterator. When you map with concurrency, you retain concurrency. But there’s no way at the moment, as part of that proposal, to concurrently drive async iterators. That’s where this proposal comes in. So while async iterator helpers was split out to add support for preserving already-supported concurrency, this proposal is split out from async iterator helpers in order to support actually driving iterators concurrently. So that’s where we are.
MF: So there are three components of this proposal. The first is this governor protocol which we’ll get into in a moment. The next is what we’re calling Semaphores that are just a counter that implements the governor protocol. And then the third is integration back into the async iterator prototype methods to accomplish this goal we have of driving async iterators concurrently.
MF: So on each of these slides, I will have a little section that marks off what is nonessential. This proposal I acknowledge going for Stage 1 is slightly overworked, but I feel like it was necessary to show the full vision of the proposal. So, you know, a lot of detail exists here and also nonessential components exist here, but know that that is all still open to change. The parts that are essential are not highlighted.
MF: So the governor protocol is very simple. It just gives you an acquire method that returns a promise of these things that we’re calling governor tokens and the governor tokens have a method called release that is compatible with the explicit resource management proposal to be disposable. The way that you interact with the governor is you ask to acquire a token. When you eventually do, you later release it. That’s how you manage a limited resource.
MF: So semaphores, as I said before, they implement the governor interface and they just act as simple counters. So you initialize them with the capacity and then they will hand out — only that number of tokens at the same time may be valid. So if you initialize with the capacity of five and you acquire five different tokens, if you attempt to acquire a sixth, that promise will not resolve until one of those first five has been released or used with the explicit resource management proposal.
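The semaphore behaviour MF describes can be sketched like this. The class name and the acquire()-returns-a-token-with-release() shape come from the presentation; everything else (the waiter queue, the Symbol.dispose fallback for engines without explicit resource management) is illustrative.

```javascript
// Minimal sketch of a Semaphore implementing the proposed governor
// protocol. Details are illustrative, not the proposal's spec text.
class Semaphore {
  #capacity;
  #waiters = [];

  constructor(capacity) {
    this.#capacity = capacity;
  }

  async acquire() {
    if (this.#capacity > 0) {
      this.#capacity--;
    } else {
      // No capacity left: wait until some holder releases.
      await new Promise((resolve) => this.#waiters.push(resolve));
    }
    let released = false;
    const release = () => {
      if (released) return; // releasing twice is a no-op
      released = true;
      const next = this.#waiters.shift();
      if (next) next(); // hand the slot directly to the next waiter
      else this.#capacity++;
    };
    return {
      release,
      // Lets `using token = await sem.acquire()` work with explicit
      // resource management, where Symbol.dispose is supported.
      [Symbol.dispose ?? Symbol.for("Symbol.dispose")]: release,
    };
  }
}
```

With a capacity of five, a sixth acquire() returns a promise that stays pending until one of the first five tokens is released, matching the behaviour described above.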
MF: I didn’t even go over the nonessential parts of the first component, so let me talk about them just real quick. Yes, we have this protocol which hands out these tokens; that is necessary. But we could also have a class called Governor that is abstract, because it throws when it’s constructed directly, but things can extend it: Semaphore might be a thing that extends it, and other complex governors could extend it, and with that you get the helper methods. The helper methods, I will go over briefly. with is a way to execute a function only if there is capacity in the governor at the moment. wrap is a way to, you know, do that repeatedly, so you get back a function that does that. And then wrapIterator is a way to do that, but for iterating instead of for function calling.
LCA: Just want to clarify that it’s not if – like, can you go back to the previous slide. With will not – it’s not conditionally calling the function based on whether you currently have capacity. But it will wait until you have capacity to call the current function.
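LCA's clarification can be sketched as follows. The proposal only names the wrap helper; the body here is a hypothetical shape, and the governor interface it leans on (acquire() resolving to a token with release()) is taken from the protocol slide.

```javascript
// Hypothetical sketch of the `wrap` helper: the wrapped function
// waits for capacity rather than being called conditionally.
function wrap(governor, fn) {
  return async (...args) => {
    const token = await governor.acquire(); // wait until there is capacity
    try {
      return await fn(...args);
    } finally {
      token.release(); // free the slot even if fn throws
    }
  };
}
```

The `with` helper would be the same pattern for a single call, and `wrapIterator` the same pattern applied around an iterator's next() method.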
MF: Sorry, yes. That’s correct. So that brings me back to here. Semaphore would extend Governor if there is such a class. It would also be nice for it to be shareable across threads, so you can pass it to workers and share a concurrent resource. And then finally, part 3 is the integration back into the async iterator prototype. This has a couple of parts to it. If you recall, in async iterator helpers there is a buffered helper that is added as part of that proposal – remember, that’s a Stage 2 proposal – and it takes a limit, which is an integer, and possibly some way to tell it to pre-populate; we haven’t finalized that. We would extend the first parameter to accept an integer or something that supports the governor interface. Optionally, we add a new helper called limit. Now that we have this governor capability, it would be useful to be able to limit concurrency. That means if you pass a limit of three and then try to next the resulting iterator five times concurrently, there will be only three outstanding promises for the underlying iterator; it will not further try to next the underlying iterator until one of those has resolved. But that is not essential; that can be done as a follow-up, though it’s pretty obvious given this new capability. And then finally, for any of the consuming methods on the async iterator prototype, we would add a parameter which is a governor or integer, so it matches the first parameter of buffered, and controls how concurrently they drive their underlying iterator. That’s toArray, forEach, some, every, find, and reduce: the consuming methods that are in async iterator helpers, on AsyncIterator.prototype.
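As a rough stand-in for what passing a concurrency limit to a consuming method would do, here is a free-function sketch. The real proposal would put this on AsyncIterator.prototype and also accept a governor; this shows only the integer case, and it collects results in completion order rather than input order.

```javascript
// Drive an async iterator with at most `limit` outstanding next()
// calls. Illustrative only; not the proposal's actual algorithm.
async function toArrayConcurrent(iterator, limit) {
  const results = [];
  let done = false;

  async function worker() {
    while (!done) {
      const r = await iterator.next(); // up to `limit` calls in flight
      if (r.done) {
        done = true;
        return;
      }
      results.push(await r.value);
    }
  }

  // Spawn `limit` workers that pull from the same iterator concurrently.
  await Promise.all(Array.from({ length: limit }, worker));
  return results;
}
```

Note this only works as intended on iterators that tolerate concurrent next() calls, which is exactly the property the async iterator helpers split was meant to preserve.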
MF: So this part deserves some explanation. Why do we have this really general concept of a governor? Why is a protocol necessary here; why not just integers? Why not always pass three? Well, you can. In any position where we accept governors, we allow an integer to be shorthand for just that amount of maximum concurrency, very similar to a semaphore used in that position. Sometimes you have a resource that you want to share among non-coordinating consumers. Let’s say you want to make fetches: you have two different things that will be independently making their fetches, and you want no more than, say, five fetches going on simultaneously. But it might be all five for consumer A and zero for consumer B, or split three and two; it just changes over time. That’s why you would use a semaphore: you create a semaphore that controls access to the resource, and then they have shared concurrency. It’s still simple shared concurrency around a counter, with a capacity of five, fixed over time. We generalize further to a governor when we want that concurrency to be more complex – when it might change over time. We might have exponential backoff that we want to implement in the governor. We might have resource limits that are dependent on configuration that may change. All kinds of things. That allows you to implement any kind of governor, with as complex logic as you want, for determining how concurrent access to a resource may be, in a way that may change over time. So that’s why we see the need to generalize in these three steps.
MF: So I am asking for Stage 1 for this proposal today. I want to make very clear what Stage 1 means and what I’m asking for. So I have four points. First is the consumer methods of async iterator prototype will have a way to drive the underlying iterator concurrently. We saw that we accomplished that in the concrete proposal using a new parameter on these six methods on AsyncIterator.prototype. We want a simple way for the common case. We do acknowledge the common case is we just want like one method to independently manage its concurrency independent of what anyone else is doing and in this proposal we solved that by just allowing integers to be passed and it’s a really simple way to ignore the complexity of governors or even of semaphores if you don’t care, which is probably a lot of cases. Point 3, we want to be able to split that concurrency efficiently over non-coordinating consumers. That’s why we introduced semaphores. And point 4, we also want to support concurrency for resources that change in capacity over time or change in capacity based on other external factors that may not just be time. And that’s why we have the governor protocol that allows you to implement your own governors or possibly built in governors in the future that are more complicated than Semaphore. So I would like to ask for Stage 1 today. And at this point, break to take any questions and discuss any Stage 1 concerns. I do have more slides later that we can talk about in the post-Stage 1 concerns and even if we achieve Stage 1, I would like to go through those. But I would like to get all of that Stage 1 discussion out of the way so that we don’t risk going over time on that.
USA: MF, would you like to pick from the queue. There is a queue.
MF: You can manage the queue, that’s fine.
USA: On the queue first we have JHD.
JHD: All right. So I have a number of items of feedback here. My queue item topic is one of them. A number of them are definitely post-Stage 1 concerns, so I can defer them to later in the agenda item. Maybe I missed it: did you provide a concrete example for number 4 on the slide a slide or two ago, resources of changing capacity over time?
MF: I provided one verbally.
JHD: I must have missed it. Can you repeat it?
MF: So the example I gave for governor that – what example did I give?
LCA: One example is a governor that is backed not by something that is local to the system in memory, but backed by a distributed concurrency control mechanism: you have a database that stores an integer that increments, and you want to share this governor across multiple different machines. JavaScript doesn’t open databases for you, so you have to implement this yourself, and that’s what the governor protocol allows you to do.
JHD: Okay. If you were talking to an API and it had knowledge that the local machine didn’t, or something?
JHD: Get back to me when we get to the post-Stage 1 stuff.
USA: Okay. Then we have a reply by KG.
KG: Just briefly. Lots of APIs have a thing to do a hundred requests in the first minute and 50 requests per minute thereafter. That’s a very common thing for APIs.
RBN: So, very briefly, I was able to go over some of this proposal. I in a way kind of correlated how I look at things with .NET, with the task parallel library, which uses something along the lines of what a governor does to perform things like partitioning, to do chunking, and to kind of organize how you want to handle your concurrency across various things. I think that makes sense. I do have a few concerns over names. I know I reached out to you, MF, prior to the meeting to set something up; I hadn’t had time to set something up, but I’ll try to do it in the next couple of weeks. I have concerns with a name like semaphore. It’s very heavily used for multi-threading concurrency and coordination primitives. And I want to make sure, depending on how this is structured, that using something like semaphore doesn’t preclude something like the shared structs proposal and multi-threading work with mutexes and semaphores as a coordination mechanism for shared-memory multi-threading, and make sure we’re not stepping on each other’s toes with those designs. That’s my only real concern.
MF: I have some ideas for alternative names that we can get to here.
RBN: Again, those are Stage 2 concerns. That wouldn’t be blocking for me.
KKL: First, I’m very much in support of investigating concurrency control in Stage 1. You have my support there. I want to pile on with reservations about the name semaphore, but for different reasons than I expect most folks on this call have. In particular, governor and semaphore are both steam-engine metaphors, but in the steam-engine metaphor, a governor is not a kind of semaphore and a semaphore is not a kind of governor. But governor is a very, very good term for what is being proposed here: a specific kind of control loop. As long as we’re doing control loops, you might want to look at whether this protocol would allow you to implement something like AIMD, the additive-increase/multiplicative-decrease protocol that TCP uses, and while you’re at it, whether the error channel is observable, because that is an input for that particular class of algorithm. I think this might be sufficient for controlled delay as it’s written. But totally worth trying out. And apart from that, that’s me.
MF: Okay. I would love for you to open issues on those so I can learn more about them.
KKL: Love to, thanks.
MM: So, echoing the concern with the name semaphore, but it’s more than that. I’m concerned with entangling this with multi-threading, even to the extent of trying to make it shareable between threads and trying to anticipate what the semantics should be when shared between threads. And that goes even beyond whether it’s named semaphore or not. As used within a single agent, this is non-blocking. I assume that the anticipated semantics, even when shared between agents, is that for each agent it is still non-blocking. And that just violates all of the normal understanding of what a multi-threading semaphore is. So I support Stage 1, but with reservations about everything and anything that you do that touches multi-threading.
USA: KG has a reply to that.
KG: Off of the main thread, it can be blocking. I mean, it would be like a different function that you couldn’t call in the main thread with atomics but there’s no reason not to have a blocking one off the main thread.
MM: I would certainly object to introducing any more blocking operations into the language.
KG: Outside of the main thread? Like we have lots of those. That’s like a totally normal thing we do now.
MM: Give me an example of another blocking operation in the language.
KG: So the language itself contains almost no such things, but every host has these.
?: Not talking about –
KG: Most of the common hosts.
?: I don’t think that’s a useful distinction but we don’t need to get into that.
USA: Should we move ahead?
USA: Next we have PCO on the queue.
PCO: Okay. We discussed this, and I was of the position that giving this much control over something that JavaScript developers generally don’t have to think about is kind of un-JavaScript-like. I think it would be good to explore simpler designs during Stage 1. Like, we had the slide with an integer for simple limitation, semaphore for a shared resource, governor for more advanced use cases: how much of what JavaScript developers want to do with this could we cover with the integer, for example? I’d like to see some exploration of that. Because if we’re adding two separate classes so that you can use those primitives to build any sort of concurrency regulation that you want – I can see why that would be useful, but I have my doubts about whether that would be generally useful for the 99% of what people want from concurrency control.
MF: Okay. I think I can provide some references to you of npm libraries and their popularity and use where they are currently being used to manage concurrency in this way. And maybe that will help convince you. But I also encourage you to open up an issue where we will talk about this in more detail.
PCO: Okay.
USA: Next we have Dan Minor.
DLM: Sure, thank you. So, probably fairly similar feedback to what you have already received. We definitely support investigating concurrency control; that makes a lot of sense. We opened a couple of issues with concerns that came up in the proposal review meeting, and we share the concerns around naming. Another area that we are interested in is use cases outside of async iterator helpers. Obviously those exist, but I would love to see them fleshed out a little bit more.
USA: On the queue we have WH.
WH: This is more of a clarifying question about what happens when you take code which calls forEach and start adding governors to these calls. Let’s say you’re calling forEach with a governor and function F, and then F itself calls forEach with the same governor. What are the consequences of that?
MF: If it is the same governor, it is pulling from the same capacity pool, so it will try to acquire again; it’s an issue if we use the governor in a way that is not reentrant.
WH: So this could deadlock?
MF: Um…
KG: Yes, it can deadlock; any concurrency control that looks anything like this can deadlock. If you have good ideas for avoiding that, that sounds great.
WH: And the deadlock would show up as what?
KG: Good question – I have not thought about that. I mean, yes, the most basic thing is that the promises never resolve because we are waiting for the resource, but it is possible that the engine could help you in this case. I have not thought about that.
WH: Okay thank you.
USA: Next on the queue we have SYG?
SYG: I really do not want this thing to be called a semaphore, given that the mutual-exclusion thing is what is usually called a semaphore. I would be much happier with something like a counting governor, but that is the only feedback that I have. It is certainly not a Stage 1 blocker.
KG: How is this different from a semaphore?
SYG: It has a very different API: this thing hands out tokens rather than being a counter. I understand that conceptually it is counting, but semaphores are more like mutual exclusion synchronization primitives.
MF: I think we have seen enough opposition to the name “semaphore” that we should already be investigating other names anyway.
KG: I would like to understand what the objection to this name is, because otherwise it makes it hard to explore other names. If the objection is that this does not look like a semaphore, then sure — but I guess I don't understand in what way it does not have the semantics of a semaphore.
MF: Maybe we can chat in matrix.
USA: On the queue we have JHD.
JHD: Um, I can wait until we are in post Stage 1 stuff.
USA: Okay Chris would you like to go first?
KKL: Sure, just following on from the last point: if there were a semaphore in this language, I would expect it to have an API like increment and decrement, where it would block if the resource is unavailable, and it is my expectation that the language should never have such a thing. But if it did, I would expect it to look like a semaphore in another language. I wanted to clarify that this API could lead to deadlock, and that deadlock is not possible without something like a true semaphore — but that is mostly a nomenclature difference. I also wanted to clarify the classes of deadlock, datalock, and livelock. Deadlock means that the program cannot make progress due to a cyclic dependency; livelock means it cannot make progress because it is spinning the CPU; datalock means one activity cannot make progress while the program can continue to make progress on other events. My expectation is that mistaken use of this API could cause datalock, as is possible with cyclic dependencies among promises, among other things.
MM: This really does not have much to do with the current topic, but since KKL offered a taxonomy of lock names, I wanted to add gridlock: it is like deadlock, but due to insufficient buffering space — because if there were more buffering space, it would just deadlock later.
USA: Thanks for that. MF would you like to ask for Stage 1?
MF: Yes, so from what I have heard, the only possibly Stage 1 blocking concern was from Igalia and PFC, about us potentially investigating the wrong area and whether we should be investigating only the needs that are satisfied by a simple integer. Can Igalia speak more about whether that is Stage 1 blocking, or how they would like to proceed?
NRO: It was not blocking, but we would like to see those resources that you mentioned.
MF: Okay so additionally investigate whether a reduced form of this proposal could provide most or all of the benefits that we are looking for?
NRO: Yes, we are fine with investigating, but as part of the investigation we need to resolve those other things.
PCO: That seems like the kind of thing that is something we should investigate during Stage 1.
MF: Then in this case I would like to ask for Stage 1. As a reminder, this is what this proposal’s goals were for Stage 1.
USA: Would anyone like to speak up and add any words of support or disagreement to the queue?
USA: I think we already heard a couple of folks express support earlier. Nothing in the queue — and DE says yes on Stage 1, thank you DE. Yup, MF, you have Stage 1.
MF: I would like to leave the remainder of my time –
USA: So far we have six minutes and one item from JHD
MF: Okay, JHD, do you mind if I go over the listed items first? So: relationship to explicit resource management. I know in the explicit resource management proposal we had intentionally left the door open for an acquire API for resources. It goes hand in hand with that proposal and might be automatically invoked by the using syntax when acquiring a disposable. I do want to investigate how that relates to this, and whether Governor is a more specific version of that. So I hope to work with the champion of that proposal, RBN, on figuring that out. As far as open design questions for governors: should the protocol be Symbol-based? A plain acquire method is also nice, but how often are you manually invoking it? I don't know. Should there be a synchronous acquire that will throw if it cannot immediately provide you with a governor token, or that returns null or something? Maybe. Please talk about that in issue 2. And if we have a Governor class, maybe the constructor should be a convenient way to construct a Governor without subclassing — though I think that is probably unnecessary. I think Governor is a good name, and KKL supported it. Regulator is possibly another name I can come up with.
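One way the synchronous-acquire design question could look (hypothetical names and shapes, purely for illustration; issue 2 is where this is actually being discussed):

```javascript
// A sketch where tryAcquire() hands back a "governor token" immediately
// when capacity is free, and returns null instead of waiting when it is not.
class CountingGovernor {
  #available;
  constructor(capacity) { this.#available = capacity; }
  tryAcquire() {
    if (this.#available === 0) return null; // no capacity: do not wait
    this.#available--;
    const release = () => { this.#available++; };
    return { release }; // the token returns capacity to the pool
  }
}

const g = new CountingGovernor(1);
const token = g.tryAcquire();  // first acquire succeeds
const second = g.tryAcquire(); // capacity exhausted: null
token.release();
const third = g.tryAcquire();  // capacity returned, succeeds again
```

Returning null rather than throwing keeps the no-capacity case cheap to probe in a loop; throwing would suit cases where exhaustion is exceptional.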
MF: An alternative name for semaphore could be CountingGovernor. This is the one recommended by SYG, and I'm perfectly happy with calling it a CountingGovernor. I think that describes what it is. It may not be as immediately intuitive as semaphore was, but other people thought semaphore was possibly misleading, so in order to drop that baggage, CountingGovernor is fine.
LCA: I will add this to the matrix, Governor.Semaphore.
MF: Sure, we can do that too; that will work if we have a Governor class, which I do hope we can have, and I think that is really nice. For semaphores, I am looking for cases where people feel that idle listeners might be useful. This is where the semaphore is — sorry, not at capacity; I mean where there are zero outstanding Governor tokens, so there is nothing currently using the resource. And there seem to be possible concerns from MM about semaphores across agents, so if he can outline that possible issue, we can make a decision; I would also like to discuss the possible benefits of that, because I think there are valid use cases. And finally, for the async iterator reduce method, the concurrency parameter is, if you noticed, kind of gross: it comes after the initial value, and I don't know how to resolve that better — otherwise it would force you to provide an initial value, which you may not want to do or be able to do. If you have ideas for that, please share. I would like to give the remaining time to JHD now on the queue.
JHD: Okay, so this is definitely all post Stage 1 feedback. My initial reaction was that there are too many nouns and too many classes doing too many things, and a lot of these things can perhaps be simpler; over time I can make more concrete suggestions, but this is general feedback for right now. The wrap method seems like it is just an arrow-function wrapper, so it does not seem that useful. And then I had a question about `.buffered` on the async iterator helpers. It looks like buffered gives you an async iterator with a Governor parameter, like, preset. Right?
MF: Buffered does not change from how it is in async iterator helpers. It is already existing.
MF: It just also accepts a Governor.
JHD: But what I am wondering is: say I call it with a 3 or something — does that mean that is the concurrency that is then used by the other helpers after that?
MF: Um, no that is the number of –
JHD: Oh okay. Got it. So the concurrency parameters on the methods below that are not directly related to the buffered helper; that was my confusion?
MF: No, no — that concurrency parameter is how those methods drive the underlying iterator, so again, how many outstanding promises they will keep.
JHD: Okay. Thank you.
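MF's description — the concurrency argument controls how many outstanding promises a helper keeps while driving the underlying source — can be approximated in userland like this (a sketch under assumed semantics over a plain array, not the proposal's actual helper):

```javascript
// Map over items with at most `limit` outstanding promises at once.
// Workers share a cursor; results land at their original indices, so
// output order is preserved even though completion order may vary.
async function mapConcurrent(items, limit, fn) {
  const results = [];
  let cursor = 0;
  async function worker() {
    while (cursor < items.length) {
      const index = cursor++;
      results[index] = await fn(items[index]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}
```

A usage example: `mapConcurrent([1, 2, 3, 4], 2, async x => x * 2)` resolves to the doubled array while never having more than two `fn` calls in flight.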
USA: And we are on time. Okay, let's see — there is nothing in the queue.
MF: Okay great! If you think of anything else, please open an issue. Thank you.
USA: Would you like to dictate a summary?
MF: Not really?
USA: Could you edit it into the notes?
MF: Sure. Sure I will write it in the notes.
- concurrency control advanced to Stage 1
- there was much concern about the naming of Semaphore
- there was some skepticism about the motivation for the full Governor abstraction from Igalia and Mozilla
- MM had concerns about cross-Agent sharing of Semaphores
CZW: All right, I will share my screen. It is black for me, I don’t know.
ABO: This is Andreu, and this is the AsyncContext update about web integration, and some – next slide.
DE: If I can jump in, I got in touch with some framework maintainers about AsyncContext, and people are generally very excited about this feature, and both for client and server. Some use cases include:
- keeping track of what the component is
- performance tracing
- maintaining hooks state
- Generally, allowing adoption of async/await in frameworks
I had some hopes that AsyncContext would make sense for the equivalent of React Context, but these are already implemented efficiently by referencing the component tree, so there is not a lot of demand for that particular usage mode.
CZW: We also – next slide please. So we designed the syntax, API, and use cases, and this is a [INDISCERNIBLE]. So, this is a sort of API which is using a stable subset, and this stable subset is about AsyncContext and OpenTelemetry. And so with this part of the syntax, we can have the benefit of not storing global variables in the library, and also have the support of –
ABO: Yeah, so, proposal 100 is the proposal for the web integration. We have APIs that take a callback while a snapshot is active, and the general approach that we are going with is basically to apply snapshot.wrap to every callback that is passed to a web API. So the snapshot is stored whenever a callback is passed, and then when that callback is called, the snapshot is restored. Feel free to take a look at the proposal — we want feedback from more developers, web developers, vendors, anyone. Here we have an example: we call `setTimeout` inside of a context that set v, and we can observe that `v.get()` is equal to 1.
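A toy model of the behaviour ABO describes (this is not the real AsyncContext implementation; `Variable`, `Snapshot`, and the wrapping `setTimeout` here are simplified userland stand-ins):

```javascript
// The "current context" is modeled as an immutable map from Variables
// to values; run() swaps it in for the duration of a synchronous call.
let currentContext = new Map();

class Variable {
  run(value, fn) {
    const prev = currentContext;
    currentContext = new Map(prev).set(this, value);
    try { return fn(); } finally { currentContext = prev; }
  }
  get() { return currentContext.get(this); }
}

class Snapshot {
  #saved = currentContext; // captured at construction time
  wrap(fn) {
    const saved = this.#saved;
    return (...args) => {
      const prev = currentContext;
      currentContext = saved; // restore the captured context for the call
      try { return fn(...args); } finally { currentContext = prev; }
    };
  }
}

// Web-integration idea: the platform wraps every callback it receives.
const wrappingSetTimeout = (cb, ms) => setTimeout(new Snapshot().wrap(cb), ms);

const v = new Variable();
let observed;
v.run(1, () => {
  wrappingSetTimeout(() => { observed = v.get(); }, 0);
});
```

The key point is visible in `wrappingSetTimeout`: the snapshot is captured when the callback is passed to the API, and restored when the callback later runs, so the timer callback observes `v.get()` as 1.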
ABO: So, events are essentially where most of the complications are — it is very complex. The same basic general idea applies: when you call addEventListener with a callback, or set .onclick or whatever with a callback, it will store a snapshot, and when the listener is called it will use that snapshot. There is also a relevant snapshot from whenever the API that fires the event is called — we will see more of this later — but that snapshot is not going to be exposed by default, because in order to make this work consistently you would have to thread a lot of snapshots through multiple APIs, and most events cannot expose that. Here we have a specific example of what is going to be in our initial proposal for the web integration: the unhandledrejection event is fired when you have an unhandled rejection, and this event object would have a rejection snapshot property, which would be a context snapshot of the context where the promise was rejected. And we have something similar for the error event, which is fired when you have unhandled throws — uncaught throws — and that will have a snapshot property.
ABO: So here you can see the context snapshot at the time that the promise is rejected — at the time that the host promise rejection hook (I forgot the name of that host hook) is called. Another question is what the active context is whenever you run a module, for module evaluation. One of the goals of the module system is that module execution should be deterministic: if you import the same module multiple times, it should only be loaded once, and you should not be able to observe that. So the idea is that there would be a host-provided initial snapshot that is set whenever you import or evaluate a module, and that would be the context at the top level of the module evaluation.

ABO: The proposal is Stage 2. There is still some investigation to be done on the web integration for this proposal, and we will see the feedback and how we resolve it. Do you want to continue, CZW?
CZW: We can see if there are any comments in the queue?
MM: I just got myself on the queue. Most of what I heard here is what a particular host, a particular browser, can do with this context and how it is integrated with host-level semantics, without finalizing what the host-neutral semantics is. But one thing in particular stood out to me as being different from what I understood to be the agreed language semantics, which is the capturing and propagation of the snapshot through promise resolution. What I understood is that the propagation was always on registering the callback, never on the resolving of the promise, and this seems like it requires violating that; am I confused?
ABO: So, um, when you have a promise rejection, the only thing that happens is that the spec already calls into a host hook to queue the unhandled rejection event. This would create a new context snapshot at that point and expose it on the rejection event. For hosts that do not have rejection events and simply exit the process when there is a rejection, this would not have any change in behaviour or any global state.
MM: So with an unhandled rejection, the host action on unhandled rejection sounds reasonable, but the unhandled rejection does not happen at the point of rejection. It happens at the point where the language notices that the rejection was not handled. You need to capture the context where the rejection happened, right?
DE: Can I address your question here, Mark? I am on the queue.
MM: Please.
DE: So, yes, for promises, `Promise.then` is consistently handled, as well as `Promise.catch`: both reactions consistently run within the context snapshot where the reaction was scheduled. So whether you do `await` or `.then` or `.catch` or `.finally`, it is always executing within that "registration" context.
DE: Now, our proposed behaviour with callbacks in the web platform is basically to do that as well. Just like promises, they restore the context snapshot which was active when the callback was received as an argument. Sometimes there is other contextual information that is relevant, so the solution we are going for, rather than doing something fancy and automatic, is to make it as if you constructed a new AsyncContext.Snapshot at the point when some interesting thing happened, and that is made available somewhere else for explicit use.
DE: For example, when you reject a promise, the snapshot is sort of eagerly captured. And then if it turns out it is not handled, then that snapshot will get passed to the event handler, exposed as a property on the Event object.
DE: An alternative semantic is to capture the snapshot when the promise is allocated, in case it later becomes an unhandled rejection. These two sound different, but it turns out they are actually similar: we were not really able to find cases where one behaved differently from the other for developers, and we are proposing capturing at the time of rejection. Note that, when the platform has an API which returns a Promise, there might be no JavaScript active when the Promise is rejected, and so the platform must cache the Snapshot earlier, e.g., when the Promise is allocated.
MM: You should note, by the way, that the idea of capturing these is attractive to me; I like the idea of it. What I am raising is how it fits with the rest of the semantics, and you answered that very well. I will clarify one thing: if you are capturing on rejection, it is only on rejection, and not on fulfillment.
DE: That is right. Our proposal is that, on fulfillment, nothing is captured.
ABO: It should be noted that this is 100% inside the host. When a promise is rejected — when you call the reject function — the promise rejection host hook gets called synchronously at that time, and it is the host that stores the snapshot and eventually fires the unhandledrejection event. So it is the host who is taking the snapshot and saving it somewhere.
MM: So whenever a promise rejection happens in the language, there is a call out to a host hook on every rejection?
ABO: My understanding is that V8 doesn't do it exactly like that — V8 implements the host hook itself, through some Blink host callback — and I guess other implementations are similar. But yes, that is the way the spec defines it.
MM: Okay, um, I am satisfied. This all sounds like it is going in a good direction; this was an update rather than a stage advancement, and I will register that I am happy.
ABO: Okay, um, we also have a few possible follow-on proposals –
DE: Before we go on to the follow-on slides: the plan is to propose this for Stage 2.7 at the following meeting. We still need to go through rounds of review on the integration, but other than that we believe this proposal is complete, and we want to take our time with the queue to hear out any concerns that people have. One example of a concern would be "this web integration does not give me the context that I need for XYZ use case." We have been able to think of a lot of theoretical things that people might want, but we are trying to go for a minimal version right now, because that should be simple for everybody — simple to learn, simple to implement, and simple for the specification. But we do not want to make it too simple, and the following slides go in the direction of possible extensions. Our intention is just to propose this for Stage 2.7 next meeting, so I want to give a bit more pause to see if anybody else has comments on Stage 2.7.
CDA: On the queue DLM?
DLM: I have to take the time to get a review of the HTML integration, so I will be following up on that this week, but I can't say offhand whether that would be done in time for Tokyo.
DE: Okay, thank you.
CDA: Anybody else in the queue?
ABO: As part of the follow-on proposals, there is some other context that is not currently being exposed, and I wanted to mention this. As I previously mentioned, for many events there is another relevant snapshot, which is not the one where you registered the listener but the snapshot that is active whenever you do the thing that causes the event to be fired. For example, you take an HTML element and call the click method: that will fire the click event synchronously, and the relevant snapshot would be the one where you called click. Or xhr.send() will start a fetch and fire the load event and many other events, and the snapshot for those would be the snapshot where you called send. So each specific event class could add a property to expose that snapshot. None of those are present in the initial proposal, unless you want to count error and unhandledrejection as this kind of relevant snapshot.
ABO: So those are the only ones present in the initial roll-out, but if you find use cases, if this breaks your workflow, or if you really need one, please tell us — we might add it to the initial roll-out. The idea is still to hopefully keep adding more eventually, if we find more use cases. We want to know which of those snapshots are important. Do you want to continue with the rest?
CZW: Thank you. So, we have feedback from the community that the current API already satisfies the use cases, but there are some possible improvements around mutation of the context variables. With the current API and syntax, variables are strictly scoped, and mutations in libraries will never be able to affect any of their callers' code. Like in this example: this library function can be an async function that is asynchronously invoked, so a mutation made in the library after an await will not be visible in the scope of the user function. We are not proposing this change — but there are alternatives – next slide.
CZW: There are some improvements that can be done in a follow-on extension to improve the experience by utilizing explicit resource management and using declarations. With that proposal, the scoping is still kept with the using declaration, and it can help us reduce the number of closures and nested callbacks compared to the run API. Also, since generators are well supported, even with yield in the scope, the using declaration can still preserve the variable values across yield. Next slide, please.
CZW: Also, there have been requests in the community to observe mutations made in library code, where a mutation can be observed as a written value of the variable. We are not going to include this behaviour in the current proposal. This pattern is different from the legacy node.js callback-style value passing, where mutations can be visible to the parent async scope when you observe the fulfillment in a callback as well. So the community is requesting a new kind of context variable, possibly as a follow-on proposal, if we find the motivation to be sufficient. We have considered this, but we are not going to add this new kind of variable to the proposal at the moment. That is all for the possible follow-on extensions; we can go to the queue if there are any questions.
CDA: Nothing right now. DE?
DE: Does anyone think that the follow-on proposals are a good or bad idea? For example, I expected that the SES folks would find continuation variables to be a bad idea. What do people think?
CDA: CM?
CM: My intuition, and I think DE called it correctly, is this is probably a bad idea, but I don’t understand the subtleties of what is being proposed well enough to just fling that out “Oh this is a bad idea, don’t do that”. I don’t understand it enough, and I think this warrants further exploration and a more detailed explanation of how this would actually work.
CZW: There is a document linked on the slide that describes the semantic comparison between continuation variables and the proposal's variables, so you can review the document.
CM: Okay.
CZW: Thank you.
DE: About this one in particular, I want to note that some people involved in performance monitoring have found that it would be nice to have this kind of variable; however, at the same time, OpenTelemetry for Node does not use this. So at least you can build certain kinds of APIs without it. There is also the fact that Node supports `enterWith`, which gives you some kind of capability similar to this.
CZW: Well, `enterWith` is much more of a feature like this, and there was not a declarative one before that.
DE: Yeah, you are right, thank you for the correction. So, are there any particular events that people think they may want the causal version of? We would be happy to accept this input offline, but this is the main thing that we have been scratching our heads over: how complex do we have to make this? Any input there?
KKL: I am on the queue. In a previous life I was involved in the OpenContext project, a precursor to OpenTelemetry, so some experience with reverse context propagation may come to bear on this. It is interesting because reverse context propagation reverses the arrow, going from one-to-many to many-to-one, which sometimes implies that you have to use completely different data structures to pull data out of a reverse context. I don't know whether this comes to bear here, and I really just wanted to propose that we talk through it in TG3 — if you can add yourselves to a future agenda, that would be great. The thing about reverse context is that if you fan out to multiple parties, and any of them might return reverse context, the returned context has to be aggregated in some way in order to be made sense of at the consumption site. So the get on the last line of this example would effectively require a reduction of possibly many changes. In any case, again, I don't know whether that comes to bear, but it would be great to talk it through.
DE: Yeah, we have had a bit of debate about this within the group, and I agree that there are multiple contexts that are relevant — which is kind of why we were hoping to go with this simple policy. But yeah, let's get on the TG3 agenda.
CZW: Yes, thank you, that sounds good. I want to point out that merging is also mentioned in the document in 94, and we have been discussing the possible ways to merge multiple contexts into one final value. So yeah, I think it would be valuable to discuss the idea in TG3.
KKL: To add to that: the solution that we arrived at in the OpenContext proposal was that the reduction would be delayed until the receiver, because middleware would not be in a position to know how to do the reduction. So you end up aggregating a set, and then at the point of the get, the caller would provide a reducer.
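The aggregate-then-reduce shape KKL describes might look roughly like this (illustrative only; not an API from OpenContext or from the proposal):

```javascript
// Each concurrent branch "returns" a value through a shared reverse-context
// store; the read at the join point takes a caller-supplied reducer, since
// middleware cannot know how to combine the values.
const returned = new Set();

async function branch(value) {
  await Promise.resolve(); // stand-in for async work in a branch
  returned.add(value);     // reverse-propagate a value toward the caller
}

async function fanOutAndGet(reduce, initial) {
  await Promise.all([branch(2), branch(3), branch(5)]);
  // The "get" reduces the collected values into one.
  return [...returned].reduce(reduce, initial);
}
```

Because several branches each contribute a value, the read at the join point cannot yield a single value without a caller-supplied reduction — here, for instance, a sum.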
KKL: Also that work was abandoned because there was a lot of complexity.
CZW: Yeah, and I think that is kind of the design that we want to avoid — that is basically the point about merging in the document: we want to leave that decision to the user instead.
CDA: All right nothing else in the queue.
CZW: All right, sounds good. This is just a Stage 2 update about the web integration before Stage 2.7, and before we ask for stage advancement we would appreciate any feedback on the integration. Thank you.
DE: Okay, any additional feedback you have about this proposal would be very welcome. We have meetings every two weeks that are on the TC39 calendar, and getting in touch with the champions by any means would be really helpful. Again, the intention is to propose Stage 2.7 in Tokyo.
- This was just a Stage 2 update; AsyncContext will seek Stage 2.7 advancement at the 104th meeting of TC39