Threads for vhf

    1. 1

      Pasting my comment that was a response to this comment

      -module(ccc).
      
      -export([worker/0]).
      -export([start/1]).
      
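      %% Each worker sleeps for 10 seconds, then notifies the registered parent.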
      worker() ->
          timer:sleep(10000),
          parent ! done.
      
      spawn_many(0) -> done;
      spawn_many(N) ->
          spawn(?MODULE, worker, []),
          spawn_many(N - 1).
      
      start(Args) ->
          {Count, []} = string:to_integer(hd(Args)),
          register(parent, self()),
          spawn_many(Count),
          await(Count),
          io:format("Done"),
          halt().
      
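      %% Wait for Count messages from workers (any message decrements the count).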
      await(0) -> done;
      await(Count) ->
          receive
              _ -> await(Count-1)
          end.
      

      With some updates to the codebase and a bunch of flags:

      erl -run ccc start 1000000 -noshell -pa -nonstick -noinput +hms 1 +a 16 +d +L -start_epmd false +ec +P 2000000
      

      Results in:

      1006190592 peak memory footprint
      

      on my machine. So it is a little less than 1 GiB, which beats the Go code from the example by more than a factor of 2 (2660564992 bytes, so almost 2.5 GiB, on my machine), and it seems that would put it on par with Java Virtual Threads.

      I was surprised as well, but all of that comes with almost negligible performance impact. On my machine (with a bunch of other stuff going on in the background, so take it with a grain of salt), the Erlang code (OTP 25.3.2 without JIT enabled) was consistently faster than the Go code (Go 1.20.4) by about 2s.

      1. 9

        Changing Go from

                defer wg.Done()
                time.Sleep(10 * time.Second)
        

        to

                time.Sleep(10 * time.Second)
                wg.Done()
        

        Improves Go CPU time by about 2X. Go’s defer is a runtime thing, and would have some impact in a micro bench.
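
        For context, here’s a minimal sketch of the kind of benchmark body being compared (a reconstruction under assumptions; the thread doesn’t show the article’s exact Go code):

        package main

        import (
            "fmt"
            "os"
            "strconv"
            "sync"
            "time"
        )

        func main() {
            // Hypothetical argument parsing; the original benchmark presumably
            // takes the goroutine count from somewhere similar.
            n, err := strconv.Atoi(os.Args[1])
            if err != nil {
                panic(err)
            }
            var wg sync.WaitGroup
            wg.Add(n)
            for i := 0; i < n; i++ {
                go func() {
                    // Calling wg.Done() directly instead of `defer wg.Done()`
                    // skips the runtime defer bookkeeping in every goroutine.
                    time.Sleep(10 * time.Second)
                    wg.Done()
                }()
            }
            wg.Wait()
            fmt.Println("Done")
        }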

        1. 1

          For anyone else wondering why this is:

          Go’s defer pushes the function call onto a stack at runtime and then calls all the functions on the stack at function exit.

          Some other languages (like Zig) have a defer statement that is executed at scope exit, and these are easily implemented without runtime support.

          Presumably Go uses a runtime stack because it’s a simple way to properly order defers inside loops and conditional defers.

          1. 3

            Presumably Go uses a runtime stack because it’s a simple way to properly order defers inside loops and conditional defers.

            Also because defers act as panic (exception) handlers. That Go’s defers are function-scoped regularly takes people by surprise, especially because it’s easy to accumulate defers in a loop and end up with what amounts to a resource leak.
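
            A hedged sketch of that loop trap (the function and names are hypothetical):

            package main

            import "os"

            func processAll(paths []string) error {
                for _, p := range paths {
                    f, err := os.Open(p)
                    if err != nil {
                        return err
                    }
                    // Runs at *function* exit, not loop-iteration exit, so a
                    // long paths slice accumulates open file descriptors.
                    defer f.Close()
                }
                return nil // the accumulated Close calls all run here, in reverse order
            }

            func main() { _ = processAll(os.Args[1:]) }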

        2. 1

          Nice, I knew that something must be off.

      2. 2

        I translated your code to Elixir; my 3-year-old MacBook reports:

        Physical footprint (peak):  972.3M
        completed in 11.637862s
        

        using

        elixir 1.14.3-otp-25
        erlang 25.2.3
        
        1. 1

          I quite intentionally wrote it in Erlang to avoid starting up and launching all the additional applications that Elixir uses (for example logger), in order to reduce the memory footprint.

    2. 5

      Good write-up!

      Phoenix comes with a huge pack of tools and utils (ORM,

      I’d suggest correcting this; one of the things that caused me issues and concerns with frameworks like Django was ORMs, being very familiar with SQL and having a preference for the repository pattern. Ecto not being an ORM is IMO one of the biggest strengths of Phoenix.

      Call it a query builder or a SQL query DSL, it’s definitely not an ORM in a functional language that doesn’t have objects. ;)
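
      For the curious, a minimal sketch of what that looks like in practice (a schemaless Ecto query; the table, fields, and MyRepo are illustrative, not from the thread):

      import Ecto.Query

      # A query is a composable data structure; nothing here is an "object".
      query =
        from e in "employees",
          where: e.salary > 50_000,
          select: %{name: e.name, salary: e.salary}

      # MyRepo.all(query) would emit roughly:
      #   SELECT name, salary FROM employees WHERE salary > 50000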

    3. 22

      I agree lots of people don’t, because they never even bother to learn anything past GRANT ALL…

      So, who out there has used these features?

      We use PG’s row-level security exclusively. It is 100% worth the introduction pain. Every user of our software has their own PG DB login, and that’s how they log in to the application.

      How did that impact your application/environment?

      The application does only one thing with access control: what shows up on the menus (well, and logging in). 100% of the rest of the app is done via PG RLS, and the app code is a bunch of select * from employees; kind of queries.

      What have you used them for?

      Everything, always! :) lol. (also see next answer)

      Do they provide an expected benefit or are they more trouble than they’re worth?

      When we got a request to do a bunch of reporting stuff, we connected Excel to PG, had them login with their user/password to the PG DB, and they were off and running. If the user knows SQL, we just hand them the host name of the PG server, and let them go to town, they can’t see anything more than the application gives them anyway.

      When we added Metabase, for even more reporting, we had to work hard: we added a reporting schema, then created some views, and Metabase handles the authorization, which sucks. Metabase overall is great, but it’s really sad there isn’t anything in reporting land that will take advantage of RLS.

      How did you decide to use them?

      When we were designing the application, PG was just getting RLS; we tried it out and were like, holy cow… why try to create our own when PG did all the work for us!

      Trying to get access control right in an application is miserable.

      Put permissions with the data, you won’t be sorry.
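
      For anyone who hasn’t seen RLS, a minimal sketch of the kind of policy being described (table and column names are made up for illustration):

      CREATE TABLE employees (
          id         serial PRIMARY KEY,
          name       text NOT NULL,
          department text NOT NULL
      );

      ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

      -- Assumes a department_members table mapping DB login roles to departments.
      CREATE POLICY employees_by_department ON employees
          USING (department IN (SELECT department
                                  FROM department_members
                                 WHERE member = current_user));

      -- A plain "select * from employees;" now returns only permitted rows.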

      1. 6

        Doesn’t this require a lot of open connections? IME, postgres starts to struggle past a couple of hundred open connections. Have you run into that at all?

        1. 5

          If you run everything inside of transactions you can do some cleverness to set variables that the RLS checks can refer to, emulating lots of users but without requiring more connections.
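
          A sketch of that trick (the app.current_user_id setting name is made up):

          BEGIN;
          -- Scoped to this transaction only; the pooled connection stays shared.
          SET LOCAL app.current_user_id = '42';
          -- A policy can then check something like:
          --   USING (owner_id = current_setting('app.current_user_id')::int)
          SELECT * FROM widgets;
          COMMIT;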

          1. 2

            See my other comment, but you don’t have to work quite that hard, PG has a way now to become another user.

            see: https://www.postgresql.org/docs/12/sql-set-session-authorization.html

            1. 2

              How many users does this support? I might be being overly cautious, but we have been told to look at user counts in the millions.

              1. 2

                We are an internal staff application, we max around 100 live open DB connections, so several hundred users. This runs in a stable VM with 32GB ram and < 1TB of data. We will never be a Google or a Facebook.

                One can get really far by throwing hardware at the problem, and PG can run on pretty big hardware, but even then, there is a max point. Generally I recommend not optimizing much at all for being Google size, until you start running into Google sized problems.

                Getting millions of active users out of a single PG node would be hard to do, regardless of anything else.

        2. 2

          In our experience, the struggle is around memory: PG connections take up some memory, and you have to account for that. I don’t remember the exact amount per connection, but memory is what I remember being the constraint.

          It’s not entirely trivial, but you can re-use connections. You authenticate as a superuser (or equivalent) and send AUTH or something like that after you connect; too lazy to go look up the details.

          We don’t currently go over about 100 or so open active connections and have no issues, but we do use pgbouncer for the web version of our application, where most users live.

          EDIT: it’s not AUTH but almost as easy, see: https://www.postgresql.org/docs/12/sql-set-session-authorization.html
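
          Roughly what that looks like on a reused connection (role name illustrative; per the linked docs, SET SESSION AUTHORIZATION requires having connected as a superuser):

          SET SESSION AUTHORIZATION 'alice';  -- RLS and grants now apply as alice
          SELECT * FROM employees;
          RESET SESSION AUTHORIZATION;        -- hand the connection back to the pool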

      2. 3

        How do RLS policies impact performance? The Postgres manual describes policies as queries that are evaluated on every returned row. In practice, does that impact performance noticeably? Were there gotchas that you discovered and had to work around?

        1. 3

          Heavily.

          It is important to keep your policies as simple as possible. E.g. if you mark your is_admin() as VOLATILE instead of STABLE, PG is going to happily call it for every single row, completely destroying performance. EXPLAIN is your best friend.
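
          As an illustration, such a helper might look like this sketch (the body is made up; the point is the STABLE marker, which lets the planner cache the result within a statement instead of calling it per row):

          CREATE FUNCTION is_admin() RETURNS boolean
              LANGUAGE sql STABLE
          AS $$ SELECT current_setting('app.is_admin', true) = 'on' $$;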

          But even then, some queries are performed needlessly. Imagine you use transitive ownership. For example, Users own Boxes, and Boxes contain Widgets. When you want to determine which Widgets a User can manipulate, you usually cache the User-Boxes set at the application server level and query “downwards”. With RLS, you need to establish a link between the Widget and the User, joining over Boxes “upwards”, as there is no cache.

          The real problem here is that with a sufficiently large schema, the tools are lacking. It’s really inconvenient to develop within pgAdmin4, away from git, basically within a “live” system with its object dependencies and so on.

          1. 2

            It can, as I mentioned in my other comment in this thread, we have only run into a few instances where performance was an issue we had to do something about.

            As for tools, we use Liquibase[0], and our schemas are in git, just like everything else.

            0: https://www.liquibase.org/

            1. 1

              I’ll check it out.

            2. 1

              How does the Liquibase experience compare to something like Alembic or Django migrations?

              The main difference I see is whether your migrations tool is more tightly coupled to your app layer or persistence layer.

              With Alembic you write migration modules as imperative Python code using the SQL Alchemy API. It can suggest migrations by inspecting your app’s SQL Alchemy metadata and comparing it to the database state, but these suggestions generally need refinement. Liquibase appears to use imperative changesets that do basically the same thing, but in a variety of file formats.

              1. 2

                I’m not very familiar with Alembic or Django migrations. Liquibase (LB) has been around a long time; it was pretty much the only thing doing schema-in-VCS back when we started using it.

                Your overview tracks with my understanding of those. I agree LB doesn’t really care about the file format, you can pick whatever suits you best.

                The LB workflow is pretty much:

                • Figure out the structure of the change you want in your brain, or via messing around with a development DB.

                • Open your favourite editor, type that structure change into your preferred file format (see the sketch after this list).

                • Run LB against a test DB to ensure it’s all good, and you didn’t mess anything up.

                • Run LB against your prod DB.

                • Go back to doing whatever you were doing.
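
                As a sketch, a changeset in LB’s formatted-SQL flavor looks roughly like this (the author, id, and schema change are made up):

                --liquibase formatted sql

                --changeset jane:add-employee-phone
                ALTER TABLE employees ADD COLUMN phone text;
                --rollback ALTER TABLE employees DROP COLUMN phone;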

        2. 1

          We actually use an OSS extension, veil[0], and while performance can be an issue, like @mordae mentions, if you are careful about your use, it’s not too bad. We have only had a few performance issues here and there, but with EXPLAIN and some thinking we have always managed to work around them without much hassle. You absolutely want indexes on the things you use for checking permissions.

          Veil makes the performance a lot less painful, in our experience.

          0: https://github.com/marcmunro/veil Though note, veil2 is the successor and more relevant for new implementations; we don’t currently use it (and have no experience with it): https://github.com/marcmunro/veil2

          Veil2 talks about performance here in 23.1: https://marcmunro.github.io/veil2/html/ar01s23.html

      3. 3

        Same here, I used row level security everywhere on a project and it was really great!

        1. 2

          One mistake I’ve made is copy-pasting the same permission checks on several pages of an app. Later I tried to define the permissions all in one place, but still in the application code (using “django-rules”). But you still had to remember to check those permissions when appropriate. Also, when rendering the page, you want to hide or gray out buttons if you don’t have permission on that action (not for security, just niceness: I’d rather see a disabled button than click and get a 403).

          With row-level permissions in the DB, is there a way to ask the DB “Would I have permission to do this update?”

          1. 2

            Spitballing, but maybe you could try running the query in a transaction and then rolling it back?

            Would be a bit costly because you’d have to run the query twice, once to check permissions and then again to execute, but it might be the simplest solution.
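
            Something like this sketch (table and id are placeholders; the no-op SET avoids changing data even before the rollback):

            BEGIN;
            UPDATE widgets SET name = name WHERE id = 42;
            -- "UPDATE 0" means RLS hid the row (or it doesn't exist);
            -- "UPDATE 1" means the write would have been allowed.
            ROLLBACK;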

          2. 1

            Maybe what you need is to define a view that selects the things you can update and use that view to define the RLS. Then you can check whether the thing you want to update is visible through the view.

          3. 1

            With row-level permissions in the DB, is there a way to ask the DB “Would I have permission to do this update?”

            Yes, the permissions are just entries in the DB, so you can query/update whatever access you want (provided you have the access to view/edit those tables).

            I’m writing this from memory, so I might be wrong in the details… but what we do is have a canAccess() function that takes a row ID and returns the permissions that user has for that record. So on the view/edit screens/pages/etc., we get the permissions returned to us as well. So it’s no big deal to handle.
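
            From that description, the shape might be something like this sketch (every name and type here is hypothetical):

            CREATE FUNCTION canAccess(row_id integer) RETURNS text[]
                LANGUAGE sql STABLE
            AS $$ SELECT perms FROM record_permissions
                   WHERE record_id = row_id AND grantee = current_user $$;

            -- SELECT canAccess(42);  might return {view,edit}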

      4. 1

        Follow question: How did you handle customers (accidentally even) writing expensive sql queries?

        1. 2

          We would admonish them appropriately :) Mostly the issue is making sure they know about the WHERE clause. It hasn’t been much of an issue so far. We have _ui table views that probably do 90% of what they want anyway, and they know to just use those most of the time. The _ui views flatten the schema out, to make the UI code easier, and use proper WHERE clauses and FK joins, to minimize resource usage.

          If our SQL user count grew enough that we couldn’t handle it off-hand like this, we would probably just spin up an RO slave mirror, let them battle each other over resources, and otherwise ignore the problem until we got enough complaints to upgrade resources again.

    4. 11

      My hard-earned response to this is: just don’t do it. There is a minefield of gotchas under Mnesia, and they will maim you and your production system. Mnesia was built for configuration management, not for OLTP. You wouldn’t suggest using Apache ZooKeeper as a production database, so why suggest Mnesia?

      1. 4

        I’d be interested in some examples of gotchas, documentation references, and the like, if you have specific references handy. I know of a few basic ones, like disc_copies vs ram_copies vs disc_only_copies, but I haven’t used it enough to encounter other gotchas. I hold most of that knowledge from reading books or documentation.

        1. 4
          1. Two-phase commit.

          2. Since Mnesia detects deadlocks, a transaction can be restarted any number of times. This function will attempt a restart as specified in Retries. Retries must be an integer greater than 0 or the atom infinity. Default is infinity.

          This is from: http://www1.erlang.org/documentation/doc-5.1/lib/mnesia-4.0/doc/html/mnesia.html
          In practice this means that a big transaction can be preempted in perpetuity by an onslaught of smaller transactions on (a subset of) the same data.

          3. When you get “Mnesia is overloaded” warnings in production. At 4 am.

          4. Bad performance on sync transactions -> you move to async -> then move to async_dirty. Now you could have simply been optimistically !-ing to other nodes’ ets-owning processes without the headaches of mnesia cluster setup.

          Most oldtime erlangers have good mnesia stories, talk to them and be amazed :)
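
          On that last point, a minimal sketch of the alternative (all names hypothetical): each node runs a process owning an ets table, and writers optimistically “!” updates at the owners, with no transactions, deadlock detection, or retries.

          -module(kv_owner).
          -export([start/0, put/3]).

          %% Spawn a process that owns a private ets table.
          start() ->
              spawn(fun() ->
                  Tab = ets:new(kv, [set, private]),
                  loop(Tab)
              end).

          loop(Tab) ->
              receive
                  {put, K, V} ->
                      ets:insert(Tab, {K, V}),
                      loop(Tab)
              end.

          %% Fire-and-forget write to every replica owner.
          put(Owners, K, V) ->
              [Owner ! {put, K, V} || Owner <- Owners],
              ok.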

      2. 2

        This is a fascinating assertion. Thanks for chiming in, I’ll read up a bit more.

      3. 2

        This discussion is timely for the thing I’m currently building. I had already come to the conclusion that Mnesia had too many gotchas for me to handle but I’m still hesitating between using lbm_kv and going the riak_core+partisan route. Both options seem built on top of Mnesia, iirc riak uses a patched version of Mnesia.

        I’ve got thousands of long-running processes updating their state every few seconds. This will soon take too much memory (because of process heap size), and later on it will have to be distributed anyway. My idea was to store state as native Erlang terms (to preserve read/write performance and be responsive enough), and have drastically fewer processes than one per “state”: a pool of workers would update the store instead and be released back to the pool to move on to the next thing to do. I also think that it will make it easier to move to distributed later on.

        Do you have thoughts on this?

        1. 2

          Riak did not use Mnesia. Neither does partisan’s version of riak_core, iirc.

        2. 1

          I think the Whatsapp scaling video has some details on this. Not sure if it is 100% relevant to you but it is worth a try.

          https://www.youtube.com/watch?v=FJQyv26tFZ8

          1. 2

            I rewatched last week and it’s not that relevant IMO but thanks anyway :)

            The thing is, my current challenge is going from a single node to distributed; their challenge was to overcome the practical limits of the maximum number of nodes in a cluster (fully connected mesh), so basically going from a ~1000-node cluster to a >10k-node cluster. What I’m working on is too specific to ever get close to 1000 nodes but unfortunately still a bit too big to stay on a single node (or to be more precise: we could probably scale up, but one machine with loads of RAM is more expensive than a few smaller machines).

            Still a great talk for people interested in pushing things to the extreme!

            1. 2

              Yes, sorry I could not be more useful. I think riak_pg might still be useful to you when going from 1 node to N. Or maybe you are not trying to go this route either. Anyway, if you write a blog post about your experience solving this problem I would be happy to read it. I need to jump through this hoop soon with my pet project, so it is relevant to me.

              https://github.com/cmeiklejohn/riak_pg

    5. 1

      I’m using Lightroom because that’s what I use for most of the editing and I really like how I can simply rely on folder structure to organize my pictures. I do yyyy/mm/dd_topic.

      I’m getting close to 1TB now, on an external HDD, and it’s still fast enough. I regularly sync it to Backblaze B2 storage just in case; it costs around $4/mo for my usage.

    6. 3

      While this is not a good idea, I appreciate that someone made the effort to write it down and share it. It’s likely that it’s more than just the author who learned that this is not a good idea from the discussion the post catalysed, and so for that I hope we can refrain from being mean, or from flagging it as spam — which it certainly isn’t. I may have been wrong about this 😕

      1. 2

        The ‘spam’ flags might not be related to this particular story: >75% of this user’s submissions are their own posts similar to this, with a total of 0 comments on lobsters. It can give the impression that the author here only uses lobsters to promote their blog since they don’t even contribute to the discussion on their own content.

        1. 2

          This link

          https://www.pulltech.net/article/1582362508-Let-s-talk-about-JSON-stringify%28%29

          redirects to

          https://www.pixelstech.net/article/1582362508-Let-s-talk-about-JSON-stringify%28%29

          which is the domain where the majority of the submissions by @pxlet are from: https://lobste.rs/newest/pxlet

          It’s quite obvious to me that @pxlet bumped up against the recently implemented limit on the number of submissions to one domain from one user, and is using a new domain + a redirect to get around this.

          Ping @pushcx for review.

          1. 3

            Yep. They bumped the limit trying to submit this story under the pixelstech.net domain and then submitted this a minute later. Looks like the domain name and redirect have been around a while. Looking at the profile, they’re pretty clearly here to promote their blog - 27 of 41 stories have been to it, no comments, and I can see they’ve done no voting.

            I opened my laptop this morning to back that code out, though. As discussed, it has too many false positives. I’d like to post a meta thread about a comment from @zem I see as best capturing what bothers everyone most about this kind of behavior, but the clock is ticking towards the start of my workday.

    7. 1

      Recording my screen as I write a program. Then reviewing the footage and seeing how I could have written the program faster.

      Is there software that allows you to live-stream programming but blurs out API-key-shaped stuff?

      1. 3

        There’s a VS Code plugin called Cloak that does this. I haven’t used it though.

      2. 1

        Note that there’s no need to stream the video for this exercise.

        That said, you could write a little script for your editor or terminal, I suppose. Probably easier to just manually pause/unpause the stream, tho.

        1. 2

          Right. That was more a thought inspired by the article than a comment on the article itself.

          The problem is, of course, that I might accidentally reveal keys.

          1. 2

            You can make OBS capture only a specific window. I choose a small Xephyr (i.e. nested X session) window so I know exactly what people will see and what they won’t.

            I have seen people with special plugins which blocks/blurs windows except for ones which are whitelisted.

            Other people have hotkeys which swap the stream to a static image.
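
            For reference, a nested session like that takes just a couple of commands (the display number and geometry are arbitrary):

            Xephyr :2 -screen 1280x720 &   # nested X server, shows up as one window
            DISPLAY=:2 xterm &             # run only what the stream should show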

    8. 2

      Great project, I started using it and I’m enjoying it!

      Just a tiny note about the README: https://github.com/hakanu/pervane#keep-the-engine-running , <style> is disallowed by GitHub Flavored Markdown: https://github.github.com/gfm/#disallowed-raw-html-extension- :)

      1. 1

        Wow, thanks for the feedback! This motivates me even more to improve it. PS: I also use it every day :) Feel free to submit feature requests/bugs at https://github.com/hakanu/pervane/issues

        <style> is disallowed by GitHub Flavored Markdown

        This is a bummer; guess I need to remove that button. Thanks!

      1. 1

        Overworked, underpaid (and proud of it!), and stacked almost exclusively with deeply-PC/‘woke’ folk. I’ll, uh, pass.

        1. 2

          stacked almost exclusively with deeply-PC/‘woke’ folk. I’ll, uh, pass.

          I’m curious; how do you know this? Is it just from their “Diversity & Inclusion” mission statement?

          1. 2

            That, casual conversation with some of their older Ops folk, and a chat with Syd himself from ‘back in the day’.

            1. 10

              Thanks. It’s definitely a red flag, which is unfortunate because at least superficially, “social justice” sounds like a good thing. Unfortunately, there’s a large overlap between that and hateful tribalism. For example, from this job ad

              with the goal to change the IT industry from a white, bearded clump to something that’s a little less monochrome and have a few more x-chromosomes

              Being genuinely inclusive is good and important. Casting aspersions on an entire group of people (their own employees, no less!) for their genitalia and/or skin colour is never ok. For some reason this is given a pass when it comes from proponents of the correct political ideology.

              1. 1

                Wouldn’t with the goal to make the IT industry more diverse amount to the same? That’s what I understand from this quote, the only difference being that the quote clearly states the current state of affairs and what would make it more diverse.

                1. 6

                  I find it totally offensive for myself or any of my peers to be described as a “white, bearded clump”.

              2. 1

                I am curious to understand why you immediately red-flagged this after law’s statement and rejected the massive evidence (https://www.glassdoor.co.uk/Overview/Working-at-GitLab-EI_IE1296544.11,17.htm) – at least compared to a one-line statement – that GitLab is, at the very least, a nice place to work.

                1. 2

                  Good question. I think it’s because it’s far riskier for one’s own political capital or reputation to say something critical, and I think this is especially true of criticising political correctness. Nobody ever got fired for saying “oh yeah, it’s great. I am happy, everyone is happy.”

                  Or perhaps looking at it another way: a “woke” culture in a company is a good thing to some people. There are many people who are that flavour of political extremist, and would feel welcome among their own. The original observation was indeed “this is a woke company”, and not “this is a bad company.”

                  Glassdoor are not letting me read reviews without an account, but if the company were an echo chamber (likely, since I don’t believe the diversity movement is interested in diversity of opinion), then what’s to correct for all the positive reviews coming from people who 1. want to save their own skin, and/or 2. are quite comfortable with political correctness?

                  1. 2

                    How is law risking anything by saying what he said – or anything for that matter – under a nickname?

                    1. 2

                      I don’t know about this person specifically, but it’s not uncommon to be able to deduce who a person is by combing through their post history, and possibly cross-referencing it against content they’ve authored in other online communities.

                      1. 3

                        I don’t want to be impolite by insisting (sorry if I am), but you actually trusted this person’s single-line statement rather than publicly available, verified, anonymous feedback.

                        1. 3

                          Don’t worry, I don’t think you’ve been impolite. It’s totally fair to ask.

                          You are right, I drew a likely (in my mind) conclusion from a single source over an entire repository of reviews. I’ve presented my justification for this; perhaps it’s not entirely legitimate and it will be based on some of my own experiences and biases.

                          I wouldn’t say I “trust” the above anecdote comprehensively, but it’s certainly a signal. I could see a motive for someone to say some company is “bad”, but I don’t understand why someone would describe a company’s culture as “woke” if it isn’t.

        2. 1

          I was shocked to see how much less I’d make at GitLab - my pay would be literally half what it is right now. They index their remote pay to the cost of living where you live, and in the United States it’s indexed for an entire state. In my home state, the cost of living varies WIDELY based on what part of the state you are in, and this acted much to my detriment.

          I understand and appreciate the difficulty of figuring out what to pay remote workers in a global workforce, but I definitely think GitLab hasn’t solved it yet. I’m also grateful that their salary transparency after the introductory interview meant that we weren’t wasting each other’s time - I wish more companies did this.

        3. 1

          Dodged a bullet, thanks.

    9. 1

      I’ve been using a fork of https://github.com/sindresorhus/pure – which also supports asynchronous git status checking – after getting frustrated by a synchronous git status check a few years ago. It’s been working really well for me!

      1. 1

        Please share!

    10. 21

      ASCII only, displays everywhere

      What? UTF-8 is widely supported. If you care about a consistent view, please drop all your CSS. This makes the whole website only suitable for users who write in the Latin script. If you hate emojis, I would suggest blocking that specific range of UTF-8 codepoints.

      1. 7

        This makes the whole website only suitable for users who write in the Latin script.

        Although I agree with your statement, it is worth mentioning that my native language uses the Latin script and still needs characters outside the ASCII range to clearly distinguish different words.

        The situation is even worse for the language I have been learning recently: it has necessary letters which aren’t in ASCII, although it still uses the Latin script.

        1. 5

          Yeah that’s a weird thing isn’t it. ASCII was supposed to support American English. It doesn’t and I don’t think there’s any major natural language that can be written using ASCII only. Maybe its inventors have been a bit naïve, or they just needed something to put on their résumé…

          1. 6

            It was created in 1963. The world was very different back then!

    11. 3

      I’m trying to build a tiny triplestore with SPARQL support.

      I don’t care about performance; I care about it being very easy to install and to use. One of the hurdles when trying to play with linked data is getting some data into a store. Existing stores are complicated and heavy.

      The thing wouldn’t be used for any production workload, only as a toy to get started with.

    12. 2

      Die, floppy disks. DVDs and audio jacks: you’re next.

      Just kidding about the audio jack.

      1. 4

        I bought a new phone recently, and I was so mad (at Google, but also at myself for not checking) to find that I had to use USB-C headphones and that I had to install and configure them before they worked. Why is this necessary? (It reminded me of the first USB key I got in ~1999—it was utterly useless because I needed to install drivers for every machine I used.)

        1. 10

          Not only that, I’ve seen people move their USB Type C 3.11 for Workgroups with Power Delivery charger around each port of their computer to try and find the one which will actually accept power. There are tiny dark-grey-on-black hieroglyphs next to each port on my new laptop marking which ones are USB Type C 3.11 for Workgroups with LightningStrike or Thunderbolt OSR2 Enhanced or whatever it’s called, while others are USB Type C 3.11 for Workgroups with DisplayPort alt-mode LTS Edition. Thank god my eyesight is in normal human range; I’d hate to try and work this out with vision difficulties! The laptop will only boot from USB Type C 3.11 for Workgroups Mass Storage Edition on certain ports, and blithely ignore boot media in others. This is unmarked and undocumented, so my passable eyesight is no help here.

          The cable situation is even worse. There are a zillion different types of cables, which are supposed to have markings (i.e., black-on-black embossings that nobody will be able to see). These will allegedly identify which cables are base USB C 3.11 for Workgroups, and which ones support delivering a value meal along with your data, or whatever other hare-brained scheme they cram in there next. Presumably they’re following the logic of whoever makes SD cards: make them look like NASCAR jackets and maybe people will learn what all the weird symbols mean. But of course most of the cables are made in China and are totally unmarked, so the iconography is moot. The only cable you can trust is the one that came with your gizmo.

          We’ve gone from having function-specific ports that were visually distinct, through an all-too-brief golden age of “match the plugs and it’ll probably work”, to a bunch of function-specific ports which all look the same. Anyone involved with USB C 3.11 for Workgroups should be deeply ashamed of themselves, with the exception of that Benson guy who calls people out on their terrible cables.

          1. -1

            Not only that, I’ve seen people move their USB Type C 3.11 for Workgroups with Power Delivery charger around each port of their computer to try and find the one which will actually accept power.

            So rather than acknowledging that when companies do the right thing, and make all USB-C ports accept power, it’s easier for the user, you instead choose to blame the standard which allows said ease of use, rather than the shitty manufacturer who implemented it in a half-assed way to save a few dollars.

            The laptop will only boot from USB Type C 3.11 for Workgroups Mass Storage Edition on certain ports, and blithely ignore boot media in others

            Yet again, completely unrelated to USB - your laptop is a POS.

            We’ve gone from having function-specific ports that were visually distinct, though an all-too-brief golden age of “match the plugs and it’ll probably work”, to a bunch of function-specific ports which all look the same.

            We’ve gone from dozens of single-use ports that are fucking useless for the user if they don’t happen to have that type of peripheral, and will make the peripheral useless with their next computer because the specific set of single-use ports will have changed and converters are simply not practical or available, to the ability for manufacturers to provide ports that are multi-purpose, and can connect multiple legacy single-use ports with inexpensive, readily available adapters.

            This same argument (single-use ports are better) is made about even expensive laptops, like the MacBook Pro. People whine and whinge about the lack of HDMI and fucking SD card readers - and ignore that they’re completely useless for a whole bunch of people.

            1. 3

              So rather than acknowledging that when companies do the right thing

              I have literally never seen anyone do this completely right.

              You instead choose to blame the standard

              No, I blame everyone. You know this industry: the ideal world specified by a standard and the set of implementations people must interoperate with are often two distinct worlds.

              […] connect multiple legacy single-use ports with inexpensive, readily available adapters.

              The few adapters I have seen have neither of these properties. Sitting a laptop in a plate of dongle-spaghetti is not an improvement. And then you have to break out your magnifying glass to find out whether this particular adapter talks DisplayPort alt-mode or DisplayLink. Reading online, one is painless and the other is impossible.

              Oh, and: this is painful enough for people who work with tech for a living. I feel for all the normal people who have had this shoved onto them; I have no idea how anyone not immersed in this stuff could make head or tail of it.

              1. 1

                I have literally never seen anyone do this completely right.

                Apple’s TB3-supporting computers all do it “right”, and even their now-discontinued MacBook (which had USB-C but not TB3) did it “right”, from what I can see.

                The few adapters I have seen have neither of these properties.

                Few? Have you actually looked for any? USB-C to <Insert combination of USB-A, Ethernet, Some form of video, Some form of card reader> are ridiculously common amongst accessory makers.

                Sitting a laptop in a plate of dongle-spaghetti is not an improvement.

                So, before USB-C was a thing, the devices somehow didn’t have wires? With adapters you’re doing one of two things:

                • you’re connecting one or more devices via single-port adapters - in which case you just have slightly longer cables; or
                • you’re connecting multiple devices to a single multi-port adapter - in which case you’ve moved the ‘spaghetti’ of multiple cables away from your computer..

                And then you have to break out your magnifying glass to find out whether this particular adapter talks DisplayPort alt-mode or DisplayLink.

                I don’t even understand this complaint, unless you just searched for “weird proprietary confusing display tech” and got a result for DisplayLink. The manufacturers who support it in hardware seem to be limited to those who also make the same shitty decisions like “hey we’ll put 8 USB-A ports, but only 2 of them are high speed, guess which”.

        2. 1

          Annoying, isn’t it? And they can’t undo the decision.

          The flipside is that I just carry around wired earpods in my pocket wherever I go. It’s okayish.

          I keep telling myself “They need that room on the hardware for other things, like AR.” But I’m unfamiliar with hardware engineering, so that’s just a bedtime story.

          The worst is that all the adapters for car <-> phone are useless now. And Bluetooth cars aren’t really prolific, at least among my family members.

          1. 1

            I think the reality is that they save a little bit of money on the BOM by leaving out the jack and associated components, and when they sell thousands/millions of units they earn a bit extra.

            1. 5

              Adding to your BOM idea, it’s also more expensive to waterproof an audio jack, from what I have heard.

              1. 2

                I’m not the expert, but I don’t see why waterproof headphone jacks would be more expensive than waterproof USB ports.

                1. 4

                  You already need to have a USB port, so a headphone jack is one more thing to waterproof/IP certify.

                  1. 1

                    But is the cost of that anywhere near significant on the total cost of developing a new phone that will sell millions of units?

                    1. 3

                      Bean counters are that way… There was a managed switch by Ubiquiti where the OS had serial console support, the board had the controller, and even had the RS232 header in place, but the port was not soldered. Some people ended up cutting a hole in the enclosure and soldering the port to it.

                      That would be a very cheap addition with a lot of value for the customer. But someone probably got a bonus for saving $0.01 per unit.

                    2. 1

                      That’s kind of my whole point above.. saving a few dollars on a unit when you expect to sell millions of them adds up.

                      1. 1

                        That only considers the cost side. There’s also a benefit side: more people interested / not turned off, so you sell more units. If cost is low enough, adding a feature is a no-brainer. I wonder about the math here.

                        1. 1

                          That’s not what is happening though. It would seem that consumers are ‘too invested’ in the Apple brand, for example, to move away entirely from the product line when Apple decides to remove features.

              2. 2

                Yes, I think it was so claimed by Apple when they got rid of the headphone jack and added IP67 dust and water resistance — both in the same iteration with iPhone 7.

                TBH, it doesn’t necessarily make much sense — what’s the big deal with simply designing a proper IP67-rated headphone jack component like they already do with all the other parts?

      2. 3

        I’d be fine using Bluetooth everywhere if it actually heckin’ worked. I tried to pair my phone with my car once to play music without an aux cable. Never again.

      3. 2

        What’s wrong with DVDs? They’re now so wide open pretty much anything will play them.

        1. 4

          DVD-ROMs are okay, but the video DVD format builds on top of PAL/SECAM/NTSC analog television with interlacing, which is too harsh a legacy. It’s basically a crudely digitized VHS. It has to go, just like Kodak Photo CD did, even though JPEG is maybe an even more ancient tech.

        2. 3

          Image quality is what’s wrong with DVDs IMO.

        3. 3

          I literally don’t have a device in my house that will play DVDs. I have two 2018 computers (one mini desktop, one laptop), a 2011 laptop, and a 2018 (purchased, probably 2017 model) receiver.

          The weird thing is my car will (apparently, I’ve never actually tried it) play a DVD.

        4. 1

          They lose data very quickly. Even allegedly archival-quality DVDs.

    13. 6

      Slightly off-topic, but I bet someone could make a business out of auditing npm packages for malicious or obviously harmful code. Your company can send me the package-lock.json for your app, and I’ll go through and check every single package at a rate of X dollars per thousand lines of code, using a combination of automated tooling and manual review. Every time you do an update, I’ll review the packages and how they’ve changed. Any malicious stuff that I find gets posted publicly in the node security alerts.

      Sure, someone could write some really clever code that avoids detection, but you can rest easy knowing that at least one person has looked at that pile of code in your node_modules folder before it was shipped to production.

      1. 5

        Already a business! I know a few companies offering what you suggest: npm itself, GitHub, Snyk…

    14. 2

      I wonder how fast it is compared to V8. I don’t know of any published numbers for how fast V8 runs the ECMAScript Test Suite, which was the main metric provided in this post.

      1. 3

        I assume V8 is much faster since it JITs.

        1. 4

          JITs help most with repetitive / tightly looped code. I don’t think that’s the common case for JS. Certainly it’s an important case for some types of applications, e.g. I’m sure Google Sheets couldn’t handle large spreadsheets without a JIT. But I’m willing to bet the majority of websites see no measurable benefit from V8’s JIT. So I’m much more interested in comparing speed evaluating the ECMAScript Test Suite than, say, rendering the Mandelbrot set.

          1. 2

            These days V8 has an interpreter to aid fast startup and to avoid doing unnecessary work for code that’s only run once or twice. Given the effort the various JS engines have made over the past 15 years or so in improving performance of real world JS I generally trust they’re doing what they can.

            1. 2

              Right, I’m not saying I think QuickJS might beat V8. I’m just wondering how close it comes. 10% of V8 would not impress me, but 80% (for JIT-unfriendly workloads) would be a significant achievement.

              1. 2

                Folks reported it is closer to 3%

                1. 2

                  Wait, as in what takes 3ms in V8 takes 100ms in QuickJS (i.e. 97% slower)? Or what takes 97ms in V8 takes 100ms (i.e. 3% slower)?

                  My guess, given Peter’s framing, is the former…

                  1. 4

                    300µs startup and teardown time is pretty quick though. On my MacBook Pro nodejs takes 40ms wall time to launch and stop.

                    node <<< '' 0.04s user 0.01s system 91% cpu 0.058 total

                    So for quick scripts where the wall time would be dominated by those 40ms, QuickJS would win. That immediately makes me think of cloud serverless scripts (Google Cloud Functions, AWS Lambda, Azure Functions).

                    I’m also curious about @st3fan’s 3% figure, what people? And where? But it seems plausible to me.

                    1. 2

                      It’s not a fair comparison though. Node is a framework, it’s not a JS engine. Try comparing with d8, which is the v8 shell.

                      For instance:

                      TIMEFORMAT='%3R'; time (./qjs -e '')
                      0.007
                      TIMEFORMAT='%3R'; time (v8-7.6.303 -e '')
                      0.031
                      TIMEFORMAT='%3R'; time (node <<< '')
                      0.069
                      

                      Still a big difference between v8 and quickjs obviously, but now we’re not looking at how long node takes to load the many javascript files that it reads by default (for instance to enable require). :)

      2. 2

        I’m hosting my blog on GitLab Pages, so that should be unlikely.

        1. 2

          Not sure where exactly the issue is but I can’t access it either. DNS doesn’t resolve here.

          ❯ dig ls-la.fyi
          
          ; <<>> DiG 9.10.6 <<>> ls-la.fyi
          ;; global options: +cmd
          ;; Got answer:
          ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16188
          ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
          
          ;; OPT PSEUDOSECTION:
          ; EDNS: version: 0, flags:; udp: 4096
          ;; QUESTION SECTION:
          ;ls-la.fyi.			IN	A
          
          ;; Query time: 6 msec
          ;; SERVER: 192.168.178.1#53(192.168.178.1)
          ;; WHEN: Fri May 24 23:08:38 CEST 2019
          ;; MSG SIZE  rcvd: 38
          
        1. 2

          Much appreciated!

    15. 3

      I think it’s not different enough from Markdown to be interesting. Sure it has text diagrams, but you can put text diagrams in Markdown too. You can also add some JavaScript to your Markdown files to have them render automatically in a browser (although I’m not sure I see a point to this, as opposed to just rendering the file to HTML before publishing it).

      It basically feels like a proprietary variant of Markdown, tied to a specific JS lib, so once you start using it, it becomes hard to move your data. Regular Markdown (based for instance on CommonMark), by contrast, will work pretty much everywhere.

      1. 4

        While I agree with your sentiment I should point out that the link says that markdeep is an open-source hobby project, not proprietary. (I know you wrote feels like but I thought this note was still worth pointing out.)

      2. 3

        I use it in my blog, and Markdeep is amazing.

        I’m using the diagram feature, and for me it’s basically the decision of either using Markdeep, or not providing diagrams at all, because Markdeep just works.

        1. 1

          Do you do diagrams by hand? I always get annoyed tweaking the whitespace.

          1. 1

            Well there’s always things like artist-mode in emacs, or DrawIt in vim. More convoluted than dia or visio, but they do have the advantage of being inline in a readme.

      3. 1

        I think adding diagrams is valuable; however, it should maybe be done as a contribution to Markdown.

        1. 8

          There’s no contributing to Markdown, which is the genesis story of CommonMark.

    16. 1

      Any site that has a “close and accept” policy - I close the tab.

      I don’t do well with a fait accompli.

      1. 2

        It’s wordpress.com, all blogs there have it, I think.

      2. 1

        That’s the point. They are obliged to inform you and to get your acknowledgement that you have been informed. If you choose not to be informed or not to accept the explanation, you should not consume the content. So closing the window is exactly what they want you to do.

        Have you even read what you are acknowledging though?

        1. 2

          If you read some of these popups, you see many that tell you they are not doing such things and e.g. only place a cookie to keep you logged in. As @gerikson says, they are, or feel, required to do so for legal reasons.

          You come across as flippant and unreasonable.

        2. 1

          You are missing the point here. It is a requirement to comply with the “EU cookie law”: https://en.m.wikipedia.org/wiki/Privacy_and_Electronic_Communications_Directive_2002#Cookies

          Most people click “accept” without reading or understanding, but websites still have to implement this to avoid legal issues; the same goes for those lengthy fine-print agreements you accept to install or use most (proprietary) software, or the sign-up terms of service on Google or Amazon. Same with most OSS licenses (IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO…).

          The vast majority doesn’t read nor care, yet it must be there to comply with some laws or regulations or to avoid costly, easily avoidable legal problems.

        3. 0

          If it’s for GDPR - there are hefty penalties for not being seen as informing visitors on how browsing info will be used.

          You are paying the price for the service provider not getting nuisance-sued by someone wanting a big payout from a judgement for not following GDPR.

          1. 1

            It’s against the law not to do it. It doesn’t matter if it works or not. And people have to pick their battles.

            Spawning a big offtopic thread about it is kind of unfortunate since the universal upvote algorithm guarantees that such noise will typically rise to the top. It’s easier to upvote a complaint about illogical laws than a thoughtful comment about EOF.

            1. 0

              Sure, but there are people who feel similarly to you. In fact, I’ve become the crotchety old man that my younger self would roll their eyes at.

  • 0

    These dialogues are the modern version of the question on the 1938 Austrian referendum (where the ballots had famously different-sized boxes).

  • 2

    The description implies that Elixir is supported through Linux, but that Erlang can run within an RTOS. I am confused; isn’t Elixir code just Erlang code? Wouldn’t Elixir code therefore be able to run without Linux being involved?

    1. 7

      There are two software stacks involved here: our GRiSP stack, which supports Erlang and Elixir and links together with RTEMS (something you would call a unikernel), letting the BEAM (the VM whose bytecode Erlang and Elixir compile to) run directly on the hardware. RTEMS is not really a layer, since you can access hardware registers directly.

      But we also support the Nerves software stack which runs the Erlang VM from process number 1 under an embedded Linux kernel giving you full control on what runs in user space.

      Both approaches have their advantages and disadvantages.

      So there are 3 options: GRiSP using Erlang, GRiSP using Elixir and Nerves using Elixir.

    2. 3

      Nerves (an embedded Elixir platform/framework) requires Linux; I think that’s what the description says.

      Elixir code is not Erlang code but both compile to the same byte-code and this byte-code should be able to run on GRiSP 2 without Linux.

      1. 1

        Is this an authoritative answer, or a guess? Cause I already have one of those two. :)

        1. 2

          Not authoritative; this is the first time I’ve heard about this project, but since it talks about a complete, full-blown Erlang VM, I feel confident stating that OTP releases don’t need anything more than that. :)

          It also seems that GRiSP already had support for Elixir (compiling an OTP release using Distillery).

          1. 6

            It also seems that GRiSP already had support for Elixir (compiling an OTP release using Distillery).

            Hi there. I can definitively say that you can run bare-metal Elixir. I have done so. You can read more about it here https://medium.com/@toensbotes/going-bare-metal-with-elixir-and-grisp-8fa8066f3d39 and here https://medium.com/@toensbotes/iex-remote-shell-into-your-elixir-driven-grisp-board-76faa8f2179e. I have also done some other things, such as communicating with an Arduino over SPI, but I have not gotten around to writing anything about that.

            1. 1

              Fantastic, thanks for the reply!

              1. 2

                No problem. I am quite keen on boosting the adoption of embedded Erlang and Elixir. Please do not hesitate to get involved.

      2. 1

        Actually Elixir code is Erlang code as the Elixir compiler generates Erlang AST. Just to be nit-picky. :-)