There is already a widely used protocol called RTP, so the name is far from ideal.
weird that it doesn’t have a registered url scheme or service name
RTP doesn’t use URLs or a fixed port number.
That’s because it’s mostly used as a building block for other things. For example, SIP or Jingle (over XMPP) will handle session initiation for RTP streams, so the equivalent of a URI is in the session-initiation protocol and there is no well-known port because it’s dynamically negotiated by that protocol. Similarly, RTP is used in WebRTC where the addresses are typically the HTTP endpoints of the control programs and the port is again dynamically assigned.
RTP isn’t quite as widely used as HTTP, but it’s incredibly common and, among other things, is supported by all mainstream web browsers. It’s not exactly niche.
Because it’s not that type of protocol, it’s a generic transport protocol like TCP.
Don’t forget the other RTP protocol (real-time payments).
I think CoAP should be the one protocol to rule them all.
I also like it, but using UDP tends to limit its usability from my experience.
But CoAP is not limited to UDP. I should have elaborated why I think it’s nice. Here are some points:
It runs over many transports: UDP (RFC 7252), TCP, TLS, and WebSockets (RFC 8323), and probably many more. Experimental transports include the Bundle Protocol for delay-tolerant networking.
It’s designed for constrained environments and is fairly simple to implement (low parsing complexity is a stated goal).
It’s bi-directional: the Connection Acceptor can also make requests to the Connection Initiator.
Many simultaneous in-flight requests are allowed, giving good performance over high-latency links.
I think the goals of the RTP proposal are met pretty well with CoAP.
I’m quite familiar with CoAP (having contributed a little bit of code to Californium), but I wasn’t familiar with the standardization of CoAP over TCP, TLS and websockets. That opens up quite a bit of new applications!
If that is still true (and I don’t believe it is), it will change soon. QUIC also uses UDP and HTTP/3 uses QUIC instead of TLS+TCP. CloudFlare reported HTTP/3 was 30% of the traffic that they were seeing over a year ago. I believe Chrome now defaults to it. Anything that doesn’t work well with UDP is going to be hard to use in the near future.
At the low end, CoAP(+DTLS)+UDP requires about half as much instruction memory as MQTT-TLS+TCP, which makes it very attractive for cost-sensitive devices.
If anything, I wouldn’t be surprised if using TCP becomes a marker of a legacy protocol in the next decade. QUIC provides so many benefits relative to TCP+TLS that there’s little reason to prefer the latter in any place where you have an option.
As mentioned above, CoAP over TCP, TLS and Websockets also has standards RFCs, which I wasn’t familiar with.
It has, though on resource-constrained devices not using TCP or TLS is a huge part of the win. Removing TCP roughly halves the code + data memory requirements for the TCP/IP stack I’m working with. Allowing a simple DTLS implementation instead of a full TLS implementation also significantly reduces code size.
Some comments on protocol design. As background, I’ve spent a long time doing different protocols, and have written a bunch of RFCs.
The protocol has a header with fixed fields. But different types of headers randomly have additional (and different) fixed fields. This means that the protocol is not extensible or changeable. Nearly every protocol ends up needing changes, so designing this in now would be good.
Integers as little endian is just wrong. All IP protocols are big endian. Little endian protocols are generally designed by vendors without input from anyone else.
63 bits for a token identifying requests and responses is likely too much. Is the protocol going to have 2^63 outstanding requests? is there a way to pipeline multiple outstanding requests on one TCP stream?
Or can the token be used across multiple connections? i.e. “READ foo”, then the connection is closed, and a subsequent connection opened and does “READ foo” with the same token. Can the server assume that the reply can be the same? if so, can it be cached? For how long? What happens if the second connection uses the same token, but asks for “READ bar”?
If the token can’t be used across multiple connections, then why is it 63 bits? Is the connection going to be open long enough that it will use 2^63 requests?
It’s a good idea to punt on compression and encryption. Either just use TLS, or request a zipped file.
Thank you for your time and input!
The idea was that version negotiation will happen through an outer layer. Although, that would require burning through 1 port per version, so it’s probably not the best idea. the latency of one more handshake is probably not noticeable when you’re downloading gigabytes of data.
I’ve seen little endian in network-related formats more and more recently. I understand that, in theory, big endian allows partial processing before the entire message is read, but i was under the impression this is only really relevant for protocols processed by hardware, and is barely relevant at all with modern connection speeds.
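For readers who haven’t dealt with this before, here is a minimal illustration of the two byte orders for a 64-bit value (this is generic Rust, not the proposal’s actual wire format):

```rust
fn main() {
    let len: u64 = 0x0102_0304_0506_0708;

    // Network byte order (big endian), as used by most IETF protocols.
    assert_eq!(len.to_be_bytes(), [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);

    // Little endian, matching the native order of x86 and most ARM builds.
    assert_eq!(len.to_le_bytes(), [0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01]);
}
```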
this was mainly done for alignment reasons in order to not require an unaligned 64 bit read for the following fields.
also, tokens are expected to be unique for the lifetime of the connection, not just while they are outstanding. READ requests reference the token of previous OPEN requests.
and yes, the protocol is designed so that you can send an OPEN request, then several READ requests, without waiting for the first request to complete.
encryption is a layer above (TLS) and data compression is a layer below.
Some things that immediately occur to me:
no support for MIME types
no support for QUERY or POST
no redirects
the open/read pattern is poor for startup latency
how does a client handle windowing for concurrent requests? that has often been a serious performance problem for similar protocols such as sftp, especially for high latency links
the random access read model requires the server to buffer the whole of a dynamically generated file, it can’t stream to the client like HTTP
Oh, another thing: If you have a custom URL format, you need to specify how that relates to TLS server name authentication, as described in RFC 9525
can you elaborate on this? i was under the impression that allowing QUIC as a transfer protocol would solve this problem.
Your intro says it’s supposed to work over TCP or TLS, in which case concurrent transfers will probably need something like http/2 flow control. To make the most out of a link with a large bandwidth-delay product you need to keep enough requests in flight to fill the link. You might decide that you only support firehose mode, since your protocol isn’t interactive, in which case you can probably ditch flow control. IIRC the reason for the hpn-ssh patches was that ssh is fairly chatty so its throughput could be accidentally limited by its buffer size.
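To make that concrete, a rough back-of-the-envelope sketch (the link numbers and READ size are made up, not from the spec):

```rust
fn main() {
    // Assumed link parameters, purely for illustration.
    let bandwidth_bits_per_sec = 100_000_000.0_f64; // 100 Mbit/s
    let rtt_secs = 0.100_f64;                       // 100 ms round trip
    let read_size_bytes = 64.0 * 1024.0;            // hypothetical READ length

    // Bandwidth-delay product: how many bytes must be in flight to fill the link.
    let bdp_bytes = bandwidth_bits_per_sec / 8.0 * rtt_secs; // 1,250,000 bytes
    let reads_in_flight = (bdp_bytes / read_size_bytes).ceil(); // ~20 concurrent READs

    println!("BDP = {bdp_bytes} bytes, so keep about {reads_in_flight} READs outstanding");
}
```

A strictly sequential request/response loop on that same link would top out at roughly one READ per RTT, which is why windowing (or at least a sane default) matters here.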
In general, multiplexing over TCP is a big danger area in protocol design, leading to lots of performance and complexity problems if you aren’t careful. When I see a protocol doing it I hope to see some discussion in the spec about how the dangers are avoided or otherwise addressed.
this is false. due to using client-assigned identifiers, it is entirely possible (and indeed, recommended) to send an OPEN and READ request in a single TCP packet.
the protocol is not designed for transfer of dynamically generated files.
Ah I totally missed which end assigns the file handle. I thought that was part of the usual concurrent protocol pattern in which there’s a request ID which is echoed in the corresponding response.
On re-reading it now it looks like it only allows one open per connection since I can’t see what links READs to OPENs? What happens if there’s more than one OPEN?
If it doesn’t support uploads or dynamic queries then the title is overselling it just a wee bit.
ah, yeah, i guess i was thinking the token field does that, but that’s also supposed to be unique per request… oops.
i guess READ needs an open_token field in addition to its own token field.
yeah, i definitely didn’t do a good job of communicating the protocol’s limited scope.
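To make the open_token idea above concrete, here is a hypothetical sketch (the field names and widths are mine, not the spec’s); because the client assigns every identifier, an OPEN and the READs that reference it can be written back-to-back without waiting:

```rust
// Hypothetical message layouts, just to illustrate the token relationships;
// nothing here is taken from the actual draft.
struct Open {
    token: u64,  // client-assigned, unique for the lifetime of the connection
    url: String, // resource to open
}

struct Read {
    token: u64,      // this request's own client-assigned token
    open_token: u64, // references the OPEN that this READ reads from
    offset: u64,
    length: u64,
}

fn main() {
    // Because the client assigns both identifiers, it can pipeline an OPEN
    // and its first READ in a single write, without waiting for a response.
    let open = Open { token: 1, url: "example/file".to_string() };
    let read = Read { token: 2, open_token: open.token, offset: 0, length: 64 * 1024 };
    let _ = (open, read); // serialization and sending omitted
}
```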
To this end, you mention 9p: why not just use 9p?
Article states a goal is:
Oh duh
If you don’t come for version negotiation, version negotiation comes for you.
gemini seems to have avoided it so far.
i’ll consider it for the next iteration of this idea.
although, it’s worth noting, HTTP/2 doesn’t actually use the http version negotiation system. systems like IRC have shown that a properly specified “unrecognized message” behavior can also be a workable route for extensions. HTTP/2 also takes advantage of this via the fact that unrecognized headers are ignored.
In general, you just want to leave yourself at least one place where expansions can go: a “wish for more wishes” so you don’t have to start over when it doesn’t turn out perfect the first time (because nothing does). Limiting request type to a single bit is a sort of extreme case, since adding the 3rd request type will be a breaking change.
One of the best ideas I’ve seen is having a pair of slots for options: one for mandatory options (immediately fail if you don’t understand it) and one for optional options (ignore if you don’t understand). For example, an option requesting compression can be ignored if it’s not supported, but an option requesting the list of the SHA256 of each file chunk can’t.
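A sketch of that rule (loosely modeled on how CoAP and TLS treat unknown options/extensions; the option ids and layout below are invented for illustration):

```rust
// Illustrative only: each option carries a flag saying whether it is safe to ignore.
struct ProtoOption {
    id: u16,
    mandatory: bool, // if set and the id is unknown, the request must fail
    value: Vec<u8>,
}

// Option ids this implementation happens to understand (made-up values).
const KNOWN_OPTIONS: &[u16] = &[1 /* compression hint */, 2 /* per-chunk SHA256 list */];

fn check_options(options: &[ProtoOption]) -> Result<(), String> {
    for opt in options {
        if KNOWN_OPTIONS.contains(&opt.id) {
            // handle the option normally...
        } else if opt.mandatory {
            // e.g. a request for per-chunk hashes: ignoring it silently would be wrong
            return Err(format!("unsupported mandatory option {}", opt.id));
        }
        // unknown but optional (e.g. a compression hint): just skip it
    }
    Ok(())
}

fn main() {
    let opts = vec![ProtoOption { id: 99, mandatory: false, value: Vec::new() }];
    assert!(check_options(&opts).is_ok()); // unknown + optional is fine
}
```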
I see the author posted this here themself, and I’m assuming the intent is to gather feedback from a highly technical and thoughtful community.
The author’s intended use case – distribution of large files – sounds close to the same rationale that has often led others to adopt the Bittorrent protocol. For example, within Facebook’s datacenter, they at one point were using Bittorrent to distribute code updates across server clusters. I’m surprised the “Goals” section so flippantly discards Bittorrent (and the other protocols). I’d be curious to know more about why RTP is expected to improve single-source performance over Bittorrent, or improve “simplicity” over HTTP?
The proposal has many design flaws, as others have eloquently identified. But more broadly, I feel the author would be more successful in conveying their ideas with a writing style that shows more respect for both their own readers and for the authors of existing protocols.
I would like to read an introductory section that provides context. E.g. I like Simon Peyton Jones’ 4-point approach:
I really thought the “Goals” section was enough to outline what the problem is.
basically, http is too complex and easy to mess up, while simpler protocols don’t have the required features for robust error handling.
The basics of HTTP are not complex. Certainly not basic serving of static files, even using Range requests. I’ve implemented this stuff several times in my career.
The nice thing about HTTP is also that you generally don’t have to implement it; any platform down to the lowly embedded ones already has an HTTP library.
There’s also a vast ecosystem around HTTP for proxying, filtering, debugging, analytics, you name it.
If you just want to invent a boutique protocol for fun — which is, I suppose, why Gemini exists — then go for it; but I don’t see any reason this new protocol is necessary or an improvement. In general it just contributes to the “now you have 14 standards” problem.
If all you want is a 90% solution, sure, it’s easy.
I’ve spent the past few months fixing and fighting 90% solutions.
sure, there’s a lot of tools for working with http, but they’re mostly focused on the 90% that already works. i haven’t been able to find any sort of forward proxy to automatically retry connections (the closest i know of is plan9’s aan, but i no longer run plan9), which is what i need to work around http clients that are insufficiently robust to random tcp disconnects.
I quite like the ideas behind the protocol. I think it’s a good example of how supporting only one specific use-case (reliable transfer of large files whose content is static) allows optimizing for it much better (e.g. there’s no need for cache-control since the file contents can never be out-of-date, and mirrors don’t require any trust). I am looking forward to reading the revised version; here are a couple more notes about the design that you might want to consider when doing it.
The length of the resource is indicated in the URL using the l query parameter, instead of being retrieved from the server. However, since the l parameter is not obligatory, a client might not know how large the file it is downloading is, and the protocol has to accommodate this by allowing short reads. While the rationale for short reads is to allow a client to request the maximum possible size and receive the full contents, as written the protocol allows short reads for any reason, even if there is more data still to transfer.
Like with *nix read(2), a correct implementation will need a loop around every read that advances the offset and retries, until the whole operation either fails or succeeds (a sketch of such a loop is below). As this exists to accommodate clients that don’t know the size of the resource, I’d expect servers to almost never do short reads, even though the protocol permits them. This means (again like with *nix read(2)) that an incorrect implementation that expects a read to return all available data will seem to work correctly most of the time, allowing for subtle bugs.
An alternative approach would be to have OPEN also return the size of the file. While this would add a round-trip of latency for files whose length is not specified in the URL, it would allow simplifying the protocol and the implementations by making all READs exact-length.
Clients are expected to verify the hash(es) in the URL when downloading the file. If you do want to support downloading only a specific subset of the data (as the rationale for READ.offset suggests), you do not have the data required to calculate the hash locally. While I think it’s fair to punt on this for the initial design of the protocol, in the future this could be implemented using something like Bao – it would make sense to leave some way of extending the protocol (e.g. more possible values for requests that you could use to fetch the child hashes).
Allowing short reads also allows embedded servers that use fixed-size buffers for responses.
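For concreteness, the retry loop mentioned above looks roughly like this; read_chunk is a stand-in for whatever a single protocol READ ends up returning, not a real API:

```rust
use std::io;

// Stand-in for a single protocol READ: it may return fewer bytes than requested
// even when more data remains (a "short read").
fn read_chunk(offset: u64, buf: &mut [u8]) -> io::Result<usize> {
    // ...send READ { offset, length: buf.len() } and copy the response into buf...
    let _ = (offset, buf);
    unimplemented!("transport omitted in this sketch")
}

// What a correct client has to do: keep advancing the offset and retrying
// until the requested range is completely read, or an error/EOF occurs.
fn read_full_range(mut offset: u64, buf: &mut [u8]) -> io::Result<()> {
    let mut filled = 0;
    while filled < buf.len() {
        match read_chunk(offset, &mut buf[filled..])? {
            0 => return Err(io::ErrorKind::UnexpectedEof.into()),
            n => {
                filled += n;
                offset += n as u64;
            }
        }
    }
    Ok(())
}
```

An implementation that calls read_chunk once and trusts the result will pass most tests against servers that never short-read, which is exactly the subtle-bug trap described above.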
I think this proposal has potential, particularly in the context of distributed social media. Imagine a Mastodon post that includes an image with an RTP link that says “you can download this image from my instance or one of 5 other instances I trust”, spreading out the load when the entire network requests that image from its original server. Or a Nostr post that embeds an image with several possible sources, ideally preventing bitrot as image hosts inevitably go down.
No support for dynamic data… Not really useful for dynamic applications.
These images aren’t dynamic in that way; their content hashes are (can be) known.
https://datatracker.ietf.org/wg/ppsp/documents/ may be of interest as prior work.
The paper mentions a relationship between RTP and BitTorrent, but it doesn’t seem like RTP is a p2p protocol? Is it just the magnet links stuff? And I’m confused about the benefit of the magnet links content hashing stuff. Could you elaborate?
RTP is not a p2p protocol, but is designed to support swarm downloads without sacrificing single-server performance.
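This is my reading of how that could look on the client side, not anything the draft specifies: because the URL carries the content hash and (optionally) the length, a client can split the byte range across whichever mirrors it knows about and verify the reassembled result, with no trust or coordination between the mirrors. A toy sketch of the chunk planning:

```rust
// Toy illustration only: assign fixed-size chunks of a file of known length
// round-robin across a set of mirrors. Hash verification of the reassembled
// file (against the hash in the URL) happens afterwards and is omitted here.
fn plan_chunks(total_len: u64, chunk_size: u64, mirrors: &[&str]) -> Vec<(String, u64, u64)> {
    let mut plan = Vec::new();
    let (mut offset, mut i) = (0u64, 0usize);
    while offset < total_len {
        let len = chunk_size.min(total_len - offset);
        plan.push((mirrors[i % mirrors.len()].to_string(), offset, len));
        offset += len;
        i += 1;
    }
    plan
}

fn main() {
    // Hypothetical mirror names.
    let plan = plan_chunks(5 * 1024 * 1024, 1024 * 1024, &["mirror-a.example", "mirror-b.example"]);
    for (host, offset, len) in &plan {
        println!("READ {len} bytes at offset {offset} from {host}");
    }
}
```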
Well… If no one else is gonna, I will. For posterity, of course: https://xkcd.com/927/
Seems meh. Why?
because http is a complete mess that is impossible to get right.
my ISP likes randomly closing TCP sockets, and i am tired of hacking around insufficiently robust http implementations.
as far as i can tell, there is not a single extant http library that actually does things correctly out of the box. curl can get pretty close by throwing --continue-at - --retry-all-errors at everything, but even that doesn’t correctly handle mid-air collisions (you need If-Range for that, and I don’t know if there’s any easy way to do that with curl).
And how does a new protocol that no one uses and will never use help with that?
The point is to explore alternate design spaces.
I think the protocol still needs a bit of work, but has potential. i like the simplicity, which puts it in the same class as gemini, while also addressing gemini’s problem of delivering larger files properly.
how can i follow the protocol development?
subscribe to my blog! you can follow it either via ActivityPub or RSS.
if you don’t care about my various rust ramblings, you can subscribe to just my posts about networking by putting https://paper.wf/binarycat/tag:networking/feed/ into your feed reader.
the revised proposal is now up