{"id": "q-en-7710bis-18089d5a8413692b2c416e1df3d2d3bf04184bdc0ffbfde974fbc7b585d7b9dc", "old_text": "interacting with the captive portal is out of scope of this document. [ This document is being collaborated on in Github at: https://github.com/wkumari/draft-ekwk-capport-rfc7710bis. The most recent version of the document, open issues, etc should all be available here. The authors (gratefully) accept pull requests. Text in square brackets will be removed before publication. ] 1.", "comments": "\u2026ptive-portals/vIDCvkyKXdvMGjwvOHuTsSgFzt8/\nLGTM, for this change to address 1/2 of the referenced comments.", "new_text": "interacting with the captive portal is out of scope of this document. [ This document is being collaborated on in Github at: https://github.com/capport-wg/7710bis. The most recent version of the document, open issues, etc should all be available here. The authors (gratefully) accept pull requests. Text in square brackets will be removed before publication. ] 1."} {"id": "q-en-7710bis-18089d5a8413692b2c416e1df3d2d3bf04184bdc0ffbfde974fbc7b585d7b9dc", "old_text": "If the URIs learned via more than one option described in option are not all identical, this condition should be logged for the device owner or administrator. URI precedence in this situation is not specified by this document. 4.", "comments": "\u2026ptive-portals/vIDCvkyKXdvMGjwvOHuTsSgFzt8/\nLGTM, for this change to address 1/2 of the referenced comments.", "new_text": "If the URIs learned via more than one option described in option are not all identical, this condition should be logged for the device owner or administrator. Implementations can select their own precedence order. 4."} {"id": "q-en-7710bis-18089d5a8413692b2c416e1df3d2d3bf04184bdc0ffbfde974fbc7b585d7b9dc", "old_text": "credentials, etc. By handing out a URI using which is protected with TLS, the captive portal operator can attempt to reassure the user that the captive portal is not malicious. Operating systems should conduct all interactions with the API in a sand-boxed environment and with a configuration that minimizes tracking risks. ", "comments": "\u2026ptive-portals/vIDCvkyKXdvMGjwvOHuTsSgFzt8/\nLGTM, for this change to address 1/2 of the referenced comments.", "new_text": "credentials, etc. By handing out a URI using which is protected with TLS, the captive portal operator can attempt to reassure the user that the captive portal is not malicious. "} {"id": "q-en-ace-dtls-profile-5345ea0e50c8f08b4314e4cc5284486c9d6eebdf710baff56b2417b93377af64", "old_text": "but not the requested action. The client cannot always know a priori if an Authorized Resource Request will succeed. If the client repeatedly gets error responses containing AS Creation Hints (cf. Section 5.1.2 of I-D.ietf-ace- oauth-authz as response to its requests, it SHOULD request a new access token from the authorization server in order to continue communication with the resource server. Unauthorized requests that have been received over a DTLS session SHOULD be treated as non-fatal by the RS, i.e., the DTLS session", "comments": "This change addresses the issue raised by NAME among the authors that it is worth mentioning This is already addressed in the framework document but to avoid implementation errors it seems worth mentioning here as well.\nMerged as two other authors have agreed.", "new_text": "but not the requested action. The client cannot always know a priori if an Authorized Resource Request will succeed. 
It must check the validity of its keying material before sending a request or processing a response. If the client repeatedly gets error responses containing AS Creation Hints (cf. Section 5.1.2 of I-D.ietf-ace-oauth-authz as response to its requests, it SHOULD request a new access token from the authorization server in order to continue communication with the resource server. Unauthorized requests that have been received over a DTLS session SHOULD be treated as non-fatal by the RS, i.e., the DTLS session"} {"id": "q-en-ace-oauth-5d9f01555b374c47906823be5c7883295e9ffed9b618c0904e1e88417de150c8", "old_text": "if the token is later transferred over an insecure connection (e.g. when it is sent to the authz-info endpoint). Developers MUST ensure that the ephemeral credentials (i.e., the private key or the session key) are not leaked to third parties. An adversary in possession of the ephemeral credentials bound to the access token will be able to impersonate the client. Be aware that this is a real risk with many constrained environments, since adversaries can often easily get physical access to the devices and therefore use phyical extraction techniques to gain access to memory contents. This risk can also be mitigated to some extent by making sure that keys are refreshed more frequently. 6.3.", "comments": "I removed the part about the DoS via AS Discovery hints since it was unrealistic.\nI changed the text. Text added. Text added. Here is the text from Section 7.2 of draft-ietf-ace-dtls-authorize: URL All editorial comments addressed.\nPR addressing comments can be found here: URL\nMerged", "new_text": "if the token is later transferred over an insecure connection (e.g. when it is sent to the authz-info endpoint). Care must by taken by developers to prevent leakage of the PoP credentials (i.e., the private key or the symmetric key). An adversary in possession of the PoP credentials bound to the access token will be able to impersonate the client. Be aware that this is a real risk with many constrained environments, since adversaries may get physical access to the devices and can therefore use phyical extraction techniques to gain access to memory contents. This risk can be mitigated to some extent by making sure that keys are refreshed frequently, by using software isolation techniques and by using hardware security. 6.3."} {"id": "q-en-ack-frequency-482db15bdba88c51025764422f5a6d6c5ab7f4c8af24a8418cdd2a3536a25540", "old_text": "9. There are tradeoffs inherent in a sender sending an ACK_FREQUENCY frame to the receiver. As such it is recommended that implementers experiment with different strategies and find those which best suit their applications and congestion controllers. There are, however, noteworthy considerations when devising strategies for sending ACK_FREQUENCY frames. 9.1. A sender relies on receipt of acknowledgements to determine the amount of data in flight and to detect losses, e.g. when packets experience reordering, see QUIC-RECOVERY. Consequently, how often a receiver sends acknowledgments determines how long it takes for losses to be detected at the sender. 9.2. Many congestion control algorithms have a startup mechanism during the beginning phases of a connection. It is typical that in this period the congestion controller will quickly increase the amount of data in the network until it is signalled to stop. 
While the mechanism used to achieve this increase varies, acknowledgments by the peer are generally critical during this phase to drive the congestion controller's machinery. A sender can send ACK_FREQUENCY frames while its congestion controller is in this state, ensuring that the receiver will send acknowledgments at a rate which is optimal for the the sender's congestion controller. 9.3. Congestion controllers that are purely window-based and strictly adherent to packet conservation, such as the one defined in QUIC- RECOVERY, rely on receipt of acknowledgments to move the congestion window forward and send additional data into the network. Such controllers will suffer degraded performance if acknowledgments are delayed excessively. Similarly, if these controllers rely on the timing of peer acknowledgments (an \"ACK clock\"), delaying acknowledgments will cause undesirable bursts of data into the network. 9.4.", "comments": "I don't fully understand when would someone want to set ignore-order = true, i.e., what is a valid use case for this? Lets say for whatever reason the sender sets this to True, this would if there is actual packet loss and receiver receives OOO packets, it won't send an immediate ACK. Meaning the sender won't enter loss recovery immediately and will continue sending packets at the same rate. This is not good, as it directly increases queuing on the network node and for a tail drop queue (most widespread deployment), it will also affect other flows. Meaning, other flows will experience packet loss due to non-responsiveness of this flow for certain period of time. I probably missed past conversations about ignore-order field, so my apologies if this has already been discussed. I read the draft and didn't see any explanation about where would this be useful and what are the fallbacks of setting this to true.\nOne use case that I know and care is for sending QUIC traffic from a server cluster. Some packets would be sent directly from a server node that governs the connections, but others would be sent via different nodes within the same cluster. Once we deploy a sender like this, the receiver would observe what looks like reordering (because most packets will be essentially delivered through an additional hop, and that would cause delays). But we do not want the receiver to send ACK for each packet being received. Regarding the problem, I'm not sure I agree with the impact. Typically, congestion controllers react to congestion after 1 RTT. When packet-number-based detection is turned off, that changes to 9/8 RTT. That does cause more pressure on the bottleneck queue, but it is no worse than that caused by a path with an 9/8 greater RTT. Assuming that congestion controllers are designed to be fair against flows having different RTTs, I tend to believe that this is a non-issue.\nI agree with NAME though I think adding a SHOULD NOT use a delay longer than an RTT(URL) would be a good addition, since if the delay is extremely large(ie multiple RTTs), that could be bad.\nIn general, the spirit of this draft is to provide control over acking behavior to the sender, since it is the consumer of this information and ought to be able to control how frequently it needs to see this information. If there are subtle consequences to these decisions, it is worth noting them in the draft. It seems useful to add some text on the consequences of using Ignore Order, so that senders can make informed decisions. We can do that.\nNAME is this something that Fastly does or is planning to do? 
How would you share the connection secrets between different server nodes? When we set ignore-reorder = true, and lets assume that max ACK delay > 1RTT, it can be 1RTT or 2 or 10RTT. this would result in loss recovery to be delayed by at least an additional RTT, i.e. it will take >16/8RTT (>2RTT) instead of 9/8RTT to declare packets lost. That is significant IMO. >Assuming that congestion controllers are designed to be fair against flows having different RTTs, I tend to believe that this is a non-issue. CUBIC throughput is inversely proportional to RTT in New Reno mode. Nonetheless, I am more concerned about high queuing in bottleneck due to late loss recovery. Specifying the use-cases where ignore-order = true SHOULD be used will prevent new implementors from using it randomly. And as Jana said, some text around the consequences when it is used would allow informed decision making.\nNAME I do not speak for Fastly's plan, but the OSS branch of H2O has a working prototype that does this: URL The calculation is correct. At the same time, I do not think that senders will be willing to use Max Ack Delay as large as that, when setting the Ignore Order bit. The problem here is the impact on loss recovery. If Max Ack Delay is set to a value as large as 1 RTT, then the sender needs to spend more than 2 RTTs for detecting a loss. That means that the size of the send and receive buffers have to be twice as large, and that it would take as twice as long to recover from a loss. Based on existing advice that we have in other RFCs, I think we should discourage Max Ack Delay that is greater than 9/8 RTT when Ignore Order bit is used (please see my comment on ).\nI'd suggest 5/4, but otherwise agree with you NAME\nShould the Max ACK Delay be restricted to 1/8 or 1/4 RTT as there is inherent 1RTT time spent while sending data and receiving ACKs? That would make a total of 1 + 1/8 or 1 +1/4 RTT total/\nI think this is best as a sender-side SHOULD, since it knows what time threshold it's using. For example, we use 1/4 RTT, but have considered making it adaptive(ie: 1/16 RTT to 1 RTT) based on observed reordering. Also, if your RTT is 100us, and your timers have coarser granularity, 1/8 RTT or 1/4 RTT just doesn't make sense. I'm not sure what does, because I'm not an expert in datacenter networking, but that's why I think this is a SHOULD, not a MUST. Of course, it's also unenforceable, which is a good reason for a SHOULD.\n+1 to SHOULD. The only argument I might disagree with NAME is that it is unenforceable (IMO a receiver can enforce by capping Ack Delay to a fraction of the RTT that the receiver has observed). But I do not think there's any reason to require the receiver to enforce, considering the fact that a sender can send as many packets as it likes, regardless of ACKs the receiver sends (or lack of).\nI am fine with SHOULD and with a mention of how a sender would compute Max ACK Delay when he sets ignore-order = true. An example is fine too.\nI believe this is the same issue as\nThere are two things that are impacted by setting a high value for ack delay: congestion control and time-threshold loss detection. On congestion control, we need some protection for the network, so we should recommend something. On loss detection, a sender that chooses and sets an ack delay higher than the time-threshold loss detection time period risks triggering unnecessary retransmissions. 
So, I propose adding the following for a sender: SHOULD set the ack frequency to be at least one ack per RTT (so that congestion control responses are reasonably quick) a word of caution that setting the ack delay to be longer than time threshold can cause the sender to unnecessary retransmit\nIn 9.3. Window-based Congestion Controllers I thought that unless there is loss reported, a QUIC ACK Frame releases sending window. In a similar manner as the accurate byte counting style in TCP, a QUIC sender solely operates on the basis of bytes acked, not the number of ACK frames received. So while a delayed ACK could delay a round of growth when the ACK Ratio is larger, it is only delayed by the time to ACK the set of received packets. In looking at various QUIC CC over medium and longer path RTTs this effect was quite small, and the default ACK delay was not unreasonable; for shorter path RTTs this might be different\nSee PR\nTo me, ACK Clocking is a separate point that applies to any type of CC when the cwnd limits the sender. I agree there could be a concern is around whether a less frequent ACK policy can induce a cwnd-limited sender to send bursts of packets that could induce loss or disrupt sharing of the path with other flows. The QUIC specification already permits an initial window of 10 packets, and motivates the need for pacing. I think it is important this text refers to the QUIC transport section on pacing to mitigate bursts.\nBetter worded perhaps in PR .\nSection 9.1 - There are CC considerations with respect to how long it is reasonable for a flow to hold-off detecting congestion and responding. This needs to be discussed. When an endpoint detects persistent congestion, it MUST promptly reduce the rate of transmission when it receive or detects an indication of congestion (e.g., loss or ECN marking) [RFC2914], the Ignore Order value of true (0x01) in this ID allows a sender to extend that period, postponing detection of loss. That might be reasonable, but excessive delay can be dangerous - and therefore the impact really needs to be discussed: delaying a fraction of a RTT is in my mind safe, intentionally delaying by an RTT is arguable. A delay of many RTTs is endangers other flows, and we need to at least say that in some way ([RFC8084] took an alternate view of how long might be safe).\nSee PR\nI think this warrants a note about how a sender SHOULD NOT use a delay that is larger than an RTT unless the sender has other information about either the network or current network conditions. Otherwise, the receiver does not need to enforce anything here.\nNAME Delay of loss detection has negative impact on loss recovery as well. Considering that the time threshold defined in RFC 9002 is 9/8 RTT, I might ague that, when Ignore Order is set to true, then the maximum delay being advertised SHOULD be no greater than 9/8 RTT. These values provide responsiveness comparable to Rack when a packet is lost - the loss would be detected within 5/4 RTT. (stuck out, as this point is already covered by Section 8).", "new_text": "9. This section provides some guidance on a sender's choice of acknowledgment frequency and discusses some additional considerations. Implementers can select an appropriate strategy to meet the needs of their applications and congestion controllers. 9.1. A sender needs to be responsive to notifications of congestion, such as a packet loss or an ECN CE marking. 
Also, window-based congestion controllers that strictly adhere to packet conservation, such as the one defined in QUIC-RECOVERY, rely on receipt of acknowledgments to send additional data into the network, and will suffer degraded performance if acknowledgments are delayed excessively. To enable a sender to respond to potential network congestion, a sender SHOULD cause a receiver to send an acknowledgement at least once per RTT if there are unacknowledged ack-eliciting packets in flight. A sender can accomplish this by sending an IMMEDIATE_ACK frame once per round-trip time (RTT), or it can set the Ack-Eliciting Threshold and Request Max Ack Delay values to be no more than a congestion window and an estimated RTT, respectively. 9.2. Receiving an acknowledgement can allow a sender to release new packets into the network. If a sender is designed to rely on the timing of peer acknowledgments (\"ACK clock\"), delaying acknowledgments can cause undesirable bursts of data into the network. A sender MUST limit such bursts. In keeping with Section 7.7 of QUIC-RECOVERY, a sender can either employ pacing or cause a receiver to send an acknowledgement for at least each initial congestion window of received data. 9.3. Acknowledgements are fundamental to reliability in QUIC. Consequently, delaying or reducing the frequency of acknowledgments can cause loss detection at the sender to be delayed. A QUIC sender detects loss using packet thresholds on receiving an acknowledgement (Section 6.1.1 of QUIC-RECOVERY); delaying the acknowledgement therefore delays this method of detecting losses. Reducing acknowledgement frequency reduces the number of RTT samples that a sender receives (Section 5 of QUIC-RECOVERY), making a sender's RTT estimate less responsive to changes in the path's RTT. As a result, any mechanisms that rely on an accurate RTT estimate, such as time-threshold loss detection (Section 6.1.2 of QUIC- RECOVERY) or Probe Timeout (Section 6.2 of QUIC-RECOVERY), will be less responsive to changes in the path's RTT, resulting in either delayed or unnecessary packet transmissions. To limit these consequences of reduced acknowledgement frequency, a sender SHOULD cause a receiver to send an acknowledgement at least once per RTT if there are unacknowledged ack-eliciting packets in flight. A sender can accomplish this by sending an IMMEDIATE_ACK frame once per round-trip time (RTT), or it can set the Ack-Eliciting Threshold and Request Max Ack Delay values to be no more than a congestion window and an estimated RTT, respectively. A sender might use timers to detect loss of PMTUD probe packets. A sender SHOULD bundle an IMMEDIATE_ACK frame with any PTMUD probes to avoid triggering such timers. 9.4."} {"id": "q-en-ack-frequency-482db15bdba88c51025764422f5a6d6c5ab7f4c8af24a8418cdd2a3536a25540", "old_text": "connection SHOULD update and send a new ACK_FREQUENCY frame immediately upon confirmation of connection migration. 9.5. A sender might use timers to detect loss of PMTUD probe packets. A sender SHOULD bundle an IMMEDIATE_ACK frame with any PTMUD probes to avoid triggering such timers. 10. TBD.", "comments": "I don't fully understand when would someone want to set ignore-order = true, i.e., what is a valid use case for this? Lets say for whatever reason the sender sets this to True, this would if there is actual packet loss and receiver receives OOO packets, it won't send an immediate ACK. Meaning the sender won't enter loss recovery immediately and will continue sending packets at the same rate. 
This is not good, as it directly increases queuing on the network node and for a tail drop queue (most widespread deployment), it will also affect other flows. Meaning, other flows will experience packet loss due to non-responsiveness of this flow for certain period of time. I probably missed past conversations about ignore-order field, so my apologies if this has already been discussed. I read the draft and didn't see any explanation about where would this be useful and what are the fallbacks of setting this to true.\nOne use case that I know and care is for sending QUIC traffic from a server cluster. Some packets would be sent directly from a server node that governs the connections, but others would be sent via different nodes within the same cluster. Once we deploy a sender like this, the receiver would observe what looks like reordering (because most packets will be essentially delivered through an additional hop, and that would cause delays). But we do not want the receiver to send ACK for each packet being received. Regarding the problem, I'm not sure I agree with the impact. Typically, congestion controllers react to congestion after 1 RTT. When packet-number-based detection is turned off, that changes to 9/8 RTT. That does cause more pressure on the bottleneck queue, but it is no worse than that caused by a path with an 9/8 greater RTT. Assuming that congestion controllers are designed to be fair against flows having different RTTs, I tend to believe that this is a non-issue.\nI agree with NAME though I think adding a SHOULD NOT use a delay longer than an RTT(URL) would be a good addition, since if the delay is extremely large(ie multiple RTTs), that could be bad.\nIn general, the spirit of this draft is to provide control over acking behavior to the sender, since it is the consumer of this information and ought to be able to control how frequently it needs to see this information. If there are subtle consequences to these decisions, it is worth noting them in the draft. It seems useful to add some text on the consequences of using Ignore Order, so that senders can make informed decisions. We can do that.\nNAME is this something that Fastly does or is planning to do? How would you share the connection secrets between different server nodes? When we set ignore-reorder = true, and lets assume that max ACK delay > 1RTT, it can be 1RTT or 2 or 10RTT. this would result in loss recovery to be delayed by at least an additional RTT, i.e. it will take >16/8RTT (>2RTT) instead of 9/8RTT to declare packets lost. That is significant IMO. >Assuming that congestion controllers are designed to be fair against flows having different RTTs, I tend to believe that this is a non-issue. CUBIC throughput is inversely proportional to RTT in New Reno mode. Nonetheless, I am more concerned about high queuing in bottleneck due to late loss recovery. Specifying the use-cases where ignore-order = true SHOULD be used will prevent new implementors from using it randomly. And as Jana said, some text around the consequences when it is used would allow informed decision making.\nNAME I do not speak for Fastly's plan, but the OSS branch of H2O has a working prototype that does this: URL The calculation is correct. At the same time, I do not think that senders will be willing to use Max Ack Delay as large as that, when setting the Ignore Order bit. The problem here is the impact on loss recovery. 
If Max Ack Delay is set to a value as large as 1 RTT, then the sender needs to spend more than 2 RTTs for detecting a loss. That means that the size of the send and receive buffers have to be twice as large, and that it would take as twice as long to recover from a loss. Based on existing advice that we have in other RFCs, I think we should discourage Max Ack Delay that is greater than 9/8 RTT when Ignore Order bit is used (please see my comment on ).\nI'd suggest 5/4, but otherwise agree with you NAME\nShould the Max ACK Delay be restricted to 1/8 or 1/4 RTT as there is inherent 1RTT time spent while sending data and receiving ACKs? That would make a total of 1 + 1/8 or 1 +1/4 RTT total/\nI think this is best as a sender-side SHOULD, since it knows what time threshold it's using. For example, we use 1/4 RTT, but have considered making it adaptive(ie: 1/16 RTT to 1 RTT) based on observed reordering. Also, if your RTT is 100us, and your timers have coarser granularity, 1/8 RTT or 1/4 RTT just doesn't make sense. I'm not sure what does, because I'm not an expert in datacenter networking, but that's why I think this is a SHOULD, not a MUST. Of course, it's also unenforceable, which is a good reason for a SHOULD.\n+1 to SHOULD. The only argument I might disagree with NAME is that it is unenforceable (IMO a receiver can enforce by capping Ack Delay to a fraction of the RTT that the receiver has observed). But I do not think there's any reason to require the receiver to enforce, considering the fact that a sender can send as many packets as it likes, regardless of ACKs the receiver sends (or lack of).\nI am fine with SHOULD and with a mention of how a sender would compute Max ACK Delay when he sets ignore-order = true. An example is fine too.\nI believe this is the same issue as\nThere are two things that are impacted by setting a high value for ack delay: congestion control and time-threshold loss detection. On congestion control, we need some protection for the network, so we should recommend something. On loss detection, a sender that chooses and sets an ack delay higher than the time-threshold loss detection time period risks triggering unnecessary retransmissions. So, I propose adding the following for a sender: SHOULD set the ack frequency to be at least one ack per RTT (so that congestion control responses are reasonably quick) a word of caution that setting the ack delay to be longer than time threshold can cause the sender to unnecessary retransmit\nIn 9.3. Window-based Congestion Controllers I thought that unless there is loss reported, a QUIC ACK Frame releases sending window. In a similar manner as the accurate byte counting style in TCP, a QUIC sender solely operates on the basis of bytes acked, not the number of ACK frames received. So while a delayed ACK could delay a round of growth when the ACK Ratio is larger, it is only delayed by the time to ACK the set of received packets. In looking at various QUIC CC over medium and longer path RTTs this effect was quite small, and the default ACK delay was not unreasonable; for shorter path RTTs this might be different\nSee PR\nTo me, ACK Clocking is a separate point that applies to any type of CC when the cwnd limits the sender. I agree there could be a concern is around whether a less frequent ACK policy can induce a cwnd-limited sender to send bursts of packets that could induce loss or disrupt sharing of the path with other flows. 
The QUIC specification already permits an initial window of 10 packets, and motivates the need for pacing. I think it is important this text refers to the QUIC transport section on pacing to mitigate bursts.\nBetter worded perhaps in PR .\nSection 9.1 - There are CC considerations with respect to how long it is reasonable for a flow to hold-off detecting congestion and responding. This needs to be discussed. When an endpoint detects persistent congestion, it MUST promptly reduce the rate of transmission when it receive or detects an indication of congestion (e.g., loss or ECN marking) [RFC2914], the Ignore Order value of true (0x01) in this ID allows a sender to extend that period, postponing detection of loss. That might be reasonable, but excessive delay can be dangerous - and therefore the impact really needs to be discussed: delaying a fraction of a RTT is in my mind safe, intentionally delaying by an RTT is arguable. A delay of many RTTs is endangers other flows, and we need to at least say that in some way ([RFC8084] took an alternate view of how long might be safe).\nSee PR\nI think this warrants a note about how a sender SHOULD NOT use a delay that is larger than an RTT unless the sender has other information about either the network or current network conditions. Otherwise, the receiver does not need to enforce anything here.\nNAME Delay of loss detection has negative impact on loss recovery as well. Considering that the time threshold defined in RFC 9002 is 9/8 RTT, I might ague that, when Ignore Order is set to true, then the maximum delay being advertised SHOULD be no greater than 9/8 RTT. These values provide responsiveness comparable to Rack when a packet is lost - the loss would be detected within 5/4 RTT. (stuck out, as this point is already covered by Section 8).", "new_text": "connection SHOULD update and send a new ACK_FREQUENCY frame immediately upon confirmation of connection migration. 10. TBD."} {"id": "q-en-ack-frequency-2d928aa6d61774c22f29b0ab1c12d4cf2139337fe894d8a6e525a7e77f4700e0", "old_text": "6.3. For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets. This batch can contain enough packets to cause multiple acknowledgements to be sent. To avoid sending multiple acknowledgements in rapid succession, an endpoint can process all packets in a batch before determining whether to send an ACK frame in response, as stated in Section 13.2.2", "comments": "Defer to RFC9000.\nThe text says: \"For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets.\" is this the network layer... or something else?\nIt could be the OS or the hardware.\nI think the sentence needs some work - I can believe the network; interface; operating system; etc. Are examples of reasons why there is bunching of packets so that a batch of packets arrive at the receiver. The current text doesn't really say quite what I'd expect.", "new_text": "6.3. To avoid sending multiple acknowledgements in rapid succession, an endpoint can process all packets in a batch before determining whether to send an ACK frame in response, as stated in Section 13.2.2"} {"id": "q-en-ack-frequency-705b65c091296cf9ac80add81ed62da6beb6c1e03d16f9d7cfc9e4af8f97a868", "old_text": "number of acknowledgments allows connection throughput to scale much further. 
Unfortunately, there are undesirable consequences to simply reducing the acknowledgement frequency, especially to an arbitrary fixed value, as follows: A sender relies on receipt of acknowledgements to determine the amount of data in flight and to detect losses, see QUIC-RECOVERY. Consequently, how often a receiver sends acknowledgments dictates how long it takes for losses to be detected at the sender. Starting a connection up quickly without inducing excess queue is important for latency reduction, for both short and long flows. The sender often needs more frequent acknowledgments during this phase. Congestion controllers that are purely window based and strictly adherent to packet conservation, such as the one defined in QUIC- RECOVERY, rely on receipt of acknowledgments to move the congestion window forward and release additional data. Such controllers suffer performance penalties when acknowledgements are not sent frequently enough. On the other hand, for long-running flows, congestion controllers that are not window-based, such as BBR, can perform well with very few acknowledgements per RTT. New sender startup mechanisms will need a way for the sender to increase the frequency of acknowledgements when fine-grained feedback is required. QUIC-TRANSPORT currently specifies a simple delayed acknowledgement mechanism that a receiver can use: send an acknowledgement for every", "comments": "I think the motivations section as-is is too long. It spends a lot of time talking about what are effectively the tradeoffs. IMO the motivations section should specify why it's useful to have the sender control the receiver's ACK frequency, and then we can have a separate section for the tradeoffs. This takes a stab at that section, lifting things from motivations and adding some new text specifically addressing the nascent stages of a connection. I also made some editorial changes to the existing text. I am happy to add any additional considerations we think might need to be called out. This attempts to\nThanks for the PR, NAME ! I like the new text you propose (I've proposed some changes), but I think you might be assuming that ACK_FREQUENCY is only used for delaying acks further. An important point that bears bringing out in the motivation is that a sender might want more frequent acks as well, for example during startup. That's worth capturing in the modified motivation section.\nNAME this PR is almost done, any chance you can do another round of updates and we can get it in?\nNAME sorry, I'll do some updates today.\nNAME NAME I've updated the PR, I'll watch and iterate on any remaining feedback quickly this time.\nthanks NAME Something about vim and the macbook pro keyboard apparently makes me incapable of spelling things properly. I want my desk keyboard back\nAs discussed on in the base draft, we have some confidence in a particular heuristic (default tolerance of 2 for the first 100 packets, followed by a tolerance of 10) which we know to work fairly well in multiple deployments serving \"typical\" Internet resources to typical users. Do we want to make that recommendation explicit, or otherwise give guidance in this extension?\nI think it's a good idea to describe the values for which we have experience, while encouraging experimentation. We don't need to recommend the strategy, we just need to document it as an example strategy.\nI think a section on considerations and maybe past experience would be very helpful. 
I think it's going to be a bit of a challenge to write, but I'm happy to take a stab if no one else wants to.\nWhat I'd want to see is just a simple explanation of what a sender could do... I would not describe results from experience, since those are anecdotal and are going to change over time. This could be a fairly short section.\nNAME NAME I can take a stab at writing something and we can go from there.\nThanks, NAME !\nI do think the pitfalls of e.g. should be conveyed in such \"cautionary\" text as well. I will submit a PR this week time permitting between interop.\nThanks, NAME", "new_text": "number of acknowledgments allows connection throughput to scale much further. As discussed in implementation, there are undesirable consequences to congestion control and loss recovery if a receiver uniltaerally reduces the acknowledgment frequency. Consequently, a sender needs the ability to express its constraints on the acknowledgement frequency to maximize congestion controller performance. QUIC-TRANSPORT currently specifies a simple delayed acknowledgement mechanism that a receiver can use: send an acknowledgement for every"} {"id": "q-en-ack-frequency-705b65c091296cf9ac80add81ed62da6beb6c1e03d16f9d7cfc9e4af8f97a868", "old_text": "8. TBD. 9. TBD. ", "comments": "I think the motivations section as-is is too long. It spends a lot of time talking about what are effectively the tradeoffs. IMO the motivations section should specify why it's useful to have the sender control the receiver's ACK frequency, and then we can have a separate section for the tradeoffs. This takes a stab at that section, lifting things from motivations and adding some new text specifically addressing the nascent stages of a connection. I also made some editorial changes to the existing text. I am happy to add any additional considerations we think might need to be called out. This attempts to\nThanks for the PR, NAME ! I like the new text you propose (I've proposed some changes), but I think you might be assuming that ACK_FREQUENCY is only used for delaying acks further. An important point that bears bringing out in the motivation is that a sender might want more frequent acks as well, for example during startup. That's worth capturing in the modified motivation section.\nNAME this PR is almost done, any chance you can do another round of updates and we can get it in?\nNAME sorry, I'll do some updates today.\nNAME NAME I've updated the PR, I'll watch and iterate on any remaining feedback quickly this time.\nthanks NAME Something about vim and the macbook pro keyboard apparently makes me incapable of spelling things properly. I want my desk keyboard back\nAs discussed on in the base draft, we have some confidence in a particular heuristic (default tolerance of 2 for the first 100 packets, followed by a tolerance of 10) which we know to work fairly well in multiple deployments serving \"typical\" Internet resources to typical users. Do we want to make that recommendation explicit, or otherwise give guidance in this extension?\nI think it's a good idea to describe the values for which we have experience, while encouraging experimentation. We don't need to recommend the strategy, we just need to document it as an example strategy.\nI think a section on considerations and maybe past experience would be very helpful. I think it's going to be a bit of a challenge to write, but I'm happy to take a stab if no one else wants to.\nWhat I'd want to see is just a simple explanation of what a sender could do... 
I would not describe results from experience, since those are anecdotal and are going to change over time. This could be a fairly short section.\nNAME NAME I can take a stab at writing something and we can go from there.\nThanks, NAME !\nI do think the pitfalls of e.g. should be conveyed in such \"cautionary\" text as well. I will submit a PR this week time permitting between interop.\nThanks, NAME", "new_text": "8. There are tradeoffs inherent in a sender sending an ACK_FREQUENCY frame to the receiver. As such it is recommended that implementers experiment with different strategies and find those which best suit their applications and congestion controllers. There are, however, noteworthy considerations when devising strategies for sending ACK_FREQUENCY frames. 8.1. A sender relies on receipt of acknowledgements to determine the amount of data in flight and to detect losses, e.g. when packets experience reordering, see QUIC-RECOVERY. Consequently, how often a receiver sends acknowledgments determines how long it takes for losses to be detected at the sender. 8.2. Many congestion control algorithms have a startup mechanism during the beginning phases of a connection. It is typical that in this period the congestion controller will quickly increase the amount of data in the network until it is signalled to stop. While the mechanism used to achieve this increase varies, acknowledgments by the peer are generally critical during this phase to drive the congestion controller's machinery. A sender can send ACK_FREQUENCY frames while its congestion controller is in this state, ensuring that the receiver will send acknowledgments at a rate which is optimal for the the sender's congestion controller. 8.3. Congestion controllers that are purely window-based and strictly adherent to packet conservation, such as the one defined in QUIC- RECOVERY, rely on receipt of acknowledgments to move the congestion window forward and send additional data into the network. Such controllers will suffer degraded performance if acknowledgments are delayed excessively. Similarly, if these controllers rely on the timing of peer acknowledgments (an \"ACK clock\"), delaying acknowledgments will cause undesirable bursts of data into the network. 9. TBD. 10. TBD. "} {"id": "q-en-ack-frequency-654a9fbe49099e8b245a27785590be2d36a9134c46a360b6cb0956dd7ae0805b", "old_text": "receivers to ignore obsolete frames, see multiple-frames. A variable-length integer representing the maximum number of ack- eliciting packets after which the receiver sends an acknowledgement. A value of 1 will result in an acknowledgement being sent for every ack-eliciting packet received. A value of 0 is invalid. Receipt of an invalid value MUST be treated as a connection error of type FRAME_ENCODING_ERROR. If an endpoint receives an ACK-eliciting threshold value that is larger than the maximum value it can represent, the endpoint MUST use the largest representable value instead. A variable-length integer representing the value to which the endpoint requests the peer update its \"max_ack_delay\"", "comments": "Also updates the transport parameter codepoint to 0xff03de1a from 0xff02de1a.\nWhile reading the latest ack-frequency draft, I am confused with the wording that was added in URL If I have ack-eliciting threshold = 1, to me it means that I can receive 1 ack-eliciting packet before sending an ACK, i.e. I have to send ACK after receiving 1 packet. But looking at the example of 0, this interpretation looks wrong. 
NAME suggested, maybe the \"before\" needs to be \"without\" and an additional sentence added: \"An immediate acknowledgement is sent when more than this number of packets have been received.\" I agree with his suggestion.\nThanks for the issues, this is a bit unclear. PTAL at\nThe definition that I use internally is \"the number of packets you can receive without sending an immediate acknowledgment\". This is different than saying \"please acknowledge every N packets\". The main benefit of this definition is that there are no invalid values for the number. I recommend making this adjustment in the frame definition: it will be more efficient (by a tiny amount), but it also removes the need to validate the value.\nThis seems like a good change, it's just a matter of writing the text.", "new_text": "receivers to ignore obsolete frames, see multiple-frames. A variable-length integer representing the maximum number of ack- eliciting packets the recipient of this frame can receive before sending an immediate acknowledgment. A value of 0 will result in an immediate acknowledgement whenever an ack-eliciting packet received. If an endpoint receives an ACK-eliciting threshold value that is larger than the maximum value it can represent, the endpoint MUST use the largest representable value instead. A variable-length integer representing the value to which the endpoint requests the peer update its \"max_ack_delay\""} {"id": "q-en-ack-frequency-654a9fbe49099e8b245a27785590be2d36a9134c46a360b6cb0956dd7ae0805b", "old_text": "Every time an acknowledgement is sent, bundled or otherwise, all counters and timers related to delaying of acknowledgments are reset. 6.1. As specified in Section 13.2.1 of QUIC-TRANSPORT, endpoints are", "comments": "Also updates the transport parameter codepoint to 0xff03de1a from 0xff02de1a.\nWhile reading the latest ack-frequency draft, I am confused with the wording that was added in URL If I have ack-eliciting threshold = 1, to me it means that I can receive 1 ack-eliciting packet before sending an ACK, i.e. I have to send ACK after receiving 1 packet. But looking at the example of 0, this interpretation looks wrong. NAME suggested, maybe the \"before\" needs to be \"without\" and an additional sentence added: \"An immediate acknowledgement is sent when more than this number of packets have been received.\" I agree with his suggestion.\nThanks for the issues, this is a bit unclear. PTAL at\nThe definition that I use internally is \"the number of packets you can receive without sending an immediate acknowledgment\". This is different than saying \"please acknowledge every N packets\". The main benefit of this definition is that there are no invalid values for the number. I recommend making this adjustment in the frame definition: it will be more efficient (by a tiny amount), but it also removes the need to validate the value.\nThis seems like a good change, it's just a matter of writing the text.", "new_text": "Every time an acknowledgement is sent, bundled or otherwise, all counters and timers related to delaying of acknowledgments are reset. The receiver of an ACK_FREQUENCY frame can continue to process multiple available packets before determining whether to send an ACK frame in response, as stated in Section 13.2.2 of QUIC-TRANSPORT. 6.1. As specified in Section 13.2.1 of QUIC-TRANSPORT, endpoints are"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "6. 
Prior to receiving an ACK_FREQUENCY frame, endpoints send acknowledgements as specified in Section 13.2.1 of QUIC-TRANSPORT.", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. 
For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "6. The IMMEDIATE_ACK Frame is a frame which causes the peer to send a packet containing an ACK frame immediately, similar to the receipt of Initial and Handshake packets during the QUIC handshake. Receivers of the IMMEDIATE_ACK frame MAY choose to delay sending the ACK if the vast majority of received packets contain an IMMEDIATE_ACK or the receiver is under heavy load. Senders MAY include multiple IMMEDIATE_ACK frames in a single QUIC packet, but the behavior is identical to a single IMMEDIATE_ACK frame. 7. 
Prior to receiving an ACK_FREQUENCY frame, endpoints send acknowledgements as specified in Section 13.2.1 of QUIC-TRANSPORT."} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "multiple available packets before determining whether to send an ACK frame in response, as stated in Section 13.2.2 of QUIC-TRANSPORT. 6.1. As specified in Section 13.2.1 of QUIC-TRANSPORT, endpoints are expected to send an acknowledgement immediately on receiving a", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. 
It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "multiple available packets before determining whether to send an ACK frame in response, as stated in Section 13.2.2 of QUIC-TRANSPORT. 7.1. 
As specified in Section 13.2.1 of QUIC-TRANSPORT, endpoints are expected to send an acknowledgement immediately on receiving a"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "instead continues to use the peer's \"Ack-Eliciting Threshold\" and \"max_ack_delay\" thresholds for sending acknowledgements. 6.2. As specified in Section 13.2.1 of QUIC-TRANSPORT, an endpoint SHOULD immediately acknowledge packets marked with the ECN Congestion Experienced (CE) codepoint in the IP header. Doing so reduces the peer's response time to congestion events. 6.3. For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets. This", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. 
This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. 
Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "instead continues to use the peer's \"Ack-Eliciting Threshold\" and \"max_ack_delay\" thresholds for sending acknowledgements. 7.2. As specified in Section 13.2.1 of QUIC-TRANSPORT, an endpoint SHOULD immediately acknowledge packets marked with the ECN Congestion Experienced (CE) codepoint in the IP header. Doing so reduces the peer's response time to congestion events. 7.3. For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets. This"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "whether a threshold has been met and an acknowledgement is to be sent in response. 7. On sending an update to the peer's \"max_ack_delay\", an endpoint can use this new value in later computations of its Probe Timeout (PTO)", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). 
Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. The \"pull\" TCP terminology is fine (e.g. 
PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "whether a threshold has been met and an acknowledgement is to be sent in response. 8. On sending an update to the peer's \"max_ack_delay\", an endpoint can use this new value in later computations of its Probe Timeout (PTO)"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "optimization requires some care in implementation, since it can cause premature PTOs under packet loss when \"ignore_order\" is enabled. 8. There are tradeoffs inherent in a sender sending an ACK_FREQUENCY frame to the receiver. As such it is recommended that implementers", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. 
That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. 
The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "optimization requires some care in implementation, since it can cause premature PTOs under packet loss when \"ignore_order\" is enabled. 9. There are tradeoffs inherent in a sender sending an ACK_FREQUENCY frame to the receiver. As such it is recommended that implementers"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "noteworthy considerations when devising strategies for sending ACK_FREQUENCY frames. 8.1. A sender relies on receipt of acknowledgements to determine the amount of data in flight and to detect losses, e.g. when packets", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. 
That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. 
The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "noteworthy considerations when devising strategies for sending ACK_FREQUENCY frames. 9.1. A sender relies on receipt of acknowledgements to determine the amount of data in flight and to detect losses, e.g. when packets"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "receiver sends acknowledgments determines how long it takes for losses to be detected at the sender. 8.2. Many congestion control algorithms have a startup mechanism during the beginning phases of a connection. It is typical that in this", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. 
That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. 
The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "receiver sends acknowledgments determines how long it takes for losses to be detected at the sender. 9.2. Many congestion control algorithms have a startup mechanism during the beginning phases of a connection. It is typical that in this"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "that the receiver will send acknowledgments at a rate which is optimal for the the sender's congestion controller. 8.3. Congestion controllers that are purely window-based and strictly adherent to packet conservation, such as the one defined in QUIC-", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. 
That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. 
The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "that the receiver will send acknowledgments at a rate which is optimal for the the sender's congestion controller. 9.3. Congestion controllers that are purely window-based and strictly adherent to packet conservation, such as the one defined in QUIC-"} {"id": "q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3", "old_text": "acknowledgments will cause undesirable bursts of data into the network. 9. TBD. 10. TBD. 11. References 11.1. URIs [1] https://mailarchive.ietf.org/arch/search/?email_list=quic", "comments": "by adding a one byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ascii art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know to induce such an ACK is intentionally skip the packet number. However, the problem of that approach is that it relies on the receiver not ignoring reorders. When an ACKFREQUENCY frame previously sent had set the bit, then it become impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY packet in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right are deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, ie: URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include maxackdelay in PTO if it set the \"ACK-pull\"(or whatever it's called) bit on the final packet. 
That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer? If we can decide, then I'll be happy to write a PR(or NAME can). Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping maxackdelay for all PTOs(see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually be slightly easier to implement and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs testimony of network operators not taken as evidence? I remember a line of people, from different companies -- BT, ATT, and so on, at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone propose a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are number of ways to fix this, but given this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame(ACK, PONG, etc). I lean towards a new frame, since there are enough code-points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME ACKPULL (from TCP) FAST_ACK PONG\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it looks like yet another variant of the ACK frame. 
The \"pull\" TCP terminology is fine (e.g. PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACKIMMEDIATELY or IMMEDIATEACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATEACK sounds good to me. Alternatively, I'd be also fine with something like ACKNOW.\nIMMEDIATE_ACK it is, PR coming.", "new_text": "acknowledgments will cause undesirable bursts of data into the network. 10. TBD. 11. TBD. 12. References 12.1. URIs [1] https://mailarchive.ietf.org/arch/search/?email_list=quic"} {"id": "q-en-ack-frequency-05cec385e5b75d6ef4e0d46fe169b19e0392baa8bc2bba3e9e5d405589509290", "old_text": "Discussion of this draft takes place on the QUIC working group mailing list (quic@ietf.org), which is archived at https://mailarchive.ietf.org/arch/search/?email_list=quic [1]. Source code and issues list for this draft can be found at https://github.com/quicwg/ack-frequency [2]. Working Group information can be found at https://github.com/quicwg [3]; 1.", "comments": "You aren't using URL, so remove it. I updated the links so that the text rendering doesn't include them twice.", "new_text": "Discussion of this draft takes place on the QUIC working group mailing list (quic@ietf.org), which is archived at . Source code and issues list for this draft can be found at . Working Group information can be found at . 1."} {"id": "q-en-ack-frequency-05cec385e5b75d6ef4e0d46fe169b19e0392baa8bc2bba3e9e5d405589509290", "old_text": "11. TBD. 12. References 12.1. URIs [1] https://mailarchive.ietf.org/arch/search/?email_list=quic [2] https://github.com/quicwg/ack-frequency [3] https://github.com/quicwg ", "comments": "You aren't using URL, so remove it. I updated the links so that the text rendering doesn't include them twice.", "new_text": "11. TBD. "} {"id": "q-en-ack-frequency-eb778e13102852058b779cc406a3abbc5aef553309c1bd9179533fec48361b4d", "old_text": "QUIC-TRANSPORT currently specifies a simple delayed acknowledgement mechanism that a receiver can use: send an acknowledgement for every other packet, and for every packet when reordering is observed. This simple mechanism does not allow a sender to signal its constraints. This extension provides a mechanism to solve this problem. 3.", "comments": "NAME tell me if this is sufficient, or if you wanted more references to RFC9000.\nThanks, I'm open to other suggestions if you have them, but nothing else stuck out at me. I'm reluctant to repeat the full definition of out of order in this document, given it's well defined in RFC9000.\nWhen reviewing the draft I got caught on this part: As specified in Section 13.2.1 of [QUIC-TRANSPORT], endpoints are expected to send an acknowledgement immediately on receiving a reordered ack-eliciting packet. This extension modifies this behavior. There is an issue here in how this draft uses \"reordering\". 
Section 13.2.1 of RFC 9000 does say that ACKs should be sent immediately if: \"In order to assist loss detection at the sender, an endpoint SHOULD generate and send an ACK frame without delay when it receives an ack-eliciting packet either: when the received packet has a packet number less than another ack-eliciting packet that has been received, orwhen the packet has a packet number larger than the highest-numbered ack-eliciting packet that has been received and there are missing packets between that packet and this packet.\" Although the first bullet clearly is reordering, the second could be caused by three different mechanisms: Reordering of packets, loss of packets, or sender intentional gap (as discussed in ) Thus, I think this section needs to be clear if actually means both of the above bullets for when to send ACK or only one of them?\nThanks Magnus, the intent was to follow the RFC9000 definition, so I'll write a PR to clarify that was the intent.\nA couple of minor edits, but lgtmOk, I think this is truly the bare minimal to resolve my issue. There exist a clearer chain back towards Section 13.2.1 in RFC9000 and I guess that is sufficient.", "new_text": "QUIC-TRANSPORT currently specifies a simple delayed acknowledgement mechanism that a receiver can use: send an acknowledgement for every other packet, and for every packet that is received out of order (Section 13.2.1 of QUIC-TRANSPORT). This simple mechanism does not allow a sender to signal its constraints. This extension provides a mechanism to solve this problem. 3."} {"id": "q-en-ack-frequency-eb778e13102852058b779cc406a3abbc5aef553309c1bd9179533fec48361b4d", "old_text": "A 1-bit field representing a boolean truth value. This field is set to \"true\" by an endpoint that does not wish to receive an immediate acknowledgement when the peer observes reordering (reordering). 0 represents 'false' and 1 represents 'true'. A 1-bit field representing a boolean truth value. This field is set to \"true\" by an endpoint that does not wish to receive an immediate acknowledgement when the peer receives CE-marked packets (reordering). 0 represents 'false' and 1 represents 'true'. ACK_FREQUENCY frames are ack-eliciting. However, their loss does not require retransmission if an ACK_FREQUENCY frame with a larger", "comments": "NAME tell me if this is sufficient, or if you wanted more references to RFC9000.\nThanks, I'm open to other suggestions if you have them, but nothing else stuck out at me. I'm reluctant to repeat the full definition of out of order in this document, given it's well defined in RFC9000.\nWhen reviewing the draft I got caught on this part: As specified in Section 13.2.1 of [QUIC-TRANSPORT], endpoints are expected to send an acknowledgement immediately on receiving a reordered ack-eliciting packet. This extension modifies this behavior. There is an issue here in how this draft uses \"reordering\". 
Section 13.2.1 of RFC 9000 does say that ACKs should be sent immediately if: \"In order to assist loss detection at the sender, an endpoint SHOULD generate and send an ACK frame without delay when it receives an ack-eliciting packet either: when the received packet has a packet number less than another ack-eliciting packet that has been received, orwhen the packet has a packet number larger than the highest-numbered ack-eliciting packet that has been received and there are missing packets between that packet and this packet.\" Although the first bullet clearly is reordering, the second could be caused by three different mechanisms: Reordering of packets, loss of packets, or sender intentional gap (as discussed in ) Thus, I think this section needs to be clear if actually means both of the above bullets for when to send ACK or only one of them?\nThanks Magnus, the intent was to follow the RFC9000 definition, so I'll write a PR to clarify that was the intent.\nA couple of minor edits, but lgtmOk, I think this is truly the bare minimal to resolve my issue. There exist a clearer chain back towards Section 13.2.1 in RFC9000 and I guess that is sufficient.", "new_text": "A 1-bit field representing a boolean truth value. This field is set to \"true\" by an endpoint that does not wish to receive an immediate acknowledgement when the peer receives a packet out of order (out-of-order). 0 represents 'false' and 1 represents 'true'. A 1-bit field representing a boolean truth value. This field is set to \"true\" by an endpoint that does not wish to receive an immediate acknowledgement when the peer receives CE-marked packets (out-of-order). 0 represents 'false' and 1 represents 'true'. ACK_FREQUENCY frames are ack-eliciting. However, their loss does not require retransmission if an ACK_FREQUENCY frame with a larger"} {"id": "q-en-ack-frequency-eb778e13102852058b779cc406a3abbc5aef553309c1bd9179533fec48361b4d", "old_text": "Since the last acknowledgement was sent, \"max_ack_delay\" amount of time has passed. reordering, congestion, and batch describe exceptions to this strategy. An endpoint is expected to bundle acknowledgements when possible.", "comments": "NAME tell me if this is sufficient, or if you wanted more references to RFC9000.\nThanks, I'm open to other suggestions if you have them, but nothing else stuck out at me. I'm reluctant to repeat the full definition of out of order in this document, given it's well defined in RFC9000.\nWhen reviewing the draft I got caught on this part: As specified in Section 13.2.1 of [QUIC-TRANSPORT], endpoints are expected to send an acknowledgement immediately on receiving a reordered ack-eliciting packet. This extension modifies this behavior. There is an issue here in how this draft uses \"reordering\". 
Section 13.2.1 of RFC 9000 does say that ACKs should be sent immediately if: \"In order to assist loss detection at the sender, an endpoint SHOULD generate and send an ACK frame without delay when it receives an ack-eliciting packet either: when the received packet has a packet number less than another ack-eliciting packet that has been received, orwhen the packet has a packet number larger than the highest-numbered ack-eliciting packet that has been received and there are missing packets between that packet and this packet.\" Although the first bullet clearly is reordering, the second could be caused by three different mechanisms: Reordering of packets, loss of packets, or sender intentional gap (as discussed in ) Thus, I think this section needs to be clear if actually means both of the above bullets for when to send ACK or only one of them?\nThanks Magnus, the intent was to follow the RFC9000 definition, so I'll write a PR to clarify that was the intent.\nA couple of minor edits, but lgtmOk, I think this is truly the bare minimal to resolve my issue. There exist a clearer chain back towards Section 13.2.1 in RFC9000 and I guess that is sufficient.", "new_text": "Since the last acknowledgement was sent, \"max_ack_delay\" amount of time has passed. out-of-order, congestion, and batch describe exceptions to this strategy. An endpoint is expected to bundle acknowledgements when possible."} {"id": "q-en-acme-a76774ec5cc1f0890f23630c6570e7123c765922dfc54ddffaa99336df19da20", "old_text": "Different challenges allow the server to obtain proof of different aspects of control over an identifier. In some challenges, like HTTP and TLS SNI, the client directly proves its ability to do certain things related to the identifier. In the Proof of Possession challenge, the client proves historical control of the identifier, by reference to a prior authorization transaction or certificate. The choice of which challenges to offer to a client under which circumstances is a matter of server policy. A CA may choose different sets of challenges depending on whether it has interacted with a domain before, and how. For example: New domain with no known certificates: Domain Validation (HTTP or TLS SNI) Domain for which known certs exist from other CAs: DV + Proof of Possession of previous CA-signed key Domain with a cert from this CA, lost account key: DV + PoP of ACME-certified Subject key Domain with a cert from this CA, all keys and recovery mechanisms lost: Out of band proof of authority for the domain The identifier validation challenges described in this section all relate to validation of domain names. If ACME is extended in the", "comments": "There has been no implementer interest in the proof-of-possession challenge, and it makes the document more complex by having a completely different structure than the other challenges. If it is decided later that such a challenge is necessary, it can be added in a follow-on extension specificiation.\nGenerally this seems fine, and I wholeheartedly agree with the purpose of the removal (for now). One problem I can't open a comment for: Line 2099, the last paragraph before \"Preventing Authorization Hijacking\", still refers to the PoP Challenge. Without the PoP Challenge wording, a new reader may take that this to be referring to the implicit Proof of Possession of the ACME Account Key and thus be confused. 
Since it's non-normative, perhaps you could simply modify that paragraph into something more like:", "new_text": "Different challenges allow the server to obtain proof of different aspects of control over an identifier. In some challenges, like HTTP and TLS SNI, the client directly proves its ability to do certain things related to the identifier. The choice of which challenges to offer to a client under which circumstances is a matter of server policy. The identifier validation challenges described in this section all relate to validation of domain names. If ACME is extended in the"} {"id": "q-en-acme-a76774ec5cc1f0890f23630c6570e7123c765922dfc54ddffaa99336df19da20", "old_text": "7.4. The Proof of Possession challenge verifies that a client possesses a private key corresponding to a server-specified public key, as demonstrated by its ability to sign with that key. This challenge is meant to be used when the server knows of a public key that is already associated with the identifier being claimed, and wishes for new authorizations to be authorized by the holder of the corresponding private key. For DNS identifiers, for example, this can help guard against domain hijacking. This method is useful if a server policy calls for issuing a certificate only to an entity that already possesses the subject private key of a particular prior related certificate (perhaps issued by a different CA). It may also help enable other kinds of server policy that are related to authenticating a client's identity using digital signatures. This challenge proceeds in much the same way as the proof of possession of the authorized key pair in the main ACME flow (challenge + authorizationRequest). The server provides a nonce and the client signs over the nonce. The main difference is that rather than signing with the private key of the key pair being authorized, the client signs with a private key specified by the server. The server can specify which key pair(s) are acceptable directly (by indicating a public key), or by asking for the key corresponding to a certificate. The server provides the following fields as part of the challenge: The string \"proof-of-possession-01\" An array of certificates, in base64url-encoded DER format, that contain acceptable public keys. In response to this challenge, the client uses the private key corresponding to one of the acceptable public keys to sign a JWS object including data related to the challenge. The validation object covered by the signature has the following fields: The string \"proof-of-possession\" A list of identifiers for which the holder of the prior key authorizes the new key The client's account public key This JWS is NOT REQUIRED to have a \"nonce\" header parameter (as with the JWS objects that carry ACME request objects). This allows proof- of-possession response objects to be computed off-line. For example, as part of a domain transfer, the new domain owner might require the old domain owner to sign a proof-of-possession validation object, so that the new domain owner can present that in an ACME transaction later. The validation JWS MUST contain a \"jwk\" header parameter indicating the public key under which the server should verify the JWS. The client's response includes the server-provided nonce, together with a signature over that nonce by one of the private keys requested by the server. 
The string \"proof-of-possession\" The validation JWS To validate a proof-of-possession challenge, the server performs the following steps: Verify that the public key in the \"jwk\" header of the \"authorization\" JWS corresponds to one of the certificates in the \"certs\" field of the challenge Verify the \"authorization\" JWS using the key indicated in its \"jwk\" header Decode the payload of the JWS as UTF-8 encoded JSON Verify that there are exactly three fields in the decoded object, and that: * The \"type\" field is set to \"proof-of-possession\" * The \"identifier\" field contains the identifier for which authorization is being validated * The \"accountKey\" field matches the account key for which the challenge was issued If all of the above verifications succeed, then the validation is successful. Otherwise, the validation fails. 7.5. When the identifier being validated is a domain name, the client can prove control of that domain by provisioning a resource record under it. The DNS challenge requires the client to provision a TXT record", "comments": "There has been no implementer interest in the proof-of-possession challenge, and it makes the document more complex by having a completely different structure than the other challenges. If it is decided later that such a challenge is necessary, it can be added in a follow-on extension specificiation.\nGenerally this seems fine, and I wholeheartedly agree with the purpose of the removal (for now). One problem I can't open a comment for: Line 2099, the last paragraph before \"Preventing Authorization Hijacking\", still refers to the PoP Challenge. Without the PoP Challenge wording, a new reader may take that this to be referring to the implicit Proof of Possession of the ACME Account Key and thus be confused. Since it's non-normative, perhaps you could simply modify that paragraph into something more like:", "new_text": "7.4. When the identifier being validated is a domain name, the client can prove control of that domain by provisioning a resource record under it. The DNS challenge requires the client to provision a TXT record"} {"id": "q-en-acme-a76774ec5cc1f0890f23630c6570e7123c765922dfc54ddffaa99336df19da20", "old_text": "DNS: The MAC covers the account key, and the MAC key is derived from an ECDH public key signed with the account private key. Proof of possession of a prior key: The signature by the prior key covers the account public key. The association of challenges to identifiers is typically done by requiring the client to perform some action that only someone who effectively controls the identifier can perform. For the challenges", "comments": "There has been no implementer interest in the proof-of-possession challenge, and it makes the document more complex by having a completely different structure than the other challenges. If it is decided later that such a challenge is necessary, it can be added in a follow-on extension specificiation.\nGenerally this seems fine, and I wholeheartedly agree with the purpose of the removal (for now). One problem I can't open a comment for: Line 2099, the last paragraph before \"Preventing Authorization Hijacking\", still refers to the PoP Challenge. Without the PoP Challenge wording, a new reader may take that this to be referring to the implicit Proof of Possession of the ACME Account Key and thus be confused. 
Since it's non-normative, perhaps you could simply modify that paragraph into something more like:", "new_text": "DNS: The MAC covers the account key, and the MAC key is derived from an ECDH public key signed with the account private key. The association of challenges to identifiers is typically done by requiring the client to perform some action that only someone who effectively controls the identifier can perform. For the challenges"} {"id": "q-en-acme-a76774ec5cc1f0890f23630c6570e7123c765922dfc54ddffaa99336df19da20", "old_text": "DNS: Provision DNS resource records for the domain Proof of possession of a prior key: Sign using the private key specified by the server There are several ways that these assumptions can be violated, both by misconfiguration and by attack. For example, on a web server that allows non-administrative users to write to .well-known, any user can", "comments": "There has been no implementer interest in the proof-of-possession challenge, and it makes the document more complex by having a completely different structure than the other challenges. If it is decided later that such a challenge is necessary, it can be added in a follow-on extension specificiation.\nGenerally this seems fine, and I wholeheartedly agree with the purpose of the removal (for now). One problem I can't open a comment for: Line 2099, the last paragraph before \"Preventing Authorization Hijacking\", still refers to the PoP Challenge. Without the PoP Challenge wording, a new reader may take that this to be referring to the implicit Proof of Possession of the ACME Account Key and thus be confused. Since it's non-normative, perhaps you could simply modify that paragraph into something more like:", "new_text": "DNS: Provision DNS resource records for the domain There are several ways that these assumptions can be violated, both by misconfiguration and by attack. For example, on a web server that allows non-administrative users to write to .well-known, any user can"} {"id": "q-en-acme-a76774ec5cc1f0890f23630c6570e7123c765922dfc54ddffaa99336df19da20", "old_text": "process, by performing normal ACME transactions and providing a validation response for his own account key. The risks due to hosting providers noted above are a particular case. For identifiers where the server already has some credential associated with the domain this attack can be prevented by requiring the client to complete a proof-of-possession challenge. 10.3.", "comments": "There has been no implementer interest in the proof-of-possession challenge, and it makes the document more complex by having a completely different structure than the other challenges. If it is decided later that such a challenge is necessary, it can be added in a follow-on extension specificiation.\nGenerally this seems fine, and I wholeheartedly agree with the purpose of the removal (for now). One problem I can't open a comment for: Line 2099, the last paragraph before \"Preventing Authorization Hijacking\", still refers to the PoP Challenge. Without the PoP Challenge wording, a new reader may take that this to be referring to the implicit Proof of Possession of the ACME Account Key and thus be confused. Since it's non-normative, perhaps you could simply modify that paragraph into something more like:", "new_text": "process, by performing normal ACME transactions and providing a validation response for his own account key. The risks due to hosting providers noted above are a particular case. 
For identifiers where the server already has some public key associated with the domain this attack can be prevented by requiring the client to prove control of the corresponding private key. 10.3."} {"id": "q-en-acme-b6656bcdb106d5e264ebc8085cd75ddf064b6bbd6f265f9bbe8c8426a96c5f95", "old_text": "any web server that is accessible to the ACME server, even if it is not accessible to the ACME client. The risk of SSRF through this channel is limited by the fact that the attacker can only control the domain of the URL, not the path. Nonetheless, in order to further limit the SSRF risk, ACME server operators should ensure that validation queries can only be sent to servers on the public Internet, and not, say, web services within the server operator's internal network. Since the attacker could make requests to these public servers himself, he can't gain anything extra through an SSRF attack on ACME aside from a layer of anonymization. 9.5.", "comments": "NAME does this look better?\nLGTM, though I'd change the attacker from a \"he\" to a \"they\" on general principles.\nNot a huge fan of the singular \"they\", but can't be bothered to rewrite around it :)\nYes, this is clearer. Thanks. Saying \"will cause the server to query an arbitrary URI\" is good. Technically you can even specify a non-HTTP protocol in an HTTP redirect, and this language should cover that too. (Go's net/http used by Boulder only does HTTP and HTTPS I think, but things like libcurl support a slightly insane number of protocols.) I'm not sure if the usages of \"URI\" vs \"URL\" in this section are correct, you may want to make them consistent. I never know which one to actually use. This looks good to me.", "new_text": "any web server that is accessible to the ACME server, even if it is not accessible to the ACME client. It might seem that the risk of SSRF through this channel is limited by the fact that the attacker can only control the domain of the URL, not the path. However, if the attacker first sets the domain to one they control, then they can send the server an HTTP redirect (e.g., a 302 response) which will cause the server to query an arbitrary URI. In order to further limit the SSRF risk, ACME server operators should ensure that validation queries can only be sent to servers on the public Internet, and not, say, web services within the server operator's internal network. Since the attacker could make requests to these public servers himself, he can't gain anything extra through an SSRF attack on ACME aside from a layer of anonymization. 9.5."} {"id": "q-en-acme-f0e944758afd6239cf36636e5054d798fcbbda7fa3a6029234c45f5e649d02af", "old_text": "The JWK representation of the new key Both of these thumbprints MUST be computed as specified in RFC7638, using the SHA-256 digest. The values in the \"oldKey\" and \"newKey\" fields MUST be the base64url encodings of the thumbprints. 
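The RFC 7638 thumbprint computation mentioned just above (SHA-256 over the canonical required JWK members, then unpadded base64url) can be sketched roughly as follows; the key values used are placeholders for illustration, not taken from any of the quoted examples.

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: keep only the required members for the key type, serialize
    # with no whitespace and keys in lexicographic order, hash with SHA-256,
    # and return the unpadded base64url encoding of the digest.
    required = {
        "EC": ("crv", "kty", "x", "y"),
        "RSA": ("e", "kty", "n"),
        "oct": ("k", "kty"),
    }[jwk["kty"]]
    canonical = json.dumps({k: jwk[k] for k in required},
                           separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder RSA public key values, for illustration only.
print(jwk_thumbprint({"kty": "RSA", "n": "0vx7agoebGcQ...", "e": "AQAB"}))
```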
The client then encapsulates the key-change object in a JWS, signed with the client's current account key (i.e., the key matching the \"oldKey\" value).", "comments": "We aren't using thumbprints for key-change anymore, so this paragraph doesn't make sense.", "new_text": "The JWK representation of the new key The client then encapsulates the key-change object in a JWS, signed with the client's current account key (i.e., the key matching the \"oldKey\" value)."} {"id": "q-en-acme-2442469ce4b098457b260297c0ec033feab7bc9b55cef909366602964d88393d", "old_text": "MUST be the exact string provided in the Location header field in response to the new-registration request that created the account. The JWK representation of the original key (i.e., the client's current account key) The JWK representation of the new key The client then encapsulates the key-change object in a JWS, signed", "comments": "This was part of URL (see removal of oldKey from example payload), but I missed these two other spots. oldKey is fully specified by the account URL in the payload.", "new_text": "MUST be the exact string provided in the Location header field in response to the new-registration request that created the account. The JWK representation of the new key The client then encapsulates the key-change object in a JWS, signed"} {"id": "q-en-acme-2442469ce4b098457b260297c0ec033feab7bc9b55cef909366602964d88393d", "old_text": "Check that the \"account\" field of the key-change object contains the URL for the registration matching the old key Check that the \"oldKey\" field of the key-change object contains the current account key. Check that the \"newKey\" field of the key-change object contains the key used to sign the inner JWS.", "comments": "This was part of URL (see removal of oldKey from example payload), but I missed these two other spots. oldKey is fully specified by the account URL in the payload.", "new_text": "Check that the \"account\" field of the key-change object contains the URL for the registration matching the old key Check that the \"newKey\" field of the key-change object contains the key used to sign the inner JWS."} {"id": "q-en-acme-8454e93f92a2d4c27bb82e408a5005e232747aff78119088c9be6dba284d2ed1", "old_text": "6.3.2. A client may wish to change the public key that is associated with a account in order to recover from a key compromise or proactively mitigate the impact of an unnoticed key compromise.", "comments": "This is a variant of that uses a MAC key to perform the account binding instead of a bearer token. Thanks to NAME for pointing out the need for this on the ACME mailing list.\nQuestion: will you require a CA that doesn't use external account bindings to reject a request if it contains an external-account-binding member? nit: s/field/member/g, consistent with RFC 7159.\nStartCom plan to use ACME protocol for StartEncrypt, we need to identify the client's validation level, so the subscriber administration can generate a special token in the URL account that send this token to the email address used in the registration. At the registration, user need to enter email and this token with the certificate to let the CA system know this customer's validation level. After the CA system receive the email, the token and signing certificate, CA system know what type of certificate we can issue to this client; if this client account is class 4 validated, then the client can get EV SSL certificate, not DV SSL. 
please add this a parameter to the ACME protocol, thanks.\nThis request should be sent to the ACME mailing list (EMAIL) for further discussion, it would also be very useful to know what your EV (or OV) validation flow would look like using ACME as most likely the current certificate application object would need to be changed to identify those requirements to the server.\nThis was fixed by\nMinor editorial comments.", "new_text": "6.3.2. The server MAY require a value to be present for the \"external- account-binding\" field. This can be used to an ACME account with an existing account in a non-ACME system, such as a CA customer database. To enable ACME account binding, a CA needs to provision the ACME client with a MAC key and a key identifier. The key identifier MUST be an ASCII string. The MAC key SHOULD be provided in base64url- encoded form, to maximize compatibility between provisioning systems and ACME clients. The ACME client then computes a binding JWS to indicate the external account's approval of the ACME account key. The payload of this JWS is the account key being registered, in JWK form. The protected header of the JWS MUST meet the following criteria: The \"alg\" field MUST indicate a MAC-based algorithm The \"kid\" field MUST contain the key identifier provided by the CA The \"nonce\" field MUST NOT be present The \"url\" field MUST be set to the same value as the outer JWS The \"signature\" field of the JWS will contain the MAC value computed with the MAC key provided by the CA. When a CA receives a new-registration request containing an \"external-account-binding\" field, it must decide whether or not to verify the binding. If the CA does not verify the binding, then it MUST NOT reflect the \"external-account-binding\" field in the resulting account object (if any). To verify the account binding, the CA MUST take the following steps: Verify that the value of the field is a well-formed JWS Verify that the JWS protected meets the above criteria Retrieve the MAC key corresponding to the key identifier in the \"kid\" field Verify that the MAC on the JWS verifies using that MAC key Verify that the payload of the JWS represents the same key as was used to verify the outer JWS (i.e., the \"jwk\" field of the outer JWS) If all of these checks pass and the CA creates a new account, then the CA may consider the new account associated with the external account corresponding to the MAC key, and MUST reflect value of the \"external-account-binding\" field in the resulting account object. If any of these checks fail, then the CA MUST reject the new- registration request. 6.3.3. A client may wish to change the public key that is associated with a account in order to recover from a key compromise or proactively mitigate the impact of an unnoticed key compromise."} {"id": "q-en-acme-8454e93f92a2d4c27bb82e408a5005e232747aff78119088c9be6dba284d2ed1", "old_text": "responds with an error status code and a problem document describing the error. 6.3.3. A client may deactivate an account by posting a signed update to the server with a status field of \"deactivated.\" Clients may wish to do", "comments": "This is a variant of that uses a MAC key to perform the account binding instead of a bearer token. Thanks to NAME for pointing out the need for this on the ACME mailing list.\nQuestion: will you require a CA that doesn't use external account bindings to reject a request if it contains an external-account-binding member? 
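A rough sketch of the external account binding JWS described above: the payload is the ACME account key in JWK form, the protected header carries the CA-provided key identifier and the request URL but no nonce, and the signature is a MAC computed with the CA-provided MAC key. The text only requires "a MAC-based algorithm", so HS256 is assumed here, and the key identifier, MAC key, account key, and URL are all placeholder values.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def external_account_binding(account_jwk: dict, kid: str,
                             mac_key_b64url: str, url: str) -> dict:
    # Protected header: MAC algorithm, CA-provided key identifier, and the
    # same "url" as the outer JWS; no "nonce", per the text above.
    protected = b64url(json.dumps({"alg": "HS256", "kid": kid, "url": url},
                                  separators=(",", ":")).encode("utf-8"))
    # Payload: the ACME account key being registered, in JWK form.
    payload = b64url(json.dumps(account_jwk,
                                separators=(",", ":")).encode("utf-8"))
    pad = "=" * (-len(mac_key_b64url) % 4)
    mac_key = base64.urlsafe_b64decode(mac_key_b64url + pad)
    signing_input = f"{protected}.{payload}".encode("ascii")
    sig = hmac.new(mac_key, signing_input, hashlib.sha256).digest()
    return {"protected": protected, "payload": payload, "signature": b64url(sig)}

# Placeholder inputs, for illustration only.
binding = external_account_binding(
    account_jwk={"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
    kid="example-kid-1",
    mac_key_b64url="bWFjLWtleQ",
    url="https://ca.example.com/acme/new-reg")
print(json.dumps(binding, indent=2))
```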
nit: s/field/member/g, consistent with RFC 7159.\nStartCom plan to use ACME protocol for StartEncrypt, we need to identify the client's validation level, so the subscriber administration can generate a special token in the URL account that send this token to the email address used in the registration. At the registration, user need to enter email and this token with the certificate to let the CA system know this customer's validation level. After the CA system receive the email, the token and signing certificate, CA system know what type of certificate we can issue to this client; if this client account is class 4 validated, then the client can get EV SSL certificate, not DV SSL. please add this a parameter to the ACME protocol, thanks.\nThis request should be sent to the ACME mailing list (EMAIL) for further discussion, it would also be very useful to know what your EV (or OV) validation flow would look like using ACME as most likely the current certificate application object would need to be changed to identify those requirements to the server.\nThis was fixed by\nMinor editorial comments.", "new_text": "responds with an error status code and a problem document describing the error. 6.3.4. A client may deactivate an account by posting a signed update to the server with a status field of \"deactivated.\" Clients may wish to do"} {"id": "q-en-acme-8db756a6603b4402ceac8cc34760d130d5506ebf2a6f2f7ff418a3636d5779d4", "old_text": "attacker from guessing it. It MUST NOT contain any characters outside the URL-safe Base64 alphabet. Number of tls-sni-01 iterations A client responds to this challenge by constructing a key authorization from the \"token\" value provided in the challenge and the client's account key. The client first computes the SHA-256 digest Z0 of the UTF8-encoded key authorization, and encodes Z0 in UTF-8 lower-case hexadecimal form. The client then generates iterated hash values Z1...Z(n-1) as follows: The client generates a self-signed certificate for each iteration of Zi with a single subjectAlternativeName extension dNSName that is \"..acme.invalid\", where \"Zi[0:32]\" and \"Zi[32:64]\" represent the first 32 and last 32 characters of the hex- encoded value, respectively (following the notation used in Python). The client then configures the TLS server at the domain such that when a handshake is initiated with the Server Name Indication extension set to \"..acme.invalid\", the corresponding generated certificate is presented. The response to the TLS SNI challenge simply acknowledges that the client is ready to fulfill this challenge.", "comments": "Simplify the TLS SNI challenge by removing the iterations concept, which I believe is not necessary in practice and complicates the protocol needlessly.\nImplementing TLS SNI is currently really complicated, yes.\nFYI, URL implements protocol according to the contents of this PR - i.e. without iterations.\n(so does current Boulder)\nI think this will be OK to land once lands. We should consider whether we want to bump the version number in the challenge. Principle would dictate that we should, since this breaks compatibility with what was published in -01. But since I'm not aware of any implementation of \"tls-sni-01\" as specified in -01 (vs. 
the off-spec implementation in boulder), I'm inclined to just leave it.\nFWIW on the client side of things, xenolf/lego did not implement the iterations for the tls-sni-01 challenge.\nURL implements iterations but takes the absence of an \"n\" parameter to be equivalent to an \"n\" parameter of 1, which is compatible.\nGreat, I'm taking this as sufficient evidence that a version bump is not needed :)", "new_text": "attacker from guessing it. It MUST NOT contain any characters outside the URL-safe Base64 alphabet. A client responds to this challenge by constructing a key authorization from the \"token\" value provided in the challenge and the client's account key. The client computes the SHA-256 digest Z of the UTF8-encoded key authorization, and encodes Z in UTF-8 lower- case hexadecimal form. The client generates a self-signed certificate for with a single subjectAlternativeName extension dNSName that is \"..acme.invalid\", where \"Z[0:32]\" and \"Z[32:64]\" represent the first 32 and last 32 characters of the hex-encoded value, respectively (following the notation used in Python). The client then configures the TLS server at the domain such that when a handshake is initiated with the Server Name Indication extension set to \"..acme.invalid\", the corresponding generated certificate is presented. The response to the TLS SNI challenge simply acknowledges that the client is ready to fulfill this challenge."} {"id": "q-en-acme-8db756a6603b4402ceac8cc34760d130d5506ebf2a6f2f7ff418a3636d5779d4", "old_text": "Given a Challenge/Response pair, the ACME server verifies the client's control of the domain by verifying that the TLS server was configured appropriately. Choose a subset of the N iterations to check, according to local policy. For each iteration, compute the Zi-value from the key authorization in the same way as the client. Open a TLS connection to the domain name being validated on the requested port, presenting the value \"..acme.invalid\" in the SNI field (where the comparison is case-insensitive). Verify that the certificate contains a subjectAltName extension", "comments": "Simplify the TLS SNI challenge by removing the iterations concept, which I believe is not necessary in practice and complicates the protocol needlessly.\nImplementing TLS SNI is currently really complicated, yes.\nFYI, URL implements protocol according to the contents of this PR - i.e. without iterations.\n(so does current Boulder)\nI think this will be OK to land once lands. We should consider whether we want to bump the version number in the challenge. Principle would dictate that we should, since this breaks compatibility with what was published in -01. But since I'm not aware of any implementation of \"tls-sni-01\" as specified in -01 (vs. the off-spec implementation in boulder), I'm inclined to just leave it.\nFWIW on the client side of things, xenolf/lego did not implement the iterations for the tls-sni-01 challenge.\nURL implements iterations but takes the absence of an \"n\" parameter to be equivalent to an \"n\" parameter of 1, which is compatible.\nGreat, I'm taking this as sufficient evidence that a version bump is not needed :)", "new_text": "Given a Challenge/Response pair, the ACME server verifies the client's control of the domain by verifying that the TLS server was configured appropriately, using these steps: Compute the Z-value from the key authorization in the same way as the client. 
Open a TLS connection to the domain name being validated on the requested port, presenting the value \"..acme.invalid\" in the SNI field (where the comparison is case-insensitive). Verify that the certificate contains a subjectAltName extension"} {"id": "q-en-acme-8db756a6603b4402ceac8cc34760d130d5506ebf2a6f2f7ff418a3636d5779d4", "old_text": "no other dNSName entries of the form \"*.acme.invalid\" are present in the subjectAltName extension. It is RECOMMENDED that the ACME server verify a random subset of the N iterations with an appropriate sized to ensure that an attacker who can provision certs for a default virtual host, but not for arbitrary simultaneous virtual hosts, cannot pass the challenge. For instance, testing a subset of 5 of N=25 domains ensures that such an attacker has only a one in 25/5 chance of success if they post certs Zn in random succession. (This probability is enforced by the requirement that each certificate have only one Zi value.) It is RECOMMENDED that the ACME server validation TLS connections from multiple vantage points to reduce the risk of DNS hijacking attacks.", "comments": "Simplify the TLS SNI challenge by removing the iterations concept, which I believe is not necessary in practice and complicates the protocol needlessly.\nImplementing TLS SNI is currently really complicated, yes.\nFYI, URL implements protocol according to the contents of this PR - i.e. without iterations.\n(so does current Boulder)\nI think this will be OK to land once lands. We should consider whether we want to bump the version number in the challenge. Principle would dictate that we should, since this breaks compatibility with what was published in -01. But since I'm not aware of any implementation of \"tls-sni-01\" as specified in -01 (vs. the off-spec implementation in boulder), I'm inclined to just leave it.\nFWIW on the client side of things, xenolf/lego did not implement the iterations for the tls-sni-01 challenge.\nURL implements iterations but takes the absence of an \"n\" parameter to be equivalent to an \"n\" parameter of 1, which is compatible.\nGreat, I'm taking this as sufficient evidence that a version bump is not needed :)", "new_text": "no other dNSName entries of the form \"*.acme.invalid\" are present in the subjectAltName extension. It is RECOMMENDED that the ACME server validation TLS connections from multiple vantage points to reduce the risk of DNS hijacking attacks."} {"id": "q-en-acme-04e027f76281c5469b3b37d503e64cb5558ad1d086f57e169a28ac8694f9c16d", "old_text": "This registry lists field names that are defined for use in ACME order objects. Fields marked as \"client configurable\" may be included in a new-account request. Template:", "comments": "This PR fixes a typo reported by NAME in the \"Fields in Order Objects\" section. Previously it referred to the listed fields as relating to the \"new-account request\". This PR changes the text to make it clear they relate to a \"new-order request\". Resolves URL\nMerging based on manual review; CircleCI seems to be busted at the moment.\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks!", "new_text": "This registry lists field names that are defined for use in ACME order objects. Fields marked as \"client configurable\" may be included in a new-order request. 
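The Z-value and subjectAlternativeName construction for the simplified tls-sni-01 challenge described a little earlier can be sketched like this; the key authorization below is a placeholder, assumed to take the usual token "." account-key-thumbprint form.

```python
import hashlib

def tls_sni_01_san(key_authorization: str) -> str:
    # Z is the lower-case hex SHA-256 digest of the key authorization; the
    # single dNSName in the self-signed certificate is then
    # Z[0:32] "." Z[32:64] ".acme.invalid".
    z = hashlib.sha256(key_authorization.encode("utf-8")).hexdigest()
    return "{}.{}.acme.invalid".format(z[0:32], z[32:64])

# Placeholder key authorization, assumed to be "<token>.<thumbprint>".
print(tls_sni_01_san("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA."
                     "nP1qzpXGymHBrUEepNY9HCsQk7K8KhOypzEt62jcerQ"))
```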
Template:"} {"id": "q-en-acme-58881b3299e3c24dc7dbb5eefbd7b7a649c84091b7413b5ea404664d78bb2bd6", "old_text": "Each ACME function is accomplished by the client sending a sequence of HTTPS requests to the server, carrying JSON messages RFC7159. Use of HTTPS is REQUIRED. Clients SHOULD support HTTP public key pinning RFC7469, and servers SHOULD emit pinning headers. Each subsection of certificate-management below describes the message formats used by the function and the order in which messages are sent. In most HTTPS transactions used by ACME, the ACME client is the HTTPS client and the ACME server is the HTTPS server. The ACME server acts", "comments": "Public key pinning isn't implemented in most HTTPS libraries outside of browsers, so this is a considerable burden on implementers. Public key pinning carries a fairly high risk of footgunning. The consequence of a failed pin for a CA that serves many ACME clients would be that some of those clients would fail to renew their certs, causing cascading breakage. There is relatively little confidential information conveyed in ACME, and there are other defenses built into ACME (like including the account key as part of the challenge data), so HPKP is not strongly necessary.", "new_text": "Each ACME function is accomplished by the client sending a sequence of HTTPS requests to the server, carrying JSON messages RFC7159. Use of HTTPS is REQUIRED. Each subsection of certificate-management below describes the message formats used by the function and the order in which messages are sent. In most HTTPS transactions used by ACME, the ACME client is the HTTPS client and the ACME server is the HTTPS server. The ACME server acts"} {"id": "q-en-acme-79e298d6ffca6589214af0bd1246719b0b75624a8dfeb1483f2d56867acb4d7e", "old_text": "As noted above, DNS forgery attacks against the ACME server can result in the server making incorrect decisions about domain control and thus mis-issuing certificates. Servers SHOULD verify DNSSEC when it is available for a domain. When DNSSEC is not available, servers SHOULD perform DNS queries over TCP, which provides better resistance to some forgery attacks than DNS over UDP. 10.2.", "comments": "Instead of just when not using DNSSEC since you don't know if a domain is secured by DNSSEC until the query is finished. Also rename the section 'DNS Security' since it also talks about DNSSEC. Also combines the 'DNS Security' and 'Use of DNSSEC resolvers sections.", "new_text": "As noted above, DNS forgery attacks against the ACME server can result in the server making incorrect decisions about domain control and thus mis-issuing certificates. Servers SHOULD perform DNS queries over TCP, which provides better resistance to some forgery attacks than DNS over UDP. An ACME-based CA will often need to make DNS queries, e.g., to validate control of DNS names. Because the security of such validations ultimately depends on the authenticity of DNS data, every possible precaution should be taken to secure DNS queries done by the CA. It is therefore RECOMMENDED that ACME-based CAs make all DNS queries via DNSSEC-validating stub or recursive resolvers. This provides additional protection to domains which choose to make use of DNSSEC. An ACME-based CA must use only a resolver if it trusts the resolver and every component of the network route by which it is accessed. 
It is therefore RECOMMENDED that ACME-based CAs operate their own DNSSEC-validating resolvers within their trusted network and use these resolvers both for both CAA record lookups and all record lookups in furtherance of a challenge scheme (A, AAAA, TXT, etc.). 10.2."} {"id": "q-en-acme-79e298d6ffca6589214af0bd1246719b0b75624a8dfeb1483f2d56867acb4d7e", "old_text": "the namespace used for the TLS-based challenge (the \"acme.invalid\" namespace for \"tls-sni-02\"). 10.3. An ACME-based CA will often need to make DNS queries, e.g., to validate control of DNS names. Because the security of such validations ultimately depends on the authenticity of DNS data, every possible precaution should be taken to secure DNS queries done by the CA. It is therefore RECOMMENDED that ACME-based CAs make all DNS queries via DNSSEC-validating stub or recursive resolvers. This provides additional protection to domains which choose to make use of DNSSEC. An ACME-based CA must use only a resolver if it trusts the resolver and every component of the network route by which it is accessed. It is therefore RECOMMENDED that ACME-based CAs operate their own DNSSEC-validating resolvers within their trusted network and use these resolvers both for both CAA record lookups and all record lookups in furtherance of a challenge scheme (A, AAAA, TXT, etc.). 11. References 11.1. URIs", "comments": "Instead of just when not using DNSSEC since you don't know if a domain is secured by DNSSEC until the query is finished. Also rename the section 'DNS Security' since it also talks about DNSSEC. Also combines the 'DNS Security' and 'Use of DNSSEC resolvers sections.", "new_text": "the namespace used for the TLS-based challenge (the \"acme.invalid\" namespace for \"tls-sni-02\"). 11. References 11.1. URIs"} {"id": "q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271", "old_text": "other certificate management functions, such as certificate revocation. DISCLAIMER: This is a work in progress draft of ACME and has not yet had a thorough security analysis. RFC EDITOR: PLEASE REMOVE THE FOLLOWING PARAGRAPH: The source for this draft is maintained in GitHub. Suggested changes should be submitted as pull requests at https://github.com/ietf-wg-acme/acme", "comments": "Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: Section 7.3.5, first paragraph, second line: A \"bind\" word is missing between the words \"to\" and \"an\" Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. Section 7.5.1, the Request URIs in the examples: Should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters. 
I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change.", "new_text": "other certificate management functions, such as certificate revocation. RFC EDITOR: PLEASE REMOVE THE FOLLOWING PARAGRAPH: The source for this draft is maintained in GitHub. Suggested changes should be submitted as pull requests at https://github.com/ietf-wg-acme/acme"} {"id": "q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271", "old_text": "7.3.5. The server MAY require a value to be present for the \"external- account-binding\" field. This can be used to an ACME account with an existing account in a non-ACME system, such as a CA customer database. To enable ACME account binding, a CA needs to provision the ACME client with a MAC key and a key identifier. The key identifier MUST", "comments": "Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: Section 7.3.5, first paragraph, second line: A \"bind\" word is missing between the words \"to\" and \"an\" Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. Section 7.5.1, the Request URIs in the examples: Should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters. I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change.", "new_text": "7.3.5. The server MAY require a value to be present for the \"external- account-binding\" field. This can be used to associate an ACME account with an existing account in a non-ACME system, such as a CA customer database. To enable ACME account binding, a CA needs to provision the ACME client with a MAC key and a key identifier. The key identifier MUST"} {"id": "q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271", "old_text": "In some cases, a CA running an ACME server might have a completely external, non-ACME process for authorizing a client to issue for an identifier. In these case, the CA should provision its ACME server with authorization objects corresponding to these authorizations and reflect them as already valid in any orders submitted by the client.", "comments": "Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: Section 7.3.5, first paragraph, second line: A \"bind\" word is missing between the words \"to\" and \"an\" Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. 
Section 7.5.1, the Request URIs in the examples: Should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters. I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change.", "new_text": "In some cases, a CA running an ACME server might have a completely external, non-ACME process for authorizing a client to issue for an identifier. In these cases, the CA should provision its ACME server with authorization objects corresponding to these authorizations and reflect them as already valid in any orders submitted by the client."} {"id": "q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271", "old_text": "the \"challenges\" dictionary. The client sends these updates back to the server in the form of a JSON object with the response fields required by the challenge type, carried in a POST request to the challenge URL (not authorization URL) once it is ready for the server to attempt validation. For example, if the client were to respond to the \"http-01\" challenge in the above authorization, it would send the following request: The server updates the authorization document by updating its representation of the challenge with the response fields provided by the client. The server MUST ignore any fields in the response object that are not specified as response fields for this type of challenge. The server provides a 200 (OK) response with the updated challenge", "comments": "Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: Section 7.3.5, first paragraph, second line: A \"bind\" word is missing between the words \"to\" and \"an\" Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. Section 7.5.1, the Request URIs in the examples: Should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters. I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change.", "new_text": "the \"challenges\" dictionary. The client sends these updates back to the server in the form of a JSON object with contents as specified by the challenge type, carried in a POST request to the challenge URL (not authorization URL) once it is ready for the server to attempt validation. 
For example, if the client were to respond to the \"http-01\" challenge in the above authorization, it would send the following request: The server updates the authorization document by updating its representation of the challenge with the response object provided by the client. The server MUST ignore any fields in the response object that are not specified as response fields for this type of challenge. The server provides a 200 (OK) response with the updated challenge"} {"id": "q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271", "old_text": "Clients SHOULD NOT respond to challenges until they believe that the server's queries will succeed. If a server's initial validation query fails, the server SHOULD retry the query after some time. While the server is still trying, the status of the challenge remains \"pending\"; it is only marked \"invalid\" once the server has given up. The server MUST provide information about its retry state to the client via the \"errors\" field in the challenge and the Retry-After", "comments": "Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: Section 7.3.5, first paragraph, second line: A \"bind\" word is missing between the words \"to\" and \"an\" Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. Section 7.5.1, the Request URIs in the examples: Should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters. I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change.", "new_text": "Clients SHOULD NOT respond to challenges until they believe that the server's queries will succeed. If a server's initial validation query fails, the server SHOULD retry the query after some time, in order to account for delay in setting up responses such as DNS records or HTTP resources. The precise retry schedule is up to the server, but server operators should keep in mind the operational scenarios that the schedule is trying to accommodate. Given that retries are intended to address things like propagation delays in HTTP or DNS provisioning, there should not usually be any reason to retry more often than every 5 or 10 seconds. While the server is still trying, the status of the challenge remains \"pending\"; it is only marked \"invalid\" once the server has given up. The server MUST provide information about its retry state to the client via the \"errors\" field in the challenge and the Retry-After"} {"id": "q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271", "old_text": "for a human user to navigate to. If the user chooses to complete this challenge (by visiting the website and completing its instructions), the client indicates this by sending a simple acknowledgement response to the server. 
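The retry behaviour described above (keep the challenge "pending" while retrying, surface errors and a Retry-After hint to the client, avoid polling more often than every 5 or 10 seconds, and eventually give up and mark the challenge "invalid") might look roughly like the following on the server side; the attempt limit and the attempt_query callback are assumptions for illustration, not anything specified by the draft.

```python
import time

RETRY_INTERVAL = 10   # seconds; the text suggests no more often than every 5-10s
MAX_ATTEMPTS = 30     # assumed give-up point, not taken from the draft

def run_validation(challenge: dict, attempt_query) -> dict:
    # attempt_query() performs one validation query (HTTP fetch, DNS lookup,
    # TLS handshake, ...) and returns (ok, error_description_or_None).
    for _ in range(MAX_ATTEMPTS):
        ok, error = attempt_query()
        if ok:
            challenge["status"] = "valid"
            return challenge
        challenge["status"] = "pending"             # still trying
        if error is not None:
            challenge.setdefault("errors", []).append(error)
        challenge["retry_after"] = RETRY_INTERVAL   # surfaced as Retry-After
        time.sleep(RETRY_INTERVAL)
    challenge["status"] = "invalid"                 # the server has given up
    return challenge
```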
The string \"oob-01\" On receiving a response, the server MUST verify that the value of the \"type\" field is \"oob-01\". Otherwise, the steps the server takes to", "comments": "Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: Section 7.3.5, first paragraph, second line: A \"bind\" word is missing between the words \"to\" and \"an\" Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. Section 7.5.1, the Request URIs in the examples: Should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters. I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change.", "new_text": "for a human user to navigate to. If the user chooses to complete this challenge (by visiting the website and completing its instructions), the client indicates this by sending a simple acknowledgement response to the server. The payload of this response is an empty JSON object (\"{}\", or \"e30\" base64url-encoded). On receiving a response, the server MUST verify that the value of the \"type\" field is \"oob-01\". Otherwise, the steps the server takes to"} {"id": "q-en-acme-1325dd2b1aefaec24ac83f8a4c781ed21de3111a86e363ccff69b34779a52e42", "old_text": "according to this encoding, then the verifier MUST reject the JWS as malformed. 5.6. Certain elements of the protocol will require the establishment of a shared secret between the client and the server, in such a way that an entity observing the ACME protocol cannot derive the secret. In these cases, we use a simple ECDH key exchange, based on the system used by CMS RFC5753: Inputs: Client-generated key pair Server-generated key pair Length of the shared secret to be derived Label Perform the ECDH primitive operation to obtain Z (Section 3.3.1 of SEC1) Select a hash algorithm according to the curve being used: For \"P-256\", use SHA-256 For \"P-384\", use SHA-384 For \"P-521\", use SHA-512 Derive the shared secret value using the KDF in Section 3.6.1 of SEC1 using Z and the selected hash algorithm, and with the UTF-8 encoding of the label as the SharedInfo value In cases where the length of the derived secret is shorter than the output length of the chosen hash algorithm, the KDF referenced above reduces to a single hash invocation. The shared secret is equal to the leftmost octets of the following: 6. In this section, we describe the certificate management functions", "comments": "As discussed at the F2F meeting at IETF94, MAC-based account recovery is not adding a lot of value, and it adds significant complexity to the spec. (And nobody has implemented it.) 
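The "e30" value given above for the empty oob-01 acknowledgement payload is just the unpadded base64url encoding of "{}", which is easy to confirm:

```python
import base64

# "{}" is the two ASCII bytes 0x7B 0x7D; base64url gives "e30=" and ACME
# drops the trailing padding.
encoded = base64.urlsafe_b64encode(b"{}").rstrip(b"=").decode("ascii")
assert encoded == "e30"
print(encoded)
```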
This PR removes MAC-based recovery and the crypto that it depends on.\nSeeing no complaints on the mailing list, merging.", "new_text": "according to this encoding, then the verifier MUST reject the JWS as malformed. 6. In this section, we describe the certificate management functions"} {"id": "q-en-acme-1325dd2b1aefaec24ac83f8a4c781ed21de3111a86e363ccff69b34779a52e42", "old_text": "registration, to allow a client to retrieve the \"new-authorization\" and \"terms-of-service\" URI 6.3.1. If the client wishes to establish a secret key with the server that it can use to recover this account later (a \"recovery key\"), then it must perform a simple key agreement protocol as part of the new- registration transaction. The client and server perform an ECDH exchange through the new-registration transaction (using the technique in key-agreement), and the result is the recovery key. To request a recovery key, the client includes a \"recoveryKey\" field in its new-registration request. The value of this field is a JSON object. The client's ECDH public key The length of the derived secret, in octets. In the client's request, this object contains a JWK for a random ECDH public key generated by the client and the client-selected length value. Clients need to choose length values that balance security and usability. On the one hand, a longer secret makes it more difficult for an attacker to recover the secret when it is used for recovery (see mac-based-recovery). On the other hand, clients may wish to make the recovery key short enough for a user to easily write it down. The server MUST validate that the elliptic curve (\"crv\") and length value chosen by the client are acceptable, and that it is otherwise willing to create a recovery key. If not, then it MUST reject the new-registration request. If the server agrees to create a recovery key, then it generates its own random ECDH key pair and combines it with the client's public key as described in key-agreement above, using the label \"recovery\". The derived secret value is the recovery key. The server then returns to the client the ECDH key that it generated. The server MUST generate a fresh key pair for every transaction. The server's ECDH public key On receiving the server's response, the client can compute the recovery key by combining the server's public key together with the private key corresponding to the public key that it sent to the server. Clients may refresh the recovery key associated with a registration by sending a POST request with a new recoveryKey object. If the server agrees to refresh the recovery key, then it responds in the same way as to a new registration request that asks for a recovery key. 6.4. Once a client has created an account with an ACME server, it is", "comments": "As discussed at the F2F meeting at IETF94, MAC-based account recovery is not adding a lot of value, and it adds significant complexity to the spec. (And nobody has implemented it.) This PR removes MAC-based recovery and the crypto that it depends on.\nSeeing no complaints on the mailing list, merging.", "new_text": "registration, to allow a client to retrieve the \"new-authorization\" and \"terms-of-service\" URI 6.4. Once a client has created an account with an ACME server, it is"} {"id": "q-en-acme-1325dd2b1aefaec24ac83f8a4c781ed21de3111a86e363ccff69b34779a52e42", "old_text": "storage provider, and give the encryption key to the user as a recovery value. 6.4.1. 
With MAC-based recovery, the client proves to the server that it holds a secret value established in the initial registration transaction. The client requests MAC-based recovery by sending a MAC over the new account key, using the recovery key from the initial registration. The string \"mac\" The URI for the registration to be recovered. A JSON-formatted JWS object using an HMAC algorithm, whose payload is the JWK representation of the public key of the new account key pair. On receiving such a request the server MUST verify that: The base registration has a recovery key associated with it The \"alg\" value in the \"mac\" JWS represents a MAC algorithm The \"mac\" JWS is valid according to the validation rules in RFC7515, using the recovery key as the MAC key The JWK in the payload represents the new account key (i.e. the key used to verify the ACME message) If those conditions are met, and the recovery request is otherwise acceptable to the server, then the recovery process has succeeded. The server creates a new registration resource based on the base registration and the new account key, and returns it on a 201 (Created) response, together with a Location header indicating a URI for the new registration. If the recovery request is unsuccessful, the server returns an error response, such as 403 (Forbidden). 6.4.2. In the contact-based recovery process, the client requests that the server send a message to one of the contact URIs registered for the account. That message indicates some action that the server requires the client's user to perform, e.g., clicking a link in an email. If the user successfully completes the server's required actions, then the server will bind the account to the new account key. (Note that this process is almost entirely out of band with respect to ACME. ACME only allows the client to initiate the process, and the server to indicate the result.) To initiate contact-based recovery, the client sends a POST request to the server's recover-registration URI, with a body specifying which registration is to be recovered. The body of the request MUST be signed by the client's new account key pair. The string \"contact\" The URI for the registration to be recovered. If the server agrees to attempt contact-based recovery, then it creates a new registration resource containing a stub registration object. The stub registration has the client's new account key and contacts, but no authorizations or certificates associated. The server returns the stub contact in a 201 (Created) response, along with a Location header field indicating the URI for the new registration resource (which will be the registration URI if the recovery succeeds). After recovery has been initiated, the server follows its chosen recovery process, out-of-band to ACME. While the recovery process is", "comments": "As discussed at the F2F meeting at IETF94, MAC-based account recovery is not adding a lot of value, and it adds significant complexity to the spec. (And nobody has implemented it.) This PR removes MAC-based recovery and the crypto that it depends on.\nSeeing no complaints on the mailing list, merging.", "new_text": "storage provider, and give the encryption key to the user as a recovery value. The client requests recovery by asking that the server send a message to one of the contact URIs registered for the account. That message indicates some action that the server requires the client's user to perform, e.g., clicking a link in an email. 
If the user successfully completes the server's required actions, then the server will bind the account to the new account key. (Note that this process is almost entirely out of band with respect to ACME. ACME only allows the client to initiate the process, and the server to indicate the result.) To initiate recovery, the client sends a POST request to the server's recover-registration URI, with a body specifying which registration is to be recovered. The body of the request MUST be signed by the client's new account key pair. The string \"contact\" The URI for the registration to be recovered. If the server agrees to attempt recovery, then it creates a new registration resource containing a stub registration object. The stub registration has the client's new account key and contacts, but no authorizations or certificates associated. The server returns the stub contact in a 201 (Created) response, along with a Location header field indicating the URI for the new registration resource (which will be the registration URI if the recovery succeeds). After recovery has been initiated, the server follows its chosen recovery process, out-of-band to ACME. While the recovery process is"} {"id": "q-en-acme-1325dd2b1aefaec24ac83f8a4c781ed21de3111a86e363ccff69b34779a52e42", "old_text": "is the only one that can choose the new account key that receives the capabilities held by the account being recovered. MAC-based recovery can be performed if the attacker knows the account key and registration URI for the account being recovered. Both of these are difficult to obtain for a network attacker, because ACME uses HTTPS, though if the recovery key and registration URI are sufficiently predictable, the attacker might be able to guess them. An ACME MitM can see the registration URI, but still has to guess the recovery key, since neither the ECDH in the provisioning phase nor HMAC in the recovery phase will reveal it to him. ACME clients can thus mitigate problems with MAC-based recovery by using long recovery keys. ACME servers should enforce a minimum recovery key length, and impose rate limits on recovery to limit an attacker's ability to test different guesses about the recovery key. Contact-based recovery uses both the ACME channel and the contact channel. The provisioning process is only visible to an ACME MitM, and even then, the MitM can only observe the contact information provided. If the ACME attacker does not also have access to the contact channel, there is no risk. The security of the contact-based recovery process is entirely dependent on the security of the contact channel. The details of this will depend on the specific out-of-band technique used by the server. For example: If the server requires a user to click a link in a message sent to a contact address, then the contact channel will need to ensure", "comments": "As discussed at the F2F meeting at IETF94, MAC-based account recovery is not adding a lot of value, and it adds significant complexity to the spec. (And nobody has implemented it.) This PR removes MAC-based recovery and the crypto that it depends on.\nSeeing no complaints on the mailing list, merging.", "new_text": "is the only one that can choose the new account key that receives the capabilities held by the account being recovered. Account recovery uses both the ACME channel and the contact channel. The provisioning process is only visible to an ACME MitM, and even then, the MitM can only observe the contact information provided. 
If the ACME attacker does not also have access to the contact channel, there is no risk. The security of the recovery process is entirely dependent on the security of the contact channel. The details of this will depend on the specific out-of-band technique used by the server. For example: If the server requires a user to click a link in a message sent to a contact address, then the contact channel will need to ensure"} {"id": "q-en-acme-1325dd2b1aefaec24ac83f8a4c781ed21de3111a86e363ccff69b34779a52e42", "old_text": "In practice, many contact channels that can be used to reach many clients do not provide strong assurances of the types noted above. In designing and deploying contact-based recovery schemes, ACME servers operators will need to find an appropriate balance between using contact channels that can reach many clients and using contact- based recovery schemes that achieve an appropriate level of risk using those contact channels. 9.4.", "comments": "As discussed at the F2F meeting at IETF94, MAC-based account recovery is not adding a lot of value, and it adds significant complexity to the spec. (And nobody has implemented it.) This PR removes MAC-based recovery and the crypto that it depends on.\nSeeing no complaints on the mailing list, merging.", "new_text": "In practice, many contact channels that can be used to reach many clients do not provide strong assurances of the types noted above. In designing and deploying recovery schemes, ACME servers operators will need to find an appropriate balance between using contact channels that can reach many clients and using contact-based recovery schemes that achieve an appropriate level of risk using those contact channels. 9.4."} {"id": "q-en-acme-b6ac1936e8081ee2b67d8bd49cce2465375557b667240763704afff3d241f21d", "old_text": "certificate. If the revocation succeeds, the server responds with status code 200 (OK). If the revocation fails, the server returns an error. 8.", "comments": "NAME This definitely needs to be reviewed.\nThis looks good to me.\nI like the idea of a special compound error type to indicate that the subproblems field is used.\nIn s7.6, it seems like we should indicate what error is returned: If the revocation fails, the server returns an error. The draft this nicely in many others places in the draft.\nI mentioned this earlier on the mailing list, but it did not receive any feedback: as a ACME client developer, I would like to be able to distinguish between errors which do not require user interaction (that would be \"certificate is already revoked, so can't revoke it again\"), and ones where the user must be warned that the operation did not succeed (\"certificate could not be revoked\", \"bad credentials\", etc.). This is currently only possible with server-specific ugly hacks (for example, returns this information, but other servers might return it differently or this might change in later versions of Boulder as it is not specified).\nThis is a good idea, sorry for missing it earlier. NAME - Do you have any thoughts on other cases besides \"already revoked\" that would (a) not require user interaction and (b) not be covered by other current error codes? NAME - Would you mind making a PR to add an error code (and anything else that NAME comes up with), and note it in the revocation section?\nI thought of one to consider in :\nIt looks like Boulder uses 404 and a generic malformed problem type for this case: URL URL\nNAME So far I couldn't think of anything else. 
I'm totally happy with :-)\nFixed by\n:+1: Thanks!", "new_text": "certificate. If the revocation succeeds, the server responds with status code 200 (OK). If the revocation fails, the server returns an error. For example, if the certificate has already been revoked the server returns an error response with status code 400 (Bad Request) and type \"urn:ietf:params:acme:error:alreadyRevoked\". 8."} {"id": "q-en-acme-4035c793ae2283d0753719ef87d6e9ef74e10581d3a78e6117e145dfff4fdd22", "old_text": "configuring their TLS implementations. ACME servers that support TLS 1.3 MAY allow clients to send early data (0-RTT). This is safe because the ACME protocol itself includes anti-replay protections (see replay-protection). ACME clients MUST send a User-Agent header field, in accordance with RFC7231. This header field SHOULD include the name and version of", "comments": "URL URL\nThanks NAME looks fine, just adding clarifications. Benjamin Kaduk has entered the following ballot position for draft-ietf-acme-acme-14: Discuss When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL DISCUSS: This is a great thing to have, and I intend to eventually ballot Yes, but I do have some questions that may require further discussion before this document is approved. It looks like the server returns an unauthenticated \"badSignatureAlgorithm\" error when the client sends a JWS using an unsupported signature algorithm (Section 6.2). What prevents an active attacker from performing a downgrade attack on the signature algorithm used? Similarly, since we include in the threat model a potentially hostile CDN/MitM between the ACME client and ACME server, can that attacker strip a success response and replace it with a badNonce error, causing the client to retry (and thus duplicate the request processing on the server)? I am not an ART AD, but there is not yet an internationalization directorate, and seeing statements like \"inputs for digest computations MUST be encoded using the UTF-8 character set\" (Section 5) without additional discussion of normalization and/or what the canonical form for the digest input is makes me nervous. Has sufficient internationalization review been performed to ensure that there are no latent issues in this space? Section 6.1 has text discussing TLS 1.3's 0-RTT mode. If this text is intended to be a profile that defines/allows the use of TLS 1.3 0-RTT data for the ACME protocol, I think you need to be more specific and say something like \"MAY allow clients to send early data (0-RTT); there are no ACME-specific restrictions on which types of requests are permitted in 0-RTT\", since the runtime configuration is just 0-RTT yes/no, and the protocol spec is in charge of saying which PDUs are allowed or not. Section 6.2 notes that servers MUST NOT respond to GET requests for sensitvie resources. Why are account resources the only sensitive ones? Are authorizations not sensitive? Or are those considered to fall under the umbrella of \"account resources\" (Section 7.1 seems pretty clear that they do not)? 
Section 7.1.1 discusses how the server can include a caaIdentities element in its directory metadata; does this (or anything else) need to be integrity protected by anything stronger than the Web PKI cert authenticating the HTTPS connection? It seems that a bogus caaIdentities value could lead to an unfortunate DoS in some cases. I am also a bit uncertain if the document is internally consistent about whether one challenge verification suffices or there can be cases when multiple challenge verifications are required for a successful authorization. I attmpted to note relevant snippets of the text in my comments on Section 7.1.4. I also have some important substantive comments in the section-by-section COMMENTS, since they would not in and of themselves block publication. COMMENT: This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): 2 K60BWPrMQG9SDxBDSxtSw 3 IXVHDyxIRGcTE0VSblhPzw 4 JHb54aTKTXBWQOzGYkt9A Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Section 1 Different types of certificates reflect different kinds of CA verification of information about the certificate subject. \"Domain Validation\" (DV) certificates are by far the most common type. The only validation the CA is required to perform in the DV issuance process is to verify that the requester has effective control of the domain. Can we get an (informative) ref for the \"required\"/requirements? Section 6.1 W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. 
Section 6.2 IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? Section 6.3 For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? Section 6.4 In order to protect ACME resources from any possible replay attacks, ACME requests have a mandatory anti-replay mechanism. We don't seem to actually define what an \"ACME request\" is that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) Section 6.6 IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. [...] Servers MUST NOT use the ACME URN namespace Section 9.6 for errors other than the standard types. \"standard\" as determined by inclusion in this document, or in the IANA registry? Section 7.1 The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certification chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Section 7.1.1 [...] It is a JSON object, whose field names are drawn from the following table and whose values are the corresponding URLs. Er, from the IANA registry, right? The following metadata items are defined, all of which are OPTIONAL: Maybe also refer to the registry? Section 7.1.2 IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) Section 7.1.2.1 IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? Section 7.1.4 challenges (required, array of objects): For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For final authorizations (in the \"valid\" or \"invalid\" state), the challenges that were used. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. This leaves me slightly confused. A final authorization can have multiple challenges.
So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. wildcard (optional, boolean): For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard prefix this field MUST be present, and true. Is there a difference between false and absent? Section 7.3 A client creates a new account with the server by sending a POST request to the server's new-account URL. The body of the request is a stub account object optionally containing the \"contact\" and \"termsOfServiceAgreed\" fields. Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) Section 7.3.2 It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. Section 7.3.5 The server MAY require a value for the \"externalAccountBinding\" field to be present in \"newAccount\" requests. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? To enable ACME account binding, the CA operating the ACME server needs to provide the ACME client with a MAC key and a key identifier, using some mechanism outside of ACME. This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. [...] The payload of this JWS is the account key being registered, in JWK form. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. If a CA requires that new-account requests contain an \"externalAccountBinding\" field, then it MUST provide the value \"true\" in the \"externalAccountRequired\" subfield of the \"meta\" field in the directory object. 
If the CA receives a new-account request without [...] nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. Section 7.3.7 IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section Section 7.4 Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of URL that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? Section 7.4.1 Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". Section 7.5 \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. Section 7.5.1 To do this, the client updates the authorization object received from the server by filling in any required information in the elements of the \"challenges\" dictionary. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. Section 8 IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? Section 8.1 [...] A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, [...] 
I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. Section 8.3 I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most cases, the operating system provides an appropriate facility [...] Should these prove unsatisfactory, [RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. Verify that the body of the response is well-formed key authorization. The server SHOULD ignore whitespace characters at the end of the body. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Section 8.4 Should this \"token\" description include the same text about entropy as for the HTTP challenge? Section 9.7.1 There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client are accepted when posted to the account URL, e.g., for account deactivation. Section 10.1 Can there be overlap between the \"validation server\" function and the \"ACME client\" function? Section 10.2 [...] The key authorization reflects the account public key, is provided to the server in the validation response over the validation channel and signed afterwards by the corresponding private key in the challenge response over the ACME channel. I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitigate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, independent of ACME. Section 10.4 Some server implementations include information from the validation server's response (in order to facilitate debugging). Such Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. Section 11.1 IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. Section 11.3 The http-01, and dns-01 validation methods mandate the usage of a nit: spurious comma. [...]
Secondly, the entropy requirement prevents ACME clients from implementing a \"naive\" validation server that automatically replies to challenges without participating in the creation of the initial authorization request. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request. Thanks for addressing my Discuss and other points; as promised, I'm switching to Yes. (But I support Adam's discuss and look forward to seeing it come to a close as well.) This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Can we get an (informative) ref for the \"required\"/requirements? W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? We don't seem to actually define what an \"ACME request\" is that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. 
IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. \"standard\" as determined by inclusion in this document, or in the IANA registry? The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Er, from the IANA registry, right? Maybe also refer to the registry? IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. Is there a difference between false and absent? Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. 
But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. This is presumably my fault and not the document's, but I had to read this a few times to bind it as the ACME account key, and not the external MAC key. nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behalf of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section 7.3.2. Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of www.example.com that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\".
\"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most case, the operating system provides an appropriate facility [...] Should these prove unsatisfactory,[RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Should this \"token\" description include the same text about entropy as for the HTTP challenge? There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client *are* accepted when posted to the account URL, e.g., for account deactivation. Can there be overlap between the \"validation server\" function and the \"ACME client\" function? I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. nit: spurious comma. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request.", "new_text": "configuring their TLS implementations. 
ACME servers that support TLS 1.3 MAY allow clients to send early data (0-RTT). This is safe because the ACME protocol itself includes anti-replay protections (see replay-protection) in all cases where they are required. For this reason, there are no restrictions on what ACME data can be carried in 0-RTT. ACME clients MUST send a User-Agent header field, in accordance with RFC7231. This header field SHOULD include the name and version of"} {"id": "q-en-acme-4035c793ae2283d0753719ef87d6e9ef74e10581d3a78e6117e145dfff4fdd22", "old_text": "\"status\" field. For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For final authorizations (in the \"valid\" or \"invalid\" state), the challenges that were used. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard", "comments": "URL URL\nThanks NAME looks fine, just adding clarifications. Benjamin Kaduk has entered the following ballot position for draft-ietf-acme-acme-14: Discuss When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL DISCUSS: This is a great thing to have, and I intend to eventually ballot Yes, but I do have some questions that may require further discussion before this document is approved. It looks like the server returns an unauthenticated \"badSignatureAlgorithm\" error when the client sends a JWS using an unsupported signature algorithm (Section 6.2). What prevents an active attacker from performing a downgrade attack on the signature algorithm used? Similarly, since we include in the threat model a potentially hostile CDN/MitM between the ACME client and ACME server, can that attacker strip a success response and replace it with a badNonce error, causing the client to retry (and thus duplicate the request processing on the server)? I am not an ART AD, but there is not yet an internationalization directorate, and seeing statements like \"inputs for digest computations MUST be encoded using the UTF-8 character set\" (Section 5) without additional discussion of normalization and/or what the canonical form for the digest input is makes me nervous. Has sufficient internationalization review been performed to ensure that there are no latent issues in this space? Section 6.1 has text discussing TLS 1.3's 0-RTT mode. If this text is intended to be a profile that defines/allows the use of TLS 1.3 0-RTT data for the ACME protocol, I think you need to be more specific and say something like \"MAY allow clients to send early data (0-RTT); there are no ACME-specific restrictions on which types of requests are permitted in 0-RTT\", since the runtime configuration is just 0-RTT yes/no, and the protocol spec is in charge of saying which PDUs are allowed or not. Section 6.2 notes that servers MUST NOT respond to GET requests for sensitvie resources. Why are account resources the only sensitive ones? Are authorizations not sensitive? 
Or are those considered to fall under the umbrella of \"account resources\" (Section 7.1 seems pretty clear that they do not)? Section 7.1.1 discusses how the server can include a caaIdentities element in its directory metadata; does this (or anything else) need to be integrity protected by anything stronger than the Web PKI cert authenticating the HTTPS connection? It seems that a bogus caaIdentities value could lead to an unfortunate DoS in some cases. I am also a bit uncertain if the document is internally consistent about whether one challenge verification suffices or there can be cases when multiple challenge verifications are required for a successful authorization. I attmpted to note relevant snippets of the text in my comments on Section 7.1.4. I also have some important substantive comments in the section-by-section COMMENTS, since they would not in and of themselves block publication. COMMENT: This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): 2 K60BWPrMQG9SDxBDSxtSw 3 IXVHDyxIRGcTE0VSblhPzw 4 JHb54aTKTXBWQOzGYkt9A Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Section 1 Different types of certificates reflect different kinds of CA verification of information about the certificate subject. \"Domain Validation\" (DV) certificates are by far the most common type. The only validation the CA is required to perform in the DV issuance process is to verify that the requester has effective control of the domain. Can we get an (informative) ref for the \"required\"/requirements? 
Section 6.1 W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. Section 6.2 IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? Section 6.3 For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? Section 6.4 In order to protect ACME resources from any possible replay attacks, ACME requests have a mandatory anti-replay mechanism. We don't seem to actually define what an \"ACME request\" is that I can see. >From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) Section 6.6 IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. [...] Servers MUST NOT use the ACME URN namespace Section 9.6 for errors other than the standard types. \"standard\" as determined by inclusion in this document, or in the IANA registry? Section 7.1 The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Section 7.1.1 [...] It is a JSON object, whose field names are drawn from the following table and whose values are the corresponding URLs. Er, from the IANA registry, right? The following metadata items are defined, all of which are OPTIONAL: Maybe also refer to the registry? Section 7.1.2 IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) Section 7.1.2.1 IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? Section 7.1.4 challenges (required, array of objects): For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For final authorizations (in the \"valid\" or \"invalid\" state), the challenges that were used. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. This leaves me slightly confused. A final authorization can have multiple challenges. 
But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. wildcard (optional, boolean): For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard prefix this field MUST be present, and true. Is there a difference between false and absent? Section 7.3 A client creates a new account with the server by sending a POST request to the server's new-account URL. The body of the request is a stub account object optionally containing the \"contact\" and \"termsOfServiceAgreed\" fields. Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) Section 7.3.2 It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. Section 7.3.5 The server MAY require a value for the \"externalAccountBinding\" field to be present in \"newAccount\" requests. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? To enable ACME account binding, the CA operating the ACME server needs to provide the ACME client with a MAC key and a key identifier, using some mechanism outside of ACME. This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. [...] The payload of this JWS is the account key being registered, in JWK form. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. 
If a CA requires that new-account requests contain an \"externalAccountBinding\" field, then it MUST provide the value \"true\" in the \"externalAccountRequired\" subfield of the \"meta\" field in the directory object. If the CA receives a new-account request without [...] nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. Section 7.3.7 IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section Section 7.4 Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of URL that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? Section 7.4.1 Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". Section 7.5 \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. Section 7.5.1 To do this, the client updates the authorization object received from the server by filling in any required information in the elements of the \"challenges\" dictionary. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. 
Section 8 IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? Section 8.1 [...] A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, [...] I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. Section 8.3 I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most cases, the operating system provides an appropriate facility [...] Should these prove unsatisfactory, [RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. Verify that the body of the response is well-formed key authorization. The server SHOULD ignore whitespace characters at the end of the body. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Section 8.4 Should this \"token\" description include the same text about entropy as for the HTTP challenge? Section 9.7.1 There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client are accepted when posted to the account URL, e.g., for account deactivation. Section 10.1 Can there be overlap between the \"validation server\" function and the \"ACME client\" function? Section 10.2 [...] The key authorization reflects the account public key, is provided to the server in the validation response over the validation channel and signed afterwards by the corresponding private key in the challenge response over the ACME channel. I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitigate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, independent of ACME. Section 10.4 Some server implementations include information from the validation server's response (in order to facilitate debugging). Such Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. Section 11.1 IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise).
I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. Section 11.3 The http-01, and dns-01 validation methods mandate the usage of a nit: spurious comma. [...] Secondly, the entropy requirement prevents ACME clients from implementing a \"naive\" validation server that automatically replies to challenges without participating in the creation of the initial authorization request. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request. Thanks for addressing my Discuss and other points; as promised, I'm switching to Yes. (But I support Adam's discuss and look forward to seeing it come to a close as well.) This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Can we get an (informative) ref for the \"required\"/requirements? W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? For servers following the \"SHOULD ... 
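To make the Section 11.3 comment above concrete, here is roughly the kind of "naive" responder it has in mind: a script that answers any request under /.well-known/acme-challenge/ with the requested token plus a single, fixed account-key thumbprint. This is a hypothetical sketch illustrating the reviewer's point, not something the draft recommends; the thumbprint value is a placeholder.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical, fixed base64url SHA-256 JWK thumbprint of one ACME account key.
    ACCOUNT_THUMBPRINT = "placeholder-account-key-thumbprint"
    PREFIX = "/.well-known/acme-challenge/"

    class NaiveChallengeResponder(BaseHTTPRequestHandler):
        def do_GET(self):
            if not self.path.startswith(PREFIX):
                self.send_error(404)
                return
            token = self.path[len(PREFIX):]
            body = f"{token}.{ACCOUNT_THUMBPRINT}".encode("ascii")
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Port 80 usually requires elevated privileges.
        HTTPServer(("", 80), NaiveChallengeResponder).serve_forever()

Such a responder only needs the account key's thumbprint and knows nothing about any individual authorization request, which is exactly the observation the comment makes.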
string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? We don't seem to actually define what an \"ACME request\" is that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. \"standard\" as determined by inclusion in this document, or in the IANA registry? The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Er, from the IANA registry, right? Maybe also refer to the registry? IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. Is there a difference between false and absent? Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. 
Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section 7.3.2. Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of www.example.com that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? 
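On the opaque-account-URL question raised above (sequential counter versus something non-guessable), a server could derive the path component from a CSPRNG or a random UUID. A minimal sketch; the URL prefix is a placeholder, not one of the draft's example URLs.

    import secrets, uuid

    BASE = "https://acme.example.com/acme/acct/"  # hypothetical prefix

    def new_account_url() -> str:
        # 128 bits from a CSPRNG, base64url-encoded; uuid.uuid4() would do as well.
        return BASE + secrets.token_urlsafe(16)

    print(new_account_url())
    print(BASE + str(uuid.uuid4()))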
Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most case, the operating system provides an appropriate facility [...] Should these prove unsatisfactory,[RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Should this \"token\" description include the same text about entropy as for the HTTP challenge? There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client *are* accepted when posted to the account URL, e.g., for account deactivation. Can there be overlap between the \"validation server\" function and the \"ACME client\" function? I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. nit: spurious comma. 
IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request.", "new_text": "\"status\" field. For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For valid authorizations, the challenge that was validated. For invalid authorizations, the challenge that was attempted and failed. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard"} {"id": "q-en-acme-4035c793ae2283d0753719ef87d6e9ef74e10581d3a78e6117e145dfff4fdd22", "old_text": "Authorization objects are created in the \"pending\" state. If one of the challenges listed in the authorization transitions to the \"valid\" state, then the authorization also changes to the \"valid\" state. If there is an error while the authorization is still pending, then the authorization transitions to the \"invalid\" state. Once the authorization is in the valid state, it can expire (\"expired\"), be deactivated by the client (\"deactivated\", see deactivating-an-", "comments": "URL URL\nThanks NAME looks fine, just adding clarifications. Benjamin Kaduk has entered the following ballot position for draft-ietf-acme-acme-14: Discuss When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL DISCUSS: This is a great thing to have, and I intend to eventually ballot Yes, but I do have some questions that may require further discussion before this document is approved. It looks like the server returns an unauthenticated \"badSignatureAlgorithm\" error when the client sends a JWS using an unsupported signature algorithm (Section 6.2). What prevents an active attacker from performing a downgrade attack on the signature algorithm used? Similarly, since we include in the threat model a potentially hostile CDN/MitM between the ACME client and ACME server, can that attacker strip a success response and replace it with a badNonce error, causing the client to retry (and thus duplicate the request processing on the server)? I am not an ART AD, but there is not yet an internationalization directorate, and seeing statements like \"inputs for digest computations MUST be encoded using the UTF-8 character set\" (Section 5) without additional discussion of normalization and/or what the canonical form for the digest input is makes me nervous. Has sufficient internationalization review been performed to ensure that there are no latent issues in this space? Section 6.1 has text discussing TLS 1.3's 0-RTT mode. 
If this text is intended to be a profile that defines/allows the use of TLS 1.3 0-RTT data for the ACME protocol, I think you need to be more specific and say something like \"MAY allow clients to send early data (0-RTT); there are no ACME-specific restrictions on which types of requests are permitted in 0-RTT\", since the runtime configuration is just 0-RTT yes/no, and the protocol spec is in charge of saying which PDUs are allowed or not. Section 6.2 notes that servers MUST NOT respond to GET requests for sensitvie resources. Why are account resources the only sensitive ones? Are authorizations not sensitive? Or are those considered to fall under the umbrella of \"account resources\" (Section 7.1 seems pretty clear that they do not)? Section 7.1.1 discusses how the server can include a caaIdentities element in its directory metadata; does this (or anything else) need to be integrity protected by anything stronger than the Web PKI cert authenticating the HTTPS connection? It seems that a bogus caaIdentities value could lead to an unfortunate DoS in some cases. I am also a bit uncertain if the document is internally consistent about whether one challenge verification suffices or there can be cases when multiple challenge verifications are required for a successful authorization. I attmpted to note relevant snippets of the text in my comments on Section 7.1.4. I also have some important substantive comments in the section-by-section COMMENTS, since they would not in and of themselves block publication. COMMENT: This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): 2 K60BWPrMQG9SDxBDSxtSw 3 IXVHDyxIRGcTE0VSblhPzw 4 JHb54aTKTXBWQOzGYkt9A Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. 
Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Section 1 Different types of certificates reflect different kinds of CA verification of information about the certificate subject. \"Domain Validation\" (DV) certificates are by far the most common type. The only validation the CA is required to perform in the DV issuance process is to verify that the requester has effective control of the domain. Can we get an (informative) ref for the \"required\"/requirements? Section 6.1 W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. Section 6.2 IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? Section 6.3 For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? Section 6.4 In order to protect ACME resources from any possible replay attacks, ACME requests have a mandatory anti-replay mechanism. We don't seem to actually define what an \"ACME request\" is that I can see. >From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) Section 6.6 IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. [...] Servers MUST NOT use the ACME URN namespace Section 9.6 for errors other than the standard types. \"standard\" as determined by inclusion in this document, or in the IANA registry? Section 7.1 The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Section 7.1.1 [...] It is a JSON object, whose field names are drawn from the following table and whose values are the corresponding URLs. Er, from the IANA registry, right? The following metadata items are defined, all of which are OPTIONAL: Maybe also refer to the registry? Section 7.1.2 IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) 
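For the Section 6.2 question above about recognizing MAC-based algorithms: absent an explicit registry flag, an implementation would presumably have to carry its own list of the symmetric JWS algorithms from RFC 7518. A hypothetical server-side check along those lines (the error naming is only suggestive of the draft's badSignatureAlgorithm type):

    MAC_ALGORITHMS = {"HS256", "HS384", "HS512"}  # symmetric (HMAC) JWS algs from RFC 7518

    def check_request_alg(protected_header: dict) -> str:
        alg = protected_header.get("alg")
        if alg is None or alg == "none" or alg in MAC_ALGORITHMS:
            raise ValueError("badSignatureAlgorithm")
        return alg

    print(check_request_alg({"alg": "ES256"}))  # accepted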
Section 7.1.2.1 IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? Section 7.1.4 challenges (required, array of objects): For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For final authorizations (in the \"valid\" or \"invalid\" state), the challenges that were used. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. wildcard (optional, boolean): For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard prefix this field MUST be present, and true. Is there a difference between false and absent? Section 7.3 A client creates a new account with the server by sending a POST request to the server's new-account URL. The body of the request is a stub account object optionally containing the \"contact\" and \"termsOfServiceAgreed\" fields. Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) Section 7.3.2 It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. Section 7.3.5 The server MAY require a value for the \"externalAccountBinding\" field to be present in \"newAccount\" requests. 
All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? To enable ACME account binding, the CA operating the ACME server needs to provide the ACME client with a MAC key and a key identifier, using some mechanism outside of ACME. This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. [...] The payload of this JWS is the account key being registered, in JWK form. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. If a CA requires that new-account requests contain an \"externalAccountBinding\" field, then it MUST provide the value \"true\" in the \"externalAccountRequired\" subfield of the \"meta\" field in the directory object. If the CA receives a new-account request without [...] nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. Section 7.3.7 IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section Section 7.4 Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of URL that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? 
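On the question just above about which order statuses can still appear after finalize: whatever the answer, a client in practice just polls the order URL until it reaches a terminal state. A rough sketch; the fetch_order callable is caller-supplied (how it authenticates and fetches the order object is out of scope here).

    import time

    def wait_for_order(fetch_order, interval=2.0, timeout=120.0):
        # Poll an ACME order until it leaves the non-terminal states.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            order = fetch_order()
            if order.get("status") not in ("pending", "ready", "processing"):
                return order  # "valid", "invalid", or anything unexpected
            time.sleep(interval)
        raise TimeoutError("order did not reach a terminal state")

    # Example with a canned sequence of order objects standing in for real fetches.
    responses = iter([{"status": "processing"}, {"status": "valid"}])
    print(wait_for_order(lambda: next(responses), interval=0.0))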
Section 7.4.1 Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". Section 7.5 \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. Section 7.5.1 To do this, the client updates the authorization object received from the server by filling in any required information in the elements of the \"challenges\" dictionary. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. Section 8 IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? Section 8.1 [...] A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, [...] I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. Section 8.3 I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most case, the operating system provides an appropriate facility [...] Should these prove unsatisfactory, [RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. Verify that the body of the response is well-formed key authorization. The server SHOULD ignore whitespace characters at the end of the body. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Section 8.4 Should this \"token\" description include the same text about entropy as for the HTTP challenge? Section 9.7.1 There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client are accepted when posted to the account URL, e.g., for account deactivation. Section 10.1 Can there be overlap between the \"validation server\" function and the \"ACME client\" function? Section 10.2 [...] The key authorization reflects the account public key, is provided to the server in the validation response over the validation channel and signed afterwards by the corresponding private key in the challenge response over the ACME channel. I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. 
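Since the key-authorization construction comes up repeatedly above, a small sketch of how it is typically computed may help: the RFC 7638 thumbprint of the account JWK is appended to the token, and for dns-01 the TXT record value is the base64url SHA-256 digest of that string. The JWK coordinates and token below are placeholders, not a real key or a value from the draft's examples.

    import base64, hashlib, json

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

    def jwk_thumbprint(jwk: dict) -> str:
        # RFC 7638: required members only, lexicographic order, no whitespace.
        required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}[jwk["kty"]]
        canonical = json.dumps({k: jwk[k] for k in required},
                               separators=(",", ":"), sort_keys=True)
        return b64url(hashlib.sha256(canonical.encode("utf-8")).digest())

    account_jwk = {"kty": "EC", "crv": "P-256", "x": "placeholder-x", "y": "placeholder-y"}
    token = "hypothetical-challenge-token"

    key_authorization = token + "." + jwk_thumbprint(account_jwk)
    dns01_txt_value = b64url(hashlib.sha256(key_authorization.encode("utf-8")).digest())

    print(key_authorization)
    print(dns01_txt_value)

For http-01 the key authorization string itself is served; for dns-01 only the digest is published, so the thumbprint is not exposed in DNS.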
Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Section 10.4 Some server implementations include information from the validation server's response (in order to facilitate debugging). Such Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. Section 11.1 IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. Section 11.3 The http-01, and dns-01 validation methods mandate the usage of a nit: spurious comma. [...] Secondly, the entropy requirement prevents ACME clients from implementing a \"naive\" validation server that automatically replies to challenges without participating in the creation of the initial authorization request. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request. Thanks for addressing my Discuss and other points; as promised, I'm switching to Yes. (But I support Adam's discuss and look forward to seeing it come to a close as well.) This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. 
Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Can we get an (informative) ref for the \"required\"/requirements? W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? We don't seem to actually define what an \"ACME request\" is that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. \"standard\" as determined by inclusion in this document, or in the IANA registry? The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Er, from the IANA registry, right? Maybe also refer to the registry? IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. Is there a difference between false and absent? 
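On the nonce-scoping questions above: however widely a server scopes its nonces, the client-side mechanics are simply "take the latest Replay-Nonce header and put it in the next JWS protected header". A sketch using only the standard library; the newNonce URL is a placeholder.

    import urllib.request

    NEW_NONCE_URL = "https://acme.example.com/acme/new-nonce"  # hypothetical

    def fresh_nonce(url: str = NEW_NONCE_URL) -> str:
        # HEAD to the newNonce resource; the nonce comes back in Replay-Nonce.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            return resp.headers["Replay-Nonce"]

    def protected_header(alg: str, kid: str, url: str, nonce: str) -> dict:
        # The nonce is echoed in the protected header of the next POST.
        return {"alg": alg, "kid": kid, "nonce": nonce, "url": url}

A client would normally also replace its stored nonce with the Replay-Nonce returned on every response, including error responses, rather than calling newNonce each time.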
Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. 
If they are one and the same, please include some body text to this effect as is done for account update in Section 7.3.2. Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of www.example.com that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most case, the operating system provides an appropriate facility [...] Should these prove unsatisfactory,[RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Should this \"token\" description include the same text about entropy as for the HTTP challenge? There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client *are* accepted when posted to the account URL, e.g., for account deactivation. Can there be overlap between the \"validation server\" function and the \"ACME client\" function? I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. 
Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. nit: spurious comma. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request.", "new_text": "Authorization objects are created in the \"pending\" state. If one of the challenges listed in the authorization transitions to the \"valid\" state, then the authorization also changes to the \"valid\" state. If the client attempts to fulfill a challenge and fails, or if there is an error while the authorization is still pending, then the authorization transitions to the \"invalid\" state. Once the authorization is in the valid state, it can expire (\"expired\"), be deactivated by the client (\"deactivated\", see deactivating-an-"} {"id": "q-en-acme-4035c793ae2283d0753719ef87d6e9ef74e10581d3a78e6117e145dfff4fdd22", "old_text": "can't obtain illegitimate authorization by acting as an ACME client (legitimately, in terms of the protocol). 10.2. ACME allows anyone to request challenges for an identifier by", "comments": "URL URL\nThanks NAME looks fine, just adding clarifications. Benjamin Kaduk has entered the following ballot position for draft-ietf-acme-acme-14: Discuss When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL DISCUSS: This is a great thing to have, and I intend to eventually ballot Yes, but I do have some questions that may require further discussion before this document is approved. It looks like the server returns an unauthenticated \"badSignatureAlgorithm\" error when the client sends a JWS using an unsupported signature algorithm (Section 6.2). What prevents an active attacker from performing a downgrade attack on the signature algorithm used? Similarly, since we include in the threat model a potentially hostile CDN/MitM between the ACME client and ACME server, can that attacker strip a success response and replace it with a badNonce error, causing the client to retry (and thus duplicate the request processing on the server)? I am not an ART AD, but there is not yet an internationalization directorate, and seeing statements like \"inputs for digest computations MUST be encoded using the UTF-8 character set\" (Section 5) without additional discussion of normalization and/or what the canonical form for the digest input is makes me nervous. 
Has sufficient internationalization review been performed to ensure that there are no latent issues in this space? Section 6.1 has text discussing TLS 1.3's 0-RTT mode. If this text is intended to be a profile that defines/allows the use of TLS 1.3 0-RTT data for the ACME protocol, I think you need to be more specific and say something like \"MAY allow clients to send early data (0-RTT); there are no ACME-specific restrictions on which types of requests are permitted in 0-RTT\", since the runtime configuration is just 0-RTT yes/no, and the protocol spec is in charge of saying which PDUs are allowed or not. Section 6.2 notes that servers MUST NOT respond to GET requests for sensitvie resources. Why are account resources the only sensitive ones? Are authorizations not sensitive? Or are those considered to fall under the umbrella of \"account resources\" (Section 7.1 seems pretty clear that they do not)? Section 7.1.1 discusses how the server can include a caaIdentities element in its directory metadata; does this (or anything else) need to be integrity protected by anything stronger than the Web PKI cert authenticating the HTTPS connection? It seems that a bogus caaIdentities value could lead to an unfortunate DoS in some cases. I am also a bit uncertain if the document is internally consistent about whether one challenge verification suffices or there can be cases when multiple challenge verifications are required for a successful authorization. I attmpted to note relevant snippets of the text in my comments on Section 7.1.4. I also have some important substantive comments in the section-by-section COMMENTS, since they would not in and of themselves block publication. COMMENT: This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): 2 K60BWPrMQG9SDxBDSxtSw 3 IXVHDyxIRGcTE0VSblhPzw 4 JHb54aTKTXBWQOzGYkt9A Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. 
Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Section 1 Different types of certificates reflect different kinds of CA verification of information about the certificate subject. \"Domain Validation\" (DV) certificates are by far the most common type. The only validation the CA is required to perform in the DV issuance process is to verify that the requester has effective control of the domain. Can we get an (informative) ref for the \"required\"/requirements? Section 6.1 W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. Section 6.2 IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? Section 6.3 For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? Section 6.4 In order to protect ACME resources from any possible replay attacks, ACME requests have a mandatory anti-replay mechanism. We don't seem to actually define what an \"ACME request\" is that I can see. >From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) Section 6.6 IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. [...] Servers MUST NOT use the ACME URN namespace Section 9.6 for errors other than the standard types. \"standard\" as determined by inclusion in this document, or in the IANA registry? Section 7.1 The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Section 7.1.1 [...] It is a JSON object, whose field names are drawn from the following table and whose values are the corresponding URLs. Er, from the IANA registry, right? The following metadata items are defined, all of which are OPTIONAL: Maybe also refer to the registry? Section 7.1.2 IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) 
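For reference, the kind of account object this "contact" comment is about carries the contact URLs next to the account status; a small sketch of such an object (all values illustrative):

   {
     "status": "valid",
     "contact": [
       "mailto:admin@example.org"
     ],
     "termsOfServiceAgreed": true,
     "orders": "https://example.com/acme/acct/1/orders"
   }

Nothing in the object itself says whether a given contact URL is meant for automated use or for reaching a human, which is the ambiguity being raised here.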
Section 7.1.2.1 IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? Section 7.1.4 challenges (required, array of objects): For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For final authorizations (in the \"valid\" or \"invalid\" state), the challenges that were used. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. wildcard (optional, boolean): For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard prefix this field MUST be present, and true. Is there a difference between false and absent? Section 7.3 A client creates a new account with the server by sending a POST request to the server's new-account URL. The body of the request is a stub account object optionally containing the \"contact\" and \"termsOfServiceAgreed\" fields. Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) Section 7.3.2 It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. Section 7.3.5 The server MAY require a value for the \"externalAccountBinding\" field to be present in \"newAccount\" requests. 
All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? To enable ACME account binding, the CA operating the ACME server needs to provide the ACME client with a MAC key and a key identifier, using some mechanism outside of ACME. This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. [...] The payload of this JWS is the account key being registered, in JWK form. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. If a CA requires that new-account requests contain an \"externalAccountBinding\" field, then it MUST provide the value \"true\" in the \"externalAccountRequired\" subfield of the \"meta\" field in the directory object. If the CA receives a new-account request without [...] nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. Section 7.3.7 IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section Section 7.4 Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of URL that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? 
Section 7.4.1 Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". Section 7.5 \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. Section 7.5.1 To do this, the client updates the authorization object received from the server by filling in any required information in the elements of the \"challenges\" dictionary. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. Section 8 IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? Section 8.1 [...] A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, [...] I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. Section 8.3 I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most case, the operating system provides an appropriate facility [...] Should these prove unsatisfactory, [RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. Verify that the body of the response is well-formed key authorization. The server SHOULD ignore whitespace characters at the end of the body. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Section 8.4 Should this \"token\" description include the same text about entropy as for the HTTP challenge? Section 9.7.1 There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client are accepted when posted to the account URL, e.g., for account deactivation. Section 10.1 Can there be overlap between the \"validation server\" function and the \"ACME client\" function? Section 10.2 [...] The key authorization reflects the account public key, is provided to the server in the validation response over the validation channel and signed afterwards by the corresponding private key in the challenge response over the ACME channel. I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. 
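To make the object under discussion concrete: the key authorization itself is an unsigned string, the challenge token concatenated with the account key's RFC 7638 thumbprint. A minimal sketch of its construction in Python (EC account keys only for brevity; the function and variable names are illustrative, not part of the protocol):

   import base64, hashlib, json

   def b64url(data: bytes) -> str:
       return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

   def key_authorization(token: str, account_jwk: dict) -> str:
       # RFC 7638 thumbprint: SHA-256 over the required JWK members,
       # serialized with keys in lexicographic order and no whitespace.
       required = {k: account_jwk[k] for k in ("crv", "kty", "x", "y")}
       thumbprint = hashlib.sha256(
           json.dumps(required, separators=(",", ":"), sort_keys=True).encode("utf-8")
       ).digest()
       return token + "." + b64url(thumbprint)

For http-01 this string is served verbatim at the well-known URL; for dns-01 only the base64url-encoded SHA-256 digest of it is placed in the TXT record. The binding to the account comes from the thumbprint embedded in the string rather than from a signature over it, which is what the comment above is pointing at.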
Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Section 10.4 Some server implementations include information from the validation server's response (in order to facilitate debugging). Such Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. Section 11.1 IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. Section 11.3 The http-01, and dns-01 validation methods mandate the usage of a nit: spurious comma. [...] Secondly, the entropy requirement prevents ACME clients from implementing a \"naive\" validation server that automatically replies to challenges without participating in the creation of the initial authorization request. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request. Thanks for addressing my Discuss and other points; as promised, I'm switching to Yes. (But I support Adam's discuss and look forward to seeing it come to a close as well.) This document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage. For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. 
Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Can we get an (informative) ref for the \"required\"/requirements? W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? We don't seem to actually define what an \"ACME request\" is that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. \"standard\" as determined by inclusion in this document, or in the IANA registry? The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Er, from the IANA registry, right? Maybe also refer to the registry? IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. Is there a difference between false and absent? 
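For reference, a sketch of an authorization object of the kind being discussed, with a single dns-01 challenge and the wildcard flag set (all identifiers, URLs, and token values are illustrative):

   {
     "identifier": { "type": "dns", "value": "example.org" },
     "status": "pending",
     "expires": "2025-01-01T00:00:00Z",
     "wildcard": true,
     "challenges": [
       {
         "type": "dns-01",
         "url": "https://example.com/acme/chall/1234",
         "status": "pending",
         "token": "DGyRejmCefe7v4NfDGDKfA"
       }
     ]
   }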
Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. This is presumably my fault and not the document's, but I had to read this a few time to bind it as the ACME account key, and not the external MAC key. nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behal of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections. IMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. 
If they are one and the same, please include some body text to this effect as is done for account update in Section 7.3.2. Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of www.example.com that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order? Elsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\". \"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it. IMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field? I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most case, the operating system provides an appropriate facility [...] Should these prove unsatisfactory,[RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. nit: \"a well-formed\" Can we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so? Should this \"token\" description include the same text about entropy as for the HTTP challenge? There is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client *are* accepted when posted to the account URL, e.g., for account deactivation. Can there be overlap between the \"validation server\" function and the \"ACME client\" function? I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. 
Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. nit: spurious comma. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request.", "new_text": "can't obtain illegitimate authorization by acting as an ACME client (legitimately, in terms of the protocol). ACME does not protect against other types of abuse by a MitM on the ACME channel. For example, such an attacker could send a bogus \"badSignatureAlgorithm\" error response to downgrade a client to the lowest-quality signature algorithm that the server supports. A MitM that is present on all connections (such as a CDN), can cause denial- of-service conditions in a variety of ways. 10.2. ACME allows anyone to request challenges for an identifier by"} {"id": "q-en-acme-1765c416a346afce98b0955aa8cbaaac45a6ea5156a1f87d123b4a2122056962", "old_text": "9.1. A file of this type contains one or more certificates encoded with the PEM textual encoding, according to RFC 7468 RFC7468. In order to provide easy interoperation with TLS, the first certificate MUST be an end-entity certificate. Each following certificate SHOULD directly certify the one preceding it. Because certificate validation requires that trust anchors be distributed independently, a certificate that represents a trust anchor MAY be omitted from the chain, provided that supported peers are known to possess any omitted certificates. The \"Media Types\" registry should be updated with the following additional value:", "comments": "In addition to forbidding explanatory text, this PR restricts the PEM-certificate-chain structure to the \"strict\" production defined in RFC 7468, Figure 3.\ns/Figure 3 of RFC 7468/Section 3 of RFC 7468/", "new_text": "9.1. A file of this type contains one or more certificates encoded with the PEM textual encoding, according to RFC 7468 RFC7468. The textual encoding of certificates in this file MUST use the strict encoding and MUST NOT include explanatory text. The ABNF for this format is as follows, where \"stricttextualmsg\" and \"eol\" are as defined in Section 3 of RFC 7468: In order to provide easy interoperation with TLS, the first certificate MUST be an end-entity certificate. Each following certificate SHOULD directly certify the one preceding it. Because certificate validation requires that trust anchors be distributed independently, a certificate that represents a trust anchor MAY be omitted from the chain, provided that supported peers are known to possess any omitted certificates. 
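As an illustration of the ordering rule above, a two-certificate application/pem-certificate-chain body would be laid out as follows (certificate contents elided):

   -----BEGIN CERTIFICATE-----
   [end-entity certificate, base64]
   -----END CERTIFICATE-----
   -----BEGIN CERTIFICATE-----
   [intermediate certificate that issued the one above, base64]
   -----END CERTIFICATE-----

with no comments or other explanatory text between or around the encoded certificates, per the strict encoding requirement.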
The \"Media Types\" registry should be updated with the following additional value:"} {"id": "q-en-acme-450fb8aa84a32be59953bcca021e70ba0ad6e3edda98c63b0de068766228d495", "old_text": "anchors. Clients can fetch these alternates and use their own heuristics to decide which is optimal. An ACME client MAY attempt to fetch the certificate with a GET request. If the server does not allow GET requests for certificate resources, then it will return an error as described in post-as-get. On receiving such an error, the client SHOULD fall back to a POST-as- GET request. A certificate resource represents a single, immutable certificate. If the client wishes to obtain a renewed certificate, the client initiates a new order process to request one.", "comments": "Thanks to Felipe Gasper for .\nMerging despite spurious CI failure, since it seems pretty clear that the CI failure is spurious. Hi all, Draft 16 has this: An ACME client MAY attempt to fetch the certificate with a GET request. If the server does not allow GET requests for certificate resources, then it will return an error as described in Section 6.3 . On receiving such an error, the client SHOULD fall back to a POST-as- GET request. From what I can discern from this list\u2019s most recent posts on the topic, I\u2019m under the impression that plain GET for certificates is not to be part of the principal ACME spec; should the above, then, be removed? -FG", "new_text": "anchors. Clients can fetch these alternates and use their own heuristics to decide which is optimal. A certificate resource represents a single, immutable certificate. If the client wishes to obtain a renewed certificate, the client initiates a new order process to request one."} {"id": "q-en-acme-c87a480ce29fa5604bd3e11ed069456a1e426f755bc6c47e9b5ead5679f4214e", "old_text": "with an array of supported \"alg\" values. See errors for more details on the structure of error responses. Because client requests in ACME carry JWS objects in the Flattened JSON Serialization, they must have the Content-Type header field set to \"application/jose+json\". If a request does not meet this", "comments": "NAME Sure. I've just posted this: URL\nLGTMThe bullet points improve readability quite a bit, thanks NAME !also not purely editorial, but okay. I realize it's very late for making non-editorial changes to draft-ietf-acme-acme, but I'd like to propose adding a new badPublicKey error. This error would be returned by the server whenever it does not support, or wishes to reject, a \"jwk\" public key supplied in a client's request. Proposed text: URL The 'array of supported \"alg\" values' in a badSignatureAlgorithm response is useful, but ISTM that it doesn't provide detailed enough information to assist a client in generating a suitable public key. (If the consensus is that it's too late to add a new error type, then my alternative proposal will be to use \"malformed\" instead of adding \"badPublicKey\", but keep the rest of PR 478 as is; I think it's a good idea to call out the need for a server to sanity check each client-supplied public key). Rob Stradling Senior Research & Development Scientist Sectigo Limited", "new_text": "with an array of supported \"alg\" values. See errors for more details on the structure of error responses. If the server supports the signature algorithm \"alg\" but either does not support or chooses to reject the public key \"jwk\", then the server MUST return an error with status code 400 (Bad Request) and type \"urn:ietf:params:acme:error:badPublicKey\". 
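Such a rejection would be carried in an ordinary ACME problem document; a sketch of the response (the detail string is illustrative):

   HTTP/1.1 400 Bad Request
   Content-Type: application/problem+json

   {
     "type": "urn:ietf:params:acme:error:badPublicKey",
     "detail": "RSA modulus is too small (512 bits)"
   }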
The problem document detail SHOULD describe the reason for rejecting the public key; some example reasons are: \"alg\" is \"RS256\" but the modulus \"n\" is too small (e.g., 512-bit) \"alg\" is \"ES256\" but \"jwk\" does not contain a valid P-256 public key \"alg\" is \"EdDSA\" and \"crv\" is \"Ed448\", but the server only supports \"EdDSA\" with \"Ed25519\" the corresponding private key is known to have been compromised Because client requests in ACME carry JWS objects in the Flattened JSON Serialization, they must have the Content-Type header field set to \"application/jose+json\". If a request does not meet this"} {"id": "q-en-acme-b17794c36b0d6aa244e189a99db30250aaec098bb58532e1e7a065c62433d722", "old_text": "process can be represented and performed by Internet protocols with no out-of-band human intervention. When an operator deploys a current HTTPS server, it generally prompts him to generate a self-signed certificate. When an operator deploys an ACME-compatible web server, the experience would be something like this: The ACME client prompts the operator for the intended domain name(s) that the web server is to stand for.", "comments": "Avoid changing the \"him\" in the MitM paragraph, because of the inherent notion of a Man in the Middle, which is a larger discussion.\nLGTM. Thanks for the elegant fix.", "new_text": "process can be represented and performed by Internet protocols with no out-of-band human intervention. When deploying a current HTTPS server, an operator generally gets a prompt to generate a self-signed certificate. When an operator deploys an ACME-compatible web server, the experience would be something like this: The ACME client prompts the operator for the intended domain name(s) that the web server is to stand for."} {"id": "q-en-acme-4dc4fae5b2b73b8e5ce76e7f794e91bce2705175888b40411891dfdd89395d62", "old_text": "certificate was issued, and one with relation \"author\" to indicate the registration under which this certificate was issued. The server MAY include an Expires header as a hint to the client about when to renew the certificate. (Of course, the real expiration of the certificate is controlled by the notAfter time in the certificate itself.) If the CA participates in Certificate Transparency (CT) RFC6962, then they may want to provide the client with a Signed Certificate Timestamp (SCT) that can be used to prove that a certificate was", "comments": "For discussion.\nSee also: URL\nI proposed on the mailing list to remove implicit renewal some time ago, and it received some support. Creating this issue to track it. All discussion should be done on the mailing list.", "new_text": "certificate was issued, and one with relation \"author\" to indicate the registration under which this certificate was issued. If the CA participates in Certificate Transparency (CT) RFC6962, then they may want to provide the client with a Signed Certificate Timestamp (SCT) that can be used to prove that a certificate was"} {"id": "q-en-acme-4dc4fae5b2b73b8e5ce76e7f794e91bce2705175888b40411891dfdd89395d62", "old_text": "for a certificate using a Link relation header field with relation \"ct-sct\". A certificate resource always represents the most recent certificate issued for the name/key binding expressed in the CSR. If the CA allows a certificate to be renewed, then it publishes renewed versions of the certificate through the same certificate URI. 
Clients retrieve renewed versions of the certificate using a GET query to the certificate URI, which the server should then return in a 200 (OK) response. The server SHOULD provide a stable URI for each specific certificate in the Content-Location header field, as shown above. Requests to stable certificate URIs MUST always result in the same certificate. To avoid unnecessary renewals, the CA may choose not to issue a renewed certificate until it receives such a request (if it even allows renewal at all). In such cases, if the CA requires some time to generate the new certificate, the CA MUST return a 202 (Accepted) response, with a Retry-After header field that indicates when the new certificate will be available. The CA MAY include the current (non- renewed) certificate as the body of the response. Likewise, in order to prevent unnecessary renewal due to queries by parties other than the account key holder, certificate URIs should be structured as capability URLs W3C.WD-capability-urls-20140218. From the client's perspective, there is no difference between a certificate URI that allows renewal and one that does not. If the client wishes to obtain a renewed certificate, and a GET request to the certificate URI does not yield one, then the client may initiate a new-certificate transaction to request one. 6.6.", "comments": "For discussion.\nSee also: URL\nI proposed on the mailing list to remove implicit renewal some time ago, and it received some support. Creating this issue to track it. All discussion should be done on the mailing list.", "new_text": "for a certificate using a Link relation header field with relation \"ct-sct\". A certificate resource represents a single, immutable certificate. If the client wishes to obtain a renewed certificate, the client initiates a new-certificate transaction to request one. Because certificate resources are immutable once issuance is complete, the server MAY enable the caching of the resource by adding Expires and Cache-Control headers specifying a point in time in the distant future. These headers have no relation to the certificate's period of validity. 6.6."} {"id": "q-en-acme-712c6ae80dba189e69afb486db9ddcb61a120e727068d30ace0c5ceb0e02c488", "old_text": "In addition, the client MAY advise the server at which IP the challenge is provisioned: An IPv4 or IPv6 address which, if given, MUST be included in the set of IP addresses to which the domain name resolves. If given, the server will connect to that specific IP address instead of arbitrarily choosing an IP from the set of A and AAAA records to which the domain name resolves. If the field is given but equal to \"\"peer\"\", the server MUST treat the field as if it contained the IP address from which the response was received. On receiving a response, the server MUST verify that the key authorization in the response matches the \"token\" value in the", "comments": "The format should be specified more clearly, by reference to a couple of RFCs The \"peer\" provision seems kind of awkward. For server implementations where the validation is done by a separate entity than the ACME interactions (like ), this means that the challenge no longer has all the information that the validation agent needs to do the validation. I'm inclined to drop it and put the burden on the client.", "new_text": "In addition, the client MAY advise the server at which IP the challenge is provisioned: An IPv4 or IPv6 address which, in dotted decimal form or RFC4291 form, respectively. 
If given, this address MUST be included in the set of IP addresses to which the domain name resolves. If given, the server SHOULD connect to that specific IP address instead of arbitrarily choosing an IP from the set of A and AAAA records to which the domain name resolves. On receiving a response, the server MUST verify that the key authorization in the response matches the \"token\" value in the"} {"id": "q-en-anima-bootstrap-83fec0478b56ab73b87836a1f6f7c7bef6c8fad43a2a8446e728e5b41ec2c093", "old_text": "IANA is requested to register the following Service Names: 8. 8.1.", "comments": "(n.b., i was not able to \"make\" and test the change, due to external pyang dependencies of some form.", "new_text": "IANA is requested to register the following Service Names: 7.5. The IANA is requested to list the name \"masa\" in the MUD extensions registry defined in I-D.ietf-opsawg-mud. Its use is documented in mud-extension. 8. 8.1."} {"id": "q-en-api-drafts-07a29a2ad6465dc1c331fe8f30a88df8d2d6316becaabf8f8af82ef614f5106f", "old_text": "\"Connection Priority\" on one Connection does not change it on the other Connections in the same Connection Group. Message Properties set on a Connection also apply only to that Connection.", "comments": "Does this make sense outside of a protocol-specific context? I am aware that this is a different can of worms adding protocol specific parameters to clone, but this looks like a strange concept to add an Integer here that may have different side effects in QUIC and SCTP\u2026\nThat's a very good question. This feature request came up in the QuIC mapping part of the discussion in Vienna, see: URL Not sure what to do: keep it as it is, remove it, or add the protocol: assign: \"if protocol = x, then stream ID = y\" read: \"stream ID is y, and your protocol is x\" I have to say, I don't like this particular way, as it would complicate matters quite a bit. But how else to do it?\nAll of the multistreaming protocols we know about have an integer stream identifier. I suppose a future protocol could use some sort of binary blob for identifying streams (which are... pedantically... representable as arbitrary-length integers)\nInterim discussion: let's make Clone() and Initiate() take a group of protocol-specific parameters (optionally) which allow protocols like QUIC and SCTP to extend this.\nI just made a very minimalist update, where \"protocolSpecificProperties\" is only a parameter of Clone(). Why not list it as readable properties, and why not as a parameter of Initiate, as we discussed? What's the point of saying \"there can be protocol-specific properties which are currently not defined, and they can be readable\"? This is for a future spec to say, when it proposes a new transport property. Regarding Initiate(), I think this should in fact not be a parameter to the Initiate Action itself, but a transport property to be supplied when creating a Preconnection. Again, this made me find that I would only write something like \"There can also be transport properties that are not yet defined\", which doesn't seem to be useful text. In conclusion, I found that only the Clone() Action needs this as a new parameter. Everything else is for future mapping documents. Thoughts?\nMany thanks! I incorporated your suggestion, and made a tiny edit to it (behaviour => behavior, underlaying => underlying, and \"stream or connection\" instead of \"stream/connection\").\n... 
combined with \"if the transport is mux'ing, else ...\"\nThanks \u2013 I guess we have a fairly good solution.", "new_text": "\"Connection Priority\" on one Connection does not change it on the other Connections in the same Connection Group. The optional \"connectionProperties\" parameter allows passing Transport Properties that control the behavior of the underlying stream or connection to be created, e.g., protocol-specific properties to request specific stream IDs for SCTP or QUIC. Message Properties set on a Connection also apply only to that Connection."} {"id": "q-en-api-drafts-f944d21e4cb1b9800a1bf71b64dc9591ea2e4c9ed425caecc64a9289ddfe9bf7", "old_text": "Data Unit: Datagram The Transport Services API mappings for UDP-Lite are identical to UDP. Properties that require checksum coverage are not supported by UDP-Lite, such as \"msgChecksumLen\", \"fullChecksumSend\", \"recvChecksumLen\", and \"fullChecksumRecv\". 10.5.", "comments": "Initially I thought the description for UDP-Lite was incomplete, but mapping back to the details of the API draft I realized it was incorrect. UDP-Lite supports the msgChecksumLen and recvChecksumLen properties. The API for theses primitives is a bit messy. As specified I think it means the following: UDP can use them to set full coverage or no checksum using the special value of 0. UDP-Lite can use them to set full coverage or a partial coverage. Setting the checksum to only cover the header, although supported by UDP-Lite, will not be possible as 0 is a special value. The special value of 0 can not be used by UDP-Lite, but I left this detail out of the pull request as I thought it got very detailed. It can be added if you think it is needed.", "new_text": "Data Unit: Datagram The Transport Services API mappings for UDP-Lite are identical to UDP. In addition, UDP-Lite supports the \"msgChecksumLen\" and \"recvChecksumLen\" Properties that allow an application to specify the minimum number of bytes in a message that need to be covered by a checksum. 10.5."} {"id": "q-en-api-drafts-7ef246f06216bc339222fba7f6580842960885c33305acfeb165dcab08d30f3e", "old_text": "of efficiency. This is not a strict requirement. The default is to not have this option. Notification of special errors (excessive retransmissions, ICMP error message arrival): This boolean property specifies whether an application considers it useful to be informed in case sent data was retransmitted more often than a certain threshold, or when an ICMP error message arrives. This property applies to Connections and Connection Groups. This is not a strict requirement. The default is to have this option. Control checksum coverage on sending or receiving: This boolean property specifies whether the application considers it useful to", "comments": "What makes these errors \"special\"? It seems a strange term to use, and the meaning is unclear.\nI didn't write this, but I think it was an effort to lump together, from minset, two quite different types of errors: 1) notification of excessive retransmissions; 2) notification of icmp error message arrival. Any suggestions for a common term?\nIf it's just those two types of errors, we can probably just list them...\nURL is what I did in minset :)\nso the suggestion is to split these out into two protocol selection properties?\nyes, I think so\nall right then, see\nlgtm", "new_text": "of efficiency. This is not a strict requirement. The default is to not have this option. 
Notification of excessive retransmissions: This boolean property specifies whether an application considers it useful to be informed in case sent data was retransmitted more often than a certain threshold. This property applies to Connections and Connection Groups. This is not a strict requirement. The default is to have this option. Notification of ICMP error message arrival: This boolean property specifies whether an application considers it useful to be informed when an ICMP error message arrives. This property applies to Connections and Connection Groups. This is not a strict requirement. The default is to have this option. Control checksum coverage on sending or receiving: This boolean property specifies whether the application considers it useful to"} {"id": "q-en-api-drafts-d1664269d48618fab54cc97c026535842c40acf02f7aeb26f7a062f4db701dc4", "old_text": "Remote Endpoint and the Local Endpoint, such as IP addresses resolved from a DNS hostname. 5. RFC-EDITOR: Please remove this section before publication.", "comments": "This is a reworking of the text from Section 4.2 of draft-trammell-taps-post-sockets-03, intended to help motivate the choice of Message-based APIs with framing for the transport services architecture. The level of detail here seems appropriate for the architecture document, rather than the API.\nWell written, NAME\nWhat's convenient about the framer? If I have messages of 100 bytes and the length is what defines my message, what's the point of requiring me to define a framer function that I give to the system? I just send, send, send, off go 3 100-byte messages. Hard to get more convenient than that :) I understand that the hook to a sender-side and receiver-side framer can allow you to offer a library that does framing when needed; I don't personally see this as a big win, but okay... and I understand that forcing apps to use a receiver side de-framer can, in principle, let us do cool things on the receiver side (accessing messages out of the bytestream). But for sending this just seems to be an uncomfortable add-on. I guess I shouldn't be arguing it as somehow it's again just syntactical sugar, and I said before that I don't care too much about syntax here. I get it, for the 100-byte messages the framer just won't do anything, and for sending I have to use Connection.Send(Framer.Frame(Message)) instead of Connection.Send(Message)... these calls could basically be renamed into SendMessage vs. SendSomethingHuge. I said before that I don't care too much about these syntactical bits. I guess it's just that tying \"Connection.Send(Message)\" to \"we assume your Message is big and terminates a TCP connection\" is really counterintuitive to me.\n... and I think I answered to NAME in the wrong place :( sorry github\nNAME what\u2019s convenient about the framer? In a C based implementation, very little. In a high level language implementation, however, it means I can and have it convert the high-level structured data into the on the wire format automatically, using a reusable library.\nno, no, that was right. Framers can take an object by reference or value of any type, and turn them into an appropriately framed octet array. Syntactic sugar, yes, and pointless in some languages. But have a look at URL -- here, you're pushing down the thing that converts application structures into wire structures into the API, so the API speaks to you in terms of your own structures (in Go, this is generally called a ; IIRC from Java is the terminology there). 
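To make the idea concrete, here is a minimal sketch of such a framer in Python; the class name, the frame/deframe split, and the length-prefixed JSON encoding are all illustrative choices, not anything defined by the drafts under discussion:

   import json
   import struct

   class JSONFramer:
       """Illustrative framer: turns application objects into
       length-prefixed JSON messages so that a byte-stream transport can
       carry discrete messages, and reassembles them on the receive side."""

       def frame(self, obj) -> bytes:
           payload = json.dumps(obj).encode("utf-8")
           return struct.pack("!I", len(payload)) + payload

       def deframe(self, buffer: bytearray):
           # Yield every complete message in the buffer; keep any trailing
           # partial message for the next call.
           while len(buffer) >= 4:
               (length,) = struct.unpack("!I", bytes(buffer[:4]))
               if len(buffer) < 4 + length:
                   break
               payload = bytes(buffer[4:4 + length])
               del buffer[:4 + length]
               yield json.loads(payload.decode("utf-8"))

On the sending side the connection would transmit frame(message); on the receiving side the transport system would feed received bytes through deframe() and hand each decoded object to the application as one Message.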
If your application-internal structures are already serialized byte arrays, you're right, there is zero win here.\nNAME Framers are still very very useful in C. You can use callbacks, function pointers, etc, to help lay our your data, and doing that in the context of the networking thread improves efficiency (beyond the simpler API).\nNAME even better, then!\nalso: so this whole line of argument is extremely perplexing to me, and indicative of some sort of fundamental failure to communicate on this particular point. absolutely nobody is saying that this interface implies \"your Message... terminates a TCP connection\" on the sender side. I can't see where you're getting the impression otherwise in the text. (When receiving, yes, if the receiver has no way to deframe the stream, you get a message that never ends, as a series of partial reads that looks just like a bunch of calls on a stream; i.e. when it fails, it fails to something that basically works exactly like sockets with callbacks.) I really suggest we table this whole discussion until London, because we don't seem to be making much progress in here, and ask you to trust me until then when I say that there is never, ever, ever a situation in which an implementation of this interface that was actually working would ever look at a single message on the sender side \"oh, cool, that's done, guess I'll send a FIN then!\", ever, because that would be an incredibly silly thing to do. (where's the blink tag in github markdown? i really need the blink tag for this)\nOn this: Sorry, there's a lack of context - I interpreted this out of private emails from Tommy, which seemed to pretty clearly say that, if you just call \"send\" without providing a framer, you do send a FIN - and I just found that awkward. But heck! Clearly we all don't want silliness to happen and we want a nice clean interface, and clearly we can quickly agree on this in London. So no worries, and see y'all soon!", "new_text": "Remote Endpoint and the Local Endpoint, such as IP addresses resolved from a DNS hostname. 4.2.3. While some transports expose a byte stream abstraction, most higher level protocols impose some structure onto that byte stream. That is, the higher level protocol operates in terms of messages, protocol data units (PDUs), rather than using unstructured sequences of bytes, with each message being processed in turn. Protocols are specified in terms of state machines acting on semantic messages, with parsing the byte stream into messages being a necessary annoyance, rather than a semantic concern. Accordingly, the Transport Services architecture exposes messages as the primary abstraction. Protocols that deal only in byte streams, such as TCP, represent their data in each direction as a single, long message. When framing protocols are placed on top of byte streams, the messages used in the API represent the framed messages within the stream. Providing a message-based abstraction also provides: the ability to associate deadlines with messages, for transports that care about timing; the ability to provide control of reliability, choosing what messages to retransmit in the event of packet loss, and how best to make use of the data that arrived; the ability to manage dependencies between messages, when some messages may not be delivered due to either packet loss or missing a deadline, in particular the ability to avoid (re-)sending data that relies on a previous transmission that was never received. 
All require explicit message boundaries, and application-level framing of messages, to be effective. Once a message is passed to the transport, it can not be cancelled or paused, but prioritization as well as lifetime and retransmission management will provide the protocol stack with all needed information to send the messages as quickly as possible without blocking transmission unnecessarily. The transport services architecture facilitates this by handling messages, with known identity (sequence numbers, in the simple case), lifetimes, niceness, and antecedents. Transport protocols such as SCTP provide a message-oriented API that has similar features to those we describe. Other transports, such as TCP, do not. To support a message oriented API, while still being compatible with stream-based transport protocols, implementations of the transport services architecture should provide APIs for framing and de-framing messages. That is, we push message framing down into the transport services API, allowing applications to send and receive complete messages. This is backwards compatible with existing protocols and APIs, since the wire format of messages does not change, but gives the protocol stack additional information to allow it to make better use of modern transport services. 5. RFC-EDITOR: Please remove this section before publication."} {"id": "q-en-api-drafts-8c737d136b0ccc650027f7574ed630121b72a4c9bd1903b5ed3331bf66cecea2", "old_text": "Boolean If true, it specifies that a Message should be delivered to the other side after the previous Message which was passed to the same Connection via the Send Action. If false, the Message may be delivered out of order. This property is used for protocols that support preservation of data ordering, see prop-ordering, but allow out-of-order delivery for certain messages. 7.3.4.", "comments": "I didn't like something about reliability (let's not throw an error when we can't be UNreliable, only when we can't be reliable). I updated the text on ordering a bit, to say that the Property is ignored for the first Message on a Connection, and rephrased it slightly to talk about the receiver-side transport system and the receiving application - before, the text might have talked about a sender-side behavior. I added a TODO DISCUSS comment about per-message Niceness.\nFine with me, maybe fix minor knitThanks, all good now I think.", "new_text": "Boolean If true, it specifies that the receiver-side transport protocol stack only deliver the Message to the receiving application after the previous ordered Message which was passed to the same Connection via the Send Action, when such a Message exists. If false, the Message may be delivered to the receiving application out of order. This property is used for protocols that support preservation of data ordering, see prop-ordering, but allow out-of-order delivery for certain messages. 7.3.4."} {"id": "q-en-api-drafts-8c737d136b0ccc650027f7574ed630121b72a4c9bd1903b5ed3331bf66cecea2", "old_text": "This property specifies that a message should be sent in such a way that the transport protocol ensures all data is received on the other side without corruption. Changing the 'Reliable Data Transfer' property on Messages is only possible if the transport protocol supports partial reliability (see prop-partially-reliable). Therefore, for protocols that always transfer data reliably, this property is always true and for protocols that always transfer data unreliably, this property is always false. 
Changing it may generate an error. 7.3.8.", "comments": "I didn't like something about reliability (let's not throw an error when we can't be UNreliable, only when we can't be reliable). I updated the text on ordering a bit, to say that the Property is ignored for the first Message on a Connection, and rephrased it slightly to talk about the receiver-side transport system and the receiving application - before, the text might have talked about a sender-side behavior. I added a TODO DISCUSS comment about per-message Niceness.\nFine with me, maybe fix minor knitThanks, all good now I think.", "new_text": "This property specifies that a message should be sent in such a way that the transport protocol ensures all data is received on the other side without corruption. Changing the 'Reliable Data Transfer' property on Messages is only possible if the Connection supports reliability. When this is not the case, changing it will generate an error. 7.3.8."} {"id": "q-en-api-drafts-8c737d136b0ccc650027f7574ed630121b72a4c9bd1903b5ed3331bf66cecea2", "old_text": "Preconnection, Connection (read only) This property specifies whether the application wishes to use a transport protocol that ensures that all data is received on the other side without corruption. This also entails being notified when a Connection is closed or aborted. The default is to enable Reliable", "comments": "I didn't like something about reliability (let's not throw an error when we can't be UNreliable, only when we can't be reliable). I updated the text on ordering a bit, to say that the Property is ignored for the first Message on a Connection, and rephrased it slightly to talk about the receiver-side transport system and the receiving application - before, the text might have talked about a sender-side behavior. I added a TODO DISCUSS comment about per-message Niceness.\nFine with me, maybe fix minor knitThanks, all good now I think.", "new_text": "Preconnection, Connection (read only) This property specifies whether the application needs to use a transport protocol that ensures that all data is received on the other side without corruption. This also entails being notified when a Connection is closed or aborted. The default is to enable Reliable"} {"id": "q-en-api-drafts-8c737d136b0ccc650027f7574ed630121b72a4c9bd1903b5ed3331bf66cecea2", "old_text": "12.3.22. Integer Message Property - see msg-niceness. Note that this property is not a per-message override of the connection Niceness - see conn-niceness. Both Niceness properties may interact, but can be used indepentendly and be realized by different mechanisms. 12.3.23.", "comments": "I didn't like something about reliability (let's not throw an error when we can't be UNreliable, only when we can't be reliable). I updated the text on ordering a bit, to say that the Property is ignored for the first Message on a Connection, and rephrased it slightly to talk about the receiver-side transport system and the receiving application - before, the text might have talked about a sender-side behavior. I added a TODO DISCUSS comment about per-message Niceness.\nFine with me, maybe fix minor knitThanks, all good now I think.", "new_text": "12.3.22. [TODO: Discuss: should we remove this? Whether we need this or the other depends on how we want to implement multi-streaming. We don't need both, so we should make a decision.] Integer Message Property - see msg-niceness. Note that this property is not a per-message override of the connection Niceness - see conn-niceness. 
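To make the interaction between per-Message properties and the Connection's capabilities described in this record more concrete, here is a small, purely illustrative check: requesting reliable transfer on a Connection that cannot provide it is an error, while relaxing reliability on an always-reliable stack is simply ignored. The class and function names are assumptions, not the specified API.

```python
from dataclasses import dataclass

@dataclass
class ConnectionCapabilities:
    reliable: bool = True         # the selected protocol stack delivers data reliably
    preserves_order: bool = True  # it can preserve ordering between Messages

@dataclass
class MessageProperties:
    reliable: bool = True         # 'Reliable Data Transfer' per-Message property
    ordered: bool = True          # deliver only after the previous ordered Message

def check_message_properties(msg: MessageProperties, conn: ConnectionCapabilities) -> None:
    if msg.reliable and not conn.reliable:
        # We cannot promise more than the stack offers, so this is an error.
        raise ValueError("Reliable Data Transfer requested on an unreliable Connection")
    # The opposite direction (asking for less than the stack always provides) is
    # not an error: an always-reliable or always-ordered stack may ignore the hint.

check_message_properties(MessageProperties(reliable=False), ConnectionCapabilities())
```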
Both Niceness properties may interact, but can be used independently and be realized by different mechanisms. 12.3.23."} {"id": "q-en-api-drafts-6309dae426c347cee6995e94fd6829f86118b2c0ca55ce0320ec95c38b7f9ee1", "old_text": "as an asynchronous interface, they generally use a try-and-fail model. If the application wants to read, but data has not yet been received from the peer, the call to read will fail. The application then waits for a notification to indicate when it can try again. All interaction with a Transport Services system is expected to be asynchronous, and use an event-driven model unlike sockets events.", "comments": "I guess here is not nessecarily a notification required in sychronuous communication\nMerged in master to avoid conflict, and tweaked working to remove \"some times\", which felt a bit too colloquial in tone.", "new_text": "as an asynchronous interface, they generally use a try-and-fail model. If the application wants to read, but data has not yet been received from the peer, the call to read will fail. The application then waits and can try again later. All interaction with a Transport Services system is expected to be asynchronous, and use an event-driven model unlike sockets events."} {"id": "q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a", "old_text": "5.2.10. This property specifies whether an application wants to use the connection for sending and/or receiving data. Possible values are:", "comments": "Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. 
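Returning to the asynchronous, event-driven model described in the record above (in contrast to the sockets try-and-fail pattern), a toy callback-based Receive might look like the following; the object model is invented purely for illustration.

```python
from collections import deque

class Connection:
    """Toy model: receive() registers a handler and a Received event fires later."""

    def __init__(self) -> None:
        self._inbound = deque()   # data the stack has already delivered
        self._waiting = deque()   # handlers waiting for a Received event

    def receive(self, on_received) -> None:
        # No polling and no failing read: the handler runs once data is available.
        if self._inbound:
            on_received(self._inbound.popleft())
        else:
            self._waiting.append(on_received)

    def _data_arrived(self, data: bytes) -> None:
        # Called by the (imaginary) protocol stack when the peer's data shows up.
        if self._waiting:
            self._waiting.popleft()(data)
        else:
            self._inbound.append(data)

conn = Connection()
conn.receive(lambda msg: print("Received event:", msg))
conn._data_arrived(b"hello")   # the earlier Receive completes now
```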
I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. OK (though you CAN do it with the \"real\" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: \"properties set during pre-establishment are selection properties\".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the \"Parallel Use of Multiple Paths\" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. \"disable MPTCP\")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nitsSorry, missed this until just now. Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point).", "new_text": "5.2.10. This property specifies whether an application considers it useful to transfer data across multiple paths between the same end hosts. Generally, in most cases, this will improve performance (e.g., achieve greater throughput). One possible side-effect is increased jitter, which may be problematic for delay-sensitive applications. The recommended default is to have this option. 5.2.11. This property specifies whether an application wants to use the connection for sending and/or receiving data. Possible values are:"} {"id": "q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a", "old_text": "connections are not supported by the transport protocol, the system should fall back to bidirectional transport. 5.3. Most security parameters, e.g., TLS ciphersuites, local identity and", "comments": "Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. 
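To illustrate where this discussion ended up, with "Parallel Use of Multiple Paths" expressed as a Selection Property carrying a preference rather than a bare boolean, here is a minimal hypothetical sketch; the enum values mirror the preference levels used elsewhere in these drafts, but the method names are invented.

```python
from enum import Enum

class Preference(Enum):
    REQUIRE = "require"
    PREFER = "prefer"
    IGNORE = "ignore"
    AVOID = "avoid"
    PROHIBIT = "prohibit"

class Preconnection:
    def __init__(self) -> None:
        self.selection_properties = {}

    def set_preference(self, name: str, level: Preference) -> None:
        # Selection Properties guide protocol and path selection before establishment.
        self.selection_properties[name] = level

pre = Preconnection()
# A bulk-transfer application may prefer multipath for throughput, while a
# delay-sensitive one might avoid it because of the possible extra jitter.
pre.set_preference("parallel_use_of_multiple_paths", Preference.PREFER)
```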
It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. OK (though you CAN do it with the \"real\" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: \"properties set during pre-establishment are selection properties\".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the \"Parallel Use of Multiple Paths\" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. \"disable MPTCP\")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nitsSorry, missed this until just now. 
Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point).", "new_text": "connections are not supported by the transport protocol, the system should fall back to bidirectional transport. 5.2.12. This property specifies how long to wait before aborting a Connection during establishment. 5.3. Most security parameters, e.g., TLS ciphersuites, local identity and"} {"id": "q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a", "old_text": "7.6. For application-layer protocols where the Connection initiator also sends the first message, the InitiateWithIdempotentSend() action combines Connection initiation with a first Message sent, provided that message is idempotent. Without a message context (as in send-basic): With a message context (as in message-props): The message passed to InitiateWithIdempotentSend() is, as suggested by the name, considered to be idempotent (see msg-idempotent) regardless of declared message properties or defaults. If protocol stacks supporting 0-RTT establishment with idempotent data are available on the Preconnection, then 0-RTT establishment may be used with the given message when establishing candidate connections. For a non-idemponent initial message, or when the selected stack(s) do not support 0-RTT establishment, InitiateWithIdempotentSend is identical to Initiate() followed by Send(). Neither partial sends nor send batching are supported by InitiateWithIdempotentSend(). The Events that may be sent after InitiateWithIdempotentSend() are equivalent to those that would be sent by an invocation of Initate() followed immediately by an invocation of Send(), with the caveat that a send failure that occurs because the Connection could not be established will not result in a SendError separate from the InitiateError signaling the failure of Connection establishment. 7.7.", "comments": "Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. 
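As a rough illustration of the separate establishment timeout added in this record (5.2.12), distinct from the post-establishment abort timeout: the whole candidate attempt is bounded by one deadline, and missing it surfaces as an InitiateError. This sketch uses plain asyncio as a stand-in and is not the normative behaviour; all names are placeholders.

```python
import asyncio

class InitiateError(Exception):
    """No Connection could be established before the establishment timeout."""

async def initiate(host: str, port: int, establishment_timeout: float):
    try:
        # One deadline for the whole establishment attempt, however many
        # candidates or retransmissions happen underneath.
        return await asyncio.wait_for(asyncio.open_connection(host, port),
                                      timeout=establishment_timeout)
    except (asyncio.TimeoutError, OSError) as exc:
        raise InitiateError(
            f"establishment did not complete within {establishment_timeout}s") from exc

# Example (not run here): asyncio.run(initiate("example.com", 443, 5.0))
```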
You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. OK (though you CAN do it with the \"real\" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: \"properties set during pre-establishment are selection properties\".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the \"Parallel Use of Multiple Paths\" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. \"disable MPTCP\")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nitsSorry, missed this until just now. Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point).", "new_text": "7.6. For application-layer protocols where the Connection initiator also sends the first message, the InitiateWithSend() action combines Connection initiation with a first Message sent. Without a message context (as in send-basic): With a message context (as in message-props): Whenever possible, a messageContext should be provided to declare the message passed to InitiateWithSend as idempotent. This allows the transport system to make use of 0-RTT establishment in case this is supported by the available protocol stacks. When the selected stack(s) do not support transmitting data upon connection establishment, InitiateWithSend is identical to Initiate() followed by Send(). Neither partial sends nor send batching are supported by InitiateWithSend(). 
The Events that may be sent after InitiateWithSend() are equivalent to those that would be sent by an invocation of Initate() followed immediately by an invocation of Send(), with the caveat that a send failure that occurs because the Connection could not be established will not result in a SendError separate from the InitiateError signaling the failure of Connection establishment. 7.7."} {"id": "q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a", "old_text": "9.1.6. Integer This property specifies how long to wait before aborting a Connection during establishment, or before deciding that a Connection has failed after establishment. It is given in seconds. 9.1.7.", "comments": "Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. 
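A toy sketch of the InitiateWithSend behaviour described by the new text in this record: if a message context declares the first Message idempotent and the selected stack supports sending data during establishment, the Message can be carried in 0-RTT; otherwise the call degenerates to Initiate() followed by Send(). All names below are placeholders, not the defined API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MessageContext:
    idempotent: bool = False   # safe to carry as 0-RTT data, which may be replayed

class Connection:
    def __init__(self, stack_supports_0rtt: bool) -> None:
        self.stack_supports_0rtt = stack_supports_0rtt

    def initiate(self) -> None:
        print("handshake completed -> Ready event")

    def send(self, data: bytes) -> None:
        print("sent", data)

    def initiate_with_send(self, data: bytes, ctx: Optional[MessageContext] = None) -> None:
        ctx = ctx or MessageContext()
        if ctx.idempotent and self.stack_supports_0rtt:
            print("0-RTT establishment carrying the first Message:", data)
        else:
            # Fallback is simply Initiate() followed by Send().
            self.initiate()
            self.send(data)

Connection(stack_supports_0rtt=True).initiate_with_send(b"GET /", MessageContext(idempotent=True))
```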
OK (though you CAN do it with the \"real\" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: \"properties set during pre-establishment are selection properties\".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the \"Parallel Use of Multiple Paths\" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. \"disable MPTCP\")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nitsSorry, missed this until just now. Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point).", "new_text": "9.1.6. This property specifies how long to wait before deciding that a Connection has failed after establishment. 9.1.7."} {"id": "q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a", "old_text": "Abort terminates a Connection without delivering remaining data: A ConnectionError can inform the application that the other side has aborted the Connection; however, there is no guarantee that an Abort will indeed be signaled. 11.", "comments": "Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. 
In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. OK (though you CAN do it with the \"real\" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: \"properties set during pre-establishment are selection properties\".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the \"Parallel Use of Multiple Paths\" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. \"disable MPTCP\")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nitsSorry, missed this until just now. Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point).", "new_text": "Abort terminates a Connection without delivering remaining data: A ConnectionError informs the application that data to could not be delivered after a timeout, or the other side has aborted the Connection; however, there is no guarantee that an Abort will indeed be signaled. 11."} {"id": "q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a", "old_text": "events: Ready<> occurs when a Connection created with Initiate() or InitiateWithIdempotentData() transitions to Established state. ConnectionReceived<> occurs when a Connection created with Listen() transitions to Established state.", "comments": "Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. 
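A small toy model of the Close/Abort distinction restated in this record: Close() still tries to deliver what is queued, Abort() does not, and a timeout or a peer Abort would be reported to the application as a ConnectionError event (with no guarantee that a peer Abort is ever signaled). The class and method names are illustrative only.

```python
class Connection:
    def __init__(self) -> None:
        self.send_queue = [b"queued message"]

    def close(self) -> None:
        # Graceful: remaining Messages are handed to the stack before closing.
        while self.send_queue:
            self._transmit(self.send_queue.pop(0))
        print("Closed event")

    def abort(self) -> None:
        # Abrupt: remaining data is NOT delivered.
        discarded = len(self.send_queue)
        self.send_queue.clear()
        print(f"aborted; {discarded} queued message(s) discarded")

    def _transmit(self, data: bytes) -> None:
        print("delivering", data)

Connection().close()
```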
It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. OK (though you CAN do it with the \"real\" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: \"properties set during pre-establishment are selection properties\".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the \"Parallel Use of Multiple Paths\" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. \"disable MPTCP\")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nitsSorry, missed this until just now. 
Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point).", "new_text": "events: Ready<> occurs when a Connection created with Initiate() or InitiateWithSend() transitions to Established state. ConnectionReceived<> occurs when a Connection created with Listen() transitions to Established state."} {"id": "q-en-api-drafts-9a697faff3c2d3c83ccf4472228c7966c3bb9c825737b10667b5a6fd65ea9952", "old_text": "Selection properties are represented as preferences, which can have one of five preference levels: Internally, the transport system will first exclude all protocols and paths that match a Prohibit, then exclude all protocols and paths that do not match a Require, then sort candidates according to", "comments": "This closes issue\nWeren't we supposed to stop making PRs and issues until March 5? See Zahed's email to the TAPS list\nNAME sorry \u2013 I did not read list mails since yesterday morning.\nYes, let's hold off on these until we move repos!\nWe discussed the use of a \"Default\" preference level that restores the system default in . In addition, we should check whether we really need to fix the default values for all Selection Properties in the API document or can make some of them to be \"Implementation Specific\".\nClosed by PR\nLooks good to me.", "new_text": "Selection properties are represented as preferences, which can have one of five preference levels: In addition, the pseudo-level \"Default\" can be used to reset the property to the default level used by the implementation. This level will never show up when queuing the value of a preference - the effective preference must be returned instead. Internally, the transport system will first exclude all protocols and paths that match a Prohibit, then exclude all protocols and paths that do not match a Require, then sort candidates according to"} {"id": "q-en-api-drafts-17379de1d39ad7e7f0bf4014940c8e3c7bce399b0f986971fed0010b73ea0df8", "old_text": "Passive open is the Action of waiting for Connections from remote endpoints, commonly used by servers in client-server interactions. Passive open is supported by this interface through the Listen Action: Before calling Listen, the caller must have initialized the Preconnection during the pre-establishment phase with a Local Endpoint specifier, as well as all properties necessary for Protocol Stack selection. A Remote Endpoint may optionally be specified, to constrain what Connections are accepted. The Listen() Action consumes the Preconnection. Once Listen() has been called, no further properties may be added to the Preconnection, and no subsequent establishment call may be made on the Preconnection. Listening continues until the global context shuts down, or until the Stop action is performed on the same Preconnection: After Stop() is called, the preconnection can be disposed of. The ConnectionReceived Event occurs when a Remote Endpoint has established a transport-layer connection to this Preconnection (for Connection-oriented transport protocols), or when the first Message has been received from the Remote Endpoint (for Connectionless protocols), causing a new Connection to be created. The resulting", "comments": "This adds an explicit Listener object to the API ( to clarify behaviour of multi-streaming protocols ( clarify behaviour and enable re-use of Preconnections (\nWe talk about listen objects in architecture and implementation, but not in the API. 
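The candidate-selection rule stated in this record (drop everything matching a Prohibit, drop everything that fails a Require, then rank what is left by Prefer and Avoid) can be written out directly. The "Default" pseudo-level would already have been resolved to the implementation's default before this point. The candidate descriptions and property names below are made up for the sketch.

```python
from enum import Enum

class Pref(Enum):
    PROHIBIT = 0
    AVOID = 1
    IGNORE = 2
    PREFER = 3
    REQUIRE = 4

def select_candidates(candidates, selection_props):
    # candidates: list of dicts mapping a property name to whether it is supported.
    for prop, pref in selection_props.items():
        if pref is Pref.PROHIBIT:
            candidates = [c for c in candidates if not c.get(prop, False)]
        elif pref is Pref.REQUIRE:
            candidates = [c for c in candidates if c.get(prop, False)]

    def rank(c):
        score = 0
        for prop, pref in selection_props.items():
            if pref is Pref.PREFER and c.get(prop, False):
                score += 1
            elif pref is Pref.AVOID and c.get(prop, False):
                score -= 1
        return score

    return sorted(candidates, key=rank, reverse=True)

candidates = [{"name": "tcp", "reliable": True, "multipath": False},
              {"name": "mptcp", "reliable": True, "multipath": True}]
props = {"reliable": Pref.REQUIRE, "multipath": Pref.PREFER}
print([c["name"] for c in select_candidates(candidates, props)])  # ['mptcp', 'tcp']
```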
Changes needed: Let return a Listener Move the the Listener Allow Preconnection to be used multiple times.\nThis issue popped up while discussing and seem to be a wired QUIC multi-streaming issue, possible affecting SCTP too): When a multi-streaming protocol like QUIC allows both ends to open new streams (e.g. by cloning the QUIC connection representing the stream), the both side needs a way to get a ConnectionReceived event for a new stream. That might result in the following weird pattern: I think this needs to be documented somehow and we may need to add a Selection Property do turn off this behaviour.\nCalling is an active open. Once the connection is open, the peer can open new streams, but those would surely be events on the not on the .\nNAME I am not sure with this one: assuming you have a bunch of ? The first one? What happens if I close the first one?\nThat's a good point, and I'm not sure I know the answer, but I don't think the should fire the events. A is a potential connection, yet to be instantiated, but these connections come from an already active connection. I've never been entirely comfortable with cloning a connection as the multi-streaming abstraction. This is perhaps the first scenario where it struggles though.\nI do not understand the pattern in the example. You do not get a ConnectionReceived after an active open, you need a Listen?\nNAME Your assumption is true for TCP, but not for multi-streaming transports like QUIC. With QUIC, you can indeed get additional TAPS connections (QUIC streams) based on an existing, initiated connections. The question is how to handle this. NAME A possible way out would be making itself is returned by an event then.\nNAME the TAPS interface is the same weather you have TCP or QUIC underneath. For TAPS an active open generates a READY event not a ConnectionReceived? I do not think your example is a valid TAPS sequence.\nNAME the Ready event is not shown in the example (it would be within the first ). The problem I raised is how to model the addition of streams to a multi-streaming connection. In case of QUIC, additional streams can be initiated by the server/listening side of the QUIC connection. If we model QUIC streams as TAPS connections, this results in events at the client (and server) side for each new QUIC stream.\nNAME agree we could add , although it's a significant shift in the model. If we did that, we might also want to revisit as the way of creating a new stream on a connection, in favour of a method on . NAME did you implement this with QUIC? How does your implementation work?\nI also see a problem with the communication pattern here. To me, modelling a stream as a TAPS connection means that for every new connection that a peer creates (be it a stream of an existing connection or not), I'll have to listen. So: if I expect 5 incoming connections, which may e.g. be 1 connection with 1 stream and 4 later-added streams below the API, I'll have to listen 5 times. What this also means is that the grouping notion should be a readable property - I believe that we currently only describe it as something that's actively created via clone(). Regarding code, no idea how NAME has done this, but I believe that the NEAT implementation is like I describe here. For more details on this specific aspect, see: URL\nNAME We create a Listener from a Connection to handle inbound streams associated with the same parent transport connection. 
NAME Currently, our Listener object doesn't require calling Listen() for each new connection\u2014you can get the event invoked many times, so it doesn't require knowing how many streams may be opened by the peer. In order to get back pressure, we're talking about having a window for how many new inbound connections will be allowed at a time.\nOk, that also sounds perfectly reasonable. Actually it seems that this (receiving multiple ConnectionReceived events on one Preconnection) is already supported by our current text, sorry for missing this! So... what is the problem we're discussing here - what do we need to change in the text? I think we need only the following: do we have a use case for one Listener getting multiple Connections without having these Connections grouped? (I guess not? Grouping is not a promise of multi-streaming.) If so, we need the ability to query whether connections are grouped (currently grouping is only created actively via \"clone\"). Otherwise, we can just statically say that all Connections that are created from the same Preconnections are grouped. some kind of ConnectionReceived event for the Rendezvous case, with text saying that all Connections created off the same PreConnection are grouped. (right?) a way to provide back pressure. Here, we already have PreConnection.Stop(). In the interest of keeping things simple, do we think that intelligently switching between a listening and a non-listening mode would be enough? If so, maybe we should have a call similar to Stop which stops listening, but does not dispose of the PreConnection, such that Listen could be called again. Maybe have an \"End\" call which stops and disposes, and let Stop only stop listening (because \"Stop\", to me, indicates only stopping but nothing else).\nNAME This solution sounds very reasonable \u2013 I guess we should do it the same way for TAPS and add that as Text to the API NAME Some comments: It depends \u2013 if writing a server, you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener. So the behaviour wether to group should be a property of the Listener. I would not group connections received through the same Rendezvous (analogous to server-style listener). We might want to add a property to the listener\nNAME I agree, but: just because groups may encompass more than the Connections from one Listener doesn't mean that it would be bad to always group the Connections from one Listener (i.e., group them among themselves, but maybe group them with others too). Besides, I think that wouldn't be the listener deciding anyway. Just to understand this, are you saying \"no, see my argument for case 1\" ? Then we're just discussing case 1, my answer is above. Otherwise I'd say: why not? yes ... but what do you mean with ? We have , just the semantics are not what I suggested. Regarding , I think that's a good idea - my own proposal was me overdoing \"keep it simple, let's not add features\". Nobody needs to use this unless they want to.\nNAME I guess the answer really depends on what a connection group is for you. I always saw connection groups as an abstraction for multi-streaming connections (or their emulation) \u2013 therefore I see no use for grouping connections from multiple listeners. My opinion is auto-grouping for multi-streaming is fine, otherwise it is not. really falls out of 1 I want your semantics, but call it\nNAME Me too. 
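The pattern sketched in the exchange above, one Listener whose ConnectionReceived event can fire any number of times (including for peer-initiated streams), plus a window that bounds how many unhandled inbound Connections are outstanding at once, might look roughly like this. This mirrors the discussion only; none of these names or the window mechanism are defined by the drafts.

```python
class Listener:
    def __init__(self, on_connection_received, inbound_window: int = 4) -> None:
        self.on_connection_received = on_connection_received
        self.inbound_window = inbound_window   # back-pressure: pending Connections allowed
        self.pending = []
        self.stopped = False

    def incoming(self, connection) -> None:
        # Invoked by the stack for each new transport connection or new stream.
        if self.stopped or len(self.pending) >= self.inbound_window:
            return  # leave it to the stack until the application catches up
        self.pending.append(connection)
        self.on_connection_received(connection)   # ConnectionReceived can fire many times

    def connection_handled(self, connection) -> None:
        self.pending.remove(connection)           # frees one slot in the window

    def stop(self) -> None:
        self.stopped = True                       # a Stopped event would follow

listener = Listener(lambda c: print("ConnectionReceived:", c), inbound_window=2)
for stream in ("stream-1", "stream-2", "stream-3"):
    listener.incoming(stream)                     # the third is held back by the window
```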
I can imagine other things ( this paper of ours comes to mind: URL ), but generally I agree and don't think we should complicate the API for this. But I'm getting confused: you wrote \"you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener\". That sounded to me like you want to support multiple streams, but from several listeners?! ACK Agreed!\nNAME let me elaborate on this\u2026 You might want all streams of a multi-streaming connection grouped, e.g., all streams within a QUIC connection You might not want all multi-streaming connections (e.g., QUIC connections) received through the same Listener object grouped You definitely do not want all all streams of all QUIC connection received through one listener grouped (except if you have hierarchical groups that allows you to sort that out the cases above)\nI agree with all this. Anyway I think it's a moot point: whether they're grouped or not is not up to the listener to decide? So grouping needs to be something that can be queried.\nOk \u2013 fair enough \u2013 but I still think the Listener is the right place to configure whether connections that come out of it should be grouped by default or not.\nNot unless we have a concrete idea on what to do with this information inside the transport system. => I think no text update is needed as a result of this issue and lean towards closing it...\nI don't think we really need a text update for the grouping, but for the original question. To address the handling of incoming streams in multi-streaming connections, I would like to have the solution NAME outlined reflected in the API.\nOne question on this, for NAME the description of your Listener object being able to accept multiple events, is that how it works in general or was that specif for a multi-streaming scenario? As specified I am not sure that our Listen action does not allow multiple ConnectionReceived events? The text says\nNAME The listener object for all cases (both multi streaming and receiving brand-new transport connections) allows multiple events. I think the existing text is a bit ambiguous, but it seems that \"listening continues\" implies that the event can certainly fire again.\nRevisit after is done\nRegarding this text: This seems to imply that the Preconnection object, which may be full of many carefully tuned settings that could be useful for another Connection, is no longer usable. While I understand that the specific Preconnection cannot change for a Connection, this is inconvenient to the application. I see two good options to solve this: Make a deep copy of the Preconnection upon Initiate(), and say that a Preconnection may be used or changed but changes can no longer influence the Connection (this is what Network.framework currently does) Allow a URL() method to let the applications do the deep copy themselves when and if they want to re-use.\n+1 to reusing the preconnection. Do you even need a deep copy. The connection is a new object that should be independent of the preconnection anyway, no? Another thought: Are there actually ways to get different connections out of the preconenction. E.g. you do racing and end up with two open connections and (suddenly) want to use both...?\nI'd like to see this as well. To me it makes most sense to allow calling Initiate() on the same Preconnection multiple times to get multiple Connections, but not allow to change this Preconnection anymore after the first Initiate. 
This way it would be similar to Listen() and Rendezvous(), where we have a Preconnection that yields multiple Connections, and that cannot be changed anymore after Listen() or Rendezvous(). So here we already have a binding between the Preconnection and the Connections that come out of it. If the application wants to change the Preconnection before re-using it, I'd be in favor of letting it explicitly clone the Preconnection - not sure if we actually want to call it clone() though, because we already have Connection.Clone(), which creates a Connection Group, a completely different thing.\nWhy do you think it should not be allowed that a preconnection can be changed after the first initiate? If you don't change it you should get the same kind of connection, if you change it you might get something different. Your choice!\nI wanted the API contract to be similar for different forms of Connection establishment. But I may be overthinking this.\nFirst, I like the idea of reusing Preconnections. I am not sure whether we should define now whether they should be changeable after the first initiation. There might be kinds of Preconnections where this is a good idea, there might be others where it is not. I anticipate one can realize connection pools as a sub-class of Preconnection. In this case, in such cases, it should not be possible to change a Preconnection.\nSo the difference is if you think you morph a preconnection into a connection, or if the preconnection create the connection. i think the first think does not make sense because their are two completely different things; they have not actions or events in common.\nNAME The morphing one has its beauty for modeling TCP fastopen / 0-RTT data.\nWhether it makes sense to re-use a perhaps depends on how fully specified it is? Reusing a completely specified , that will just re-open an identical 5-tuple as an existing might not make sense. Reusing a partially specified , e.g., with the source port left as a wildcard to new connections can open without conflicts, seems reasonable.\nNAME right, using the exact same set of fully-specified properties isn't going to work. But having a partially specified one and copying it, or modifying one that was used (by switching out the Remote, etc) would be useful.\nThis might work better the other way -- on , the Preconnection corresponding to the initiated/initiating connection gets deep-copied into the new connection, leaving the existing Preconnection in the state that it was. That deep copy, inside the connection, cannot be reused, but the existing one could be. (This probably only works for connections on which you . turns the preconnection into a listener from which multiple , each of which will have their own deep copy of the listener, will spawn. might be even weirder, but there's a potentially bigger win to allowing reuse, but I don't quite have my head around how it would work)\nSo i think there is actually not that much too deep copy because a preconception is not a super class of a connection. At the moment when you call initiate or rendezvous the preconception should create a connection and store all information regarding this connection in the connection object which is later returned. The only thing that needs \"copying\" is the properties but you need to create a new object anyway because preconnection properties have preferences and connection properties not but have to be marked as mutable or unmutable. These two things are also not the same. 
So the only that that's left to copy maybe are the endpoints. Not sure if the preconnection and connection both have a copy of the endpoints or if an endpoint only exists once and both have a reference (which means if the endpoint gets somehow updated that's relevant for both) or maybe even the connection does not need a reference to the endpoint anymore...?\nNAME right \u2013 but I don't know a way of specifying \"a is consumed if fully specified, else may be reusedin any current programming language. Maybe tracking lifetimes so closely doesn't matter though, and we can just write this in the draft.\nI have no problem with an Interface where some Preconnections can be reused and others can not. That makes a pretty nice case for -- preconnection reuses itself for multiple incoming connections. For ` I see this API-wise as some kind of listen called at both sides.\nSide meeting says we can reuse the initiate call. Implementation should specify that a deep copy is made on Initiate() or Listen()\nWill be trivial to fix after is done\nSorry to reopen an old issue, but when reading the current API draft, I noticed that it still says that reusing the Preconnection is possible for Listeners, but not possible for Initiate() and Rendezvous(). For example, see Section 6.1: \"The Initiate() Action consumes the Preconnection [\u2026]\" From what I read in this issue, I think Preconnections should be allowed to be reused for Initiate() and Rendezvous() as well (possibly with a note on deep-copy).\nThis looks just right to me, thanks a lot for doing this! In particular having read again, I believe that NAME should definitely have the last word on this one. So I'll go with whatever NAME says.", "new_text": "Passive open is the Action of waiting for Connections from remote endpoints, commonly used by servers in client-server interactions. Passive open is supported by this interface through the Listen Action and returns a Listener object: Before calling Listen, the caller must have initialized the Preconnection during the pre-establishment phase with a Local Endpoint specifier, as well as all properties necessary for Protocol Stack selection. A Remote Endpoint may optionally be specified, to constrain what Connections are accepted. The Listen() Action returns a Listener object. Once Listen() has been called, properties added to the Preconnection have no effect on the Listener and the Preconnection can be disposed of or reused. Listening continues until the global context shuts down, or until the Stop action is performed on the Listener object: After Stop() is called, the Listener can be disposed of. The ConnectionReceived Event occurs when a Remote Endpoint has established a transport-layer connection to this Listener (for Connection-oriented transport protocols), or when the first Message has been received from the Remote Endpoint (for Connectionless protocols), causing a new Connection to be created. The resulting"} {"id": "q-en-api-drafts-17379de1d39ad7e7f0bf4014940c8e3c7bce399b0f986971fed0010b73ea0df8", "old_text": "ready to use as soon as it is passed to the application via the event. A ListenError occurs either when the Preconnection cannot be fulfilled for listening, when the Local Endpoint (or Remote Endpoint, if specified) cannot be resolved, or when the application is prohibited from listening by policy. A Stopped event occurs after the Preconnection has stopped listening. 
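A brief usage sketch of the passive-open flow as rewritten in this record: Listen() on a prepared Preconnection returns a Listener, ConnectionReceived (or ListenError) events are delivered on that Listener, Stop() ends listening with a Stopped event, and the Preconnection itself stays reusable. The Python class shapes below are placeholders for the abstract API.

```python
class Listener:
    def __init__(self, handlers: dict) -> None:
        self.handlers = handlers

    def stop(self) -> None:
        # After Stop(), a Stopped event fires and the Listener can be disposed of.
        self.handlers.get("Stopped", lambda: None)()

class Preconnection:
    def __init__(self, local_endpoint: str) -> None:
        self.local_endpoint = local_endpoint

    def listen(self, **handlers) -> Listener:
        # Returns a Listener; the Preconnection is neither consumed nor frozen.
        return Listener(handlers)

pre = Preconnection("0.0.0.0:443")
listener = pre.listen(
    ConnectionReceived=lambda conn: print("new Connection:", conn),
    ListenError=lambda err: print("could not listen:", err),
    Stopped=lambda: print("Stopped event"),
)
listener.stop()
another = pre.listen()   # the same Preconnection can be reused for another Listener
```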
6.3.", "comments": "This adds an explicit Listener object to the API ( to clarify behaviour of multi-streaming protocols ( clarify behaviour and enable re-use of Preconnections (\nWe talk about listen objects in architecture and implementation, but not in the API. Changes needed: Let return a Listener Move the the Listener Allow Preconnection to be used multiple times.\nThis issue popped up while discussing and seem to be a wired QUIC multi-streaming issue, possible affecting SCTP too): When a multi-streaming protocol like QUIC allows both ends to open new streams (e.g. by cloning the QUIC connection representing the stream), the both side needs a way to get a ConnectionReceived event for a new stream. That might result in the following weird pattern: I think this needs to be documented somehow and we may need to add a Selection Property do turn off this behaviour.\nCalling is an active open. Once the connection is open, the peer can open new streams, but those would surely be events on the not on the .\nNAME I am not sure with this one: assuming you have a bunch of ? The first one? What happens if I close the first one?\nThat's a good point, and I'm not sure I know the answer, but I don't think the should fire the events. A is a potential connection, yet to be instantiated, but these connections come from an already active connection. I've never been entirely comfortable with cloning a connection as the multi-streaming abstraction. This is perhaps the first scenario where it struggles though.\nI do not understand the pattern in the example. You do not get a ConnectionReceived after an active open, you need a Listen?\nNAME Your assumption is true for TCP, but not for multi-streaming transports like QUIC. With QUIC, you can indeed get additional TAPS connections (QUIC streams) based on an existing, initiated connections. The question is how to handle this. NAME A possible way out would be making itself is returned by an event then.\nNAME the TAPS interface is the same weather you have TCP or QUIC underneath. For TAPS an active open generates a READY event not a ConnectionReceived? I do not think your example is a valid TAPS sequence.\nNAME the Ready event is not shown in the example (it would be within the first ). The problem I raised is how to model the addition of streams to a multi-streaming connection. In case of QUIC, additional streams can be initiated by the server/listening side of the QUIC connection. If we model QUIC streams as TAPS connections, this results in events at the client (and server) side for each new QUIC stream.\nNAME agree we could add , although it's a significant shift in the model. If we did that, we might also want to revisit as the way of creating a new stream on a connection, in favour of a method on . NAME did you implement this with QUIC? How does your implementation work?\nI also see a problem with the communication pattern here. To me, modelling a stream as a TAPS connection means that for every new connection that a peer creates (be it a stream of an existing connection or not), I'll have to listen. So: if I expect 5 incoming connections, which may e.g. be 1 connection with 1 stream and 4 later-added streams below the API, I'll have to listen 5 times. What this also means is that the grouping notion should be a readable property - I believe that we currently only describe it as something that's actively created via clone(). Regarding code, no idea how NAME has done this, but I believe that the NEAT implementation is like I describe here. 
For more details on this specific aspect, see: URL\nNAME We create a Listener from a Connection to handle inbound streams associated with the same parent transport connection. NAME Currently, our Listener object doesn't require calling Listen() for each new connection\u2014you can get the event invoked many times, so it doesn't require knowing how many streams may be opened by the peer. In order to get back pressure, we're talking about having a window for how many new inbound connections will be allowed at a time.\nOk, that also sounds perfectly reasonable. Actually it seems that this (receiving multiple ConnectionReceived events on one Preconnection) is already supported by our current text, sorry for missing this! So... what is the problem we're discussing here - what do we need to change in the text? I think we need only the following: do we have a use case for one Listener getting multiple Connections without having these Connections grouped? (I guess not? Grouping is not a promise of multi-streaming.) If so, we need the ability to query whether connections are grouped (currently grouping is only created actively via \"clone\"). Otherwise, we can just statically say that all Connections that are created from the same Preconnections are grouped. some kind of ConnectionReceived event for the Rendezvous case, with text saying that all Connections created off the same PreConnection are grouped. (right?) a way to provide back pressure. Here, we already have PreConnection.Stop(). In the interest of keeping things simple, do we think that intelligently switching between a listening and a non-listening mode would be enough? If so, maybe we should have a call similar to Stop which stops listening, but does not dispose of the PreConnection, such that Listen could be called again. Maybe have an \"End\" call which stops and disposes, and let Stop only stop listening (because \"Stop\", to me, indicates only stopping but nothing else).\nNAME This solution sounds very reasonable \u2013 I guess we should do it the same way for TAPS and add that as Text to the API NAME Some comments: It depends \u2013 if writing a server, you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener. So the behaviour wether to group should be a property of the Listener. I would not group connections received through the same Rendezvous (analogous to server-style listener). We might want to add a property to the listener\nNAME I agree, but: just because groups may encompass more than the Connections from one Listener doesn't mean that it would be bad to always group the Connections from one Listener (i.e., group them among themselves, but maybe group them with others too). Besides, I think that wouldn't be the listener deciding anyway. Just to understand this, are you saying \"no, see my argument for case 1\" ? Then we're just discussing case 1, my answer is above. Otherwise I'd say: why not? yes ... but what do you mean with ? We have , just the semantics are not what I suggested. Regarding , I think that's a good idea - my own proposal was me overdoing \"keep it simple, let's not add features\". Nobody needs to use this unless they want to.\nNAME I guess the answer really depends on what a connection group is for you. I always saw connection groups as an abstraction for multi-streaming connections (or their emulation) \u2013 therefore I see no use for grouping connections from multiple listeners. 
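The "Listener created from an existing Connection" idea mentioned in this thread, together with a window for back pressure on new inbound streams, could look roughly like the following Python sketch. All class and method names here are hypothetical and only illustrate the behaviour under discussion, not any actual implementation.

```python
# Hypothetical sketch: receiving peer-initiated streams of a multi-streaming
# transport (e.g. QUIC) through a Listener attached to an existing Connection,
# with a small window limiting how many new inbound streams are accepted.

class Connection:
    def __init__(self, group=None):
        # Connections spawned from the same parent share a group.
        self.group = group if group is not None else []
        self.group.append(self)

    def listen(self, on_connection_received, window=4):
        return StreamListener(self, on_connection_received, window)

class StreamListener:
    def __init__(self, parent, on_connection_received, window):
        self.parent = parent
        self.on_connection_received = on_connection_received
        self.window = window

    def peer_opened_stream(self):
        # Called when the remote endpoint opens a new stream on the parent
        # transport connection; back pressure: refuse beyond the window.
        if self.window <= 0:
            return None  # a real stack would leave the stream unaccepted
        self.window -= 1
        stream = Connection(group=self.parent.group)
        self.on_connection_received(stream)
        return stream

if __name__ == "__main__":
    conn = Connection()   # an established multi-streaming connection
    listener = conn.listen(
        lambda c: print("peer-initiated stream, group size", len(c.group)),
        window=2)
    listener.peer_opened_stream()
    listener.peer_opened_stream()
    print(listener.peer_opened_stream())   # None: window exhausted
```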
My opinion is auto-grouping for multi-streaming is fine, otherwise it is not. really falls out of 1 I want your semantics, but call it\nNAME Me too. I can imagine other things ( this paper of ours comes to mind: URL ), but generally I agree and don't think we should complicate the API for this. But I'm getting confused: you wrote \"you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener\". That sounded to me like you want to support multiple streams, but from several listeners?! ACK Agreed!\nNAME let me elaborate on this\u2026 You might want all streams of a multi-streaming connection grouped, e.g., all streams within a QUIC connection You might not want all multi-streaming connections (e.g., QUIC connections) received through the same Listener object grouped You definitely do not want all all streams of all QUIC connection received through one listener grouped (except if you have hierarchical groups that allows you to sort that out the cases above)\nI agree with all this. Anyway I think it's a moot point: whether they're grouped or not is not up to the listener to decide? So grouping needs to be something that can be queried.\nOk \u2013 fair enough \u2013 but I still think the Listener is the right place to configure whether connections that come out of it should be grouped by default or not.\nNot unless we have a concrete idea on what to do with this information inside the transport system. => I think no text update is needed as a result of this issue and lean towards closing it...\nI don't think we really need a text update for the grouping, but for the original question. To address the handling of incoming streams in multi-streaming connections, I would like to have the solution NAME outlined reflected in the API.\nOne question on this, for NAME the description of your Listener object being able to accept multiple events, is that how it works in general or was that specif for a multi-streaming scenario? As specified I am not sure that our Listen action does not allow multiple ConnectionReceived events? The text says\nNAME The listener object for all cases (both multi streaming and receiving brand-new transport connections) allows multiple events. I think the existing text is a bit ambiguous, but it seems that \"listening continues\" implies that the event can certainly fire again.\nRevisit after is done\nRegarding this text: This seems to imply that the Preconnection object, which may be full of many carefully tuned settings that could be useful for another Connection, is no longer usable. While I understand that the specific Preconnection cannot change for a Connection, this is inconvenient to the application. I see two good options to solve this: Make a deep copy of the Preconnection upon Initiate(), and say that a Preconnection may be used or changed but changes can no longer influence the Connection (this is what Network.framework currently does) Allow a URL() method to let the applications do the deep copy themselves when and if they want to re-use.\n+1 to reusing the preconnection. Do you even need a deep copy. The connection is a new object that should be independent of the preconnection anyway, no? Another thought: Are there actually ways to get different connections out of the preconenction. E.g. you do racing and end up with two open connections and (suddenly) want to use both...?\nI'd like to see this as well. 
To me it makes most sense to allow calling Initiate() on the same Preconnection multiple times to get multiple Connections, but not allow to change this Preconnection anymore after the first Initiate. This way it would be similar to Listen() and Rendezvous(), where we have a Preconnection that yields multiple Connections, and that cannot be changed anymore after Listen() or Rendezvous(). So here we already have a binding between the Preconnection and the Connections that come out of it. If the application wants to change the Preconnection before re-using it, I'd be in favor of letting it explicitly clone the Preconnection - not sure if we actually want to call it clone() though, because we already have Connection.Clone(), which creates a Connection Group, a completely different thing.\nWhy do you think it should not be allowed that a preconnection can be changed after the first initiate? If you don't change it you should get the same kind of connection, if you change it you might get something different. Your choice!\nI wanted the API contract to be similar for different forms of Connection establishment. But I may be overthinking this.\nFirst, I like the idea of reusing Preconnections. I am not sure whether we should define now whether they should be changeable after the first initiation. There might be kinds of Preconnections where this is a good idea, there might be others where it is not. I anticipate one can realize connection pools as a sub-class of Preconnection. In this case, in such cases, it should not be possible to change a Preconnection.\nSo the difference is if you think you morph a preconnection into a connection, or if the preconnection create the connection. i think the first think does not make sense because their are two completely different things; they have not actions or events in common.\nNAME The morphing one has its beauty for modeling TCP fastopen / 0-RTT data.\nWhether it makes sense to re-use a perhaps depends on how fully specified it is? Reusing a completely specified , that will just re-open an identical 5-tuple as an existing might not make sense. Reusing a partially specified , e.g., with the source port left as a wildcard to new connections can open without conflicts, seems reasonable.\nNAME right, using the exact same set of fully-specified properties isn't going to work. But having a partially specified one and copying it, or modifying one that was used (by switching out the Remote, etc) would be useful.\nThis might work better the other way -- on , the Preconnection corresponding to the initiated/initiating connection gets deep-copied into the new connection, leaving the existing Preconnection in the state that it was. That deep copy, inside the connection, cannot be reused, but the existing one could be. (This probably only works for connections on which you . turns the preconnection into a listener from which multiple , each of which will have their own deep copy of the listener, will spawn. might be even weirder, but there's a potentially bigger win to allowing reuse, but I don't quite have my head around how it would work)\nSo i think there is actually not that much too deep copy because a preconception is not a super class of a connection. At the moment when you call initiate or rendezvous the preconception should create a connection and store all information regarding this connection in the connection object which is later returned. 
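A small Python sketch of the deep-copy-on-Initiate idea discussed here: Initiate() copies the endpoints and properties into the new Connection, so the Preconnection remains reusable and later changes cannot affect Connections that were already created. The classes and field names are illustrative assumptions only, not an actual API.

```python
# Hypothetical sketch: initiate() takes a deep copy of the Preconnection's
# endpoint and properties, so the Preconnection stays reusable and later
# changes cannot influence Connections that were already initiated.

import copy

class Connection:
    def __init__(self, remote, properties):
        self.remote = remote
        self.properties = properties   # private snapshot, not shared

class Preconnection:
    def __init__(self, remote, properties=None):
        self.remote = remote
        self.properties = properties or {}

    def initiate(self):
        # Deep copy on initiate(): the Connection owns its own copies.
        return Connection(copy.deepcopy(self.remote),
                          copy.deepcopy(self.properties))

if __name__ == "__main__":
    pre = Preconnection({"host": "example.com", "port": 443},
                        {"reliability": "required"})
    c1 = pre.initiate()
    pre.properties["reliability"] = "not required"   # tune and reuse
    c2 = pre.initiate()
    print(c1.properties)   # {'reliability': 'required'}  -- unaffected
    print(c2.properties)   # {'reliability': 'not required'}
```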
The only thing that needs \"copying\" is the properties but you need to create a new object anyway because preconnection properties have preferences and connection properties not but have to be marked as mutable or unmutable. These two things are also not the same. So the only that that's left to copy maybe are the endpoints. Not sure if the preconnection and connection both have a copy of the endpoints or if an endpoint only exists once and both have a reference (which means if the endpoint gets somehow updated that's relevant for both) or maybe even the connection does not need a reference to the endpoint anymore...?\nNAME right \u2013 but I don't know a way of specifying \"a is consumed if fully specified, else may be reusedin any current programming language. Maybe tracking lifetimes so closely doesn't matter though, and we can just write this in the draft.\nI have no problem with an Interface where some Preconnections can be reused and others can not. That makes a pretty nice case for -- preconnection reuses itself for multiple incoming connections. For ` I see this API-wise as some kind of listen called at both sides.\nSide meeting says we can reuse the initiate call. Implementation should specify that a deep copy is made on Initiate() or Listen()\nWill be trivial to fix after is done\nSorry to reopen an old issue, but when reading the current API draft, I noticed that it still says that reusing the Preconnection is possible for Listeners, but not possible for Initiate() and Rendezvous(). For example, see Section 6.1: \"The Initiate() Action consumes the Preconnection [\u2026]\" From what I read in this issue, I think Preconnections should be allowed to be reused for Initiate() and Rendezvous() as well (possibly with a note on deep-copy).\nThis looks just right to me, thanks a lot for doing this! In particular having read again, I believe that NAME should definitely have the last word on this one. So I'll go with whatever NAME says.", "new_text": "ready to use as soon as it is passed to the application via the event. A ListenError occurs either when the Properties of the Preconnection cannot be fulfilled for listening, when the Local Endpoint (or Remote Endpoint, if specified) cannot be resolved, or when the application is prohibited from listening by policy. A Stopped event occurs after the Listener has stopped listening. 6.3."} {"id": "q-en-api-drafts-17379de1d39ad7e7f0bf4014940c8e3c7bce399b0f986971fed0010b73ea0df8", "old_text": "6.4. Groups of Connections can be created using the Clone Action: Calling Clone on a Connection yields a group of two Connections: the parent Connection on which Clone was called, and the resulting cloned", "comments": "This adds an explicit Listener object to the API ( to clarify behaviour of multi-streaming protocols ( clarify behaviour and enable re-use of Preconnections (\nWe talk about listen objects in architecture and implementation, but not in the API. Changes needed: Let return a Listener Move the the Listener Allow Preconnection to be used multiple times.\nThis issue popped up while discussing and seem to be a wired QUIC multi-streaming issue, possible affecting SCTP too): When a multi-streaming protocol like QUIC allows both ends to open new streams (e.g. by cloning the QUIC connection representing the stream), the both side needs a way to get a ConnectionReceived event for a new stream. 
That might result in the following weird pattern: I think this needs to be documented somehow and we may need to add a Selection Property do turn off this behaviour.\nCalling is an active open. Once the connection is open, the peer can open new streams, but those would surely be events on the not on the .\nNAME I am not sure with this one: assuming you have a bunch of ? The first one? What happens if I close the first one?\nThat's a good point, and I'm not sure I know the answer, but I don't think the should fire the events. A is a potential connection, yet to be instantiated, but these connections come from an already active connection. I've never been entirely comfortable with cloning a connection as the multi-streaming abstraction. This is perhaps the first scenario where it struggles though.\nI do not understand the pattern in the example. You do not get a ConnectionReceived after an active open, you need a Listen?\nNAME Your assumption is true for TCP, but not for multi-streaming transports like QUIC. With QUIC, you can indeed get additional TAPS connections (QUIC streams) based on an existing, initiated connections. The question is how to handle this. NAME A possible way out would be making itself is returned by an event then.\nNAME the TAPS interface is the same weather you have TCP or QUIC underneath. For TAPS an active open generates a READY event not a ConnectionReceived? I do not think your example is a valid TAPS sequence.\nNAME the Ready event is not shown in the example (it would be within the first ). The problem I raised is how to model the addition of streams to a multi-streaming connection. In case of QUIC, additional streams can be initiated by the server/listening side of the QUIC connection. If we model QUIC streams as TAPS connections, this results in events at the client (and server) side for each new QUIC stream.\nNAME agree we could add , although it's a significant shift in the model. If we did that, we might also want to revisit as the way of creating a new stream on a connection, in favour of a method on . NAME did you implement this with QUIC? How does your implementation work?\nI also see a problem with the communication pattern here. To me, modelling a stream as a TAPS connection means that for every new connection that a peer creates (be it a stream of an existing connection or not), I'll have to listen. So: if I expect 5 incoming connections, which may e.g. be 1 connection with 1 stream and 4 later-added streams below the API, I'll have to listen 5 times. What this also means is that the grouping notion should be a readable property - I believe that we currently only describe it as something that's actively created via clone(). Regarding code, no idea how NAME has done this, but I believe that the NEAT implementation is like I describe here. For more details on this specific aspect, see: URL\nNAME We create a Listener from a Connection to handle inbound streams associated with the same parent transport connection. NAME Currently, our Listener object doesn't require calling Listen() for each new connection\u2014you can get the event invoked many times, so it doesn't require knowing how many streams may be opened by the peer. In order to get back pressure, we're talking about having a window for how many new inbound connections will be allowed at a time.\nOk, that also sounds perfectly reasonable. Actually it seems that this (receiving multiple ConnectionReceived events on one Preconnection) is already supported by our current text, sorry for missing this! 
So... what is the problem we're discussing here - what do we need to change in the text? I think we need only the following: do we have a use case for one Listener getting multiple Connections without having these Connections grouped? (I guess not? Grouping is not a promise of multi-streaming.) If so, we need the ability to query whether connections are grouped (currently grouping is only created actively via \"clone\"). Otherwise, we can just statically say that all Connections that are created from the same Preconnections are grouped. some kind of ConnectionReceived event for the Rendezvous case, with text saying that all Connections created off the same PreConnection are grouped. (right?) a way to provide back pressure. Here, we already have PreConnection.Stop(). In the interest of keeping things simple, do we think that intelligently switching between a listening and a non-listening mode would be enough? If so, maybe we should have a call similar to Stop which stops listening, but does not dispose of the PreConnection, such that Listen could be called again. Maybe have an \"End\" call which stops and disposes, and let Stop only stop listening (because \"Stop\", to me, indicates only stopping but nothing else).\nNAME This solution sounds very reasonable \u2013 I guess we should do it the same way for TAPS and add that as Text to the API NAME Some comments: It depends \u2013 if writing a server, you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener. So the behaviour wether to group should be a property of the Listener. I would not group connections received through the same Rendezvous (analogous to server-style listener). We might want to add a property to the listener\nNAME I agree, but: just because groups may encompass more than the Connections from one Listener doesn't mean that it would be bad to always group the Connections from one Listener (i.e., group them among themselves, but maybe group them with others too). Besides, I think that wouldn't be the listener deciding anyway. Just to understand this, are you saying \"no, see my argument for case 1\" ? Then we're just discussing case 1, my answer is above. Otherwise I'd say: why not? yes ... but what do you mean with ? We have , just the semantics are not what I suggested. Regarding , I think that's a good idea - my own proposal was me overdoing \"keep it simple, let's not add features\". Nobody needs to use this unless they want to.\nNAME I guess the answer really depends on what a connection group is for you. I always saw connection groups as an abstraction for multi-streaming connections (or their emulation) \u2013 therefore I see no use for grouping connections from multiple listeners. My opinion is auto-grouping for multi-streaming is fine, otherwise it is not. really falls out of 1 I want your semantics, but call it\nNAME Me too. I can imagine other things ( this paper of ours comes to mind: URL ), but generally I agree and don't think we should complicate the API for this. But I'm getting confused: you wrote \"you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener\". That sounded to me like you want to support multiple streams, but from several listeners?! 
ACK Agreed!\nNAME let me elaborate on this\u2026 You might want all streams of a multi-streaming connection grouped, e.g., all streams within a QUIC connection You might not want all multi-streaming connections (e.g., QUIC connections) received through the same Listener object grouped You definitely do not want all all streams of all QUIC connection received through one listener grouped (except if you have hierarchical groups that allows you to sort that out the cases above)\nI agree with all this. Anyway I think it's a moot point: whether they're grouped or not is not up to the listener to decide? So grouping needs to be something that can be queried.\nOk \u2013 fair enough \u2013 but I still think the Listener is the right place to configure whether connections that come out of it should be grouped by default or not.\nNot unless we have a concrete idea on what to do with this information inside the transport system. => I think no text update is needed as a result of this issue and lean towards closing it...\nI don't think we really need a text update for the grouping, but for the original question. To address the handling of incoming streams in multi-streaming connections, I would like to have the solution NAME outlined reflected in the API.\nOne question on this, for NAME the description of your Listener object being able to accept multiple events, is that how it works in general or was that specif for a multi-streaming scenario? As specified I am not sure that our Listen action does not allow multiple ConnectionReceived events? The text says\nNAME The listener object for all cases (both multi streaming and receiving brand-new transport connections) allows multiple events. I think the existing text is a bit ambiguous, but it seems that \"listening continues\" implies that the event can certainly fire again.\nRevisit after is done\nRegarding this text: This seems to imply that the Preconnection object, which may be full of many carefully tuned settings that could be useful for another Connection, is no longer usable. While I understand that the specific Preconnection cannot change for a Connection, this is inconvenient to the application. I see two good options to solve this: Make a deep copy of the Preconnection upon Initiate(), and say that a Preconnection may be used or changed but changes can no longer influence the Connection (this is what Network.framework currently does) Allow a URL() method to let the applications do the deep copy themselves when and if they want to re-use.\n+1 to reusing the preconnection. Do you even need a deep copy. The connection is a new object that should be independent of the preconnection anyway, no? Another thought: Are there actually ways to get different connections out of the preconenction. E.g. you do racing and end up with two open connections and (suddenly) want to use both...?\nI'd like to see this as well. To me it makes most sense to allow calling Initiate() on the same Preconnection multiple times to get multiple Connections, but not allow to change this Preconnection anymore after the first Initiate. This way it would be similar to Listen() and Rendezvous(), where we have a Preconnection that yields multiple Connections, and that cannot be changed anymore after Listen() or Rendezvous(). So here we already have a binding between the Preconnection and the Connections that come out of it. 
If the application wants to change the Preconnection before re-using it, I'd be in favor of letting it explicitly clone the Preconnection - not sure if we actually want to call it clone() though, because we already have Connection.Clone(), which creates a Connection Group, a completely different thing.\nWhy do you think it should not be allowed that a preconnection can be changed after the first initiate? If you don't change it you should get the same kind of connection, if you change it you might get something different. Your choice!\nI wanted the API contract to be similar for different forms of Connection establishment. But I may be overthinking this.\nFirst, I like the idea of reusing Preconnections. I am not sure whether we should define now whether they should be changeable after the first initiation. There might be kinds of Preconnections where this is a good idea, there might be others where it is not. I anticipate one can realize connection pools as a sub-class of Preconnection. In this case, in such cases, it should not be possible to change a Preconnection.\nSo the difference is if you think you morph a preconnection into a connection, or if the preconnection create the connection. i think the first think does not make sense because their are two completely different things; they have not actions or events in common.\nNAME The morphing one has its beauty for modeling TCP fastopen / 0-RTT data.\nWhether it makes sense to re-use a perhaps depends on how fully specified it is? Reusing a completely specified , that will just re-open an identical 5-tuple as an existing might not make sense. Reusing a partially specified , e.g., with the source port left as a wildcard to new connections can open without conflicts, seems reasonable.\nNAME right, using the exact same set of fully-specified properties isn't going to work. But having a partially specified one and copying it, or modifying one that was used (by switching out the Remote, etc) would be useful.\nThis might work better the other way -- on , the Preconnection corresponding to the initiated/initiating connection gets deep-copied into the new connection, leaving the existing Preconnection in the state that it was. That deep copy, inside the connection, cannot be reused, but the existing one could be. (This probably only works for connections on which you . turns the preconnection into a listener from which multiple , each of which will have their own deep copy of the listener, will spawn. might be even weirder, but there's a potentially bigger win to allowing reuse, but I don't quite have my head around how it would work)\nSo i think there is actually not that much too deep copy because a preconception is not a super class of a connection. At the moment when you call initiate or rendezvous the preconception should create a connection and store all information regarding this connection in the connection object which is later returned. The only thing that needs \"copying\" is the properties but you need to create a new object anyway because preconnection properties have preferences and connection properties not but have to be marked as mutable or unmutable. These two things are also not the same. So the only that that's left to copy maybe are the endpoints. 
Not sure if the preconnection and connection both have a copy of the endpoints or if an endpoint only exists once and both have a reference (which means if the endpoint gets somehow updated that's relevant for both) or maybe even the connection does not need a reference to the endpoint anymore...?\nNAME right \u2013 but I don't know a way of specifying \"a is consumed if fully specified, else may be reusedin any current programming language. Maybe tracking lifetimes so closely doesn't matter though, and we can just write this in the draft.\nI have no problem with an Interface where some Preconnections can be reused and others can not. That makes a pretty nice case for -- preconnection reuses itself for multiple incoming connections. For ` I see this API-wise as some kind of listen called at both sides.\nSide meeting says we can reuse the initiate call. Implementation should specify that a deep copy is made on Initiate() or Listen()\nWill be trivial to fix after is done\nSorry to reopen an old issue, but when reading the current API draft, I noticed that it still says that reusing the Preconnection is possible for Listeners, but not possible for Initiate() and Rendezvous(). For example, see Section 6.1: \"The Initiate() Action consumes the Preconnection [\u2026]\" From what I read in this issue, I think Preconnections should be allowed to be reused for Initiate() and Rendezvous() as well (possibly with a note on deep-copy).\nThis looks just right to me, thanks a lot for doing this! In particular having read again, I believe that NAME should definitely have the last word on this one. So I'll go with whatever NAME says.", "new_text": "6.4. Entangled Connections can be created using the Clone Action: Calling Clone on a Connection yields a group of two Connections: the parent Connection on which Clone was called, and the resulting cloned"} {"id": "q-en-api-drafts-17379de1d39ad7e7f0bf4014940c8e3c7bce399b0f986971fed0010b73ea0df8", "old_text": "on. Connections in a Connection Group share all Protocol Properties that are not applicable to a Message. Changing one of these Protocol Properties on one Connection in the group changes it for all others. Per-Message Protocol Properties, however, are not entangled. For example, changing \"Timeout for", "comments": "This adds an explicit Listener object to the API ( to clarify behaviour of multi-streaming protocols ( clarify behaviour and enable re-use of Preconnections (\nWe talk about listen objects in architecture and implementation, but not in the API. Changes needed: Let return a Listener Move the the Listener Allow Preconnection to be used multiple times.\nThis issue popped up while discussing and seem to be a wired QUIC multi-streaming issue, possible affecting SCTP too): When a multi-streaming protocol like QUIC allows both ends to open new streams (e.g. by cloning the QUIC connection representing the stream), the both side needs a way to get a ConnectionReceived event for a new stream. That might result in the following weird pattern: I think this needs to be documented somehow and we may need to add a Selection Property do turn off this behaviour.\nCalling is an active open. Once the connection is open, the peer can open new streams, but those would surely be events on the not on the .\nNAME I am not sure with this one: assuming you have a bunch of ? The first one? What happens if I close the first one?\nThat's a good point, and I'm not sure I know the answer, but I don't think the should fire the events. 
A is a potential connection, yet to be instantiated, but these connections come from an already active connection. I've never been entirely comfortable with cloning a connection as the multi-streaming abstraction. This is perhaps the first scenario where it struggles though.\nI do not understand the pattern in the example. You do not get a ConnectionReceived after an active open, you need a Listen?\nNAME Your assumption is true for TCP, but not for multi-streaming transports like QUIC. With QUIC, you can indeed get additional TAPS connections (QUIC streams) based on an existing, initiated connections. The question is how to handle this. NAME A possible way out would be making itself is returned by an event then.\nNAME the TAPS interface is the same weather you have TCP or QUIC underneath. For TAPS an active open generates a READY event not a ConnectionReceived? I do not think your example is a valid TAPS sequence.\nNAME the Ready event is not shown in the example (it would be within the first ). The problem I raised is how to model the addition of streams to a multi-streaming connection. In case of QUIC, additional streams can be initiated by the server/listening side of the QUIC connection. If we model QUIC streams as TAPS connections, this results in events at the client (and server) side for each new QUIC stream.\nNAME agree we could add , although it's a significant shift in the model. If we did that, we might also want to revisit as the way of creating a new stream on a connection, in favour of a method on . NAME did you implement this with QUIC? How does your implementation work?\nI also see a problem with the communication pattern here. To me, modelling a stream as a TAPS connection means that for every new connection that a peer creates (be it a stream of an existing connection or not), I'll have to listen. So: if I expect 5 incoming connections, which may e.g. be 1 connection with 1 stream and 4 later-added streams below the API, I'll have to listen 5 times. What this also means is that the grouping notion should be a readable property - I believe that we currently only describe it as something that's actively created via clone(). Regarding code, no idea how NAME has done this, but I believe that the NEAT implementation is like I describe here. For more details on this specific aspect, see: URL\nNAME We create a Listener from a Connection to handle inbound streams associated with the same parent transport connection. NAME Currently, our Listener object doesn't require calling Listen() for each new connection\u2014you can get the event invoked many times, so it doesn't require knowing how many streams may be opened by the peer. In order to get back pressure, we're talking about having a window for how many new inbound connections will be allowed at a time.\nOk, that also sounds perfectly reasonable. Actually it seems that this (receiving multiple ConnectionReceived events on one Preconnection) is already supported by our current text, sorry for missing this! So... what is the problem we're discussing here - what do we need to change in the text? I think we need only the following: do we have a use case for one Listener getting multiple Connections without having these Connections grouped? (I guess not? Grouping is not a promise of multi-streaming.) If so, we need the ability to query whether connections are grouped (currently grouping is only created actively via \"clone\"). 
Otherwise, we can just statically say that all Connections that are created from the same Preconnections are grouped. some kind of ConnectionReceived event for the Rendezvous case, with text saying that all Connections created off the same PreConnection are grouped. (right?) a way to provide back pressure. Here, we already have PreConnection.Stop(). In the interest of keeping things simple, do we think that intelligently switching between a listening and a non-listening mode would be enough? If so, maybe we should have a call similar to Stop which stops listening, but does not dispose of the PreConnection, such that Listen could be called again. Maybe have an \"End\" call which stops and disposes, and let Stop only stop listening (because \"Stop\", to me, indicates only stopping but nothing else).\nNAME This solution sounds very reasonable \u2013 I guess we should do it the same way for TAPS and add that as Text to the API NAME Some comments: It depends \u2013 if writing a server, you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener. So the behaviour wether to group should be a property of the Listener. I would not group connections received through the same Rendezvous (analogous to server-style listener). We might want to add a property to the listener\nNAME I agree, but: just because groups may encompass more than the Connections from one Listener doesn't mean that it would be bad to always group the Connections from one Listener (i.e., group them among themselves, but maybe group them with others too). Besides, I think that wouldn't be the listener deciding anyway. Just to understand this, are you saying \"no, see my argument for case 1\" ? Then we're just discussing case 1, my answer is above. Otherwise I'd say: why not? yes ... but what do you mean with ? We have , just the semantics are not what I suggested. Regarding , I think that's a good idea - my own proposal was me overdoing \"keep it simple, let's not add features\". Nobody needs to use this unless they want to.\nNAME I guess the answer really depends on what a connection group is for you. I always saw connection groups as an abstraction for multi-streaming connections (or their emulation) \u2013 therefore I see no use for grouping connections from multiple listeners. My opinion is auto-grouping for multi-streaming is fine, otherwise it is not. really falls out of 1 I want your semantics, but call it\nNAME Me too. I can imagine other things ( this paper of ours comes to mind: URL ), but generally I agree and don't think we should complicate the API for this. But I'm getting confused: you wrote \"you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener\". That sounded to me like you want to support multiple streams, but from several listeners?! ACK Agreed!\nNAME let me elaborate on this\u2026 You might want all streams of a multi-streaming connection grouped, e.g., all streams within a QUIC connection You might not want all multi-streaming connections (e.g., QUIC connections) received through the same Listener object grouped You definitely do not want all all streams of all QUIC connection received through one listener grouped (except if you have hierarchical groups that allows you to sort that out the cases above)\nI agree with all this. Anyway I think it's a moot point: whether they're grouped or not is not up to the listener to decide? 
So grouping needs to be something that can be queried.\nOk \u2013 fair enough \u2013 but I still think the Listener is the right place to configure whether connections that come out of it should be grouped by default or not.\nNot unless we have a concrete idea on what to do with this information inside the transport system. => I think no text update is needed as a result of this issue and lean towards closing it...\nI don't think we really need a text update for the grouping, but for the original question. To address the handling of incoming streams in multi-streaming connections, I would like to have the solution NAME outlined reflected in the API.\nOne question on this, for NAME the description of your Listener object being able to accept multiple events, is that how it works in general or was that specif for a multi-streaming scenario? As specified I am not sure that our Listen action does not allow multiple ConnectionReceived events? The text says\nNAME The listener object for all cases (both multi streaming and receiving brand-new transport connections) allows multiple events. I think the existing text is a bit ambiguous, but it seems that \"listening continues\" implies that the event can certainly fire again.\nRevisit after is done\nRegarding this text: This seems to imply that the Preconnection object, which may be full of many carefully tuned settings that could be useful for another Connection, is no longer usable. While I understand that the specific Preconnection cannot change for a Connection, this is inconvenient to the application. I see two good options to solve this: Make a deep copy of the Preconnection upon Initiate(), and say that a Preconnection may be used or changed but changes can no longer influence the Connection (this is what Network.framework currently does) Allow a URL() method to let the applications do the deep copy themselves when and if they want to re-use.\n+1 to reusing the preconnection. Do you even need a deep copy. The connection is a new object that should be independent of the preconnection anyway, no? Another thought: Are there actually ways to get different connections out of the preconenction. E.g. you do racing and end up with two open connections and (suddenly) want to use both...?\nI'd like to see this as well. To me it makes most sense to allow calling Initiate() on the same Preconnection multiple times to get multiple Connections, but not allow to change this Preconnection anymore after the first Initiate. This way it would be similar to Listen() and Rendezvous(), where we have a Preconnection that yields multiple Connections, and that cannot be changed anymore after Listen() or Rendezvous(). So here we already have a binding between the Preconnection and the Connections that come out of it. If the application wants to change the Preconnection before re-using it, I'd be in favor of letting it explicitly clone the Preconnection - not sure if we actually want to call it clone() though, because we already have Connection.Clone(), which creates a Connection Group, a completely different thing.\nWhy do you think it should not be allowed that a preconnection can be changed after the first initiate? If you don't change it you should get the same kind of connection, if you change it you might get something different. Your choice!\nI wanted the API contract to be similar for different forms of Connection establishment. But I may be overthinking this.\nFirst, I like the idea of reusing Preconnections. 
I am not sure whether we should define now whether they should be changeable after the first initiation. There might be kinds of Preconnections where this is a good idea, there might be others where it is not. I anticipate one can realize connection pools as a sub-class of Preconnection. In this case, in such cases, it should not be possible to change a Preconnection.\nSo the difference is if you think you morph a preconnection into a connection, or if the preconnection create the connection. i think the first think does not make sense because their are two completely different things; they have not actions or events in common.\nNAME The morphing one has its beauty for modeling TCP fastopen / 0-RTT data.\nWhether it makes sense to re-use a perhaps depends on how fully specified it is? Reusing a completely specified , that will just re-open an identical 5-tuple as an existing might not make sense. Reusing a partially specified , e.g., with the source port left as a wildcard to new connections can open without conflicts, seems reasonable.\nNAME right, using the exact same set of fully-specified properties isn't going to work. But having a partially specified one and copying it, or modifying one that was used (by switching out the Remote, etc) would be useful.\nThis might work better the other way -- on , the Preconnection corresponding to the initiated/initiating connection gets deep-copied into the new connection, leaving the existing Preconnection in the state that it was. That deep copy, inside the connection, cannot be reused, but the existing one could be. (This probably only works for connections on which you . turns the preconnection into a listener from which multiple , each of which will have their own deep copy of the listener, will spawn. might be even weirder, but there's a potentially bigger win to allowing reuse, but I don't quite have my head around how it would work)\nSo i think there is actually not that much too deep copy because a preconception is not a super class of a connection. At the moment when you call initiate or rendezvous the preconception should create a connection and store all information regarding this connection in the connection object which is later returned. The only thing that needs \"copying\" is the properties but you need to create a new object anyway because preconnection properties have preferences and connection properties not but have to be marked as mutable or unmutable. These two things are also not the same. So the only that that's left to copy maybe are the endpoints. Not sure if the preconnection and connection both have a copy of the endpoints or if an endpoint only exists once and both have a reference (which means if the endpoint gets somehow updated that's relevant for both) or maybe even the connection does not need a reference to the endpoint anymore...?\nNAME right \u2013 but I don't know a way of specifying \"a is consumed if fully specified, else may be reusedin any current programming language. Maybe tracking lifetimes so closely doesn't matter though, and we can just write this in the draft.\nI have no problem with an Interface where some Preconnections can be reused and others can not. That makes a pretty nice case for -- preconnection reuses itself for multiple incoming connections. For ` I see this API-wise as some kind of listen called at both sides.\nSide meeting says we can reuse the initiate call. 
Implementation should specify that a deep copy is made on Initiate() or Listen()\nWill be trivial to fix after is done\nSorry to reopen an old issue, but when reading the current API draft, I noticed that it still says that reusing the Preconnection is possible for Listeners, but not possible for Initiate() and Rendezvous(). For example, see Section 6.1: \"The Initiate() Action consumes the Preconnection [\u2026]\" From what I read in this issue, I think Preconnections should be allowed to be reused for Initiate() and Rendezvous() as well (possibly with a note on deep-copy).\nThis looks just right to me, thanks a lot for doing this! In particular having read again, I believe that NAME should definitely have the last word on this one. So I'll go with whatever NAME says.", "new_text": "on. Connections in a Connection Group share all Protocol Properties that are not applicable to a Message. In addition, incoming entangled Connections can be received by creating a Listener on an existing connection: Changing one of these Protocol Properties on one Connection in the group changes it for all others. Per-Message Protocol Properties, however, are not entangled. For example, changing \"Timeout for"} {"id": "q-en-api-drafts-bcab32f91bfffa1ed513f3bc70d71a328ee2b7f5a1fcb97b32569280da02c000", "old_text": "9. 9.1. Connection lifetime for TCP translates fairly simply into the the abstraction presented to an application. When the TCP three-way handshake is complete, its layer of the Protocol Stack can be considered Ready (established). This event will cause racing of Protocol Stack options to complete if TCP is the top-level protocol, at which point the application can be notified that the Connection is Ready to send and receive. If the application sends a Close, that can translate to a graceful termination of the TCP connection, which is performed by sending a FIN to the remote endpoint. If the application sends an Abort, then the TCP state can be closed abruptly, leading to a RST being sent to the peer. Without a layer of framing (a top-level protocol in the established Protocol Stack that preserves message boundaries, or an application- supplied deframer) on top of TCP, the receiver side of the transport system implementation can only treat the incoming stream of bytes as a single Message, terminated by a FIN when the Remote Endpoint closes the Connection. 9.2. UDP as a direct transport does not provide any handshake or connectivity state, so the notion of the transport protocol becoming Ready or established is degenerate. Once the system has validated that there is a route on which to send and receive UDP datagrams, the protocol is considered Ready. Similarly, a Close or Abort has no meaning to the on-the-wire protocol, but simply leads to the local state being torn down. When sending and receiving messages over UDP, each Message should correspond to a single UDP datagram. The Message can contain metadata about the packet, such as the ECN bits applied to the packet. 9.3. To support sender-side stream schedulers (which are implemented on the sender side), a receiver-side Transport System should always support message interleaving RFC8260. SCTP messages can be very large. To allow the reception of large messages in pieces, a \"partial flag\" can be used to inform a (native SCTP) receiving application that a message is incomplete. 
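The entanglement of Protocol Properties within a Connection Group can be pictured with the following Python sketch, in which cloned Connections share one property dictionary so that a change made through any member is visible to all. The code is a hypothetical illustration of the semantics described in the new text, not an implementation.

```python
# Hypothetical sketch: Connections cloned into a group share their (non
# per-Message) Protocol Properties by referencing one common dictionary,
# so changing a property on any member changes it for all of them.

class Connection:
    def __init__(self, shared_properties=None):
        # Group-wide Protocol Properties; per-Message properties would be
        # kept separately and would not be entangled.
        self.protocol_properties = (shared_properties
                                    if shared_properties is not None else {})

    def clone(self):
        # The clone references the same property dictionary (entangled).
        return Connection(self.protocol_properties)

    def set_property(self, name, value):
        self.protocol_properties[name] = value

if __name__ == "__main__":
    parent = Connection()
    child = parent.clone()
    child.set_property("timeout_for_aborting", 30)
    print(parent.protocol_properties)   # {'timeout_for_aborting': 30} -- shared
```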
After receiving the \"partial flag\", this application would know that the next receive calls will only deliver remaining parts of the same message (i.e., no messages or partial messages will arrive on other streams until the message is complete) (see Section 8.1.20 in RFC6458). The \"partial flag\" can therefore facilitate the implementation of the receiver buffer in the receiving application, at the cost of limiting multiplexing and temporarily creating head- of-line blocking delay at the receiver. When a Transport System transfers a Message, it seems natural to map the Message object to SCTP messages in order to support properties such as \"Ordered\" or \"Lifetime\" (which maps onto partially reliable delivery with a SCTP_PR_SCTP_TTL policy RFC6458). However, since multiplexing of Connections onto SCTP streams may happen, and would be hidden from the application, the Transport System requires a per- stream receiver buffer anyway, so this potential benefit is lost and the \"partial flag\" becomes unnecessary for the system. The problem of long messages either requiring large receiver-side buffers or getting in the way of multiplexing is addressed by message interleaving RFC8260, which is yet another reason why a receivers- side transport system supporting SCTP should implement this mechanism. 9.4. The mapping of a TLS stream abstraction into the application is equivalent to the contract provided by TCP (see tcp). The Ready state should be determined by the completion of the TLS handshake, which involves potentially several more round trips beyond the TCP handshake. The application should not be notified that the Connection is Ready until TLS is established. 9.5.", "comments": "Starting to address This is an early review, to check if we like the structure. I'd also like help filling out the SCTP section!\nI can also help with SCTP, which I would do in the same style that I suggest above (for TCP, in my example of \"Abort\") - also later.\nWe've updated Implementation mapping details for most protocols with , but we didn't tackle SCTP yet.\nI'm fine with this being assigned to me. However, it's more than just writing the SCTP text. Here's the relevant part of what I wrote in PR : I don't suggest to delete much of the text that's there but definitely amend it with the RFC reference at least for what's covered in RFC 8303. There's a very direct line from this type of text in RFC 8303 to the original specs, so I do believe that should be helpful. That's (obviously) post-Montreal though!\nNAME please send a PR along these lines.\nLGTM modulo mwelzl changes.I think this is a good step forward. This said, I think we should describe things focusing on the transport's API call to be made (in line with the spec - that's why we have RFCs 8303 and 8304) rather than protocol internals. E.g., rather than saying that Abort on a TCP connection transmits a RST, I think we should simply say that Abort calls ABORT.TCP [RFC 8303, Section 4.1]. I can take care of these things, later. I'd still like to focus more on the API doc at this time.", "new_text": "9. Each protocol that can run as part of a Transport Services implementation defines both its API mapping as well as implementation details. API mappings for a protocol apply most to Connections in which the given protocol is the \"top\" of the Protocol Stack. For example, the mapping of the \"Send\" function for TCP applies to Connections in which the application directly sends over TCP. If HTTP/2 is used on top of TCP, the HTTP/2 mappings take precendence. 
Each protocol has a notion of Connectedness. Possible values for Connectedness are: Unconnected. Unconnected protocols do not establish explicit state between endpoints, and do not perform a handshake during Connection establishment. Connected. Connected protocols establish state between endpoints, and perform a handshake during Connection establishment. The handshake may be 0-RTT to send data or resume a session, but bidirectional traffic is required to confirm connectedness. Multiplexing Connected. Multiplexing Connected protocols share properties with Connected protocols, but also explictly support opening multiple application-level flows. This means that they can support cloning new Connection objects without a new explicit handshake. Protocols also define a notion of Data Unit. Possible values for Data Unit are: Byte-stream. Byte-stream protocols do not define any Message boundaries of their own apart from the end of a stream in each direction. Datagram. Datagram protocols define Message boundaries at the same level of transmission, such that only complete (not partial) Messages are supported. Message. Message protocols support Message boundaries that can be sent and received either as complete or partial Messages. Maximum Message lengths can be defined, and Messages can be partially reliable. 9.1. Connectedness: Connected Data Unit: Byte-stream API mappings for TCP are as follows: TCP connections between two hosts map directly to Connection objects. Calling \"Initiate\" on a TCP Connection causes it to reserve a local port, and send a SYN to the Remote Endpoint. Early idempotent data is sent on a TCP Connection in the SYN, as TCP Fast Open data. A TCP Connection is ready once the three-way handshake is complete. TCP can throw various errors during connection setup. Specifically, it is important to handle a RST being sent by the peer during the handshake. Once established, TCP throws errors whenever the connection is disconnected, such as due to receive a RST from the peer; or hitting a TCP retransmission timeout. Calling \"Listen\" for TCP binds a local port and prepares it to receive inbound SYN packets from peers. TCP Listeners will deliver new connections once they have replied to an inbound SYN with a SYN-ACK. Calling \"Clone\" on a TCP Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent. TCP does not on its own preserve Message boundaries. Calling \"Send\" on a TCP connection lays out the bytes on the TCP send stream without any other delineation. Any Message marked as Final will cause TCP to send a FIN once the Message has been completely written. TCP delivers a stream of bytes without any Message delineation. All data delivered in the \"Received\" or \"ReceivedPartial\" event will be part of a single stream-wide Message that is marked Final (unless a MessageFramer is used). EndOfMessage will be delivered when the TCP Connection has received a FIN from the peer. Calling \"Close\" on a TCP Connection indicates that the Connection should be gracefully closed by sending a FIN to the peer and waiting for a FIN-ACK before delivering the \"Closed\" event. Calling \"Abort\" on a TCP Connection indicates that the Connection should be immediately closed by sending a RST to the peer. 9.2. Connectedness: Unconnected Data Unit: Datagram API mappings for UDP are as follows: UDP connections represent a pair of specific IP addresses and ports on two hosts. 
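As a rough illustration of the Connectedness and Data Unit classification introduced above, the following Python sketch records the values the text gives for TCP and UDP; the enum and table names are invented for this example and carry no normative meaning.

```python
# Hypothetical sketch: the Connectedness / Data Unit classification as simple
# Python enums, populated with the values stated in the text for TCP and UDP.

from enum import Enum

class Connectedness(Enum):
    UNCONNECTED = "Unconnected"
    CONNECTED = "Connected"
    MULTIPLEXING_CONNECTED = "Multiplexing Connected"

class DataUnit(Enum):
    BYTE_STREAM = "Byte-stream"
    DATAGRAM = "Datagram"
    MESSAGE = "Message"

PROTOCOL_MAPPINGS = {
    "TCP": (Connectedness.CONNECTED, DataUnit.BYTE_STREAM),
    "UDP": (Connectedness.UNCONNECTED, DataUnit.DATAGRAM),
}

if __name__ == "__main__":
    for name, (conn, unit) in PROTOCOL_MAPPINGS.items():
        print(f"{name}: Connectedness={conn.value}, Data Unit={unit.value}")
```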
Calling \"Initiate\" on a UDP Connection causes it to reserve a local port, but does not generate any traffic. Early data on a UDP Connection does not have any special meaning. The data is sent whenever the Connection is Ready. A UDP Connection is ready once the system has reserved a local port and has a path to send to the Remote Endpoint. UDP Connections can only generate errors on initiation due to port conflicts on the local system. Once in use, UDP throws errors upon receiving ICMP notifications indicating failures in the network. Calling \"Listen\" for UDP binds a local port and prepares it to receive inbound UDP datagrams from peers. UDP Listeners will deliver new connections once they have received traffic from a new Remote Endpoint. Calling \"Clone\" on a UDP Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent. Calling \"Send\" on a UDP connection sends the data as the payload of a complete UDP datagram. Marking Messages as Final does not change anything in the datagram's contents. UDP only delivers complete Messages to \"Received\", each of which represents a single datagram received in a UDP packet. Calling \"Close\" on a UDP Connection releases the local port reservation. Calling \"Abort\" on a UDP Connection is identical to calling \"Close\". 9.3. The mapping of a TLS stream abstraction into the application is equivalent to the contract provided by TCP (see tcp), and builds upon many of the actions of TCP connections. Connectedness: Connected Data Unit: Byte-stream Connection objects represent a single TLS connection running over a TCP connection between two hosts. Calling \"Initiate\" on a TLS Connection causes it to first initiate a TCP connection. Once the TCP protocol is Ready, the TLS handshake will be performed as a client (starting by sending a \"client_hello\", and so on). Early idempotent data is supported by TLS 1.3, and sends encrypted application data in the first TLS message when performing session resumption. For older versions of TLS, or if a session is not being resumed, the initial data will be delayed until the TLS handshake is complete. TCP Fast Option can also be enabled automatically. A TLS Connection is ready once the underlying TCP connection is Ready, and TLS handshake is also complete and keys have been established to encrypt application data. In addition to TCP initiation errors, TLS can generate errors during its handshake. Examples of error include a failure of the peer to successfully authenticate, the peer rejecting the local authentication, or a failure to match versions or algorithms. TLS connections will generate TCP errors, or errors due to failures to rekey or decrypt received messages. Calling \"Listen\" for TLS listens on TCP, and sets up received connections to perform server-side TLS handshakes. TLS Listeners will deliver new connections once they have successfully completed both TCP and TLS handshakes. As with TCP, calling \"Clone\" on a TLS Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent. Like TCP, TLS does not preserve message boundaries. Although application data is framed natively in TLS, there is not a general guarantee that these TLS messages represent semantically meaningful application stream boundaries. Rather, sending data on a TLS Connection only guarantees that the application data will be transmitted in an encrypted form. 
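A minimal Python sketch of the readiness rule described for TLS over TCP, assuming hypothetical handshake-completion callbacks: the Connection only reports Ready once both the TCP handshake and the TLS handshake have completed, and failures at either stage surface as establishment errors rather than a Ready event.

```python
# Hypothetical sketch: a TLS-over-TCP Connection becomes Ready only after
# both the TCP handshake and the TLS handshake have completed.

class TLSOverTCPConnection:
    def __init__(self):
        self.tcp_established = False
        self.tls_established = False

    def on_tcp_handshake_complete(self):
        self.tcp_established = True

    def on_tls_handshake_complete(self):
        if not self.tcp_established:
            raise RuntimeError("TLS cannot complete before TCP is established")
        self.tls_established = True

    @property
    def ready(self):
        # Ready is only signalled to the application once both layers are up.
        return self.tcp_established and self.tls_established

if __name__ == "__main__":
    conn = TLSOverTCPConnection()
    conn.on_tcp_handshake_complete()
    print("ready after TCP only?", conn.ready)   # False
    conn.on_tls_handshake_complete()
    print("ready after TLS too?", conn.ready)    # True
```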
Marking Messages as Final causes a \"close_notify\" to be generated once the data has been written. Like TCP, TLS delivers a stream of bytes without any Message delineation. The data is decrypted prior to being delivered to the application. If a \"close_notify\" is received, the stream-wide Message will be delivered with EndOfMessage set. Calling \"Close\" on a TLS Connection indicates that the Connection should be gracefully closed by sending a \"close_notify\" to the peer and waiting for a corresponding \"close_notify\" before delivering the \"Closed\" event. Calling \"Abort\" on a TLS Connection indicates that the Connection should be immediately closed by sending a \"close_notify\", optionally preceded by \"user_canceled\", to the peer. Implementations do not need to wait to receive \"close_notify\" before delivering the \"Closed\" event. 9.4. DTLS follows the same behavior as TLS (tls), with the notable exception of not inheriting behavior directly from TCP. Differences from TLS are detailed below, and all cases not explicitly mentioned should be considered the same as TLS. Connectedness: Connected Data Unit: Datagram Connection objects represent a single DTLS connection running over a set of UDP ports between two hosts. Calling \"Initiate\" on a DTLS Connection causes it to reserve a UDP local port, and begin sending handshake messages to the peer over UDP. These messages are reliable, and will be automatically retransmitted. A DTLS Connection is ready once the TLS handshake is complete and keys have been established to encrypt application data. Sending over DTLS does preserve message boundaries in the same way that UDP datagrams do. Marking a Message as Final does send a \"close_notify\" like TLS. Receiving over DTLS delivers one decrypted Message for each received DTLS datagram. If a \"close_notify\" is received, a Message will be delivered that is marked as Final. 9.5."} {"id": "q-en-api-drafts-bcab32f91bfffa1ed513f3bc70d71a328ee2b7f5a1fcb97b32569280da02c000", "old_text": "Message representing the body, and the Headers being provided in Message metadata. 9.6. QUIC provides a multi-streaming interface to an encrypted transport.", "comments": "Starting to address This is an early review, to check if we like the structure. I'd also like help filling out the SCTP section!\nI can also help with SCTP, which I would do in the same style that I suggest above (for TCP, in my example of \"Abort\") - also later.\nWe've updated Implementation mapping details for most protocols with , but we didn't tackle SCTP yet.\nI'm fine with this being assigned to me. However, it's more than just writing the SCTP text. Here's the relevant part of what I wrote in PR : I don't suggest to delete much of the text that's there but definitely amend it with the RFC reference at least for what's covered in RFC 8303. There's a very direct line from this type of text in RFC 8303 to the original specs, so I do believe that should be helpful. That's (obviously) post-Montreal though!\nNAME please send a PR along these lines.\nLGTM modulo mwelzl changes.I think this is a good step forward. This said, I think we should describe things focusing on the transport's API call to be made (in line with the spec - that's why we have RFCs 8303 and 8304) rather than protocol internals. E.g., rather than saying that Abort on a TCP connection transmits a RST, I think we should simply say that Abort calls ABORT.TCP [RFC 8303, Section 4.1]. I can take care of these things, later.
I'd still like to focus more on the API doc at this time.", "new_text": "Message representing the body, and the Headers being provided in Message metadata. Connectedness: Multiplexing Connected Data Unit: Message Connection objects represent a flow of HTTP messages between a client and a server, which may be an HTTP/1.1 connection over TCP, or a single stream in an HTTP/2 connection. Calling \"Initiate\" on an HTTP connection initiates a TCP or TLS connection as a client. Calling \"Clone\" on an HTTP Connection opens a new stream on an existing HTTP/2 connection when possible. If the underlying version does not support multiplexed streams, calling \"Clone\" simply creates a new parallel connection. When an application sends an HTTP Message, it is expected to provide HTTP header values as a MessageContext in a canonical form, along with any associated HTTP message body as the Message data. The HTTP header values are encoded in the specific version format upon sending. HTTP Connections deliver Messages in which HTTP header values are attached to MessageContexts, and HTTP bodies are carried in the Message data. Calling \"Close\" on an HTTP Connection will only close the underlying TLS or TCP connection if the HTTP version does not support multiplexing. For HTTP/2, for example, closing the connection only closes a specific stream. 9.6. QUIC provides a multi-streaming interface to an encrypted transport."} {"id": "q-en-api-drafts-bcab32f91bfffa1ed513f3bc70d71a328ee2b7f5a1fcb97b32569280da02c000", "old_text": "Closing a single QUIC stream, presented to the application as a Connection, does not imply closing the underlying QUIC connection itself. Rather, the implementation may choose to close the QUIC connection once all streams have been closed (possibly after some timeout), or after an individual stream Connection sends an Abort. Messages over a direct QUIC stream should be represented similarly to the TCP stream (one Message per direction, see tcp), unless a framing mapping is used on top of QUIC. 9.7.", "comments": "Starting to address This is an early review, to check if we like the structure. I'd also like help filling out the SCTP section!\nI can also help with SCTP, which I would do in the same style that I suggest above (for TCP, in my example of \"Abort\") - also later.\nWe've updated Implementation mapping details for most protocols with , but we didn't tackle SCTP yet.\nI'm fine with this being assigned to me. However, it's more than just writing the SCTP text. Here's the relevant part of what I wrote in PR : I don't suggest to delete much of the text that's there but definitely amend it with the RFC reference at least for what's covered in RFC 8303. There's a very direct line from this type of text in RFC 8303 to the original specs, so I do believe that should be helpful. That's (obviously) post-Montreal though!\nNAME please send a PR along these lines.\nLGTM modulo mwelzl changes.I think this is a good step forward. This said, I think we should describe things focusing on the transport's API call to be made (in line with the spec - that's why we have RFCs 8303 and 8304) rather than protocol internals. E.g., rather than saying that Abort on a TCP connection transmits a RST, I think we should simply say that Abort calls ABORT.TCP [RFC 8303, Section 4.1]. I can take care of these things, later. I'd still like to focus more on the API doc at this time.", "new_text": "Closing a single QUIC stream, presented to the application as a Connection, does not imply closing the underlying QUIC connection itself.
Rather, the implementation may choose to close the QUIC connection once all streams have been closed (often after some timeout), or after an individual stream Connection sends an Abort. Connectedness: Multiplexing Connected Data Unit: Stream Connection objects represent a single QUIC stream on a QUIC connection. 9.7."} {"id": "q-en-api-drafts-bcab32f91bfffa1ed513f3bc70d71a328ee2b7f5a1fcb97b32569280da02c000", "old_text": "over the streams can be represented similarly to the TCP stream (one Message per direction, see tcp). 10. RFC-EDITOR: Please remove this section before publication.", "comments": "Starting to address This is an early review, to check if we like the structure. I'd also like help filling out the SCTP section!\nI can also help with SCTP, which I would do in the same style that I suggest above (for TCP, in my example of \"Abort\") - also later.\nWe've updated Implementation mapping details for most protocols with , but we didn't tackle SCTP yet.\nI'm fine with this being assigned to me. However, it's more than just writing the SCTP text. Here's the relevant part of what I wrote in PR : I don't suggest to delete much of the text that's there but definitely amend it with the RFC reference at least for what's covered in RFC 8303. There's a very direct line from this type of text in RFC 8303 to the original specs, so I do believe that should be helpful. That's (obviously) post-Montreal though!\nNAME please send a PR along these lines.\nLGTM modulo mwelzl changes.I think this is a good step forward. This said, I think we should describe things focusing on the transport's API call to be made (in line with the spec - that's why we have RFCs 8303 and 8304) rather than protocol internals. E.g., rather than saying that Abort on a TCP connection transmits a RST, I think we should simply say that Abort calls ABORT.TCP [RFC 8303, Section 4.1]. I can take care of these things, later. I'd still like to focus more on the API doc at this time.", "new_text": "over the streams can be represented similarly to the TCP stream (one Message per direction, see tcp). Connectedness: Multiplexing Connected Data Unit: Stream Connection objects represent a single HTTP/2 stream on a HTTP/2 connection. 9.8. To support sender-side stream schedulers (which are implemented on the sender side), a receiver-side Transport System should always support message interleaving RFC8260. SCTP messages can be very large. To allow the reception of large messages in pieces, a \"partial flag\" can be used to inform a (native SCTP) receiving application that a message is incomplete. After receiving the \"partial flag\", this application would know that the next receive calls will only deliver remaining parts of the same message (i.e., no messages or partial messages will arrive on other streams until the message is complete) (see Section 8.1.20 in RFC6458). The \"partial flag\" can therefore facilitate the implementation of the receiver buffer in the receiving application, at the cost of limiting multiplexing and temporarily creating head- of-line blocking delay at the receiver. When a Transport System transfers a Message, it seems natural to map the Message object to SCTP messages in order to support properties such as \"Ordered\" or \"Lifetime\" (which maps onto partially reliable delivery with a SCTP_PR_SCTP_TTL policy RFC6458). 
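As an aside, the receiver-side buffering implied by the partial-delivery behaviour described above can be pictured with a small, purely illustrative sketch. The stream identifiers and the complete flag below stand in for what an SCTP receive call (such as the one described in RFC 6458) would report; this is not code against a real SCTP API.

from collections import defaultdict

class StreamReassembler:
    # Toy model: buffer partial deliveries per stream until a whole
    # Message can be handed to the application.
    def __init__(self):
        self._partial = defaultdict(bytearray)  # stream id -> buffered bytes

    def on_receive(self, stream_id, chunk, complete):
        buf = self._partial[stream_id]
        buf.extend(chunk)
        if not complete:
            return None                 # partial delivery: keep buffering
        del self._partial[stream_id]
        return bytes(buf)               # full Message ready for delivery

r = StreamReassembler()
assert r.on_receive(1, b"he", complete=False) is None
assert r.on_receive(1, b"llo", complete=True) == b"hello"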
However, since multiplexing of Connections onto SCTP streams may happen, and would be hidden from the application, the Transport System requires a per- stream receiver buffer anyway, so this potential benefit is lost and the \"partial flag\" becomes unnecessary for the system. The problem of long messages either requiring large receiver-side buffers or getting in the way of multiplexing is addressed by message interleaving RFC8260, which is yet another reason why a receivers- side transport system supporting SCTP should implement this mechanism. 10. RFC-EDITOR: Please remove this section before publication."} {"id": "q-en-api-drafts-d79b9627857bb59ea6b1f4d2aa1cfb77c8b71bc4e45c8f70df485acc8b14e04e", "old_text": "to force early binding when required, for example with some Network Address Translator (NAT) traversal protocols (see rendezvous). 5.2. A Preconnection Object holds properties reflecting the application's", "comments": "Maybe add the ASN source address (set source address), source filter (add remote addresses)? Maybe also add that for sending, you specify the group as remote and the socket becomes write-only? Should we require specifying direction? This would be slightly harder to use, but future proof in case someone implements reliable ASN on TAPS\u2026\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\nlgtm", "new_text": "to force early binding when required, for example with some Network Address Translator (NAT) traversal protocols (see rendezvous). Specifying a multicast group address on the Local Endpoint will indicate to the transport system that the resulting connection will be used to receive multicast messages. The Remote Endpoint can be used to filter by specific senders. This will restrict the application to establishing the Preconnection by calling Listen(). The accepted Connections are receive-only. Similarly, specifying a multicast group address on the Remote Endpoint will indicate that the resulting connection will be used to send multicast messages. 5.2. A Preconnection Object holds properties reflecting the application's"} {"id": "q-en-api-drafts-c9dbd0380f3262177e152bd0ff1f8af744343f6e541586d0b9ac9f26c1d387bb", "old_text": "Integer (non-negative with -1 as special value) full coverage This property specifies the minimum length of the section of the Message, starting from byte 0, that the application requires to be", "comments": "I'm not quite sure what I meant by \"Only full coverage is guaranteed, any other requests are advisory\". 
I assume we mean that even if a partial checksum is requested you get full coverage because requesting partial coverage and getting no checksum doesn't seem right...?\nExactly, and you made it clearer. Thanks!", "new_text": "Integer (non-negative with -1 as special value) -1 (full coverage) This property specifies the minimum length of the section of the Message, starting from byte 0, that the application requires to be"} {"id": "q-en-api-drafts-c9dbd0380f3262177e152bd0ff1f8af744343f6e541586d0b9ac9f26c1d387bb", "old_text": "to specify options for simple integrity protection via checksums. A value of 0 means that no checksum is required, and -1 means that the entire Message is protected by a checksum. Only full coverage is guaranteed, any other requests are advisory. 7.4.7.", "comments": "I'm not quite sure what I meant by \"Only full coverage is guaranteed, any other requests are advisory\". I assume we mean that even if a partial checksum is requested you get full coverage because requesting partial coverage and getting no checksum doesn't seem right...?\nExactly, and you made it clearer. Thanks!", "new_text": "to specify options for simple integrity protection via checksums. A value of 0 means that no checksum is required, and -1 means that the entire Message is protected by a checksum. Only full coverage is guaranteed, any other requests are advisory, meaning that full coverage is applied anyway. 7.4.7."} {"id": "q-en-api-drafts-0177d5a9f8c58e5f01264ea4e2152e4bea9dc295e43bac2f5feea156e68b5f37", "old_text": "Specify a Local Endpoint using a local interface name and local port: Specify a Local Endpoint using a STUN server: Specify a Local Endpoint using a Any-Source Multicast group to join", "comments": "In the \"Specifying Endpoints\" section, added a reference to the Selection Property that does a similar thing but with more fine-grained control. It wasn't clear to me what is the best place to add this, so I put it right below the first example of specifying an interface name for the Local Endpoint to say that you can also do it differently. If you have better suggestions, please let me know. To the ListenError and RendezvousError, I added text mentioning the \"reconcile Endpoints and Properties\" pitfall. I think this is probably more general, i.e., there might be more sources of errors than just local interface, so I kept it generic.\nFor Local Address Preference we only have the option to prefer stable or temporary. However, I guess we would also need a way to e.g. specify that stable is required, no?\nIs this addressed by PR ?\nI don't think so.\nWhen reading the current API draft, I was wondering the following: If an application wants to specify one or multiple local network interfaces, it can either set the Local Endpoint(s) or it can set the \"Interface Instance or Type\" Selection Property. I think both ways are valid and should exist, but I'm wondering if we should specify anything on the interaction of these two ways to do a similar thing. For example, should we explicitly mention the Selection Property in Section 5.1 (\"Specifying Endpoints\"), to make readers aware that there is a \"better\" way to do this, which allows more flexibility? Also, if an application specifies conflicting things, I guess this leads to an InitiateError, ListenError, or RendezvousError. Our current text already says this for the InitiateError: But should we make this particular issue more explicit here and/or should we add it for the ListenError or RendezvousError?\nI think so, yes! 
Regarding the error in case of a conflict: I think so, yes - to all your questions: make it more explicit, state it in the description of both the ListenError and RendezvousError. Just my 2 cents\nThank you - works for me!", "new_text": "Specify a Local Endpoint using a local interface name and local port: As an alternative to specifying an interface name for the Local Endpoint, an application can express more fine-grained preferences using the \"Interface Instance or Type\" Selection Property, see prop- interface. However, if the application specifies Selection Properties which are inconsistent with the Local Endpoint, this will result in an error once the application attempts to open a Connection. Specify a Local Endpoint using a STUN server: Specify a Local Endpoint using a Any-Source Multicast group to join"} {"id": "q-en-api-drafts-0177d5a9f8c58e5f01264ea4e2152e4bea9dc295e43bac2f5feea156e68b5f37", "old_text": "higher value. By default, this value is Infinite. The caller is also able to reset the value to Infinite at any point. A ListenError occurs either when the Properties of the Preconnection cannot be fulfilled for listening, when the Local Endpoint (or Remote Endpoint, if specified) cannot be resolved, or when the application is prohibited from listening by policy. A Stopped Event occurs after the Listener has stopped listening.", "comments": "In the \"Specifying Endpoints\" section, added a reference to the Selection Property that does a similar thing but with more fine-grained control. It wasn't clear to me what is the best place to add this, so I put it right below the first example of specifying an interface name for the Local Endpoint to say that you can also do it differently. If you have better suggestions, please let me know. To the ListenError and RendezvousError, I added text mentioning the \"reconcile Endpoints and Properties\" pitfall. I think this is probably more general, i.e., there might be more sources of errors than just local interface, so I kept it generic.\nFor Local Address Preference we only have the option to prefer stable or temporary. However, I guess we would also need a way to e.g. specify that stable is required, no?\nIs this addressed by PR ?\nI don't think so.\nWhen reading the current API draft, I was wondering the following: If an application wants to specify one or multiple local network interfaces, it can either set the Local Endpoint(s) or it can set the \"Interface Instance or Type\" Selection Property. I think both ways are valid and should exist, but I'm wondering if we should specify anything on the interaction of these two ways to do a similar thing. For example, should we explicitly mention the Selection Property in Section 5.1 (\"Specifying Endpoints\"), to make readers aware that there is a \"better\" way to do this, which allows more flexibility? Also, if an application specifies conflicting things, I guess this leads to an InitiateError, ListenError, or RendezvousError. Our current text already says this for the InitiateError: But should we make this particular issue more explicit here and/or should we add it for the ListenError or RendezvousError?\nI think so, yes! Regarding the error in case of a conflict: I think so, yes - to all your questions: make it more explicit, state it in the description of both the ListenError and RendezvousError. Just my 2 cents\nThank you - works for me!", "new_text": "higher value. By default, this value is Infinite. The caller is also able to reset the value to Infinite at any point. 
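For the Local Endpoint examples given earlier in this section, the following sketch shows what pinning a local interface address and port typically reduces to underneath: an explicit bind() before listening or connecting. It is illustrative only; the address and port are documentation placeholders, and selecting by interface type rather than by address is exactly what the Selection Property mentioned above leaves to the system.

import socket

def listen_on_local_endpoint(local_addr="192.0.2.10", local_port=5555, backlog=8):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((local_addr, local_port))  # early binding to the chosen Local Endpoint
    sock.listen(backlog)
    return sock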
A ListenError occurs either when the Properties and Security Parameters of the Preconnection cannot be fulfilled for listening or cannot be reconciled with the Local Endpoint (and/or Remote Endpoint, if specified), when the Local Endpoint (or Remote Endpoint, if specified) cannot be resolved, or when the application is prohibited from listening by policy. A Stopped Event occurs after the Listener has stopped listening."} {"id": "q-en-api-drafts-0177d5a9f8c58e5f01264ea4e2152e4bea9dc295e43bac2f5feea156e68b5f37", "old_text": "contained within the RendezvousDone<> Event, and is ready to use as soon as it is passed to the application via the Event. An RendezvousError occurs either when the Preconnection cannot be fulfilled for listening, when the Local Endpoint or Remote Endpoint cannot be resolved, when no transport-layer connection can be established to the Remote Endpoint, or when the application is prohibited from rendezvous by policy. When using some NAT traversal protocols, e.g., Interactive Connectivity Establishment (ICE) RFC5245, it is expected that the", "comments": "In the \"Specifying Endpoints\" section, added a reference to the Selection Property that does a similar thing but with more fine-grained control. It wasn't clear to me what is the best place to add this, so I put it right below the first example of specifying an interface name for the Local Endpoint to say that you can also do it differently. If you have better suggestions, please let me know. To the ListenError and RendezvousError, I added text mentioning the \"reconcile Endpoints and Properties\" pitfall. I think this is probably more general, i.e., there might be more sources of errors than just local interface, so I kept it generic.\nFor Local Address Preference we only have the option to prefer stable or temporary. However, I guess we would also need a way to e.g. specify that stable is required, no?\nIs this addressed by PR ?\nI don't think so.\nWhen reading the current API draft, I was wondering the following: If an application wants to specify one or multiple local network interfaces, it can either set the Local Endpoint(s) or it can set the \"Interface Instance or Type\" Selection Property. I think both ways are valid and should exist, but I'm wondering if we should specify anything on the interaction of these two ways to do a similar thing. For example, should we explicitly mention the Selection Property in Section 5.1 (\"Specifying Endpoints\"), to make readers aware that there is a \"better\" way to do this, which allows more flexibility? Also, if an application specifies conflicting things, I guess this leads to an InitiateError, ListenError, or RendezvousError. Our current text already says this for the InitiateError: But should we make this particular issue more explicit here and/or should we add it for the ListenError or RendezvousError?\nI think so, yes! Regarding the error in case of a conflict: I think so, yes - to all your questions: make it more explicit, state it in the description of both the ListenError and RendezvousError. Just my 2 cents\nThank you - works for me!", "new_text": "contained within the RendezvousDone<> Event, and is ready to use as soon as it is passed to the application via the Event. 
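To make the rendezvous idea concrete, the toy sketch below has a peer bind a known local port and probe the candidate address it learned for the remote peer over some signalling channel; a successful receive plays the role of the RendezvousDone<> Event. Real deployments use ICE (RFC 5245) with STUN/TURN-derived candidates; the address and port here are placeholders.

import socket

LOCAL_PORT = 7000
PEER = ("203.0.113.7", 7000)  # candidate learned out of band

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)
sock.sendto(b"probe", PEER)   # the outgoing probe also opens local NAT state
try:
    data, addr = sock.recvfrom(2048)
    print("rendezvous succeeded with", addr)        # roughly RendezvousDone<>
except socket.timeout:
    print("no probe received yet; keep retrying candidates")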
An RendezvousError occurs either when the Properties and Security Parameters of the Preconnection cannot be fulfilled for rendezvous or cannot be reconciled with the Local and/or Remote Endpoints, when the Local Endpoint or Remote Endpoint cannot be resolved, when no transport-layer connection can be established to the Remote Endpoint, or when the application is prohibited from rendezvous by policy. When using some NAT traversal protocols, e.g., Interactive Connectivity Establishment (ICE) RFC5245, it is expected that the"} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "The Transport Services architecture evolves this general model of interaction, aiming to both modernize the API surface presented to applications by the transport layer and enrich the capabilities of the Transport System implementation. It combines interfaces for multiple interaction patterns into a unified whole. By combining name resolution with connection establishment and data transfer in a single API, it allows for more flexible implementations to provide", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "The Transport Services architecture evolves this general model of interaction, aiming to both modernize the API surface presented to applications by the transport layer and enrich the capabilities of the Transport Services implementation. It combines interfaces for multiple interaction patterns into a unified whole. By combining name resolution with connection establishment and data transfer in a single API, it allows for more flexible implementations to provide"} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "make use of the data that arrived; the ability to manage dependencies between messages, when the Transport System could decide to not deliver a message, either following packet loss or because it has missed a deadline. In particular, this can avoid (re-)sending data that relies on a previous transmission that was never received. the ability to automatically assign messages and connections to", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "make use of the data that arrived; the ability to manage dependencies between messages, when the Transport Services system could decide to not deliver a message, either following packet loss or because it has missed a deadline. 
In particular, this can avoid (re-)sending data that relies on a previous transmission that was never received. the ability to automatically assign messages and connections to"} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "Transport Properties: Transport Properties allow the application to express their requirements, prohibitions, and preferences and configure the Transport System. There are three kinds of Transport Properties: Selection Properties (preestablishment) that can only be", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "Transport Properties: Transport Properties allow the application to express their requirements, prohibitions, and preferences and configure the Transport Services system. There are three kinds of Transport Properties: Selection Properties (preestablishment) that can only be"} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "Connection Properties: The Connection Properties are used to configure protocol-specific options and control per-connection behavior of the Transport System; for example, a protocol-specific Connection Property can express that if UDP is used, the implementation ought to use checksums. Note that the presence of such a property does not require that a specific protocol will be used. In general, these properties do not explicitly determine the selection of paths or protocols, but can be used in this way by an implementation during connection establishment. Connection Properties are specified on a Preconnection prior to Connection establishment, and can be modified on the Connection later. Changes made to Connection Properties after Connection establishment take effect on a best-effort basis. 4.1.3.", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "Connection Properties: The Connection Properties are used to configure protocol-specific options and control per-connection behavior of the Transport Services system; for example, a protocol-specific Connection Property can express that if UDP is used, the implementation ought to use checksums. Note that the presence of such a property does not require that a specific protocol will be used. In general, these properties do not explicitly determine the selection of paths or protocols, but can be used in this way by an implementation during connection establishment. 
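One way to picture that last point, namely that such properties can feed into protocol selection during establishment, is the toy candidate-filtering sketch below. The feature table and the preference names are illustrative assumptions, not definitions taken from this document.

FEATURES = {
    "tcp":  {"reliability", "congestion-control"},
    "udp":  set(),
    "sctp": {"reliability", "congestion-control", "message-boundaries"},
}

def select_candidates(preferences):
    # preferences maps a feature to one of:
    # "require", "prefer", "ignore", "avoid", "prohibit".
    ranked = []
    for proto, feats in FEATURES.items():
        usable, score = True, 0
        for feature, pref in preferences.items():
            present = feature in feats
            if pref == "require" and not present:
                usable = False
            elif pref == "prohibit" and present:
                usable = False
            elif pref == "prefer" and present:
                score += 1
            elif pref == "avoid" and present:
                score -= 1
        if usable:
            ranked.append((score, proto))
    return [proto for _, proto in sorted(ranked, reverse=True)]

print(select_candidates({"reliability": "require", "message-boundaries": "prefer"}))
# prints ['sctp', 'tcp']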
Connection Properties are specified on a Preconnection prior to Connection establishment, and can be modified on the Connection later. Changes made to Connection Properties after Connection establishment take effect on a best- effort basis. 4.1.3."} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "Properties when sending.) Abort: The action the application takes on a Connection to indicate a Close and also indicate that the Transport System SHOULD NOT attempt to deliver any outstanding data. This is intended for immediate termination of a connection, without cleaning up state. 4.2.", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "Properties when sending.) Abort: The action the application takes on a Connection to indicate a Close and also indicate that the Transport Services system SHOULD NOT attempt to deliver any outstanding data. This is intended for immediate termination of a connection, without cleaning up state. 4.2."} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "protocols are not incorrectly swapped, Transport Services systems SHOULD only automatically generate equivalent Protocol Stacks when the transport security protocols within the stacks are identical. Specifically, a Transport System would consider protocols identical only if they are of the same type and version. For example, the same version of TLS running over two different transport Protocol Stacks are considered equivalent, whereas TLS", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "protocols are not incorrectly swapped, Transport Services systems SHOULD only automatically generate equivalent Protocol Stacks when the transport security protocols within the stacks are identical. Specifically, a Transport Services system would consider protocols identical only if they are of the same type and version. For example, the same version of TLS running over two different transport Protocol Stacks are considered equivalent, whereas TLS"} {"id": "q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77", "old_text": "protocol state, cached path state, and heuristics, may be shared (e.g. across multiple connections in an application). This provides efficiency and convenience for the application, since the Transport System implementation can automatically optimize behavior. 
There are several reasons, however, that an application might want to explicitly isolate some Connections. These reasons include:", "comments": "Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. We already used Transport Services most places so it's not all that different.\nLooks good, thanks!", "new_text": "protocol state, cached path state, and heuristics, may be shared (e.g. across multiple connections in an application). This provides efficiency and convenience for the application, since the Transport Services implementation can automatically optimize behavior. There are several reasons, however, that an application might want to explicitly isolate some Connections. These reasons include:"} {"id": "q-en-api-drafts-4d3e294f69bd8442243d7542dc15c0aa615314175f6f9e685a134492c699cc0c", "old_text": "than -1, it sets the minimum protection in protocols that allow limiting the checksum length (e.g. UDP-Lite). Singular Transmission: when this is true, the application requests to avoid transport-layer segmentation or network-layer fragmentation. Some transports implement network-layer fragmentation avoidance (Path MTU Discovery) without exposing this functionality to the application; in this case, only transport- layer segmentation should be avoided, by fitting the message into a single transport-layer segment or otherwise failing. Otherwise, network-layer fragmentation should be avoided--e.g. by requesting the IP Don't Fragment bit to be set in case of UDP(-Lite) and IPv4 (SET_DF in RFC8304). 5.1.2.", "comments": "this should\nRight there was a typo: Replacing the last 2 sentences, the last sentence ought to be: \" Endpoints should avoid IP fragmentation ({{!RFC8304}}) and when used with transports running over IP version 4 the Don't Fragment bit will be set.\" These are two separate issues that a stack implementor has to decide to do: (1) Decide if the endpoint will fragment; (2) decide if the path can fragment the fragments. In some OS this will be bundled together, in other OS they will not.\nDiscussed on the interim call. The name probably could be much better. We also want to model in the text the atomicity property of UDP transfers.\nThis text is broken: section 5.1 / o Singular Transmission: when this is true, the application requests to avoid transport-layer segmentation or network-layer fragmentation. Some transports implement network-layer fragmentation avoidance (Path MTU Discovery) without exposing this functionality to the application; in this case, only transport- layer segmentation should be avoided, by fitting the message into a single transport-layer segment or otherwise failing. Otherwise, network-layer fragmentation should be avoided--e.g. by requesting the IP Don't Fragment bit to be set in case of UDP(-Lite) and IPv4 (SET_DF in [RFC8304]). / we need to rewrite\nWaiting to see exactly how DPLPMTUD finishes WGLC ... will be ready for text in Jan.\nReads much better except the last broken sentence", "new_text": "than -1, it sets the minimum protection in protocols that allow limiting the checksum length (e.g. UDP-Lite). 
Singular Transmission: When set, this property limits the message size to the Maximum Message Size Before Fragmentation or Segmentation (see Section 10.1.7 of I-D.ietf-taps-interface). Messages larger than this size generate an error. Setting this avoids transport-layer segmentation or network-layer fragmentation. When used with transports running over IP version 4 the Don't Fragment bit will be set to avoid on-path IP fragmentation (RFC8304). 5.1.2."} {"id": "q-en-api-drafts-8f1d2ee4336f0d0f42533e7fdc604138064cd5d73c5cd7e154b65a0a7b998316", "old_text": "actions are alternatives (e.g., whether to initiate a connection or to listen for incoming connections), while others are optional (e.g., setting Connection and Message Properties in Pre-Establishment) or have been omitted for brevity. 4.1.1.", "comments": "Fig. 4 isn't fully consistent with sec. 12 in the API document. A few things that stood out to me: InitiateWithSend() is missing. I can see that this isn't trying to cover everything, but randomly leaving some things out makes it all a bit confusing IMO... \"Connection Received\" reads as if it should really be the \"ConnectionReceived<>\" event in the API doc. So I would put these events there and use the \"<>\" to indicate them. Same with \"Closed\"... this is actually an event, so why not write it as such, with the \"<>\"? RendezvousDone is missing... should we include all events or not? Of course that would then also require adding Sent, Received, ... maybe it would be better to indicate that events are events, but put a statement somewhere saying that not all events are shown. In the same vein, I would replace \"Connection Ready\" with \"Ready<>\" \"Conn. Finished\" reads as if it might be an event, but this event doesn't exist... I would just remove this\nI've added InitiateWithSend(), and removed Conn. Finished; you're right, these are confusing. This diagram does not use the API draft's event notation, and I'm not sure it makes sense to introduce it here -- it would require a bit more expository text only to make a diagram that is an attempt to provide a high-level view provide a... less high-level view. (The expository text now says \"... some actions ... have been omitted for brevity and simplicity\" and I think that's the right approach.)\nAgreed!\nlgtm", "new_text": "actions are alternatives (e.g., whether to initiate a connection or to listen for incoming connections), while others are optional (e.g., setting Connection and Message Properties in Pre-Establishment) or have been omitted for brevity and simplicity. 4.1.1."} {"id": "q-en-api-drafts-9c078aadc787a53031d1b3763c12f03215705f6109d4d7003e839a995d86039c", "old_text": "10.2. These properties specifies configurations for the User Timeout Option (UTO), in case TCP becomes the chosen transport protocol. Implementation is optional and of course only sensible if TCP is implemented in the transport system. All of the below parameters are optional (e.g., it is possible to specify \"User Timeout Enabled\" as true, but not specify an Advertised User Timeout value; in this case, the TCP default will be used).", "comments": "Following the discussion in , explain why TCP UTO is included as a Protocol-Specific Property, while other properties are not. Also, some minor consistency fixes.\nThanks for doing this!Thanks", "new_text": "10.2. These properties specify configurations for the User Timeout Option (UTO), in case TCP becomes the chosen transport protocol. 
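As one concrete illustration of the kind of local knob these properties end up configuring, Linux exposes a per-connection user timeout through the TCP_USER_TIMEOUT socket option. Note that this only sets the local timeout for aborting the connection; it does not by itself advertise a value to the peer with the on-wire UTO option of RFC 5482. The host and port below are placeholders.

import socket

def connect_with_user_timeout(host="example.com", port=443, timeout_ms=30000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, "TCP_USER_TIMEOUT"):  # Linux-only constant
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, timeout_ms)
    sock.connect((host, port))
    return sock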
Implementation is optional and of course only sensible if TCP is implemented in the transport system. These TCP-specific properties are included here because the feature \"Suggest timeout to the peer\" is part of the minimal set of transport services I-D.ietf-taps-minset, where this feature was categorized as \"functional\". This means that when an implementation offers this feature, it has to expose an interface to it to the application. Otherwise, the implementation might violate assumptions by the application, which could cause the application to fail. All of the below properties are optional (e.g., it is possible to specify \"User Timeout Enabled\" as true, but not specify an Advertised User Timeout value; in this case, the TCP default will be used)."} {"id": "q-en-api-drafts-9c078aadc787a53031d1b3763c12f03215705f6109d4d7003e839a995d86039c", "old_text": "Integer Default: :the TCP default This time value is advertised via the TCP User Timeout Option (UTO) RFC5482 at the remote endpoint to adapt its own \"Timeout for aborting", "comments": "Following the discussion in , explain why TCP UTO is included as a Protocol-Specific Property, while other properties are not. Also, some minor consistency fixes.\nThanks for doing this!Thanks", "new_text": "Integer the TCP default This time value is advertised via the TCP User Timeout Option (UTO) RFC5482 at the remote endpoint to adapt its own \"Timeout for aborting"} {"id": "q-en-api-drafts-c72009befedb31eb3abeca75d44370663fd328ac768feecf4629c7ea3889ac08", "old_text": "All data sent with the same MessageContext object will be treated as belonging to the same Message, and will constitute an in-order series until the endOfMessage is marked. Once the end of the Message is marked, the MessageContext object may be re-used as a new Message with identical parameters. 7.7.", "comments": "... as suggested by NAME\nThe section on partial send says: \"All data sent with the same MessageContext object will be treated as belonging to the same Message, and will constitute an in-order series until the endOfMessage is marked. Once the end of the Message is marked, the MessageContext object may be re-used as a new Message with identical parameters.\" Can we reconsider for a second if that is really the way we want to design this? This is really implicit and therefore an easy source for errors. I guess we could alternatively use an explicit message identifier...? Or the send action always return an identifier and for subsequent sends we input this identifier again? Or someone has a smarter idea?\nI don't have my head buried in code like some others here (NAME NAME , ..), so I guess that these folks should speak up - but just from reading this, I agree that the current design sounds like an easy source for errors.\nI'd prefer to just remove the part that mentions reusing the same object. That's an implementation detail (and a perf optimization). The rest seems correct.", "new_text": "All data sent with the same MessageContext object will be treated as belonging to the same Message, and will constitute an in-order series until the endOfMessage is marked. 7.7."} {"id": "q-en-api-drafts-3374e3c999b4b1da94aa7ed71962174f002e516441ee1cee855fc5100625de7a", "old_text": "Abstract This document describes an abstract programming interface, API, to the transport layer, following the Transport Services Architecture. It supports the asynchronous, atomic transmission of messages over transport protocols and network paths dynamically selected at runtime. 
It is intended to replace the traditional BSD sockets API as the common interface to the transport layer, in an environment where endpoints could select from multiple interfaces and potential transport protocols. 1.", "comments": "Fixing three oddities in the abstract, issue\nOdd English: /where endpoints have multiple interfaces and potential transport protocols to select from/where endpoints could select from multiple interfaces and potential transport protocols./ We use the words /abstract programming interface/ and the acronym /API/ but never tie these two together. I am not huge fan of: /lowest common denominator interface/ ... as an engineer, this sounds like a good thing, although /lowest/ isn't that clear, so I wonder if we can find a better term?\nResolved in PR\nLGTM modulo one nit", "new_text": "Abstract This document describes an abstract application programming interface, API, to the transport layer, following the Transport Services Architecture. It supports the asynchronous, atomic transmission of messages over transport protocols and network paths dynamically selected at runtime. It is intended to replace the traditional BSD sockets API as the common interface to the transport layer, in an environment where endpoints could select from multiple interfaces and potential transport protocols. 1."} {"id": "q-en-api-drafts-285cba6400091ce8762f5c70e6caff5cd7c35443d77c59397ea5ccd4f3e189c0", "old_text": "5.1.1. Lifetime: this should be implemented by removing the Message from its queue of pending Messages after the Lifetime has expired. A queue of pending Messages within the transport system implementation that have yet to be handed to the Protocol Stack can always support this property, but once a Message has been sent into the send buffer of a protocol, only certain protocols may support de-queueing a message. For example, TCP cannot remove bytes from its send buffer, while in case of SCTP, such control over the SCTP send buffer can be exercised using the partial reliability extension RFC8303. When there is no standing queue of Messages within the system, and the Protocol Stack does not support removing a Message from its buffer, this property may be ignored. Priority: this represents the ability to prioritize a Message over other Messages. This can be implemented by the system re-ordering", "comments": "Changes to address Issue\nMessage Properties This isn\u2019t really about /removing/ we need to express this differently, not sure what you wish to say: For example, TCP cannot remove bytes from its send buffer, while in case of SCTP, such control over the SCTP send buffer can be exercised using the partial reliability extension [RFC8303]. and: /Final: when this is true, it means that a transport connection can be closed immediately after its transmission./ I think this is: /Final: when this is true, it means that a transport connection can be closed immediately after transmission of the message./\nSee PR Update draft-ietf-taps-URL\nLooks good, just some more nits on top", "new_text": "5.1.1. Lifetime: this should be implemented by removing the Message from the queue of pending Messages after the Lifetime has expired. A queue of pending Messages within the transport system implementation that have yet to be handed to the Protocol Stack can always support this property, but once a Message has been sent into the send buffer of a protocol, only certain protocols may support removing a message. 
For example, an implementation cannot cannot remove bytes from a TCP send buffer, while it can remove data from a SCTP send buffer using the partial reliability extension RFC8303. When there is no standing queue of Messages within the system, and the Protocol Stack does not support the removal of a Message from the stack's send buffer, this property may be ignored. Priority: this represents the ability to prioritize a Message over other Messages. This can be implemented by the system re-ordering"} {"id": "q-en-api-drafts-285cba6400091ce8762f5c70e6caff5cd7c35443d77c59397ea5ccd4f3e189c0", "old_text": "could choose to send Messages of different Priority on streams of different priority. Ordered: when this is false, it disables the requirement of in- order-delivery for protocols that support configurable ordering. Safely Replayable: when this is true, it means that the Message can be used by mechanisms that might transfer it multiple times - e.g., as a result of racing multiple transports or as part of TCP Fast Open. Final: when this is true, it means that a transport connection can be closed immediately after its transmission. Corruption Protection Length: when this is set to any value other than -1, it sets the minimum protection in protocols that allow", "comments": "Changes to address Issue\nMessage Properties This isn\u2019t really about /removing/ we need to express this differently, not sure what you wish to say: For example, TCP cannot remove bytes from its send buffer, while in case of SCTP, such control over the SCTP send buffer can be exercised using the partial reliability extension [RFC8303]. and: /Final: when this is true, it means that a transport connection can be closed immediately after its transmission./ I think this is: /Final: when this is true, it means that a transport connection can be closed immediately after transmission of the message./\nSee PR Update draft-ietf-taps-URL\nLooks good, just some more nits on top", "new_text": "could choose to send Messages of different Priority on streams of different priority. Ordered: when this is false, this disables the requirement of in- order-delivery for protocols that support configurable ordering. Safely Replayable: when this is true, this means that the Message can be used by mechanisms that might transfer it multiple times - e.g., as a result of racing multiple transports or as part of TCP Fast Open. Final: when this is true, this means that a transport connection can be closed immediately after transmission of the message. Corruption Protection Length: when this is set to any value other than -1, it sets the minimum protection in protocols that allow"} {"id": "q-en-api-drafts-a27a4dcab43e3e757706998ebb302a4bd77d164aff3c4c688db909f54c2efc2d", "old_text": "boundaries to function, each application layer protocol has defined its own framing. Note that while Message Framers add the most value when placed above a protocol that otherwise does not preserve message boundaries, they can also be used with datagram- or message-based protocols. In these", "comments": "I think this helps, but one of the confusions is the distinction between a framing to provide a message-oriented API, and the use of framers underneath that API to change the representation of messages on the wire. I wonder if some of the difficulties we've had with framers is that some of us have been using the term to represent one of these concepts, and some the other? 
One is the concept of a framer that accumulates data and returns an object from a call, rather than returning a byte array containing a (partial) HTTP response. That's the API-level framer. The other is the sort of framer that performs TLS encryption on the HTTP request, after has been called. This transforms data, below the API. Do we need two different terms?\nNAME - personally, I always thought that Framers can do both of these things; if this wasn't clear enough before, then I think your part 1 (returning an HTTPResponse object) was clear from before, and now this PR makes it clear that they also intercept the send call to transform data afterwards. So IMHO that's just the clarification that was needed.\nSorry! Obviously I only wanted to comment, but hit the wrong button :)\nI know we moved some text about framers into the implementation doc but that really doesn't seem right when I look at it now. The section about framers in the implementation doc is the only section that defines an interface which probably is already an indication that just moving this text there was not right. However, also the text in the API doc seems now incomplete. I think we should move it back and rather define some real simple examples framers in the implementation document (just a prefixed length field (as many UDP mapping use) or maybe even an example implementation for STARTTLS). Further after moving it back we could figure out if really all parts of the interface are necessary or if there would be a more minimal interface that we could use. E.g. I'm not sure about the FailConnection() call or at least I think this could need more explanation.\nAlso I think it would be good to the section on framers earlier in the doc. I will open a separate section to propose some restructuring (as I also think it would be good to have the connection properties earlier).\nI think the real problem with framers is that we left them vague to avoid having to worry about all the implications they bring with them (which are a lot as they impact almost every stage of the life of a connection) but also to not limit what implementations can do with them by making them very rigid. But I do agree that the text in API feels somewhat incomplete. Maybe one solution would be to explain what a framer should at least be able to do in the interface draft without specifying the actual API that is in implementation. I think we all agree at when a framer should be able to interact with the connection just not how that interaction is supposed to look like (as that is highly implementation specific).\nI disagree that we should move the text back into the API document, since the group was very clear before that the details should be moved to implementation. The boundary we defined was to leave in the API: (a) the definition of what a framer is and why it's used and (b) the interface an application uses to add a framer to the connection. The API text should be complete with regards to how a user of a framer performs actions. What specific text would we want to add here?\nFirst as I say that's the only piece of additional interface we define in the implementation draft which already is a sign that it's maybe not ideal. But for the API draft the part of how a framer interact with the transport is really unclear and looking at the part of interface, that now defined in the implementation doc, makes it really clear. I disagree that this part of the interface can or should be left to the implementation. 
The fact that you have to call the framer before you start any transmission and the events a framer should be aware of are generic and nothing implementation specific.\nIn the implementation draft we say: \"A Message Framer is primarily defined by the set of code that handles events for a framer implementation\" My assumption is that a custom framer implementation could be part of the application logic and therefore we need to define this interface as well. However looking at the implementation draft I guess I now confused myself about how this actually works. Why is the sending logic asynchronous? Isn't this something that happens iteratively when the application calls send(). How would that actually work? I thought the application could just call URL() and then the connection could call URL() and that would simply return a modified message... can you give an example for sending and receiving based on the interfaces currently defined in the implementation draft? Also please note that while I think the interface definition belongs in API draft, however, there are comments about copying and curser handling which should/could probably say in the implementation draft.\nInterim: Add to -interface: Explain that framer(s) is in between the rest of the protocol stack and the application Intercepts calls to start/stop and send/receive And just leave as is, and have another framers document in the future once we get more details.\nI think it'll be helpful to be a bit clearer on the timing here, see comment.Thanks a lot for doing this - to me, this looks really good. It makes things much clearer for readers of this document.I think this should clarify it, thanks!", "new_text": "boundaries to function, each application layer protocol has defined its own framing. To use a Message Framer, the application adds it to its Preconnection object. Then, the Message Framer can intercept all calls to Send() or Receive() on a Connection to add Message semantics, in addition to interacting with the setup and teardown of the Connection. A Framer can start sending data before the application sends data if the framing protocol requires a prefix or handshake (see RFC8229 for an example of such a framing protocol). Note that while Message Framers add the most value when placed above a protocol that otherwise does not preserve message boundaries, they can also be used with datagram- or message-based protocols. In these"} {"id": "q-en-api-drafts-cea08b6f1a3546ccf55f6b00c9a80cfc187a9bddabe2dc83d596e379b0b73fb6", "old_text": "Enumeration: can take one value of a finite set of values, dependent on the property itself. The representation is implementation dependent; however, implementations MUST provide a method for the application to determine the entire set of possible values for each property. Preference: can take one of five values (Prohibit, Avoid, Ignore, Prefer, Require) for the level of preference of a given property", "comments": "sec 4.2.2 \"implementations MUST provide a method for the application to determine the entire set of possible values for each property.\" Is this meant to be a requirement for a machine-readable API for learning the enum, or merely a requirement for documentation? i.e. 
is it satisfied if the values are in the man page or comments of the C header file?\nAs a by-note, isn't this a great place to use \"REQUIRED\" and say whether it is required to specify or required to support :-).\nProposal: Just remove \"however, implementations MUST provide a method for the application to determine the entire set of possible values for each property.\" This is too implementation- and language- specific.", "new_text": "Enumeration: can take one value of a finite set of values, dependent on the property itself. The representation is implementation dependent. Preference: can take one of five values (Prohibit, Avoid, Ignore, Prefer, Require) for the level of preference of a given property"} {"id": "q-en-api-drafts-4c3df4858a723f8d628405a31d6a26a110b5443fba930934806f4a8873f86345", "old_text": "7.1.10. The following generic Connection Properties are read-only, i.e. they cannot be changed by an application. 7.1.10.1. zeroRttMsgMaxLen", "comments": "There are many ways to do this, but this one mimics the \"private browsing\" paradigm\" as I understand it.\nCan you say more? I read as specifically about flushing state, not Connection Group vs. Connection issues generally. This is not literally \"flushing\" the state but I've update taps-interface to reflect this design while, I think, preserving the spirit of the original text.\nConnection Groups used to be about state and context sharing as well as entanglement. Now it's about entanglement. We need to rename Connection Groups (\"connection contexts\"?) in the architecture to replace the existing \"group\" concept, and to retain \"connection group\" in the API for entanglement.\nSeems like this could be 360 degrees of rotation in our thinking? (which could be OK, NEAT was written to specify both contexts and flow groups). Let's try some definitions here and see if we converge...\nI think brings this back in synch again so no need to rename, possibly the last part of this section in architecture should be removed though as we do not have fine-grained control\nSection 9.1 of taps-impl loosely refers to the need for an interface to \"flush protocol state\" to preserve anonymity, but such a capability is not in taps-interface.\nI don't like this text: \"Applications must have a way to flush protocol cache state if desired. This may be necessary, for example, if application-layer identifiers rotate and clients wish to avoid linkability via trackable TLS tickets or TFO cookies.\" Those of you who know me, would immediately see a \"MUST\"...\"if desired.\" and a \"may be necessary\". My suggestion is that this digs too deep, and we should remove the text. I could also live with something more neutral though: Note: There are case where applications need to flush protocol cache state, for example, if application-layer identifiers rotate and a client wishes to avoid linkability of trackable TLS tickets or TFO cookies.\nI'm fine to have the more neutral sentence proposed by Gorry. I don't think we should specify an interface for it but I think it's good to mention.\nI agree that the best approach is to make it more neutral. The other option would be to say that this interface is implementation specific. For sure, we do not want it in the API I think.\nI have no opinion on Gorry's comment, but I'm a bit mystified by the \"shouldn't be in the API doc\" argument. Perhaps I'm not understanding the purpose of taps-interface? 
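For context on the record above, the five preference levels (Prohibit, Avoid, Ignore, Prefer, Require) are what drive protocol selection. The sketch below is a hypothetical illustration of that idea, not the drafts' API: the candidate table and feature names are made up, and only Require/Prohibit filtering is shown, with Prefer/Avoid left to a later ranking step.

~~~
from enum import Enum

class Preference(Enum):
    PROHIBIT = "Prohibit"
    AVOID = "Avoid"
    IGNORE = "Ignore"
    PREFER = "Prefer"
    REQUIRE = "Require"

# Invented candidate table: protocol -> features it can provide.
CANDIDATES = {
    "tcp": {"reliability"},
    "sctp": {"reliability", "preserve-msg-boundaries"},
    "udp": {"preserve-msg-boundaries"},
}

def filter_candidates(selection):
    """Drop candidates that violate Require/Prohibit; Prefer/Avoid would then rank the rest."""
    survivors = []
    for proto, feats in CANDIDATES.items():
        ok = all(
            (pref is not Preference.REQUIRE or prop in feats) and
            (pref is not Preference.PROHIBIT or prop not in feats)
            for prop, pref in selection.items()
        )
        if ok:
            survivors.append(proto)
    return survivors

print(filter_candidates({"preserve-msg-boundaries": Preference.REQUIRE}))  # ['sctp', 'udp']
~~~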
I thought it was a qualitative description of the controls offered to the application, which this clearly would be!\nI would define taps as an interface to control network connections. So flushing state is a good thing in some scenarios for privacy but not really needed to handle network connections.\nSee also -- this surfaced that we have drifted on connection group versus context, which we need to fix in arch.\nMartin to write a PR against API to propose a context API that would provide the intent of this functionality. Alternative is to NOP this in the API doc (\"connection contexts are too implementation specific to expose in a general way\" or similar)\nA second alternative is a one-bit fix: a selection property that requires a given connection to be in a separate connection context (which implies a separate connection group) from other connections created in the preconnection.\nA very nice approach IMO, thanks for doing thisI like this approach for the simplicity, but it still solves only half of the problem we have with connection groups vs. connection contexts. I am fine with merging this, but IMHO it does not fully Thanks NAME looks good to me now.", "new_text": "7.1.10. isolateSession Boolean false When set to true, this property will initiate new Connections using as little cached information (such as session tickets or cookies) as possible from previous connections that are not entangled with it. Any state generated by this Connection will only be shared with entangled connections. Cloned Connections will use saved state from within the Connection Group. Note that this does not guarantee no leakage of information, as implementations may not be able to fully isolate all caches (e.g. RTT estimates). Note that this property may degrade connection performance. 7.1.11. The following generic Connection Properties are read-only, i.e. they cannot be changed by an application. 7.1.11.1. zeroRttMsgMaxLen"} {"id": "q-en-api-drafts-4c3df4858a723f8d628405a31d6a26a110b5443fba930934806f4a8873f86345", "old_text": "before or during Connection establishment, see also msg- safelyreplayable. It is given in Bytes. 7.1.10.2. singularTransmissionMsgMaxLen", "comments": "There are many ways to do this, but this one mimics the \"private browsing\" paradigm\" as I understand it.\nCan you say more? I read as specifically about flushing state, not Connection Group vs. Connection issues generally. This is not literally \"flushing\" the state but I've update taps-interface to reflect this design while, I think, preserving the spirit of the original text.\nConnection Groups used to be about state and context sharing as well as entanglement. Now it's about entanglement. We need to rename Connection Groups (\"connection contexts\"?) in the architecture to replace the existing \"group\" concept, and to retain \"connection group\" in the API for entanglement.\nSeems like this could be 360 degrees of rotation in our thinking? (which could be OK, NEAT was written to specify both contexts and flow groups). Let's try some definitions here and see if we converge...\nI think brings this back in synch again so no need to rename, possibly the last part of this section in architecture should be removed though as we do not have fine-grained control\nSection 9.1 of taps-impl loosely refers to the need for an interface to \"flush protocol state\" to preserve anonymity, but such a capability is not in taps-interface.\nI don't like this text: \"Applications must have a way to flush protocol cache state if desired. 
This may be necessary, for example, if application-layer identifiers rotate and clients wish to avoid linkability via trackable TLS tickets or TFO cookies.\" Those of you who know me, would immediately see a \"MUST\"...\"if desired.\" and a \"may be necessary\". My suggestion is that this digs too deep, and we should remove the text. I could also live with something more neutral though: Note: There are case where applications need to flush protocol cache state, for example, if application-layer identifiers rotate and a client wishes to avoid linkability of trackable TLS tickets or TFO cookies.\nI'm fine to have the more neutral sentence proposed by Gorry. I don't think we should specify an interface for it but I think it's good to mention.\nI agree that the best approach is to make it more neutral. The other option would be to say that this interface is implementation specific. For sure, we do not want it in the API I think.\nI have no opinion on Gorry's comment, but I'm a bit mystified by the \"shouldn't be in the API doc\" argument. Perhaps I'm not understanding the purpose of taps-interface? I thought it was a qualitative description of the controls offered to the application, which this clearly would be!\nI would define taps as an interface to control network connections. So flushing state is a good thing in some scenarios for privacy but not really needed to handle network connections.\nSee also -- this surfaced that we have drifted on connection group versus context, which we need to fix in arch.\nMartin to write a PR against API to propose a context API that would provide the intent of this functionality. Alternative is to NOP this in the API doc (\"connection contexts are too implementation specific to expose in a general way\" or similar)\nA second alternative is a one-bit fix: a selection property that requires a given connection to be in a separate connection context (which implies a separate connection group) from other connections created in the preconnection.\nA very nice approach IMO, thanks for doing thisI like this approach for the simplicity, but it still solves only half of the problem we have with connection groups vs. connection contexts. I am fine with merging this, but IMHO it does not fully Thanks NAME looks good to me now.", "new_text": "before or during Connection establishment, see also msg- safelyreplayable. It is given in Bytes. 7.1.11.2. singularTransmissionMsgMaxLen"} {"id": "q-en-api-drafts-4c3df4858a723f8d628405a31d6a26a110b5443fba930934806f4a8873f86345", "old_text": "Packet Size (MPS) as described in Datagram PLPMTUD I-D.ietf-tsvwg- datagram-plpmtud. 7.1.10.3. sendMsgMaxLen", "comments": "There are many ways to do this, but this one mimics the \"private browsing\" paradigm\" as I understand it.\nCan you say more? I read as specifically about flushing state, not Connection Group vs. Connection issues generally. This is not literally \"flushing\" the state but I've update taps-interface to reflect this design while, I think, preserving the spirit of the original text.\nConnection Groups used to be about state and context sharing as well as entanglement. Now it's about entanglement. We need to rename Connection Groups (\"connection contexts\"?) in the architecture to replace the existing \"group\" concept, and to retain \"connection group\" in the API for entanglement.\nSeems like this could be 360 degrees of rotation in our thinking? (which could be OK, NEAT was written to specify both contexts and flow groups). 
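The isolateSession property quoted in these records amounts to telling the implementation not to read from, or write to, shared caches for this Connection. Below is a minimal sketch of that behaviour, with an invented cache layout and function name; it makes no claim of matching any real implementation.

~~~
SESSION_CACHE = {}  # invented cache: remote host -> resumption ticket

def establish(host, isolate_session=False):
    """Return a connection descriptor; skip shared cached state when isolation is requested."""
    ticket = None if isolate_session else SESSION_CACHE.get(host)
    conn = {"host": host, "resumed": ticket is not None}
    # An isolated Connection also does not write new state back into the shared cache.
    if not isolate_session:
        SESSION_CACHE[host] = b"fresh-ticket"
    return conn

print(establish("example.com"))                        # not resumed; populates the cache
print(establish("example.com"))                        # resumed from the cache
print(establish("example.com", isolate_session=True))  # ignores the cache and leaves it untouched
~~~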
Let's try some definitions here and see if we converge...\nI think brings this back in synch again so no need to rename, possibly the last part of this section in architecture should be removed though as we do not have fine-grained control\nSection 9.1 of taps-impl loosely refers to the need for an interface to \"flush protocol state\" to preserve anonymity, but such a capability is not in taps-interface.\nI don't like this text: \"Applications must have a way to flush protocol cache state if desired. This may be necessary, for example, if application-layer identifiers rotate and clients wish to avoid linkability via trackable TLS tickets or TFO cookies.\" Those of you who know me, would immediately see a \"MUST\"...\"if desired.\" and a \"may be necessary\". My suggestion is that this digs too deep, and we should remove the text. I could also live with something more neutral though: Note: There are case where applications need to flush protocol cache state, for example, if application-layer identifiers rotate and a client wishes to avoid linkability of trackable TLS tickets or TFO cookies.\nI'm fine to have the more neutral sentence proposed by Gorry. I don't think we should specify an interface for it but I think it's good to mention.\nI agree that the best approach is to make it more neutral. The other option would be to say that this interface is implementation specific. For sure, we do not want it in the API I think.\nI have no opinion on Gorry's comment, but I'm a bit mystified by the \"shouldn't be in the API doc\" argument. Perhaps I'm not understanding the purpose of taps-interface? I thought it was a qualitative description of the controls offered to the application, which this clearly would be!\nI would define taps as an interface to control network connections. So flushing state is a good thing in some scenarios for privacy but not really needed to handle network connections.\nSee also -- this surfaced that we have drifted on connection group versus context, which we need to fix in arch.\nMartin to write a PR against API to propose a context API that would provide the intent of this functionality. Alternative is to NOP this in the API doc (\"connection contexts are too implementation specific to expose in a general way\" or similar)\nA second alternative is a one-bit fix: a selection property that requires a given connection to be in a separate connection context (which implies a separate connection group) from other connections created in the preconnection.\nA very nice approach IMO, thanks for doing thisI like this approach for the simplicity, but it still solves only half of the problem we have with connection groups vs. connection contexts. I am fine with merging this, but IMHO it does not fully Thanks NAME looks good to me now.", "new_text": "Packet Size (MPS) as described in Datagram PLPMTUD I-D.ietf-tsvwg- datagram-plpmtud. 7.1.11.3. sendMsgMaxLen"} {"id": "q-en-api-drafts-4c3df4858a723f8d628405a31d6a26a110b5443fba930934806f4a8873f86345", "old_text": "This property represents the maximum Message size that an application can send. 7.1.10.4. recvMsgMaxLen", "comments": "There are many ways to do this, but this one mimics the \"private browsing\" paradigm\" as I understand it.\nCan you say more? I read as specifically about flushing state, not Connection Group vs. Connection issues generally. 
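These records also cover the read-only properties (zeroRttMsgMaxLen, singularTransmissionMsgMaxLen, sendMsgMaxLen, recvMsgMaxLen). One way an application might consult such a limit is sketched below; the function name and the chunking strategy are assumptions, and chunking is only sensible when the application protocol tolerates it.

~~~
def send_within_limit(data, send_msg_max_len, send):
    """Split application data so no single Message exceeds the advertised maximum size."""
    if send_msg_max_len <= 0:
        raise ValueError("send_msg_max_len must be positive")
    for off in range(0, len(data), send_msg_max_len):
        send(data[off:off + send_msg_max_len])

sent = []
send_within_limit(b"x" * 2500, 1024, sent.append)
print([len(m) for m in sent])  # [1024, 1024, 452]
~~~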
This is not literally \"flushing\" the state but I've update taps-interface to reflect this design while, I think, preserving the spirit of the original text.\nConnection Groups used to be about state and context sharing as well as entanglement. Now it's about entanglement. We need to rename Connection Groups (\"connection contexts\"?) in the architecture to replace the existing \"group\" concept, and to retain \"connection group\" in the API for entanglement.\nSeems like this could be 360 degrees of rotation in our thinking? (which could be OK, NEAT was written to specify both contexts and flow groups). Let's try some definitions here and see if we converge...\nI think brings this back in synch again so no need to rename, possibly the last part of this section in architecture should be removed though as we do not have fine-grained control\nSection 9.1 of taps-impl loosely refers to the need for an interface to \"flush protocol state\" to preserve anonymity, but such a capability is not in taps-interface.\nI don't like this text: \"Applications must have a way to flush protocol cache state if desired. This may be necessary, for example, if application-layer identifiers rotate and clients wish to avoid linkability via trackable TLS tickets or TFO cookies.\" Those of you who know me, would immediately see a \"MUST\"...\"if desired.\" and a \"may be necessary\". My suggestion is that this digs too deep, and we should remove the text. I could also live with something more neutral though: Note: There are case where applications need to flush protocol cache state, for example, if application-layer identifiers rotate and a client wishes to avoid linkability of trackable TLS tickets or TFO cookies.\nI'm fine to have the more neutral sentence proposed by Gorry. I don't think we should specify an interface for it but I think it's good to mention.\nI agree that the best approach is to make it more neutral. The other option would be to say that this interface is implementation specific. For sure, we do not want it in the API I think.\nI have no opinion on Gorry's comment, but I'm a bit mystified by the \"shouldn't be in the API doc\" argument. Perhaps I'm not understanding the purpose of taps-interface? I thought it was a qualitative description of the controls offered to the application, which this clearly would be!\nI would define taps as an interface to control network connections. So flushing state is a good thing in some scenarios for privacy but not really needed to handle network connections.\nSee also -- this surfaced that we have drifted on connection group versus context, which we need to fix in arch.\nMartin to write a PR against API to propose a context API that would provide the intent of this functionality. Alternative is to NOP this in the API doc (\"connection contexts are too implementation specific to expose in a general way\" or similar)\nA second alternative is a one-bit fix: a selection property that requires a given connection to be in a separate connection context (which implies a separate connection group) from other connections created in the preconnection.\nA very nice approach IMO, thanks for doing thisI like this approach for the simplicity, but it still solves only half of the problem we have with connection groups vs. connection contexts. I am fine with merging this, but IMHO it does not fully Thanks NAME looks good to me now.", "new_text": "This property represents the maximum Message size that an application can send. 7.1.11.4. 
recvMsgMaxLen"} {"id": "q-en-api-drafts-8f9c56f87f5a245d0f1324150698bdd92ddbabf65cfdc43b9d4bad77262dc6cb", "old_text": "Handover This property specifies the local policy for transferring data across multiple paths between the same end hosts if Parallel Use of Multiple Paths is not set to Disabled (see multipath-mode). Possible values are: The connection ought only to attempt to migrate between different paths when the original path is lost or becomes unusable. The", "comments": "Note, the change to the API draft here is a purely editorial detail that I fixed here in passing: the multipath selection property was renamed and this fixes the text referring to it.\n\"what we discussed\" - you mean in , where I say that I stumbled over a sentence? This PR does update that sentence (simply removing \"with migration support\" from it because any multipath protocol can do that).\nno, i meant what we discussed at the interim about using the handover, interactive, aggregate terminology. but i saw that this was not so easily done and we are all happy with your solution, so all good :)\nBefore distinguishing between protocols that do \"multipath\" and \"migrating between paths\", it might help to define these terms. This is tripping is up in QUIC, and IIRC the original \"Multipath TCP\" was pretty much the migration case. Something like, \"In this section we distinguish between 'multipath' protocols and protocols that allow migration between paths. The first means ...\"\nThis is in section 7.2 of taps-impl.\nWe define multipath (by policy in section 6.1.7 handover, interactive, aggregate) in the API doc. Section 4.2 in Implementation should use these definitions.\nI wanted to fix this now, but stumbled over the following sentence in the implementation draft: This talks about the protocol below the transport services system; do multipath-supporting protocols with no migration support exist? I struggle to imagine a scheduler that wouldn't make this happen... e.g. lowest latency scheduling, but with the exception: if a path becomes completely unavailable, keep going? :-) Looking at the three policies in section 6.1.7 of our API, clearly, migration should work with all of them (and \"Handover\" is limited in that it doesn't do anything else, but it's gotta be a subset of the others anyway, I'd say). Am I missing something? Just checking...\nI think the example of no migration support would be a multipath protocol that is not able to add new paths, so SCTP without support for the ADD_ADDR extension perhaps? Not sure this is a case we need to worry about though...\nAh, a multipath protocol that doesn't do multipath :-)\nWell, it can do multipath with the paths you give it from the beginning :) Possibly something else was intended with the text from the beginning, but this is what I could think of...\nLooks good to me!Thanks for fixing the nits!", "new_text": "Handover This property specifies the local policy for transferring data across multiple paths between the same end hosts if Multipath Transport is not set to Disabled (see multipath-mode). Possible values are: The connection ought only to attempt to migrate between different paths when the original path is lost or becomes unusable. The"} {"id": "q-en-api-drafts-8f9c56f87f5a245d0f1324150698bdd92ddbabf65cfdc43b9d4bad77262dc6cb", "old_text": "When a path change occurs, e.g., when the IP address of an interface changes or a new interface becomes available, the Transport Services implementation is responsible for notifying the application of the change. 
The path change may interrupt connectivity on a path for an active connection or provide an opportunity for a transport that supports multipath or migration to adapt to the new paths. For protocols that do not support multipath or migration, the Protocol Instances should be informed of the path change, but should", "comments": "Note, the change to the API draft here is a purely editorial detail that I fixed here in passing: the multipath selection property was renamed and this fixes the text referring to it.\n\"what we discussed\" - you mean in , where I say that I stumbled over a sentence? This PR does update that sentence (simply removing \"with migration support\" from it because any multipath protocol can do that).\nno, i meant what we discussed at the interim about using the handover, interactive, aggregate terminology. but i saw that this was not so easily done and we are all happy with your solution, so all good :)\nBefore distinguishing between protocols that do \"multipath\" and \"migrating between paths\", it might help to define these terms. This is tripping is up in QUIC, and IIRC the original \"Multipath TCP\" was pretty much the migration case. Something like, \"In this section we distinguish between 'multipath' protocols and protocols that allow migration between paths. The first means ...\"\nThis is in section 7.2 of taps-impl.\nWe define multipath (by policy in section 6.1.7 handover, interactive, aggregate) in the API doc. Section 4.2 in Implementation should use these definitions.\nI wanted to fix this now, but stumbled over the following sentence in the implementation draft: This talks about the protocol below the transport services system; do multipath-supporting protocols with no migration support exist? I struggle to imagine a scheduler that wouldn't make this happen... e.g. lowest latency scheduling, but with the exception: if a path becomes completely unavailable, keep going? :-) Looking at the three policies in section 6.1.7 of our API, clearly, migration should work with all of them (and \"Handover\" is limited in that it doesn't do anything else, but it's gotta be a subset of the others anyway, I'd say). Am I missing something? Just checking...\nI think the example of no migration support would be a multipath protocol that is not able to add new paths, so SCTP without support for the ADD_ADDR extension perhaps? Not sure this is a case we need to worry about though...\nAh, a multipath protocol that doesn't do multipath :-)\nWell, it can do multipath with the paths you give it from the beginning :) Possibly something else was intended with the text from the beginning, but this is what I could think of...\nLooks good to me!Thanks for fixing the nits!", "new_text": "When a path change occurs, e.g., when the IP address of an interface changes or a new interface becomes available, the Transport Services implementation is responsible for notifying the Protocol Instance of the change. The path change may interrupt connectivity on a path for an active connection or provide an opportunity for a transport that supports multipath or migration to adapt to the new paths. Note that, from the Transport Services API point of view, migration is considered a part of multipath connectivity; it is just a limiting policy on multipath usage. If the \"multipath\" Selection Property is set to \"Disabled\", migration is disallowed. 
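The handover/interactive/aggregate policy values mentioned in this thread map naturally onto a path-selection rule. The toy scheduler below paraphrases those semantics; the data structures and the exact meaning of each policy are editorial simplifications, not normative text.

~~~
from enum import Enum

class MultipathPolicy(Enum):
    HANDOVER = "handover"        # stay on one path; move only when it becomes unusable
    INTERACTIVE = "interactive"  # pick the lowest-latency usable path
    AGGREGATE = "aggregate"      # use all usable paths for capacity

def select_paths(paths, policy):
    usable = [p for p in paths if p["usable"]]
    if not usable:
        return []
    if policy is MultipathPolicy.AGGREGATE:
        return [p["name"] for p in usable]
    if policy is MultipathPolicy.INTERACTIVE:
        return [min(usable, key=lambda p: p["rtt_ms"])["name"]]
    return [usable[0]["name"]]  # HANDOVER: keep the original path while it still works

paths = [{"name": "wifi", "usable": True, "rtt_ms": 30},
         {"name": "lte", "usable": True, "rtt_ms": 55}]
print(select_paths(paths, MultipathPolicy.AGGREGATE))    # ['wifi', 'lte']
print(select_paths(paths, MultipathPolicy.INTERACTIVE))  # ['wifi']
~~~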
For protocols that do not support multipath or migration, the Protocol Instances should be informed of the path change, but should"} {"id": "q-en-api-drafts-8f9c56f87f5a245d0f1324150698bdd92ddbabf65cfdc43b9d4bad77262dc6cb", "old_text": "implementation should not aggressively close connections in these scenarios. If the Protocol Stack includes a transport protocol that also supports multipath connectivity with migration support, the Transport Services implementation should also inform the Protocol Instance of potentially new paths that become permissible based on the Selection Properties passed by the application. A protocol can then establish new subflows over new paths while an active path is still available or, if migration is supported, also after a break has been detected, and should attempt to tear down subflows over paths that are no longer used. The Transport Services API provides an interface to set a multipath policy that indicates when and how different paths should be used. However, detailed handling of these policies is still implementation-specific. The decision about when to create a new path or to announce a new path or set of paths to the remote endpoint, e.g., in the form of additional IP addresses, is implementation-specific or could be be supported by future API extensions. If the Protocol Stack includes a transport protocol that does not support multipath, but does support migrating between paths, the update to the set of available paths can trigger the connection to be migrated. In case of Pooled Connections pooled-connections, the transport system may add connections over new paths or different protocols to the pool if permissible based on the multipath policy and Selection Properties. In case a previously used path becomes unavailable, the transport system may disconnect all connections that require this path, but should not disconnect the pooled connection object exposed", "comments": "Note, the change to the API draft here is a purely editorial detail that I fixed here in passing: the multipath selection property was renamed and this fixes the text referring to it.\n\"what we discussed\" - you mean in , where I say that I stumbled over a sentence? This PR does update that sentence (simply removing \"with migration support\" from it because any multipath protocol can do that).\nno, i meant what we discussed at the interim about using the handover, interactive, aggregate terminology. but i saw that this was not so easily done and we are all happy with your solution, so all good :)\nBefore distinguishing between protocols that do \"multipath\" and \"migrating between paths\", it might help to define these terms. This is tripping is up in QUIC, and IIRC the original \"Multipath TCP\" was pretty much the migration case. Something like, \"In this section we distinguish between 'multipath' protocols and protocols that allow migration between paths. The first means ...\"\nThis is in section 7.2 of taps-impl.\nWe define multipath (by policy in section 6.1.7 handover, interactive, aggregate) in the API doc. Section 4.2 in Implementation should use these definitions.\nI wanted to fix this now, but stumbled over the following sentence in the implementation draft: This talks about the protocol below the transport services system; do multipath-supporting protocols with no migration support exist? I struggle to imagine a scheduler that wouldn't make this happen... e.g. lowest latency scheduling, but with the exception: if a path becomes completely unavailable, keep going? 
:-) Looking at the three policies in section 6.1.7 of our API, clearly, migration should work with all of them (and \"Handover\" is limited in that it doesn't do anything else, but it's gotta be a subset of the others anyway, I'd say). Am I missing something? Just checking...\nI think the example of no migration support would be a multipath protocol that is not able to add new paths, so SCTP without support for the ADD_ADDR extension perhaps? Not sure this is a case we need to worry about though...\nAh, a multipath protocol that doesn't do multipath :-)\nWell, it can do multipath with the paths you give it from the beginning :) Possibly something else was intended with the text from the beginning, but this is what I could think of...\nLooks good to me!Thanks for fixing the nits!", "new_text": "implementation should not aggressively close connections in these scenarios. If the Protocol Stack includes a transport protocol that supports multipath connectivity, the Transport Services implementation should also inform the Protocol Instance of potentially new paths that become permissible based on the \"multipath\" Selection Property and the \"multipath-policy\" Connection Property choices made by the application. A protocol can then establish new subflows over new paths while an active path is still available or, if migration is supported, also after a break has been detected, and should attempt to tear down subflows over paths that are no longer used. The Transport Services API's Connection Property \"multipath-policy\" allows an application to indicate when and how different paths should be used. However, detailed handling of these policies is still implementation-specific. For example, if the \"multipath\" Selection Property is set to \"active\", the decision about when to create a new path or to announce a new path or set of paths to the remote endpoint, e.g., in the form of additional IP addresses, is implementation-specific. If the Protocol Stack includes a transport protocol that does not support multipath, but does support migrating between paths, the update to the set of available paths can trigger the connection to be migrated. In case of Pooled Connections pooled-connections, the Transport Services implementation may add connections over new paths to the pool if permissible based on the multipath policy and Selection Properties. In case a previously used path becomes unavailable, the transport system may disconnect all connections that require this path, but should not disconnect the pooled connection object exposed"} {"id": "q-en-api-drafts-0046b69a6cc7f929e402b071fcd5400eb57c489951192ea129283b830f93f4db", "old_text": "(e.g. I-D.ietf-httpbis-priority. A Transport Services system gives no guarantees about how its expression of relative priorities will be realized; for example, if a transport stack that only provides a single in-order reliable stream is selected, prioritization information can only be ignored. However, the Transport Services system will seek to ensure that performance of relatively-prioritized connections and messages is not worse with respect to those connections and messages than an equivalent configuration in which all prioritization properties are left at their defaults. The Transport Services interface does order Connection Priority over the Priority Message Property. In the absense of other externalities", "comments": "Section 7.2.6. 
says \" for example, if a transport stack that only provides a single in-order reliable stream is selected, prioritization information can only be ignored.\" I don't think this is entirely true as you can still decide in which order you schedule independent messages on one stream.\nI don't get this. The sentence says \"in-order\". Scheduling them from buffer A into buffer B in accordance with priorities won't help when everything leaves buffer B in a fixed sequence.\nHow about a less complex sentence: ...\"for example, if the transport system selects a stack that only provides a single in-order reliable stream, the prioritization information is ignored.\"\nmessage on the taps \"layer\" are independent. so the order an application calls send on these message doesn't have to be the same order the messages are send on an \"in-order stream\"\nOh, now I see what you mean. Hmmm... looking at the text, it seems that we could just delete this sentence and the problem is solved?\nI think what was intended with the example was to show that the set priorities may not have any effect. Perhaps we should instead just add \"and the set priorities may not have any effect\" if we want to be explicit? Removing it also works though.\nI think just add that!\nI'm for removing it (I think I wrote this sentence btw :-) ). Giving this example just throws off someone who thinks about this case like Mirja did here - even if we say \"may not apply\", the reader may (correctly) think \"but why, in this case they CAN well apply!\". So I don't find that helpful. For context, the full current statement is: and I believe that we say everything we need to say if we just change this into:\nI like simple examples, this one wasn't great so this is OK.", "new_text": "(e.g. I-D.ietf-httpbis-priority. A Transport Services system gives no guarantees about how its expression of relative priorities will be realized. However, the Transport Services system will seek to ensure that performance of relatively-prioritized connections and messages is not worse with respect to those connections and messages than an equivalent configuration in which all prioritization properties are left at their defaults. The Transport Services interface does order Connection Priority over the Priority Message Property. In the absense of other externalities"} {"id": "q-en-api-drafts-1e0a36d58a00729c55f86464a3fd92c1f65eca23f03854096d719c7bd5210c05", "old_text": "Preference Prefer This property specifies whether the application needs or prefers to use a transport protocol that preserves message boundaries.", "comments": "I am sure this has been discussed, but reading now it seems strange to me to use \"Prefer\" as the default for a Selection Property that affects how the application interacts with the API. Using \"Prefer\" means that I do not know what service I will get when using the default settings? In this case I do not know if I get messages or not if I use the defaults.\nThis should be Ignore as a default.", "new_text": "Preference Ignore This property specifies whether the application needs or prefers to use a transport protocol that preserves message boundaries."} {"id": "q-en-api-drafts-b1d0fba076cf81ebb23e7332e615c83fe0d42f3f47832528831b54df52b9e009", "old_text": "applications written to a single API to make use of transport protocols in terms of the features they provide. A unified interface to datagram and connection-oriented transports, allowing use of a common API for connection- establishment and closing. 
Message-orientation, as opposed to stream-orientation, using application-assisted framing and deframing where the underlying", "comments": "Interface, Section 2, \"A unified interface to datagram and connection-oriented transports, allowing use of a common API for connection-establishment and closing\". Should this be \"\u2026to datagram and stream-oriented transports\"? At what point do we explain that and are also used for connectionless protocols? This change would finesse the question here, leaving it for Section 3, or we can restructure the draft to clarify earlier.\nYes to (1): Datagram protocols can be connection-oriented or not; however the point here seems to be either: /interface to datagram and stream-oriented transports/ or /interface to both connection-less and connection-oriented transports/ ... I'd favour the first, it's clear enough and doesn't use terms that we later re-use. Also to (2): I think we could insert a specific sentence after: \" A Connection represents an instance of a transport Protocol Stack on which data can be sent to and/or received from a Remote Endpoint (i.e., depending on the kind of transport, connections can be bi-directional or unidirectional). \" That explicitly says something like: \"This connections are presently consistently to the application, irrespective of whether the underlying transport is connection-less and connection-oriented.\"", "new_text": "applications written to a single API to make use of transport protocols in terms of the features they provide. A unified interface to datagram and stream-oriented transports, allowing use of a common API for connection establishment and closing. Message-orientation, as opposed to stream-orientation, using application-assisted framing and deframing where the underlying"} {"id": "q-en-api-drafts-b1d0fba076cf81ebb23e7332e615c83fe0d42f3f47832528831b54df52b9e009", "old_text": "and/or received from a Remote Endpoint (i.e., a logical connection that, depending on the kind of transport, can be bi-directional or unidirectional, and that can use a stream protocol or a datagram protocol). Connections can be created from Preconnections in three ways: by initiating the Preconnection (i.e., actively opening, as in a client; initiate), through listening on the Preconnection (i.e., passively opening, as in a server listen), or rendezvousing on the Preconnection (i.e., peer to peer establishment; rendezvous). Once a Connection is established, data can be sent and received on it in the form of Messages. The interface supports the preservation of", "comments": "Interface, Section 2, \"A unified interface to datagram and connection-oriented transports, allowing use of a common API for connection-establishment and closing\". Should this be \"\u2026to datagram and stream-oriented transports\"? At what point do we explain that and are also used for connectionless protocols? This change would finesse the question here, leaving it for Section 3, or we can restructure the draft to clarify earlier.\nYes to (1): Datagram protocols can be connection-oriented or not; however the point here seems to be either: /interface to datagram and stream-oriented transports/ or /interface to both connection-less and connection-oriented transports/ ... I'd favour the first, it's clear enough and doesn't use terms that we later re-use. 
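The prioritization record above states that Connection Priority is ordered over the Message Priority property. Below is a small sketch of what that ordering could mean for a sender's queue; the tuple layout and the lower-number-wins convention are assumptions made for illustration.

~~~
# Each queued item: (connection_priority, message_priority, payload); lower value = higher precedence.
queue = [
    (1, 5, b"conn-A background Message"),
    (2, 0, b"conn-B urgent Message"),
    (1, 0, b"conn-A urgent Message"),
]

# Connection Priority is compared first; Message Priority only breaks ties within one Connection.
for _, _, payload in sorted(queue, key=lambda item: (item[0], item[1])):
    print(payload)
# b'conn-A urgent Message', b'conn-A background Message', b'conn-B urgent Message'
~~~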
Also to (2): I think we could insert a specific sentence after: \" A Connection represents an instance of a transport Protocol Stack on which data can be sent to and/or received from a Remote Endpoint (i.e., depending on the kind of transport, connections can be bi-directional or unidirectional). \" That explicitly says something like: \"This connections are presently consistently to the application, irrespective of whether the underlying transport is connection-less and connection-oriented.\"", "new_text": "and/or received from a Remote Endpoint (i.e., a logical connection that, depending on the kind of transport, can be bi-directional or unidirectional, and that can use a stream protocol or a datagram protocol). Connections are presented consistently to the application, irrespective of whether the underlying transport is connection-less or connection-oriented. Connections can be created from Preconnections in three ways: by initiating the Preconnection (i.e., actively opening, as in a client; initiate), through listening on the Preconnection (i.e., passively opening, as in a server listen), or rendezvousing on the Preconnection (i.e., peer to peer establishment; rendezvous). Once a Connection is established, data can be sent and received on it in the form of Messages. The interface supports the preservation of"} {"id": "q-en-api-drafts-a01d78373105caae9b3eec84cbe88ae8783860f0ad78bef17629987d1caf47c6", "old_text": "The messageData object provides access to the bytes that were received for this Message, along with the length of the byte array. The messageContext is provided to enable retrieving metadata about the message and referring to the message, e.g., to send replies and map responses to their requests. See msg-ctx for details. See framing for handling Message framing in situations where the Protocol Stack only provides a byte-stream transport.", "comments": "The text below is a bit confusing as there are no details about how to send replies and map responses to their requests in Section 7.1.1. This can be easily fixed by clarifying what Section 7.1.1 provides details on. But I was also wondering if there should be some more details on how to send replies and map responses to their requests somewhere?\nNote: this will be in \u00a79.3.2.1. after merge.\nIt seems the concept of mapping requests/replies has been fully eliminated. So just removing this part of the example.\nThanks Philipp, LGTM.", "new_text": "The messageData object provides access to the bytes that were received for this Message, along with the length of the byte array. The messageContext is provided to enable retrieving metadata about the message and referring to the message. The messageContext object is described in msg-ctx. See framing for handling Message framing in situations where the Protocol Stack only provides a byte-stream transport."} {"id": "q-en-api-drafts-d8162328cad464bd03b7a68b32431688cb8b8d6f8d07b8d524ea06e23d792237", "old_text": "is Any-source Multicast, and the path selection is based on the outbound route to the group supplied in the Local Endpoint. UDP Multicast Receive Listeners will deliver new connections once they have received traffic from a new Remote Endpoint.", "comments": "This adds SO_reuse commentary on the multicast UDP service. It seeks to\nThe implementation document should explain IGMP joining and SO_REUSEADDR for multicast\nText blob for thought or incorporation: In some uses, it is required to open multiple connections for the same address(es).
This is used for some multicast applications. For example, one connection might be opened to listen to a multicast group, and later a separate one to send to the same group; two applications might independently listen to the same address(es) to send signals to and/or receive signals from a common multicast control bus. In these cases, the interface needs to explicitly enable re-use of the same set of addresses (equivalent to setting SO_REUSEADDR in the socket API).", "new_text": "is Any-source Multicast, and the path selection is based on the outbound route to the group supplied in the Local Endpoint. There are cases where it is required to open multiple connections for the same address(es). For example, one Connection might be opened for a multicast group to form a multicast control bus, and another application later opens a separate Connection to the same group to send signals to and/or receive signals from the common bus. In such cases, the interface needs to explicitly enable re-use of the same set of addresses (equivalent to setting SO_REUSEADDR in the socket API). UDP Multicast Receive Listeners will deliver new connections once they have received traffic from a new Remote Endpoint."} {"id": "q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43", "old_text": "3. The Transport Services API is the basic common abstract application programming interface to the Transport Services Architecture defined in the TAPS Architecture I-D.ietf-taps-arch. An application primarily interacts with this API through two Objects: Preconnections and Connections. A Preconnection object (pre- establishment) represents a set of properties and constraints on the", "comments": ", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, and different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may be configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice.", "new_text": "3. An application primarily interacts with this API through two Objects: Preconnections and Connections.
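The SO_REUSEADDR behaviour discussed in the multicast record above can be shown with ordinary sockets. This is a standalone sketch rather than Transport Services code; the group address and port are arbitrary examples, and some BSD-derived systems additionally need SO_REUSEPORT.

~~~
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007  # arbitrary example group and port

def open_group_member():
    """Open a UDP socket that shares the multicast group/port with other local sockets."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Allow several local Connections/applications to bind the same address and port.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

bus_listener = open_group_member()   # e.g. a Connection listening on the control bus
second_member = open_group_member()  # a second, independent member of the same group
~~~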
A Preconnection object (pre- establishment) represents a set of properties and constraints on the"} {"id": "q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43", "old_text": "Preconnections are reusable after being used to initiate a Connection. Hence, for example, after the Connections were closed, the following would be correct: ~~~ //.. carry out adjustments to the Preconnection, if desire Connection := Preconnection.Initiate() ~~~ 3.1.3.", "comments": ", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, an different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice.", "new_text": "Preconnections are reusable after being used to initiate a Connection. Hence, for example, after the Connections were closed, the following would be correct: 3.1.3."} {"id": "q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43", "old_text": "4. Each application using the Transport Services Interface declares its preferences for how the transport service should operate using properties at each stage of the lifetime of a connection using Transport Properties, as defined in I-D.ietf-taps-arch. Transport Properties are divided into Selection, Connection, and Message Properties. Selection Properties (see selection-props) can", "comments": ", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, an different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. 
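The record above preserves the point that a Preconnection remains usable after the Connections it produced are closed (the draft's own example is Connection := Preconnection.Initiate()). The stand-in classes below only mimic that object lifecycle and are not an implementation of the interface.

~~~
class Connection:
    def __init__(self, remote):
        self.remote, self.is_open = remote, True
    def close(self):
        self.is_open = False

class Preconnection:
    """Stand-in object: holds endpoints/properties and can be initiated more than once."""
    def __init__(self, remote):
        self.remote = remote
    def initiate(self):
        return Connection(self.remote)

pre = Preconnection("example.com:443")
first = pre.initiate()
first.close()
# ... adjust Preconnection properties here if desired ...
second = pre.initiate()  # the same Preconnection is reusable after the first Connection closed
print(first.is_open, second.is_open)  # False True
~~~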
Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice.", "new_text": "4. Each application using the Transport Services Interface declares its preferences for how the transport service should operate. This is done by using Transport Properties, as defined in I-D.ietf-taps-arch, at each stage of the lifetime of a connection. Transport Properties are divided into Selection, Connection, and Message Properties. Selection Properties (see selection-props) can"} {"id": "q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43", "old_text": "Namespaces for each of the keywords provided in the IANA protocol numbers registry (see https://www.iana.org/assignments/protocol- numbers/protocol-numbers.xhtml), reformatted where necessary to conform to an implementation's naming conventions, are reserved for Protocol Specific Properties and MUST NOT be used for vendor or implementation-specific properties. 4.2.", "comments": ", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, an different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice.", "new_text": "Namespaces for each of the keywords provided in the IANA protocol numbers registry (see https://www.iana.org/assignments/protocol- numbers/protocol-numbers.xhtml) are reserved for Protocol Specific Properties and MUST NOT be used for vendor or implementation-specific properties. Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor- or implementation- specific property name. 
4.2."} {"id": "q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43", "old_text": "If more than one Remote Endpoint is specified on the Preconnection, then all the Remote Endpoints on the Preconnection SHOULD represent the same host. For example, the Remote Endpoints might represent various network interfaces of a host, or a server reflexive address that can be used to reach a host, or a set of hosts that provide equivalent local balanced service.", "comments": ", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, an different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice.", "new_text": "If more than one Remote Endpoint is specified on the Preconnection, then all the Remote Endpoints on the Preconnection SHOULD represent the same service. For example, the Remote Endpoints might represent various network interfaces of a host, or a server reflexive address that can be used to reach a host, or a set of hosts that provide equivalent local balanced service."} {"id": "q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43", "old_text": "Full Coverage This property specifies the minimum number of bytes in a received message that need to be covered by a checksum. A special value of 0 means that a received packet does not need to have a non-zero checksum field. A receiving endpoint will not forward messages that have less coverage to the application. The application is responsible for handling any corruption within the non-protected part of the message RFC8085. 8.1.2.", "comments": ", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, an different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. 
I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice.", "new_text": "Full Coverage This property specifies the minimum number of bytes in a received message that need to be covered by a checksum. A receiving endpoint will not forward messages that have less coverage to the application. The application is responsible for handling any corruption within the non-protected part of the message RFC8085. A special value of 0 means that a received packet may also have a zero checksum field. 8.1.2."} {"id": "q-en-api-drafts-cf6cb0a84bbdd56992c2430a010900c879e11665eaf7d9f68464ae53bc11b76e", "old_text": "to a single address over a single interface. They also present a single stream to the application. Software layers built upon sockets often propagate this limitation of a single-address single-stream model. The Transport Services architecture is designed to handle multiple candidate endpoints, protocols, and paths; and support multipath and multistreaming protocols. Transport Services implementations are meant to be flexible at connection establishment time, considering many different options and trying to select the most optimal combinations (gathering and racing). This requires applications to provide higher-level endpoints than IP addresses, such as hostnames and URLs, which are used by a Transport Services implementation for resolution, path selection, and racing. Transport services implementations can further implement fallback mechanisms if connection establishment of one protocol fails or performance is detected to be unsatisfactory. Flexibility after connection establishment is also important. Transport protocols that can migrate between multiple network-layer", "comments": "In (Arch) section 3.3, you are talking about racing before you've described what it is. At the point where you note that racing should only happen over stacks with identical security properties, note when doing so might introduce privacy issues and what can be done about them.\nDescribe what you mean by caching and why you might want to isolate sessions earlier.", "new_text": "to a single address over a single interface. They also present a single stream to the application. Software layers built upon sockets often propagate this limitation of a single-address single-stream model. The Transport Services architecture is designed: to handle multiple candidate endpoints, protocols, and paths; to support candidate protocol racing to select the most optimal stack in each situation; to support multipath and multistreaming protocols; to provide state caching and application control over it. 
Transport Services implementations are meant to be flexible at connection establishment time, considering many different options and trying to select the most optimal combinations by racing them and measuring the results (see gathering and racing). This requires applications to provide higher-level endpoints than IP addresses, such as hostnames and URLs, which are used by a Transport Services implementation for resolution, path selection, and racing. Transport Services implementations can further implement fallback mechanisms if connection establishment of one protocol fails or performance is detected to be unsatisfactory. Information used in connection establishment (e.g. cryptographic resumption tokens, information about usability of certain protocols on the path, results of racing in previous connections) are cached in the transport services implementation. Applications have control over whether this information is used for a specific establishment, in order to allow tradeoffs between efficiency and linkability. Flexibility after connection establishment is also important. Transport protocols that can migrate between multiple network-layer"} {"id": "q-en-api-drafts-cf6cb0a84bbdd56992c2430a010900c879e11665eaf7d9f68464ae53bc11b76e", "old_text": "system can race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Applications need to ensure that they use security APIs appropriately. In cases where applications use an interface to provide sensitive keying material, e.g., access to private keys or", "comments": "In (Arch) section 3.3, you are talking about racing before you've described what it is. At the point where you note that racing should only happen over stacks with identical security properties, note when doing so might introduce privacy issues and what can be done about them.\nDescribe what you mean by caching and why you might want to isolate sessions earlier.", "new_text": "system can race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Whether information from previous racing attempts, or other information cached by the Transport Services implementation about past communications, is used during establishment is under application control. This allows applications to make tradeoffs between efficiency (through racing) and privacy (via information that might leak from the cache toward an on-path observer). Some applications have native concepts (e.g. \"incognito mode\") that align with this functionality. Applications need to ensure that they use security APIs appropriately. In cases where applications use an interface to provide sensitive keying material, e.g., access to private keys or"} {"id": "q-en-api-drafts-f8e7fe4902326290afbf688f406fea9824a07e6c9ae47c8e110ae6e064b3ee24", "old_text": "application. In general, any protocol or path used for a connection must conform to all three sources of constraints. A violation of any of the layers should cause a protocol or path to be considered ineligible for use. For an example of application preferences leading to constraints, an application may prohibit the use of metered network interfaces for a given Connection to avoid user cost. Similarly, the system policy at a given time may prohibit the use of such a metered network interface from the application's process. 
Lastly, the implementation itself may default to disallowing certain network interfaces unless explicitly requested by the application and allowed by the system. It is expected that the database of system policies and the method of looking up these policies will vary across various platforms. An", "comments": "Fix to Issue\nWe don\u2019t mention layers in the text before: \u201cA violation of any of the layers should cause\u201d Is this better as: \u201d A violation that occurs at any of the policer layers should cause\u201d\nFixed", "new_text": "application. In general, any protocol or path used for a connection must conform to all three sources of constraints. A violation that occurs at any of the policy layers should cause a protocol or path to be considered ineligible for use. For an example of application preferences leading to constraints, an application may prohibit the use of metered network interfaces for a given Connection to avoid user cost. Similarly, the system policy at a given time may prohibit the use of such a metered network interface from the application's process. Lastly, the implementation itself may default to disallowing certain network interfaces unless explicitly requested by the application and allowed by the system. It is expected that the database of system policies and the method of looking up these policies will vary across various platforms. An"} {"id": "q-en-autonomic-control-plane-1dd4a3ac7926bcbc26801cd5171ebf8aee21ffb6d7923fe06b890b70b226ac3c", "old_text": "certificate directly instead of relying on a referential mechanism such as communicating only a hash and/or URL for the certificate. Any security association protocol MUST use PFS (such as profiles providing PFS). The degree of security required on every hop of an ACP network needs to be consistent across the network so that there is no designated", "comments": "During IETF 106 I solicited some feedback on the ACP IPsec/IKEv2 usage from Valery Smyslov; I've tried to digest that feedback into new text here. Some aspects are still subject to change. I also made a minor tweak to the DTLS section to bring it into parity with the IPsec one.\nMerging first, then fixing.", "new_text": "certificate directly instead of relying on a referential mechanism such as communicating only a hash and/or URL for the certificate. Any security association protocol MUST provide Forward Secrecy (whether inherently or as part of a profile of the security association protocol). The degree of security required on every hop of an ACP network needs to be consistent across the network so that there is no designated"} {"id": "q-en-autonomic-control-plane-1dd4a3ac7926bcbc26801cd5171ebf8aee21ffb6d7923fe06b890b70b226ac3c", "old_text": "6.7.1. An ACP node announces its ability to support IKEv2 as the ACP secure channel protocol in GRASP as \"IKEv2\". 6.7.1.1. To run ACP via IPsec natively, no further IANA assignments/ definitions are required. An ACP node that is supporting native IPsec MUST use IPsec security setup via IKEv2 for tunnel mode and IPsec/IKE signaling accordingly for IPv6 payload (e.g.: ESP next header of 41). It MUST use local and peer link-local IPv6 addresses for encapsulation. Authentication MUST use the ACP domain certificates. Certificate Encoding MUST support \"PKCS #7 wrapped X.509 certificate\" (0) (see IKEV2IANA for this and other IANA IKEv2 parameter names used in this text). 
If certificate chains are used, all intermediate certificates up to, but not including the locally provisioned trust anchor certificate must be signaled. IPsec tunnel mode is required because the ACP will route/forward packets received from any other ACP node across the ACP secure", "comments": "During IETF 106 I solicited some feedback on the ACP IPsec/IKEv2 usage from Valery Smyslov; I've tried to digest that feedback into new text here. Some aspects are still subject to change. I also made a minor tweak to the DTLS section to bring it into parity with the IPsec one.\nMerging first, then fixing.", "new_text": "6.7.1. An ACP node announces its ability to support IPsec, negotiated via IKEv2, as the ACP secure channel protocol in GRASP as the \"IKEv2\" method for the \"AN_ACP\" objective. 6.7.1.1. To run ACP via IPsec natively, no further IANA assignments/ definitions are required. The ACP usage of IPsec and IKEv2 mandates a narrow profile of the current standards-track usage guidance for IPsec RFC8221 and IKEv2 RFC8247. This profile provides for stringent security properties and can exclude deprecated/legacy algorithms because there is no need for interoperability with legacy equipment for ACP channels. Any such backward compatibility would lead only to increased attack surface and implementation complexity, for no benefit. An ACP node that is supporting native IPsec MUST use IPsec in tunnel mode, negotiated via IKEv2, and with IPv6 payload (e.g., ESP Next Header of 41). It MUST use local and peer link-local IPv6 addresses for encapsulation. Manual keying MUST NOT be used (independent of the security weaknesses of manual keying, it is incompatible with an autonomic platform). IPsec tunnel mode is required because the ACP will route/forward packets received from any other ACP node across the ACP secure"} {"id": "q-en-autonomic-control-plane-1dd4a3ac7926bcbc26801cd5171ebf8aee21ffb6d7923fe06b890b70b226ac3c", "old_text": "transport mode, it would only be possible to send packets originated by the ACP node itself. IKEv2 authentication MUST use authentication method=1 (\"RSA Digital Signature\") for ACP certificates with RSA key and methods 9,10,11 for ACP certificates with ECC key according to the keys size/curve. A certificate payload with the ACP certificate MUST be included during IKEv2 authentication to support the ACP domain membership as described in certcheck, because it is using additional elements of the ACP certificates. ACP peers are expected to have the same set of Trust Anchors (TA), so a certificate path MUST only be included in the signaled payload when the path contains intermediate certificates not in the TA set, such as sub-CAs (see acp-registrars). IPsec MUST support ESP with ENCR_AES_GCM_16 (RFC4106) due to its higher performance over ENCR_AES_CBC. ACP MUST NOT use any NULL encryption option due to the confidentiality of ACP payload that may not be encrypted by itself (when carrying legacy management protocol traffics as well as hop-by-hop GRASP). These requirements are based on RFC8221 but limited to the minimum necessary options because there is no need for interoperability with legacy equipment for ACP channels, instead any such backward compatibility would reduce the minimum security or performance required for ACP channels or increase the implementation complexity of the ACP. 
Once there are updates to RFC8221, these should accordingly be reflected in updates to these ACP requirements, for example if", "comments": "During IETF 106 I solicited some feedback on the ACP IPsec/IKEv2 usage from Valery Smyslov; I've tried to digest that feedback into new text here. Some aspects are still subject to change. I also made a minor tweak to the DTLS section to bring it into parity with the IPsec one.\nMerging first, then fixing.", "new_text": "transport mode, it would only be possible to send packets originated by the ACP node itself. ACP IPsec implementations MUST support ESP with ENCR_AES_GCM_16 (RFC4106) due to its higher performance over ENCR_AES_CBC. ACP MUST NOT use any NULL encryption option due to the confidentiality of ACP payload that may not be encrypted by itself (when carrying legacy management protocol traffics as well as hop-by-hop GRASP). When using AES encryption, 256-bit keys MUST be used. AES-GCM is an AEAD cipher mode, so no ESP authentication algorithm requirement is needed. Once there are updates to RFC8221, these should accordingly be reflected in updates to these ACP requirements, for example if"} {"id": "q-en-autonomic-control-plane-1dd4a3ac7926bcbc26801cd5171ebf8aee21ffb6d7923fe06b890b70b226ac3c", "old_text": "long as they do not result in a reduction of security over the above MTI requirements. For example, ESP compression MAY be used. IKEv2 MUST follow RFC8247 as necessary to support the above listed IPsec requirements. 6.7.1.2.", "comments": "During IETF 106 I solicited some feedback on the ACP IPsec/IKEv2 usage from Valery Smyslov; I've tried to digest that feedback into new text here. Some aspects are still subject to change. I also made a minor tweak to the DTLS section to bring it into parity with the IPsec one.\nMerging first, then fixing.", "new_text": "long as they do not result in a reduction of security over the above MTI requirements. For example, ESP compression MAY be used. As for ESP (above), for IKEv2, the ENCR_AES_GCM_16 encryption algorithm (with 256-bit keys) MUST be supported. The IKEv2 PRF_HMAC_SHA2_512 pseudorandom function MUST be supported. IKEv2 Diffie-Hellman key exchange group 19 (256-bit random ECP) MUST be supported (in addition to 2048-bit MODP). ECC provides a similar security level to finite-field (MODP) key exchange with a shorter key length, so is generally preferred absent other considerations. IKEv2 authentication MUST use the ACP domain certificates. The Certificate Encoding \"PKCS #7 wrapped X.509 certificate\" (1) MUST be supported (see IKEV2IANA for this and other IANA IKEv2 parameter names used in this text). If certificate chains are used, all intermediate certificates up to, but not including the locally provisioned trust anchor certificate must be signaled. A certificate payload with the ACP certificate MUST be included during IKEv2 authentication to support the ACP domain membership check as described in certcheck, because it is using additional elements of the ACP certificates. ACP peers are expected to have the same set of Trust Anchors (TA), so a certificate path MUST only be included in the signaled payload when the path contains intermediate certificates not in the TA set, such as sub-CAs (see acp-registrars). ACP nodes are identified by their ACP address, so the ID_IPv6_ADDR IKEv2 identification payload MUST be used and MUST convey the ACP address. 
If the peer's ACP domain certificate includes an ACP address in the domain information, the address in the identification payload must match the address in the certificate (allowing for the presence of virtualization bits in the ACP addressing scheme used). IKEv2 authentication MUST use authentication method 14 (\"Digital Signature\") for ACP certificates; this authentication method can be used with both RSA and ECDSA certificates, as indicated by a PKIX- style OID. The Digital Signature hash SHA2-512 MUST be supported (in addition to SHA2-256). Once there are updates to RFC8247, these should accordingly be reflected in updates to these ACP requirements, for example if ENCR_AES_GCM_16 was to be superceeded in the future or the key- exchange group recommendations are changed. Additional requirements from RFC8247 MAY be used for ACP channels as long as they do not result in a reduction of security over the above MTI requirements. 6.7.1.2."} {"id": "q-en-autonomic-control-plane-1dd4a3ac7926bcbc26801cd5171ebf8aee21ffb6d7923fe06b890b70b226ac3c", "old_text": "the first transport encryption supported in some classes of constrained devices. To run ACP via UDP and DTLS v1.2 RFC6347 a locally assigned UDP port is used that is announced as a parameter in the GRASP AN_ACP objective to candidate neighbors.", "comments": "During IETF 106 I solicited some feedback on the ACP IPsec/IKEv2 usage from Valery Smyslov; I've tried to digest that feedback into new text here. Some aspects are still subject to change. I also made a minor tweak to the DTLS section to bring it into parity with the IPsec one.\nMerging first, then fixing.", "new_text": "the first transport encryption supported in some classes of constrained devices. An ACP node announces its ability to support DTLS as the ACP secure channel protocol in GRASP as the \"DTLS\" method for the \"AN_ACP\" objective. To run ACP via UDP and DTLS v1.2 RFC6347 a locally assigned UDP port is used that is announced as a parameter in the GRASP AN_ACP objective to candidate neighbors."} {"id": "q-en-basic-yang-module-afeca7b6d0c19fcb03973e0be1f014be02b5d89dc98fd854788d4a480d716cd1", "old_text": "Abstract This document defines a YANG RPC and a minimal datastore tree required to retrieve attestation evidence about integrity measurements from a composite device with one or more roots of trust for reporting. Complementary measurement logs are also provided by the YANG RPC originating from one or more roots of trust of measurement. The module defined requires at least one TPM 1.2 or TPM 2.0 and corresponding Trusted Software Stack included in the device components of the composite device the YANG server is running on. 1. This document is based on the terminology defined in the I-D.ietf- rats-architecture and uses the interaction model and information elements defined in the I-D.birkholz-rats-reference-interaction-model document. The currently supported hardware security modules (HWM) - sometimes also referred to as an embedded secure element (eSE) - is the Trusted Platform Module (TPM) version 1.2 and 2.0 specified by the Trusted Computing Group (TCG). One ore more TPMs embedded in the components of a composite device - sometimes also referred to as an aggregate device - are required in order to use the YANG module defined in this document. A TPM is used as a root of trust for reporting (RTR) in order to retrieve attestation evidence from a composite device (quote primitive operation). 
Additionally, it is used as a root of trust for storage (RTS) in order to retain shielded secrets and store system measurements using a folding hash function (extend primitive operation). 1.1.", "comments": "Provides more context around the various elements of the model\nThanks a lot for your stab at Security Considerations! Also the I-D was in dire need for some expositional english text. Reads straight forward and comprehensive.", "new_text": "Abstract This document defines a YANG RPC and a minimal datastore required to retrieve attestation evidence about integrity measurements from a device following the operational context defined in I-D.ietf-rats- tpm-based-network-device-attest. Complementary measurement logs are also provided by the YANG RPC originating from one or more roots of trust of measurement. The module defined requires at least one TPM 1.2 or TPM 2.0 and corresponding Trusted Software Stack included in the device components of the composite device the YANG server is running on. 1. This document is based on the terminology defined in the I-D.ietf- rats-architecture and uses the operational context defined in I- D.ietf-rats-tpm-based-network-device-attest as well as the interaction model and information elements defined in I-D.birkholz- rats-reference-interaction-model. The currently supported hardware security modules (HWM) are the Trusted Platform Module (TPM) TPM1.2 and TPM2.0 specified by the Trusted Computing Group (TCG). One ore more TPMs embedded in the components of a composite device - sometimes also referred to as an aggregate device - are required in order to use the YANG module defined in this document. A TPM is used as a root of trust for reporting (RTR) in order to retrieve attestation evidence from a composite device (quote primitive operation). Additionally, it is used as a root of trust for storage (RTS) in order to retain shielded secrets and store system measurements using a folding hash function (extend primitive operation). 1.1."} {"id": "q-en-basic-yang-module-afeca7b6d0c19fcb03973e0be1f014be02b5d89dc98fd854788d4a480d716cd1", "old_text": "2.2.1. This YANG module imports modules from RFC6991, RFC8348, I-D.ietf- netconf-crypto-types, ietf-asymmetric-algs.yang. 2.3. Cryptographic algorithm types were initially included within -v14 NETCONF's iana-crypto-types.yang. Unfortunately all this content including the algorithms needed here failed to make the -v15 used WGLC. Therefore a modified version of this draft is included here. Perhaps someone will steward this list as a separate draft. 3. This document will include requests to IANA: To be defined yet. 4. There are always some. 5. Changes from version 01 to version 02: Extracted Crypto-types into a separate YANG file", "comments": "Provides more context around the various elements of the model\nThanks a lot for your stab at Security Considerations! Also the I-D was in dire need for some expositional english text. Reads straight forward and comprehensive.", "new_text": "2.2.1. This YANG module imports modules from RFC6991, RFC8348, I-D.ietf- netconf-keystore, ietf-tcg-algs.yang. 2.2.1.1. This module supports the following types of attestation event logs: , , and . 2.2.1.2. - Allows a Verifier to request a quote of PCRs from a TPM1.2 compliant cryptoprocessor. When one or more is not provided, all TPM1.2 compliant cryptoprocessors will respond. - Allows a Verifier to request a quote of PCRs from a TPM2.0 compliant cryptoprocessor. 
When one or more is not provided, all TPM2.0 compliant cryptoprocessors will respond. - Allows a Verifier to acquire the evidence which was extended into specific PCRs. 2.2.1.3. container - This exists when there are more than one TPM for a particular Attester. This allows each specific TPM to identify on which it belongs. container - Provides configuration and operational details for each supported TPM, including the tpm-firmware-version, PCRs which may be quoted, certificates which are associated with that TPM, and the current operational status. Of note is the certificates which are associated with that TPM. As a certificate is associated with a single Attestation key, knowledge of the certificate allows a specific TPM to be identified. container - Identifies which TCG algorithms are available for use the Attesting platform. This allows an operator to limit algorithms available for use by RPCs to just a desired set from the universe of all allowed by TCG. 2.2.1.4. 2.2.2. Cryptographic algorithm types were initially included within -v14 NETCONF's iana-crypto-types.yang. Unfortunately all this content including the algorithms needed here failed to make the -v15 used WGLC. As a result this document has encoded the TCG Algorithm definitions of TCG-Algos, revision 1.32. By including this full table as a separate YANG file within this document, it is possible for other YANG models to leverage the contents of this model. 2.2.2.1. There are two types of features supported and . Support for either of these features indicates that a cryptoprocessor supporting the corresponding type of TCG API is present on an Attester. Most commonly, only one type of cryptoprocessor will be available on an Attester. 2.2.2.2. There are three types of identities in this model. The first are the cryptographic functions supportable by a TPM algorithm, these include: , , , , , , , and . The definitions of each of these are in Table 2 of TCG-Algos. The second are API specifications for tpms: and . The third are specific algorithm types. Each algorithm type defines what cryptographic functions may be supported, and on which type of API specification. It is not required that an implementation of a specific TPM will support all algorithm types. The contents of each specific algorithm mirrors what is in Table 3 of TCG-Algos. 2.2.2.3. Note that not all cryptographic functions are required for use by ietf-tpm-remote-attestation.yang. However the full definition of Table 3 of TCG-Algos will allow use by additional YANG specifications. 3. This document will include requests to IANA: To be defined yet. But keeping up with changes to ietf-tcg-algs.yang will be necessary. 4. The YANG module specified in this document defines a schema for data that is designed to be accessed via network management protocols such as NETCONF RFC6241 or RESTCONF RFC8040. The lowest NETCONF layer is the secure transport layer, and the mandatory-to-implement secure transport is Secure Shell (SSH) RFC6242. The lowest RESTCONF layer is HTTPS, and the mandatory-to-implement secure transport is TLS RFC5246. There are a number of data nodes defined in this YANG module that are writable/creatable/deletable (i.e., config true, which is the default). These data nodes may be considered sensitive or vulnerable in some network environments. Write operations (e.g., edit-config) to these data nodes without proper protection can have a negative effect on network operations. 
These are the subtrees and data nodes and their sensitivity/vulnerability: Container: , , , and all could be populated with algorithms which are not supported by the underlying physical TPM installed by the equipment vendor. Container: - Although shown as 'rw', it is system generated - It is possible to configure PCRs for extraction which are not being extended by system software. This could unnecessarily use TPM resources. - It is possible to provision a certificate which does not correspond to a Attestation Identity Key (AIK) within the TPM. RPC: - Need to verify that the certificate is for an active AIK. RPC: - Need to verify that the certificate is for an active AIK. RPC: - Pulling lots of logs can chew up system resources. 5. Changes from version 02 to version 03: moved to tcg-algs cleaned up model to eliminate sources of errors removed key establishment RPC added lots of XPATH which must all be scrubbed still Descriptive text added on model contents. Changes from version 01 to version 02: Extracted Crypto-types into a separate YANG file"} {"id": "q-en-brski-cloud-e7d7cc47fa6d0c63d96bba8d032655d3bddb2ec0e9721d55af7cf5900f08d844", "old_text": "1. Bootstrapping Remote Secure Key Infrastructures BRSKI specifies automated bootstrapping of an Autonomic Control Plane. BRSKI Section 2.7 describes how a pledge \"MAY contact a well-known URI of a cloud registrar if a local registrar cannot be discovered or if the pledge's target use cases do not include a local registrar\". This document further specifies use of a BRSKI cloud registrar and clarifies operations that are not sufficiently specified in BRSKI.", "comments": "Not quite. It specifies secure bootstrapping of the individual nodes. It's RFC 8994 that bootstraps the ACP.", "new_text": "1. Bootstrapping Remote Secure Key Infrastructures BRSKI specifies automated network onboarding of devices, referred to as pledges, within an Autonomic Control Plane or other managed network infrastructure. BRSKI Section 2.7 describes how a pledge \"MAY contact a well-known URI of a cloud registrar if a local registrar cannot be discovered or if the pledge's target use cases do not include a local registrar\". This document further specifies use of a BRSKI cloud registrar and clarifies operations that are not sufficiently specified in BRSKI."} {"id": "q-en-brski-cloud-32c44735837e01e2d57ef43bd1453498973e8cc6979c52ae65b8784e6e6e81cc", "old_text": "logically separate entities. The two functions could of course be integrated into a single service. TWO CHOICES: 1. Cloud Registrar redirects to Owner Registrar 2. Cloud Registrar returns VOUCHER pinning Owner Register. 2.1.", "comments": "I find the \"??\" in the figure confusing. The Cloud Registrar and the MASA could just be shown as adjacent boxes; the explanation in the text is fine. That looks ugly as a single pseudo-sentence, and I'm not sure what it means. A more complete explanation would be good.", "new_text": "logically separate entities. The two functions could of course be integrated into a single service. There are two different mechanisms for a cloud registrar to handle voucher requests: 1. the Cloud Registrar redirects the request to Owner Registrar for handling 2. the Cloud Registrar returns a voucher pinning the Owner Register and includes additional bootstrapping information embedded in the voucher Both mechanisms are described in detail later in this document. 
2.1."} {"id": "q-en-capport-wg-architecture-09dbee4ce4db45bfb81fb48bbb4c2ba3508b76bf28590d20d15686e5112c61b1", "old_text": " CAPPORT Architecture draft-ietf-capport-architecture-08 Abstract This document describes a CAPPORT architecture. DHCP or Router Advertisements, an optional signaling protocol, and an HTTP API are used to provide the solution. The role of Provisioning Domains (PvDs) is described. 1.", "comments": "This replaces all mentions of capport (other than document slugs) with \"captive portal\". Note: this change includes the title of the document.\nJoel Halpern writes, We should do this.\nOr erase all mentions of 'capport'", "new_text": " Captive Portal Architecture draft-ietf-capport-architecture-08 Abstract This document describes a captive portal architecture. DHCP or Router Advertisements, an optional signaling protocol, and an HTTP API are used to provide the solution. The role of Provisioning Domains (PvDs) is described. 1."} {"id": "q-en-capport-wg-architecture-09dbee4ce4db45bfb81fb48bbb4c2ba3508b76bf28590d20d15686e5112c61b1", "old_text": "2.2.1. A standard for providing a portal URI using DHCP or Router Advertisements is described in RFC7710bis. The CAPPORT architecture expects this URI to indicate the API described in section_api. 2.2.2.", "comments": "This replaces all mentions of capport (other than document slugs) with \"captive portal\". Note: this change includes the title of the document.\nJoel Halpern writes, We should do this.\nOr erase all mentions of 'capport'", "new_text": "2.2.1. A standard for providing a portal URI using DHCP or Router Advertisements is described in RFC7710bis. The captive portal architecture expects this URI to indicate the API described in section_api. 2.2.2."} {"id": "q-en-capport-wg-architecture-b19787febcbfa6f33ad13f47904423b8b507ead48d27f44fddb5b6869b3b6dd2", "old_text": "A side-benefit of the architecture described in this document is that devices without user interfaces are able to identify parameters of captivity. However, this document does not yet describe a mechanism for such devices to escape captivity. The architecture uses the following mechanisms:", "comments": "Multiple reviewers objected to the way in which we captured interactions with headless/UI-less devices. Clarify the document's intent. Do this by removing language that indicated that we would eventually get around to adding the requirements, and specify explicitly that we are not handling devices without UIs.\nEric points out in his review: He suggests as a solution: I propose we do this. The first mention is in section one. We can say something along the lines of \"A future document could provide a solution for devices without user interfaces. This document focuses on devices with user interfaces.\"\nOn page 4: Joel Halpern writes,\nWe might want to avoid relying on the captivity concept so heavily, but this is a good change nonetheless.", "new_text": "A side-benefit of the architecture described in this document is that devices without user interfaces are able to identify parameters of captivity. However, this document does not describe a mechanism for such devices to negotiate for unrestricted network access. A future document could provide a solution to devices without user interfaces. This document focuses on devices with user interfaces. 
The architecture uses the following mechanisms:"} {"id": "q-en-capport-wg-architecture-b19787febcbfa6f33ad13f47904423b8b507ead48d27f44fddb5b6869b3b6dd2", "old_text": "section_capport_enforcement, until site-specific requirements have been met. At this time we consider only devices with web browsers, with web applications being the means of satisfying Captive Portal Conditions. An example interactive User Equipment is a smart phone. The User Equipment:", "comments": "Multiple reviewers objected to the way in which we captured interactions with headless/UI-less devices. Clarify the document's intent. Do this by removing language that indicated that we would eventually get around to adding the requirements, and specify explicitly that we are not handling devices without UIs.\nEric points out in his review: He suggests as a solution: I propose we do this. The first mention is in section one. We can say something along the lines of \"A future document could provide a solution for devices without user interfaces. This document focuses on devices with user interfaces.\"\nOn page 4: Joel Halpern writes,\nWe might want to avoid relying on the captivity concept so heavily, but this is a good change nonetheless.", "new_text": "section_capport_enforcement, until site-specific requirements have been met. This document only considers devices with web browsers, with web applications being the means of satisfying Captive Portal Conditions. An example of such User Equipment is a smart phone. The User Equipment:"} {"id": "q-en-capport-wg-architecture-2619cf4ae61f6d56df0629a0ce6c445dac052716a6d9c45ac0d31e160efeddc1", "old_text": "A Captive Portal API needs to present information to clients that is unique to that client. To do this, some systems use information from the context of a request, such as the source address, to identify the UE. Using information from context rather than information from the URI allows the same URI to be used for different clients. However, it also means that the resource is unable to provide relevant information if the UE makes a request using a different network path. This might happen when UE has multiple network interfaces. It might also happen if the address of the API provided by DNS depends on where the query originates (as in split DNS RFC8499). Accessing the API MAY depend on contextual information. However, the URIs provided in the API SHOULD be unique to the UE and not dependent on contextual information to function correctly. Though a URI might still correctly resolve when the UE makes the request from a different network, it is possible that some functions could be limited to when the UE makes requests using the captive network. For example, payment options could be absent or a warning could be displayed to indicate the payment is not for the current connection. URIs could include some means of identifying the User Equipment in the URIs. However, including unauthenticated User Equipment", "comments": "We only use UE in one section. Everywhere else we use User Equipment. Expand UE in that section so we are consistent with our use of it.\nIt was pointed out that we sometimes use UE rather than user equipment (e.g. section 3.5.). Let's be consistent, perhaps shortening all references to UE other than where it is defined.", "new_text": "A Captive Portal API needs to present information to clients that is unique to that client. To do this, some systems use information from the context of a request, such as the source address, to identify the User Equipment. 
Using information from context rather than information from the URI allows the same URI to be used for different clients. However, it also means that the resource is unable to provide relevant information if the User Equipment makes a request using a different network path. This might happen when User Equipment has multiple network interfaces. It might also happen if the address of the API provided by DNS depends on where the query originates (as in split DNS RFC8499). Accessing the API MAY depend on contextual information. However, the URIs provided in the API SHOULD be unique to the User Equipment and not dependent on contextual information to function correctly. Though a URI might still correctly resolve when the User Equipment makes the request from a different network, it is possible that some functions could be limited to when the User Equipment makes requests using the captive network. For example, payment options could be absent or a warning could be displayed to indicate the payment is not for the current connection. URIs could include some means of identifying the User Equipment in the URIs. However, including unauthenticated User Equipment"} {"id": "q-en-capport-wg-architecture-3419ea3f35d00324d4e61bbab5188868997355dc0135587d9f55550b89a043ac", "old_text": "Captive Portal Signaling Protocol: Also known as Signaling Protocol. The protocol for communicating Captive Portal Signals. 2. 2.1.", "comments": "The idea of a session is important, both because it is used in the API document, and because it helps to tie everything together. Define it in the terminology.\nThis is a first stab at defining the session. I suspect we may need to iterate on it once or twice. Let me know if you think it deserves its own section on top of the terminology.\nThanks for doing this! Here\u2019s a suggested alternate way to explain this. Feel free to modify as desired. Captive Portal Session: Also referred to simply as the \u201csession\u201d, a Captive Portal Session is the association for a particular User Equipment that starts when it interacts with the Captive Portal and gains open access to the network, and ends when the User Equipment moves back into the original captive state. The Captive Network maintains the state of each active Session, and can limit Sessions based on a length of time or a number of bytes used. The Session is associated with a particular User Equipment using the User Equipment's identifier (see ).\nYour phrasing is better. It manages to work in the session limits while keeping it about the same lenght. I've taken it almost verbatim. NAME Do you have any input on this topic?\nFrom Ben's review of the API document, we should define \"session\" formally in the architecture. The architecture doc mentions it in passing once already. The API doc uses the term session much more.", "new_text": "Captive Portal Signaling Protocol: Also known as Signaling Protocol. The protocol for communicating Captive Portal Signals. Captive Portal Session: Also referred to simply as the \"session\", a Captive Portal Session is the association for a particular User Equipment that starts when it interacts with the Captive Portal and gains open access to the network, and ends when the User Equipment moves back into the original captive state. The Captive Network maintains the state of each active Session, and can limit Sessions based on a length of time or a number of bytes used. The Session is associated with a particular User Equipment using the User Equipment's identifier (see ue_identity). 2. 
2.1."} {"id": "q-en-capport-wg-architecture-0cdd7b84095e0edfb504d88e7826c05a6f7596a9d0e0a7d0ad6b0e9387ce0986", "old_text": "Architecture RFC7556 provides some discussion on authenticating an operator. Given that a user chooses to visit a Captive Portal URI, the URI location SHOULD be securely provided to the user's device. E.g., the DHCPv6 AUTH option can sign this information. If a user decides to incorrectly trust an attacking network, they might be convinced to visit an attacking web page and unwittingly", "comments": "The language of this section was hard for some of the reviewers to understand. Rephrase it to be more explicit and clear.\nYes. We should probably rephrase this. E.g. \"The user makes an informed choice in deciding to visit and trust the Captive Portal URI. Since the network provides Captive Portal URI to the user equipment, thenetwork SHOULD do so securely so that the user's trust in the network can extend to their trust of the Captive Portal URI. E.g., the DHCPv6 AUTH option can sign this information.\"", "new_text": "Architecture RFC7556 provides some discussion on authenticating an operator. The user makes an informed choice to visit and trust the Captive Portal URI. Since the network provides Captive Portal URI to the user equipment, the network SHOULD do so securely so that the user's trust in the network can extend to their trust of the Captive Portal URI. E.g., the DHCPv6 AUTH option can sign this information. If a user decides to incorrectly trust an attacking network, they might be convinced to visit an attacking web page and unwittingly"} {"id": "q-en-capport-wg-architecture-e1b46b2556797d6094a3adf10a4deb44d5bc58db16c209ab2cf171cfa536e904", "old_text": "This document describes an architecture for implementing captive portals while addressing most of the problems arising for current captive portal mechanisms. The architecture is guided by these principles: A side-benefit of the architecture described in this document is that devices without user interfaces are able to identify parameters of", "comments": "Attempting to fix issue . I attempted to reword the principles, adding more prose for the rationale. Also upgraded some SHOULD NOT to MUST NOT because it seems some people thought the SHOULD NOT says the existing approach is OK. These are just some ideas. I won't be offended if rejected.\nI believe I addressed all comments. You'll probably want to review the complete diff again.\nBenjamin writes in his review: Michael replied with this suggested text: This seems reasonable, as I mentioned in the reply. We should go with it, unless anyone has an objection. Also, attribute it to Michael Richardson.\nS. Moonesamy writes\nOne approach I suggested in my reply to the feedback:\nThis is good. Thanks for sorting through this; it's important that this is right, and you have cleared it up a lot.Much clearer. Thanks!", "new_text": "This document describes an architecture for implementing captive portals while addressing most of the problems arising for current captive portal mechanisms. The architecture is guided by these requirements: A side-benefit of the architecture described in this document is that devices without user interfaces are able to identify parameters of"} {"id": "q-en-capport-wg-architecture-e1b46b2556797d6094a3adf10a4deb44d5bc58db16c209ab2cf171cfa536e904", "old_text": "The authors thank Benjamin Kaduk for providing the content related to TLS certificate validation of the API server. 
The authors thank various individuals for their feedback on the mailing list and during the IETF98 hackathon: David Bird, Erik Kline, Alexis La Goulette, Alex Roscoe, Darshak Thakore, and Vincent van", "comments": "Attempting to fix issue . I attempted to reword the principles, adding more prose for the rationale. Also upgraded some SHOULD NOT to MUST NOT because it seems some people thought the SHOULD NOT says the existing approach is OK. These are just some ideas. I won't be offended if rejected.\nI believe I addressed all comments. You'll probably want to review the complete diff again.\nBenjamin writes in his review: Michael replied with this suggested text: This seems reasonable, as I mentioned in the reply. We should go with it, unless anyone has an objection. Also, attribute it to Michael Richardson.\nS. Moonesamy writes\nOne approach I suggested in my reply to the feedback:\nThis is good. Thanks for sorting through this; it's important that this is right, and you have cleared it up a lot.Much clearer. Thanks!", "new_text": "The authors thank Benjamin Kaduk for providing the content related to TLS certificate validation of the API server. The authors thank Michael Richardson for providing wording requiring DNSSEC and TLS to operate without the user adding exceptions. The authors thank various individuals for their feedback on the mailing list and during the IETF98 hackathon: David Bird, Erik Kline, Alexis La Goulette, Alex Roscoe, Darshak Thakore, and Vincent van"} {"id": "q-en-capport-wg-architecture-e1f867db997f2f690b499bdcd2bb48c17fbba5644e434bfc19bc457eef36028c", "old_text": "The API MUST ensure the integrity of this information, as well as its confidentiality. 7.4. If a Signaling Protocol is implemented, it may be possible for any", "comments": "One of the reviewers asked for more information about the attacker in section 7.3. This does so, describing what the attacker could achieve by compromising the security.\nBenjamin writes, Without knowing the details of the particular solution, it's a bit hard to say for sure, but roughly I'd say it's someone who wants to interact with the API using the identity of the user. E.g. if we're using an 'unguessable' URI, an attacker snooping on the communication with the API could determine the URI, and use it.", "new_text": "The API MUST ensure the integrity of this information, as well as its confidentiality. An attacker with access to this information might be able to masquerade as a specific User Equipment when interacting with the API, which could then allow them to masquerade as that User Equipment when interacting with the User Portal. This could give them the ability to determine whether the User Equipment has accessed the portal, or deny the User Equipment service by ending their session using mechanisms provided by the User Portal, or consume that User Equipment's quota. An attacker with the ability to modify the information could deny service to the User Equipment, or cause them to appear as a different User Equipment. 7.4. If a Signaling Protocol is implemented, it may be possible for any"} {"id": "q-en-capport-wg-architecture-3e1bcbd58280b13238d25caf661a75e494ba7cc1da5a9af17929ccaa56bdfc5a", "old_text": "2.5. User Equipment may send traffic to hosts blocked by the captive network prior to the Enforcement device granting it access. The Enforcement Device rightly blocks or resets these requests. 
However, in the absence of a signal from the Enforcement Device or interaction with the API server, the User Equipment can only guess at whether it is captive. Consequently, allowing the Enforcement Device to explicitly signal to the User Equipment that the traffic is being blocked may improve the user's experience. An Enforcement Device may also want to notify the User Equipment of a pending expiry of its access to the external network, so providing the Enforcement Device the ability to preemptively signal may be desirable. A specific Captive Portal Signaling Protocol is out of scope for this document. However, in order to ensure that future protocols fit into the architecture, requirements for a Captive Portal Signaling Protocol follow: The Captive Portal Signaling Protocol does not provide any means of indicating that the network prevents access to some destinations. The intent is to rely on the Captive Portal API and the web portal to which it points to communicate local network policies. The Captive Portal Enforcement function MAY send Captive Portal Signals when User Equipment that has not satisfied the Captive Portal Conditions attempts to send traffic to the network. These signals MUST be rate-limited to a configurable rate. The signals MUST NOT be sent to the destinations/peers that the User Equipment is restricted from accessing. The indications are only to be sent to the User Equipment. 2.6.", "comments": "The working group has decided that we do not need to tackle specifying the signaling protocol. Rather than leaving the document with it partially specified, remove most of the text related to it. Keep some basic examples of constraints. This also removes the text which claimed that the document would specify rqeuirements for the protocol. It does not. Made a few related editorial changes.\nAs suggested by NAME : Do a pass through the document with this text in mind and make sure it makes sense. Clean up anything that may have depended on removed text.\nThanks for putting this together. This seems sufficient, and brief.", "new_text": "2.5. When User Equipment first connects to a network, or when there are changes in status, the Enforcement Device could generate a signal toward the User Equipment. This signal indicates that the User Equipment might need to contact the API Server to receive updated information. For instance, this signal might be generated when the end of a session is imminent, or when network access was denied. An Enforcement Device MUST rate-limit any signal generated in response to these conditions. See section_signal_risks for a discussion of risks related to a Captive Portal Signal. 2.6."} {"id": "q-en-capport-wg-architecture-3e1bcbd58280b13238d25caf661a75e494ba7cc1da5a9af17929ccaa56bdfc5a", "old_text": "7.5. The Signal could inform the User Equipment that it is being held captive. There is no requirement that the User Equipment do something about this. Devices MAY permit users to disable automatic reaction to captive-portal indications for privacy reasons. However, there would be the trade-off that the user doesn't get notified when network access is restricted. Hence, end-user devices MAY allow users to manually control captive portal interactions, possibly on the granularity of Provisioning Domains. ", "comments": "The working group has decided that we do not need to tackle specifying the signaling protocol. Rather than leaving the document with it partially specified, remove most of the text related to it. Keep some basic examples of constraints. 
This also removes the text which claimed that the document would specify rqeuirements for the protocol. It does not. Made a few related editorial changes.\nAs suggested by NAME : Do a pass through the document with this text in mind and make sure it makes sense. Clean up anything that may have depended on removed text.\nThanks for putting this together. This seems sufficient, and brief.", "new_text": "7.5. The Captive Portal Signal could inform the User Equipment that it is being held captive. There is no requirement that the User Equipment do something about this. Devices MAY permit users to disable automatic reaction to Captive Portal Signals indications for privacy reasons. However, there would be the trade-off that the user doesn't get notified when network access is restricted. Hence, end-user devices MAY allow users to manually control captive portal interactions, possibly on the granularity of Provisioning Domains. "} {"id": "q-en-capport-wg-architecture-dbd2645d95da5e111c80724fa6af521c0bec42f83788cf07316ebfbca2bb0d72", "old_text": "information, expiry time, method of providing credentials, security token for validating ICMP messages. This document does not specify the details of the API. The CAPPORT API MUST support TLS for privacy and server authentication. 2.4.", "comments": "We want to make TLS a requirement, not a suggestion, so I've chagned the wording around TLS to reflect this. I also added a section to the security section discussing the motivation behind the requirement.\nCan you forward to the list please?", "new_text": "information, expiry time, method of providing credentials, security token for validating ICMP messages. The API MUST use TLS for privacy and server authentication. The implementation of the API MUST ensure both privacy and integrity of any information provided by or required by it. This document does not specify the details of the API. 2.4."} {"id": "q-en-capport-wg-architecture-dbd2645d95da5e111c80724fa6af521c0bec42f83788cf07316ebfbca2bb0d72", "old_text": "5.2. The solution described here assumes that when the User Equipment needs to trust the API server, server authentication will be utilized using TLS mechanisms. 5.3. It is possible for any user on the Internet to send ICMP packets in an attempt to cause the receiving equipment to go to the captive portal. This has been considered and addressed in the following", "comments": "We want to make TLS a requirement, not a suggestion, so I've chagned the wording around TLS to reflect this. I also added a section to the security section discussing the motivation behind the requirement.\nCan you forward to the list please?", "new_text": "5.2. The solution described here assumes that when the User Equipment needs to trust the API server, server authentication will be performed using TLS mechanisms. 5.3. The solution described here requires that the API be secured using TLS. This is required to allow the user equipment and API server to exchange secrets which can be used to validate future interactions. The API must ensure the integrity of this information, as well as its confidentiality. 5.4. It is possible for any user on the Internet to send ICMP packets in an attempt to cause the receiving equipment to go to the captive portal. This has been considered and addressed in the following"} {"id": "q-en-capport-wg-architecture-dbd2645d95da5e111c80724fa6af521c0bec42f83788cf07316ebfbca2bb0d72", "old_text": "Even when redirected, the User Equipment securely authenticates with API servers. 5.4. 
The ICMP messaging informs the User Equipment that it is being held captive. There is no requirement that the User Equipment do", "comments": "We want to make TLS a requirement, not a suggestion, so I've chagned the wording around TLS to reflect this. I also added a section to the security section discussing the motivation behind the requirement.\nCan you forward to the list please?", "new_text": "Even when redirected, the User Equipment securely authenticates with API servers. 5.5. The ICMP messaging informs the User Equipment that it is being held captive. There is no requirement that the User Equipment do"} {"id": "q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db", "old_text": "3. This section aims to improve understanding by describing a possible workflow of solutions adhering to the architecture. 3.1. This section describes a possible work-flow when User Equipment initially joins a Captive Network.", "comments": "We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)", "new_text": "3. Multiple components in the architecture interact with both the User Equipment and each other. Since the User Equipment is the focus of these interactions, the components must be able to both identify the user equipment from their interactions with it, and be able to agree on the identity of the user equipment when interacting with each other. The methods by which the components interact restrict the type of information that may be used as an identifying characteristic. This section discusses the identifying characteristics. 3.1. An Identifier is a characteristic of the User Equipment used by the components of a Captive Portal to uniquely determine which specific User Equipment is interacting with them. An Identifier MAY be a field contained in packets sent by the User Equipment to the External Network. Or, an Identifier MAY be an ephemeral property not contained in packets destined for the External Network, but instead correlated with such information through knowledge available to the different components. 3.2. The set of possible identifiers is quite large. However, in order to be considered a good identifier, an identifier SHOULD meet the following criteria. Note that the optimal identifier will likely change depending on the position of the components in the network as well as the information available to them. An identifier SHOULD: Uniquely Identify the User Equipment Be Hard to Spoof Be Visible to the API Be Visible to the Enforcement Device 3.2.1. In order to uniquely identify the User Equipment, at most one user equipment interacting with the other components of the Captive Portal MUST have a given value of the identifier. Over time, the user equipment identified by the value MAY change. Allowing the identified device to change over time ensures that the space of possible identifying values need not be overly large. Independent Captive Portals MAY use the same identifying value to identify different User Equipment. Allowing independent captive portals to reuse identifying values allows the identifier to be a property of the local network, expanding the space of possible identifiers. 3.2.2. A good identifier does not lend itself to being easily spoofed. 
At no time should it be simple or straightforward for one User Equipment to pretend to be another User Equipment, regardless of whether both are active at the same time. This property is particularly important when the user equipment is extended externally to devices such as billing systems, or where the identity of the User Equipment could imply liability. 3.2.3. Since the API will need to perform operations which rely on the identity of the user equipment, such as query whether it is captive, the API needs to be able to relate requests to the User Equipment making the request. 3.2.4. The Enforcement Device will decide on a per packet basis whether it should be permitted to communicate with the external network. Since this decision depends on which User Equipment sent the packet, the Enforcement Device requires that it be able to map the packet to its concept of the User Equipment. 3.3. To evaluate whether an identifier is appropriate, one should consider every recommended property from the perspective of interactions among the components in the architecture. When comparing identifiers, choose the one which best satisfies all of the recommended properties. The architecture does not provide an exact measure of how well an identifier satisfies a given property; care should be taken in performing the evaluation. 3.4. This section provides some examples of identifiers, along with some evaluation of whether they are good identifiers. The list of identifiers is not exhaustive. Other identifiers may be used. An important point to note is that whether the identifiers are good depends heavily on the capabilities of the components and where in the network the components exist. 3.4.1. The physical interface by which the User Equipment is attached to the network can be used to identify the User Equipment. This identifier has the property of being extremely difficult to spoof: the User Equipment is unaware of the property; one User Equipment cannot manipulate its interactions to appear as though it is another. Further, if only a single User Equipment is attached to a given physical interface, then the identifier will be unique. If multiple User Equipment is attached to the network on the same physical interface, then this property is not appropriate. Another consideration related to uniqueness of the User Equipment is that if the attached User Equipment changes, both the API server and the Enforcement Device must invalidate their state related to the User Equipment. The Enforcement Device needs to be aware of the physical interface, which constrains the environment: it must either be part of the device providing physical access (e.g., implemented in firmware), or packets traversing the network must be extended to include information about the source physical interface (e.g. a tunnel). The API server faces a similar problem, implying that it should co- exist with the Enforcement Device, or that the enforcement device should extend requests to it with the identifying information. 3.4.2. A natural identifier to consider is the IP address of the User Equipment. At any given time, no device on the network can have the same IP address without causing the network to malfunction, so it is appropriate from the perspective of uniqueness. However, it may be possible to spoof the IP address, particularly for malicious reasons where proper functioning of the network is not necessary for the malicious actor. Consequently, any solution using the IP address should proactively try to prevent spoofing of the IP address. 
Similarly, if the mapping of IP address to User Equipment is changed, the components of the architecture must remove or update their mapping to prevent spoofing. Demonstrations of return routability, such as that required for TCP connection establishment, might be sufficient defense against spoofing, though this might not be sufficient in networks that use broadcast media (such as some wireless networks). Since the IP address may traverse multiple segments of the network, more flexibility is afforded to the Enforcement Device and the API server: they simply must exist on a segment of the network where the IP address is still unique. However, consider that a NAT may be deployed between the User Equipment and the Enforcement Device. In such cases, it is possible for the components to still uniquely identify the device if they are aware of the port mapping. In some situations, the User Equipment may have multiple IP addresses, while still satisfying all of the recommended properties. This raises some challenges to the components of the network. For example, if the user equipment tries to access the network with multiple IP addresses, should the enforcement device and API server treat each IP address as a unique User Equipment, or should they tie the multiple addresses together into one view of the subscriber? An implementation MAY do either. Attention should be paid to IPv6 and the fact that it is expected for a device to have multiple IPv6 addresses on a single link. In such cases, identification could be performed by subnet, such as the /64 to which the IP belongs. 4. This section aims to improve understanding by describing a possible workflow of solutions adhering to the architecture. 4.1. This section describes a possible work-flow when User Equipment initially joins a Captive Network."} {"id": "q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db", "old_text": "The User Equipment accesses the network until conditions Expire. 3.2. This section describes a possible work-flow when conditions expire and the user visits the portal again (e.g., low quota, or time", "comments": "We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)", "new_text": "The User Equipment accesses the network until conditions Expire. 4.2. This section describes a possible work-flow when conditions expire and the user visits the portal again (e.g., low quota, or time"} {"id": "q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db", "old_text": "The User Equipment accesses the external network. 4. This memo includes no request to IANA. 5. 5.1. When joining a network, some trust is placed in the network operator. This is usually considered to be a decision by a user on the basis of", "comments": "We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)", "new_text": "The User Equipment accesses the external network. 5. This memo includes no request to IANA. 6. 6.1. When joining a network, some trust is placed in the network operator.
This is usually considered to be a decision by a user on the basis of"} {"id": "q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db", "old_text": "provide credentials to an attacker. Browsers can authenticate servers but cannot detect cleverly misspelled domains, for example. 5.2. The solution described here assumes that when the User Equipment needs to trust the API server, server authentication will be performed using TLS mechanisms. 5.3. The solution described here requires that the API be secured using TLS. This is required to allow the user equipment and API server to", "comments": "We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)", "new_text": "provide credentials to an attacker. Browsers can authenticate servers but cannot detect cleverly misspelled domains, for example. 6.2. The solution described here assumes that when the User Equipment needs to trust the API server, server authentication will be performed using TLS mechanisms. 6.3. The solution described here requires that the API be secured using TLS. This is required to allow the user equipment and API server to"} {"id": "q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db", "old_text": "The API must ensure the integrity of this information, as well as its confidentiality. 5.4. It is possible for any user on the Internet to send ICMP packets in an attempt to cause the receiving equipment to go to the captive", "comments": "We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)", "new_text": "The API must ensure the integrity of this information, as well as its confidentiality. 6.4. It is possible for any user on the Internet to send ICMP packets in an attempt to cause the receiving equipment to go to the captive"} {"id": "q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db", "old_text": "Even when redirected, the User Equipment securely authenticates with API servers. 5.5. The ICMP messaging informs the User Equipment that it is being held captive. There is no requirement that the User Equipment do", "comments": "We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)", "new_text": "Even when redirected, the User Equipment securely authenticates with API servers. 6.5. The ICMP messaging informs the User Equipment that it is being held captive. There is no requirement that the User Equipment do"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "2.2. Both the client and the server MUST send a Capabilities and Settings message (CSM see csm) as its first message on the connection. This message establishes the initial settings and capabilities for the endpoint such as maximum message size or support for block-wise transfers. 
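As a rough illustration only, the sketch below builds such a CSM; it assumes the shim header used by CoAP over reliable transports, a CSM code of 7.01, and a Max-Message-Size option number of 2, all of which should be checked against the current text rather than taken from this example:

    def encode_csm(max_message_size=None):
        """Sketch of a Capabilities and Settings message for CoAP over TCP.

        Assumes: the length fits in the 4-bit Len nibble (no extended length),
        signaling code 7.01 (0xE1), and Max-Message-Size as option number 2
        carrying a uint value; verify these against the specification in use.
        """
        options = b""
        if max_message_size is not None:
            value = max_message_size.to_bytes(
                (max_message_size.bit_length() + 7) // 8 or 1, "big")
            # CoAP option encoding: delta 2 (first option), length of the uint value
            options += bytes([(2 << 4) | len(value)]) + value
        body = bytes([0xE1]) + options        # Code 7.01 = CSM, empty token
        length = len(options)                 # Len counts options and payload only
        assert length < 13, "extended length encoding omitted in this sketch"
        return bytes([(length << 4) | 0x0]) + body   # Len nibble, TKL = 0

    # Example: a CSM advertising a 4096-byte maximum message size.
    print(encode_csm(4096).hex())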
The absence of options in the CSM indicates that base values are assumed. To avoid unnecessary latency, a client MAY send additional messages without waiting to receive the server CSM; however, it is important to note that the server CSM might advertise capabilities that impact how a client is expected to communicate with the server. For example, the server CSM could advertise a Max-Message-Size option (see max-message-size) that is smaller than the base value (1152). Clients and servers MUST treat a missing or invalid CSM as a connection error and abort the connection (see sec-abort). 2.3. The CoAP message format defined in RFC7252, as shown in CoAP-Header, relies on the datagram transport (UDP, or DTLS over UDP) for keeping the individual messages separate and for providing length", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. 
This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "2.2. 
The CoAP message format defined in RFC7252, as shown in CoAP-Header, relies on the datagram transport (UDP, or DTLS over UDP) for keeping the individual messages separate and for providing length"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "The semantics of the other CoAP header fields are left unchanged. 2.4. CoAP requests and responses are exchanged asynchronously over the TCP/TLS connection. A CoAP client can send multiple requests without", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. 
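One illustrative way a client library might surface this, assuming an asyncio-style implementation with a pending-exchange table keyed by token (none of these names come from the draft), is to fail every outstanding exchange when the transport reports a close:

    import asyncio

    class ConnectionClosed(Exception):
        """Raised to code awaiting a response when the transport goes away."""

    class PendingExchanges:
        def __init__(self):
            self._by_token = {}                 # token bytes -> asyncio.Future

        def register(self, token):
            fut = asyncio.get_event_loop().create_future()
            self._by_token[token] = fut
            return fut                          # awaited by the caller of the request

        def resolve(self, token, response):
            fut = self._by_token.pop(token, None)
            if fut is not None and not fut.done():
                fut.set_result(response)

        def fail_all_on_close(self):
            # When the connection closes, every active request (and observation)
            # is implicitly cancelled; tell the waiting code by raising.
            for fut in self._by_token.values():
                if not fut.done():
                    fut.set_exception(ConnectionClosed())
            self._by_token.clear()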
The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "The semantics of the other CoAP header fields are left unchanged. 2.3. Once a connection is established, both the client and the server MUST send a Capabilities and Settings message (CSM see csm) as its first message on the connection. This message establishes the initial settings and capabilities for the endpoint such as maximum message size or support for block-wise transfers. The absence of options in the CSM indicates that base values are assumed. 
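On the receiving side, a minimal sketch of enforcing the CSM-first rule could look like the following; the helper names are invented, and the 7.01 (CSM) and 7.05 (Abort) code points are assumptions to confirm against the draft:

    CSM_CODE   = 0xE1   # assumed 7.01
    ABORT_CODE = 0xE5   # assumed 7.05

    class PeerState:
        def __init__(self, transport):
            self.transport = transport
            self.csm_seen = False
            self.peer_max_message_size = 1152   # base value until the CSM says otherwise

        def on_message(self, code, options):
            if not self.csm_seen:
                if code != CSM_CODE:
                    # First message was not a CSM: treat as a connection error.
                    self.transport.write(bytes([0x00, ABORT_CODE]))  # empty Abort, Len=0, TKL=0
                    self.transport.close()
                    return
                self.csm_seen = True
                self.peer_max_message_size = options.get(
                    "max_message_size", self.peer_max_message_size)
                return
            # ... normal request/response processing continues here ...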
To avoid unnecessary latency, a client MAY send additional messages without waiting to receive the server CSM; however, it is important to note that the server CSM might advertise capabilities that impact how a client is expected to communicate with the server. For example, the server CSM could advertise a Max-Message-Size option (see max-message-size) that is smaller than the base value (1152). Clients and servers MUST treat a missing or invalid CSM as a connection error and abort the connection (see sec-abort). CoAP requests and responses are exchanged asynchronously over the TCP/TLS connection. A CoAP client can send multiple requests without"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "Retransmission and deduplication of messages is provided by the TCP/ TLS protocol. 3. CoAP over WebSockets can be used in a number of configurations. The most basic configuration is a CoAP client retrieving or updating a CoAP resource located at a CoAP server that exposes a WebSocket endpoint (arch-1). The CoAP client acts as the WebSocket client, establishes a WebSocket connection, and sends a CoAP request, to which the CoAP server returns a CoAP response. The WebSocket", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) 
[Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. 
In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "Retransmission and deduplication of messages is provided by the TCP/ TLS protocol. 2.4. Empty messages (Code 0.00) can always be sent and MUST be ignored by the recipient. This provides a basic keep-alive function that can refresh NAT bindings. If a client does not receive any response for some time after sending a CoAP request (or, similarly, when a client observes a resource and it does not receive any notification for some time), it can send a CoAP Ping Signaling message (sec-ping) to test the connection and verify that the server is responsive. 3. CoAP over WebSockets is intentionally similar to CoAP over TCP; therefore, this section only specifies the differences between the transports. CoAP over WebSockets can be used in a number of configurations. The most basic configuration is a CoAP client retrieving or updating a CoAP resource located on a CoAP server that exposes a WebSocket endpoint (arch-1). The CoAP client acts as the WebSocket client, establishes a WebSocket connection, and sends a CoAP request, to which the CoAP server returns a CoAP response. The WebSocket"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "Further configurations are possible, including those where a WebSocket connection is established through an HTTP proxy. CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from RFC7252. 3.1. Before CoAP requests and responses are exchanged, a WebSocket", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. 
Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. 
The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "Further configurations are possible, including those where a WebSocket connection is established through an HTTP proxy. 3.1. Before CoAP requests and responses are exchanged, a WebSocket"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "frames as specified in Sections 5 and 6 of RFC6455. The message format shown in ws-message-format is the same as the CoAP over TCP message format (see tcp-message-format) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame contains the length. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. Requests and response messages can be fragmented as specified in Section 5.4 of RFC6455, though typically they are sent unfragmented", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. 
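A sketch of that shared header may make the difference easier to see; it assumes the Len/TKL layout of CoAP over reliable transports and the 13/269/65805 extended-length offsets, which should be verified against the specification before relying on them:

    def encode_frame(code, token=b"", options_and_payload=b"", websocket=False):
        """Illustrative encoder for the header shared by CoAP over TCP and WebSockets.

        Assumes: Len counts everything after the Code and Token fields; over
        WebSockets the Len nibble is simply zero because the WebSocket frame
        already carries the length.
        """
        tkl = len(token)
        assert tkl <= 8, "CoAP tokens are at most 8 bytes"
        length = 0 if websocket else len(options_and_payload)
        if length < 13:
            header = bytes([(length << 4) | tkl])
        elif length < 269:
            header = bytes([(13 << 4) | tkl, length - 13])
        elif length < 65805:
            header = bytes([(14 << 4) | tkl]) + (length - 269).to_bytes(2, "big")
        else:
            header = bytes([(15 << 4) | tkl]) + (length - 65805).to_bytes(4, "big")
        return header + bytes([code]) + token + options_and_payload

    # The same GET request, framed for TCP and for a WebSocket message:
    tcp_bytes = encode_frame(0x01, token=b"\x4a")                    # 0.01 = GET
    ws_bytes  = encode_frame(0x01, token=b"\x4a", websocket=True)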
If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. 
RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "frames as specified in Sections 5 and 6 of RFC6455. The message format shown in ws-message-format is the same as the CoAP over TCP message format (see tcp-message-format) with one change. The Length (Len) field MUST be set to zero because the WebSockets frame contains the length. As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. Requests and response messages can be fragmented as specified in Section 5.4 of RFC6455, though typically they are sent unfragmented"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "requests and responses can be transferred in a block-wise fashion as defined in RFC7959. Empty messages (Code 0.00) MUST be ignored by the recipient (see also sec-ping). 3.3. CoAP requests and responses are exchanged asynchronously over the WebSocket connection. A CoAP client can send multiple requests without waiting for a response and the CoAP server can return", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. 
Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. 
Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "requests and responses can be transferred in a block-wise fashion as defined in RFC7959. 3.3. As with CoAP over TCP, both the client and the server MUST send a Capabilities and Settings message (CSM see csm) as its first message on the WebSocket connection. CoAP requests and responses are exchanged asynchronously over the WebSocket connection. A CoAP client can send multiple requests without waiting for a response and the CoAP server can return"} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "The connection is bi-directional, so requests can be sent both by the entity that established the connection and the remote host. Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. When a client does not receive any response for some time after sending a CoAP request (or, similarly, when a client observes a resource and it does not receive any notification for some time), the connection between the WebSocket client and the WebSocket server may be lost or temporarily disrupted without the client being aware of it. To check the health of the WebSocket connection (and thereby of all active requests, if any), a client can send a CoAP Ping Signaling message (sec-ping). WebSocket Ping and unsolicited Pong frames as specified in Section 5.5 of RFC6455 SHOULD NOT be used to ensure that redundant maintenance traffic is not transmitted. There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. 3.5. The WebSocket connection is closed as specified in Section 7 of RFC6455. All requests for which the CoAP client has not received a response yet are cancelled when the connection is closed. 
4.", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. 
The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "The connection is bi-directional, so requests can be sent both by the entity that established the connection and the remote host. As with CoAP over TCP, retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non- Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. As with CoAP over TCP, the client can test the health of the CoAP over WebSocket connection by sending a CoAP Ping Signaling message (sec-ping). WebSocket Ping and unsolicited Pong frames (Section 5.5 of RFC6455) SHOULD NOT be used to ensure that redundant maintenance traffic is not transmitted. 4."} {"id": "q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894", "old_text": "4.4. In CoAP over TCP, Empty messages (Code 0.00) can always be sent and MUST be ignored by the recipient. This provides a basic keep-alive function that can refresh NAT bindings. 
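On the wire this keep-alive is tiny; assuming the Len/TKL framing discussed elsewhere in this document, the sketch below shows the two-byte Empty message a sender might emit periodically:

    import socket, time

    EMPTY_MESSAGE = b"\x00\x00"   # Len=0, TKL=0, Code 0.00; ignored by the peer

    def keep_alive(sock: socket.socket, interval: float = 25.0, rounds: int = 3):
        """Illustrative keep-alive loop: refresh NAT bindings without expecting a reply."""
        for _ in range(rounds):
            sock.sendall(EMPTY_MESSAGE)
            time.sleep(interval)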
In contrast, Ping and Pong messages are a bidirectional exchange. Upon receipt of a Ping message, a single Pong message is returned with the identical token. As with all Signaling messages, the", "comments": "URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean, only the client must cancel the requests? Or should the server do this also? Then a clarfication may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. 
The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes does more information need to be provided what start of connection means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context \u2014 in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong.", "new_text": "4.4. In CoAP over reliable transports, Empty messages (Code 0.00) can always be sent and MUST be ignored by the recipient. This provides a basic keep-alive function. In contrast, Ping and Pong messages are a bidirectional exchange. Upon receipt of a Ping message, a single Pong message is returned with the identical token. As with all Signaling messages, the"} {"id": "q-en-coap-tcp-tls-3d7794cd646cd3fb57d222f9b1e3287b332f124116886281b812b5346736b242", "old_text": "\"authority\" as defined in Section 3.2 of RFC3986. The Alternative-Address Option is a repeatable option as defined in Section 5.4.5 of RFC7252. 
The elective Hold-Off Option indicates that the server is requesting that the peer not reconnect to it for the number of seconds given in", "comments": "URL - WGLC review by Esko Dijk Page 20: Alternative-Address Option - the semantics of including multiple of these options is not defined. Can the receiver pick any of the addresses? Or does it need to try the first address first? And then keep trying all alternatives until one succeeds, or not?\nURL Unless Carsten or Hannes (authors of draft-bormann-core-coap-sig-02) have a preference, I would suggest that the addresses are unordered (pick any).", "new_text": "\"authority\" as defined in Section 3.2 of RFC3986. The Alternative-Address Option is a repeatable option as defined in Section 5.4.5 of RFC7252. When multiple occurrences of the option are included, the peer can choose any of the alternative transport addresses. The elective Hold-Off Option indicates that the server is requesting that the peer not reconnect to it for the number of seconds given in"} {"id": "q-en-coap-tcp-tls-efc28484cba428eea15f6bceaea633cbe03a57ff3caa7c33069ebb4f5aaf5351", "old_text": "The syntax for the URI schemes in this section are specified using Augmented Backus-Naur Form (ABNF) RFC5234. The definitions of \"host\", \"port\", \"path-abempty\", and \"query\" are adopted from RFC3986. Section 8 (Multicast CoAP) in RFC7252 is not applicable to these schemes.", "comments": "Do we need to include the RFC7252 note for emphasis?\nProbably not, as we are not changing anything about 5.10.1, are we? (But then 7.5 also points out a place where we maintain a property of 5.10.1.) ((But then, again, where would be put this?))\nWe need to identify the ABNF fragments as ABNF as in so they become easier to check in BAP.\nWe could add the note to Section 7, right after the new reference to \"fragment\":\nMerged in abnf annotations from\nis relevant to Section 7.\nGood point. So I assume applying the same fix (add \u00bb[ \"#\" fragment ]\u00ab) four times to the ABNF in 7.1 to 7.4 would fix that? (We probably want to line-break before path-abempty in all four cases.)\nIf would be helpful to have more context for how the fragment identifier is used in CoAP. I don't see much detail in RFC7252 besides: Note: Fragments ([RFC3986], Section 3.5) are not part of the request URI and thus will not be transmitted in a CoAP request.\nFragment identifiers are not used in a transfer protocol; their use is a local matter on the client side. The mistake we made in RFC 7252 is providing ABNF that doesn't show fragment identifiers at all in the URI syntax. While CoAP as a protocol does not care about them, they are intended to be allowed in CoAP URIs so they can indeed be used on the client side. The note mentions this, but the ABNF does not reflect that. Same for coap-tcp-tls. Does it matter? The meaning of fragment identifiers is defined by the media type. So far, there has been little use of fragment identifiers in the media types being used with CoAP. Now, SenML actually does go ahead and defines a meaning for them, and other media types in the future might as well. So it would be good if the ABNF in the CoAP standards did not give the impression fragment identifiers cannot be used at all with CoAP URIs.\nNote from Carsten: We may have to reopen that issue based on the about-face that the HTTP WG made here:\nFrom URL which references: Absolute URI Some protocol elements allow only the absolute form of a URI without a fragment identifier. 
For example, defining a base URI for later use by relative references calls for an absolute-URI syntax rule that does not allow a fragment. absolute-URI = scheme \":\" hier-part [ \"?\" query ] URI scheme specifications must define their own syntax so that all strings matching their scheme-specific syntax will also match the grammar. Scheme specifications will not define fragment identifier syntax or usage, regardless of its applicability to resources identifiable via that scheme, as fragment identification is orthogonal to scheme definition. However, scheme specifications are encouraged to include a wide range of examples, including examples that show use of the scheme's URIs with fragment identifiers when such usage is appropriate. and URL further states: Requirements for Permanent Scheme Definitions This section gives considerations for new schemes. Meeting these guidelines is REQUIRED for 'permanent' scheme registration. 'Permanent' status is appropriate for, but not limited to, use in standards. For URI schemes defined or normatively referenced by IETF Standards Track documents, 'permanent' registration status is REQUIRED. [RFC3986] defines the overall syntax for URIs as: URI = scheme \":\" hier-part [ \"?\" query ] [ \"#\" fragment ] A scheme definition cannot override the overall syntax for URIs. For example, this means that fragment identifiers cannot be reused outside the generic syntax restrictions and that fragment identifiers are not scheme specific. A scheme definition must specify the scheme name and the syntax of the scheme-specific part, which is clarified as follows: URI = scheme \":\" scheme-specific-part [ \"#\" fragment ] scheme-specific-part = hier-part [ \"?\" query ] On Jan 28, 2015, at 1:56 PM, Julian Reschke wrote: The answer is \"no\". URL I don't know how I got beaten into submission on that for RFC7230. ....Roy", "new_text": "The syntax for the URI schemes in this section are specified using Augmented Backus-Naur Form (ABNF) RFC5234. The definitions of \"host\", \"port\", \"path-abempty\", \"query\", and \"fragment\" are adopted from RFC3986. Section 8 (Multicast CoAP) in RFC7252 is not applicable to these schemes."} {"id": "q-en-coap-tcp-tls-97f9cde8b397d8a6a6dd25eb153f8c6ae8a8e438cdc7ba01e117c519adfe7ade", "old_text": "Capability and Settings messages are used for two purposes: Capability options advertise the capabilities of the sender to the recipient. Setting options indicate a setting that will be applied by the sender.", "comments": "sig-02 did not define the length for options with uint values. Can you confirm whether 0-4 is correct? I note that there are variants in the RFC7252 options with uint values.\nGood catch. 0-4 definitely is right for Max-Message-Size. 0-4 is a bit over the top for Hold-Off, but arbitrarily limiting the time at, say, 65535 seconds (0-2) or 16777215 seconds (0-3) doesn't sound right either -- well, maybe half a year max (0-3) is about right. Option numbers are 65535 max, so Bad-CSM-Option needs to be 0-2.", "new_text": "Capability and Settings messages are used for two purposes: Each capability option advertises one capability of the sender to the recipient. Setting options indicate a setting that will be applied by the sender."} {"id": "q-en-coap-tcp-tls-fdd4bc0cd9c398b70eb7c6be576ae10b557f1cc7edc2c8247f9aaa211866b028", "old_text": "2.3. 
The CoAP message format defined in RFC7252, as shown in CoAP-Header, relies on the datagram transport (UDP, or DTLS over UDP) for keeping the individual messages separate and for providing length", "comments": "This text does not address who might be allowed to wait for the other guy to make their statement (if both wait, there is a deadlock).\nThe intention is that neither side needs to wait. The only open issue is whether we want to allow the client to immediately start sending messages after it sends its CSM without waiting for the server CSM similar to the HTTP/2 case. I planned to discuss that on-list when I \"advertised\" this set of pull requests.\nFrom your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nWhen I submitted the original pull request, I also posed a question on the but did not receive a response. One open question is whether we want to allow a client to immediately send messages after sending its CSM without waiting for the server CSM. This would be similar to HTTP/2: URL To avoid unnecessary latency, clients are permitted to send additional messages to the server immediately after sending the client connection preface, without waiting to receive the server connection preface. It is important to note, however, that the server connection preface SETTINGS frame might include parameters that necessarily alter how a client is expected to communicate with the server. Upon receiving the SETTINGS frame, the client is expected to honor any parameters established. In some configurations, it is possible for the server to transmit SETTINGS before the client sends additional frames, providing an opportunity to avoid this issue.\nI support not blocking the device until the server CSM. The server can choose to not process messages that are invalid (e.g. too large) and immediately follow up with the server CSM. From the perspective of a cloud and gateway software provider, the CoAP (using TCP/TLS) server will usually be in the cloud or gateway and therefore usually not power constrained. I may be missing other scenarios where a battery powered device is acting as a CoAP/TCP server. Does anyone have plans for any of those scenarios?\nNAME added a related comment to (which is closed) From your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nMichel Veillette points out that a peer that wants to use a new connection has an uncertainty whether a CSM message with capability indications will come or not, so it is hard to decide when to send the first request (either based on default capabilities or waiting for the CSM to come in). 
One way out of having to do arbitrary waits would be to require the exchange of a CSM message (even an option-less one) after connection setup. The burden is very low, as a minimal implementation would just need to send a constant message of 0x00 0xe1 at setup. This requirement could be stated without giving a sequence, effectively allowing both sides to send the initial CSM in parallel, or it could give the listener the opportunity to not send its CSM until the connection opener has sent theirs, trying to avoid the need to answer right away to a connection setup -- this makes it too easy to scan for ports (OK, we also have a default port, and making scanning harder does not by itself provide any security, but it sure makes life a little harder for the attacker). (To avoid deadlock, the connection opener would not have the opportunity to wait sending the CSM.)\n+1 ... especially for Maximum-Message-Size. HTTP/2 has a similar flow for SETTINGS during connection including potentially empty settings: URL URL\nNo objections. I will create a pull request for review by the WG.", "new_text": "2.3. Both the client and the server MUST send a Capability and Settings message (CSM see csm) as its first message on the connection. This message establishes the initial settings and capabilities for the endpoint such as maximum message size or support for block-wise transfers. The absence of options in the CSM indicates that base values are assumed. Clients and servers MUST treat a missing or invalid CSM as a connection error and abort the connection (see sec-abort). 2.4. The CoAP message format defined in RFC7252, as shown in CoAP-Header, relies on the datagram transport (UDP, or DTLS over UDP) for keeping the individual messages separate and for providing length"} {"id": "q-en-coap-tcp-tls-fdd4bc0cd9c398b70eb7c6be576ae10b557f1cc7edc2c8247f9aaa211866b028", "old_text": "The semantics of the other CoAP header fields are left unchanged. 2.4. CoAP requests and responses are exchanged asynchronously over the TCP/TLS connection. A CoAP client can send multiple requests without", "comments": "This text does not address who might be allowed to wait for the other guy to make their statement (if both wait, there is a deadlock).\nThe intention is that neither side needs to wait. The only open issue is whether we want to allow the client to immediately start sending messages after it sends its CSM without waiting for the server CSM similar to the HTTP/2 case. I planned to discuss that on-list when I \"advertised\" this set of pull requests.\nFrom your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nWhen I submitted the original pull request, I also posed a question on the but did not receive a response. One open question is whether we want to allow a client to immediately send messages after sending its CSM without waiting for the server CSM. This would be similar to HTTP/2: URL To avoid unnecessary latency, clients are permitted to send additional messages to the server immediately after sending the client connection preface, without waiting to receive the server connection preface. 
It is important to note, however, that the server connection preface SETTINGS frame might include parameters that necessarily alter how a client is expected to communicate with the server. Upon receiving the SETTINGS frame, the client is expected to honor any parameters established. In some configurations, it is possible for the server to transmit SETTINGS before the client sends additional frames, providing an opportunity to avoid this issue.\nI support not blocking the device until the server CSM. The server can choose to not process messages that are invalid (e.g. too large) and immediately follow up with the server CSM. From the perspective of a cloud and gateway software provider, the CoAP (using TCP/TLS) server will usually be in the cloud or gateway and therefore usually not power constrained. I may be missing other scenarios where a battery powered device is acting as a CoAP/TCP server. Does anyone have plans for any of those scenarios?\nNAME added a related comment to (which is closed) From your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nMichel Veillette points out that a peer that wants to use a new connection has an uncertainty whether a CSM message with capability indications will come or not, so it is hard to decide when to send the first request (either based on default capabilities or waiting for the CSM to come in). One way out of having to do arbitrary waits would be to require the exchange of a CSM message (even an option-less one) after connection setup. The burden is very low, as a minimal implementation would just need to send a constant message of 0x00 0xe1 at setup. This requirement could be stated without giving a sequence, effectively allowing both sides to send the initial CSM in parallel, or it could give the listener the opportunity to not send its CSM until the connection opener has sent theirs, trying to avoid the need to answer right away to a connection setup -- this makes it too easy to scan for ports (OK, we also have a default port, and making scanning harder does not by itself provide any security, but it sure makes life a little harder for the attacker). (To avoid deadlock, the connection opener would not have the opportunity to wait sending the CSM.)\n+1 ... especially for Maximum-Message-Size. HTTP/2 has a similar flow for SETTINGS during connection including potentially empty settings: URL URL\nNo objections. I will create a pull request for review by the WG.", "new_text": "The semantics of the other CoAP header fields are left unchanged. 2.5. CoAP requests and responses are exchanged asynchronously over the TCP/TLS connection. A CoAP client can send multiple requests without"} {"id": "q-en-coap-tcp-tls-fdd4bc0cd9c398b70eb7c6be576ae10b557f1cc7edc2c8247f9aaa211866b028", "old_text": "Setting options indicate a setting that will be applied by the sender. Most CSM Options are useful mainly as initial messages in the connection. Both capability and settings options are cumulative. 
A Capability and Settings message does not invalidate a previously sent capability", "comments": "This text does not address who might be allowed to wait for the other guy to make their statement (if both wait, there is a deadlock).\nThe intention is that neither side needs to wait. The only open issue is whether we want to allow the client to immediately start sending messages after it sends its CSM without waiting for the server CSM similar to the HTTP/2 case. I planned to discuss that on-list when I \"advertised\" this set of pull requests.\nFrom your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nWhen I submitted the original pull request, I also posed a question on the but did not receive a response. One open question is whether we want to allow a client to immediately send messages after sending its CSM without waiting for the server CSM. This would be similar to HTTP/2: URL To avoid unnecessary latency, clients are permitted to send additional messages to the server immediately after sending the client connection preface, without waiting to receive the server connection preface. It is important to note, however, that the server connection preface SETTINGS frame might include parameters that necessarily alter how a client is expected to communicate with the server. Upon receiving the SETTINGS frame, the client is expected to honor any parameters established. In some configurations, it is possible for the server to transmit SETTINGS before the client sends additional frames, providing an opportunity to avoid this issue.\nI support not blocking the device until the server CSM. The server can choose to not process messages that are invalid (e.g. too large) and immediately follow up with the server CSM. From the perspective of a cloud and gateway software provider, the CoAP (using TCP/TLS) server will usually be in the cloud or gateway and therefore usually not power constrained. I may be missing other scenarios where a battery powered device is acting as a CoAP/TCP server. Does anyone have plans for any of those scenarios?\nNAME added a related comment to (which is closed) From your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nMichel Veillette points out that a peer that wants to use a new connection has an uncertainty whether a CSM message with capability indications will come or not, so it is hard to decide when to send the first request (either based on default capabilities or waiting for the CSM to come in). One way out of having to do arbitrary waits would be to require the exchange of a CSM message (even an option-less one) after connection setup. 
The burden is very low, as a minimal implementation would just need to send a constant message of 0x00 0xe1 at setup. This requirement could be stated without giving a sequence, effectively allowing both sides to send the initial CSM in parallel, or it could give the listener the opportunity to not send its CSM until the connection opener has sent theirs, trying to avoid the need to answer right away to a connection setup -- this makes it too easy to scan for ports (OK, we also have a default port, and making scanning harder does not by itself provide any security, but it sure makes life a little harder for the attacker). (To avoid deadlock, the connection opener would not have the opportunity to wait sending the CSM.)\n+1 ... especially for Maximum-Message-Size. HTTP/2 has a similar flow for SETTINGS during connection including potentially empty settings: URL URL\nNo objections. I will create a pull request for review by the WG.", "new_text": "Setting options indicate a setting that will be applied by the sender. A Capability and Settings message MUST be sent by both endpoints at the start of the connection and MAY be sent at any other time by either endpoint over the lifetime of the connection. Both capability and settings options are cumulative. A Capability and Settings message does not invalidate a previously sent capability"} {"id": "q-en-coap-tcp-tls-fdd4bc0cd9c398b70eb7c6be576ae10b557f1cc7edc2c8247f9aaa211866b028", "old_text": "Base values are listed below for CSM Options. These are the values for the Capability and Setting before any Capability and Settings messages sends a modified value. These are not default values for the option as defined in Section 5.4.4 in RFC7252. A default value would mean that an empty", "comments": "This text does not address who might be allowed to wait for the other guy to make their statement (if both wait, there is a deadlock).\nThe intention is that neither side needs to wait. The only open issue is whether we want to allow the client to immediately start sending messages after it sends its CSM without waiting for the server CSM similar to the HTTP/2 case. I planned to discuss that on-list when I \"advertised\" this set of pull requests.\nFrom your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nWhen I submitted the original pull request, I also posed a question on the but did not receive a response. One open question is whether we want to allow a client to immediately send messages after sending its CSM without waiting for the server CSM. This would be similar to HTTP/2: URL To avoid unnecessary latency, clients are permitted to send additional messages to the server immediately after sending the client connection preface, without waiting to receive the server connection preface. It is important to note, however, that the server connection preface SETTINGS frame might include parameters that necessarily alter how a client is expected to communicate with the server. Upon receiving the SETTINGS frame, the client is expected to honor any parameters established. 
In some configurations, it is possible for the server to transmit SETTINGS before the client sends additional frames, providing an opportunity to avoid this issue.\nI support not blocking the device until the server CSM. The server can choose to not process messages that are invalid (e.g. too large) and immediately follow up with the server CSM. From the perspective of a cloud and gateway software provider, the CoAP (using TCP/TLS) server will usually be in the cloud or gateway and therefore usually not power constrained. I may be missing other scenarios where a battery powered device is acting as a CoAP/TCP server. Does anyone have plans for any of those scenarios?\nNAME added a related comment to (which is closed) From your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nMichel Veillette points out that a peer that wants to use a new connection has an uncertainty whether a CSM message with capability indications will come or not, so it is hard to decide when to send the first request (either based on default capabilities or waiting for the CSM to come in). One way out of having to do arbitrary waits would be to require the exchange of a CSM message (even an option-less one) after connection setup. The burden is very low, as a minimal implementation would just need to send a constant message of 0x00 0xe1 at setup. This requirement could be stated without giving a sequence, effectively allowing both sides to send the initial CSM in parallel, or it could give the listener the opportunity to not send its CSM until the connection opener has sent theirs, trying to avoid the need to answer right away to a connection setup -- this makes it too easy to scan for ports (OK, we also have a default port, and making scanning harder does not by itself provide any security, but it sure makes life a little harder for the attacker). (To avoid deadlock, the connection opener would not have the opportunity to wait sending the CSM.)\n+1 ... especially for Maximum-Message-Size. HTTP/2 has a similar flow for SETTINGS during connection including potentially empty settings: URL URL\nNo objections. I will create a pull request for review by the WG.", "new_text": "Base values are listed below for CSM Options. These are the values for the Capability and Setting before any Capability and Settings messages send a modified value. These are not default values for the option as defined in Section 5.4.4 in RFC7252. A default value would mean that an empty"} {"id": "q-en-core-problem-details-4e804c23fc93511543f758947efafd2daf1a130013d0cae4444a172115c3d7df", "old_text": "1.1. The terminology from RFC7252 and STD94 applies. Readers are also expected to be familiar with the terminology from RFC7807. 
In this document, the structure of data is specified in CDDL RFC8610 RFC9165.", "comments": "Start collecting changes from AD review\nComment by NAME : /example value 4711 not actually registered like this:/ ~~~ I don't get the sentence above - if this is to highlight that this is an example and that the value 4711 is not actually registered, I think this is redundant (and you could also add that in the Figure caption rather than in the example's text).\nSee : The danger that people are just blindly copying an example is just too high. So we would prefer to keep this redundant comment in. We could improve the wording of the comment.\nComment by NAME : Please explicitly state that the example (or examples if you add one in ) uses CBOR diagnostic notation.\nFixed in .\nClosed by 7463ce1", "new_text": "1.1. The terminology from RFC7252, STD94, and RFC8610 applies; in particular CBOR diagnostic notation is defined in STD94 and RFC8610. Readers are also expected to be familiar with the terminology from RFC7807. In this document, the structure of data is specified in CDDL RFC8610 RFC9165."} {"id": "q-en-core-problem-details-4e804c23fc93511543f758947efafd2daf1a130013d0cae4444a172115c3d7df", "old_text": "be dereferenced in the normal course of handling problem details (i.e., outside diagnostic/debugging procedures involving humans). An example of a custom extension using a URI as \"custom-problem- detail-entries\" key is shown in fig-example-custom-with-uri. Obviously, an SDO like 3GPP can also easily register such a custom problem detail entry to receive a more efficient unsigned integer key; the same example but using a registered unsigned int as \"custom- problem-detail-entries\" key is shown in fig-example-custom-with-uint. In summary, the keys for the maps used inside Custom Problem Detail entries are defined specifically to the identifier of that Custom", "comments": "Start collecting changes from AD review\nComment by NAME : /example value 4711 not actually registered like this:/ ~~~ I don't get the sentence above - if this is to highlight that this is an example and that the value 4711 is not actually registered, I think this is redundant (and you could also add that in the Figure caption rather than in the example's text).\nSee : The danger that people are just blindly copying an example is just too high. So we would prefer to keep this redundant comment in. We could improve the wording of the comment.\nComment by NAME : Please explicitly state that the example (or examples if you add one in ) uses CBOR diagnostic notation.\nFixed in .\nClosed by 7463ce1", "new_text": "be dereferenced in the normal course of handling problem details (i.e., outside diagnostic/debugging procedures involving humans). fig-example-custom-with-uri shows an example (in CBOR diagnostic notation) of a custom extension using a (made-up) URI as \"custom- problem-detail-entries\" key. Obviously, an SDO like 3GPP can also easily register such a custom problem detail entry to receive a more efficient unsigned integer key; fig-example-custom-with-uint shows how the same example would look like using a (made-up) registered unsigned int as \"custom- problem-detail-entries\" key: In summary, the keys for the maps used inside Custom Problem Detail entries are defined specifically to the identifier of that Custom"} {"id": "q-en-cose-spec-d5730063d99c8bc80a04ed2f6d8968bcca80eb57e53ecfb5ddd8256e35c1f309", "old_text": "calculation of the counter signature can be computed. 
Details on computing counter signatures are found in counter_signature. This parameter provides the time the content was created. For signatures and recipient structures, this would be the time that the signature or recipient key object was created. For content structures, this would be the time that the content was created. The unsigned integer value is the number of seconds, excluding leap seconds; after midnight UTC, January 1, 1970. The CDDL fragment that represents the set of headers defined in this section is given below. Each of the headers is tagged as optional", "comments": "It makes sense that we do not talk about creation time of the content. However the concept of having a time in the header is also very reasonable for things like countersignatures where one wants to make a timestamp as there is no content that the time marker can be placed in. We therefore change the text to make the time field be the time that the cryptographic operation was performed at.", "new_text": "calculation of the counter signature can be computed. Details on computing counter signatures are found in counter_signature. This parameter provides the time the content cryptographic operation is performed. For signatures and recipient structures, this would be the time that the signature or recipient key object was created. For content structures, this would be the time that the content structure was created. The unsigned integer value is the number of seconds, excluding leap seconds; after midnight UTC, January 1, 1970. The field is primarily intended to be to be used for countersignatures, however it can additionally be used for replay detection as well. The CDDL fragment that represents the set of headers defined in this section is given below. Each of the headers is tagged as optional"} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "1.3. There currently is no standard CBOR grammar available for use by specifications. In this document, we use a modified version of the CBOR data definition language (CDDL) defined in I-D.greevenbosch- appsawg-cbor-cddl. The differences between the defined grammar and the one we used are mostly self explanatory. The biggest difference being the addition of the choice operator '|'. Additionally, note the use of the null value which is used to occupy a location in an array but to mark that the element is not present. 2.", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "1.3. There currently is no standard CBOR grammar available for use by specifications. In this document, we use the grammar defined in the CBOR data definition language (CDDL) I-D.greevenbosch-appsawg-cbor- cddl. 2."} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "contains the information about the plain text or encryption process that is to be integrity protected. The field is encoded in CBOR as a 'bstr' if present and the value 'null' if there is no data. The contents of the protected field is a CBOR map of the protected data names and values. The map is CBOR encoded before placing it into the bstr. 
Only values associated with the current", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "contains the information about the plain text or encryption process that is to be integrity protected. The field is encoded in CBOR as a 'bstr' if present and the value 'nil' if there is no data. The contents of the protected field is a CBOR map of the protected data names and values. The map is CBOR encoded before placing it into the bstr. Only values associated with the current"} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "would apply to multiple recipient structures. contains information about the plain text that is not integrity protected. If there are no field, then the value 'null' is used. Only values associated with the current cipher text are to be placed in this location even if the value would apply to multiple recipient structures. contains the initialization vector (IV), or it's equivalent, if one is needed by the encryption algorithm. If there is no IV, then the value 'null' is used. contains additional authenticated data (aad) supplied by the application. This field contains information about the plain text data that is authenticated, but not encrypted. If the application does not provide this data, the value 'null' is used. contains the encrypted plain text. If the cipherText is to be transported independently of the control information about the encryption process (i.e. detached content) then the value 'null' is encoded here. contains the recipient information. The field can have one of three data types:", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "would apply to multiple recipient structures. contains information about the plain text that is not integrity protected. If there are no field, then the value 'nil' is used. Only values associated with the current cipher text are to be placed in this location even if the value would apply to multiple recipient structures. contains the initialization vector (IV), or it's equivalent, if one is needed by the encryption algorithm. If there is no IV, then the value 'nil' is used. contains additional authenticated data (aad) supplied by the application. This field contains information about the plain text data that is authenticated, but not encrypted. If the application does not provide this data, the value 'nil' is used. contains the encrypted plain text. If the cipherText is to be transported independently of the control information about the encryption process (i.e. detached content) then the value 'nil' is encoded here. contains the recipient information. The field can have one of three data types:"} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "recipients can be encoded either this way or as a single array element. A 'null' value if there are no recipients. 4.1.", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "recipients can be encoded either this way or as a single array element. A 'nil' value if there are no recipients. 
4.1."} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "4.1.1. In direct encryption mode, a shared secret between the sender and the recipient is used as the CEK. For direct encryption mode, no recipient structure is built. All of the information about the key is placed in either the protected or unprotected fields at the content level. When direct encryption mode is used, it MUST be the only mode used on the message. It is a massive security leak to have both direct encryption and a different key management mode on the same message. For JOSE, direct encryption key management is the only key management method allowed for doing MAC-ed messages. In COSE, all of the key", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "4.1.1. In direct encryption mode, a shared secret between the sender and the recipient is used as the CEK. When direct encryption mode is used, it MUST be the only mode used on the message. It is a massive security leak to have both direct encryption and a different key management mode on the same message. For JOSE, direct encryption key management is the only key management method allowed for doing MAC-ed messages. In COSE, all of the key"} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'iv', 'aad', 'ciphertext' and 'recipients' fields MUST be null. At a minimum, the 'unprotected' field SHOULD contain the 'alg' parameter as well as a parameter identifying the shared secret.", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'iv', 'aad', 'ciphertext' and 'recipients' fields MUST be nil. At a minimum, the 'unprotected' field SHOULD contain the 'alg' parameter as well as a parameter identifying the shared secret."} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', and 'recipients' fields MUST be null. The plain text to be encrypted is the key from next layer down (usually the content layer).", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', and 'recipients' fields MUST be nil. The plain text to be encrypted is the key from next layer down (usually the content layer)."} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "4.1.4. Direct Key Agreement derives the CEK from the shared secret computed by the key agreement operation. For Direct Key Agreement, no recipient structure is built. All of the information about the key and key agreement process is placed in either the 'protected' or 'unprotected' fields at the content level. When direct key agreement mode is used, it SHOULD be the only mode used on the message. 
This method creates the CEK directly and that", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "4.1.4. Direct Key Agreement derives the CEK from the shared secret computed by the key agreement operation. When direct key agreement mode is used, it SHOULD be the only mode used on the message. This method creates the CEK directly and that"} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', and 'iv' fields all use the 'null' value. At a minimum, the 'unprotected' field SHOULD contain the 'alg' parameter as well as a parameter identifying the asymmetric key.", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', and 'iv' fields all use the 'nil' value. At a minimum, the 'unprotected' field SHOULD contain the 'alg' parameter as well as a parameter identifying the asymmetric key."} {"id": "q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4", "old_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', and 'iv' fields all use the 'null' value. The plain text to be encrypted is the key from next layer down (usually the content layer).", "comments": "The document is updated to use the CDDL grammer from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match.", "new_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', and 'iv' fields all use the 'nil' value. The plain text to be encrypted is the key from next layer down (usually the content layer)."} {"id": "q-en-cose-spec-454e0ad2563909c7e11f00ecb5856a08b470b70a4ed832a55842d7f5104bfd65", "old_text": "The following values are used for L: limits messages to 2^16 bytes in length. The nonce length is 13 bytes allowing for 2^(13*8) possible values of the nonce without repeating. limits messages to 2^64 byes in length. The nonce length is 7 bytes allowing for 2^56 possible values of the nonce without", "comments": "Assign integer tags to the CCM modes that we are keeping. Remove the 192-bit key CCM modes from the table. Update example using CCM", "new_text": "The following values are used for L: limits messages to 2^16 bytes (64Kbyte) in length. This sufficently long for messages in the constrainted world. The nonce length is 13 bytes allowing for 2^(13*8) possible values of the nonce without repeating. limits messages to 2^64 byes in length. The nonce length is 7 bytes allowing for 2^56 possible values of the nonce without"} {"id": "q-en-data-plane-drafts-a24ebf5660ba617c1b06ad1225da371c684f4c546e1fa79f2354dd6843d122ca", "old_text": "7. Security considerations for DetNet are described in detail in I- D.ietf-detnet-security. This section considers exclusively security considerations which are specific to the DetNet data plane. Security aspects which are unique to DetNet are those whose aim is to provide the specific quality of service aspects of DetNet, which are", "comments": "Sorry, now it is uploaded to the right folder. Simple accept pull request. Thx Bala'zs", "new_text": "7. 
Security considerations for DetNet are described in detail in I- D.ietf-detnet-security. General security considerations are described in I-D.ietf-detnet-architecture. This section considers exclusively security considerations which are specific to the DetNet IP data plane. Security aspects which are unique to DetNet are those whose aim is to provide the specific quality of service aspects of DetNet, which are"} {"id": "q-en-data-plane-drafts-a24ebf5660ba617c1b06ad1225da371c684f4c546e1fa79f2354dd6843d122ca", "old_text": "and bounded end-to-end delivery latency. The primary considerations for the data plane is to maintain confidentiality of data traversing the DetNet network, application flows can be protected through whatever means is provided by the underlying technology. For example, encryption may be used, such as that provided by IPSec RFC4301 for IP flows and by MACSec IEEE802.1AE-2018 for Ethernet (Layer-2) flows. DetNet flows are identified on a per-flow basis, which may provide attackers with additional information about the data flows (when compared to networks that do not include per-flow identification). This is an inherent property of DetNet which has security implications that should be considered when determining if DetNet is a suitable technology for any given use case. To provide uninterrupted availability of the DetNet quality of service, provisions can be made against DOS attacks and delay attacks. To protect against DOS attacks, excess traffic due to malicious or malfunctioning devices can be prevented or mitigated, for example through the use of traffic admission control applied at the input of a DetNet domain. To prevent DetNet packets from being delayed by an entityexternal to a DetNet domain, DetNet technology definition can allow for the mitigation of Man-In-The-Middle attacks, for example through use of authentication and authorization of devices within the DetNet domain. Because DetNet mechanisms or applications that rely on DetNet can make heavy use of methods that require precise time synchronization, the accuracy, availability, and integrity of time synchronization is of critical importance. Extensive discussion of this topic can be found in RFC7384. 8.", "comments": "Sorry, now it is uploaded to the right folder. Simple accept pull request. Thx Bala'zs", "new_text": "and bounded end-to-end delivery latency. The primary considerations for the data plane is to maintain integrity of data and delivery of the associated DetNet service traversing the DetNet network. Application flows can be protected through whatever means is provided by the underlying technology. For example, encryption may be used, such as that provided by IPSec RFC4301 for IP flows and/or by an underlying sub-net using MACSec IEEE802.1AE-2018 for IP over Ethernet (Layer-2) flows. From a data plane perspective this document does not add or modify any header information. At the management and control level DetNet flows are identified on a per-flow basis, which may provide controller plane attackers with additional information about the data flows (when compared to controller planes that do not include per-flow identification). This is an inherent property of DetNet which has security implications that should be considered when determining if DetNet is a suitable technology for any given use case. To provide uninterrupted availability of the DetNet service, provisions can be made against DOS attacks and delay attacks. 
To protect against DOS attacks, excess traffic due to malicious or malfunctioning devices can be prevented or mitigated, for example through the use of existing mechanism such as policing and shaping applied at the input of a DetNet domain. To prevent DetNet packets from being delayed by an entity external to a DetNet domain, DetNet technology definition can allow for the mitigation of Man-In-The- Middle attacks, for example through use of authentication and authorization of devices within the DetNet domain. 8."} {"id": "q-en-deprecation-header-c30733c920c1c9077a9cf21cb42f2c98767c94d9a840a69d75fe4a656a679a95", "old_text": "The \"Deprecation\" response header field describes the deprecation of the resource identified with the response it occurred within (see Section 3.1.4.1 of HTTP). It conveys either the deprecation date, which may be in the future (the resource context will be deprecated at that date) or in the past (the resource context has been deprecated at that date), or it simply flags the resource context as being deprecated. Servers MUST NOT include more than one \"Deprecation\" header field in the same response. The date, if present, is the date when the resource was or will be deprecated. It is in the form of an IMF-fixdate timestamp. The following example shows that the resource context has been deprecated on Sunday, November 11, 2018 at 23:59:59 GMT: The deprecation date can be in the future. This means that the resource will be deprecated at the given date in future. If the deprecation date is not known, the header field can carry the simple string \"true\", indicating that the resource context is deprecated, without indicating when that happened: 2.2.", "comments": "Looks good to me.\n... are considered a 'bad smell'. Instead of a date vs. a fixed string, could we just say that a date in the past is equal to 'true'?\ni guess we could. and we already say that a date in the past means that the resource is deprecated. the motivation to allow a flag was to not force services to make up a date when they don't have one and simply want to flag something as deprecated. but we could also \"force\" them to make up a date by not allowing the \"true\" flag. i am fine either way and i do see the point that the syntax may cause some implementations to not parse the field correctly. maybe others can weigh in as well?\nFWIW, I see it like Expires -- if it's already stale, you just assign a date in the past.\ni could definitely live with such a design.\nOne use case was to align the header's usage with API contract as defined using OpenAPI. Having boolean helps align this header with OpenAPI's flag which does not require date. Forcing API developers to add a random date in the past in case if it is not available due toe various reasons would not be desirable for various other reasons (SLA?). Although, I wish that OpenAPI adds date for . >If the deprecation date is not known, the header field can carry the simple string \"true\", indicating that the resource context is deprecated, without indicating when that happened: On Jul 31, 2021, at 06:46, Mark Nottingham NAME wrote: FWIW, I see it like Expires -- if it's already stale, you just assign a date in the past.\nFor Expires, January 1 1970 (epoch 0) is widely used.\nNAME I wish OpenAPI supported deprecated date also. But when we do, it won't be with a property that can be boolean or date.\nOk NAME thanks for clarification on the OpenAPI side of things. I agree. We can just use date. 
There are some implementations out there, I would not know if any of those use boolean but since we are not an RFC yet, such a change should not be unexpected.\nit seems like we can safely remove the alternative syntax. i have created a PR asking for review by NAME\nLGTM", "new_text": "The \"Deprecation\" response header field describes the deprecation of the resource identified with the response it occurred within (see Section 3.1.4.1 of HTTP). It conveys the deprecation date, which may be in the future (the resource context will be deprecated at that date) or in the past (the resource context has been deprecated at that date). Servers MUST NOT include more than one \"Deprecation\" header field in the same response. The date is the date when the resource was or will be deprecated. It is in the form of an IMF-fixdate timestamp. The following example shows that the resource context has been deprecated on Sunday, November 11, 2018 at 23:59:59 GMT: The deprecation date can be in the future. This means that the resource will be deprecated at the indicated date in the future. 2.2."} {"id": "q-en-deprecation-header-c30733c920c1c9077a9cf21cb42f2c98767c94d9a840a69d75fe4a656a679a95", "old_text": "10. The first example shows a deprecation header field without date information: The second example shows a deprecation header with date information and a link to the successor version: The third example shows a deprecation header field with links for the successor version and for the API's deprecation policy. In addition, it shows the sunset date for the deprecated resource: ", "comments": "Looks good to me.\n... are considered a 'bad smell'. Instead of a date vs. a fixed string, could we just say that a date in the past is equal to 'true'?\ni guess we could. and we already say that a date in the past means that the resource is deprecated. the motivation to allow a flag was to not force services to make up a date when they don't have one and simply want to flag something as deprecated. but we could also \"force\" them to make up a date by not allowing the \"true\" flag. i am fine either way and i do see the point that the syntax may cause some implementations to not parse the field correctly. maybe others can weigh in as well?\nFWIW, I see it like Expires -- if it's already stale, you just assign a date in the past.\ni could definitely live with such a design.\nOne use case was to align the header's usage with API contract as defined using OpenAPI. Having boolean helps align this header with OpenAPI's flag which does not require date. Forcing API developers to add a random date in the past in case if it is not available due toe various reasons would not be desirable for various other reasons (SLA?). Although, I wish that OpenAPI adds date for . >If the deprecation date is not known, the header field can carry the simple string \"true\", indicating that the resource context is deprecated, without indicating when that happened: On Jul 31, 2021, at 06:46, Mark Nottingham NAME wrote: FWIW, I see it like Expires -- if it's already stale, you just assign a date in the past.\nFor Expires, January 1 1970 (epoch 0) is widely used.\nNAME I wish OpenAPI supported deprecated date also. But when we do, it won't be with a property that can be boolean or date.\nOk NAME thanks for clarification on the OpenAPI side of things. I agree. We can just use date. 
There are some implementations out there, I would not know if any of those use boolean but since we are not an RFC yet, such a change should not be unexpected.\nit seems like we can safely remove the alternative syntax. i have created a PR asking for review by NAME\nLGTM", "new_text": "10. The first example shows a deprecation header with date information and a link to the successor version: The second example shows a deprecation header field with links for the successor version and for the API's deprecation policy. In addition, it shows the sunset date for the deprecated resource: "} {"id": "q-en-dnssec-chain-extension-3b7570121589a526bf5d1838602659d65bd8bf32f68881d603ee160538d28465", "old_text": "unsigned CNAME records that may have been synthesized in the response from a DNS resolver. The subsequent RRsets MUST contain the full sequence of DNS records needed to authenticate the TLSA record set from the server's trust anchor. Typically this means a sequence of DNSKEY and DS RRsets that cover all zones from the target zone containing the TLSA record set to the trust anchor zone. Names that are aliased via CNAME and/or DNAME records may involve multiple branches of the DNS tree. In this case, the authentication", "comments": "Small tweak to text to make clearer that validation RRsets can be in any order.", "new_text": "unsigned CNAME records that may have been synthesized in the response from a DNS resolver. The subsequent RRsets MUST contain the full set of DNS records needed to authenticate the TLSA record set from the server's trust anchor. Typically this means a set of DNSKEY and DS RRsets that cover all zones from the target zone containing the TLSA record set to the trust anchor zone. The TLS client should be prepared to receive this set of RRsets in any order. Names that are aliased via CNAME and/or DNAME records may involve multiple branches of the DNS tree. In this case, the authentication"} {"id": "q-en-draft-ietf-add-ddr-beb1d7fdfcb2dd388f1ea523f1e0bf704b40a1f093218e6a97413a7da4719ae3", "old_text": "for encrypted DNS protocols when the name of an encrypted resolver is known. This mechanism is designed to be limited to cases where unencrypted resolvers and their designated resolvers are operated by the same entity. 1.", "comments": "changed equivalence instances for designated and added the possibility of multiple entities so long as they are cooperating This is to address issue\nI am not sure we need to introduce \"Equivalence\" nor to develop who is operating the resolvers - especially as many entities may be involved in the operation of a resolver. I am not sure we need to introduce Equivalence. If so I would propose the following text: OLD: \"Equivalence\" in this context means that the resolvers are operated by the same entity; for example, the resolvers are accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. NEW: \"Equivalence\" in this context means that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. If Equivalence is not introduced - which I prefer -I would propose the following text: NEW: In this context the discovery process ensures that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers.\nI agree that we do not need to introduce \"equivalence\" in this document. It is a remnant of the DEER draft DDR is based on. 
Per WG discussion, we want to use the term \"designation\" to describe the relationship between the unencrypted resolver and the encrypted resolver to which the client is transferring. I'm editing accordingly. I have also added references to the possibility of multiple \"entities\" as that is definitely possible. (PR coming later tonight)\ngood.", "new_text": "for encrypted DNS protocols when the name of an encrypted resolver is known. This mechanism is designed to be limited to cases where unencrypted resolvers and their designated resolvers are operated by the same entity or cooperating entities. 1."} {"id": "q-en-draft-ietf-add-ddr-beb1d7fdfcb2dd388f1ea523f1e0bf704b40a1f093218e6a97413a7da4719ae3", "old_text": "Both of these approaches allow clients to confirm that a discovered Encrypted Resolver is designated by the originally provisioned resolver. \"Equivalence\" in this context means that the resolvers are operated by the same entity; for example, the resolvers are accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. 1.1.", "comments": "changed equivalence instances for designated and added the possibility of multiple entities so long as they are cooperating This is to address issue\nI am not sure we need to introduce \"Equivalence\" nor to develop who is operating the resolvers - especially as many entities may be involved in the operation of a resolver. I am not sure we need to introduce Equivalence. If so I would propose the following text: OLD: \"Equivalence\" in this context means that the resolvers are operated by the same entity; for example, the resolvers are accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. NEW: \"Equivalence\" in this context means that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. If Equivalence is not introduced - which I prefer -I would propose the following text: NEW: In this context the discovery process ensures that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers.\nI agree that we do not need to introduce \"equivalence\" in this document. It is a remnant of the DEER draft DDR is based on. Per WG discussion, we want to use the term \"designation\" to describe the relationship between the unencrypted resolver and the encrypted resolver to which the client is transferring. I'm editing accordingly. I have also added references to the possibility of multiple \"entities\" as that is definitely possible. (PR coming later tonight)\ngood.", "new_text": "Both of these approaches allow clients to confirm that a discovered Encrypted Resolver is designated by the originally provisioned resolver. \"Designated\" in this context means that the resolvers are operated by the same entity or cooperating entities; for example, the resolvers are accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. 1.1."} {"id": "q-en-draft-ietf-add-ddr-beb1d7fdfcb2dd388f1ea523f1e0bf704b40a1f093218e6a97413a7da4719ae3", "old_text": "Resolvers. Other protocols can also use the format defined by I- D.schwartz-svcb-dns. However, if any protocol does not involve some form of certificate validation, new validation mechanisms will need to be defined to support validating equivalence as defined in authenticated. 
4.", "comments": "changed equivalence instances for designated and added the possibility of multiple entities so long as they are cooperating This is to address issue\nI am not sure we need to introduce \"Equivalence\" nor to develop who is operating the resolvers - especially as many entities may be involved in the operation of a resolver. I am not sure we need to introduce Equivalence. If so I would propose the following text: OLD: \"Equivalence\" in this context means that the resolvers are operated by the same entity; for example, the resolvers are accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. NEW: \"Equivalence\" in this context means that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. If Equivalence is not introduced - which I prefer -I would propose the following text: NEW: In this context the discovery process ensures that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers.\nI agree that we do not need to introduce \"equivalence\" in this document. It is a remnant of the DEER draft DDR is based on. Per WG discussion, we want to use the term \"designation\" to describe the relationship between the unencrypted resolver and the encrypted resolver to which the client is transferring. I'm editing accordingly. I have also added references to the possibility of multiple \"entities\" as that is definitely possible. (PR coming later tonight)\ngood.", "new_text": "Resolvers. Other protocols can also use the format defined by I- D.schwartz-svcb-dns. However, if any protocol does not involve some form of certificate validation, new validation mechanisms will need to be defined to support validating designation as defined in authenticated. 4."} {"id": "q-en-draft-ietf-add-ddr-0725a524283b98d66db922170a236dc52c5e3f13d33276ed4a133f2a7541b6f6", "old_text": "4.1. In order to be considered an authenticated Designated Resolver, the TLS certificate presented by the Encrypted Resolver MUST contain both the domain name (from the SVCB answer) and the IP address of the designating Unencrypted Resolver within the SubjectAlternativeName certificate field. The client MUST check the SubjectAlternativeName field for both the Unencrypted Resolver's IP address and the", "comments": "Right now, we only allow authenticated and matching-IP opportunistic use of the DDR record. While use of the record in a case where authentication fails isn't safe to use automatically, we should clarify that implementations MAY choose to use the information anyhow by policy, and possibly with user control.\nFor authentication to matter, it generally has to operate without fallback. Currently the text is not very clear on this point, but it sure seems like IP-authenticated mode is expected to fall back to cleartext if no upgrade is possible. I think we need to clarify the use case for an authenticated mode, or abandon the distinction between opportunistic and authenticated. In my view, authenticated mode only makes sense if (1) the resolver is identified by its IP address AND (2) the system knows a priori that this resolver supports encrypted DNS, so that it can refuse to fall back to cleartext if the upgrade information is stripped by a network attacker. 
This is a sufficiently narrow use case that I lean toward dropping it entirely and recommending Discovery Using Resolver Names (Section 5) or opportunistic instead.\nCurrently, this is what we say (which I now see a grammatical error in, but there it is): We don't say anything else because that would be out of WG scope. The point of the document is to help a stub resolver configured with a recursive resolver known only by IP address to bootstrap into encrypted DNS. The stub resolver's posture on whether to fall back to unencrypted DNS or simply refuse to use the resolver is its own business. Look at this through a common OS scenario: the OS stub resolver learns an IP address from DHCP, and it's a public IP address (my old ISP did this). It knows of no other resolvers because the user doesn't know what DNS is and that's the default configuration (use network recommendation). If I try to use DDR and I get no answer back, then I'm in the same position as I was before DDR: I know of no other resolver, so I either have to use this one or have no DNS resolutions. DDR did not introduce that conundrum; it offers clients a way out of it when DNR is not present and there is no plain-text DNS attack present. The authenticated DDR mode allows such a client to be confident in the authenticity of the bootstrap even though it started over plain text. I do not agree this scenario is sufficiently narrow to drop. Is your concern over the usefulness of the scenario, or over naming it \"authenticated\"? I am open to suggestions on the latter; I strongly disagree with the former.\nBoth. How is this information useful to the client? I have not heard of any proposal for any client to behave differently on the basis of whether its encrypted connection is authenticated, when the resolver is identified by IP address. As for the name, RFC 7435 is the reference. I find it a bit confusing, but here's some of the relevant text: The \"authenticated\" section does not seem to describe an \"authenticated\" protocol by this definition, because it does not provide a downgrade-resistant method to determine that authentication is expected to work.\nPlease suggest a more accurate name then please. I do agree behavior naming should be consistent with other documents. Your bar for useful is apparently different from mine. Before DDR, a client configured only with an IP address had only unencrypted DNS as an option. With DDR, the client can possibly encrypt that connection. A transient attacker would have to have been present for the initial SVCB discovery to prevent that; an attacker arriving after DDR has taken place will be kept out of the loop. In the worst case, a client is returned to square one. I do not agree that just because a mechanism can be prevented means the mechanism provides no value. It doesn't have to. In the common case, the client is forced by lack of knowledge to use unencrypted DNS. If it can from that default configuration divine encrypted DNS configuration, that's a net positive. Resolver selection is out of scope and has nothing to do with it.\nI've updated to address this question by reformulating the concept of authentication, and being much more explicit about the threat model. Yes, that's a good step, but it doesn't rely on authentication. This is a very helpful observation. I've incorporated it in .\nI believe this particular issue is largely overtaken by events. 
Particularly, I think we've added some clarity and escape hatches with , and protections with .\nSame comment for line 337/360 about referring to the resolver as Designated versus Encrypted (it's unmodified in this PR but based on your agreement with the other in-line comment it may need modified as well) I think we will need to add some text to the Security Considerations section to mention the security promise difference between following the MAY or not (for optional use of the unverifiable designation). However, we can close this PR without it and handle that in a new PR if that's preferable.", "new_text": "4.1. When a client discovers Designated Resolvers from an Unencrypted Resolver IP address, it can choose to use these Designated Resolvers either automatically, or based on some other policy, heuristic, or user choice. This document defines two preferred methods to automatically use Designated Resolvers: Authenticated Discovery authenticated, for when a TLS certificate can be used to validate the resolver's identity. Opportunistic Discovery opportunistic, for when a resolver is accessed using a non-public IP address. A client MAY additionally use a discovered Designated Resolver without either of these methods, based on implementation-specific policy or user input. Details of such policy are out of scope of this document. Clients SHOULD NOT automatically use a Designated Resolver without some sort of validation, such as the two methods defined in this document or a future mechanism. 4.2. Authenticated Discovery is a mechanism that allows automatic use of a Designated Resolver that supports DNS encryption that performs a TLS handshake. In order to be considered an authenticated Designated Resolver, the TLS certificate presented by the Designated Resolver MUST contain both the domain name (from the SVCB answer) and the IP address of the designating Unencrypted Resolver within the SubjectAlternativeName certificate field. The client MUST check the SubjectAlternativeName field for both the Unencrypted Resolver's IP address and the"} {"id": "q-en-draft-ietf-add-ddr-0725a524283b98d66db922170a236dc52c5e3f13d33276ed4a133f2a7541b6f6", "old_text": "Resolver for any cases in which it would have otherwise used the Unencrypted Resolver. If the Designated Resolver has a different IP address than the Unencrypted Resolver and the TLS certificate does not cover the Unencrypted Resolver address, the client MUST NOT use the discovered Encrypted Resolver. Additionally, the client SHOULD suppress any further queries for Designated Resolvers using this Unencrypted Resolver for the length of time indicated by the SVCB record's Time to Live (TTL). If the Designated Resolver and the Unencrypted Resolver share an IP address, clients MAY choose to opportunistically use the Encrypted Resolver even without this certificate check (opportunistic). If resolving the name of an Encrypted Resolver from an SVCB record yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Encrypted Resolver name. 4.2. There are situations where authenticated discovery of encrypted DNS configuration over unencrypted DNS is not possible. This includes", "comments": "Right now, we only allow authenticated and matching-IP opportunistic use of the DDR record. 
While use of the record in a case where authentication fails isn't safe to use automatically, we should clarify that implementations MAY choose to use the information anyhow by policy, and possibly with user control.\nFor authentication to matter, it generally has to operate without fallback. Currently the text is not very clear on this point, but it sure seems like IP-authenticated mode is expected to fall back to cleartext if no upgrade is possible. I think we need to clarify the use case for an authenticated mode, or abandon the distinction between opportunistic and authenticated. In my view, authenticated mode only makes sense if (1) the resolver is identified by its IP address AND (2) the system knows a priori that this resolver supports encrypted DNS, so that it can refuse to fall back to cleartext if the upgrade information is stripped by a network attacker. This is a sufficiently narrow use case that I lean toward dropping it entirely and recommending Discovery Using Resolver Names (Section 5) or opportunistic instead.\nCurrently, this is what we say (which I now see a grammatical error in, but there it is): We don't say anything else because that would be out of WG scope. The point of the document is to help a stub resolver configured with a recursive resolver known only by IP address to bootstrap into encrypted DNS. The stub resolver's posture on whether to fall back to unencrypted DNS or simply refuse to use the resolver is its own business. Look at this through a common OS scenario: the OS stub resolver learns an IP address from DHCP, and it's a public IP address (my old ISP did this). It knows of no other resolvers because the user doesn't know what DNS is and that's the default configuration (use network recommendation). If I try to use DDR and I get no answer back, then I'm in the same position as I was before DDR: I know of no other resolver, so I either have to use this one or have no DNS resolutions. DDR did not introduce that conundrum; it offers clients a way out of it when DNR is not present and there is no plain-text DNS attack present. The authenticated DDR mode allows such a client to be confident in the authenticity of the bootstrap even though it started over plain text. I do not agree this scenario is sufficiently narrow to drop. Is your concern over the usefulness of the scenario, or over naming it \"authenticated\"? I am open to suggestions on the latter; I strongly disagree with the former.\nBoth. How is this information useful to the client? I have not heard of any proposal for any client to behave differently on the basis of whether its encrypted connection is authenticated, when the resolver is identified by IP address. As for the name, RFC 7435 is the reference. I find it a bit confusing, but here's some of the relevant text: The \"authenticated\" section does not seem to describe an \"authenticated\" protocol by this definition, because it does not provide a downgrade-resistant method to determine that authentication is expected to work.\nPlease suggest a more accurate name then please. I do agree behavior naming should be consistent with other documents. Your bar for useful is apparently different from mine. Before DDR, a client configured only with an IP address had only unencrypted DNS as an option. With DDR, the client can possibly encrypt that connection. A transient attacker would have to have been present for the initial SVCB discovery to prevent that; an attacker arriving after DDR has taken place will be kept out of the loop. 
In the worst case, a client is returned to square one. I do not agree that just because a mechanism can be prevented means the mechanism provides no value. It doesn't have to. In the common case, the client is forced by lack of knowledge to use unencrypted DNS. If it can from that default configuration divine encrypted DNS configuration, that's a net positive. Resolver selection is out of scope and has nothing to do with it.\nI've updated to address this question by reformulating the concept of authentication, and being much more explicit about the threat model. Yes, that's a good step, but it doesn't rely on authentication. This is a very helpful observation. I've incorporated it in .\nI believe this particular issue is largely overtaken by events. Particularly, I think we've added some clarity and escape hatches with , and protections with .\nSame comment for line 337/360 about referring to the resolver as Designated versus Encrypted (it's unmodified in this PR but based on your agreement with the other in-line comment it may need modified as well) I think we will need to add some text to the Security Considerations section to mention the security promise difference between following the MAY or not (for optional use of the unverifiable designation). However, we can close this PR without it and handle that in a new PR if that's preferable.", "new_text": "Resolver for any cases in which it would have otherwise used the Unencrypted Resolver. If the Designated Resolver has a different IP address than the Unencrypted Resolver and the TLS certificate does not cover the Unencrypted Resolver address, the client MUST NOT automatically use the discovered Designated Resolver. Additionally, the client SHOULD suppress any further queries for Designated Resolvers using this Unencrypted Resolver for the length of time indicated by the SVCB record's Time to Live (TTL). If the Designated Resolver and the Unencrypted Resolver share an IP address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (opportunistic). If resolving the name of a Designated Resolver from an SVCB record yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. 4.3. There are situations where authenticated discovery of encrypted DNS configuration over unencrypted DNS is not possible. This includes"} {"id": "q-en-draft-ietf-add-ddr-0725a524283b98d66db922170a236dc52c5e3f13d33276ed4a133f2a7541b6f6", "old_text": "like RA guard RFC6105. An attacker might try to direct Encrypted DNS traffic to itself by causing the client to think that a discovered Designated Resolver uses a different IP address from the Unencrypted Resolver. Such an Encrypted Resolver might have a valid certificate, but be operated by an attacker that is trying to observe or modify user queries without the knowledge of the client or network. If the IP address of a Designated Resolver differs from that of an Unencrypted Resolver, clients MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Encrypted Resolver's TLS certificate (authenticated). 
Opportunistic use of Encrypted Resolvers MUST be limited to cases where the Unencrypted Resolver and Designated Resolver have the same IP address (opportunistic). 8.", "comments": "Right now, we only allow authenticated and matching-IP opportunistic use of the DDR record. While use of the record in a case where authentication fails isn't safe to use automatically, we should clarify that implementations MAY choose to use the information anyhow by policy, and possibly with user control.\nFor authentication to matter, it generally has to operate without fallback. Currently the text is not very clear on this point, but it sure seems like IP-authenticated mode is expected to fall back to cleartext if no upgrade is possible. I think we need to clarify the use case for an authenticated mode, or abandon the distinction between opportunistic and authenticated. In my view, authenticated mode only makes sense if (1) the resolver is identified by its IP address AND (2) the system knows a priori that this resolver supports encrypted DNS, so that it can refuse to fall back to cleartext if the upgrade information is stripped by a network attacker. This is a sufficiently narrow use case that I lean toward dropping it entirely and recommending Discovery Using Resolver Names (Section 5) or opportunistic instead.\nCurrently, this is what we say (which I now see a grammatical error in, but there it is): We don't say anything else because that would be out of WG scope. The point of the document is to help a stub resolver configured with a recursive resolver known only by IP address to bootstrap into encrypted DNS. The stub resolver's posture on whether to fall back to unencrypted DNS or simply refuse to use the resolver is its own business. Look at this through a common OS scenario: the OS stub resolver learns an IP address from DHCP, and it's a public IP address (my old ISP did this). It knows of no other resolvers because the user doesn't know what DNS is and that's the default configuration (use network recommendation). If I try to use DDR and I get no answer back, then I'm in the same position as I was before DDR: I know of no other resolver, so I either have to use this one or have no DNS resolutions. DDR did not introduce that conundrum; it offers clients a way out of it when DNR is not present and there is no plain-text DNS attack present. The authenticated DDR mode allows such a client to be confident in the authenticity of the bootstrap even though it started over plain text. I do not agree this scenario is sufficiently narrow to drop. Is your concern over the usefulness of the scenario, or over naming it \"authenticated\"? I am open to suggestions on the latter; I strongly disagree with the former.\nBoth. How is this information useful to the client? I have not heard of any proposal for any client to behave differently on the basis of whether its encrypted connection is authenticated, when the resolver is identified by IP address. As for the name, RFC 7435 is the reference. I find it a bit confusing, but here's some of the relevant text: The \"authenticated\" section does not seem to describe an \"authenticated\" protocol by this definition, because it does not provide a downgrade-resistant method to determine that authentication is expected to work.\nPlease suggest a more accurate name then please. I do agree behavior naming should be consistent with other documents. Your bar for useful is apparently different from mine. 
Before DDR, a client configured only with an IP address had only unencrypted DNS as an option. With DDR, the client can possibly encrypt that connection. A transient attacker would have to have been present for the initial SVCB discovery to prevent that; an attacker arriving after DDR has taken place will be kept out of the loop. In the worst case, a client is returned to square one. I do not agree that just because a mechanism can be prevented means the mechanism provides no value. It doesn't have to. In the common case, the client is forced by lack of knowledge to use unencrypted DNS. If it can from that default configuration divine encrypted DNS configuration, that's a net positive. Resolver selection is out of scope and has nothing to do with it.\nI've updated to address this question by reformulating the concept of authentication, and being much more explicit about the threat model. Yes, that's a good step, but it doesn't rely on authentication. This is a very helpful observation. I've incorporated it in .\nI believe this particular issue is largely overtaken by events. Particularly, I think we've added some clarity and escape hatches with , and protections with .\nSame comment for line 337/360 about referring to the resolver as Designated versus Encrypted (it's unmodified in this PR but based on your agreement with the other in-line comment it may need modified as well) I think we will need to add some text to the Security Considerations section to mention the security promise difference between following the MAY or not (for optional use of the unverifiable designation). However, we can close this PR without it and handle that in a new PR if that's preferable.", "new_text": "like RA guard RFC6105. An attacker might try to direct Encrypted DNS traffic to itself by causing the client to think that a discovered Designated Resolver uses a different IP address from the Unencrypted Resolver. Such a Designated Resolver might have a valid certificate, but be operated by an attacker that is trying to observe or modify user queries without the knowledge of the client or network. The constraints on validation of Designated Resolvers specified here apply specifically to the automatic discovery mechanisms defined in this document, which are referred to as Authenticated Discovery and Opportunistic Discovery. Clients MAY use some other mechanism to validate and use Designated Resolvers discovered using the DNS SVCB record. However, use of such an alternate mechanism needs to take into account the attack scenarios detailed here. If the IP address of a Designated Resolver differs from that of an Unencrypted Resolver, clients applying Authenticated Discovery (authenticated) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. Clients using Opportunistic Discovery (opportunistic) MUST be limited to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. 8."} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "resolvers frequently support a local \"hosts file\" that preempts query forwarding, and most DNS forwarders and full resolvers can also serve responses from a local zone file. Other standardized hybrid resolution behaviors include Local Root RFC8806, mDNS RFC6762, and NXDOMAIN synthesis for .onion RFC7686. In many network environments, the network offers clients a DNS server (e.g. 
DHCP OFFER, IPv6 Router Advertisement). Although this server", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "resolvers frequently support a local \"hosts file\" that preempts query forwarding, and most DNS forwarders and full resolvers can also serve responses from a local zone file. Other standardized hybrid resolution behaviors include RFC8806, RFC6762, and RFC7686. In many network environments, the network offers clients a DNS server (e.g. DHCP OFFER, IPv6 Router Advertisement). Although this server"} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "3. The protocol in this document allows the domain owner to create a split-horizon DNS. Other entities which do not own the domain are detected by the client. Thus, DNS filtering is not enabled by this protocol. 4.", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "3. The protocol in this document is designed to support the ability of a domain owner to create or authorize a split-horizon view of their domain. The protocol does not support split-horizon views created by any other entity. Thus, DNS filtering is not enabled by this protocol. 4."} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "4.1. There are several DHCP options that convey local domain hints of different kinds. The most directly relevant is \"RDNSS Selection\" RFC6731, which provides \"a list of domains ... about which the RDNSS has special knowledge\", along with a \"High\", \"Medium\", or \"Low\" preference for each name. The specification notes the difficulty of relying on these hints without validation: Other local domain hints in DHCP include the \"Domain Name\" RFC2132, \"Access Network Domain Name\" RFC5986, \"Client FQDN\" RFC4704, and \"Name Service Search\" RFC2937 options. This specification may help clients to interpret these hints. For example, a rogue DHCP server could use the \"Client FQDN\" option to assign a client the name \"www.example.com\" in order to prevent the client from reaching the true \"www.example.com\". A client could use this specification to check the network's authority over this name, and adjust its behavior", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "4.1. There are several DHCP options that convey local domain hints of different kinds. The most directly relevant is RFC6731, which provides \"a list of domains ... about which the RDNSS has special knowledge\", along with a \"High\", \"Medium\", or \"Low\" preference for each name. The specification notes the difficulty of relying on these hints without validation: Other local domain hints in DHCP include the RFC2132, RFC5986, \"Client FQDN\" RFC4704, and RFC2937 options. This specification may help clients to interpret these hints. 
For example, a rogue DHCP server could use the \"Client FQDN\" option to assign a client the name \"www.example.com\" in order to prevent the client from reaching the true \"www.example.com\". A client could use this specification to check the network's authority over this name, and adjust its behavior"} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "SSID, IP subnet assigned, DNS server IP address or name, and other similar mechanisms. For example, one existing implementation determines the host has joined an internal network because the DHCP- assigned IP address belongs to the company's IP address (as assigned by the regional IP addressing authority) and the DHCP-advertised DNS IP address is one used by IT at that network. Other mechanisms exist in other products but are not interesting to this specification; rather what is interesting is this step to determine \"we have joined the internal corporate network\" occurred and the DNS server is configured as authoritative for certain DNS zones (e.g., *.example.com). Because a rogue network can simulate all or most of the above characteristics this specification details how to validate these claims in validating. 4.3.", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "SSID, IP subnet assigned, DNS server IP address or name, and other similar mechanisms. For example, one existing implementation determines the host has joined an internal network because the DHCP- assigned IP address belongs to the company's IP range (as assigned by the regional IP addressing authority) and the DHCP-advertised DNS IP address is one used by IT at that network. Other mechanisms exist in other products but are not interesting to this specification; rather what is interesting is this step to determine \"we have joined the internal corporate network\" occurred and the DNS server is configured as authoritative for certain DNS zones (e.g., ). Because a rogue network can simulate all or most of the above characteristics, this specification details how to validate these claims in validating. 4.3."} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "7. Two examples are shown below. The first example showing an company with an internal-only DNS server resolving the entire zone for that company (e.g., *.example.com) the second example resolving only a subdomain of the company's zone (e.g., *.internal.example.com). 7.1.", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "7. Two examples are shown below. The first example shows a company with an internal-only DNS server that claims the entire zone for that company (e.g., ). In the second example, the internal servers resolves only a subdomain of the company's zone (e.g., ). 7.1."} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "Today, on the Internet it publishes two NS records, \"ns1.example.com\" and \"ns2.example.com\". 
The host and network first need mutual support one of the mechanisms described in learning. Shown in fig-learn is learning using DNR and PvD. Validation is then perfomed using either example-verify-public or example-verify-dnssec. 7.1.1. The figure below shows the steps performed to verify the local claims of DNS authority using a public resolver. 7.1.2.", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "Today, on the Internet it publishes two NS records, \"ns1.example.com\" and \"ns2.example.com\". First, the host and network both need to support one of the discovery mechanisms described in learning. fig-learn shows discovery using DNR and PvD. Validation is then performed using either example-verify-external or example-verify-dnssec. 7.1.1. The figure below shows the steps performed to verify the local claims of DNS authority using an external resolver. 7.1.2."} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "7.2. A subdomain can also be used for all internal DNS names (e.g., the zone internal.example.com exists only on the internal DNS server). For successful validation described in this document the the internal DNS server will need a certificate signed by a CA trusted by the client. For such a name internal.example.com the message flow is similar to internal-only the difference is that queries for hosts not within the subdomain (www.example.com) are sent to the public resolver rather than resolver for internal.example.com. 8.", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "7.2. In many split-horizon deployments, all non-public domain names are placed in a separate child zone (e.g., ). In this configuration, the message flow is similar to internal- only, except that queries for hosts not within the subdomain (e.g., ) are sent to the external resolver rather than the resolver for internal.example.com. As in internal-only, the internal DNS server will need a certificate signed by a CA trusted by the client. 8."} {"id": "q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b", "old_text": "ensure that names under the split-horizon are correctly signed or place them in an unsigned zone. If an internal zone name (e.g., internal.example.com) is used with this specification and a public certificate is obtained for validation, that internal zone name will exist in Certificate Transparency logs RFC9162. It should be noted, however, that this specification does not leak individual host names (e.g., www.internal.example.com) into the Certificate Transparancy logs or to public DNS resolvers. 10.", "comments": "Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me.", "new_text": "ensure that names under the split-horizon are correctly signed or place them in an unsigned zone. 
If an internal zone name (e.g., ) is used with this specification and a public certificate is obtained for validation, that internal zone name will exist in Certificate Transparency logs RFC9162. It should be noted, however, that this specification does not leak individual host names (e.g., ) into the Certificate Transparency logs or to external DNS resolvers. 10."} {"id": "q-en-draft-ietf-add-svcb-dns-ac79bccb838b23737634d4dbdcd2596729064f28710ad964c19052725a94b131", "old_text": "3. Names are formed using Port-Prefix Naming (Section 2.3 of SVCB), with a scheme of \"dns\". For example, SVCB records for a DNS service identified as \"\"dns1.example.com\"\" would be located at \"\"_dns.dns1.example.com\"\". 3.1.", "comments": "This should help with the case in DDR. See URL\nMuch better. Thanks for indulging me.", "new_text": "3. SVCB record names (i.e. QNAMEs) are formed using Port-Prefix Naming (Section 2.3 of SVCB), with a scheme of \"dns\". For example, SVCB records for a DNS service identified as \"\"dns1.example.com\"\" would be queried at \"\"_dns.dns1.example.com\"\". In some use cases, the name used for retrieving these DNS records is different from the server identity used to authenticate the secure transport. To distinguish them, we use the following terms: Binding authority - The service name (Section 1.4 of SVCB) and optional port number used as input to Port-Prefix Naming. Authentication name - The name used for secure transport authentication. It must be a DNS hostname or a literal IP address. Unless otherwise specified, it is the service name from the binding authority. 3.1."} {"id": "q-en-draft-ietf-add-svcb-dns-ac79bccb838b23737634d4dbdcd2596729064f28710ad964c19052725a94b131", "old_text": "default of 53). DNS URIs normally omit the authority, or specify an IP address, but a hostname and non-default port number are allowed. When a non-default port number is part of a service identifier, Port- Prefix Naming places the port number in an additional a prefix on the name. For example, SVCB records for a DNS service identified as \"\"dns1.example.com:9953\"\" would be located at \"\"_9953._dns.dns1.example.com\"\". If two DNS services operating on different port numbers provide different behaviors, this arrangement allows them to preserve the distinction when specifying alternative", "comments": "This should help with the case in DDR. See URL\nMuch better. Thanks for indulging me.", "new_text": "default of 53). DNS URIs normally omit the authority, or specify an IP address, but a hostname and non-default port number are allowed. When the binding authority specifies a non-default port number, Port- Prefix Naming places the port number in an additional a prefix on the name. For example, if the binding authority is \"\"dns1.example.com:9953\"\", the client would query for SVCB records at \"\"_9953._dns.dns1.example.com\"\". If two DNS services operating on different port numbers provide different behaviors, this arrangement allows them to preserve the distinction when specifying alternative"} {"id": "q-en-draft-ietf-add-svcb-dns-ac79bccb838b23737634d4dbdcd2596729064f28710ad964c19052725a94b131", "old_text": "presentation and wire format) is a relative URI Template RFC6570, normally starting with \"/\". If the \"alpn\" SvcParamKey indicates support for HTTP, clients MAY construct a DNS over HTTPS URI Template by combining the prefix \"https://\", the service name, the port from the \"port\" key if present, and the \"dohpath\" value. 
(The DNS service's original port number MUST NOT be used.) Clients SHOULD NOT query for any \"HTTPS\" RRs when using the constructed URI Template. Instead, the SvcParams and address records", "comments": "This should help with the case in DDR. See URL\nMuch better. Thanks for indulging me.", "new_text": "presentation and wire format) is a relative URI Template RFC6570, normally starting with \"/\". If the \"alpn\" SvcParamKey indicates support for HTTP, clients MAY construct a DNS over HTTPS URI Template by combining the prefix \"https://\", the authentication name, the port from the \"port\" key if present, and the \"dohpath\" value. (The binding authority's port number MUST NOT be used.) Clients SHOULD NOT query for any \"HTTPS\" RRs when using the constructed URI Template. Instead, the SvcParams and address records"} {"id": "q-en-draft-ietf-add-svcb-dns-ac79bccb838b23737634d4dbdcd2596729064f28710ad964c19052725a94b131", "old_text": "This section considers an adversary who can add or remove responses to the SVCB query. Clients MUST authenticate the server to its name during secure transport establishment. This name is the hostname used to construct the original SVCB query, and cannot be influenced by the SVCB record contents. Accordingly, this draft does not mandate the use of DNSSEC. This draft also does not specify how clients authenticate the name (e.g. selection of roots of trust), which might vary according to the context. Although this adversary cannot alter the authentication name of the service, it does have control of the port number and \"dohpath\" value.", "comments": "This should help with the case in DDR. See URL\nMuch better. Thanks for indulging me.", "new_text": "This section considers an adversary who can add or remove responses to the SVCB query. During secure transport establishment, clients MUST authenticate the server to its authentication name, which is not influenced by the SVCB record contents. Accordingly, this draft does not mandate the use of DNSSEC. This draft also does not specify how clients authenticate the name (e.g. selection of roots of trust), which might vary according to the context. Although this adversary cannot alter the authentication name of the service, it does have control of the port number and \"dohpath\" value."} {"id": "q-en-draft-ietf-dnsop-terminology-bis-76b3d80eb93bae7e9518d2f4de98caf12c95e105552e7497a87898de9a69accf", "old_text": "the delegation, even though they are not themselves authoritative data. \"[Resource records] which are not part of the authoritative data [of the zone], and are address resource records for the [name servers in subzones]. These RRs are only necessary if the name", "comments": "We don't seem to have defined \"lame delegation\", which is a term that is used all the time. It's in RFC 1912. Should we add it?\nYes.", "new_text": "the delegation, even though they are not themselves authoritative data. \"A lame delegations exists when a nameserver is delegated responsibility for providing nameservice for a zone (via NS records) but is not performing nameservice for that zone (usually because it is not set up as a primary or secondary for the zone).\" (Definition from RFC1912, Section 2.8) \"[Resource records] which are not part of the authoritative data [of the zone], and are address resource records for the [name servers in subzones]. 
These RRs are only necessary if the name"} {"id": "q-en-draft-ietf-dnssd-srp-baa9775aad553f782eb5a38c9d83bac3ba055a3432cc8121a1ed9488d712ac85", "old_text": "An instruction is a Service Discovery Instruction if it contains 2.3.1.2. An instruction is a Service Description Instruction if, for the", "comments": "Update Service Discovery Instruction text to address Esko Dijk's last call comments. This fixes a bogus reference, removes a confusing double negative, and splits the text hierarchically in a way that hopefully makes more sense.", "new_text": "An instruction is a Service Discovery Instruction if it contains Note that there can be more than one Service Discovery Instruction for the same name if the SRP requestor is advertising more than one service of the same type, or is changing the target of a PTR RR. For each such PTR RR add or remove, the above constraints must be met. 2.3.1.2. An instruction is a Service Description Instruction if, for the"} {"id": "q-en-draft-ietf-doh-dns-over-https-755230319acc535d998474a37b05fc990d18d71249a05b110b2cd13f655ef647", "old_text": "The protocol must permit the addition of new formats for DNS queries and responses. The protocol must ensure interoperable media formats through a mandatory to implement format wherein a query must be able to contain future modifications to the DNS protocol including the inclusion of one or more EDNS extensions (including those not yet defined). The protocol must use a secure transport that meets the requirements for HTTPS.", "comments": "This text does not read very well in its current form. Since it is two ideas, two sentences is best.", "new_text": "The protocol must permit the addition of new formats for DNS queries and responses. The protocol must ensure interoperability by specifying a single format for requests and responses that is mandatory to implement. That format must be able to support future modifications to the DNS protocol including the inclusion of one or more EDNS extensions (including those not yet defined). The protocol must use a secure transport that meets the requirements for HTTPS."} {"id": "q-en-draft-ietf-doh-dns-over-https-057e4d999fe11e53975b8319ec271085dc19959abdf3c9f3d06fdbf5b65bc650", "old_text": "A client can use DNS over HTTPS as one of multiple mechanisms to obtain DNS data. If a client of this protocol encounters an HTTP error after sending a DNS query, and then falls back to a different DNS retrieval mechanism, doing so can weaken the privacy expected by the user of the client. 9.", "comments": "added authenticity\nThe potential for misunderstanding here is great. If we consider authenticity to be stated as a goal, we would need more explicit about the properties that this provides. Specifically, this does not provide authenticity with respect to responses, because that is the purpose of DNSSEC.\nYes, that is true. Thinking about it, I think a web application developer (one of the targets) reading this draft might not fully understand the differences between authenticity at the HTTP layer and authenticity at the DNS layer. One of the stated features of Secure Contexts is authenticated channels that guarantee data integrity. If possible, I think a few more sentences elaborating on this topic would not hurt.\nI'm going to close this because we have no idea what kind of authentication that a user would expect, whereas we know pretty well what kind of privacy they expect.\nI think there might be room for something different along these lines. 
DNS hijack from your configured DNS server is currently a major problem, and DoH does allow you to authenticate whether or not you are talking to your configured recursive (while not saying anything about e2e). I'm going to reopen to track..", "new_text": "A client can use DNS over HTTPS as one of multiple mechanisms to obtain DNS data. If a client of this protocol encounters an HTTP error after sending a DNS query, and then falls back to a different DNS retrieval mechanism, doing so can weaken the privacy and authenticity expected by the user of the client. 9."} {"id": "q-en-draft-ietf-doh-dns-over-https-4a09a63dead37ee60ad6ebf9f7a43cd5c25a57b3d744d236492ab7682d58354a", "old_text": "to achieve similar performance. Those features were introduced to HTTP in HTTP/2 RFC7540. Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but may result in very poor performance for many uses cases. 6.3.", "comments": "Hello, I am a non-native English speaker but I noticed the following sentence: \"Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but may result in very poor performance for many uses cases.\" Shouldn't be there at the end? Wiktionary ( URL ) suggests that plural of \"use case\" is \"use cases\", not \"uses cases\".", "new_text": "to achieve similar performance. Those features were introduced to HTTP in HTTP/2 RFC7540. Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but may result in very poor performance. 6.3."} {"id": "q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e", "old_text": "The DNS API client SHOULD include an HTTP \"Accept\" request header to indicate what type of content can be understood in response. Irrespective of the value of the Accept request header, the client MUST be prepared to process \"message/dns\" (as described in dnswire) responses but MAY also process any other type it receives. In order to maximize cache friendliness, DNS API clients using media formats that include DNS ID, such as message/dns, SHOULD use a DNS ID of 0 in every DNS request. HTTP correlates the request and response, thus eliminating the need for the ID in a media type such as message/ dns. The use of a varying DNS ID can cause semantically equivalent DNS queries to be cached separately. DNS API clients can use HTTP/2 padding and compression in the same way that other HTTP/2 clients use (or don't use) them.", "comments": "Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. Maybe use application/dns-message instead.", "new_text": "The DNS API client SHOULD include an HTTP \"Accept\" request header to indicate what type of content can be understood in response. Irrespective of the value of the Accept request header, the client MUST be prepared to process \"application/dns-message\" (as described in dnswire) responses but MAY also process any other type it receives. In order to maximize cache friendliness, DNS API clients using media formats that include DNS ID, such as application/dns-message, SHOULD use a DNS ID of 0 in every DNS request. HTTP correlates the request and response, thus eliminating the need for the ID in a media type such as application/dns-message. The use of a varying DNS ID can cause semantically equivalent DNS queries to be cached separately. 
DNS API clients can use HTTP/2 padding and compression in the same way that other HTTP/2 clients use (or don't use) them."} {"id": "q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e", "old_text": "\"https://dnsserver.example.net/dns-query{?dns}\" to resolve IN A records. The requests are represented as message/dns typed bodies. The first example request uses GET to request www.example.com", "comments": "Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. Maybe use application/dns-message instead.", "new_text": "\"https://dnsserver.example.net/dns-query{?dns}\" to resolve IN A records. The requests are represented as application/dns-message typed bodies. The first example request uses GET to request www.example.com"} {"id": "q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e", "old_text": "At the time this is published, the response types are works in progress. The only response type defined in this document is \"message/dns\", but it is possible that other response formats will be defined in the future. The DNS response for \"message/dns\" in dnswire MAY have one or more EDNS options, depending on the extension definition of the extensions given in the DNS request. Each DNS request-response pair is matched to one HTTP exchange. The responses may be processed and transported in any order using HTTP's", "comments": "Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. Maybe use application/dns-message instead.", "new_text": "At the time this is published, the response types are works in progress. The only response type defined in this document is \"application/dns-message\", but it is possible that other response formats will be defined in the future. The DNS response for \"application/dns-message\" in dnswire MAY have one or more EDNS options, depending on the extension definition of the extensions given in the DNS request. Each DNS request-response pair is matched to one HTTP exchange. The responses may be processed and transported in any order using HTTP's"} {"id": "q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e", "old_text": "caching discusses the relationship between DNS and HTTP response caching. A DNS API server MUST be able to process message/dns request messages. A DNS API server SHOULD respond with HTTP status code 415 (Unsupported Media Type) upon receiving a media type it is unable to", "comments": "Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. Maybe use application/dns-message instead.", "new_text": "caching discusses the relationship between DNS and HTTP response caching. A DNS API server MUST be able to process application/dns-message request messages. A DNS API server SHOULD respond with HTTP status code 415 (Unsupported Media Type) upon receiving a media type it is unable to"} {"id": "q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e", "old_text": "5.4. In order to maximize interoperability, DNS API clients and DNS API servers MUST support the \"message/dns\" media type. Other media types MAY be used as defined by HTTP Content Negotiation (RFC7231 Section 3.4). 6.", "comments": "Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. 
Maybe use application/dns-message instead.", "new_text": "5.4. In order to maximize interoperability, DNS API clients and DNS API servers MUST support the \"application/dns-message\" media type. Other media types MAY be used as defined by HTTP Content Negotiation (RFC7231 Section 3.4). 6."} {"id": "q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e", "old_text": "DNS API clients using the DNS wire format MAY have one or more EDNS options RFC6891 in the request. The media type is \"message/dns\". 7.", "comments": "Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. Maybe use application/dns-message instead.", "new_text": "DNS API clients using the DNS wire format MAY have one or more EDNS options RFC6891 in the request. The media type is \"application/dns-message\". 7."} {"id": "q-en-draft-ietf-doh-dns-over-https-2562d316148667b3e0135e65d6d1f8a48573d0c8b5be138e4f215d4482bf5ac4", "old_text": " DNS Queries over HTTPS draft-ietf-doh-dns-over-https Abstract This document describes how to run DNS service over HTTP (DOH) using https:// URIs. 1. This document defines a specific protocol for sending DNS RFC1035 queries and getting DNS responses over HTTP RFC7540 using https:// (and therefore TLS RFC5246 security for integrity and confidentiality). Each DNS query-response pair is mapped into a HTTP exchange.", "comments": "The \"using https:// URIs\" part is just odd. Easier to simply state that this uses HTTPS at this level. Also, if the concept of a DNS service evolves to encompass other types of exchange than simple request-response, then \"run DNS service\" won't be accurate.\nI added \"DOH\" to the title because it was now being removed from the Abstract.", "new_text": " DNS Queries over HTTPS (DOH) draft-ietf-doh-dns-over-https Abstract This document describes how to make DNS queries over HTTPS. 1. This document defines a specific protocol for sending DNS RFC1035 queries and getting DNS responses over HTTP RFC7540 using https URIs (and therefore TLS RFC5246 security for integrity and confidentiality). Each DNS query-response pair is mapped into a HTTP exchange."} {"id": "q-en-draft-ietf-doh-dns-over-https-3ba1e6ed3ed092e6655003d08c7a9f46ea73b5eb188cb9e0466d2e6398262e26", "old_text": "5.3. Before using DOH response data for DNS resolution, the client MUST establish that the HTTP request URI is a trusted service for the DOH query. For HTTP requests initiated by the DNS API client this trust is implicit in the selection of URI. For HTTP server push (RFC7540 Section 8.2) extra care must be taken to ensure that the pushed URI is one that the client would have directed the same query to if the client had initiated the request. This specification does not extend DNS resolution privileges to URIs that are not recognized by the client as trusted DNS API servers. 5.4.", "comments": "This is OK, but I like the changes in better because it brings the \"trust\" discussion further forwards. Preferences?\nI prefer this PR over - changing from trust to authorization is more on point\nI agree authorize is closer to the correct concept here but I'd like to see a definition of it (in a section of it's own early in the document). But - I don't believe there is an existing concept in DNS or DNSSEC that this is re-using? 
Does 'authorize' here mean that: the client will use all response from that server for resolution the client will cache all responses received from that server Also, if the trusted resolver itself isn't doing DNSSEC validation then the client's cache can still be poisoned. The only 100% sure method to stop cache poisoning is for the caching entity to perform DNSSEC validation.", "new_text": "5.3. Before using DOH response data for DNS resolution, the client MUST establish that the HTTP request URI is authorized for the DOH query. For HTTP requests initiated by the DNS API client this authorization is implicit in the selection of URI. For HTTP server push (RFC7540 Section 8.2) extra care must be taken to ensure that the pushed URI is one that the client would have directed the same query to if the client had initiated the request. This specification does not extend DNS resolution privileges to URIs that are not recognized by the client as authorized DNS API servers. 5.4."} {"id": "q-en-draft-ietf-doh-dns-over-https-3ba1e6ed3ed092e6655003d08c7a9f46ea73b5eb188cb9e0466d2e6398262e26", "old_text": "security implications of HTTP caching for other protocols that use HTTP. A server that is acting both as a normal web server and a DNS API server is in a position to choose which DNS names it forces a client to resolve (through its web service) and also be the one to answer those queries (through its DNS API service). An untrusted DNS API server can thus easily cause damage by poisoning a client's cache with names that the DNS API server chooses to poison. A client MUST NOT trust a DNS API server simply because it was discovered, or because the client was told to trust the DNS API server by an untrusted party. Instead, a client MUST only trust DNS API server that is configured as trustworthy. A client can use DNS over HTTPS as one of multiple mechanisms to obtain DNS data. If a client of this protocol encounters an HTTP", "comments": "This is OK, but I like the changes in better because it brings the \"trust\" discussion further forwards. Preferences?\nI prefer this PR over - changing from trust to authorization is more on point\nI agree authorize is closer to the correct concept here but I'd like to see a definition of it (in a section of it's own early in the document). But - I don't believe there is an existing concept in DNS or DNSSEC that this is re-using? Does 'authorize' here mean that: the client will use all response from that server for resolution the client will cache all responses received from that server Also, if the trusted resolver itself isn't doing DNSSEC validation then the client's cache can still be poisoned. The only 100% sure method to stop cache poisoning is for the caching entity to perform DNSSEC validation.", "new_text": "security implications of HTTP caching for other protocols that use HTTP. In the absence of information about the authenticity of responses, such as DNSSEC, a DNS API server can poison a client's cache. A client MUST NOT authorize arbitrary DNS API servers. Instead, a client MUST specifically authorize DNS API servers using mechanisms such as explicit configuration. A client can use DNS over HTTPS as one of multiple mechanisms to obtain DNS data. If a client of this protocol encounters an HTTP"} {"id": "q-en-draft-ietf-doh-dns-over-https-95f7e3440953b4d2cc40199225483ea28a72c37b01c5b52fc58a38d9c50ec164", "old_text": "partially because they did not follow HTTP best practices. 
This document defines a specific protocol for sending DNS RFC1035 queries and getting DNS responses over modern versions of HTTP RFC7540 using https:// (and therefore TLS RFC5246 security for integrity and confidentiality). Each DNS query-response pair is mapped into a HTTP request-response pair. The described approach is more than a tunnel over HTTP. It establishes default media formatting types for requests and responses", "comments": "NAME I'm ok with you merging this and closing 11 if you're ok with it.\nI think that it is fine - or more precisely, invaluable - to explain what the semantics of server push are. But you don't actually require HTTP/2 for this protocol to work. The only thing that requiring HTTP/2 gets you is tighter constraints on TLS usage. You can get those same constraints with some deployment recommendations. I would support strong recommendations that said HTTP/2 is useful for its multiplexing as well as similarly strong advice about following the advice in RFC 7525. Those are both good practice.\nfairly clear feedback from singapore to change this to an endorsement (SHOULD) with better explanatory text focusing on performance and scale.\nWhat are the drawbacks of HTTP/2? Less readily available libraries? more complex protocol? Potential difficulties in integrating with current fronting stacks that may not be HTTP/2 ready? On one hand, I feel like it may be easier to just enforce 1 stack for now rather than having to support multiple flavor of client/server protocols. A suboptimal HTTP/2 implementation may just be similar to HTTP/1 but at least we can try to move forward toward multiplexing. Making HTTP/2 a should may just leave us in limbo in HTTP/1 land.\nNAME the singapore meeting is available here URL .. goto the 12 minute mark to review the discussion (wrt your question about drawbacks).\nText is in URL\nLast PR accepted. I suspect there will be a bit more wordsmithing, but this should cover the main issue well.\nthanks, I was in the room, but my notes were sparse and some of the cons I remembered of were more inconvenience to me than blockers. What I am not getting from the current spec, and which I believe NAME touched on in URL, is that it is unclear whether or not a minimum HTTP version must be supported. Based on URL , do we consider that by supporting HTTP/1.1 we get HTTP/1.0 for free and consider 0.9 not an option? Wouldn't not mandating a required version prevent client/server from having a known version to use? Can we make this clearer maybe?\nThe decision was to rely only on HTTP semantics and not versions. So any HTTP implementation that is acceptable to its peer can be used.", "new_text": "partially because they did not follow HTTP best practices. This document defines a specific protocol for sending DNS RFC1035 queries and getting DNS responses over HTTP RFC7540 using https:// (and therefore TLS RFC5246 security for integrity and confidentiality). Each DNS query-response pair is mapped into a HTTP request-response pair. The described approach is more than a tunnel over HTTP. It establishes default media formatting types for requests and responses"} {"id": "q-en-draft-ietf-doh-dns-over-https-95f7e3440953b4d2cc40199225483ea28a72c37b01c5b52fc58a38d9c50ec164", "old_text": "This protocol MUST be used with https scheme URI RFC7230. This protocol MUST use HTTP/2 RFC7540 or its successors in order to satisfy the security requirements of DNS over HTTPS. Further, the messages in classic UDP based DNS RFC1035 are inherently unordered and have low overhead. 
A competitive HTTP transport needs to support reordering, priority, parallelism, and header compression, all of which are supported by HTTP/2 RFC7540 or its successors. 8.", "comments": "NAME I'm ok with you merging this and closing 11 if you're ok with it.\nI think that it is fine - or more precisely, invaluable - to explain what the semantics of server push are. But you don't actually require HTTP/2 for this protocol to work. The only thing that requiring HTTP/2 gets you is tighter constraints on TLS usage. You can get those same constraints with some deployment recommendations. I would support strong recommendations that said HTTP/2 is useful for its multiplexing as well as similarly strong advice about following the advice in RFC 7525. Those are both good practice.\nfairly clear feedback from singapore to change this to an endorsement (SHOULD) with better explanatory text focusing on performance and scale.\nWhat are the drawbacks of HTTP/2? Less readily available libraries? more complex protocol? Potential difficulties in integrating with current fronting stacks that may not be HTTP/2 ready? On one hand, I feel like it may be easier to just enforce 1 stack for now rather than having to support multiple flavor of client/server protocols. A suboptimal HTTP/2 implementation may just be similar to HTTP/1 but at least we can try to move forward toward multiplexing. Making HTTP/2 a should may just leave us in limbo in HTTP/1 land.\nNAME the singapore meeting is available here URL .. goto the 12 minute mark to review the discussion (wrt your question about drawbacks).\nText is in URL\nLast PR accepted. I suspect there will be a bit more wordsmithing, but this should cover the main issue well.\nthanks, I was in the room, but my notes were sparse and some of the cons I remembered of were more inconvenience to me than blockers. What I am not getting from the current spec, and which I believe NAME touched on in URL, is that it is unclear whether or not a minimum HTTP version must be supported. Based on URL , do we consider that by supporting HTTP/1.1 we get HTTP/1.0 for free and consider 0.9 not an option? Wouldn't not mandating a required version prevent client/server from having a known version to use? Can we make this clearer maybe?\nThe decision was to rely only on HTTP semantics and not versions. So any HTTP implementation that is acceptable to its peer can be used.", "new_text": "This protocol MUST be used with https scheme URI RFC7230. 7.1. The minimum version of HTTP used by DOH SHOULD be HTTP/2 RFC7540. The messages in classic UDP based DNS RFC1035 are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, parallelism, priority, and header compression to acheive similar performance. Those features were introduced to HTTP in HTTP/2 RFC7540. Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but would result in very poor performance for many uses cases. 8."} {"id": "q-en-draft-ietf-doh-dns-over-https-95f7e3440953b4d2cc40199225483ea28a72c37b01c5b52fc58a38d9c50ec164", "old_text": "9. Running DNS over HTTPS relies on the security of the underlying HTTP connection. By requiring at least RFC7540 levels of support for TLS, this protocol expects to use current best practices for secure transport. 
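To make the transport discussion above concrete, the sketch below sends a DNS query in wire format as an application/dns-message POST body over an HTTP/2 connection. It uses the third-party httpx library only because it exposes an http2 switch; this is an illustration, not the draft's prescribed client. The resolver URL is hypothetical and query_wire is assumed to be an already-serialized DNS message (for example, the output of a builder like the one sketched earlier).

import httpx  # third-party; HTTP/2 support requires the optional h2 dependency

def doh_post(resolver_url: str, query_wire: bytes) -> bytes:
    """POST a wire-format DNS query and return the raw DNS response (sketch)."""
    headers = {
        "Content-Type": "application/dns-message",
        "Accept": "application/dns-message",
    }
    with httpx.Client(http2=True) as client:
        response = client.post(resolver_url, content=query_wire, headers=headers)
        response.raise_for_status()
        return response.content

A client that can only negotiate an earlier HTTP version can still interoperate at the semantic level, which is the point of expressing the HTTP/2 requirement as a SHOULD rather than a MUST.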
Session level encryption has well known weaknesses with respect to traffic analysis which might be particularly acute when dealing with", "comments": "NAME I'm ok with you merging this and closing 11 if you're ok with it.\nI think that it is fine - or more precisely, invaluable - to explain what the semantics of server push are. But you don't actually require HTTP/2 for this protocol to work. The only thing that requiring HTTP/2 gets you is tighter constraints on TLS usage. You can get those same constraints with some deployment recommendations. I would support strong recommendations that said HTTP/2 is useful for its multiplexing as well as similarly strong advice about following the advice in RFC 7525. Those are both good practice.\nfairly clear feedback from singapore to change this to an endorsement (SHOULD) with better explanatory text focusing on performance and scale.\nWhat are the drawbacks of HTTP/2? Less readily available libraries? more complex protocol? Potential difficulties in integrating with current fronting stacks that may not be HTTP/2 ready? On one hand, I feel like it may be easier to just enforce 1 stack for now rather than having to support multiple flavor of client/server protocols. A suboptimal HTTP/2 implementation may just be similar to HTTP/1 but at least we can try to move forward toward multiplexing. Making HTTP/2 a should may just leave us in limbo in HTTP/1 land.\nNAME the singapore meeting is available here URL .. goto the 12 minute mark to review the discussion (wrt your question about drawbacks).\nText is in URL\nLast PR accepted. I suspect there will be a bit more wordsmithing, but this should cover the main issue well.\nthanks, I was in the room, but my notes were sparse and some of the cons I remembered of were more inconvenience to me than blockers. What I am not getting from the current spec, and which I believe NAME touched on in URL, is that it is unclear whether or not a minimum HTTP version must be supported. Based on URL , do we consider that by supporting HTTP/1.1 we get HTTP/1.0 for free and consider 0.9 not an option? Wouldn't not mandating a required version prevent client/server from having a known version to use? Can we make this clearer maybe?\nThe decision was to rely only on HTTP semantics and not versions. So any HTTP implementation that is acceptable to its peer can be used.", "new_text": "9. Running DNS over HTTPS relies on the security of the underlying HTTP transport. Implementations utilizing HTTP/2 benefit from the TLS profile defined in RFC7540 Section 9.2. Session level encryption has well known weaknesses with respect to traffic analysis which might be particularly acute when dealing with"} {"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "but is not limited to, spoofing DNS responses, blocking DNS requests, and tracking. HTTP authentication and proxy friendliness are expected to make this protocol function in some environments where DNS directly on TLS (RFC7858) would not. A secondary use case is web applications that want to access DNS information. Standardizing an HTTPS mechanism allows this to be done", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. 
There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "but is not limited to, spoofing DNS responses, blocking DNS requests, and tracking. HTTP authentication and proxy friendliness are expected to make this protocol function in some environments where unsecured DNS (DNS) or DNS directly on TLS (RFC7858) would not. A secondary use case is web applications that want to access DNS information. Standardizing an HTTPS mechanism allows this to be done"} {"id": "q-en-draft-ietf-doh-dns-over-https-a1dd61316298ea88fb6622f6e9b9d57ce6107fa3a465616b05516d1642b87af1", "old_text": "conveying the semantic requirements of DOH but may result in very poor performance for many uses cases. 8. 8.1.", "comments": "Describe how GET (not POST) is used with server push. This will ensure that you answer the bigger question: what do you do with a server push from an arbitrary server? I assume that this won't be privileged in any way, and only the configured DNS API server will have its answers routed to the DNS stack, but that needs to be explicit.\nServer push in a request-response protocol like DNS would take a lot of designing. It isn't really needed in DNS because the server can always slap in additional answers in the Additional section of a request. I propose we close this by saying \"HTTP server push is not defined for this protocol, but might be defined in a future version\". If someone wants to define an EDNS0 or session-signal extension for it, that should be just fine.\nHTTP already has server push, so I would prefer something closer to \"this protocol does not alter the semantics of HTTP server push\".\nI think NAME is closer to what I'm thinking.. (though I don't really think that needs to be stated beyond other clarifications that this document just uses http in a particular manner, it does not define it.) additionally RR's don't quite address the same use case by the way as push can operate in more of a pub-sub fashion if you like where an additional RR is definitely a poll model.\nOf course this protocol does not alter the semantics of server push, but what does it mean for a DNS client? In the DNS protocol, the only thing even vaguely like server-initiated information is stuff in the Additional section of an answer, and even then, there are pretty strict rules about ignoring it if it is at all smelly.\nan often times misunderstood part of push is that a server pushes both a response and the request that it is responding to. (like dns, http is always a request/response pair, so if the client doesn't provide the request the server needs to). 
So that's what it means - its an answer to a particular question and you can cache it or not depending on whether that's a request you would have asked/trusted the peer to handle. That's pretty much the same rule for dns and traditional http clients. A client who would rather not have things pushed at them has existing protocol mechanisms to disable that (SETTINGS) which doh doesn't need to specify. I think all of this gets at what NAME was originally commenting on - you need to make a decision on whether or not you trust the server for the query/response being pushed (i.e is this your dns api server otherwise don't accept) before processing it.. this is already true for anything pushed via http but a sentence making it explicit is fine.", "new_text": "conveying the semantic requirements of DOH but may result in very poor performance for many uses cases. 7.3. Before using DOH response data for DNS resolution, the client MUST establish that the HTTP request URI is a trusted service for the DOH query. For HTTP requests initiated by the DNS API client this trust is implicit in the selection of URI. For HTTP server push (RFC7540 Section 8.2) extra care must be taken to ensure that the pushed URI is one that the client would have directed the same query to if the client had initiated the request. This specification does not extend DNS resolution privileges to URIs that are not recognized by the client as trusted DNS API servers. 8. 8.1."} {"id": "q-en-draft-ietf-emu-eap-tls13-8a00221ffe39e503ac5f718a9edca091c96e5f8c3b630ee51de096e8dd129e10", "old_text": "using the TLS exporter interface RFC5705 (for TLS 1.3 this is defined in Section 7.5 of RFC8446). All other parameters such as MSK and EMSK are derived in the same manner as with EAP-TLS RFC5216, Section 2.3. The definitions are repeated below for simplicity: The use of these keys is specific to the lower layer, as described RFC5247.", "comments": "Proposal to change the derivation of the EMSK and MSK individually and directly from TLS exporter. Defines new tags for this purpose.\nThanks!", "new_text": "using the TLS exporter interface RFC5705 (for TLS 1.3 this is defined in Section 7.5 of RFC8446). All other parameters are derived in the same manner as with EAP-TLS RFC5216, Section 2.3. The definitions are repeated below for simplicity: The use of these keys is specific to the lower layer, as described RFC5247."} {"id": "q-en-draft-ietf-emu-eap-tls13-b278c25d295a4bf56658e20d0f2b0dc56ca460d084fdbfd403d05722395d490a", "old_text": "The PSK associated with the ticket depends on the client Finished and cannot be pre-computed in handshakes with client authentication. The NewSessionTicket message MUST NOT include an \"early_data\" extension. A mechanism by which clients can specify the desired number of tickets needed for future connections is defined in I-D.ietf-tls- ticketrequests. figbase2 shows an example message flow for a succesfull EAP-TLS full handshake with mutual authentication and ticket establishment of a", "comments": "According to suggeestion in issue\nAlan has suggested that more Profiling / guidance regarding NewSessionTicket would be good. TLS 1.3 allows any number of newsessionticket to be send in any of the server flights. All ways are secure. TLS 1.3 leaves how and when to the TLS implementation.\nThis is a new request for technical change not related to the IESG DISUCUSS. Based on earlier discussions changes to the TLS layer are likely a bit controversial. John: I don't see what is special with EAP-TLS here. 
I also don't know what a good recommendation for the TLS layer would be or what can be enforced in TLS implementations. This would likely need interaction with TLS people,\nNote also that draft-ietf-tls-ticketrequests is in the RFC Editor's queue and goes into a bit more detail about when different numbers of tickets might be desired (as well as a mechanism to indicate the client's desires, though it's not entirely clear to me whether that will be very helpful for the EAP-TLS usage).\nBased on your earlier comment on draft-ietf-tls-ticketrequests, I already added a reference to ietf-tls-ticketrequests in version -14 \"A mechanism by which clients can specify the desired number of tickets needed for future connections is defined in [I-D.ietf-tls-ticketrequests].\"\nOops, shame on me for not checking. (Thanks for adding the reference the last time I mentioned it!)\nI think it would be fine to include a SHOULD send only one NewSessionTicket since EAP clients will typically only need one at a time.\nI am not comfortable adding such guidance without a thorough survey of libraries. As draft-ietf-tls-ticketrequests says: \"servers vend some (often hard-coded) number of tickets per connection. Some server implementations return a different default number of tickets for session resumption than for the initial connection that created the session. No static choice, whether fixed, or resumption-dependent is ideal for all situations.\" I know that openssl allows to control number of tickets with URL but I don't we think we can be sure about behavior of all libraries out there. Also some noteworthy text from draft-ietf-tls-ticketrequests: \"Especially when the client was initially authenticated with a client certificate, that session may need to be refreshed from time to time. Consequently, a server may periodically force a new connection even when the client presents a valid ticket. When that happens, it is possible that any other tickets derived from the same original session are equally invalid. A client avoids a full handshake on subsequent connections if it replaces all stored tickets with new ones obtained from the just performed full handshake. The number of tickets the server should vend for a new connection may therefore need to be larger than the number for routine resumption.\"\nHow about removing the normative SHOULD, something like: desired number of tickets needed for future connections is defined in [I-D.ietf-tls-ticketrequests]. >\nThis non-normative suggestion sounds good to me.\nJohn's suggested text looks good. By the way, I checked boringssl: URL It seems to issue 2 tickets by default (static const int kNumTickets = 2). As far as I can tell, the only way to limit to 1 ticket is by changing the code manually and re-compiling the library.\nThis is resolved in PR\nMerged PR", "new_text": "The PSK associated with the ticket depends on the client Finished and cannot be pre-computed in handshakes with client authentication. The NewSessionTicket message MUST NOT include an \"early_data\" extension. Servers should take into account that fewer newSessionTickets will likely be needed in EAP-TLS than in the usual HTTPS connection scenario. In most cases a single newSessionTicket will be sufficient. A mechanism by which clients can specify the desired number of tickets needed for future connections is defined in I- D.ietf-tls-ticketrequests. 
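As a concrete illustration of limiting NewSessionTicket messages as discussed above, the snippet below configures a server-side TLS 1.3 context to send a single ticket per handshake. It uses Python's standard ssl module, whose num_tickets attribute wraps OpenSSL's SSL_CTX_set_num_tickets (mentioned earlier in this thread); the certificate and key file names are placeholders, and this is a sketch rather than guidance taken from the draft.

import ssl

# Server-side context; with CPython 3.8+ on OpenSSL, num_tickets controls how many
# TLS 1.3 NewSessionTicket messages are sent after each handshake.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_3
context.load_cert_chain("server.crt", "server.key")  # placeholder file names
context.num_tickets = 1  # a single ticket is usually enough for an EAP-TLS peer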
figbase2 shows an example message flow for a succesfull EAP-TLS full handshake with mutual authentication and ticket establishment of a"} {"id": "q-en-draft-ietf-emu-eap-tls13-e70c384dea70a96bbf96d25517486c46ac9762eecb7c2f7f6c92c84896bcdbe9", "old_text": "order things are sent and the application data MAY therefore be sent before a NewSessionTicket. TLS application data 0x00 is therefore to be interpreted as success after the EAP-Request that contains TLS application data 0x00. After the EAP-TLS server has received an empty EAP-Response to the EAP-Request containing the TLS application data 0x00, the EAP-TLS server sends EAP-Success. figbase1 shows an example message flow for a successful EAP-TLS full handshake with mutual authentication (and neither HelloRetryRequest", "comments": "URL\nI added \"604800 seconds\" as suggested by Oleg. Joe suggested \"Perhaps both of these should be left to the TLS spec.\" I would agree with this.... Thanks Joe, one comment regarding item is inline below. Joseph Salowey wrote: [Oleg] Got it. Then maybe it is worth specifying 604800 seconds in the section \"5.7. Resumption\" also so it will be clear that the requirement in this section is the same strict as in \"2.1.2. Ticket Establishment\".", "new_text": "order things are sent and the application data MAY therefore be sent before a NewSessionTicket. TLS application data 0x00 is therefore to be interpreted as success after the EAP-Request that contains TLS application data 0x00. After the EAP-TLS server has sent an EAP- Request containing the TLS application data 0x00 and received an EAP- Response packet of EAP-Type=EAP-TLS and no data, the EAP-TLS server sends EAP-Success. figbase1 shows an example message flow for a successful EAP-TLS full handshake with mutual authentication (and neither HelloRetryRequest"} {"id": "q-en-draft-ietf-emu-eap-tls13-e70c384dea70a96bbf96d25517486c46ac9762eecb7c2f7f6c92c84896bcdbe9", "old_text": "\"key_share\" and using the \"psk_dhe_ke\" pre-shared key exchange mode is also important in order to limit the impact of a key compromise. When using \"psk_dhe_ke\", TLS 1.3 provides forward secrecy meaning that key leakage does not compromise any earlier connections. It is RECOMMMENDED to use \"psk_dhe_ke\" for resumption. 2.1.4.", "comments": "URL\nI added \"604800 seconds\" as suggested by Oleg. Joe suggested \"Perhaps both of these should be left to the TLS spec.\" I would agree with this.... Thanks Joe, one comment regarding item is inline below. Joseph Salowey wrote: [Oleg] Got it. Then maybe it is worth specifying 604800 seconds in the section \"5.7. Resumption\" also so it will be clear that the requirement in this section is the same strict as in \"2.1.2. Ticket Establishment\".", "new_text": "\"key_share\" and using the \"psk_dhe_ke\" pre-shared key exchange mode is also important in order to limit the impact of a key compromise. When using \"psk_dhe_ke\", TLS 1.3 provides forward secrecy meaning that key leakage does not compromise any earlier connections. The \"psk_dh_ke\" mechanism MUST be used for resumption unless the deployment has a local requirement to allow configuration of other mechanisms. 2.1.4."} {"id": "q-en-draft-ietf-emu-eap-tls13-e70c384dea70a96bbf96d25517486c46ac9762eecb7c2f7f6c92c84896bcdbe9", "old_text": "figterm3 shows an example message flow where the EAP-TLS server authenticates to the EAP-TLS peer successfully, but the EAP-TLS peer fails to authenticate to the EAP-TLS server and sends a TLS Error alert. 
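The success-indication rule above can be read as a small piece of server-side state handling. The sketch below is only an illustration (the class, method names, and return strings are invented for this example and do not come from the draft): once the server has sent the protected success indication, a single 0x00 byte of TLS application data, an EAP-Response of type EAP-TLS that carries no data triggers EAP-Success.

EAP_TYPE_TLS = 13  # EAP-TLS method type

class EapTlsServerState:
    """Tiny illustration of the success-indication handling described above."""

    def __init__(self) -> None:
        self.sent_success_indication = False  # set after sending TLS app data 0x00

    def on_success_indication_sent(self) -> None:
        # Called once the EAP-Request carrying the 0x00 application data has gone out.
        self.sent_success_indication = True

    def on_response(self, eap_type: int, data: bytes) -> str:
        if self.sent_success_indication and eap_type == EAP_TYPE_TLS and not data:
            return "EAP-Success"
        return "continue"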
2.1.5.", "comments": "URL\nI added \"604800 seconds\" as suggested by Oleg. Joe suggested \"Perhaps both of these should be left to the TLS spec.\" I would agree with this.... Thanks Joe, one comment regarding item is inline below. Joseph Salowey wrote: [Oleg] Got it. Then maybe it is worth specifying 604800 seconds in the section \"5.7. Resumption\" also so it will be clear that the requirement in this section is the same strict as in \"2.1.2. Ticket Establishment\".", "new_text": "figterm3 shows an example message flow where the EAP-TLS server authenticates to the EAP-TLS peer successfully, but the EAP-TLS peer fails to authenticate to the EAP-TLS server and the server sends a TLS Error alert. 2.1.5."} {"id": "q-en-draft-ietf-emu-eap-tls13-e70c384dea70a96bbf96d25517486c46ac9762eecb7c2f7f6c92c84896bcdbe9", "old_text": "accounting. As suggested in RFC8446, EAP-TLS peers MUST NOT store resumption PSKs or tickets (and associated cached data) for longer than 7 days, regardless of the PSK or ticket lifetime. The EAP-TLS peer MAY delete them earlier based on local policy. The cached data MAY also be removed on the EAP-TLS server or EAP-TLS peer if any certificate in the certificate chain has been revoked or has expired. In all such cases, an attempt at resumption results in a full TLS handshake instead. Information from the EAP-TLS exchange (e.g., the identity provided in EAP-Response/Identity) as well as non-EAP information (e.g., IP", "comments": "URL\nI added \"604800 seconds\" as suggested by Oleg. Joe suggested \"Perhaps both of these should be left to the TLS spec.\" I would agree with this.... Thanks Joe, one comment regarding item is inline below. Joseph Salowey wrote: [Oleg] Got it. Then maybe it is worth specifying 604800 seconds in the section \"5.7. Resumption\" also so it will be clear that the requirement in this section is the same strict as in \"2.1.2. Ticket Establishment\".", "new_text": "accounting. As suggested in RFC8446, EAP-TLS peers MUST NOT store resumption PSKs or tickets (and associated cached data) for longer than 604800 seconds (7 days), regardless of the PSK or ticket lifetime. The EAP- TLS peer MAY delete them earlier based on local policy. The cached data MAY also be removed on the EAP-TLS server or EAP-TLS peer if any certificate in the certificate chain has been revoked or has expired. In all such cases, an attempt at resumption results in a full TLS handshake instead. Information from the EAP-TLS exchange (e.g., the identity provided in EAP-Response/Identity) as well as non-EAP information (e.g., IP"} {"id": "q-en-draft-ietf-jsonpath-base-42cd38e83865dc3b5f9c3fc353fa782067308c615c4057e1557c6a29ec444ff4", "old_text": "3.6. A Normalized Path is a JSONPath with restricted syntax that identifies a node by providing a query that results in exactly that node. For example, the JSONPath expression \"$.book[?(@.price<10)]\"", "comments": "Fixes URL\nFrom it was stated that the current ABNF doesn't require parentheses around the expression in a filter expression. The spec thus contains examples without them. Previous discussions had occurred around making them optional, but no decision came out of them that I can recall. The closest documented conversation I could find was by NAME and the following comments which indicate that the parentheses are currently expected. To be clear, we should currently require , and disallow . 
The original blog post consistently uses them and never shows a filter expression selector without them.\nI don't have a strong opinion on this simplification, but one data point: The simplified grammar has been in the document since we added ABNF grammar, i.e., in draft-ietf-jsonpath-base-01 (2021-07-08). The discussion was started in , 2021-04-30. For me, the current ABNF is the status quo.. If we need to have a discussion here, let's have it, but I'm not interested in the \"we haven't discussed this enough\" angle -- this has been out in the document for 9 months. So if there are any technical arguments for requiring the parentheses, please add them here.\nI am strongly voting for the simplified, backwards compatible notation with optional parentheses.\nThe PR that put in the ABNF (which has never changed in substance with regard to requiring parentheses): URL There is even a comment from you (Greg) on that ABNF snippet in that PR conversation, so you apparently reviewed it. (Of course, you might have missed the simplification and there may be technical arguments against it -- let's hear them.)\nYes, looking back, I see point 4.1 in raised the topic, and the fact that it wasn't discussed at all (even though the original post said it should be discussed) could have been interpreted as acceptance and agreement. Also, that PR you reference is massive. It's easy to miss in the multitude of changes contained therein. I suspect this was something that was snuck in rather than agreed upon. Agreement requires discussion, and I see none. Personally, when I see , the looks as though it's part of the expression. Having the parentheses, , better isolates the expression visually. This is not a problem of parsing, but one of the human element.\nFor the record, I was originally uncomfortable with the parentheses being optional, but I remember this being discussed at a meeting quite a while ago and I was persuaded at that time that the current approach is reasonable. Perhaps that discussion wasn't highlighted in the minutes. I think we should stay as we are.\nNAME I know you already commented to the contrary, but your original post has this to say: I think this is pretty indicative that parentheses should be included as part of the required syntax. Taking this stance also aligns with our preference to accommodate existing implementations. While we say that this is a backward compatible change, it may not be. For instance, my parser won't recognize a filter selector without the parentheses. I expect many other existing implementations will be the same.\nAt today's meeting I took an action to write a comparison project test for a filter without parentheses. It turns out there is one already -- URL -- and the consensus among implementations is \"Not supported\".\nHardly indicative. Original Goessner JsonPath used JavaScript for evaluating filter expressions, I imagine that requiring the outer parentheses was just to provide a pragmatic way to delimit the JavaScript part, original Goessner JsonPath was not responsible for parsing that part. That consideration no longer applies when the syntax of filter expressions is defined in the JSONPath specification, mandatory outer parentheses no longer serve any purpose. Be real :-) The spec in its current form is incompatible to a lessor or greater extent with all existing JSONPath implementations.\nI still vote for the lean syntax with optional parentheses in filter expressions. 
Old queries (having parentheses) can be handled by 'old' and 'new' implementations without problems (backward compatibility). We don't break old 'query code'. New queries (no parentheses) can be treated by 'new' implementations only. This is the same behavior compared with regular expressions in filters, not being understood by old implementations. Thanks for Daniel also making that point. Am 26.04.2022 um 19:26 schrieb Daniel Parker:\nMy issue isn't that existing paths can be handled by new implementations (what we're calling \"backward comparability\"), but rather the inverse: new paths won't be handled by existing implementations (\"forward compatibility\"). Users will complain that their paths, which don't have parentheses, don't work. Implementors will respond with, \"You need parentheses,\" and then users will be trained to use parentheses anyway. As it stands, most users are already trained to use parentheses because existing implementations require them.\nRight. For new features we can't expect forward compatibility anyway. The discussion here was whether it is worth to reduce forward compatibility by admitting new syntax for existing, widely implemented features. And I grudgingly have to acknowledge that that is a different question. Actually, implementers will quickly fix that. Between competing implementations, there is a race to accept the largest amount of inputs (within reason, i.e., balancing it with the cost of implementing them). But, yes, query creators that care about widest compatibility will be trained to use the parentheses. (For me, actually, having a feature that only new implementations can understand is a feature. So the expression will break on implementations that haven't been updated. Darwin. And a nice canary.) Yes, which is part of the reason why I still think we would better give up on forward compatibility here in favor of cleaning up the query language. But that is now a matter of taste, and I have to accept that there is a valid argument to the contrary.\nI just don't think it makes sense to go down this road. Users will understand that IETF JsonPath is just another take on the idea of JSONPath, an idea that took major forks from very early on. Any Stack Overflow question on JSONPath quickly establishes that the answers are generally implementation specific.\nI'm going to stop pushing this, but I would like to add this last comment. JSON Schema has had several forward-incompatible changes over the years as well that I've been fine with. But the difference between this and those is that JSON Schema is already published and versioned. It's easy to tell people, \"As of version X, this feature works differently.\" We're not technically versioned as yet because we don't really have a proper publication from which a full implementation can be built (because we're not done). It feels different to make this change for a 1.0. But I'll follow the team's consensus.\nHave you submitted an issue?\nNo. I would add that the fact that they're different doesn't mean that they're wrong, because languages like Python, JavaScript and C use different precedence orderings, particularly Python and JavaScript, see e.g. and , , and . My comment to NAME was only about them being different.\nNAME please do; I wasn't aware they are different. (Thanks for the pointers, very useful! -- the C one is a copy of the XPath one, though, and I can't resolve the Python one.)\nAll right, .\nI'm not sure what the big fuss about the precedences being different is. 
For the operations we have defined, all of these are largely equivalent. Some have variation in that comparisons are higher than equality, but I'm not even sure under what circumstances that'd be useful. Contrivingly, I could say as a shorthand for but that's stretching things, IMO. I can't see that such a thing would be so useful that they'd add it into the language. NAME do you know why that split in precedence is defined?\nNAME My own, personal view, is that it's unrewarding to think about precedences, and uninteresting to spend any time at all on thinking about the implications of trying something different. That's why my personal preference is to blindly follow what somebody else has already done, and not attempt anything new. I'd like to assume that the authors of C and Python have thought about this, I don't want to. So, when I implemented this, I followed C, except where C didn't have , I followed Perl, and ended up with . If like NAME I had a strong preference that the comparison and equality operators belonged at the same level, I would have chosen another language to follow that did that, Python qualifies, and used that language as a model for all operators, except , which Python doesn't have, and gone with Perl again for that. What I wouldn't do is is help myself to Python's comparison and equality operators, but C's logical not operator. As much as possible, I would want one model to base my implementation on. Daniel\nvery helpfull indeed. IIRC ... I took my first proposal of the precedence table from JavaScript (MDN), which should follow C. So I cannot explain the fact now, why has the same precedence as . Maybe we also should follow C ... and also include an associativity column. oh, and insert following Perl. thanks\nWhy is there a difference? I don't see how the precedence chain we've declared is any different than any of these languages. They're semantically identical! groups not comparison/equality (what benefit is there to separating these?) logical and logical or All of the listed languages have this ordering. What is being discussed here?\nThe current spec is silent about how is handled in many situations: What is the result of applying the JSONPath to the JSON document ? What is the result of applying the JSONPath to the JSON document ? The implies a node set with the single value . What is the result of applying the JSONPath to the JSON document ? The implies a node set with the single value . What is the result of applying the JSONPath to the JSON document ? Presumably a node set with the single value . What is the result of applying the JSONPath to the JSON document ? Presumably a node set with the values and . What is the result of applying the JSONPath to the JSON document ? What is the result of applying the JSONPath to the JSON document ? What is the result of applying the JSONPath to the JSON document ? The implies a node set with the single value .\nTo 1.: Similar is URL To 6.: URL To 7.: URL\nI have historically handles as a valid value (contrary to most of .net) separate from \"undefined\" or \"missing.\" As a result, I agree with these points. Specifically for [7], this is an existence test. Since exists as a property in the object, that object would be returned. I believe we decided that existence testing does not consider falsiness.", "new_text": "3.6. Note that JSON \"null\" is treated the same as any other JSON value: it is not taken to mean \"undefined\" or \"missing\". 3.6.1. JSON document: Queries: 3.7. 
A Normalized Path is a JSONPath with restricted syntax that identifies a node by providing a query that results in exactly that node. For example, the JSONPath expression \"$.book[?(@.price<10)]\""} {"id": "q-en-draft-ietf-jsonpath-base-c15d2df8fdccc0f612d49f101367239cfaa6952ce8d89c159853ef5d2b55ae32", "old_text": "only if that object has a member with that name. Nothing is selected from a value that is not a object. Array indexing via \"element-index\" is a way of selecting a particular array element using a zero-based index. For example, selector \"[0]\" selects the first and selector \"[4]\" the fifth element of a sufficiently long array. A negative \"element-index\" counts from the array end. For example, selector \"[-1]\" selects the last and selector \"[-2]\" selects the", "comments": "The index selector (but not the index wildcard selector) selects no elements from non-arrays. See URL for background.\nThis seems implicit since JSON objects cannot be numerically accessed.\nYes, but making it explicit is clearer.\nThis looks okay, and it aligns the text with the previous paragraph.", "new_text": "only if that object has a member with that name. Nothing is selected from a value that is not a object. The \"index-selector\" applied with an \"element-index\" to an array selects an array element using a zero-based index. For example, selector \"[0]\" selects the first and selector \"[4]\" the fifth element of a sufficiently long array. Nothing is selected, and it is not an error, if the index lies outside the range of the array. Nothing is selected from a value that is not an array. A negative \"element-index\" counts from the array end. For example, selector \"[-1]\" selects the last and selector \"[-2]\" selects the"} {"id": "q-en-draft-ietf-jsonpath-base-c15d2df8fdccc0f612d49f101367239cfaa6952ce8d89c159853ef5d2b55ae32", "old_text": "of the selector entries in the list and yields the concatenation of the lists (in the order of the selector entries) of nodes selected by the selector entries. Note that any node selected in more than one of the selector entries is kept as many times in the node list. To be valid, integer values in the \"element-index\" and \"slice-index\" components MUST be in the I-JSON range of exact values, see synsem-", "comments": "The index selector (but not the index wildcard selector) selects no elements from non-arrays. See URL for background.\nThis seems implicit since JSON objects cannot be numerically accessed.\nYes, but making it explicit is clearer.\nThis looks okay, and it aligns the text with the previous paragraph.", "new_text": "of the selector entries in the list and yields the concatenation of the lists (in the order of the selector entries) of nodes selected by the selector entries. Note that any node selected in more than one of the selector entries is kept as many times in the nodelist. To be valid, integer values in the \"element-index\" and \"slice-index\" components MUST be in the I-JSON range of exact values, see synsem-"} {"id": "q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9", "old_text": "strings, with those strings coming from the JSONPath and from member names and string values in the JSON to which it is being applied. Two strings MUST be considered equal if and only if they are identical sequences of Unicode code points. 
In other words, normalization operations MUST NOT be applied to either the string from the JSONPath or from the JSON prior to comparison.", "comments": "And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) \u2014 UTF-8 was careful to preserve sorting order. Gr\u00fc\u00dfe, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values? This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. with mathematic operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. Please see the revised wording in the filter .\nI don't understand numeric only semantic constraint. Is there a reason behind it? JSON has very few value types and lexical ordering of strings is well defined in Unicode. Standards like ISO8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal) to a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? If so such an expression would return which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings is widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. 
Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations, no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.) Implementations in which the expression is a type error either report an error or return an empty result list. I don't think you'd find precedence for these rules in any comparable query language, whether JSONPath, JMESPath, JSONAta, or XPATH 3.1. There is priori experience for defining comparisons for all JSON values (see e.g. JAVA Jayway), just as there is priori experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent. As for for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, than none of the above expressions should return anything. There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. 
There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or . That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter to not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don\u2019t think there is much point in getting an \u201cexception\u201d semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Gr\u00fc\u00dfe, Carsten\nYea, good point. I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e. the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Gr\u00fc\u00dfe, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. 
A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on the suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath at around that time to quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error. So now we are deciding how it evaluates. The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) As you say \"How a developer accomplishes that is up to them.\"\nUmmmm\u2026 I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. 
(And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. (I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. (I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. 
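As a purely illustrative model of the "not a boolean" and non-local failure idea discussed above, the following Python fragment lets a mismatched comparison produce a sentinel that poisons the enclosing boolean expression, so the filter selects nothing for that node. The names NAB, cmp_lt and selects are invented for this sketch and appear nowhere in the draft.

    # A purely illustrative model of the NaB idea: a mismatched comparison
    # yields the sentinel NAB, logical operators propagate it, and a filter
    # whose overall result is NAB does not select the current node.
    NAB = object()

    def is_num(v):
        return isinstance(v, (int, float)) and not isinstance(v, bool)

    def cmp_lt(a, b):
        return a < b if is_num(a) and is_num(b) else NAB

    def logical_or(x, y):
        # NAB trumps both true and false, so this OR cannot short-circuit.
        if x is NAB or y is NAB:
            return NAB
        return x or y

    def selects(result):
        # Only a plain True selects the current node.
        return result is True

    print(selects(logical_or(True, cmp_lt("abc", 10))))   # False: NAB wins
    print(selects(logical_or(True, cmp_lt(5, 10))))       # True

The first print is the short-circuiting cost mentioned above: even a disjunct that is already true cannot rescue the node once NAB appears anywhere in the expression.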
My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. All the options under consideration are logical, otherwise we'd rule them out. Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) 
Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similiar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. 
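The order-of-evaluation question raised above can be made concrete with a small sketch, assuming (hypothetically) that a mismatched comparison poisons the whole filter expression: with left-to-right short-circuiting the disjunction never sees the poisoned comparison, while full evaluation drops the node. All names here are invented for illustration.

    # Illustrative only: POISON stands in for the non-local effect of a
    # mismatched comparison under the "poisons the filter" reading.
    POISON = object()

    def is_num(v):
        return isinstance(v, (int, float)) and not isinstance(v, bool)

    def lt(a, b):
        return a < b if is_num(a) and is_num(b) else POISON

    def or_short_circuit(left, right_thunk):
        # Left-to-right with short-circuiting: the right side is never
        # evaluated when the left side is already true.
        return True if left is True else right_thunk()

    def or_full(left, right_thunk):
        # Full evaluation: a poisoned operand always poisons the result.
        right = right_thunk()
        if left is POISON or right is POISON:
            return POISON
        return left is True or right is True

    doc = {"d": True, "x": "not a number"}
    print(or_short_circuit(doc["d"], lambda: lt(doc["x"], 10)))        # True: node kept
    print(or_full(doc["d"], lambda: lt(doc["x"], 10)) is POISON)       # True: node dropped

The divergence between the two prints is exactly the non-determinism concern: the same query and data give different results depending on which of the three evaluation options an implementation picks.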
Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. (In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions \u2014 not quite as good an example for my argument as Erlang\u2019s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Gr\u00fc\u00dfe, Carsten\nFor the record, unlike Erland, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. both and are false, so and are true, which breaks asymmetry. Our current draft does however preserves the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. 
I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don\u2019t commingle exceptions with erroring out. Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. \u201cNaB\u201d is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent of the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Gr\u00fc\u00dfe, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. Then the spec's statement: implies that , the negation of , is true. Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . 
For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"\u2026using one of the operators , =. Then\u2026 I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. 
True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Gr\u00fc\u00dfe, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? 
The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Gr\u00fc\u00dfe, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: \u2026 which reminds us that this cannot currently be explicitly expressed. Gr\u00fc\u00dfe, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end.", "new_text": "strings, with those strings coming from the JSONPath and from member names and string values in the JSON to which it is being applied. Two strings MUST be considered equal if and only if they are identical sequences of Unicode scalar values. 
In other words, normalization operations MUST NOT be applied to either the string from the JSONPath or from the JSON prior to comparison."} {"id": "q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9", "old_text": "A \"dot-wild-selector\" acts as a wildcard by selecting the nodes of all member values of an object in its input nodelist as well as all element nodes of an array in its input nodelist. Applying the \"dot- wild-selector\" to a primitive JSON value (number, string, or true/false/null) selects no node. 3.4.3.3.", "comments": "And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) \u2014 UTF-8 was careful to preserve sorting order. Gr\u00fc\u00dfe, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values? This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. with mathematic operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. Please see the revised wording in the filter .\nI don't understand numeric only semantic constraint. Is there a reason behind it? JSON has very few value types and lexical ordering of strings is well defined in Unicode. Standards like ISO8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal) to a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? 
If so such an expression would return which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings is widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations, no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.) Implementations in which the expression is a type error either report an error or return an empty result list. I don't think you'd find precedence for these rules in any comparable query language, whether JSONPath, JMESPath, JSONAta, or XPATH 3.1. There is priori experience for defining comparisons for all JSON values (see e.g. JAVA Jayway), just as there is priori experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent. As for for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, than none of the above expressions should return anything. 
There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or . That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter to not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don\u2019t think there is much point in getting an \u201cexception\u201d semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Gr\u00fc\u00dfe, Carsten\nYea, good point. I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e. the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. 
We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Gr\u00fc\u00dfe, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on the suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath at around that time to quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error. So now we are deciding how it evaluates. The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) 
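The three strategies listed above can be contrasted with a short, self-contained sketch applied to a filter in the spirit of a numeric less-than test over mixed-type data. Nothing here exercises a real JSONPath library; the helper names are invented.

    # Illustrative contrast of the three strategies: abort with an error,
    # abort with an empty nodelist, or skip only the affected element.
    data = [{"x": 5}, {"x": "oops"}, {"x": 20}]

    def numeric_lt(v, n):
        if not (isinstance(v, (int, float)) and not isinstance(v, bool)):
            raise TypeError("ordered comparison applied to a non-number")
        return v < n

    def strategy_abort_with_error(items):
        # 1: let the dynamic error escape as a diagnostic.
        return [i for i in items if numeric_lt(i["x"], 10)]

    def strategy_abort_empty(items):
        # 2: abort, but swallow the problem and return an empty nodelist.
        try:
            return [i for i in items if numeric_lt(i["x"], 10)]
        except TypeError:
            return []

    def strategy_skip_element(items):
        # 3: treat only the affected element as unselected and keep going.
        out = []
        for i in items:
            try:
                if numeric_lt(i["x"], 10):
                    out.append(i)
            except TypeError:
                pass
        return out

    print(strategy_abort_empty(data))    # []
    print(strategy_skip_element(data))   # [{'x': 5}]
    # strategy_abort_with_error(data) would raise TypeError on the second item.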
As you say \"How a developer accomplishes that is up to them.\"\nUmmmm\u2026 I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. (And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. (I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. (I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) 
URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. All the options under consideration are logical, otherwise we'd rule them out. 
Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. 
I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similiar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. (In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions \u2014 not quite as good an example for my argument as Erlang\u2019s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Gr\u00fc\u00dfe, Carsten\nFor the record, unlike Erland, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. both and are false, so and are true, which breaks asymmetry. 
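For readers who want to check such laws mechanically, here is an illustrative, non-normative Python snippet assuming the type-restricted reading of "<=" discussed above; it probes totality and reflexivity on a few sample pairs:

```python
# Illustrative check that a type-restricted "<=" is not a total order over
# JSON values: totality requires a <= b or b <= a for every pair.

def le(a, b):
    """'<=' restricted to numbers; any type mismatch yields False."""
    num = lambda v: isinstance(v, (int, float)) and not isinstance(v, bool)
    return a <= b if num(a) and num(b) else False

pairs = [(1, 2), (1, "x"), ("x", "y"), (True, False)]
for a, b in pairs:
    total = le(a, b) or le(b, a)
    reflexive = le(a, a)
    print(f"{a!r},{b!r}: totality={total} reflexivity(a)={reflexive}")
# Only the all-number pair satisfies totality and reflexivity, so once
# non-numbers are involved the relation is at best a partial order.
```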
Our current draft does however preserves the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don\u2019t commingle exceptions with erroring out. Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. \u201cNaB\u201d is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent of the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Gr\u00fc\u00dfe, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. Then the spec's statement: implies that , the negation of , is true. 
Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"\u2026using one of the operators , =. Then\u2026 I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. 
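A brief sketch of the hedging idea, assuming the "type mismatch compares false" semantics; the key name "x" and the sample values are invented, since the concrete example is elided above:

```python
# Intended filter: select objects whose "x" is >= 9 OR equal to "high".
import json

doc = json.loads('[{"x": 12}, {"x": 3}, {"x": "high"}, {"x": "low"}]')

def num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def ge(a, b):        # '>=' yields False on any type mismatch
    return a >= b if num(a) and num(b) else False

selected = [o for o in doc if ge(o["x"], 9) or o["x"] == "high"]
print(selected)      # [{'x': 12}, {'x': 'high'}]
# Because the mismatched ">=" quietly yields False, the "== 'high'" arm can
# still rescue the string-valued objects; a semantics that empties the whole
# nodelist on any mismatched comparison could not be hedged this way.
```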
It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Gr\u00fc\u00dfe, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? 
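For concreteness, here is a rough sketch of the comparison table proposed earlier in this exchange (equality defined on non-structured values of the same type, ordering only on numbers); it is illustrative only, with invented helper names, and not a complete filter evaluator:

```python
# Sketch of the proposed comparison semantics; not normative.

def _scalar(v):
    return v is None or isinstance(v, (bool, int, float, str))

def _same_scalar_type(a, b):
    if isinstance(a, bool) or isinstance(b, bool):
        return isinstance(a, bool) and isinstance(b, bool)
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return True
    return type(a) is type(b)

def _num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def eq(a, b):  return _scalar(a) and _scalar(b) and _same_scalar_type(a, b) and a == b
def ne(a, b):  return not eq(a, b)
def lt(a, b):  return _num(a) and _num(b) and a < b
def gt(a, b):  return _num(a) and _num(b) and a > b
def le(a, b):  return _num(a) and _num(b) and a <= b
def ge(a, b):  return _num(a) and _num(b) and a >= b

print(eq(1, 1.0), eq([1], [1]), ne(1, "1"))    # True False True
print(lt(1, 2), lt("a", "b"), ge(True, True))  # True False False
```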
Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Gr\u00fc\u00dfe, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: \u2026 which reminds us that this cannot currently be explicitly expressed. 
Gr\u00fc\u00dfe, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end.", "new_text": "A \"dot-wild-selector\" acts as a wildcard by selecting the nodes of all member values of an object in its input nodelist as well as all element nodes of an array in its input nodelist. Applying the \"dot- wild-selector\" to a primitive JSON value (a number, a string, \"true\", \"false\", or \"null\") selects no node. 3.4.3.3."} {"id": "q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9", "old_text": "An \"index-wild-selector\" selects the nodes of all member values of an object as well as of all elements of an array. Applying the \"index- wild-selector\" to a primitive JSON value (such as a number, string, or true/false/null) selects no node. The \"index-wild-selector\" behaves identically to the \"dot-wild- selector\".", "comments": "And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) \u2014 UTF-8 was careful to preserve sorting order. Gr\u00fc\u00dfe, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values? This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. with mathematic operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. Please see the revised wording in the filter .\nI don't understand numeric only semantic constraint. Is there a reason behind it? JSON has very few value types and lexical ordering of strings is well defined in Unicode. 
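The sorting claim above is easy to check; here is a small Python demonstration (sample strings invented) that ordering by Unicode scalar values and ordering by UTF-8 bytes agree:

```python
# Python string comparison is by code point; UTF-8 byte order preserves it.
words = ["zebra", "éclair", "Ω", "abc", "日本", "\U0001F600"]  # includes non-BMP

by_code_points = sorted(words)
by_utf8_bytes = sorted(words, key=lambda s: s.encode("utf-8"))

print(by_code_points == by_utf8_bytes)   # True
# Sorting by UTF-16 code units, by contrast, can disagree once characters
# outside the Basic Multilingual Plane are involved.
```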
Standards like ISO8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal) to a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? If so, such an expression would return which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings are widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations: no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.) Implementations in which the expression is a type error either report an error or return an empty result list. I don't think you'd find precedent for these rules in any comparable query language, whether JSONPath, JMESPath, JSONata, or XPATH 3.1. There is prior experience for defining comparisons for all JSON values (see e.g. JAVA Jayway), just as there is prior experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent.
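Returning to the ISO 8601 use case raised at the start of this exchange, here is a minimal sketch (field names and dates invented) showing how plain lexicographic string comparison would be enough for date filtering, assuming the spec allowed string ordering:

```python
# "YYYY-MM-DD" timestamps sort lexicographically in chronological order,
# so a plain string comparison suffices for the filter described above.
import json

doc = json.loads("""
[{"id": 1, "updated": "2021-03-15"},
 {"id": 2, "updated": "2022-07-01"},
 {"id": 3, "updated": "2020-12-31"}]
""")

cutoff = "2021-01-01"
recent = [o for o in doc if o["updated"] >= cutoff]   # plain string comparison
print([o["id"] for o in recent])                      # [1, 2]
```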
As for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, then none of the above expressions should return anything. There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or . That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don\u2019t think there is much point in getting an \u201cexception\u201d semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Gr\u00fc\u00dfe, Carsten\nYea, good point. I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e.
the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Gr\u00fc\u00dfe, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on the suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath at around that time to quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error. So now we are deciding how it evaluates. The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. 
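One way a developer might realise the internal-placeholder idea described above, sketched here in Python with invented helper names; this is illustrative only, not a prescribed implementation:

```python
# A third "invalid" value that absorbs the logical operators and is finally
# coerced to "do not select"; illustrative sketch only.

class Invalid:
    """Result of an unsupported comparison; poisons the whole expression."""

INVALID = Invalid()

def logical_not(v):
    return INVALID if isinstance(v, Invalid) else (not v)

def logical_or(a, b):
    return INVALID if isinstance(a, Invalid) or isinstance(b, Invalid) else (a or b)

def gt(a, b):
    num = lambda v: isinstance(v, (int, float)) and not isinstance(v, bool)
    return a > b if num(a) and num(b) else INVALID

def selects(result):
    """Final coercion: anything other than True means 'do not select'."""
    return result is True

print(selects(logical_not(gt(5, True))))        # False: the negated form too
print(selects(logical_or(True, gt(5, True))))   # False: even a true arm is poisoned
```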
Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) As you say \"How a developer accomplishes that is up to them.\"\nUmmmm\u2026 I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. (And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. (I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. 
(I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. 
All the options under consideration are logical, otherwise we'd rule them out. Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. 
I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similiar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. (In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions \u2014 not quite as good an example for my argument as Erlang\u2019s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Gr\u00fc\u00dfe, Carsten\nFor the record, unlike Erland, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. both and are false, so and are true, which breaks asymmetry. 
Our current draft does however preserves the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don\u2019t commingle exceptions with erroring out. Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. \u201cNaB\u201d is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent of the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Gr\u00fc\u00dfe, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. Then the spec's statement: implies that , the negation of , is true. 
Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"\u2026using one of the operators , =. Then\u2026 I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. 
It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Gr\u00fc\u00dfe, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? 
Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Gr\u00fc\u00dfe, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: \u2026 which reminds us that this cannot currently be explicitly expressed. Gr\u00fc\u00dfe, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end.", "new_text": "An \"index-wild-selector\" selects the nodes of all member values of an object as well as of all elements of an array. 
Applying the \"index- wild-selector\" to a primitive JSON value (that is, a number, a string, \"true\", \"false\", or \"null\") selects no node. The \"index-wild-selector\" behaves identically to the \"dot-wild- selector\"."} {"id": "q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9", "old_text": "contained in the current lean syntax \"[?]\" as a special case. Comparisons are restricted to Singular Path values and primitive values (such as number, string, \"true\", \"false\", \"null\"). Alphabetic characters in ABNF are case-insensitive, so \"e\" can be either \"e\" or \"E\".", "comments": "And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) \u2014 UTF-8 was careful to preserve sorting order. Gr\u00fc\u00dfe, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values? This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. with mathematic operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. Please see the revised wording in the filter .\nI don't understand numeric only semantic constraint. Is there a reason behind it? JSON has very few value types and lexical ordering of strings is well defined in Unicode. Standards like ISO8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal) to a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? 
If so such an expression would return which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings is widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations, no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.) Implementations in which the expression is a type error either report an error or return an empty result list. I don't think you'd find precedence for these rules in any comparable query language, whether JSONPath, JMESPath, JSONAta, or XPATH 3.1. There is priori experience for defining comparisons for all JSON values (see e.g. JAVA Jayway), just as there is priori experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent. As for for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, than none of the above expressions should return anything. 
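As a concrete rendering of the structured-value rule quoted above (a non-normative sketch; the names draft_eq and draft_ne are invented for illustration), the draft's behaviour for structured operands can be written as:

```python
def is_structured(v):
    return isinstance(v, (list, dict))

def draft_eq(a, b):
    # Per the quoted draft text: any "==" comparison involving a structured
    # value is false, even when the two values are deeply equal.
    if is_structured(a) or is_structured(b):
        return False
    return a == b

def draft_ne(a, b):
    # "!=" is defined as the negation of "==".
    return not draft_eq(a, b)

assert draft_eq([1, 2], [1, 2]) is False      # equal arrays still compare unequal
assert draft_ne([1, 2], [1, 2]) is True
assert draft_eq({"a": 1}, {"a": 1}) is False
```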
There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or . That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter to not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don\u2019t think there is much point in getting an \u201cexception\u201d semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Gr\u00fc\u00dfe, Carsten\nYea, good point. I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e. the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. 
We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Gr\u00fc\u00dfe, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on the suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath at around that time to quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error. So now we are deciding how it evaluates. The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) 
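To make the internal-placeholder idea walked through a couple of comments back concrete, here is a minimal Python sketch. It illustrates that proposal, not the current draft; the INVALID sentinel and all function names are invented, and the logical operators here evaluate both operands rather than short-circuiting.

```python
INVALID = object()   # stands in for "this comparison was not meaningful"

def compare_lt(a, b):
    ok = (isinstance(a, (int, float)) and not isinstance(a, bool)
          and isinstance(b, (int, float)) and not isinstance(b, bool))
    return a < b if ok else INVALID

def logical_not(x):
    return INVALID if x is INVALID else (not x)

def logical_and(x, y):
    return INVALID if INVALID in (x, y) else (x and y)

def logical_or(x, y):
    return INVALID if INVALID in (x, y) else (x or y)

def filter_selects(result):
    # At the filter boundary the placeholder is coerced to "do not select".
    return result is not INVALID and bool(result)

# 1 < true  ->  placeholder  ->  node not selected
assert filter_selects(compare_lt(1, True)) is False
# !(1 < true) is still the placeholder, so the negated form selects nothing either
assert filter_selects(logical_not(compare_lt(1, True))) is False
# true || (1 < true) is also poisoned under this non-short-circuiting reading
assert filter_selects(logical_or(True, compare_lt(1, True))) is False
```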
As you say \"How a developer accomplishes that is up to them.\"\nUmmmm\u2026 I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. (And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. (I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. (I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) 
URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. All the options under consideration are logical, otherwise we'd rule them out. 
Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. 
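A tiny check of the corner case just mentioned, under the "type-incompatible comparisons yield false" reading (the helper names are stand-ins):

```python
def is_num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def cmp_le(a, b):
    return is_num(a) and is_num(b) and a <= b

def cmp_gt(a, b):
    return is_num(a) and is_num(b) and a > b

for val in (3, 42, "oops", None):
    print(val, cmp_le(val, 10), not cmp_gt(val, 10))
# 3      True  True
# 42     False False
# 'oops' False True    <- the two spellings disagree for non-numeric data
# None   False True
```

The guarded form and its negated spelling agree whenever the data is numeric and diverge only when the shape of the data is unexpected.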
I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similiar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. (In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions \u2014 not quite as good an example for my argument as Erlang\u2019s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Gr\u00fc\u00dfe, Carsten\nFor the record, unlike Erland, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. both and are false, so and are true, which breaks asymmetry. 
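To make the order-theoretic bullets above concrete, here is a rough sketch of the current draft's negation-based definitions; the helper names are invented and only the ordered operators are modelled:

```python
def is_num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

# Current draft: "<=" / ">=" hold only for numbers that satisfy them...
def le(a, b):
    return is_num(a) and is_num(b) and a <= b

def ge(a, b):
    return is_num(a) and is_num(b) and a >= b

# ...and "<" / ">" are defined as the negation of ">=" / "<=".
def lt(a, b):
    return not ge(a, b)

def gt(a, b):
    return not le(a, b)

# Totality fails: neither 1 <= "a" nor "a" <= 1 holds.
assert not le(1, "a") and not le("a", 1)
# Irreflexivity of "<" fails: "a" < "a" comes out true.
assert lt("a", "a")
# Asymmetry fails: 1 < "a" and "a" < 1 are both true.
assert lt(1, "a") and lt("a", 1)
```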
Our current draft does however preserves the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don\u2019t commingle exceptions with erroring out. Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. \u201cNaB\u201d is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent of the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Gr\u00fc\u00dfe, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. Then the spec's statement: implies that , the negation of , is true. 
Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"\u2026using one of the operators , =. Then\u2026 I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. 
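Here is that kind of example as a small, self-contained Python sketch (the data values and helper names are invented): the same disjunctive filter is evaluated once with mismatched comparisons treated as false, and once with any mismatch failing the whole expression, to show how the two approaches select different items.

```python
items = [{"x": 12}, {"x": 3}, {"x": "max"}, {"x": None}]

def is_num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

MISMATCH = object()   # stands in for a failed/"undesirable" comparison

def ge(a, b, strict_fail):
    if is_num(a) and is_num(b):
        return a >= b
    return MISMATCH if strict_fail else False

def keep(item, strict_fail):
    # filter: x >= 9 || x == "max"
    left = ge(item["x"], 9, strict_fail)
    if left is MISMATCH:
        return False          # any mismatch drops the item outright
    return left or item["x"] == "max"

print([i for i in items if keep(i, strict_fail=False)])
# [{'x': 12}, {'x': 'max'}]   <- mismatches treated as false: hedging works
print([i for i in items if keep(i, strict_fail=True)])
# [{'x': 12}]                 <- whole expression fails on a mismatch
```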
It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Gr\u00fc\u00dfe, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? 
Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Gr\u00fc\u00dfe, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: \u2026 which reminds us that this cannot currently be explicitly expressed. Gr\u00fc\u00dfe, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end.", "new_text": "contained in the current lean syntax \"[?]\" as a special case. 
Comparisons are restricted to Singular Path values and primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). Alphabetic characters in ABNF are case-insensitive, so \"e\" can be either \"e\" or \"E\"."} {"id": "q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9", "old_text": "consisting of a single node, each such path is replaced by the value of its node and then: comparison using one of the operators \"==\" yields true if and only if the comparison is between equal primitive values. comparisons using one of the operators \"<=\" or \">=\" yield true if and only if the comparison is between numeric values which satisfy the comparison. any comparison of two values using one of the operators \"!=\", \">\", \"<\" is defined as the negation of the comparison of the same values using the operator \"==\", \"<=\", \">=\", respectively. Note that \"==\" comparisons between a structured value and any value, including the same structured value, yield false. Consequently, \"!=\" comparisons between a structured value and any value, including the same structured value, yield true. 3.4.7.2.2.1.", "comments": "And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) \u2014 UTF-8 was careful to preserve sorting order. Gr\u00fc\u00dfe, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values? This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. with mathematic operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. 
Please see the revised wording in the filter .\nI don't understand numeric only semantic constraint. Is there a reason behind it? JSON has very few value types and lexical ordering of strings is well defined in Unicode. Standards like ISO8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal) to a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? If so such an expression would return which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings is widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations, no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.) Implementations in which the expression is a type error either report an error or return an empty result list. I don't think you'd find precedence for these rules in any comparable query language, whether JSONPath, JMESPath, JSONAta, or XPATH 3.1. There is priori experience for defining comparisons for all JSON values (see e.g. JAVA Jayway), just as there is priori experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. 
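To illustrate the point made above that such a query always produces an empty nodelist, here is a minimal sketch under a numeric-only ordering rule; the helper name and sample data are invented:

```python
def is_num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def gt(a, b):
    # ordered comparison defined only for numbers; anything else is false
    return is_num(a) and is_num(b) and a > b

data = [{"foo": 1}, {"foo": "baz"}, {"foo": True}]
# analogue of a filter comparing @.foo against a string literal:
# the right operand is never a number, so no element can ever be selected
print([d for d in data if gt(d["foo"], "bar")])   # []
```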
The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent. As for for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, than none of the above expressions should return anything. There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or . That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter to not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don\u2019t think there is much point in getting an \u201cexception\u201d semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Gr\u00fc\u00dfe, Carsten\nYea, good point. 
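A small sketch of the short-circuiting concern just raised (NAB and the helper names are invented; this illustrates the proposal under discussion, not the current draft):

```python
NAB = object()   # "not a boolean": stands in for an unsupported comparison

def conj_full(x, y):
    # evaluate both operands: NaB anywhere poisons the conjunction
    return NAB if NAB in (x, y) else (x and y)

def conj_short(x, y_thunk):
    # left-to-right short-circuit: the right side is skipped when x is false
    return False if x is False else conj_full(x, y_thunk())

rhs = NAB  # e.g. what an unsupported comparison would evaluate to

assert conj_full(False, rhs) is NAB              # full evaluation: result is NaB
assert conj_short(False, lambda: rhs) is False   # short-circuit: result is false
# Wrap the conjunction in a negation and the two strategies now select
# different nodes: not(NaB) stays NaB, while not(false) is true.
```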
I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e. the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Gr\u00fc\u00dfe, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on the suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath at around that time to quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error. So now we are deciding how it evaluates. The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. 
How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) As you say \"How a developer accomplishes that is up to them.\"\nUmmmm\u2026 I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. (And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. 
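For contrast, here is a rough sketch of the three handling strategies mentioned above — abort with an error, abort with an empty result, or skip only the offending element. The exception class, helper names, and data are purely illustrative.

```python
class ComparisonError(Exception):
    pass

def is_num(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def gt_or_raise(a, b):
    if not (is_num(a) and is_num(b)):
        raise ComparisonError(f"cannot order {a!r} and {b!r}")
    return a > b

data = [{"x": 5}, {"x": "oops"}, {"x": 12}]

def query(strategy):
    out = []
    for item in data:
        try:
            if gt_or_raise(item["x"], 9):
                out.append(item)
        except ComparisonError:
            if strategy == "abort-error":
                raise
            if strategy == "abort-empty":
                return []
            # strategy == "skip-element": just leave this item out
    return out

print(query("skip-element"))   # [{'x': 12}]
print(query("abort-empty"))    # []
# query("abort-error") raises ComparisonError
```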
(I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. (I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. 
The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. All the options under consideration are logical, otherwise we'd rule them out. Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. 
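A tiny worked example of that behaviour, with invented data and member names:

```python
# Invented data illustrating the 'type-incompatible comparisons yield false'
# behaviour argued for above: the item whose price is not a number is simply
# not matched by a numeric filter.
store = [
    {'title': 'pen',  'price': 3},
    {'title': 'book', 'price': 12},
    {'title': 'odd',  'price': 'unknown'},   # unexpected data shape
]

def is_number(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def lt(a, b):
    return is_number(a) and is_number(b) and a < b   # mismatch -> false

print([x['title'] for x in store if lt(x.get('price'), 10)])
# ['pen'] -- the malformed item does not match the numeric filter
```

The corner case discussed next is the negated form: negating the comparison would match the malformed item, because the inner comparison is simply false.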
But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. (In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions \u2014 not quite as good an example for my argument as Erlang\u2019s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Gr\u00fc\u00dfe, Carsten\nFor the record, unlike Erlang, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. 
both and are false, so and are true, which breaks asymmetry. Our current draft does, however, preserve the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don\u2019t commingle exceptions with erroring out. Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. \u201cNaB\u201d is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent on the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Gr\u00fc\u00dfe, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. 
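A small, purely illustrative harness for probing which ordering laws a candidate comparison semantics keeps (the sample values and names are invented):

```python
# Illustrative harness for probing ordering laws (totality, reflexivity,
# irreflexivity, asymmetry) over a handful of sample JSON values.
from itertools import product

samples = [0, 1, 2.5, 'a', 'b', True, False, None, [1], {'k': 1}]

def check(lt, lte):
    pairs = list(product(samples, repeat=2))
    return {
        'total: a<=b or b<=a':    all(lte(a, b) or lte(b, a) for a, b in pairs),
        'reflexive: a<=a':        all(lte(a, a) for a in samples),
        'irreflexive: not (a<a)': all(not lt(a, a) for a in samples),
        'asymmetric: not (a<b and b<a)':
            all(not (lt(a, b) and lt(b, a)) for a, b in pairs),
    }

# Candidate semantics: ordering is defined only between two numbers.
def is_number(v): return isinstance(v, (int, float)) and not isinstance(v, bool)
results = check(lambda a, b: is_number(a) and is_number(b) and a < b,
                lambda a, b: is_number(a) and is_number(b) and a <= b)
for law, holds in results.items():
    print(law, holds)
```

Other candidate definitions can be substituted to see which laws they preserve.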
Then the spec's statement: implies that , the negation of , is true. Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"\u2026using one of the operators , =. Then\u2026 I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. 
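Because the JSON and filters in that example are elided, here is the same idea with invented data: hedging an ordering comparison with an extra predicate works precisely because every comparison yields a plain boolean:

```python
# Invented data in the spirit of the elided example above: select objects whose
# 'd' is at least 9 or equals a sentinel string. Since every comparison yields a
# plain boolean, the two predicates can simply be OR-ed together.
data = [{'d': 3}, {'d': 12}, {'d': 'none'}, {'d': True}]

def is_number(v): return isinstance(v, (int, float)) and not isinstance(v, bool)
def gte(a, b):    return is_number(a) and is_number(b) and a >= b
def eq(a, b):     return type(a) is type(b) and a == b

print([o for o in data if gte(o['d'], 9) or eq(o['d'], 'none')])
# [{'d': 12}, {'d': 'none'}]
```

Under the alternative in which any 'undesirable' comparison empties the nodelist, the combined filter above would select nothing, which is the loss of hedging being described.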
It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Gr\u00fc\u00dfe, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? 
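A non-normative transcription of those six bullets into executable form (helper names invented; 'structured' means array or object):

```python
# Non-normative sketch of the six comparison rules written out above.
def is_number(v):     return isinstance(v, (int, float)) and not isinstance(v, bool)
def is_structured(v): return isinstance(v, (list, dict))
def same_type(a, b):  return type(a) is type(b) or (is_number(a) and is_number(b))

def eq(a, b):  return same_type(a, b) and not is_structured(a) and a == b
def ne(a, b):  return is_structured(a) or is_structured(b) or not same_type(a, b) or a != b
def gt(a, b):  return is_number(a) and is_number(b) and a > b
def lt(a, b):  return is_number(a) and is_number(b) and a < b
def ge(a, b):  return is_number(a) and is_number(b) and a >= b
def le(a, b):  return is_number(a) and is_number(b) and a <= b

print(eq(1, 1.0), eq(1, '1'), ne(1, '1'))   # True False True
print(eq([1], [1]), ne([1], [1]))           # False True: structured values never compare equal
print(lt(1, True), le('a', 'a'))            # False False: ordering is numbers-only here
```

Note, as observed in the follow-up above, that with these definitions an ordering operator is no longer the negation of its converse once non-numbers are involved.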
Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Gr\u00fc\u00dfe, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: \u2026 which reminds us that this cannot currently be explicitly expressed. 
Gr\u00fc\u00dfe, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end.", "new_text": "consisting of a single node, each such path is replaced by the value of its node and then: a comparison using the operator \"==\" yields true if and only if the comparison is between values of the same primitive type (numbers, strings, booleans, and \"null\") which are equal. a comparison using the operator \"!=\" yields true if and only if the comparison is not between values of the same primitive type (numbers, strings, booleans, and \"null\") which are equal. a comparison using one of the operators \"<\", \"<=\", \">\", or \">=\" yields true if and only if the comparison is between values of the same type which are both numbers or both strings and which satisfy the comparison: numbers in the I-JSON RFC7493 range of exact values MUST compare using the normal mathematical ordering; one or both numbers outside that range MAY compare using an implementation specific ordering the empty string compares less than any non-empty string a non-empty string compares less than another non-empty string if and only if the first string starts with a lower Unicode scalar value than the second string or if both strings start with the same Unicode scalar value and the remainder of the first string compares less than the remainder of the second string. Note that \"==\" comparisons between a structured value (that is, an object or an array) and any value, including the same structured value, yield false and \"!=\" comparisons between a structured value and any value, including the same structured value, yield true. 3.4.7.2.2.1."} {"id": "q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9", "old_text": "of the selector entries is kept as many times in the nodelist. To be valid, integer values in the \"element-index\" and \"slice-index\" components MUST be in the I-JSON range of exact values, see synsem- overview. 3.4.8.3.", "comments": "And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) \u2014 UTF-8 was careful to preserve sorting order. Gr\u00fc\u00dfe, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values? This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. 
with mathematic operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. Please see the revised wording in the filter .\nI don't understand numeric only semantic constraint. Is there a reason behind it? JSON has very few value types and lexical ordering of strings is well defined in Unicode. Standards like ISO8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal) to a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? If so such an expression would return which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings is widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations, no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.) Implementations in which the expression is a type error either report an error or return an empty result list. 
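As a side note on the string rule quoted in the new_text above: ordering by Unicode scalar values agrees with, for example, Python's default string comparison on inputs like these, and it also keeps the ISO 8601 date-time use case raised earlier working, since timestamps written in the same form sort chronologically (values invented):

```python
# Comparing strings by Unicode scalar value, as in the new_text quoted above.
# Python's default str ordering agrees on these examples.
print('' < 'a')          # True: the empty string sorts before any non-empty string
print('ab' < 'b')        # True: decided by the first differing scalar value
print('char' < 'char')   # False: equal strings are not less than each other
print('2022-07-01T00:00:00Z' < '2022-07-20T12:30:00Z')   # True: same-form ISO 8601 sorts chronologically
```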
I don't think you'd find precedent for these rules in any comparable query language, whether JSONPath, JMESPath, JSONata, or XPATH 3.1. There is prior experience for defining comparisons for all JSON values (see e.g. Java Jayway), just as there is prior experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent. As for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, then none of the above expressions should return anything. There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or . That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . 
would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter to not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don\u2019t think there is much point in getting an \u201cexception\u201d semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Gr\u00fc\u00dfe, Carsten\nYea, good point. I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e. the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Gr\u00fc\u00dfe, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on the suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath at around that time to quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error. So now we are deciding how it evaluates. 
The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) As you say \"How a developer accomplishes that is up to them.\"\nUmmmm\u2026 I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. (And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. 
So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. (I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. (I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. 
However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. All the options under consideration are logical, otherwise we'd rule them out. Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. 
Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similiar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. (In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. 
Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions \u2014 not quite as good an example for my argument as Erlang\u2019s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Gr\u00fc\u00dfe, Carsten\nFor the record, unlike Erlang, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. both and are false, so and are true, which breaks asymmetry. Our current draft does, however, preserve the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don\u2019t commingle exceptions with erroring out. 
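For comparison, a hypothetical sketch of the Erlang-style alternative mentioned above, in which every pair of values is ordered by ranking types first (the ranking and helper names are invented):

```python
# Hypothetical sketch of a total order across all JSON values: rank types first,
# then compare within the type. Only scalars are handled here; arrays and objects
# would need a recursive rule.
TYPE_RANK = {type(None): 0, bool: 1, int: 2, float: 2, str: 3, list: 4, dict: 5}

def total_lt(a, b):
    ra, rb = TYPE_RANK[type(a)], TYPE_RANK[type(b)]
    if ra != rb:
        return ra < rb                       # different types: the rank decides
    if a is None:
        return False                         # null is equal to null
    if isinstance(a, (bool, int, float, str)):
        return a < b                         # same scalar type: usual ordering
    raise NotImplementedError('arrays and objects need a recursive rule')

print(total_lt(1, 'a'), total_lt('a', 1))    # True False: every pair is ordered
print(total_lt(2, 10), total_lt(True, 0))    # True True (booleans rank below numbers here)
```

This restores totality, but, as noted in the thread, any such ordering is an invention layered on top of JSON rather than something the data model itself provides.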
Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. \u201cNaB\u201d is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent of the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Gr\u00fc\u00dfe, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. Then the spec's statement: implies that , the negation of , is true. Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"\u2026using one of the operators , =. Then\u2026 I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. 
I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . 
Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Gr\u00fc\u00dfe, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. 
We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Gr\u00fc\u00dfe, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: \u2026 which reminds us that this cannot currently be explicitly expressed. Gr\u00fc\u00dfe, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end.", "new_text": "of the selector entries is kept as many times in the nodelist. To be valid, integer values in the \"element-index\" and \"slice-index\" components MUST be in the I-JSON RFC7493 range of exact values, see synsem-overview. 3.4.8.3."} {"id": "q-en-draft-ietf-jsonpath-base-446f74b2169db2ee17fdd4c587672d5f22bce43197468cac09a80e3072774c21", "old_text": "Additional terms used in this specification are defined below. 1.2. This section is informative.", "comments": "This is a draft for us to contemplate and update to see if we can reach a satisfactory clarification.\nThinking about the discussion in and the current definitions ... leads me to the question, why object members can't be nodes. Consider a subtile modification of definitions: Member: A name/value pair in an object. Element: An index/value pair in an array. Node: A pair of a value along with its nodename (member name or element index). Root Node: The unique node with the name \"$\" whose value is the entire argument. Location: The unique path from the root node to a single node, manifested as a normalized path. This is an alternative node representation, i.e, in nodelists. This way confusing formulations like become obsolete. We also get a more simplified definition for children and anchestors equivalent to child nodes and anchestor nodes. It seems to be consistent to that . Is there something I am overlooking?\nWe could do this, but that would mean our data model no longer maps to JSON, which doesn't have free-floating members (or elements in the above definition). Also, from a practical point of view, one usually wants to look at a node in the sense we currently have it, not as a combination of the edge leading into it and the vertex itself (in graph theory, node also is usually a synonym to a vertex, not that combination). (I think we already got rid of the confusing wordings.)\nThat definition of node would not uniquely identify a value in the JSON tree. For instance, in the JSON: the above definition of node says there are two nodes with the value 1 and the nodename 0. This isn't a node, according to the above definition, because it does not have a nodename. In the above JSON example, the Normalized Paths and both point at a single node (with value 1 and the nodename 0). Also, following on from NAME comment, the whole point of JSONPath is to select/extract JSON values from a JSON value, so that rules out selecting/extracting object members, since these are not JSON values.\nI am not quite convinced yet. Hmm ... where is it specified, that a node definition must be unique. Nodes in an XML DOM tree aren't either. Object members have an explicit nodename. Array elements and the root node have implicit node names (index and '$'). 
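As a small illustration of the point being debated here, the sketch below (an assumed example, not taken from the thread) pairs each value with a Normalized-Path-style location; carrying the location is what keeps two equal values at different positions distinguishable as separate nodes.

```python
# Minimal sketch (assumed example): a nodelist entry pairs a value with its
# location, expressed as a Normalized Path, so two equal values at
# different positions remain distinct nodes.
doc = {"a": [1], "b": [1]}

def descendants(value, path="$"):
    # Yield (normalized_path, value) for every node in the tree,
    # starting with the root node itself.
    yield path, value
    if isinstance(value, dict):
        for name, child in value.items():
            yield from descendants(child, f"{path}['{name}']")
    elif isinstance(value, list):
        for index, child in enumerate(value):
            yield from descendants(child, f"{path}[{index}]")

nodes = list(descendants(doc))
# Both occurrences of the value 1 are separate nodes with distinct locations.
assert [p for p, v in nodes if v == 1] == ["$['a'][0]", "$['b'][0]"]
```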
With our current definition members/elements and nodes are seemingly total unrelated. So maybe we can get completely rid of the term 'node' with its abstract definition then.\nThat was a somewhat simplified statement. We do have a use for normalized paths, which identify nodes.\nThe consequences of a node not being unique in a tree include: Outputting a nodelist from a query would not be so useful because it would be missing location information. Converting a nodelist into an array of Normalized Paths would be tricky as the nodelist wouldn't define the order of duplicate entries. A node could have multiple parents, so the tree would't be a tree any more, it would be a directed acyclic graph. Not sure I follow. Object members are not values, do not have nodes, and so don't need a nodename. I don't think that's true. Each member has a value with a node. Each element has a node. Perhaps the (\"JSON Values as Trees of Nodes\") in PR makes things clearer? I don't think that's possible without introducing a synonym for \"node\".\nI think this should be clear now in the draft -- can we close this?\n2023-01-10 Interim: Yes, we can close.", "new_text": "Additional terms used in this specification are defined below. 1.1.1. This specification models the argument as a tree of JSON values, each with its own node. A node is either the root node or one of its descendants. This specification models the result of applying a query to the argument as a nodelist (a list of nodes). So nodes are the selectable parts of the argument. The only parts of an object that can be selected by a query are the member values. Member names and members (name/value pairs) cannot be selected. So member values have nodes, but members and member names do not. Similarly, member values are children of an object, but members and member names are not. 1.2. This section is informative."} {"id": "q-en-draft-ietf-jsonpath-base-9a65d1a6def3f300af175053bdf78a734985aed3d5d9eb11171aa9c29a989d5f", "old_text": "A syntactically valid segment MUST NOT produce errors when executing the query. This means that some operations that might be considered erroneous, such as indexing beyond the end of an array, simply result in fewer nodes being selected. Consider this example. With the argument \"{\"a\":[{\"b\":0},{\"b\":1},{\"c\":2}]}\", the query \"$.a[*].b\" selects the", "comments": "Avoid the term indexing, so we don't need to worry about whether it applies only to arrays or to arrays and objects.", "new_text": "A syntactically valid segment MUST NOT produce errors when executing the query. This means that some operations that might be considered erroneous, such as using an index lying outside the range of an array, simply result in fewer nodes being selected. Consider this example. With the argument \"{\"a\":[{\"b\":0},{\"b\":1},{\"c\":2}]}\", the query \"$.a[*].b\" selects the"} {"id": "q-en-draft-ietf-jsonpath-base-9a65d1a6def3f300af175053bdf78a734985aed3d5d9eb11171aa9c29a989d5f", "old_text": "2.5.3.2. The \"index-selector\" applied to an array selects an array element using a zero-based index. For example, the selector \"0\" selects the first and the selector \"4\" selects the fifth element of a sufficiently long array. Nothing is selected, and it is not an error, if the index lies outside the range of the array. Nothing is selected from a value that is not an array.", "comments": "Avoid the term indexing, so we don't need to worry about whether it applies only to arrays or to arrays and objects.", "new_text": "2.5.3.2. 
A non-negative \"index-selector\" applied to an array selects an array element using a zero-based index. For example, the selector \"0\" selects the first and the selector \"4\" selects the fifth element of a sufficiently long array. Nothing is selected, and it is not an error, if the index lies outside the range of the array. Nothing is selected from a value that is not an array."} {"id": "q-en-draft-ietf-jsonpath-base-9a65d1a6def3f300af175053bdf78a734985aed3d5d9eb11171aa9c29a989d5f", "old_text": "as slice bounds and must first be normalized. Normalization for this purpose is defined as: The result of the array indexing expression \"i\" applied to an array of length \"len\" is defined to be the result of the array slicing expression \"Normalize(i, len):Normalize(i, len)+1:1\". Slice expression parameters \"start\" and \"end\" are used to derive", "comments": "Avoid the term indexing, so we don't need to worry about whether it applies only to arrays or to arrays and objects.", "new_text": "as slice bounds and must first be normalized. Normalization for this purpose is defined as: The result of the array index expression \"i\" applied to an array of length \"len\" is defined to be the result of the array slicing expression \"Normalize(i, len):Normalize(i, len)+1:1\". Slice expression parameters \"start\" and \"end\" are used to derive"} {"id": "q-en-draft-ietf-jsonpath-base-9a65d1a6def3f300af175053bdf78a734985aed3d5d9eb11171aa9c29a989d5f", "old_text": "the lower bound and which is the upper bound: The slice expression selects elements with indices between the lower and upper bounds. In the following pseudocode, the \"a(i)\" construct expresses the 0-based indexing operation on the underlying array. When \"step = 0\", no elements are selected and the result array is empty.", "comments": "Avoid the term indexing, so we don't need to worry about whether it applies only to arrays or to arrays and objects.", "new_text": "the lower bound and which is the upper bound: The slice expression selects elements with indices between the lower and upper bounds. In the following pseudocode, \"a(i)\" is the \"i+1\"th element of the array \"a\" (i.e., \"a(0)\" is the first element, \"a(1)\" the second, and so forth). When \"step = 0\", no elements are selected and the result array is empty."} {"id": "q-en-draft-ietf-jsonpath-base-41d62993f8ed991ba32b456fadeff2d76a88466ef52465dab53c961a18b349c7", "old_text": ". A string is a well-formed JSONPath query if it conforms to the ABNF syntax in this document. A well-formed JSONPath query is valid if it also fulfills all semantic requirements posed by this document. To be valid, integer numbers in the JSONPath query that are relevant to the JSONPath processing (e.g., index values and steps) MUST be within the range of exact values defined in I-JSON RFC7493, namely within the interval [-(2 )+1, (2 )-1]). To be valid, strings on the right-hand side of the \"=~\" regex matching operator need to conform to I-D.draft-ietf-jsonpath-iregexp. The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further", "comments": "I like this structure. We might come up with more validity requirements, so we may want to keep this even if =~ goes away (which is pretty much what we decided last week).", "new_text": ". A string is a well-formed JSONPath query if it conforms to the ABNF syntax in this document. 
A well-formed JSONPath query is valid if it also fulfills all semantic requirements posed by this document, which are: Integer numbers in the JSONPath query that are relevant to the JSONPath processing (e.g., index values and steps) MUST be within the range of exact values defined in I-JSON RFC7493, namely within the interval [-(2 )+1, (2 )-1]. Strings on the right-hand side of the \"=~\" regex matching operator MUST conform to I-D.draft-ietf-jsonpath-iregexp. The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further"} {"id": "q-en-draft-ietf-jsonpath-base-e1b7da61fe3eef07050d2e352b24ad69dea159b02e8bb021a37218cc58493647", "old_text": "Note: \"double-quoted\" strings follow the JSON string syntax (RFC8259); \"single-quoted\" strings follow an analogous pattern (syntax-index). 2.5.1.2.", "comments": "Add pair of surrogate escapes to note on string format.\nLGTM, thanks. Just one editorial suggestion.", "new_text": "Note: \"double-quoted\" strings follow the JSON string syntax (RFC8259); \"single-quoted\" strings follow an analogous pattern (syntax-index). No attempt was made to improve on this syntax, so characters with scalar values above 0x10000, such as U+0026 U+0023 U+0031 U+0032 U+0039 U+0033 U+0030 U+0030 U+003B (\"🤔\", AMPERSAND, NUMBER SIGN, DIGIT ONE, DIGIT TWO, DIGIT NINE, DIGIT THREE, DIGIT ZERO, DIGIT ZERO, SEMICOLON), need to be represented by a pair of surrogate escapes (\"\"uD83EuDD14\"\" in this case). 2.5.1.2."} {"id": "q-en-draft-ietf-jsonpath-base-61bbd758f12b22e8f8b28378dd087b0c050ff7cc3b9001e4abb93f3f559c209f", "old_text": "A function argument is a \"filter-path\" or a \"comparable\". According to filter-selector, a \"function-expression\" is valid as a \"filter-path\" or a \"comparable\". Any function expressions in a query must be well-formed (by", "comments": "E.g.,\nI propose a slightly more compact naming of ABNF of - and -segment, which should be more consistent between them two. is well known from LaTeX and means or there.\nabove is wrong (and so it is in the current spec). It needs to read ... to not allow triple dot ( :-)\nNAME Wouldn't dotdot-wildcard etc. be less cryptic?\nHere are some quibbles with the current ABNF (prior to PR being merged) which we can sort out after The Great Renaming:\nYes, yes. See. Fixed in I like the way this looks like now. Maybe I don't understand; can you elaborate? Fixed this (and following) in Removed in To be fixed in overall overhaul, next Yes! To be fixed in overall overhaul, next To be fixed in overall overhaul, next Fixed in Discuss this in its own issue, I'd say. Fixed in It certainly does not -- it is the subset that is actually needed for normalized paths. To stay normalized, we don't allow random other ones.\nOf course ... I'm quite dispassionate here.\nPlease see my proposal in\nWith merged, we still have the expression cleanup the separate issue (enhancement) whether should be allowed, to mean\nMy remaining points are covered by PRs , , and .\nShouldn't be named according to all other above ?\nYes, I agree. See URL\nNAME Are we ready to close this issue now the PRs have been merged or did you have more expression cleanup in mind?\nThe only thing that comes to my mind is standardizing on the column where the is -- this is currently rather chaotic. But that can be a new issue.\nIndentation of : number of lines such indented\n(And it doesn't need to be consistent, just a bit less chaotic.)\nLet's make the a new issue. 
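The surrogate-escape note in the record above can be checked mechanically; the following small sketch just applies the standard UTF-16 pairing rule and is not specific to this draft.

```python
# A character above U+FFFF, such as U+1F914, is written in a JSONPath string
# literal as two \uXXXX escapes. Standard UTF-16 surrogate pairing:
import json

cp = 0x1F914                      # the thinking-face character
v = cp - 0x10000
high = 0xD800 + (v >> 10)
low = 0xDC00 + (v & 0x3FF)
assert (high, low) == (0xD83E, 0xDD14)   # i.e. the escape pair \uD83E\uDD14

# Python's JSON encoder produces the same pair when escaping non-ASCII output:
assert json.dumps("\U0001F914") == '"\\ud83e\\udd14"'
```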
(I haven't got a good feel for the optimal solution.)\nBefore we close this issue, URL slipped through the net in my earlier PRs.\nThen could resolve to either or . We would introduce an ambiguity, we already found with Json Pointer () ...\nOK, with the latest blank space realignments that are part of (5e2cc85), I think we are done with this.\nThank you! ... or do we need two selectors here (Ch.2.61) child-longhand = \"[\" S selector 1(S \",\" S selector) S \"]\" given that '1' means 'at least one'.", "new_text": "A function argument is a \"filter-path\" or a \"comparable\". According to filter-selector, a \"function-expr\" is valid as a \"filter-path\" or a \"comparable\". Any function expressions in a query must be well-formed (by"} {"id": "q-en-draft-ietf-jsonpath-base-13989b8055ffbbd9d8d0ccd4876b7ca5445153dc2c4882e0a86f65e9a8ec79be", "old_text": "primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). These can be obtained via literal values; Singular Queries, each of which selects at most one node the value of which is then used; and function expressions (see fnex) of type \"ValueType\" or \"NodesType\" (see type-conv). Literals can be notated in the way that is usual for JSON (with the extension that strings can use single-quote delimiters). Alphabetic", "comments": "Thanks to NAME for spotting this.\nNAME I presume you are ok with this change, so I'm going to merge it. Any concern, let me know and we can adjust the result.\nYes. Just didn't have chance to check this in context, but if I remember this properly, this change is good (and necessary).", "new_text": "primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). These can be obtained via literal values; Singular Queries, each of which selects at most one node the value of which is then used; and function expressions (see fnex) of type \"ValueType\". Literals can be notated in the way that is usual for JSON (with the extension that strings can use single-quote delimiters). Alphabetic"} {"id": "q-en-draft-ietf-jsonpath-base-13989b8055ffbbd9d8d0ccd4876b7ca5445153dc2c4882e0a86f65e9a8ec79be", "old_text": "applies) \"NodesType\". If it occurs directly as a \"comparable\" in a comparison, the function is declared to have a result type of \"ValueType\", or (conversion applies) \"NodesType\". Otherwise, it occurs as an argument in another function expression, and the following rules for function arguments apply", "comments": "Thanks to NAME for spotting this.\nNAME I presume you are ok with this change, so I'm going to merge it. Any concern, let me know and we can adjust the result.\nYes. Just didn't have chance to check this in context, but if I remember this properly, this change is good (and necessary).", "new_text": "applies) \"NodesType\". If it occurs directly as a \"comparable\" in a comparison, the function is declared to have a result type of \"ValueType\". Otherwise, it occurs as an argument in another function expression, and the following rules for function arguments apply"} {"id": "q-en-draft-ietf-jsonpath-base-63e8100b16b86d3d55422b29ac535d17eb8f1217ffdcc221d702560aba796bfd", "old_text": "these are used to define \"!=\", \"<=\", \">\", and \">=\". When either side of a comparison results in an empty nodelist or \"Nothing\": a comparison using the operator \"==\" yields true if and only the other side also results in an empty nodelist or \"Nothing\".", "comments": "NAME Deleting the text should have addressed your original editorial issue. 
This PR also adds the missing cross-reference. If you're happy, please approve.", "new_text": "these are used to define \"!=\", \"<=\", \">\", and \">=\". When either side of a comparison results in an empty nodelist or \"Nothing\" (see typesys): a comparison using the operator \"==\" yields true if and only the other side also results in an empty nodelist or \"Nothing\"."} {"id": "q-en-draft-ietf-jsonpath-base-63e8100b16b86d3d55422b29ac535d17eb8f1217ffdcc221d702560aba796bfd", "old_text": "first string compares less than the remainder of the second string. Note that comparisons using the operator \"<\" yield false if either value being compared is an object, array, boolean, or \"null\". \"!=\", \"<=\", \">\", and \">=\" are defined in terms of the other comparison operators. For any \"a\" and \"b\":", "comments": "NAME Deleting the text should have addressed your original editorial issue. This PR also adds the missing cross-reference. If you're happy, please approve.", "new_text": "first string compares less than the remainder of the second string. \"!=\", \"<=\", \">\", and \">=\" are defined in terms of the other comparison operators. For any \"a\" and \"b\":"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "Clients are configured to use IP proxying over HTTP via an URI Template TEMPLATE. The URI template MAY contain two variables: \"target\" and \"ipproto\" (scope). The optionality of the variables needs to be considered when defining the template so that either the variable is self-identifying or it is possible to exclude it in the syntax.", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "Clients are configured to use IP proxying over HTTP via an URI Template TEMPLATE. The URI template MAY contain two variables: \"target\" and \"ipproto\"; see scope. The optionality of the variables needs to be considered when defining the template so that either the variable is self-identifying or it is possible to exclude it in the syntax."} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "To initiate an IP tunnel associated with a single HTTP stream, a client issues a request containing the \"connect-ip\" upgrade token. 
The target of the tunnel is indicated by the client to the IP proxy via the \"target_host\" and \"target_port\" variables of the URI Template; see client-config. When sending its IP proxying request, the client SHALL perform URI template expansion to determine the path and query of its request, see client-config. A successful response indicates that the IP proxy is willing to open an IP forwarding tunnel between it and the client. Any response other than a successful response indicates that the tunnel has not been formed. The lifetime of the IP forwarding tunnel is tied to the IP proxying request stream. The IP proxy MUST maintain all IP address and route", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "To initiate an IP tunnel associated with a single HTTP stream, a client issues a request containing the \"connect-ip\" upgrade token. When sending its IP proxying request, the client SHALL perform URI template expansion to determine the path and query of its request, see client-config. By virtue of the definition of the Capsule Protocol (see HTTP-DGRAM), IP proxying requests do not carry any message content. Similarly, successful IP proxying responses also do not carry any message content. 4.1. Upon receiving an IP proxying request: if the recipient is configured to use another HTTP proxy, it will act as an intermediary by forwarding the request to another HTTP server. Note that such intermediaries may need to re-encode the request if they forward it using a version of HTTP that is different from the one used to receive it, as the request encoding differs by version (see below). otherwise, the recipient will act as an IP proxy. It extracts the optional \"target\" and \"ipproto\" variables from the URI it has reconstructed from the request headers, decodes their percent- encoding, and establishes an IP tunnel. IP proxies MUST validate whether the decoded \"target\" and \"ipproto\" variables meet the requirements in scope. If they do not, the IP proxy MUST treat the request as malformed; see H2 and H3. If the \"target\" variable is a DNS name, the IP proxy MUST perform DNS resolution before replying to the HTTP request. If errors occur during this process, the IP proxy MUST reject the request and SHOULD send details using an appropriate Proxy-Status header field PROXY- STATUS. For example, if DNS resolution returns an error, the proxy can use the \"dns_error\" Proxy Error Type from PROXY-STATUS. 
The lifetime of the IP forwarding tunnel is tied to the IP proxying request stream. The IP proxy MUST maintain all IP address and route"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "tunnel due to a period of inactivity, but they MUST close the request stream when doing so. Along with a successful response, the IP proxy can send capsules to assign addresses and advertise routes to the client (capsules). The client can also assign addresses and advertise routes to the IP proxy for network-to-network routing. By virtue of the definition of the Capsule Protocol (see HTTP-DGRAM), IP proxying requests do not carry any message content. Similarly, successful IP proxying responses also do not carry any message content. 4.1. When using HTTP/1.1 H1, an IP proxying request will meet the following requirements:", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "tunnel due to a period of inactivity, but they MUST close the request stream when doing so. A successful response (as defined in Sections resp1 and resp23) indicates that the IP proxy has established an IP tunnel and is willing to proxy IP payloads. Any response other than a successful response indicates that the request has failed; thus, the client MUST abort the request. Along with a successful response, the IP proxy can send capsules to assign addresses and advertise routes to the client (capsules). The client can also assign addresses and advertise routes to the IP proxy for network-to-network routing. 4.2. When using HTTP/1.1 H1, an IP proxying request will meet the following requirements:"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "wishes to open an IP forwarding tunnel with no target or protocol limitations, it could send the following request: 4.2. The IP proxy indicates a successful response by replying with the following requirements:", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. 
What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "wishes to open an IP forwarding tunnel with no target or protocol limitations, it could send the following request: 4.3. The IP proxy indicates a successful response by replying with the following requirements:"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "For example, the IP proxy could respond with: 4.3. When using HTTP/2 H2 or HTTP/3 H3, IP proxying requests use HTTP Extended CONNECT. This requires that servers send an HTTP Setting as", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "For example, the IP proxy could respond with: 4.4. When using HTTP/2 H2 or HTTP/3 H3, IP proxying requests use HTTP Extended CONNECT. This requires that servers send an HTTP Setting as"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "IP packet forwarding, or a specific proxied flow; see scope. An IP proxying request that does not conform to these restrictions is malformed (see H2 and H3). For example, if the client is configured with URI Template \"https://example.org/.well-known/masque/ip/{target}/{ipproto}/\" and wishes to open an IP forwarding tunnel with no target or protocol limitations, it could send the following request: 4.4. 
The IP proxy indicates a successful response by replying with the following requirements:", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "IP packet forwarding, or a specific proxied flow; see scope. An IP proxying request that does not conform to these restrictions is malformed; see H2 and H3. For example, if the client is configured with URI Template \"https://example.org/.well-known/masque/ip/{target}/{ipproto}/\" and wishes to open an IP forwarding tunnel with no target or protocol limitations, it could send the following request: 4.5. The IP proxy indicates a successful response by replying with the following requirements:"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "For example, the IP proxy could respond with: 4.5. Unlike UDP proxying requests, which require specifying a target host, IP proxying requests can allow endpoints to send arbitrary IP packets", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "For example, the IP proxy could respond with: 4.6. 
Unlike UDP proxying requests, which require specifying a target host, IP proxying requests can allow endpoints to send arbitrary IP packets"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "If present, the IP prefix length in \"target\" SHALL be preceded by a percent-encoded slash (\"/\"): \"%2F\". The IP prefix length MUST represent an integer between 0 and the length of the IP address in bits, inclusive. \"ipproto\" MUST represent an integer between 0 and 255 inclusive, or the wildcard value \"*\". IP proxies MAY perform access control using the scoping information provided by the client: if the client is not authorized to access any of the destinations included in the scope, then the IP proxy can immediately fail the request. 4.6. This document defines multiple new capsule types that allow endpoints to exchange IP configuration information. Both endpoints MAY send any number of these new capsules. 4.6.1. The ADDRESS_ASSIGN capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer of the list of IP", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "If present, the IP prefix length in \"target\" SHALL be preceded by a percent-encoded slash (\"/\"): \"%2F\". The IP prefix length MUST represent a decimal integer between 0 and the length of the IP address in bits, inclusive. \"ipproto\" MUST represent a decimal integer between 0 and 255 inclusive, or the wildcard value \"*\". IP proxies MAY perform access control using the scoping information provided by the client: if the client is not authorized to access any of the destinations included in the scope, then the IP proxy can immediately fail the request. 4.7. This document defines multiple new capsule types that allow endpoints to exchange IP configuration information. Both endpoints MAY send any number of these new capsules. 4.7.1. The ADDRESS_ASSIGN capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer of the list of IP"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "ADDRESS_REQUEST capsules, endpoints MAY send ADDRESS_ASSIGN capsules unprompted. 4.6.2. 
The ADDRESS_REQUEST capsule (see iana-types for the value of the capsule type) allows an endpoint to request assignment of IP", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "ADDRESS_REQUEST capsules, endpoints MAY send ADDRESS_ASSIGN capsules unprompted. 4.7.2. The ADDRESS_REQUEST capsule (see iana-types for the value of the capsule type) allows an endpoint to request assignment of IP"} {"id": "q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2", "old_text": "If an endpoint receives an ADDRESS_REQUEST capsule that contains zero Requested Addresses, it MUST abort the IP proxying request stream. 4.6.3. The ROUTE_ADVERTISEMENT capsule (see iana-types for the value of the capsule type) allows an endpoint to communicate to its peer that it", "comments": "I'll just use a single example here because I think it makes it easier to express my point but the problems might be wider The URI template may include a variable, which is later defined in URL as . The value can only be between 0 through 255, I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has treat the request as malformed if they don't\nNAME can you write up some text?\nAppear to look good.", "new_text": "If an endpoint receives an ADDRESS_REQUEST capsule that contains zero Requested Addresses, it MUST abort the IP proxying request stream. 4.7.3. The ROUTE_ADVERTISEMENT capsule (see iana-types for the value of the capsule type) allows an endpoint to communicate to its peer that it"} {"id": "q-en-draft-ietf-masque-connect-ip-1ac49ff4455263e739b3e5ab597573a7515b88093e2013c973cc29ca382cb3e7", "old_text": "4.2.1. 
The ADDRESS_ASSIGN capsule allows an endpoint to inform its peer that it has assigned an IP address to it. It allows assigning a prefix which can contain multiple addresses. This capsule uses a Capsule Type of 0xfff100. Its value uses the following format: IP Version of this address assignment. MUST be either 4 or 6.", "comments": "This change adds text that addresses conveyed by ADDRESSASSIGN can be used in the \"source address\" field of an IP packet, while addresses conveyed by ROUTEADVERTISEMENT can be used in the \"destination address\" field. While we're here, also add a small clarification when a prefix is assigned/advertised that any of these addresses can be used in the source/destination field, respectively. A future extension could create semantics for these addreses, but that is not called out in the text at this time.\nI'm know I'm late looking at this but thanks as this small clarification is really helpful\nThe current text in this draft needs to be updated to match the changes from URL I'll write up a PR once existing conflicting PRs are merged.\nNAME is this unblocked now?\nWe need to clearly state that ADDRESSASSIGN defines a set of addresses the receiver of the capsule can use as a source IP address on packets it generates We need to clearly state that ROUTEADVERTISEMENT defines a set of addresses the receiver of the capsule can use as a destination IP address on packets it generates Should we rename the capsules to make this relationship more clear in all use cases?\nLet's bike shed the name here NAME NAME NAME\n(to clarify Tommy's comment - if anyone comes up with a new name they like, they should post it here. If not, we'll go with advertisement and withdraw)\nThanks!", "new_text": "4.2.1. The ADDRESS_ASSIGN capsule allows an endpoint to inform its peer that it has assigned an IP address or prefix to it. The ADDRESS_ASSIGN capsule allows assigning a prefix which can contain multiple addresses. Any of these addresses can be used as the source address on IP packets originated by the receiver of this capsule. This capsule uses a Capsule Type of 0xfff100. Its value uses the following format: IP Version of this address assignment. MUST be either 4 or 6."} {"id": "q-en-draft-ietf-masque-connect-ip-1ac49ff4455263e739b3e5ab597573a7515b88093e2013c973cc29ca382cb3e7", "old_text": "The number of bits in the IP Address that are used to define the prefix that is being assigned. This MUST be less than or equal to the length of the IP Address field, in bits. If the prefix length is equal to the length of the IP Address, the endpoint is only allowed to send packets from a single source address. If the prefix length is less than the length of the IP address, the endpoint is allowed to send packets from any source address that falls within the prefix. If an endpoint receives multiple ADDRESS_ASSIGN capsules, all of the assigned addresses or prefixes can be used. For example, multiple", "comments": "This change adds text that addresses conveyed by ADDRESSASSIGN can be used in the \"source address\" field of an IP packet, while addresses conveyed by ROUTEADVERTISEMENT can be used in the \"destination address\" field. While we're here, also add a small clarification when a prefix is assigned/advertised that any of these addresses can be used in the source/destination field, respectively. 
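To illustrate the source-address rule just described — an ADDRESS_ASSIGN entry with a full-length prefix permits a single source address, while a shorter prefix permits any source address inside it — here is a small Python sketch using the standard ipaddress module; the addresses and helper name are invented for the example.

```python
# Sketch of the source-address rule above: addresses/prefixes received in
# ADDRESS_ASSIGN capsules bound the source addresses the receiver may put
# on packets it originates.
import ipaddress

assigned = [
    ipaddress.ip_network("192.0.2.1/32"),       # single-address assignment
    ipaddress.ip_network("2001:db8:1234::/64"), # prefix: any address inside it
]

def may_use_as_source(addr_text):
    addr = ipaddress.ip_address(addr_text)
    return any(addr in net for net in assigned)

assert may_use_as_source("192.0.2.1")
assert not may_use_as_source("192.0.2.2")        # outside the /32 assignment
assert may_use_as_source("2001:db8:1234::42")    # inside the assigned /64
```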
A future extension could create semantics for these addreses, but that is not called out in the text at this time.\nI'm know I'm late looking at this but thanks as this small clarification is really helpful\nThe current text in this draft needs to be updated to match the changes from URL I'll write up a PR once existing conflicting PRs are merged.\nNAME is this unblocked now?\nWe need to clearly state that ADDRESSASSIGN defines a set of addresses the receiver of the capsule can use as a source IP address on packets it generates We need to clearly state that ROUTEADVERTISEMENT defines a set of addresses the receiver of the capsule can use as a destination IP address on packets it generates Should we rename the capsules to make this relationship more clear in all use cases?\nLet's bike shed the name here NAME NAME NAME\n(to clarify Tommy's comment - if anyone comes up with a new name they like, they should post it here. If not, we'll go with advertisement and withdraw)\nThanks!", "new_text": "The number of bits in the IP Address that are used to define the prefix that is being assigned. This MUST be less than or equal to the length of the IP Address field, in bits. If the prefix length is equal to the length of the IP Address, the receiver of this capsule is only allowed to send packets from a single source address. If the prefix length is less than the length of the IP address, the receiver of this capsule is allowed to send packets from any source address that falls within the prefix. If an endpoint receives multiple ADDRESS_ASSIGN capsules, all of the assigned addresses or prefixes can be used. For example, multiple"} {"id": "q-en-draft-ietf-masque-connect-ip-1ac49ff4455263e739b3e5ab597573a7515b88093e2013c973cc29ca382cb3e7", "old_text": "notifies its peer that if the receiver of the ROUTE_ADVERTISEMENT capsule sends IP packets for this prefix in HTTP Datagrams, the sender of the capsule will forward them along its preexisting route. This capsule uses a Capsule Type of 0xfff102. Its value uses the following format: IP Version of this route advertisement. MUST be either 4 or 6.", "comments": "This change adds text that addresses conveyed by ADDRESSASSIGN can be used in the \"source address\" field of an IP packet, while addresses conveyed by ROUTEADVERTISEMENT can be used in the \"destination address\" field. While we're here, also add a small clarification when a prefix is assigned/advertised that any of these addresses can be used in the source/destination field, respectively. A future extension could create semantics for these addreses, but that is not called out in the text at this time.\nI'm know I'm late looking at this but thanks as this small clarification is really helpful\nThe current text in this draft needs to be updated to match the changes from URL I'll write up a PR once existing conflicting PRs are merged.\nNAME is this unblocked now?\nWe need to clearly state that ADDRESSASSIGN defines a set of addresses the receiver of the capsule can use as a source IP address on packets it generates We need to clearly state that ROUTEADVERTISEMENT defines a set of addresses the receiver of the capsule can use as a destination IP address on packets it generates Should we rename the capsules to make this relationship more clear in all use cases?\nLet's bike shed the name here NAME NAME NAME\n(to clarify Tommy's comment - if anyone comes up with a new name they like, they should post it here. 
If not, we'll go with advertisement and withdraw)\nThanks!", "new_text": "notifies its peer that if the receiver of the ROUTE_ADVERTISEMENT capsule sends IP packets for this prefix in HTTP Datagrams, the sender of the capsule will forward them along its preexisting route. Any of these addresses can be used as the destination address on IP packets originated by the receiver of this capsule. This capsule uses a Capsule Type of 0xfff102. Its value uses the following format: IP Version of this route advertisement. MUST be either 4 or 6."} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "requesting full-tunnel IP packet forwarding, or a specific proxied flow, see scope. Along with a request, the client can send a REGISTER_DATAGRAM_CONTEXT capsule HTTP-DGRAM to negotiate support for sending IP packets in HTTP Datagrams (packet-handling). Any 2xx (Successful) response indicates that the proxy is willing to open an IP forwarding tunnel between it and the client. Any response", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "requesting full-tunnel IP packet forwarding, or a specific proxied flow, see scope. The client SHOULD also include the \"Capsule-Protocol\" header with a value of \"?1\" to negotiate support for sending and receiving HTTP capsules (HTTP-DGRAM). Any 2xx (Successful) response indicates that the proxy is willing to open an IP forwarding tunnel between it and the client. Any response"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "5. IP packets are encoded using HTTP Datagrams HTTP-DGRAM with the IP_PACKET HTTP Datagram Format Type (see value in iana-format-type). When using the IP_PACKET HTTP Datagram Format Type, full IP packets (from the IP Version field until the last byte of the IP Payload) are sent unmodified in the \"HTTP Datagram Payload\" field of an HTTP Datagram. In order to use HTTP Datagrams, the client will first decide whether or not it will attempt to use HTTP Datagram Contexts and then register its context ID (or lack thereof) using the corresponding registration capsule, see HTTP-DGRAM. When sending a registration capsule using the \"Datagram Format Type\" set to IP_PACKET, the \"Datagram Format Additional Data\" field SHALL be empty. Servers MUST NOT register contexts using the IP_PACKET HTTP Datagram Format Type. Clients MUST NOT register more than one context using the IP_PACKET HTTP Datagram Format Type. Endpoints MUST NOT close contexts using the IP_PACKET HTTP Datagram Format Type. If an endpoint detects a violation of any of these requirements, it MUST abort the stream. 
Clients MAY optimistically start sending proxied IP packets before receiving the response to its IP proxying request, noting however", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "5. This protocol allows future extensions to exchange HTTP Datagrams which carry different semantics from IP packets. For example, an extension could define a way to send compressed IP header fields. In order to allow for this extensibility, all HTTP Datagrams associated with IP proxying request streams start with a context ID, see payload-format. Context IDs are 62-bit integers (0 to 2 -1). Context IDs are encoded as variable-length integers, see Section 16 of QUIC. The context ID value of 0 is reserved for IP packets, while non-zero values are dynamically allocated: non-zero even-numbered context IDs are client-allocated, and odd-numbered context IDs are server-allocated. The context ID namespace is tied to a given HTTP request: it is possible for a context ID with the same numeric value to be simultaneously assigned different semantics in distinct requests, potentially with different semantics. Context IDs MUST NOT be re-allocated within a given HTTP namespace but MAY be allocated in any order. Once allocated, any context ID can be used by both client and server - only allocation carries separate namespaces to avoid requiring synchronization. Registration is the action by which an endpoint informs its peer of the semantics and format of a given context ID. This document does not define how registration occurs. Depending on the method being used, it is possible for datagrams to be received with Context IDs which have not yet been registered, for instance due to reordering of the datagram and the registration packets during transmission. 6. When associated with IP proxying request streams, the HTTP Datagram Payload field of HTTP Datagrams (see HTTP-DGRAM) has the format defined in dgram-format. Note that when HTTP Datagrams are encoded using QUIC DATAGRAM frames, the Context ID field defined below directly follows the Quarter Stream ID field which is at the start of the QUIC DATAGRAM frame payload: A variable-length integer that contains the value of the Context ID. If an HTTP/3 datagram which carries an unknown Context ID is received, the receiver SHALL either drop that datagram silently or buffer it temporarily (on the order of a round trip) while awaiting the registration of the corresponding Context ID. The payload of the datagram, whose semantics depend on value of the previous field. Note that this field can be empty. IP packets are encoded using HTTP Datagrams with the Context ID set to zero. When the Context ID is set to zero, the Payload field contains a full IP packet (from the IP Version field until the last byte of the IP Payload). 
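The Context ID layout described above (a QUIC variable-length integer followed by the payload, with the value 0 reserved for full IP packets) is small enough to sketch end to end. This is a minimal illustration, not a complete HTTP/3 Datagram implementation; in particular, the Quarter Stream ID that precedes this payload in a QUIC DATAGRAM frame is omitted, and the helper names are not taken from any library.

```python
def encode_varint(value: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000, Section 16)."""
    if value < 2**6:
        return value.to_bytes(1, "big")
    if value < 2**14:
        return (value | (1 << 14)).to_bytes(2, "big")
    if value < 2**30:
        return (value | (2 << 30)).to_bytes(4, "big")
    if value < 2**62:
        return (value | (3 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

def decode_varint(buf: bytes) -> tuple[int, int]:
    """Return (value, bytes_consumed) for a QUIC varint at the start of buf."""
    prefix = buf[0] >> 6
    length = 1 << prefix
    value = int.from_bytes(buf[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length

IP_PACKET_CONTEXT_ID = 0  # context ID 0 is reserved for full IP packets

def build_datagram_payload(ip_packet: bytes) -> bytes:
    # HTTP Datagram Payload = Context ID (varint) || payload
    return encode_varint(IP_PACKET_CONTEXT_ID) + ip_packet

def parse_datagram_payload(payload: bytes) -> tuple[int, bytes]:
    context_id, consumed = decode_varint(payload)
    return context_id, payload[consumed:]
```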
Clients MAY optimistically start sending proxied IP packets before receiving the response to its IP proxying request, noting however"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "request with a failure, or if the datagrams are received by the proxy before the request. Extensions to this mechanism MAY define new HTTP Datagram Format Types in order to use different semantics or encodings for IP payloads. For example, an extension could define a new HTTP Datagram Format Type which enables compression of IP header fields. When a CONNECT-IP endpoint receives an HTTP Datagram containing an IP packet, it will parse the packet's IP header, perform any local policy checks (e.g., source address validation), check their routing", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "request with a failure, or if the datagrams are received by the proxy before the request. When a CONNECT-IP endpoint receives an HTTP Datagram containing an IP packet, it will parse the packet's IP header, perform any local policy checks (e.g., source address validation), check their routing"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "Endpoints MAY implement additional filtering policies on the IP packets they forward. 6. CONNECT-IP enables many different use cases that can benefit from IP packet proxying and tunnelling. These examples are provided to help illustrate some of the ways in which CONNECT-IP can be used. 6.1. The following example shows a point-to-network VPN setup, where a client receives a set of local addresses, and can send to any remote", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "Endpoints MAY implement additional filtering policies on the IP packets they forward. 7. CONNECT-IP enables many different use cases that can benefit from IP packet proxying and tunnelling. These examples are provided to help illustrate some of the ways in which CONNECT-IP can be used. 7.1. 
The following example shows a point-to-network VPN setup, where a client receives a set of local addresses, and can send to any remote"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "case, the advertised route is restricted to 192.0.2.0/24, rather than 0.0.0.0/0. 6.2. The following example shows an IP flow forwarding setup, where a client requests to establish a forwarding tunnel to", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "case, the advertised route is restricted to 192.0.2.0/24, rather than 0.0.0.0/0. 7.2. The following example shows an IP flow forwarding setup, where a client requests to establish a forwarding tunnel to"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "(2001:db8::3456), scoped to SCTP. The client can send and recieve SCTP IP packets to the remote host. 6.3. The following example shows a setup where a client is proxying UDP packets through a CONNECT-IP proxy in order to control connection", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "(2001:db8::3456), scoped to SCTP. The client can send and recieve SCTP IP packets to the remote host. 7.3. The following example shows a setup where a client is proxying UDP packets through a CONNECT-IP proxy in order to control connection"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "UDP. The client can send and recieve UDP IP packets to the either of the server addresses to enable Happy Eyeballs through the proxy. 7. There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary servers, as that could allow bad", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. 
As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "UDP. The client can send and recieve UDP IP packets to the either of the server addresses to enable Happy Eyeballs through the proxy. 8. There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary servers, as that could allow bad"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "they SHOULD follow the guidance in BCP38 to help prevent denial of service attacks. 8. 8.1. This document will request IANA to register \"connect-ip\" in the HTTP Upgrade Token Registry maintained at", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "they SHOULD follow the guidance in BCP38 to help prevent denial of service attacks. 9. 9.1. This document will request IANA to register \"connect-ip\" in the HTTP Upgrade Token Registry maintained at"} {"id": "q-en-draft-ietf-masque-connect-ip-b0f18b5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a", "old_text": "This document 8.2. This document will request IANA to register IP_PACKET in the \"HTTP Datagram Format Types\" registry established by HTTP-DGRAM. 8.3. This document will request IANA to add the following values to the \"HTTP Capsule Types\" registry created by HTTP-DGRAM:", "comments": "Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric", "new_text": "This document 9.2. This document will request IANA to add the following values to the \"HTTP Capsule Types\" registry created by HTTP-DGRAM:"} {"id": "q-en-draft-ietf-masque-connect-ip-12588bd5348e6d47ed076c8778ed9fb34cc3380ae7c20dcd137df3c1645d3a69", "old_text": "scope for this document but can be implemented using CONNECT-IP extensions. Since CONNECT-IP endpoints can proxy IP packets send by their peer, they SHOULD follow the guidance in BCP38 to help prevent denial of service attacks. 
10.", "comments": "Connect-IP enables the Masque client to set the source address for any IP packets it requests the MASQUE proxy to forward to the external network. This enables the potential for source address spoofing by the MASQUE client or an entity prior to it. For proxies that assign addresses to the client and allows general access to Internet I think it would be really good to recommend or require the masque server to perform source address validation. However, there clearly exist some use cases where the MASQUE will be used to establish a virtual path between two networks and where source address validation is hard as neither network may be stubs networks. However, I think there are several things to address in the document. 1) Include in security description a discussion of the risks here and when it would be useful to perform source address validation. Especially for the assigned address(es) use case where rejecting all but the provided address range would be possible. 2) Consider if the security risks for certain deployments are such that source address validation is RECOMMENDED or REQUIRED\nI don't think this is correct. I thought there was text in the draft that enforced that the source address was in the set of addresses the endpoints agreed were routable. [4.2.1] ADDRESSASSIGN capsule says \"Any of these addresses can be used as the source address on IP packets originated by the receiver of this capsule.\". I suppose this means we need a clarification that a MASQUE proxy MUST drop any packets whose source address is not in the set of addresses agreed upon in ADDRESSASSIGN capsules.\nI think we can write a PR that addresses this. Here's a rough shape of a paragraph: how does that sound?\n+1 to this text.", "new_text": "scope for this document but can be implemented using CONNECT-IP extensions. Falsifying IP source addresses in sent traffic has been common for denial of service attacks. Implementations of this mechanism need to ensure that they do not facilitate such attacks. In particular, there are scenarios where an endpoint knows that its peer is only allowed to send IP packets from a given prefix. For example, that can happen through out of band configuration information, or when allowed prefixes are shared via ADDRESS_ASSIGN capsules. In such scenarios, endpoints MUST follow the recommendations from BCP38 to prevent source address spoofing. 10."} {"id": "q-en-draft-ietf-masque-connect-ip-69ffba9c33abb0faa9586c3d1e7e8fc9529b84e474b883ce8382cfbd68229fd8", "old_text": "4.2.1. The ADDRESS_ASSIGN capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer that it has assigned an IP address or prefix to it. The ADDRESS_ASSIGN capsule allows assigning a prefix which can contain multiple addresses. Any of these addresses can be used as the source address on IP packets originated by the receiver of this capsule. If an endpoint receives multiple ADDRESS_ASSIGN capsules, all of the assigned addresses or prefixes can be used. For example, multiple ADDRESS_ASSIGN capsules are necessary to assign both IPv4 and IPv6 addresses. In some deployments of CONNECT-IP, an endpoint needs to be assigned an address by its peer before it knows what source address to set on its own packets. For example, in the Remote Access case (example- remote) the client cannot send IP packets until it knows what address to use. In these deployments, endpoints need to send ADDRESS_ASSIGN capsules to allow their peers to send traffic. 4.2.2. 
The ADDRESS_REQUEST capsule (see iana-types for the value of the capsule type) allows an endpoint to request assignment of an IP address from its peer. This capsule is not required for simple client/proxy communication where the client only expects to receive one address from the proxy. The capsule allows the endpoint to optionally indicate a preference for which address it would get assigned. Upon receiving the ADDRESS_REQUEST capsule, an endpoint SHOULD assign an IP address to its peer, and then respond with an ADDRESS_ASSIGN capsule to inform the peer of the assignment. 4.2.3.", "comments": "NAME let's discuss that on . I'd like to merge this PR first then decide on a plan for before submitting the next revision to the datatracker\nSplitting {ADDRESSREQUESTROUTEADVERTISEMENT} into ...V4 and ..._V6 would result in smaller capsules that have fewer error conditions. It would also keep IPv4 and IPv6 logic distinct.\nI don't think the size of these capsules matters given their rarity. I can see the error condition argument but it's pretty minor and would come at the cost of making the specification longer. I don't feel too strongly here, but I think I have a slight preference for keeping things as they are.\nI agree with David that size is problem not a big concern. I think having one set of capsule is less complex.\nI also have a preference for keeping the capsules as they are with versions included; it also allows the route advertisement capsule to include routes for both v4 and v6 in one capsule, which I think is very important. I wouldn't want to lose that feature.\nI similarly have a preference for keeping v4 and v6 in one capsule and agree size is not a big concern.\nI just chatted with NAME to see what error conditions he was thinking of. Upon discussion (v6 route advertised when only v4 address is available), we concluded that this is not an error case, it's just a route that's not yet usable. We've concluded that this is something that we're comfortable leaving as-is.\nImplementing this on the client I was wondering if there should be a way for the client to ask for the server for an address without specifying what address it wants. This could allow a VPN client to mention that it wants an IPv6 address or not. If we think this could be useful, we could encode it as or .\nI believe asking for the Any address is what we discussed previously, so +1\nYou mean ADDRESS_REQUEST(IPv4-0.0.0.0/32) don't you. I think the format of the request works fine. But we do need to before we implement this into the specification.\nYes \u2014 if we have this, one option is to always have an ADDRESS_REQUEST expected from clients, and then there doesn't need to be signaling. Or we do the simple solution to 54 to say \"I expect to send N capsules with my request\".\nYou're right this could be a potential solution to . Let's discuss them together tomorrow\nRegarding IPv6, I think a standard request that would make sense is asking for any /64. I guess that works fine with this idea too?\nOh that's interesting, if the IP Address is or , the prefix length is a hint for the requested length. I like that.\nIETF 114: Add an editorial sentence to \"remember, what you ask for may not be what you get, especially with regards to address families\". A request should be paired with a response, multiple requests can just say \"you have the same ones you had before\". Clarify that it's cumulative. 
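The ADDRESS_REQUEST semantics discussed here, including the "ask for any address of a given family" idea raised in this thread (later expressed as an all-zero IP Address with the prefix length carrying the preferred size), can be illustrated with a simplified entry encoder. The byte layout below (IP Version, IP Address, Prefix Length) is an assumption made for illustration; the draft's wire format is authoritative and may carry additional fields.

```python
import ipaddress
import struct

def encode_requested_address(ip_version: int, address: str, prefix_len: int) -> bytes:
    """Simplified encoding of one Requested Address entry: IP Version,
    IP Address, Prefix Length. Illustrative layout only."""
    addr = ipaddress.ip_address(address)
    if addr.version != ip_version:
        raise ValueError("address family does not match IP Version")
    return struct.pack("!B", ip_version) + addr.packed + struct.pack("!B", prefix_len)

# "Any IPv6 /64": the all-zero address signals no preference for a
# specific address, while the prefix length still expresses the
# preferred assignment size.
any_v6_slash64 = encode_requested_address(6, "::", 64)

# A request for one specific IPv4 address.
specific_v4 = encode_requested_address(4, "192.0.2.8", 32)
```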
Also solves unassign/removal of addresses.\nSo there appear to exist some downside with the timing of the interactions when one want to use Address-Request capsule. So the basic problem is simply that the proxy will assign an address prior to the Address-Request capsule has been received, thus preventing the proxy from processing the request in the right context. So lets assume that the client has already created one connect-IP request with a limited scope and been assigned IP-A. Then it want to make an additional request for another limited scope proxying and would like to get the same address assigned. Due to the separation between the HTTP request for the Connect-IP and the capsule Address-Request a server can actually process the whole HTTP request prior to receiving the Address-Request. Resulting potentially in that another IP address assignment (IP-B) be provided to the client, despite it asking for IP-A to be assigned also to the request. So does the HTTP request needs an indicator parameter to tell the proxy to wait for one or more \"request\" capsules from the client before processing the request?\nOut of curiosity, in what scenario would the client \"like to get the same address assigned\"? It makes sense for the proxy to decide to reuse addresses to save its address pool when the scoping allows it, but why would the client care?\nI could imagine this happening if the client was trying to resume some previous session that it lost due to an outage? Seems to be niche, but it's certainly possible.\nSo the most obvious use case for this is to request iproto= CMP in combination with for example UDP. Then getting ICMP for another IP address then the one you already have UDP for is mostly useless if you intended to use the ICMP messages for hints when things fails for you UDP based protocol. I would note that for an endpoint liking to do P2P NAT traversal stuff getting an IP address rather than using Connect-UDP does make sense as it ensure the client has control. It enables the IP address for its UDP traffic to be Endpoint-Independent Mapping (RFC4787). This might be niche usages but I would like to avoid creating artificial barriers that makes things impossible or at least hard to realise.\nThanks, this makes sense. Should we suggest that clients bundle the ADDRESS_REQUEST in the same packet as the HTTP headers and suggest that servers check for those before responding?\nI am still worried that this creates issues as your request may not fit in a single packet to begin with. My solution would be to actually move the address request part into the HTTP request itself as header field. That would ensure an atomic operation and no risk for errors.\nMagnus, is your proposal to require, or to allow as an option, for the address request to be moved into the HTTP request? Why not bundle the capsules over a reliable delivery mechanism instead?\nI first of all want to ensure that you can do a request that ensures that for those cases when it is necessary to interpret the Connect-IP request in the context of desired IP address it can done. Without uncertainties or requiring to have wait time to see if there are capsules incoming. So I think primarily allow as an option. I think a question back to your usages, do you have cases where one would need additional capsules to correctly interpret the option. 
Should we look for an other solution than the one I proposed to better cover the necessary functionality?\nI want to be able to continue to do ADDRESS_ASSIGN capsules during the lifetime of the connection to support e.g., IPv6 Privacy extensions but without having to delegate the entire subnet. So I don't want to move to header-only.\nI think NAME is describing something similar to the / capsules that we had in the original Google . The only difference is that we would add an HTTP header to say \"start off in atomic mode and wait for atomic end before sending addresses and such\". While I think this does sound useful, does it need to be in the core spec or could it be an extension?\nIs there any reason to not bring back the capsules?\nThe only reason I can think of is to keep the core spec as short and simple as possible. But if we think this is an important feature then I'm not opposed to bringing back .\nI agree with the sentiment of keeping the core spec short and simple, but the trade-off is also knowing what the minimum viable featureset we can always rely on is. If we conclude that capsules are in that set, that IMHO weighs towards including them. Given that both Magnus and I have now presented use-cases for atomicity, I think this finally swings towards including?\nI agree with that\nSo why I used atomic here, was due to the split between HTTP and Capsules. So I don't see this issue currently motivates the need for ATOMIC capsules. I really would prefer that the necessary parameters for the CONNECT-IP request are in the HTTP request, either in the URI or in an HTTP field. To be clear, I understand the need for the capsules as they are useful for doing things after the initial request, but creating this binding between the HTTP request and the capsules are creating its sets of complexities. Likely bigger than duplicating the carrying of the necessary information results in, especially when they may be use cases related.\nI agree I with Magnus and think we should specify a ay to add the address request to the http request. If there is a timing dependency it's better to put all needed information into the same request than specifying extra and complex logic to achieve the same in hindsight. That doesn't mean that we don't need the address request capsule (for later use in the connection). I think we need both.\nI dislike having two ways of conveying the same thing. How about a simple HTTP header that tells the receiver that there are 4 capsules coming right after the HTTP headers?\nI'd prefer to avoid \"atomic capsule\" starts and ends. If we need a way to guarantee inclusion of the local address in the first request, having it be in the HTTP request makes sense, either as another URI parameter, a header to say what address you want, or a header to say you will have capsules that need to be parsed before responding. That latter option is flexible, certainly, but could be over-engineered. Another option is to tightly scope a header to the use case we think is relevant. If the use case is \"please use the same address as this other request stream I already have for connect-ip or connect-UDP\", we could just have a header to say for any connect- request that it is entangled with another and should share some properties. I could imagine this even without CONNECT-IP. Today with private relay, we have to ask the proxies to prefer to use the same local IP for everything in a connection from the client, but letting the client choose what gets grouped might be even nicer. 
This option also lets you issue multiple requests, say they should share, without needing to know exactly which IP any of them will get assigned first.\nUnfortunately requests and headers are end-to-end but streams are hop-by-hop so there is no easy HTTP way to have a request refer to another request. I'd be supportive of adding the header but I'd push back on adding a header that duplicates a capsule. This is starting to feel like something best-served by an extension\nYou could solve the correlation by having a \"group\" ID that you attach to all related requests. No need to refer by stream ID.\nWhat is the scope of a group ID? Connection? If so it's the same as stream IDs.\nI think this should be some extension, but it could be a UUID too, doesn't need to be connection specific.\nAh, thanks. Yeah UUID would work. Back to the issue at hand, I propose this feature request get moved to an extension.\nAll of these alternatives strike me as much more complex that the semantics of . I'd rather we have a single mechanism even if it's slightly more powerful and restrain it's use than try to invent new ways to solve each problem that arises.\nAtomic doesn't help coordinate between requests on different streams, etc, so it's not really solving the full problem here. Either way, I think this is something to defer to an extension.\nHeaders don't solve that problem either, so I'm not sure that argument really makes sense in this context. Cross stream coordination seems out of scope to me.\nSo you are taking this into directions I did not intended. I am just asking for how to ensure that the server does not answers the client's request prior to having received any capsules if they are coming. And the real issue is that the server can't know unless the client either includes the corresponding data in the HTTP request or includes an indication that there will be capsules that are relevant to the requests. So, my main worry is if waiting for capsule to answer will actually work if there are an HTTP 1.x intermediary connection on the path between the client and the server. I guess its is sending the capsule after the request and hope it makes it. Because to my understanding in HTTP datagram the peer does not expect any data to arrive so it must be upgraded. Thus, one can end up in cases where the capsule protocol does not make it, where the HTTP request does arrive.\nWhile we might want to have a more generic extension to relate information to multiple requests, I think the problem Magnus describes needs a solution now. The problem is that there is a dependency between replying/react to a connect request and potentially information send in capsules. The best solution is to send this information together with the connection request instead. I don't see any concern to have a http request or URI parameter and the capsule. What's the problem about that?\nMagnus, what is \"the client's request\" here? Is it the extended-connect itself, or a response to a capsule? If it's the former, the server can't know what capsules, if any, are coming at this time, as we haven't yet established we are using the capsule protocol. If it's the latter, capsules trivially solve this, as you cannot process any of the requests until the capsule is received. Mirja, I'd like to avoid having 2 ways to do things, especially since putting it on the HTTP request is a less-generic mechanism and cannot be reused during the connection, whereas capsules can. 
If we believe this feature is important, I'd like us to solve it generically, rather than special-casing one use case we are aware of right now.\nSo the client's request is the HTTP request that is only complete in semantics when combined with the request-address capsule.\nSorry Magnus, I'm not sure I understand what you're trying to say. I don't think we can enforce a semantic that a higher-level HTTP request is only complete after some bytes on the then-negotiated protocol have been exchanged.\nBut this is really the issue, there is currently no way for a client to ask the server to only provide a targeted connect-IP forwarding if the IP address it can provide is the one the client asks for in a consistent way. My interpretation of the specification is that either; 1) the server will never take the request-addr into account initially for the request, and interpret the request-addr after it has created a Connect-IP forwarding 2) the server will sometime take the request-addr into account by have a short delay awaiting for the request-addr capsule, which may or may not make it in time for the implementation specific timer. This non-consistency or failure to address a use case that I think several was interested in resolving is a real problem. A problem I think we will have to take with us to the meeting.\nThis ultimately stems from us not having an explicit capsule/message to indicate that forwarding should actually begin. Right now it's implicit as soon as the capsule is assembled with the IP addressing information. This is leading to your proposal to stick more things onto the initial request, whereas I believe the correct solution is to add more capabilities to the capsule protocol to express these scenarios.\nI will consider what you are saying. However, I will note that we might have different goals. I do have a need for identifiable streams that can be referred to in extension signalling. Using capsules does not achieve that with the current design as the stream identifier comes from the HTTP request, not the subsequent capsule signalling.\nI hope so, constructive discussion relies on all of us considering what the others are saying Given that we're all trying to solve slightly different problems here, I see two outcomes: we have one simple generic solution that happens to work for everyone we punt this to more specific extensions since these problems are not generic\nSo I think the issue here is what we consider core functionality. My observation here is that Alex is promoting functionality that will make it easy to make quite rich things for the network tunnelling use case that you are promoting. While I am bringing up something that to me appear core for the single endpoint tunnelling. So I think we actually need to understand the issues as well as what usage and how usable this is in common and how to handle where the different use cases actually benefits from having the functionality in different places.\nI'll reiterate my suggestion from the meeting: it would be better to have IP-version-specific capsule types for each capsule. It saves a byte and also avoids problems with version mismatches.Thanks! I left 2 minor editorial questions, but feel free to merge as-is.", "new_text": "4.2.1. The ADDRESS_ASSIGN capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer of the list of IP addresses or prefixes it has assigned to it. Every capsule contains the full list of IP prefixes currently assigned to the receiver. 
Any of these addresses can be used as the source address on IP packets originated by the receiver of this capsule. The ADDRESS_ASSIGN capsule contains a sequence of zero or more Assigned Addresses. Note that an ADDRESS_ASSIGN capsule can also indicate that a previously assigned address is no longer assigned. An ADDRESS_ASSIGN capsule can also be empty. In some deployments of CONNECT-IP, an endpoint needs to be assigned an address by its peer before it knows what source address to set on its own packets. For example, in the Remote Access case (example- remote) the client cannot send IP packets until it knows what address to use. In these deployments, the endpoint that is expecting an address assignment MUST send an ADDRESS_REQUEST capsule. This isn't required if the endpoint does not need any address assignment, for example when it is configured out-of-band with static addresses. While ADDRESS_ASSIGN capsules are commonly sent in response to ADDRESS_REQUEST capsules, endpoints MAY send ADDRESS_ASSIGN capsules unprompted. 4.2.2. The ADDRESS_REQUEST capsule (see iana-types for the value of the capsule type) allows an endpoint to request assignment of IP addresses from its peer. The capsule allows the endpoint to optionally indicate a preference for which address it would get assigned. The ADDRESS_REQUEST capsule contains a sequence of Requested Addresses. If the IP Address is all-zero (0.0.0.0 or ::), this indicates that the sender is requesting an address of that address family but does not have a preference for a specific address. In that scenario, the prefix length still indicates the sender's preference for the prefix length it is requesting. Upon receiving the ADDRESS_REQUEST capsule, an endpoint SHOULD assign an IP address to its peer, and then respond with an ADDRESS_ASSIGN capsule to inform the peer of the assignment. Note that the receiver of the ADDRESS_REQUEST capsule is not required to assign the requested address, and that it can also assign some requested addresses but not others. 4.2.3."} {"id": "q-en-draft-ietf-masque-connect-ip-69ffba9c33abb0faa9586c3d1e7e8fc9529b84e474b883ce8382cfbd68229fd8", "old_text": "receiving the response to its IP proxying request, noting however that those may not be processed by the proxy if it responds to the request with a failure, or if the datagrams are received by the proxy before the request. When a CONNECT-IP endpoint receives an HTTP Datagram containing an IP packet, it will parse the packet's IP header, perform any local", "comments": "NAME let's discuss that on . I'd like to merge this PR first then decide on a plan for before submitting the next revision to the datatracker\nSplitting {ADDRESSREQUESTROUTEADVERTISEMENT} into ...V4 and ..._V6 would result in smaller capsules that have fewer error conditions. It would also keep IPv4 and IPv6 logic distinct.\nI don't think the size of these capsules matters given their rarity. I can see the error condition argument but it's pretty minor and would come at the cost of making the specification longer. I don't feel too strongly here, but I think I have a slight preference for keeping things as they are.\nI agree with David that size is problem not a big concern. I think having one set of capsule is less complex.\nI also have a preference for keeping the capsules as they are with versions included; it also allows the route advertisement capsule to include routes for both v4 and v6 in one capsule, which I think is very important. 
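The cumulative semantics in the rewritten ADDRESS_ASSIGN text (every capsule carries the complete current list, so omitting a previously assigned prefix withdraws it, and an empty capsule clears everything) suggest replace-not-merge handling on the receiver. The sketch below assumes a simple in-memory set of prefixes; the class and method names are illustrative, not part of the draft.

```python
import ipaddress

class AssignedAddresses:
    """Tracks peer-assigned prefixes under the cumulative ADDRESS_ASSIGN
    semantics: each capsule replaces the previous full list (sketch only)."""

    def __init__(self):
        self.prefixes = set()

    def on_address_assign(self, prefixes):
        old = self.prefixes
        new = {ipaddress.ip_network(p, strict=False) for p in prefixes}
        removed = old - new   # previously assigned, now withdrawn
        added = new - old
        self.prefixes = new
        return added, removed

state = AssignedAddresses()
state.on_address_assign(["192.0.2.0/24", "2001:db8::/64"])
# A later capsule that omits the IPv4 prefix withdraws it.
added, removed = state.on_address_assign(["2001:db8::/64"])
assert removed == {ipaddress.ip_network("192.0.2.0/24")}
# An empty capsule removes every assignment.
state.on_address_assign([])
assert state.prefixes == set()
```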
I wouldn't want to lose that feature.\nI similarly have a preference for keeping v4 and v6 in one capsule and agree size is not a big concern.\nI just chatted with NAME to see what error conditions he was thinking of. Upon discussion (v6 route advertised when only v4 address is available), we concluded that this is not an error case, it's just a route that's not yet usable. We've concluded that this is something that we're comfortable leaving as-is.\nImplementing this on the client I was wondering if there should be a way for the client to ask for the server for an address without specifying what address it wants. This could allow a VPN client to mention that it wants an IPv6 address or not. If we think this could be useful, we could encode it as or .\nI believe asking for the Any address is what we discussed previously, so +1\nYou mean ADDRESS_REQUEST(IPv4-0.0.0.0/32) don't you. I think the format of the request works fine. But we do need to before we implement this into the specification.\nYes \u2014 if we have this, one option is to always have an ADDRESS_REQUEST expected from clients, and then there doesn't need to be signaling. Or we do the simple solution to 54 to say \"I expect to send N capsules with my request\".\nYou're right this could be a potential solution to . Let's discuss them together tomorrow\nRegarding IPv6, I think a standard request that would make sense is asking for any /64. I guess that works fine with this idea too?\nOh that's interesting, if the IP Address is or , the prefix length is a hint for the requested length. I like that.\nIETF 114: Add an editorial sentence to \"remember, what you ask for may not be what you get, especially with regards to address families\". A request should be paired with a response, multiple requests can just say \"you have the same ones you had before\". Clarify that it's cumulative. Also solves unassign/removal of addresses.\nSo there appear to exist some downside with the timing of the interactions when one want to use Address-Request capsule. So the basic problem is simply that the proxy will assign an address prior to the Address-Request capsule has been received, thus preventing the proxy from processing the request in the right context. So lets assume that the client has already created one connect-IP request with a limited scope and been assigned IP-A. Then it want to make an additional request for another limited scope proxying and would like to get the same address assigned. Due to the separation between the HTTP request for the Connect-IP and the capsule Address-Request a server can actually process the whole HTTP request prior to receiving the Address-Request. Resulting potentially in that another IP address assignment (IP-B) be provided to the client, despite it asking for IP-A to be assigned also to the request. So does the HTTP request needs an indicator parameter to tell the proxy to wait for one or more \"request\" capsules from the client before processing the request?\nOut of curiosity, in what scenario would the client \"like to get the same address assigned\"? It makes sense for the proxy to decide to reuse addresses to save its address pool when the scoping allows it, but why would the client care?\nI could imagine this happening if the client was trying to resume some previous session that it lost due to an outage? Seems to be niche, but it's certainly possible.\nSo the most obvious use case for this is to request iproto= CMP in combination with for example UDP. 
Then getting ICMP for another IP address then the one you already have UDP for is mostly useless if you intended to use the ICMP messages for hints when things fails for you UDP based protocol. I would note that for an endpoint liking to do P2P NAT traversal stuff getting an IP address rather than using Connect-UDP does make sense as it ensure the client has control. It enables the IP address for its UDP traffic to be Endpoint-Independent Mapping (RFC4787). This might be niche usages but I would like to avoid creating artificial barriers that makes things impossible or at least hard to realise.\nThanks, this makes sense. Should we suggest that clients bundle the ADDRESS_REQUEST in the same packet as the HTTP headers and suggest that servers check for those before responding?\nI am still worried that this creates issues as your request may not fit in a single packet to begin with. My solution would be to actually move the address request part into the HTTP request itself as header field. That would ensure an atomic operation and no risk for errors.\nMagnus, is your proposal to require, or to allow as an option, for the address request to be moved into the HTTP request? Why not bundle the capsules over a reliable delivery mechanism instead?\nI first of all want to ensure that you can do a request that ensures that for those cases when it is necessary to interpret the Connect-IP request in the context of desired IP address it can done. Without uncertainties or requiring to have wait time to see if there are capsules incoming. So I think primarily allow as an option. I think a question back to your usages, do you have cases where one would need additional capsules to correctly interpret the option. Should we look for an other solution than the one I proposed to better cover the necessary functionality?\nI want to be able to continue to do ADDRESS_ASSIGN capsules during the lifetime of the connection to support e.g., IPv6 Privacy extensions but without having to delegate the entire subnet. So I don't want to move to header-only.\nI think NAME is describing something similar to the / capsules that we had in the original Google . The only difference is that we would add an HTTP header to say \"start off in atomic mode and wait for atomic end before sending addresses and such\". While I think this does sound useful, does it need to be in the core spec or could it be an extension?\nIs there any reason to not bring back the capsules?\nThe only reason I can think of is to keep the core spec as short and simple as possible. But if we think this is an important feature then I'm not opposed to bringing back .\nI agree with the sentiment of keeping the core spec short and simple, but the trade-off is also knowing what the minimum viable featureset we can always rely on is. If we conclude that capsules are in that set, that IMHO weighs towards including them. Given that both Magnus and I have now presented use-cases for atomicity, I think this finally swings towards including?\nI agree with that\nSo why I used atomic here, was due to the split between HTTP and Capsules. So I don't see this issue currently motivates the need for ATOMIC capsules. I really would prefer that the necessary parameters for the CONNECT-IP request are in the HTTP request, either in the URI or in an HTTP field. To be clear, I understand the need for the capsules as they are useful for doing things after the initial request, but creating this binding between the HTTP request and the capsules are creating its sets of complexities. 
Likely bigger than duplicating the carrying of the necessary information results in, especially when they may be use cases related.\nI agree I with Magnus and think we should specify a ay to add the address request to the http request. If there is a timing dependency it's better to put all needed information into the same request than specifying extra and complex logic to achieve the same in hindsight. That doesn't mean that we don't need the address request capsule (for later use in the connection). I think we need both.\nI dislike having two ways of conveying the same thing. How about a simple HTTP header that tells the receiver that there are 4 capsules coming right after the HTTP headers?\nI'd prefer to avoid \"atomic capsule\" starts and ends. If we need a way to guarantee inclusion of the local address in the first request, having it be in the HTTP request makes sense, either as another URI parameter, a header to say what address you want, or a header to say you will have capsules that need to be parsed before responding. That latter option is flexible, certainly, but could be over-engineered. Another option is to tightly scope a header to the use case we think is relevant. If the use case is \"please use the same address as this other request stream I already have for connect-ip or connect-UDP\", we could just have a header to say for any connect- request that it is entangled with another and should share some properties. I could imagine this even without CONNECT-IP. Today with private relay, we have to ask the proxies to prefer to use the same local IP for everything in a connection from the client, but letting the client choose what gets grouped might be even nicer. This option also lets you issue multiple requests, say they should share, without needing to know exactly which IP any of them will get assigned first.\nUnfortunately requests and headers are end-to-end but streams are hop-by-hop so there is no easy HTTP way to have a request refer to another request. I'd be supportive of adding the header but I'd push back on adding a header that duplicates a capsule. This is starting to feel like something best-served by an extension\nYou could solve the correlation by having a \"group\" ID that you attach to all related requests. No need to refer by stream ID.\nWhat is the scope of a group ID? Connection? If so it's the same as stream IDs.\nI think this should be some extension, but it could be a UUID too, doesn't need to be connection specific.\nAh, thanks. Yeah UUID would work. Back to the issue at hand, I propose this feature request get moved to an extension.\nAll of these alternatives strike me as much more complex that the semantics of . I'd rather we have a single mechanism even if it's slightly more powerful and restrain it's use than try to invent new ways to solve each problem that arises.\nAtomic doesn't help coordinate between requests on different streams, etc, so it's not really solving the full problem here. Either way, I think this is something to defer to an extension.\nHeaders don't solve that problem either, so I'm not sure that argument really makes sense in this context. Cross stream coordination seems out of scope to me.\nSo you are taking this into directions I did not intended. I am just asking for how to ensure that the server does not answers the client's request prior to having received any capsules if they are coming. 
And the real issue is that the server can't know unless the client either includes the corresponding data in the HTTP request or includes an indication that there will be capsules that are relevant to the requests. So, my main worry is if waiting for capsule to answer will actually work if there are an HTTP 1.x intermediary connection on the path between the client and the server. I guess its is sending the capsule after the request and hope it makes it. Because to my understanding in HTTP datagram the peer does not expect any data to arrive so it must be upgraded. Thus, one can end up in cases where the capsule protocol does not make it, where the HTTP request does arrive.\nWhile we might want to have a more generic extension to relate information to multiple requests, I think the problem Magnus describes needs a solution now. The problem is that there is a dependency between replying/react to a connect request and potentially information send in capsules. The best solution is to send this information together with the connection request instead. I don't see any concern to have a http request or URI parameter and the capsule. What's the problem about that?\nMagnus, what is \"the client's request\" here? Is it the extended-connect itself, or a response to a capsule? If it's the former, the server can't know what capsules, if any, are coming at this time, as we haven't yet established we are using the capsule protocol. If it's the latter, capsules trivially solve this, as you cannot process any of the requests until the capsule is received. Mirja, I'd like to avoid having 2 ways to do things, especially since putting it on the HTTP request is a less-generic mechanism and cannot be reused during the connection, whereas capsules can. If we believe this feature is important, I'd like us to solve it generically, rather than special-casing one use case we are aware of right now.\nSo the client's request is the HTTP request that is only complete in semantics when combined with the request-address capsule.\nSorry Magnus, I'm not sure I understand what you're trying to say. I don't think we can enforce a semantic that a higher-level HTTP request is only complete after some bytes on the then-negotiated protocol have been exchanged.\nBut this is really the issue, there is currently no way for a client to ask the server to only provide a targeted connect-IP forwarding if the IP address it can provide is the one the client asks for in a consistent way. My interpretation of the specification is that either; 1) the server will never take the request-addr into account initially for the request, and interpret the request-addr after it has created a Connect-IP forwarding 2) the server will sometime take the request-addr into account by have a short delay awaiting for the request-addr capsule, which may or may not make it in time for the implementation specific timer. This non-consistency or failure to address a use case that I think several was interested in resolving is a real problem. A problem I think we will have to take with us to the meeting.\nThis ultimately stems from us not having an explicit capsule/message to indicate that forwarding should actually begin. Right now it's implicit as soon as the capsule is assembled with the IP addressing information. This is leading to your proposal to stick more things onto the initial request, whereas I believe the correct solution is to add more capabilities to the capsule protocol to express these scenarios.\nI will consider what you are saying. 
However, I will note that we might have different goals. I do have a need for identifiable streams that can be referred to in extension signalling. Using capsules does not achieve that with the current design as the stream identifier comes from the HTTP request, not the subsequent capsule signalling.\nI hope so, constructive discussion relies on all of us considering what the others are saying Given that we're all trying to solve slightly different problems here, I see two outcomes: we have one simple generic solution that happens to work for everyone we punt this to more specific extensions since these problems are not generic\nSo I think the issue here is what we consider core functionality. My observation here is that Alex is promoting functionality that will make it easy to make quite rich things for the network tunnelling use case that you are promoting. While I am bringing up something that to me appear core for the single endpoint tunnelling. So I think we actually need to understand the issues as well as what usage and how usable this is in common and how to handle where the different use cases actually benefits from having the functionality in different places.\nI'll reiterate my suggestion from the meeting: it would be better to have IP-version-specific capsule types for each capsule. It saves a byte and also avoids problems with version mismatches.Thanks! I left 2 minor editorial questions, but feel free to merge as-is.", "new_text": "receiving the response to its IP proxying request, noting however that those may not be processed by the proxy if it responds to the request with a failure, or if the datagrams are received by the proxy before the request. Since receiving addresses and routes is required in order to know that a packet can be sent through the tunnel, such optimistic packets might be dropped by the proxy if it chooses to provide different addressing or routing information than what the client assumed. When a CONNECT-IP endpoint receives an HTTP Datagram containing an IP packet, it will parse the packet's IP header, perform any local"} {"id": "q-en-draft-ietf-masque-connect-ip-69ffba9c33abb0faa9586c3d1e7e8fc9529b84e474b883ce8382cfbd68229fd8", "old_text": "9. There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary servers, as that could allow bad actors to send traffic and have it attributed to the proxy. Proxies", "comments": "NAME let's discuss that on . I'd like to merge this PR first then decide on a plan for before submitting the next revision to the datatracker\nSplitting {ADDRESSREQUESTROUTEADVERTISEMENT} into ...V4 and ..._V6 would result in smaller capsules that have fewer error conditions. It would also keep IPv4 and IPv6 logic distinct.\nI don't think the size of these capsules matters given their rarity. I can see the error condition argument but it's pretty minor and would come at the cost of making the specification longer. I don't feel too strongly here, but I think I have a slight preference for keeping things as they are.\nI agree with David that size is problem not a big concern. I think having one set of capsule is less complex.\nI also have a preference for keeping the capsules as they are with versions included; it also allows the route advertisement capsule to include routes for both v4 and v6 in one capsule, which I think is very important. 
I wouldn't want to lose that feature.\nI similarly have a preference for keeping v4 and v6 in one capsule and agree size is not a big concern.\nI just chatted with NAME to see what error conditions he was thinking of. Upon discussion (v6 route advertised when only v4 address is available), we concluded that this is not an error case, it's just a route that's not yet usable. We've concluded that this is something that we're comfortable leaving as-is.\nImplementing this on the client I was wondering if there should be a way for the client to ask for the server for an address without specifying what address it wants. This could allow a VPN client to mention that it wants an IPv6 address or not. If we think this could be useful, we could encode it as or .\nI believe asking for the Any address is what we discussed previously, so +1\nYou mean ADDRESS_REQUEST(IPv4-0.0.0.0/32) don't you. I think the format of the request works fine. But we do need to before we implement this into the specification.\nYes \u2014 if we have this, one option is to always have an ADDRESS_REQUEST expected from clients, and then there doesn't need to be signaling. Or we do the simple solution to 54 to say \"I expect to send N capsules with my request\".\nYou're right this could be a potential solution to . Let's discuss them together tomorrow\nRegarding IPv6, I think a standard request that would make sense is asking for any /64. I guess that works fine with this idea too?\nOh that's interesting, if the IP Address is or , the prefix length is a hint for the requested length. I like that.\nIETF 114: Add an editorial sentence to \"remember, what you ask for may not be what you get, especially with regards to address families\". A request should be paired with a response, multiple requests can just say \"you have the same ones you had before\". Clarify that it's cumulative. Also solves unassign/removal of addresses.\nSo there appear to exist some downside with the timing of the interactions when one want to use Address-Request capsule. So the basic problem is simply that the proxy will assign an address prior to the Address-Request capsule has been received, thus preventing the proxy from processing the request in the right context. So lets assume that the client has already created one connect-IP request with a limited scope and been assigned IP-A. Then it want to make an additional request for another limited scope proxying and would like to get the same address assigned. Due to the separation between the HTTP request for the Connect-IP and the capsule Address-Request a server can actually process the whole HTTP request prior to receiving the Address-Request. Resulting potentially in that another IP address assignment (IP-B) be provided to the client, despite it asking for IP-A to be assigned also to the request. So does the HTTP request needs an indicator parameter to tell the proxy to wait for one or more \"request\" capsules from the client before processing the request?\nOut of curiosity, in what scenario would the client \"like to get the same address assigned\"? It makes sense for the proxy to decide to reuse addresses to save its address pool when the scoping allows it, but why would the client care?\nI could imagine this happening if the client was trying to resume some previous session that it lost due to an outage? Seems to be niche, but it's certainly possible.\nSo the most obvious use case for this is to request iproto= CMP in combination with for example UDP. 
Then getting ICMP for another IP address then the one you already have UDP for is mostly useless if you intended to use the ICMP messages for hints when things fails for you UDP based protocol. I would note that for an endpoint liking to do P2P NAT traversal stuff getting an IP address rather than using Connect-UDP does make sense as it ensure the client has control. It enables the IP address for its UDP traffic to be Endpoint-Independent Mapping (RFC4787). This might be niche usages but I would like to avoid creating artificial barriers that makes things impossible or at least hard to realise.\nThanks, this makes sense. Should we suggest that clients bundle the ADDRESS_REQUEST in the same packet as the HTTP headers and suggest that servers check for those before responding?\nI am still worried that this creates issues as your request may not fit in a single packet to begin with. My solution would be to actually move the address request part into the HTTP request itself as header field. That would ensure an atomic operation and no risk for errors.\nMagnus, is your proposal to require, or to allow as an option, for the address request to be moved into the HTTP request? Why not bundle the capsules over a reliable delivery mechanism instead?\nI first of all want to ensure that you can do a request that ensures that for those cases when it is necessary to interpret the Connect-IP request in the context of desired IP address it can done. Without uncertainties or requiring to have wait time to see if there are capsules incoming. So I think primarily allow as an option. I think a question back to your usages, do you have cases where one would need additional capsules to correctly interpret the option. Should we look for an other solution than the one I proposed to better cover the necessary functionality?\nI want to be able to continue to do ADDRESS_ASSIGN capsules during the lifetime of the connection to support e.g., IPv6 Privacy extensions but without having to delegate the entire subnet. So I don't want to move to header-only.\nI think NAME is describing something similar to the / capsules that we had in the original Google . The only difference is that we would add an HTTP header to say \"start off in atomic mode and wait for atomic end before sending addresses and such\". While I think this does sound useful, does it need to be in the core spec or could it be an extension?\nIs there any reason to not bring back the capsules?\nThe only reason I can think of is to keep the core spec as short and simple as possible. But if we think this is an important feature then I'm not opposed to bringing back .\nI agree with the sentiment of keeping the core spec short and simple, but the trade-off is also knowing what the minimum viable featureset we can always rely on is. If we conclude that capsules are in that set, that IMHO weighs towards including them. Given that both Magnus and I have now presented use-cases for atomicity, I think this finally swings towards including?\nI agree with that\nSo why I used atomic here, was due to the split between HTTP and Capsules. So I don't see this issue currently motivates the need for ATOMIC capsules. I really would prefer that the necessary parameters for the CONNECT-IP request are in the HTTP request, either in the URI or in an HTTP field. To be clear, I understand the need for the capsules as they are useful for doing things after the initial request, but creating this binding between the HTTP request and the capsules are creating its sets of complexities. 
Likely bigger than duplicating the carrying of the necessary information results in, especially when they may be use cases related.\nI agree I with Magnus and think we should specify a ay to add the address request to the http request. If there is a timing dependency it's better to put all needed information into the same request than specifying extra and complex logic to achieve the same in hindsight. That doesn't mean that we don't need the address request capsule (for later use in the connection). I think we need both.\nI dislike having two ways of conveying the same thing. How about a simple HTTP header that tells the receiver that there are 4 capsules coming right after the HTTP headers?\nI'd prefer to avoid \"atomic capsule\" starts and ends. If we need a way to guarantee inclusion of the local address in the first request, having it be in the HTTP request makes sense, either as another URI parameter, a header to say what address you want, or a header to say you will have capsules that need to be parsed before responding. That latter option is flexible, certainly, but could be over-engineered. Another option is to tightly scope a header to the use case we think is relevant. If the use case is \"please use the same address as this other request stream I already have for connect-ip or connect-UDP\", we could just have a header to say for any connect- request that it is entangled with another and should share some properties. I could imagine this even without CONNECT-IP. Today with private relay, we have to ask the proxies to prefer to use the same local IP for everything in a connection from the client, but letting the client choose what gets grouped might be even nicer. This option also lets you issue multiple requests, say they should share, without needing to know exactly which IP any of them will get assigned first.\nUnfortunately requests and headers are end-to-end but streams are hop-by-hop so there is no easy HTTP way to have a request refer to another request. I'd be supportive of adding the header but I'd push back on adding a header that duplicates a capsule. This is starting to feel like something best-served by an extension\nYou could solve the correlation by having a \"group\" ID that you attach to all related requests. No need to refer by stream ID.\nWhat is the scope of a group ID? Connection? If so it's the same as stream IDs.\nI think this should be some extension, but it could be a UUID too, doesn't need to be connection specific.\nAh, thanks. Yeah UUID would work. Back to the issue at hand, I propose this feature request get moved to an extension.\nAll of these alternatives strike me as much more complex that the semantics of . I'd rather we have a single mechanism even if it's slightly more powerful and restrain it's use than try to invent new ways to solve each problem that arises.\nAtomic doesn't help coordinate between requests on different streams, etc, so it's not really solving the full problem here. Either way, I think this is something to defer to an extension.\nHeaders don't solve that problem either, so I'm not sure that argument really makes sense in this context. Cross stream coordination seems out of scope to me.\nSo you are taking this into directions I did not intended. I am just asking for how to ensure that the server does not answers the client's request prior to having received any capsules if they are coming. 
And the real issue is that the server can't know unless the client either includes the corresponding data in the HTTP request or includes an indication that there will be capsules that are relevant to the requests. So, my main worry is if waiting for capsule to answer will actually work if there are an HTTP 1.x intermediary connection on the path between the client and the server. I guess its is sending the capsule after the request and hope it makes it. Because to my understanding in HTTP datagram the peer does not expect any data to arrive so it must be upgraded. Thus, one can end up in cases where the capsule protocol does not make it, where the HTTP request does arrive.\nWhile we might want to have a more generic extension to relate information to multiple requests, I think the problem Magnus describes needs a solution now. The problem is that there is a dependency between replying/react to a connect request and potentially information send in capsules. The best solution is to send this information together with the connection request instead. I don't see any concern to have a http request or URI parameter and the capsule. What's the problem about that?\nMagnus, what is \"the client's request\" here? Is it the extended-connect itself, or a response to a capsule? If it's the former, the server can't know what capsules, if any, are coming at this time, as we haven't yet established we are using the capsule protocol. If it's the latter, capsules trivially solve this, as you cannot process any of the requests until the capsule is received. Mirja, I'd like to avoid having 2 ways to do things, especially since putting it on the HTTP request is a less-generic mechanism and cannot be reused during the connection, whereas capsules can. If we believe this feature is important, I'd like us to solve it generically, rather than special-casing one use case we are aware of right now.\nSo the client's request is the HTTP request that is only complete in semantics when combined with the request-address capsule.\nSorry Magnus, I'm not sure I understand what you're trying to say. I don't think we can enforce a semantic that a higher-level HTTP request is only complete after some bytes on the then-negotiated protocol have been exchanged.\nBut this is really the issue, there is currently no way for a client to ask the server to only provide a targeted connect-IP forwarding if the IP address it can provide is the one the client asks for in a consistent way. My interpretation of the specification is that either; 1) the server will never take the request-addr into account initially for the request, and interpret the request-addr after it has created a Connect-IP forwarding 2) the server will sometime take the request-addr into account by have a short delay awaiting for the request-addr capsule, which may or may not make it in time for the implementation specific timer. This non-consistency or failure to address a use case that I think several was interested in resolving is a real problem. A problem I think we will have to take with us to the meeting.\nThis ultimately stems from us not having an explicit capsule/message to indicate that forwarding should actually begin. Right now it's implicit as soon as the capsule is assembled with the IP addressing information. This is leading to your proposal to stick more things onto the initial request, whereas I believe the correct solution is to add more capabilities to the capsule protocol to express these scenarios.\nI will consider what you are saying. 
However, I will note that we might have different goals. I do have a need for identifiable streams that can be referred to in extension signalling. Using capsules does not achieve that with the current design as the stream identifier comes from the HTTP request, not the subsequent capsule signalling.\nI hope so, constructive discussion relies on all of us considering what the others are saying Given that we're all trying to solve slightly different problems here, I see two outcomes: we have one simple generic solution that happens to work for everyone we punt this to more specific extensions since these problems are not generic\nSo I think the issue here is what we consider core functionality. My observation here is that Alex is promoting functionality that will make it easy to make quite rich things for the network tunnelling use case that you are promoting. While I am bringing up something that to me appear core for the single endpoint tunnelling. So I think we actually need to understand the issues as well as what usage and how usable this is in common and how to handle where the different use cases actually benefits from having the functionality in different places.\nI'll reiterate my suggestion from the meeting: it would be better to have IP-version-specific capsule types for each capsule. It saves a byte and also avoids problems with version mismatches.Thanks! I left 2 minor editorial questions, but feel free to merge as-is.", "new_text": "9. Extensions to CONNECT-IP can define behavior changes to this mechanism. Such extensions SHOULD define new capsule types to exchange configuration information if needed. It is RECOMMENDED for extensions that modify addressing to specify that their extension capsules be sent before the ADDRESS_ASSIGN capsule and that they do not take effect until the ADDRESS_ASSIGN capsule is parsed. This allows modifications to address assignement to operate atomically. Similarly, extensions that modify routing SHOULD behave similarly with regards to the ROUTE_ADVERTISEMENT capsule. 10. There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary servers, as that could allow bad actors to send traffic and have it attributed to the proxy. Proxies"} {"id": "q-en-draft-ietf-masque-connect-ip-69ffba9c33abb0faa9586c3d1e7e8fc9529b84e474b883ce8382cfbd68229fd8", "old_text": "scenarios, endpoints MUST follow the recommendations from BCP38 to prevent source address spoofing. 10. 10.1. This document will request IANA to register \"connect-ip\" in the HTTP Upgrade Token Registry maintained at <>. 10.2. This document will request IANA to update the entry for the \"masque\" URI suffix in the \"Well-Known URIs\" registry maintained at", "comments": "NAME let's discuss that on . I'd like to merge this PR first then decide on a plan for before submitting the next revision to the datatracker\nSplitting {ADDRESSREQUESTROUTEADVERTISEMENT} into ...V4 and ..._V6 would result in smaller capsules that have fewer error conditions. It would also keep IPv4 and IPv6 logic distinct.\nI don't think the size of these capsules matters given their rarity. I can see the error condition argument but it's pretty minor and would come at the cost of making the specification longer. I don't feel too strongly here, but I think I have a slight preference for keeping things as they are.\nI agree with David that size is problem not a big concern. 
I think having one set of capsule is less complex.\nI also have a preference for keeping the capsules as they are with versions included; it also allows the route advertisement capsule to include routes for both v4 and v6 in one capsule, which I think is very important. I wouldn't want to lose that feature.\nI similarly have a preference for keeping v4 and v6 in one capsule and agree size is not a big concern.\nI just chatted with NAME to see what error conditions he was thinking of. Upon discussion (v6 route advertised when only v4 address is available), we concluded that this is not an error case, it's just a route that's not yet usable. We've concluded that this is something that we're comfortable leaving as-is.\nImplementing this on the client I was wondering if there should be a way for the client to ask for the server for an address without specifying what address it wants. This could allow a VPN client to mention that it wants an IPv6 address or not. If we think this could be useful, we could encode it as or .\nI believe asking for the Any address is what we discussed previously, so +1\nYou mean ADDRESS_REQUEST(IPv4-0.0.0.0/32) don't you. I think the format of the request works fine. But we do need to before we implement this into the specification.\nYes \u2014 if we have this, one option is to always have an ADDRESS_REQUEST expected from clients, and then there doesn't need to be signaling. Or we do the simple solution to 54 to say \"I expect to send N capsules with my request\".\nYou're right this could be a potential solution to . Let's discuss them together tomorrow\nRegarding IPv6, I think a standard request that would make sense is asking for any /64. I guess that works fine with this idea too?\nOh that's interesting, if the IP Address is or , the prefix length is a hint for the requested length. I like that.\nIETF 114: Add an editorial sentence to \"remember, what you ask for may not be what you get, especially with regards to address families\". A request should be paired with a response, multiple requests can just say \"you have the same ones you had before\". Clarify that it's cumulative. Also solves unassign/removal of addresses.\nSo there appear to exist some downside with the timing of the interactions when one want to use Address-Request capsule. So the basic problem is simply that the proxy will assign an address prior to the Address-Request capsule has been received, thus preventing the proxy from processing the request in the right context. So lets assume that the client has already created one connect-IP request with a limited scope and been assigned IP-A. Then it want to make an additional request for another limited scope proxying and would like to get the same address assigned. Due to the separation between the HTTP request for the Connect-IP and the capsule Address-Request a server can actually process the whole HTTP request prior to receiving the Address-Request. Resulting potentially in that another IP address assignment (IP-B) be provided to the client, despite it asking for IP-A to be assigned also to the request. So does the HTTP request needs an indicator parameter to tell the proxy to wait for one or more \"request\" capsules from the client before processing the request?\nOut of curiosity, in what scenario would the client \"like to get the same address assigned\"? 
It makes sense for the proxy to decide to reuse addresses to save its address pool when the scoping allows it, but why would the client care?\nI could imagine this happening if the client was trying to resume some previous session that it lost due to an outage? Seems to be niche, but it's certainly possible.\nSo the most obvious use case for this is to request iproto= CMP in combination with for example UDP. Then getting ICMP for another IP address then the one you already have UDP for is mostly useless if you intended to use the ICMP messages for hints when things fails for you UDP based protocol. I would note that for an endpoint liking to do P2P NAT traversal stuff getting an IP address rather than using Connect-UDP does make sense as it ensure the client has control. It enables the IP address for its UDP traffic to be Endpoint-Independent Mapping (RFC4787). This might be niche usages but I would like to avoid creating artificial barriers that makes things impossible or at least hard to realise.\nThanks, this makes sense. Should we suggest that clients bundle the ADDRESS_REQUEST in the same packet as the HTTP headers and suggest that servers check for those before responding?\nI am still worried that this creates issues as your request may not fit in a single packet to begin with. My solution would be to actually move the address request part into the HTTP request itself as header field. That would ensure an atomic operation and no risk for errors.\nMagnus, is your proposal to require, or to allow as an option, for the address request to be moved into the HTTP request? Why not bundle the capsules over a reliable delivery mechanism instead?\nI first of all want to ensure that you can do a request that ensures that for those cases when it is necessary to interpret the Connect-IP request in the context of desired IP address it can done. Without uncertainties or requiring to have wait time to see if there are capsules incoming. So I think primarily allow as an option. I think a question back to your usages, do you have cases where one would need additional capsules to correctly interpret the option. Should we look for an other solution than the one I proposed to better cover the necessary functionality?\nI want to be able to continue to do ADDRESS_ASSIGN capsules during the lifetime of the connection to support e.g., IPv6 Privacy extensions but without having to delegate the entire subnet. So I don't want to move to header-only.\nI think NAME is describing something similar to the / capsules that we had in the original Google . The only difference is that we would add an HTTP header to say \"start off in atomic mode and wait for atomic end before sending addresses and such\". While I think this does sound useful, does it need to be in the core spec or could it be an extension?\nIs there any reason to not bring back the capsules?\nThe only reason I can think of is to keep the core spec as short and simple as possible. But if we think this is an important feature then I'm not opposed to bringing back .\nI agree with the sentiment of keeping the core spec short and simple, but the trade-off is also knowing what the minimum viable featureset we can always rely on is. If we conclude that capsules are in that set, that IMHO weighs towards including them. Given that both Magnus and I have now presented use-cases for atomicity, I think this finally swings towards including?\nI agree with that\nSo why I used atomic here, was due to the split between HTTP and Capsules. 
So I don't see this issue currently motivates the need for ATOMIC capsules. I really would prefer that the necessary parameters for the CONNECT-IP request are in the HTTP request, either in the URI or in an HTTP field. To be clear, I understand the need for the capsules as they are useful for doing things after the initial request, but creating this binding between the HTTP request and the capsules are creating its sets of complexities. Likely bigger than duplicating the carrying of the necessary information results in, especially when they may be use cases related.\nI agree I with Magnus and think we should specify a ay to add the address request to the http request. If there is a timing dependency it's better to put all needed information into the same request than specifying extra and complex logic to achieve the same in hindsight. That doesn't mean that we don't need the address request capsule (for later use in the connection). I think we need both.\nI dislike having two ways of conveying the same thing. How about a simple HTTP header that tells the receiver that there are 4 capsules coming right after the HTTP headers?\nI'd prefer to avoid \"atomic capsule\" starts and ends. If we need a way to guarantee inclusion of the local address in the first request, having it be in the HTTP request makes sense, either as another URI parameter, a header to say what address you want, or a header to say you will have capsules that need to be parsed before responding. That latter option is flexible, certainly, but could be over-engineered. Another option is to tightly scope a header to the use case we think is relevant. If the use case is \"please use the same address as this other request stream I already have for connect-ip or connect-UDP\", we could just have a header to say for any connect- request that it is entangled with another and should share some properties. I could imagine this even without CONNECT-IP. Today with private relay, we have to ask the proxies to prefer to use the same local IP for everything in a connection from the client, but letting the client choose what gets grouped might be even nicer. This option also lets you issue multiple requests, say they should share, without needing to know exactly which IP any of them will get assigned first.\nUnfortunately requests and headers are end-to-end but streams are hop-by-hop so there is no easy HTTP way to have a request refer to another request. I'd be supportive of adding the header but I'd push back on adding a header that duplicates a capsule. This is starting to feel like something best-served by an extension\nYou could solve the correlation by having a \"group\" ID that you attach to all related requests. No need to refer by stream ID.\nWhat is the scope of a group ID? Connection? If so it's the same as stream IDs.\nI think this should be some extension, but it could be a UUID too, doesn't need to be connection specific.\nAh, thanks. Yeah UUID would work. Back to the issue at hand, I propose this feature request get moved to an extension.\nAll of these alternatives strike me as much more complex that the semantics of . I'd rather we have a single mechanism even if it's slightly more powerful and restrain it's use than try to invent new ways to solve each problem that arises.\nAtomic doesn't help coordinate between requests on different streams, etc, so it's not really solving the full problem here. 
Either way, I think this is something to defer to an extension.\nHeaders don't solve that problem either, so I'm not sure that argument really makes sense in this context. Cross stream coordination seems out of scope to me.\nSo you are taking this into directions I did not intended. I am just asking for how to ensure that the server does not answers the client's request prior to having received any capsules if they are coming. And the real issue is that the server can't know unless the client either includes the corresponding data in the HTTP request or includes an indication that there will be capsules that are relevant to the requests. So, my main worry is if waiting for capsule to answer will actually work if there are an HTTP 1.x intermediary connection on the path between the client and the server. I guess its is sending the capsule after the request and hope it makes it. Because to my understanding in HTTP datagram the peer does not expect any data to arrive so it must be upgraded. Thus, one can end up in cases where the capsule protocol does not make it, where the HTTP request does arrive.\nWhile we might want to have a more generic extension to relate information to multiple requests, I think the problem Magnus describes needs a solution now. The problem is that there is a dependency between replying/react to a connect request and potentially information send in capsules. The best solution is to send this information together with the connection request instead. I don't see any concern to have a http request or URI parameter and the capsule. What's the problem about that?\nMagnus, what is \"the client's request\" here? Is it the extended-connect itself, or a response to a capsule? If it's the former, the server can't know what capsules, if any, are coming at this time, as we haven't yet established we are using the capsule protocol. If it's the latter, capsules trivially solve this, as you cannot process any of the requests until the capsule is received. Mirja, I'd like to avoid having 2 ways to do things, especially since putting it on the HTTP request is a less-generic mechanism and cannot be reused during the connection, whereas capsules can. If we believe this feature is important, I'd like us to solve it generically, rather than special-casing one use case we are aware of right now.\nSo the client's request is the HTTP request that is only complete in semantics when combined with the request-address capsule.\nSorry Magnus, I'm not sure I understand what you're trying to say. I don't think we can enforce a semantic that a higher-level HTTP request is only complete after some bytes on the then-negotiated protocol have been exchanged.\nBut this is really the issue, there is currently no way for a client to ask the server to only provide a targeted connect-IP forwarding if the IP address it can provide is the one the client asks for in a consistent way. My interpretation of the specification is that either; 1) the server will never take the request-addr into account initially for the request, and interpret the request-addr after it has created a Connect-IP forwarding 2) the server will sometime take the request-addr into account by have a short delay awaiting for the request-addr capsule, which may or may not make it in time for the implementation specific timer. This non-consistency or failure to address a use case that I think several was interested in resolving is a real problem. 
A problem I think we will have to take with us to the meeting.\nThis ultimately stems from us not having an explicit capsule/message to indicate that forwarding should actually begin. Right now it's implicit as soon as the capsule is assembled with the IP addressing information. This is leading to your proposal to stick more things onto the initial request, whereas I believe the correct solution is to add more capabilities to the capsule protocol to express these scenarios.\nI will consider what you are saying. However, I will note that we might have different goals. I do have a need for identifiable streams that can be referred to in extension signalling. Using capsules does not achieve that with the current design as the stream identifier comes from the HTTP request, not the subsequent capsule signalling.\nI hope so, constructive discussion relies on all of us considering what the others are saying Given that we're all trying to solve slightly different problems here, I see two outcomes: we have one simple generic solution that happens to work for everyone we punt this to more specific extensions since these problems are not generic\nSo I think the issue here is what we consider core functionality. My observation here is that Alex is promoting functionality that will make it easy to make quite rich things for the network tunnelling use case that you are promoting. While I am bringing up something that to me appear core for the single endpoint tunnelling. So I think we actually need to understand the issues as well as what usage and how usable this is in common and how to handle where the different use cases actually benefits from having the functionality in different places.\nI'll reiterate my suggestion from the meeting: it would be better to have IP-version-specific capsule types for each capsule. It saves a byte and also avoids problems with version mismatches.Thanks! I left 2 minor editorial questions, but feel free to merge as-is.", "new_text": "scenarios, endpoints MUST follow the recommendations from BCP38 to prevent source address spoofing. 11. 11.1. This document will request IANA to register \"connect-ip\" in the HTTP Upgrade Token Registry maintained at <>. 11.2. This document will request IANA to update the entry for the \"masque\" URI suffix in the \"Well-Known URIs\" registry maintained at"} {"id": "q-en-draft-ietf-masque-connect-ip-69ffba9c33abb0faa9586c3d1e7e8fc9529b84e474b883ce8382cfbd68229fd8", "old_text": "IANA is requested to update the \"Reference\" field to include this document in addition to previous values from that field. 10.3. This document will request IANA to add the following values to the \"HTTP Capsule Types\" registry created by HTTP-DGRAM:", "comments": "NAME let's discuss that on . I'd like to merge this PR first then decide on a plan for before submitting the next revision to the datatracker\nSplitting {ADDRESSREQUESTROUTEADVERTISEMENT} into ...V4 and ..._V6 would result in smaller capsules that have fewer error conditions. It would also keep IPv4 and IPv6 logic distinct.\nI don't think the size of these capsules matters given their rarity. I can see the error condition argument but it's pretty minor and would come at the cost of making the specification longer. I don't feel too strongly here, but I think I have a slight preference for keeping things as they are.\nI agree with David that size is problem not a big concern. 
I think having one set of capsule is less complex.\nI also have a preference for keeping the capsules as they are with versions included; it also allows the route advertisement capsule to include routes for both v4 and v6 in one capsule, which I think is very important. I wouldn't want to lose that feature.\nI similarly have a preference for keeping v4 and v6 in one capsule and agree size is not a big concern.\nI just chatted with NAME to see what error conditions he was thinking of. Upon discussion (v6 route advertised when only v4 address is available), we concluded that this is not an error case, it's just a route that's not yet usable. We've concluded that this is something that we're comfortable leaving as-is.\nImplementing this on the client I was wondering if there should be a way for the client to ask for the server for an address without specifying what address it wants. This could allow a VPN client to mention that it wants an IPv6 address or not. If we think this could be useful, we could encode it as or .\nI believe asking for the Any address is what we discussed previously, so +1\nYou mean ADDRESS_REQUEST(IPv4-0.0.0.0/32) don't you. I think the format of the request works fine. But we do need to before we implement this into the specification.\nYes \u2014 if we have this, one option is to always have an ADDRESS_REQUEST expected from clients, and then there doesn't need to be signaling. Or we do the simple solution to 54 to say \"I expect to send N capsules with my request\".\nYou're right this could be a potential solution to . Let's discuss them together tomorrow\nRegarding IPv6, I think a standard request that would make sense is asking for any /64. I guess that works fine with this idea too?\nOh that's interesting, if the IP Address is or , the prefix length is a hint for the requested length. I like that.\nIETF 114: Add an editorial sentence to \"remember, what you ask for may not be what you get, especially with regards to address families\". A request should be paired with a response, multiple requests can just say \"you have the same ones you had before\". Clarify that it's cumulative. Also solves unassign/removal of addresses.\nSo there appear to exist some downside with the timing of the interactions when one want to use Address-Request capsule. So the basic problem is simply that the proxy will assign an address prior to the Address-Request capsule has been received, thus preventing the proxy from processing the request in the right context. So lets assume that the client has already created one connect-IP request with a limited scope and been assigned IP-A. Then it want to make an additional request for another limited scope proxying and would like to get the same address assigned. Due to the separation between the HTTP request for the Connect-IP and the capsule Address-Request a server can actually process the whole HTTP request prior to receiving the Address-Request. Resulting potentially in that another IP address assignment (IP-B) be provided to the client, despite it asking for IP-A to be assigned also to the request. So does the HTTP request needs an indicator parameter to tell the proxy to wait for one or more \"request\" capsules from the client before processing the request?\nOut of curiosity, in what scenario would the client \"like to get the same address assigned\"? 
It makes sense for the proxy to decide to reuse addresses to save its address pool when the scoping allows it, but why would the client care?\nI could imagine this happening if the client was trying to resume some previous session that it lost due to an outage? Seems to be niche, but it's certainly possible.\nSo the most obvious use case for this is to request iproto= CMP in combination with for example UDP. Then getting ICMP for another IP address then the one you already have UDP for is mostly useless if you intended to use the ICMP messages for hints when things fails for you UDP based protocol. I would note that for an endpoint liking to do P2P NAT traversal stuff getting an IP address rather than using Connect-UDP does make sense as it ensure the client has control. It enables the IP address for its UDP traffic to be Endpoint-Independent Mapping (RFC4787). This might be niche usages but I would like to avoid creating artificial barriers that makes things impossible or at least hard to realise.\nThanks, this makes sense. Should we suggest that clients bundle the ADDRESS_REQUEST in the same packet as the HTTP headers and suggest that servers check for those before responding?\nI am still worried that this creates issues as your request may not fit in a single packet to begin with. My solution would be to actually move the address request part into the HTTP request itself as header field. That would ensure an atomic operation and no risk for errors.\nMagnus, is your proposal to require, or to allow as an option, for the address request to be moved into the HTTP request? Why not bundle the capsules over a reliable delivery mechanism instead?\nI first of all want to ensure that you can do a request that ensures that for those cases when it is necessary to interpret the Connect-IP request in the context of desired IP address it can done. Without uncertainties or requiring to have wait time to see if there are capsules incoming. So I think primarily allow as an option. I think a question back to your usages, do you have cases where one would need additional capsules to correctly interpret the option. Should we look for an other solution than the one I proposed to better cover the necessary functionality?\nI want to be able to continue to do ADDRESS_ASSIGN capsules during the lifetime of the connection to support e.g., IPv6 Privacy extensions but without having to delegate the entire subnet. So I don't want to move to header-only.\nI think NAME is describing something similar to the / capsules that we had in the original Google . The only difference is that we would add an HTTP header to say \"start off in atomic mode and wait for atomic end before sending addresses and such\". While I think this does sound useful, does it need to be in the core spec or could it be an extension?\nIs there any reason to not bring back the capsules?\nThe only reason I can think of is to keep the core spec as short and simple as possible. But if we think this is an important feature then I'm not opposed to bringing back .\nI agree with the sentiment of keeping the core spec short and simple, but the trade-off is also knowing what the minimum viable featureset we can always rely on is. If we conclude that capsules are in that set, that IMHO weighs towards including them. Given that both Magnus and I have now presented use-cases for atomicity, I think this finally swings towards including?\nI agree with that\nSo why I used atomic here, was due to the split between HTTP and Capsules. 
So I don't see this issue currently motivates the need for ATOMIC capsules. I really would prefer that the necessary parameters for the CONNECT-IP request are in the HTTP request, either in the URI or in an HTTP field. To be clear, I understand the need for the capsules as they are useful for doing things after the initial request, but creating this binding between the HTTP request and the capsules are creating its sets of complexities. Likely bigger than duplicating the carrying of the necessary information results in, especially when they may be use cases related.\nI agree I with Magnus and think we should specify a ay to add the address request to the http request. If there is a timing dependency it's better to put all needed information into the same request than specifying extra and complex logic to achieve the same in hindsight. That doesn't mean that we don't need the address request capsule (for later use in the connection). I think we need both.\nI dislike having two ways of conveying the same thing. How about a simple HTTP header that tells the receiver that there are 4 capsules coming right after the HTTP headers?\nI'd prefer to avoid \"atomic capsule\" starts and ends. If we need a way to guarantee inclusion of the local address in the first request, having it be in the HTTP request makes sense, either as another URI parameter, a header to say what address you want, or a header to say you will have capsules that need to be parsed before responding. That latter option is flexible, certainly, but could be over-engineered. Another option is to tightly scope a header to the use case we think is relevant. If the use case is \"please use the same address as this other request stream I already have for connect-ip or connect-UDP\", we could just have a header to say for any connect- request that it is entangled with another and should share some properties. I could imagine this even without CONNECT-IP. Today with private relay, we have to ask the proxies to prefer to use the same local IP for everything in a connection from the client, but letting the client choose what gets grouped might be even nicer. This option also lets you issue multiple requests, say they should share, without needing to know exactly which IP any of them will get assigned first.\nUnfortunately requests and headers are end-to-end but streams are hop-by-hop so there is no easy HTTP way to have a request refer to another request. I'd be supportive of adding the header but I'd push back on adding a header that duplicates a capsule. This is starting to feel like something best-served by an extension\nYou could solve the correlation by having a \"group\" ID that you attach to all related requests. No need to refer by stream ID.\nWhat is the scope of a group ID? Connection? If so it's the same as stream IDs.\nI think this should be some extension, but it could be a UUID too, doesn't need to be connection specific.\nAh, thanks. Yeah UUID would work. Back to the issue at hand, I propose this feature request get moved to an extension.\nAll of these alternatives strike me as much more complex that the semantics of . I'd rather we have a single mechanism even if it's slightly more powerful and restrain it's use than try to invent new ways to solve each problem that arises.\nAtomic doesn't help coordinate between requests on different streams, etc, so it's not really solving the full problem here. 
Either way, I think this is something to defer to an extension.\nHeaders don't solve that problem either, so I'm not sure that argument really makes sense in this context. Cross stream coordination seems out of scope to me.\nSo you are taking this into directions I did not intended. I am just asking for how to ensure that the server does not answers the client's request prior to having received any capsules if they are coming. And the real issue is that the server can't know unless the client either includes the corresponding data in the HTTP request or includes an indication that there will be capsules that are relevant to the requests. So, my main worry is if waiting for capsule to answer will actually work if there are an HTTP 1.x intermediary connection on the path between the client and the server. I guess its is sending the capsule after the request and hope it makes it. Because to my understanding in HTTP datagram the peer does not expect any data to arrive so it must be upgraded. Thus, one can end up in cases where the capsule protocol does not make it, where the HTTP request does arrive.\nWhile we might want to have a more generic extension to relate information to multiple requests, I think the problem Magnus describes needs a solution now. The problem is that there is a dependency between replying/react to a connect request and potentially information send in capsules. The best solution is to send this information together with the connection request instead. I don't see any concern to have a http request or URI parameter and the capsule. What's the problem about that?\nMagnus, what is \"the client's request\" here? Is it the extended-connect itself, or a response to a capsule? If it's the former, the server can't know what capsules, if any, are coming at this time, as we haven't yet established we are using the capsule protocol. If it's the latter, capsules trivially solve this, as you cannot process any of the requests until the capsule is received. Mirja, I'd like to avoid having 2 ways to do things, especially since putting it on the HTTP request is a less-generic mechanism and cannot be reused during the connection, whereas capsules can. If we believe this feature is important, I'd like us to solve it generically, rather than special-casing one use case we are aware of right now.\nSo the client's request is the HTTP request that is only complete in semantics when combined with the request-address capsule.\nSorry Magnus, I'm not sure I understand what you're trying to say. I don't think we can enforce a semantic that a higher-level HTTP request is only complete after some bytes on the then-negotiated protocol have been exchanged.\nBut this is really the issue, there is currently no way for a client to ask the server to only provide a targeted connect-IP forwarding if the IP address it can provide is the one the client asks for in a consistent way. My interpretation of the specification is that either; 1) the server will never take the request-addr into account initially for the request, and interpret the request-addr after it has created a Connect-IP forwarding 2) the server will sometime take the request-addr into account by have a short delay awaiting for the request-addr capsule, which may or may not make it in time for the implementation specific timer. This non-consistency or failure to address a use case that I think several was interested in resolving is a real problem. 
A problem I think we will have to take with us to the meeting.\nThis ultimately stems from us not having an explicit capsule/message to indicate that forwarding should actually begin. Right now it's implicit as soon as the capsule is assembled with the IP addressing information. This is leading to your proposal to stick more things onto the initial request, whereas I believe the correct solution is to add more capabilities to the capsule protocol to express these scenarios.\nI will consider what you are saying. However, I will note that we might have different goals. I do have a need for identifiable streams that can be referred to in extension signalling. Using capsules does not achieve that with the current design as the stream identifier comes from the HTTP request, not the subsequent capsule signalling.\nI hope so, constructive discussion relies on all of us considering what the others are saying Given that we're all trying to solve slightly different problems here, I see two outcomes: we have one simple generic solution that happens to work for everyone we punt this to more specific extensions since these problems are not generic\nSo I think the issue here is what we consider core functionality. My observation here is that Alex is promoting functionality that will make it easy to make quite rich things for the network tunnelling use case that you are promoting. While I am bringing up something that to me appear core for the single endpoint tunnelling. So I think we actually need to understand the issues as well as what usage and how usable this is in common and how to handle where the different use cases actually benefits from having the functionality in different places.\nI'll reiterate my suggestion from the meeting: it would be better to have IP-version-specific capsule types for each capsule. It saves a byte and also avoids problems with version mismatches.Thanks! I left 2 minor editorial questions, but feel free to merge as-is.", "new_text": "IANA is requested to update the \"Reference\" field to include this document in addition to previous values from that field. 11.3. This document will request IANA to add the following values to the \"HTTP Capsule Types\" registry created by HTTP-DGRAM:"} {"id": "q-en-draft-ietf-masque-connect-udp-81338cb6a59b02bed63936b28e67927d5c2f19167186e281c386166ae633c6b9", "old_text": "This document defines the \"connect-udp\" HTTP Upgrade Token. \"connect- udp\" uses the Capsule Protocol as defined in HTTP-DGRAM. A \"connect-udp\" request requests that the recipient proxy establish a tunnel over a single HTTP stream to the destination target identified by the \"target_host\" and \"target_port\" variables of the URI Template (see client-config). If the request is successful, the proxy commits to converting received HTTP Datagrams into UDP packets and vice versa until the tunnel is closed. Tunnels are commonly used to create an end-to-end virtual connection, which can then be secured using QUIC QUIC or another protocol running over UDP.", "comments": "The technicalities here are fine but I think someone will pick up on the grammar sooner or later. How about something like below (take or leave it) \"Clients issue requests containing a \"connect-udp\" upgrade token in initiate an UDP tunnel associated with a single HTTP stream. The client indicates, to the proxy, the target of the tunnel using the \"targethost\" and \"targetport\" variables of the URI template (see ).", "new_text": "This document defines the \"connect-udp\" HTTP Upgrade Token. 
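As an aside on the "target_host" and "target_port" URI Template variables discussed here, the following minimal sketch shows one way a client could expand such a template into a request URI. The full template string, the function name, and the use of simple string substitution are illustrative assumptions and are not taken from the draft; a real client would normally use a complete RFC 6570 URI Template implementation and validate the template before expanding it.

    # Hypothetical helper, not from the draft: expand a simple connect-udp
    # URI Template by substituting percent-encoded variable values.
    from urllib.parse import quote

    def expand_connect_udp_template(template, target_host, target_port):
        # Percent-encode so reserved characters (e.g. ":" in an IPv6
        # literal) cannot corrupt the path component of the resulting URI.
        host = quote(str(target_host), safe="")
        port = quote(str(target_port), safe="")
        return template.replace("{target_host}", host).replace("{target_port}", port)

    # Example template (assumed here) modeled on the draft's example target.
    template = "https://proxy.example/.well-known/masque/udp/{target_host}/{target_port}/"
    print(expand_connect_udp_template(template, "192.0.2.42", 443))
    # -> https://proxy.example/.well-known/masque/udp/192.0.2.42/443/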
\"connect- udp\" uses the Capsule Protocol as defined in HTTP-DGRAM. Clients issue requests containing a \"connect-udp\" upgrade token to initiate a UDP tunnel associated with a single HTTP stream. The target of the tunnel is indicated by the client to the proxy via the \"target_host\" and \"target_port\" variables of the URI Template (see client-config). If the request is successful, the proxy commits to converting received HTTP Datagrams into UDP packets and vice versa until the tunnel is closed. Tunnels are commonly used to create an end-to-end virtual connection, which can then be secured using QUIC QUIC or another protocol running over UDP."} {"id": "q-en-draft-ietf-masque-connect-udp-86d1f7de01269cab6f7d1628f2c36cffa708dd01341febafbb4863fc3b67997d", "old_text": "udp/{target_host}/{target_port}/\" and wishes to open a UDP proxying tunnel to target 192.0.2.42:443, it could send the following request: 3.3. The UDP proxy SHALL indicate a successful response by replying with", "comments": "Given that this draft is couched in terms of , the use of in HTTP/1.1 is surprising. At the least, a reason should be mentioned.\nClosing for now, as I have more serious concerns that may make this OBE.\nReopening.", "new_text": "udp/{target_host}/{target_port}/\" and wishes to open a UDP proxying tunnel to target 192.0.2.42:443, it could send the following request: In HTTP/1.1, this protocol uses the GET method to mimic the design of the WebSocket Protocol WEBSOCKET. 3.3. The UDP proxy SHALL indicate a successful response by replying with"} {"id": "q-en-draft-ietf-masque-connect-udp-d06a4f062b4a8d72483773d53d5819b19aa783b01ba5cce553f1249d0a2ba75a", "old_text": "The path component of the URI Template MUST start with a slash \"/\". All template variables MUST be within the path component of the URI. The URI template MUST contain the two variables \"target_host\" and \"target_port\" and MAY contain other variables.", "comments": "The client configuration section says: ... but the examples contain: Which is it?\nThis was an accident, I suspect it dates back to when the document referred to which contains both path and query. Fixing to mention both path and query via .", "new_text": "The path component of the URI Template MUST start with a slash \"/\". All template variables MUST be within the path or query components of the URI. The URI template MUST contain the two variables \"target_host\" and \"target_port\" and MAY contain other variables."} {"id": "q-en-draft-ietf-masque-connect-udp-5f934a07bf124316cc080790a9de811293464ba040132e506851de7e8982cf5f", "old_text": "the method SHALL be \"GET\". the request-target SHALL use absolute-form (see H1). the request SHALL include a single Host header field containing the origin of the UDP proxy.", "comments": "Is this strictly necessary?\nYou're right, it's unnecessary and doesn't add anything. Removing via .", "new_text": "the method SHALL be \"GET\". the request SHALL include a single Host header field containing the origin of the UDP proxy."} {"id": "q-en-draft-ietf-masque-connect-udp-49e7758c5e5bad0d054f2c3dee6eb9db65352d6524cfd493778bcbcc0992e622", "old_text": "contains the unmodified payload of a UDP packet (referred to as \"data octets\" in UDP). Clients MAY optimistically start sending UDP packets in HTTP Datagrams before receiving the response to its UDP proxying request. 
However, implementors should note that such proxied packets may not be processed by the UDP proxy if it responds to the request with a failure, or if the proxied packets are received by the UDP proxy before the request. By virtue of the definition of the UDP header UDP, it is not possible to encode UDP payloads longer than 65527 bytes. Therefore, endpoints MUST NOT send HTTP Datagrams with a Payload field longer than 65527", "comments": "PR in response to Al's OPSDIR review -- Reviewer: Al Morton Review result: Has Nits. Hi David, this is the OPS-DIR review. I tried to stay in my lane.
I found this paragraph confusing (which doesn't mean it is incorrect, just that someone with limited background read the section 2 requirements list, all MUSTs, and then the paragraph below, which seems to have a conflicting MUST with the SHOULD and MAY that follows). Help readers like me understand the options you are allowing here: If the client detects that any of the requirements above are not met by a URI Template, the client MUST reject its configuration and fail the request without sending it to the UDP proxy. While clients SHOULD validate the requirements above, some clients MAY use a general-purpose URI Template implementation that lacks this specific validation. I see that Christer's GEN_ART review picked-up on this too (I looked after I was done...). In Section 5, there seems to be a possible operations issue for proxy operators: Clients MAY optimistically start sending UDP packets in HTTP Datagrams before receiving the response to its UDP proxying request. However, implementors should note that such proxied packets may not be processed by the UDP proxy if it responds to the request with a failure, or if the proxied packets are received by the UDP proxy before the request. This seems like a good place to limit the amount of optimistic traffic, given that the request is not yet accepted. (also, would the optimistic traffic use Context ID zero?) In Performance Considerations UDP proxies SHOULD strive to avoid increasing burstiness of UDP traffic: they SHOULD NOT queue packets in order to increase batching. This requirement is written qualitatively, so users might \"know a violation if they see it\", but not hold the proxy system/operator to any specific performance without providing more details here. Inter-packet delay variation measurements on proxy ingress and egress would characterize increased burstiness well. Another solution is s/SHOULD/should/ (I doubt some increased burstiness will be avoidable, or enforceable.) Thanks in advance for considering these comments, Al", "new_text": "Payload field is longer than that limit without buffering the capsule contents. If a UDP proxy receives an HTTP Datagram before it has received the corresponding request, it SHALL either drop that HTTP Datagram silently or buffer it temporarily (on the order of a round trip) while awaiting the corresponding request. Note that buffering datagrams (either because the request was not yet received, or because the Context ID is not yet known) consumes resources. Receivers that buffer datagrams SHOULD apply buffering limits in order to reduce the risk of resource exhaustion occuring. For example, receivers can limit the total number of buffered datagrams, or the cumulative size of buffered datagrams, on a per- stream, per-context, or per-connection basis. A client MAY optimistically start sending UDP packets in HTTP Datagrams before receiving the response to its UDP proxying request. However, implementors should note that such proxied packets may not be processed by the UDP proxy if it responds to the request with a failure, or if the proxied packets are received by the UDP proxy before the request and the UDP proxy chooses to not buffer them. 6. UDP proxies SHOULD strive to avoid increasing burstiness of UDP"} {"id": "q-en-draft-ietf-masque-connect-udp-64daf0b4dbcb06bbcfd470f01312d58dbde2d10e167aa00de2e79c67fc12f25a", "old_text": "consider its CONNECT-UDP request as failed. 
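To make the buffering guidance in the new text above concrete, here is a rough, non-normative sketch of a per-stream buffer for HTTP Datagrams that arrive before the corresponding request has been accepted. The class name, the specific limits, and the drop-silently policy are arbitrary choices made for this illustration; the text above only requires that such datagrams be dropped or buffered temporarily and that any buffering be bounded.

    # Illustrative only: bound the number and cumulative size of datagrams
    # buffered while awaiting the corresponding UDP proxying request.
    from collections import deque

    class EarlyDatagramBuffer:
        def __init__(self, max_count=32, max_bytes=64 * 1024):
            self.queue = deque()
            self.total_bytes = 0
            self.max_count = max_count
            self.max_bytes = max_bytes

        def on_datagram_before_request(self, payload):
            # Drop silently once either the count or the byte limit
            # would be exceeded; otherwise buffer the payload.
            if len(self.queue) >= self.max_count or \
               self.total_bytes + len(payload) > self.max_bytes:
                return False
            self.queue.append(payload)
            self.total_bytes += len(payload)
            return True

        def drain(self):
            # Called once the request has been accepted; yields the
            # buffered payloads in arrival order.
            while self.queue:
                payload = self.queue.popleft()
                self.total_bytes -= len(payload)
                yield payload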
The proxy that is creating the UDP socket to the destination responds to the CONNECT-UDP request with a 2xx (Successful) response, and indicates it supports HTTP Datagrams by sending the corresponding registration capsule. Clients MAY optimistically start sending proxied UDP packets before receiving the response to its CONNECT-UDP request, noting however", "comments": "This text was a mistake I made when converting from Datagram-Flow-Id to capsules.\nSection 4 states that the proxy responds to a CONNECT-UDP request with 2xx and a corresponding registration capsule to indicate end-to-end support of datagrams. If the client sends a REGISTERDATAGRAMCONTEXT capsule, the corresponding capsule from the proxy will be of the same kind, and it will register a new server-generated context ID. So now we would have two context IDs for the same stream, which seems a bit unnecessary. Could we simply omit the step where the proxy sends a registration capsule and simply state that a 2xx response to the CONNECT-UDP request is sufficient indication of end-to-end datagram support?\nMy reading of the spec is different, URL says So for example, if the client sends a request on stream 4 and then REGISTERDATAGRAMCONTEXT { Context ID (0)}, the server will respond with REGISTERDATAGRAMCONTEXT { Context ID (0)} to confirm that the context is accepted.\nThat makes sense, I was assuming that the \"used in both directions\" referred to datagrams. I think my confusion in part came from the same section that says: But sending a registration capsule with an already registered context ID is not a new registration I suppose. Maybe it's good to clarify this a bit in the datagram spec. I'll open a separate issue there.\nI agree the draft is a bit ambiguous here.\nIndeed, the spec is unclear here, my apologies. I got this wrong when I rewrote the old Datagram-Flow-Id text. The server doesn't need to echo the registration capsule here since the client's registration is bidirectional. NAME we should add text to HTTP Datagrams to clarify what we want to do about echoing registrations, is it required allowed forbidden I'm leaning towards (3) unless we have a use case for communicating the fact that a context is \"accepted\" and in which case I'd say (2).\nAha! In that case I apologise too. This seems important for interop! Now I remember, I think some people felt registration ack wasn't needed as long as there was a way to reject. And we have that with CLOSEDATAGRAMCONTEXT. So I think (3), modelled as implicit accept and explicit close, sounds good. That cuts down the chatter (and probably simplifies some code in my implementation).", "new_text": "consider its CONNECT-UDP request as failed. The proxy that is creating the UDP socket to the destination responds to the CONNECT-UDP request with a 2xx (Successful) response. Clients MAY optimistically start sending proxied UDP packets before receiving the response to its CONNECT-UDP request, noting however"} {"id": "q-en-draft-ietf-masque-connect-udp-3d4b453fefbe7481379e20797fdcb7d57bb450a3f26c389c2abede4ee0a3e7fe", "old_text": "5.1. UDP proxying does not create an IP-in-IP tunnel, so the guidance in RFC6040 about transferring ECN marks between inner and outer IP headers does not apply. There is no inner IP header in UDP proxying", "comments": "A proxy can receive a UDP packet from an origin to a client that cannot fit within a DATAGRAM frame, given either the limit set by the maximum datagram frame size, or based on the current path MTU. 
In this case, one solution is for the proxy to generate ICMP errors back to the sender. If that sender implements PMTU discovery, it should adjust its size.\nI think adding advice here is good, though we can't mandate sending ICMP because most OSes don't let user space applications send ICMP.\nYes, we can't mandate it, but we should recommend it I think. For a full-blown production server, it should be possible.\nRecommendation seems sensible. Although this does get me thinking about what the expectations of fragmentation are, we can't expect all UDP protocols to set DF.\nThe DATAGRAM capsule type offers one way to address this. Capsules can be arbitrarily large, because they'll be reassembled by QUIC. If it won't fit in a QUIC DATAGRAM and you don't want to drop it, it goes in a DATAGRAM capsule. Now, that might not always be desirable, because capsules don't have all the behavior that causes you to choose datagrams in the first place; a proxy that behaves that way will mess with PMTUD, and in the degenerate case cause the server to select an MTU back to the client that always gets encoded in capsules. Perhaps the client should be able to inform the server whether it prefers oversize messages to be tunneled or dropped?", "new_text": "5.1. When using HTTP/3 with the QUIC Datagram extension DGRAM, UDP payloads are transmitted in QUIC DATAGRAM frames. Since those cannot be fragmented, they can only carry payloads up to a given length determined by the QUIC connection configuration and the path MTU. If a proxy is using QUIC DATAGRAM frames and it receives a UDP payload from the target that will not fit inside a QUIC DATAGRAM frame, the proxy SHOULD NOT send the UDP payload in a DATAGRAM capsule, as that defeats the end-to-end unreliability characteristic that methods such as Datagram Packetization Layer Path MTU Discovery (DPLPMTUD) depend on RFC8899. In this scenario, the proxy SHOULD drop the UDP payload and send an ICMP \"Packet Too Big\" message to the target RFC4443. 5.2. UDP proxying does not create an IP-in-IP tunnel, so the guidance in RFC6040 about transferring ECN marks between inner and outer IP headers does not apply. There is no inner IP header in UDP proxying"} {"id": "q-en-draft-ietf-masque-connect-udp-1bf222cae3320f87e17b3b727f8a81411546e88d6c667de45f02ae355dc344cc", "old_text": "recovery (e.g., QUIC), and the underlying HTTP connection runs over TCP, the proxied traffic will incur at least two nested loss recovery mechanisms. This can reduce performance as both can sometimes independently retransmit the same data. To avoid this, HTTP/3 datagrams SHOULD be used. 6.1.", "comments": "recovery (e.g., [QUIC]), and the underlying HTTP connection runs over TCP, the proxied traffic will incur at least two nested loss recovery mechanisms. This can reduce performance as both can sometimes independently retransmit the same data. To avoid this, HTTP/3 datagrams SHOULD be used. This text could maybe be clearer. Is it trying to say \"You SHOULD use H3 + Datagram\" or \"You SHOULD use H3 and if you do you should use datagram\". I.e., is it possible to use H3 but not datagram?\nYou're right, this is unclear. That text predates the refactor of HTTP/3 Datagram to HTTP Datagrams, so we should tweak it to say \"To avoid this, UDP proxying SHOULD be performed over HTTP/3 to allow leveraging the QUIC DATAGRAM frame.\"", "new_text": "recovery (e.g., QUIC), and the underlying HTTP connection runs over TCP, the proxied traffic will incur at least two nested loss recovery mechanisms. 
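Editor's note: the 5.1 new_text quoted earlier in this record says a proxy using QUIC DATAGRAM frames should drop a UDP payload from the target that will not fit in a frame and send an ICMP "Packet Too Big" message, rather than fall back to a DATAGRAM capsule. The sketch below shows that decision; the callback names and the way the per-frame budget is supplied are assumptions, and the 65527-byte ceiling simply restates the UDP limit (a 16-bit length field minus the 8-byte header).

```python
MAX_UDP_PAYLOAD = 65535 - 8  # 65527: UDP length field minus the 8-byte header

def forward_from_target(payload: bytes, max_datagram_frame_payload: int,
                        send_quic_datagram, send_icmp_packet_too_big) -> bool:
    """Forward one UDP payload from the target toward the client.

    send_quic_datagram and send_icmp_packet_too_big are caller-supplied
    callbacks (illustrative, not defined by the draft);
    max_datagram_frame_payload is the number of payload bytes that fit in a
    QUIC DATAGRAM frame on the current connection and path.
    Returns True if the payload was forwarded.
    """
    if len(payload) > MAX_UDP_PAYLOAD:
        # Cannot occur for a well-formed UDP packet, but guard anyway.
        return False
    if len(payload) > max_datagram_frame_payload:
        # Too big for a QUIC DATAGRAM frame: per the quoted text, do not
        # tunnel it in a DATAGRAM capsule (that would defeat DPLPMTUD);
        # drop it and tell the target the packet was too big.
        send_icmp_packet_too_big(mtu_hint=max_datagram_frame_payload)
        return False
    send_quic_datagram(payload)
    return True
```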
This can reduce performance as both can sometimes independently retransmit the same data. To avoid this, UDP proxying SHOULD be performed over HTTP/3 to allow leveraging the QUIC DATAGRAM frame. 6.1."} {"id": "q-en-draft-ietf-masque-h3-datagram-6b45e69b7790e6accd185440f629bf075033ad27ff3013c16370fd9769355a77", "old_text": "that datagram silently or buffer it temporarily while awaiting the creation of the corresponding stream. 4. This specification introduces the Capsule Protocol. The Capsule Protocol is a sequence of type-length-value tuples that new HTTP Methods or new HTTP Upgrade Tokens can choose to use. It allows endpoints to reliably communicate request-related information end-to- end on HTTP request streams, even in the presence of HTTP intermediaries. The Capsule Protocol can be used to exchange HTTP Datagrams when HTTP is running over a transport that does not support the QUIC DATAGRAM frame. This specification defines the \"data stream\" of an HTTP request as the bidirectional stream of bytes that follow the headers in both", "comments": "This may be answered and I just missed it. After reading I think that for HTTP/1 and HTTP/2 Datagrams (via capsules) can only be sent for new methods/upgrade tokens. But it wasn't clear to me if that same property was true for HTTP/3. For example, would it be valid to send HTTP/3 datagrams along with a GET request? If so, I think this means that for HTTP/3 a GET request could have both a body AND datagrams?\nThat's a very good point. If anything, we should be prescriptive about what endpoints do if they receive a datagram associated with an existing method like GET. I suspect drop or error are the most likely candidates here.\nI vote for close the connection with error. The datagram has the relate to something in the request. If the request doesn't or cannot articulate a datagram use, the sender of datagram is probably broken or stupid.\nSince datagrams are strongly associated with a stream, I think it'd be more consistent to make this a stream error. Regardless, folks my want an extension to GET that allows sending datagrams with it, but it makes sense to require that this extension be negotiated via headers or settings first, so the semantics of such datagrams are agreed upon beforehand. So, I'm supportive of making this a stream error, with the understanding that any extension can lift this requirement between consenting parties.\nI'd prefer to make people suffer but I could live with a stream error. Like you suggest, by letting people people agree on a change of requirements, we support extension. But the default of erroring helps datagram avoid the mess othat GET with body is.\nAlso, I reserve the right to count the number of these failures and close the connection if it reaches my annoyance threshold\nAh yes . Sounds good, I'll write up a PR for this once the design team PRs are landed.\nSounds good to me. An extension could just as easily lift a connection error requirement as it could a stream error requirement in case that had any bearing on the thinking here. (A connection error might be nice because it prevents a confused peer from sending a possibly incessant stream of datagrams. 
On the other hand, confused peers gonna be confused and David's point about datagrams being strongly associated with a request is a good point)\nI think it should be a stream error to enable method-agnostic gateways and proxies, which may be multiplexing different clients' traffic onto a single next-hop connection.\nIn , NAME points out that Content-Type doesn't make sense with the capsule protocol. Should we prohibit it?\nIn , NAME points out that status codes 204, 205, and 206 don't make sense with the capsule protocol. Should we prohibit them?\nThe spec currently says: In HTTP/1.x, the message content is generally delineated by the Content-Length header field or the Transfer-Encoding; another request can follow on the same connection. One of the (many) ways in which CONNECT and Upgrade are special is that in H1 it takes over the connection and makes the request extend to the end of the connection, preventing any further requests from being made. The description attempts to be generic about method, but implicitly assumes an indefinite length stream of bytes which (in H1) is only true of CONNECT or Upgrade. This concept is correct for what we have in mind, but the boundaries aren't crisply described right now. NAME may have opinions.\nThe sentence right after the bit you pasted is: That constrains this to CONNECT/Upgrade/new methods. Is that not enough?\nI think \"new methods\" is probably the most questionable. It's not clear to me that a new method can obtain the special behaviors of CONNECT. If it were restricted to upgrade tokens (whether with HTTP/1.1 Upgrade or RFC 8441 Extended CONNECT), I think that would address the concern.\nI believe what he means is this definition of a data stream only holds for CONNECT requests. There is no mechanism to extend that to new request methods since all other methods are required to have length-delimited data (or no data).\nLikewise, I don't know of any reason we would want to support such extension methods in the future, since that is a known failure point for CONNECT. It is far simpler for a new method to require chunked or data frames.\nRestricting this to extended CONNECT and Upgrade sounds reasonable to me\nFor Upgrade, the new protocol takes over after the current response is complete. Hence, the description above doesn't really work for Upgrade either.\nNAME the intent of that text was to mean \"after the current response is complete\" - do you have a proposal for phrasing that would work better?\nSemantics says: URL I think is consistent with the definition of \"data stream\" in this document. (And in fact uses the term \"data stream\" without defining it.) That doesn't seem to match deferring the new protocol until after the response.\nI don't think there is any technical distinction between the two descriptions; it's nomenclature to say that what follows the CONNECT is a tunnel ended by connection closure, whereas what follows Upgrade is a different protocol that might very well come back to this one (as determined by the other protocol).\nAnd point to URL This is an important part of the security analysis.\nOnly if you believe that the Sec- prefix is useful for the Capsule-Protocol header field. I don't.\nSorry, I don't mean that it's the only thing that makes it work, merely that if were going to talk about how to analyze it, this is something we should say.\nNAME can you clarify your intent? Your issue title starts with \"Add to JS\" but I suspect you didn't mean that we should modify the JavaScript specification. 
Would a note next to where the \"Capsule-Protocol\" header is defined resolve this?\nAaargh. I meant \"add to the security considerations\".\nThanks, that makes more sense :) Adding something to security considerations sounds good to me, I'll add text to the . Note that this was filed on the wrong repository, is the droid you're looking for.\nSo this assumption is useful in that it helps us avoid an analysis. Though my sense is that that analysis isn't especially difficult. The header field exists to enable datagram forwarding behaviour in an intermediary. We only have to show that there is no harm inflicted if a client engages that behaviour for requests, under the constraint that the client is otherwise authorized to make from a web context. The risk comes if this forwarding behaviour somehow generates different semantics as a result. As the same client also provides the stream of data that will be forwarded, it is trivial to see that they are already in a position to engage whatever semantics they choose. The only scenario that might benefit an attacker is where the elision of datagrams (or things that look like them) might be used to evade security protections that scan the stream of data. For an attack to be effective, those same protections would have to be ignorant of datagram forwarding behaviour at the same intermediary. This binding to CONNECT is a useful shortcut that means we don't need to engage with all that stuff, which wouldn't be that easy to write down. (This aversion to engagement is, I believe, a large part of why people want to slap \"Sec-\" on all new headers.)\nMT's framing of this is helpful. Adding some security consideration about what might go wrong for an intermediary seems potentially useful. Spending text on why a browser is special in this regard seems less compelling to nme.\nI was about to write a PR for this, but I think we should first If we decide that capsules are only allowed for HTTP upgrade tokens (and not new methods) then we've restricted ourselves to CONNECT and Upgrade which are both inaccessible from JavaScript.\nLGTM thanks", "new_text": "that datagram silently or buffer it temporarily while awaiting the creation of the corresponding stream. HTTP/3 datagrams MUST only be sent with an association to a stream that supports semantics for HTTP Datagrams. For example, existing HTTP methods GET and POST do not define semantics for associated HTTP Datagrams; therefore, HTTP/3 datagrams cannot be sent associated with GET or POST request streams. If an endpoint receives an HTTP/3 datagram associated with a method that has no known semantics for HTTP Datagrams, it MUST abort the corresponding stream with H3_GENERAL_PROTOCOL_ERROR. Future extensions MAY remove these requirements if they define semantics for such HTTP Datagrams and negotiate mutual support. 4. This specification introduces the Capsule Protocol. The Capsule Protocol is a sequence of type-length-value tuples that new HTTP Upgrade Tokens can choose to use. It allows endpoints to reliably communicate request-related information end-to-end on HTTP request streams, even in the presence of HTTP intermediaries. The Capsule Protocol can be used to exchange HTTP Datagrams when HTTP is running over a transport that does not support the QUIC DATAGRAM frame. 
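Editor's note: the new_text quoted above requires an endpoint that receives an HTTP/3 datagram for a request whose method defines no datagram semantics (for example GET or POST) to abort the corresponding stream with H3_GENERAL_PROTOCOL_ERROR. The sketch below illustrates that check; the stream interface and the set of datagram-capable request types are assumptions made for the example (the "connect-udp" upgrade token is used only as an illustration).

```python
H3_GENERAL_PROTOCOL_ERROR = 0x0101  # HTTP/3 error code defined in RFC 9114

# Requests for which this endpoint has negotiated datagram semantics
# (illustrative; a real implementation would populate this from its
# registered extensions, e.g. extended CONNECT with "connect-udp").
DATAGRAM_CAPABLE_REQUESTS = {"connect-udp"}

def on_http3_datagram(stream, payload: bytes) -> None:
    """Handle an HTTP/3 datagram already matched to its request stream."""
    if stream.upgrade_token not in DATAGRAM_CAPABLE_REQUESTS:
        # No datagram semantics for this request: abort the stream,
        # as the quoted new_text requires.
        stream.reset(error_code=H3_GENERAL_PROTOCOL_ERROR)
        return
    stream.deliver_datagram(payload)
```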
This specification defines the \"data stream\" of an HTTP request as the bidirectional stream of bytes that follow the headers in both"} {"id": "q-en-draft-ietf-masque-h3-datagram-6b45e69b7790e6accd185440f629bf075033ad27ff3013c16370fd9769355a77", "old_text": "HTTP message content after the headers. Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If a new HTTP Method or Upgrade Token is only defined over transports that support QUIC DATAGRAM frames, they might not need a stream encoding. Additionally, definitions of new HTTP Methods or of new HTTP Upgrade Tokens can use HTTP Datagrams with their own data stream protocol. However, new HTTP Method or Upgrade Tokens SHOULD use the Capsule Protocol unless they have a good reason not to. 4.1. Definitions of new HTTP Methods or of new HTTP Upgrade Tokens can state that their data stream uses the Capsule Protocol. If they do so, that means that the contents of their data stream uses the following format (using the notation from the \"Notational Conventions\" section of QUIC): A variable-length integer indicating the Type of the capsule. Endpoints that receive a capsule with an unknown Capsule Type MUST", "comments": "This may be answered and I just missed it. After reading I think that for HTTP/1 and HTTP/2 Datagrams (via capsules) can only be sent for new methods/upgrade tokens. But it wasn't clear to me if that same property was true for HTTP/3. For example, would it be valid to send HTTP/3 datagrams along with a GET request? If so, I think this means that for HTTP/3 a GET request could have both a body AND datagrams?\nThat's a very good point. If anything, we should be prescriptive about what endpoints do if they receive a datagram associated with an existing method like GET. I suspect drop or error are the most likely candidates here.\nI vote for close the connection with error. The datagram has the relate to something in the request. If the request doesn't or cannot articulate a datagram use, the sender of datagram is probably broken or stupid.\nSince datagrams are strongly associated with a stream, I think it'd be more consistent to make this a stream error. Regardless, folks my want an extension to GET that allows sending datagrams with it, but it makes sense to require that this extension be negotiated via headers or settings first, so the semantics of such datagrams are agreed upon beforehand. So, I'm supportive of making this a stream error, with the understanding that any extension can lift this requirement between consenting parties.\nI'd prefer to make people suffer but I could live with a stream error. Like you suggest, by letting people people agree on a change of requirements, we support extension. But the default of erroring helps datagram avoid the mess othat GET with body is.\nAlso, I reserve the right to count the number of these failures and close the connection if it reaches my annoyance threshold\nAh yes . Sounds good, I'll write up a PR for this once the design team PRs are landed.\nSounds good to me. An extension could just as easily lift a connection error requirement as it could a stream error requirement in case that had any bearing on the thinking here. (A connection error might be nice because it prevents a confused peer from sending a possibly incessant stream of datagrams. 
On the other hand, confused peers gonna be confused and David's point about datagrams being strongly associated with a request is a good point)\nI think it should be a stream error to enable method-agnostic gateways and proxies, which may be multiplexing different clients' traffic onto a single next-hop connection.\nIn , NAME points out that Content-Type doesn't make sense with the capsule protocol. Should we prohibit it?\nIn , NAME points out that status codes 204, 205, and 206 don't make sense with the capsule protocol. Should we prohibit them?\nThe spec currently says: In HTTP/1.x, the message content is generally delineated by the Content-Length header field or the Transfer-Encoding; another request can follow on the same connection. One of the (many) ways in which CONNECT and Upgrade are special is that in H1 it takes over the connection and makes the request extend to the end of the connection, preventing any further requests from being made. The description attempts to be generic about method, but implicitly assumes an indefinite length stream of bytes which (in H1) is only true of CONNECT or Upgrade. This concept is correct for what we have in mind, but the boundaries aren't crisply described right now. NAME may have opinions.\nThe sentence right after the bit you pasted is: That constrains this to CONNECT/Upgrade/new methods. Is that not enough?\nI think \"new methods\" is probably the most questionable. It's not clear to me that a new method can obtain the special behaviors of CONNECT. If it were restricted to upgrade tokens (whether with HTTP/1.1 Upgrade or RFC 8441 Extended CONNECT), I think that would address the concern.\nI believe what he means is this definition of a data stream only holds for CONNECT requests. There is no mechanism to extend that to new request methods since all other methods are required to have length-delimited data (or no data).\nLikewise, I don't know of any reason we would want to support such extension methods in the future, since that is a known failure point for CONNECT. It is far simpler for a new method to require chunked or data frames.\nRestricting this to extended CONNECT and Upgrade sounds reasonable to me\nFor Upgrade, the new protocol takes over after the current response is complete. Hence, the description above doesn't really work for Upgrade either.\nNAME the intent of that text was to mean \"after the current response is complete\" - do you have a proposal for phrasing that would work better?\nSemantics says: URL I think is consistent with the definition of \"data stream\" in this document. (And in fact uses the term \"data stream\" without defining it.) That doesn't seem to match deferring the new protocol until after the response.\nI don't think there is any technical distinction between the two descriptions; it's nomenclature to say that what follows the CONNECT is a tunnel ended by connection closure, whereas what follows Upgrade is a different protocol that might very well come back to this one (as determined by the other protocol).\nAnd point to URL This is an important part of the security analysis.\nOnly if you believe that the Sec- prefix is useful for the Capsule-Protocol header field. I don't.\nSorry, I don't mean that it's the only thing that makes it work, merely that if were going to talk about how to analyze it, this is something we should say.\nNAME can you clarify your intent? Your issue title starts with \"Add to JS\" but I suspect you didn't mean that we should modify the JavaScript specification. 
Would a note next to where the \"Capsule-Protocol\" header is defined resolve this?\nAaargh. I meant \"add to the security considerations\".\nThanks, that makes more sense :) Adding something to security considerations sounds good to me, I'll add text to the . Note that this was filed on the wrong repository, is the droid you're looking for.\nSo this assumption is useful in that it helps us avoid an analysis. Though my sense is that that analysis isn't especially difficult. The header field exists to enable datagram forwarding behaviour in an intermediary. We only have to show that there is no harm inflicted if a client engages that behaviour for requests, under the constraint that the client is otherwise authorized to make from a web context. The risk comes if this forwarding behaviour somehow generates different semantics as a result. As the same client also provides the stream of data that will be forwarded, it is trivial to see that they are already in a position to engage whatever semantics they choose. The only scenario that might benefit an attacker is where the elision of datagrams (or things that look like them) might be used to evade security protections that scan the stream of data. For an attack to be effective, those same protections would have to be ignorant of datagram forwarding behaviour at the same intermediary. This binding to CONNECT is a useful shortcut that means we don't need to engage with all that stuff, which wouldn't be that easy to write down. (This aversion to engagement is, I believe, a large part of why people want to slap \"Sec-\" on all new headers.)\nMT's framing of this is helpful. Adding some security consideration about what might go wrong for an intermediary seems potentially useful. Spending text on why a browser is special in this regard seems less compelling to nme.\nI was about to write a PR for this, but I think we should first If we decide that capsules are only allowed for HTTP upgrade tokens (and not new methods) then we've restricted ourselves to CONNECT and Upgrade which are both inaccessible from JavaScript.\nLGTM thanks", "new_text": "HTTP message content after the headers. Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If a new HTTP Upgrade Token is only defined over transports that support QUIC DATAGRAM frames, they might not need a stream encoding. Additionally, definitions of new HTTP Upgrade Tokens can use HTTP Datagrams with their own data stream protocol. However, new HTTP Upgrade Tokens that wish to use HTTP Datagrams SHOULD use the Capsule Protocol unless they have a good reason not to. 4.1. Definitions of new HTTP Upgrade Tokens can state that their data stream uses the Capsule Protocol. If they do so, that means that the contents of their data stream uses the following format (using the notation from the \"Notational Conventions\" section of QUIC): A variable-length integer indicating the Type of the capsule. Endpoints that receive a capsule with an unknown Capsule Type MUST"} {"id": "q-en-draft-ietf-masque-h3-datagram-6b45e69b7790e6accd185440f629bf075033ad27ff3013c16370fd9769355a77", "old_text": "By virtue of the definition of the data stream, the Capsule Protocol is not in use on responses unless the response includes a 2xx (Successful) status code. The Capsule Protocol MUST NOT be used with messages that contain Content-Length or Transfer-Encoding header fields. 4.2. 
Definitions of new HTTP Methods or of new HTTP Upgrade Tokens that use the Capsule Protocol MAY use the Capsule-Protocol header field to simplify intermediary processing. \"Capsule-Protocol\" is an Item Structured Header RFC8941. Its value MUST be a Boolean. Its ABNF is:", "comments": "This may be answered and I just missed it. After reading I think that for HTTP/1 and HTTP/2 Datagrams (via capsules) can only be sent for new methods/upgrade tokens. But it wasn't clear to me if that same property was true for HTTP/3. For example, would it be valid to send HTTP/3 datagrams along with a GET request? If so, I think this means that for HTTP/3 a GET request could have both a body AND datagrams?\nThat's a very good point. If anything, we should be prescriptive about what endpoints do if they receive a datagram associated with an existing method like GET. I suspect drop or error are the most likely candidates here.\nI vote for close the connection with error. The datagram has the relate to something in the request. If the request doesn't or cannot articulate a datagram use, the sender of datagram is probably broken or stupid.\nSince datagrams are strongly associated with a stream, I think it'd be more consistent to make this a stream error. Regardless, folks my want an extension to GET that allows sending datagrams with it, but it makes sense to require that this extension be negotiated via headers or settings first, so the semantics of such datagrams are agreed upon beforehand. So, I'm supportive of making this a stream error, with the understanding that any extension can lift this requirement between consenting parties.\nI'd prefer to make people suffer but I could live with a stream error. Like you suggest, by letting people people agree on a change of requirements, we support extension. But the default of erroring helps datagram avoid the mess othat GET with body is.\nAlso, I reserve the right to count the number of these failures and close the connection if it reaches my annoyance threshold\nAh yes . Sounds good, I'll write up a PR for this once the design team PRs are landed.\nSounds good to me. An extension could just as easily lift a connection error requirement as it could a stream error requirement in case that had any bearing on the thinking here. (A connection error might be nice because it prevents a confused peer from sending a possibly incessant stream of datagrams. On the other hand, confused peers gonna be confused and David's point about datagrams being strongly associated with a request is a good point)\nI think it should be a stream error to enable method-agnostic gateways and proxies, which may be multiplexing different clients' traffic onto a single next-hop connection.\nIn , NAME points out that Content-Type doesn't make sense with the capsule protocol. Should we prohibit it?\nIn , NAME points out that status codes 204, 205, and 206 don't make sense with the capsule protocol. Should we prohibit them?\nThe spec currently says: In HTTP/1.x, the message content is generally delineated by the Content-Length header field or the Transfer-Encoding; another request can follow on the same connection. One of the (many) ways in which CONNECT and Upgrade are special is that in H1 it takes over the connection and makes the request extend to the end of the connection, preventing any further requests from being made. The description attempts to be generic about method, but implicitly assumes an indefinite length stream of bytes which (in H1) is only true of CONNECT or Upgrade. 
This concept is correct for what we have in mind, but the boundaries aren't crisply described right now. NAME may have opinions.\nThe sentence right after the bit you pasted is: That constrains this to CONNECT/Upgrade/new methods. Is that not enough?\nI think \"new methods\" is probably the most questionable. It's not clear to me that a new method can obtain the special behaviors of CONNECT. If it were restricted to upgrade tokens (whether with HTTP/1.1 Upgrade or RFC 8441 Extended CONNECT), I think that would address the concern.\nI believe what he means is this definition of a data stream only holds for CONNECT requests. There is no mechanism to extend that to new request methods since all other methods are required to have length-delimited data (or no data).\nLikewise, I don't know of any reason we would want to support such extension methods in the future, since that is a known failure point for CONNECT. It is far simpler for a new method to require chunked or data frames.\nRestricting this to extended CONNECT and Upgrade sounds reasonable to me\nFor Upgrade, the new protocol takes over after the current response is complete. Hence, the description above doesn't really work for Upgrade either.\nNAME the intent of that text was to mean \"after the current response is complete\" - do you have a proposal for phrasing that would work better?\nSemantics says: URL I think is consistent with the definition of \"data stream\" in this document. (And in fact uses the term \"data stream\" without defining it.) That doesn't seem to match deferring the new protocol until after the response.\nI don't think there is any technical distinction between the two descriptions; it's nomenclature to say that what follows the CONNECT is a tunnel ended by connection closure, whereas what follows Upgrade is a different protocol that might very well come back to this one (as determined by the other protocol).\nAnd point to URL This is an important part of the security analysis.\nOnly if you believe that the Sec- prefix is useful for the Capsule-Protocol header field. I don't.\nSorry, I don't mean that it's the only thing that makes it work, merely that if were going to talk about how to analyze it, this is something we should say.\nNAME can you clarify your intent? Your issue title starts with \"Add to JS\" but I suspect you didn't mean that we should modify the JavaScript specification. Would a note next to where the \"Capsule-Protocol\" header is defined resolve this?\nAaargh. I meant \"add to the security considerations\".\nThanks, that makes more sense :) Adding something to security considerations sounds good to me, I'll add text to the . Note that this was filed on the wrong repository, is the droid you're looking for.\nSo this assumption is useful in that it helps us avoid an analysis. Though my sense is that that analysis isn't especially difficult. The header field exists to enable datagram forwarding behaviour in an intermediary. We only have to show that there is no harm inflicted if a client engages that behaviour for requests, under the constraint that the client is otherwise authorized to make from a web context. The risk comes if this forwarding behaviour somehow generates different semantics as a result. As the same client also provides the stream of data that will be forwarded, it is trivial to see that they are already in a position to engage whatever semantics they choose. 
The only scenario that might benefit an attacker is where the elision of datagrams (or things that look like them) might be used to evade security protections that scan the stream of data. For an attack to be effective, those same protections would have to be ignorant of datagram forwarding behaviour at the same intermediary. This binding to CONNECT is a useful shortcut that means we don't need to engage with all that stuff, which wouldn't be that easy to write down. (This aversion to engagement is, I believe, a large part of why people want to slap \"Sec-\" on all new headers.)\nMT's framing of this is helpful. Adding some security consideration about what might go wrong for an intermediary seems potentially useful. Spending text on why a browser is special in this regard seems less compelling to nme.\nI was about to write a PR for this, but I think we should first If we decide that capsules are only allowed for HTTP upgrade tokens (and not new methods) then we've restricted ourselves to CONNECT and Upgrade which are both inaccessible from JavaScript.\nLGTM thanks", "new_text": "By virtue of the definition of the data stream, the Capsule Protocol is not in use on responses unless the response includes a 2xx (Successful) status code. The Capsule Protocol MUST NOT be used with messages that contain Content-Length, Content-Type, or Transfer-Encoding header fields. Additionally, HTTP status codes 204 (No Content), 205 (Reset Content), and 206 (Partial Content) MUST NOT be sent on responses that use the Capsule Protocol. 4.2. Definitions of new HTTP Upgrade Tokens that use the Capsule Protocol MAY use the Capsule-Protocol header field to simplify intermediary processing. \"Capsule-Protocol\" is an Item Structured Header RFC8941. Its value MUST be a Boolean. Its ABNF is:"} {"id": "q-en-draft-ietf-masque-h3-datagram-6b45e69b7790e6accd185440f629bf075033ad27ff3013c16370fd9769355a77", "old_text": "?1. A Capsule-Protocol header field with a value of ?0 has the same semantics as when the header is not present. Intermediaries MAY use this header field to allow processing of HTTP Datagrams for unknown HTTP methods or unknown HTTP Upgrade Tokens. The Capsule-Protocol header field MUST NOT be sent multiple times on a message. The Capsule-Protocol header field MUST NOT be used on", "comments": "This may be answered and I just missed it. After reading I think that for HTTP/1 and HTTP/2 Datagrams (via capsules) can only be sent for new methods/upgrade tokens. But it wasn't clear to me if that same property was true for HTTP/3. For example, would it be valid to send HTTP/3 datagrams along with a GET request? If so, I think this means that for HTTP/3 a GET request could have both a body AND datagrams?\nThat's a very good point. If anything, we should be prescriptive about what endpoints do if they receive a datagram associated with an existing method like GET. I suspect drop or error are the most likely candidates here.\nI vote for close the connection with error. The datagram has the relate to something in the request. If the request doesn't or cannot articulate a datagram use, the sender of datagram is probably broken or stupid.\nSince datagrams are strongly associated with a stream, I think it'd be more consistent to make this a stream error. Regardless, folks my want an extension to GET that allows sending datagrams with it, but it makes sense to require that this extension be negotiated via headers or settings first, so the semantics of such datagrams are agreed upon beforehand. 
So, I'm supportive of making this a stream error, with the understanding that any extension can lift this requirement between consenting parties.\nI'd prefer to make people suffer but I could live with a stream error. Like you suggest, by letting people people agree on a change of requirements, we support extension. But the default of erroring helps datagram avoid the mess othat GET with body is.\nAlso, I reserve the right to count the number of these failures and close the connection if it reaches my annoyance threshold\nAh yes . Sounds good, I'll write up a PR for this once the design team PRs are landed.\nSounds good to me. An extension could just as easily lift a connection error requirement as it could a stream error requirement in case that had any bearing on the thinking here. (A connection error might be nice because it prevents a confused peer from sending a possibly incessant stream of datagrams. On the other hand, confused peers gonna be confused and David's point about datagrams being strongly associated with a request is a good point)\nI think it should be a stream error to enable method-agnostic gateways and proxies, which may be multiplexing different clients' traffic onto a single next-hop connection.\nIn , NAME points out that Content-Type doesn't make sense with the capsule protocol. Should we prohibit it?\nIn , NAME points out that status codes 204, 205, and 206 don't make sense with the capsule protocol. Should we prohibit them?\nThe spec currently says: In HTTP/1.x, the message content is generally delineated by the Content-Length header field or the Transfer-Encoding; another request can follow on the same connection. One of the (many) ways in which CONNECT and Upgrade are special is that in H1 it takes over the connection and makes the request extend to the end of the connection, preventing any further requests from being made. The description attempts to be generic about method, but implicitly assumes an indefinite length stream of bytes which (in H1) is only true of CONNECT or Upgrade. This concept is correct for what we have in mind, but the boundaries aren't crisply described right now. NAME may have opinions.\nThe sentence right after the bit you pasted is: That constrains this to CONNECT/Upgrade/new methods. Is that not enough?\nI think \"new methods\" is probably the most questionable. It's not clear to me that a new method can obtain the special behaviors of CONNECT. If it were restricted to upgrade tokens (whether with HTTP/1.1 Upgrade or RFC 8441 Extended CONNECT), I think that would address the concern.\nI believe what he means is this definition of a data stream only holds for CONNECT requests. There is no mechanism to extend that to new request methods since all other methods are required to have length-delimited data (or no data).\nLikewise, I don't know of any reason we would want to support such extension methods in the future, since that is a known failure point for CONNECT. It is far simpler for a new method to require chunked or data frames.\nRestricting this to extended CONNECT and Upgrade sounds reasonable to me\nFor Upgrade, the new protocol takes over after the current response is complete. Hence, the description above doesn't really work for Upgrade either.\nNAME the intent of that text was to mean \"after the current response is complete\" - do you have a proposal for phrasing that would work better?\nSemantics says: URL I think is consistent with the definition of \"data stream\" in this document. 
(And in fact uses the term \"data stream\" without defining it.) That doesn't seem to match deferring the new protocol until after the response.\nI don't think there is any technical distinction between the two descriptions; it's nomenclature to say that what follows the CONNECT is a tunnel ended by connection closure, whereas what follows Upgrade is a different protocol that might very well come back to this one (as determined by the other protocol).\nAnd point to URL This is an important part of the security analysis.\nOnly if you believe that the Sec- prefix is useful for the Capsule-Protocol header field. I don't.\nSorry, I don't mean that it's the only thing that makes it work, merely that if were going to talk about how to analyze it, this is something we should say.\nNAME can you clarify your intent? Your issue title starts with \"Add to JS\" but I suspect you didn't mean that we should modify the JavaScript specification. Would a note next to where the \"Capsule-Protocol\" header is defined resolve this?\nAaargh. I meant \"add to the security considerations\".\nThanks, that makes more sense :) Adding something to security considerations sounds good to me, I'll add text to the . Note that this was filed on the wrong repository, is the droid you're looking for.\nSo this assumption is useful in that it helps us avoid an analysis. Though my sense is that that analysis isn't especially difficult. The header field exists to enable datagram forwarding behaviour in an intermediary. We only have to show that there is no harm inflicted if a client engages that behaviour for requests, under the constraint that the client is otherwise authorized to make from a web context. The risk comes if this forwarding behaviour somehow generates different semantics as a result. As the same client also provides the stream of data that will be forwarded, it is trivial to see that they are already in a position to engage whatever semantics they choose. The only scenario that might benefit an attacker is where the elision of datagrams (or things that look like them) might be used to evade security protections that scan the stream of data. For an attack to be effective, those same protections would have to be ignorant of datagram forwarding behaviour at the same intermediary. This binding to CONNECT is a useful shortcut that means we don't need to engage with all that stuff, which wouldn't be that easy to write down. (This aversion to engagement is, I believe, a large part of why people want to slap \"Sec-\" on all new headers.)\nMT's framing of this is helpful. Adding some security consideration about what might go wrong for an intermediary seems potentially useful. Spending text on why a browser is special in this regard seems less compelling to nme.\nI was about to write a PR for this, but I think we should first If we decide that capsules are only allowed for HTTP upgrade tokens (and not new methods) then we've restricted ourselves to CONNECT and Upgrade which are both inaccessible from JavaScript.\nLGTM thanks", "new_text": "?1. A Capsule-Protocol header field with a value of ?0 has the same semantics as when the header is not present. Intermediaries MAY use this header field to allow processing of HTTP Datagrams for unknown HTTP Upgrade Tokens; note that this is only possible for HTTP Upgrade or Extended CONNECT. The Capsule-Protocol header field MUST NOT be sent multiple times on a message. 
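Editor's note: this record describes the Capsule-Protocol header field as an RFC 8941 Item whose value must be a Boolean, sent at most once per message, with ?0 equivalent to omitting the field. The sketch below shows minimal serialization and validation; the function names are illustrative, and a production implementation would use a complete Structured Fields parser rather than this strict string comparison.

```python
def serialize_capsule_protocol(enabled: bool = True) -> str:
    """Serialize the Capsule-Protocol field value as an RFC 8941 Boolean item."""
    return "?1" if enabled else "?0"

def parse_capsule_protocol(values: list) -> bool:
    """Return True if the Capsule Protocol is signalled.

    `values` holds every Capsule-Protocol field line received on the message;
    more than one is not allowed, and "?0" means the same as absent.
    """
    if len(values) > 1:
        raise ValueError("Capsule-Protocol sent multiple times on a message")
    if not values:
        return False
    value = values[0].strip()
    if value == "?1":
        return True
    if value == "?0":
        return False
    raise ValueError("Capsule-Protocol value is not an RFC 8941 Boolean")
```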
The Capsule-Protocol header field MUST NOT be used on"} {"id": "q-en-draft-ietf-masque-h3-datagram-6b45e69b7790e6accd185440f629bf075033ad27ff3013c16370fd9769355a77", "old_text": "datagrams, it's best for all implementations that support this feature to always send this Settings parameter, see setting. 8. 8.1.", "comments": "This may be answered and I just missed it. After reading I think that for HTTP/1 and HTTP/2 Datagrams (via capsules) can only be sent for new methods/upgrade tokens. But it wasn't clear to me if that same property was true for HTTP/3. For example, would it be valid to send HTTP/3 datagrams along with a GET request? If so, I think this means that for HTTP/3 a GET request could have both a body AND datagrams?\nThat's a very good point. If anything, we should be prescriptive about what endpoints do if they receive a datagram associated with an existing method like GET. I suspect drop or error are the most likely candidates here.\nI vote for close the connection with error. The datagram has the relate to something in the request. If the request doesn't or cannot articulate a datagram use, the sender of datagram is probably broken or stupid.\nSince datagrams are strongly associated with a stream, I think it'd be more consistent to make this a stream error. Regardless, folks my want an extension to GET that allows sending datagrams with it, but it makes sense to require that this extension be negotiated via headers or settings first, so the semantics of such datagrams are agreed upon beforehand. So, I'm supportive of making this a stream error, with the understanding that any extension can lift this requirement between consenting parties.\nI'd prefer to make people suffer but I could live with a stream error. Like you suggest, by letting people people agree on a change of requirements, we support extension. But the default of erroring helps datagram avoid the mess othat GET with body is.\nAlso, I reserve the right to count the number of these failures and close the connection if it reaches my annoyance threshold\nAh yes . Sounds good, I'll write up a PR for this once the design team PRs are landed.\nSounds good to me. An extension could just as easily lift a connection error requirement as it could a stream error requirement in case that had any bearing on the thinking here. (A connection error might be nice because it prevents a confused peer from sending a possibly incessant stream of datagrams. On the other hand, confused peers gonna be confused and David's point about datagrams being strongly associated with a request is a good point)\nI think it should be a stream error to enable method-agnostic gateways and proxies, which may be multiplexing different clients' traffic onto a single next-hop connection.\nIn , NAME points out that Content-Type doesn't make sense with the capsule protocol. Should we prohibit it?\nIn , NAME points out that status codes 204, 205, and 206 don't make sense with the capsule protocol. Should we prohibit them?\nThe spec currently says: In HTTP/1.x, the message content is generally delineated by the Content-Length header field or the Transfer-Encoding; another request can follow on the same connection. One of the (many) ways in which CONNECT and Upgrade are special is that in H1 it takes over the connection and makes the request extend to the end of the connection, preventing any further requests from being made. 
The description attempts to be generic about method, but implicitly assumes an indefinite length stream of bytes which (in H1) is only true of CONNECT or Upgrade. This concept is correct for what we have in mind, but the boundaries aren't crisply described right now. NAME may have opinions.\nThe sentence right after the bit you pasted is: That constrains this to CONNECT/Upgrade/new methods. Is that not enough?\nI think \"new methods\" is probably the most questionable. It's not clear to me that a new method can obtain the special behaviors of CONNECT. If it were restricted to upgrade tokens (whether with HTTP/1.1 Upgrade or RFC 8441 Extended CONNECT), I think that would address the concern.\nI believe what he means is this definition of a data stream only holds for CONNECT requests. There is no mechanism to extend that to new request methods since all other methods are required to have length-delimited data (or no data).\nLikewise, I don't know of any reason we would want to support such extension methods in the future, since that is a known failure point for CONNECT. It is far simpler for a new method to require chunked or data frames.\nRestricting this to extended CONNECT and Upgrade sounds reasonable to me\nFor Upgrade, the new protocol takes over after the current response is complete. Hence, the description above doesn't really work for Upgrade either.\nNAME the intent of that text was to mean \"after the current response is complete\" - do you have a proposal for phrasing that would work better?\nSemantics says: URL I think is consistent with the definition of \"data stream\" in this document. (And in fact uses the term \"data stream\" without defining it.) That doesn't seem to match deferring the new protocol until after the response.\nI don't think there is any technical distinction between the two descriptions; it's nomenclature to say that what follows the CONNECT is a tunnel ended by connection closure, whereas what follows Upgrade is a different protocol that might very well come back to this one (as determined by the other protocol).\nAnd point to URL This is an important part of the security analysis.\nOnly if you believe that the Sec- prefix is useful for the Capsule-Protocol header field. I don't.\nSorry, I don't mean that it's the only thing that makes it work, merely that if were going to talk about how to analyze it, this is something we should say.\nNAME can you clarify your intent? Your issue title starts with \"Add to JS\" but I suspect you didn't mean that we should modify the JavaScript specification. Would a note next to where the \"Capsule-Protocol\" header is defined resolve this?\nAaargh. I meant \"add to the security considerations\".\nThanks, that makes more sense :) Adding something to security considerations sounds good to me, I'll add text to the . Note that this was filed on the wrong repository, is the droid you're looking for.\nSo this assumption is useful in that it helps us avoid an analysis. Though my sense is that that analysis isn't especially difficult. The header field exists to enable datagram forwarding behaviour in an intermediary. We only have to show that there is no harm inflicted if a client engages that behaviour for requests, under the constraint that the client is otherwise authorized to make from a web context. The risk comes if this forwarding behaviour somehow generates different semantics as a result. 
As the same client also provides the stream of data that will be forwarded, it is trivial to see that they are already in a position to engage whatever semantics they choose. The only scenario that might benefit an attacker is where the elision of datagrams (or things that look like them) might be used to evade security protections that scan the stream of data. For an attack to be effective, those same protections would have to be ignorant of datagram forwarding behaviour at the same intermediary. This binding to CONNECT is a useful shortcut that means we don't need to engage with all that stuff, which wouldn't be that easy to write down. (This aversion to engagement is, I believe, a large part of why people want to slap \"Sec-\" on all new headers.)\nMT's framing of this is helpful. Adding some security consideration about what might go wrong for an intermediary seems potentially useful. Spending text on why a browser is special in this regard seems less compelling to nme.\nI was about to write a PR for this, but I think we should first If we decide that capsules are only allowed for HTTP upgrade tokens (and not new methods) then we've restricted ourselves to CONNECT and Upgrade which are both inaccessible from JavaScript.\nLGTM thanks", "new_text": "datagrams, it's best for all implementations that support this feature to always send this Settings parameter, see setting. Since use of the Capsule Protocol is restricted to new HTTP Upgrade Tokens, it is not accessible from Web Platform APIs (such as those commonly accessed via JavaScript in web browsers). 8. 8.1."} {"id": "q-en-draft-ietf-masque-h3-datagram-5f2171df0ab9f415d5eada9d0120c4f52c8070a767045312d378961e2f37b8c4", "old_text": "the security and congestion-control properties of QUIC. However, QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames when the application protocol running over QUIC is HTTP/3. It associates datagrams with client-initiated bidirectional streams. Additionally, this document defines the Capsule Protocol that can convey datagrams over prior versions of HTTP. 1.", "comments": "In Section 2 we start with the statement So let's just use that clearer description in the abstract and introduction too,", "new_text": "the security and congestion-control properties of QUIC. However, QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames with HTTP/3 by association with HTTP requests. Additionally, this document defines the Capsule Protocol that can convey datagrams over prior versions of HTTP. 1."} {"id": "q-en-draft-ietf-masque-h3-datagram-5f2171df0ab9f415d5eada9d0120c4f52c8070a767045312d378961e2f37b8c4", "old_text": "leveraging the security and congestion-control properties of QUIC. However, QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames when the application protocol running over QUIC is HTTP/3 H3. It associates datagrams with client-initiated bidirectional streams. Additionally, this document defines the Capsule Protocol that can convey datagrams over prior versions of HTTP. This document is structured as follows:", "comments": "In Section 2 we start with the statement So let's just use that clearer description in the abstract and introduction too,", "new_text": "leveraging the security and congestion-control properties of QUIC. 
However, QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames with HTTP/3 H3 by association with HTTP requests. Additionally, this document defines the Capsule Protocol that can convey datagrams over prior versions of HTTP. This document is structured as follows:"} {"id": "q-en-draft-ietf-masque-h3-datagram-896c9afe8b4ccf3abbd83a25ea7bfae08beb9a4622a3187d03f534f577fb8e1d", "old_text": "large as to not be usable, the implementation SHOULD discard the capsule without buffering its contents into memory. Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If an HTTP extension that uses HTTP Datagrams is only defined over transports that support QUIC DATAGRAM frames, it might not need a stream encoding. Additionally, HTTP extensions can use HTTP Datagrams with their own data stream protocol. However, new HTTP extensions that wish to use HTTP Datagrams SHOULD use the Capsule Protocol unless they have a good reason not to. 4.", "comments": "This is a good step but I think it overlooks one of the points that Magnus articulated. If an HTTP extension does what we say here, it might fall foul of intermediaries and protocol version conversion. That not some we have to solve but I suspect if we can concisely describe those considerations we would address the issue.\nGood point, added text\nThanks, this is much clearer and is a descent explanation of the issues and motivation behind the should. I guess a more correct part will require much more text.\nThe first SHOULD should be a MUST because there is a clear exception specified but in all other cases this is a MUST. The second SHOULD should really be a MUST. Or why is that a SHOULD? If there is a reason for a SHOULD this needs to be explained but I think it would break interoperability.\nI personally preferred MUST, however the consensus of the design team was SHOULD, and that was confirmed by the WG. Let's no reopen that unless we have new information.\nOn , NAME said:\nI think the main motivation was that the MUST was unenforceable, and that there will be intermediaries that explicitly want to block unknown extensions for security reasons.\nWhatever we might prefer, isn't this a case where whatever the specification says, intermediaries will do what they want?\nI agree. That's why in the interest of making progress I'd rather move forward with the design team consensus instead of reopening this topic.\ndesign team outputs don't have any special status here; perhaps the answer is not to say \"SHOULD this\" or \"MUST that\" but to instead articulate the intent behind the design and the potential consequences of not complying with that intent. Perhaps the key reason for mandating forwarding is that the meaning of various protocol elements is only valid in the context of all of the messages that an endpoint says. That is, you might send a bunch of datagram capsules that have meaning derived from other capsules. Remove the other capsules and you change the meaning of those datagram capsules. An intermediary that adds, modifies, or removes any capsules needs to understand the totality of the effect of their interventions. This is why I remain skeptical about the value of the generic capsule protocol. It only has value if it can be acted upon generically, but it is not obvious that it can be. 
Put differently, the existence of a generic capsule protocol implies a number of constraints on the operation of extensions that are not obvious and not clearly articulated. NAME might be asking about similar effects in .\nSo lets start with explaining the interoperability issue that is at the core of this: So the draft currently has several statements that interact with each other to cause this interoperability issue. Section 3.5 says: 'Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If an HTTP extension that uses HTTP Datagrams is only defined over transports that support QUIC DATAGRAM frames, it might not need a stream encoding.' Section 3.5 also says: 'An intermediary can reencode HTTP Datagrams as it forwards them. In other words, an intermediary MAY send a DATAGRAM capsule to forward an HTTP Datagram which was received in a QUIC DATAGRAM frame, and vice versa.' Thus the following may occur: Endpoint 1 sends a HTTP datagram as HTTP/3 Datagram over QUIC datagram. The HTTP intermediary re-encode this blind of its context to an Capsule Datagram and forward it to another HTTP intermediary that forwards it to an HTTP/3 endpoint using Datagram Capsule. As this HTTP datagram usage is one that does not require to support of Capsule Datagram this endpoint is not setup to handle this Datagram capsule and discards it. So this can either be fixed in various ways, not only what this issue text suggests: requiring the Datagram capsule support for all HTTP datagram handling nodes. In other words change the first quote. Require the HTTP intermediary to re-encode any Datagram capsules to HTTP/3 datagram if the next hop is know to support it. i.e. make additional requirements on how HTTP Intermediaries act when forwarding. Forbid blind forwarding of HTTP datagram in HTTP intermediaries. I think we should settle this part before going back to see how the text Mirja suggested changing matter. I think 1 and 2 are both working solutions, and would like to avoid 3. I think there is also a question on the implication of what it means to support HTTP datagram when one is an HTTP/3 endpoint or intermediary. Who is expected to support the Datagram capsule if the next node is not capable? So I think at the bottom this issue is really the question about the possibility to convert between HTTP/3 Datagram and Datagram capsule. I think we need to settle that question first. It is a question of what requirements one have on endpoints vs intermediaries. Lets start in that end.\nI actually think I prefer your option 3, caveated by saying \"if you're an intermediary and you don't know the upgrade token, don't insert datagram capsules\" . What concern do you have about option 3?\nOkay, I am skeptical against 3) on how it may make deployment of future extensions that require HTTP Datagram but doesn't need other than Intermediary forwarding. In that case deployment could be hampered if not all HTTP intermediaries are also HTTP/3 capable. It appears that using capsule protocol would be easier to enable over a legacy system. That might be a misconception from my part.\nNAME Thank you for clarifying, now I understand where the disconnect is. The draft currently says and you interpreted that as but that's not what we intended, we meant . I've written up to remedy this\nNAME great that we managed to sort out what was the issue. 
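[Editor's illustration] For readers trying to picture the re-encoding being debated here, a minimal Python sketch follows. It assumes the quarter-stream-ID prefix used for QUIC DATAGRAM frame payloads and treats everything after that prefix (optional context ID plus payload) as opaque bytes that a DATAGRAM capsule would carry unchanged; the function names are illustrative and not taken from the draft.

    def encode_varint(v):
        # QUIC variable-length integer encoding (RFC 9000, Section 16).
        if v < 0x40:
            return v.to_bytes(1, "big")
        if v < 0x4000:
            return (v | 0x4000).to_bytes(2, "big")
        if v < 0x40000000:
            return (v | 0x80000000).to_bytes(4, "big")
        return (v | 0xC000000000000000).to_bytes(8, "big")

    def decode_varint(buf, offset=0):
        length = 1 << (buf[offset] >> 6)
        value = int.from_bytes(buf[offset:offset + length], "big") & ((1 << (8 * length - 2)) - 1)
        return value, offset + length

    def quic_datagram_to_capsule_data(datagram_payload):
        # Strip the leading quarter stream ID; the remaining bytes
        # ([context ID +] payload) are carried opaquely in a DATAGRAM capsule.
        quarter_stream_id, offset = decode_varint(datagram_payload)
        return quarter_stream_id * 4, datagram_payload[offset:]

    def capsule_data_to_quic_datagram(request_stream_id, capsule_data):
        # Prepend the quarter stream ID of the request stream the capsule arrived on.
        return encode_varint(request_stream_id // 4) + capsule_data

The point of the sketch is that the conversion is purely mechanical, which is why the interoperability question turns on whether the receiving endpoint expects capsules at all, not on the bytes themselves.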
As Lucas noted in the PR unless all HTTP Datagram implementation support and accept receiving Datagram capsule, even if not intended by the HTTP Extension, if unaware of HTTP extension transformation is allowed, it can fail. I think we need a bit more feedback on where people stand in this aspect.\nThanks NAME If the HTTP extension doesn't use capsules, then the implementation won't parse the data stream as capsules so there is no risk of accidentally receiving capsules. We added text to to clarify the risks of not using the capsule protocol.\nI think this was closed prematurely because the issue originally was about MUST or SHOULD and I don't think we concluded on this. If the argument is that this is not enforceable, I think we should not use normatively language at all (and say something like \"intermediaries are expected ...\"). However, using SHOULD were we actually mean MUST, weakens the requirement and risk that this is misunderstood to be optional (while I think this is the core of the whole capsule protocol).\nReopening to discuss SHOULD vs MUST.\nI'd consider capsules to be similar to header fields (headers specifically, not trailers). Here's what HTTP says about them in URL >A proxy MUST forward unrecognized header fields unless the field name is listed in the header field () or the proxy is specifically configured to block, or otherwise transform, such fields. Other recipients SHOULD ignore unrecognized header and trailer fields. Adhering to these requirements allows HTTP's functionality to be extended without updating or removing deployed intermediaries. Yeah, in actuality an intermediary can do lots of stuff, and in practice they do. But if a MUST is good enough for HTTP headers, why isn't it here?\nFrom memory it was either NAME or NAME who cared strongly about this not being a MUST. I'm fine either way as long as we don't drag this one needlessly.\nTo summarize the discussion on this issue up to this point: we're debating whether to tell intermediaries that they MUST or SHOULD forward unknown capsules unmodified implementors of intermediaries will not react to either one of these words differently endpoints have no way of enforcing this MUST some folks felt very strongly against MUST the working group landed on this design and no new information has surfaced since Based on the points above, there is no reason to hold up document progress based on a single word that will not have any impact. Therefore I'm closing this issue with no action. Hearing no objections, we\u2019re declaring consensus. Thanks again to the participants of the Design Team for their hard work in preparing these changes, and to everyone else for offering reviews and feedback! Best, Chris and Eric", "new_text": "large as to not be usable, the implementation SHOULD discard the capsule without buffering its contents into memory. Note that it is possible for an HTTP extension to use HTTP Datagrams without using the Capsule Protocol. For example, if an HTTP extension that uses HTTP Datagrams is only defined over transports that support QUIC DATAGRAM frames, it might not need a stream encoding. Additionally, HTTP extensions can use HTTP Datagrams with their own data stream protocol. However, new HTTP extensions that wish to use HTTP Datagrams SHOULD use the Capsule Protocol as failing to do so will make it harder for the HTTP extension to support versions of HTTP other than HTTP/3 and will prevent interoperability with intermediaries that only support the Capsule Protocol. 
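[Editor's illustration] As a rough sketch of why the stream encoding helps intermediaries and versions of HTTP other than HTTP/3, here is a generic capsule reader in Python, assuming the usual varint type / varint length / value framing; the set of "known" types is illustrative only (an implementation would consult the registry), and nothing here is normative.

    def iter_capsules(stream_bytes):
        # Yield (type, value) pairs from a buffered capsule stream.
        def decode_varint(buf, offset):
            length = 1 << (buf[offset] >> 6)
            value = int.from_bytes(buf[offset:offset + length], "big") & ((1 << (8 * length - 2)) - 1)
            return value, offset + length
        offset = 0
        while offset < len(stream_bytes):
            capsule_type, offset = decode_varint(stream_bytes, offset)
            capsule_length, offset = decode_varint(stream_bytes, offset)
            yield capsule_type, stream_bytes[offset:offset + capsule_length]
            offset += capsule_length

    # A Capsule-Protocol-only intermediary can forward capsules whose type it
    # does not recognize without interpreting them; 0x01-0x03 are the types
    # named in the excerpts above (illustrative set, not a registry).
    KNOWN_CAPSULE_TYPES = {0x01, 0x02, 0x03}

Because the reader never needs to understand a capsule's value to delimit it, unknown extension capsules can be passed through intact, which is the interoperability property the preceding paragraph is relying on.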
4."} {"id": "q-en-draft-ietf-masque-h3-datagram-22ea6c58fee4f52524884e083a6fc93c5c0a022c2db30b9d2621e2470dc6e475", "old_text": "8. This document does not have additional security considerations beyond those defined in QUIC and DGRAM. 9.", "comments": "As discussed at IETF 110, some MASQUE servers may prefer to avoid sticking out (i.e. they may wish to be indistinguishable from a non-MASQUE-capable HTTP/3 server). The H3_DATAGRAM SETTINGS parameter may stick out. Therefore, we should add a note about this to the Security Considerations section. A simple solution could be to encourage widespread HTTP/3 servers to always send this.", "new_text": "8. Since this feature requires sending an HTTP/3 Settings parameter, it \"sticks out\". In other words, probing clients can learn whether a server supports this feature. Implementations that support this feature SHOULD always send this Settings parameter to avoid leaking the fact that there are applications using HTTP/3 datagrams enabled on this endpoint. 9."} {"id": "q-en-draft-ietf-masque-h3-datagram-56b6dfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb", "old_text": "given QUIC connection, HTTP datagrams contain two layers of multiplexing. First, the QUIC DATAGRAM frame payload starts with an encoded stream identifier that associates the datagram with a given QUIC stream. Second, datagrams carry a context identifier (see datagram-contexts) that allows multiplexing multiple datagram contexts related to a given HTTP request. Conceptually, the first layer of multiplexing is per-hop, while the second is end-to-end.", "comments": "This change allows clients to disable contexts for a given stream. This uses a new REGISTERDATAGRAMNO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. I plan to implement this in my WIP code.\nLooks like there's support for merging this. Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. 
It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall.", "new_text": "given QUIC connection, HTTP datagrams contain two layers of multiplexing. First, the QUIC DATAGRAM frame payload starts with an encoded stream identifier that associates the datagram with a given QUIC stream. Second, datagrams optionally carry a context identifier (see datagram-contexts) that allows multiplexing multiple datagram contexts related to a given HTTP request. 
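[Editor's illustration] A minimal sketch of the two-layer encoding just described, assuming client-initiated bidirectional request streams (stream IDs divisible by four, hence the quarter stream ID) and treating the context ID as optional per the negotiation discussed later; the helper name is made up for illustration.

    def encode_varint(v):
        # QUIC variable-length integer (RFC 9000, Section 16).
        if v < 0x40:
            return v.to_bytes(1, "big")
        if v < 0x4000:
            return (v | 0x4000).to_bytes(2, "big")
        if v < 0x40000000:
            return (v | 0x80000000).to_bytes(4, "big")
        return (v | 0xC000000000000000).to_bytes(8, "big")

    def build_h3_datagram(request_stream_id, payload, context_id=None):
        # First layer: per-hop association with a request stream.
        frame = encode_varint(request_stream_id // 4)
        # Second layer (optional, end-to-end): the context ID.
        if context_id is not None:
            frame += encode_varint(context_id)
        return frame + payload

    # Example: datagram for request stream 0, context 0, carrying b"hello".
    assert build_h3_datagram(0, b"hello", context_id=0) == b"\x00\x00hello"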
Conceptually, the first layer of multiplexing is per-hop, while the second is end-to-end."} {"id": "q-en-draft-ietf-masque-h3-datagram-56b6dfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb", "old_text": "the datagram: the context identifier then maps to a compression context that the receiver can use to reconstruct the elided data. Contexts are identified within the scope of a given request by a numeric value, referred to as the context ID. A context ID is a 62-bit integer (0 to 2^62-1). While stream IDs are a per-hop concept, context IDs are an end-to-end concept. In other words, if a datagram travels through one or more", "comments": "This change allows clients to disable contexts for a given stream. This uses a new REGISTERDATAGRAMNO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. I plan to implement this in my WIP code.\nLooks like there's support for merging this. Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? 
Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall.", "new_text": "the datagram: the context identifier then maps to a compression context that the receiver can use to reconstruct the elided data. Contexts are optional, their use is negotiated on each request stream using registration capsules, see register-capsule and register-no- context-capsule. When contexts are used, they are identified within the scope of a given request by a numeric value, referred to as the context ID. A context ID is a 62-bit integer (0 to 2^62-1). While stream IDs are a per-hop concept, context IDs are an end-to-end concept. In other words, if a datagram travels through one or more"} {"id": "q-en-draft-ietf-masque-h3-datagram-56b6dfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb", "old_text": "streams, and those have stream IDs that are divisible by four.) A variable-length integer indicating the context ID of the datagram (see datagram-contexts). The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty.", "comments": "This change allows clients to disable contexts for a given stream. This uses a new REGISTERDATAGRAMNO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. 
I plan to implement this in my WIP code.\nLooks like there's support for merging this. Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. 
I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall.", "new_text": "streams, and those have stream IDs that are divisible by four.) A variable-length integer indicating the context ID of the datagram (see datagram-contexts). Whether or not this field is present depends on which registration capsules were exchanged on the associated stream: if a REGISTER_DATAGRAM_CONTEXT capsule (see register-capsule) has been sent or received on this stream, then the field is present; if a REGISTER_DATAGRAM_NO_CONTEXT capsule (see register-no-context-capsule) has been sent or received, then this field is absent; if neither has been sent or received, then it is not yet possible to parse this datagram and the receiver MUST either drop that datagram silently or buffer it temporarily while awaiting the registration capsule. The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty."} {"id": "q-en-draft-ietf-masque-h3-datagram-56b6dfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb", "old_text": "Servers MUST NOT send a REGISTER_DATAGRAM_CONTEXT capsule on a stream before they have received at least one REGISTER_DATAGRAM_CONTEXT capsule from the client on that stream. This ensures that clients control whether datagrams are allowed for a given request. If a client receives a REGISTER_DATAGRAM_CONTEXT capsule on a stream where the client has not yet sent a REGISTER_DATAGRAM_CONTEXT capsule, the client MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.2. The CLOSE_DATAGRAM_CONTEXT capsule (type=0x01) allows an endpoint to inform its peer that it will no longer send or parse received datagrams associated with a given context ID. Its Capsule Data field", "comments": "This change allows clients to disable contexts for a given stream. This uses a new REGISTERDATAGRAMNO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. I plan to implement this in my WIP code.\nLooks like there's support for merging this. Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. 
We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). 
If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall.", "new_text": "Servers MUST NOT send a REGISTER_DATAGRAM_CONTEXT capsule on a stream before they have received at least one REGISTER_DATAGRAM_CONTEXT capsule or one REGISTER_DATAGRAM_NO_CONTEXT capsule from the client on that stream. This ensures that clients control whether datagrams are allowed for a given request. If a client receives a REGISTER_DATAGRAM_CONTEXT capsule on a stream where the client has not yet sent a REGISTER_DATAGRAM_CONTEXT capsule, the client MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. Servers MUST NOT send a REGISTER_DATAGRAM_CONTEXT capsule on a stream where it has received a REGISTER_DATAGRAM_NO_CONTEXT capsule. If a client receives a REGISTER_DATAGRAM_CONTEXT capsule on a stream where the client has sent a REGISTER_DATAGRAM_NO_CONTEXT capsule, the client MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.2. The REGISTER_DATAGRAM_NO_CONTEXT capsule (type=0x03) allows a client to inform the server that datagram contexts will not be used with this stream. It also informs the server of the encoding and semantics of datagrams associated with this stream. Its Capsule Data field consists of: A string of comma-separated key-value pairs to enable extensibility, see the definition of the same field in register- capsule for details. Note that this registration is unilateral and bidirectional: the client unilaterally defines the semantics it will apply to the datagrams it sends and receives with this stream. Endpoints MUST NOT send DATAGRAM frames without a Context ID until they have either sent or received a REGISTER_DATAGRAM_NO_CONTEXT Capsule. However, due to reordering, an endpoint that receives a DATAGRAM frame before receiving either a REGISTER_DATAGRAM_CONTEXT capsule or a REGISTER_DATAGRAM_NO_CONTEXT capsule MUST NOT treat it as an error, it SHALL instead drop the DATAGRAM frame silently, or buffer it temporarily while awaiting a REGISTER_DATAGRAM_NO_CONTEXT capsule or the corresponding REGISTER_DATAGRAM_CONTEXT capsule. Servers MUST NOT send the REGISTER_DATAGRAM_NO_CONTEXT capsule. If a client receives a REGISTER_DATAGRAM_NO_CONTEXT capsule, the client MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. Clients MUST NOT send more than one REGISTER_DATAGRAM_NO_CONTEXT capsule on a stream. If a server receives a second REGISTER_DATAGRAM_NO_CONTEXT capsule on the same stream, the server MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. Clients MUST NOT send a REGISTER_DATAGRAM_NO_CONTEXT capsule on a stream before they have sent at least one HEADERS frame on that stream. This removes the need to buffer REGISTER_DATAGRAM_CONTEXT capsules when the server needs information from headers to determine how to react to the capsule. If a server receives a REGISTER_DATAGRAM_NO_CONTEXT capsule on a stream that hasn't yet received a HEADERS frame, the server MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 
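[Editor's illustration] The registration rules above amount to a small per-stream state machine on the receiving side. The following Python sketch (names are illustrative, not from the draft) shows one way a receiver might decide whether an incoming datagram carries a context ID, buffering datagrams that arrive before either registration capsule, as the text permits.

    class ContextMode:
        UNKNOWN = "unknown"            # no registration capsule seen yet
        WITH_CONTEXTS = "contexts"     # REGISTER_DATAGRAM_CONTEXT seen on this stream
        NO_CONTEXTS = "no_contexts"    # REGISTER_DATAGRAM_NO_CONTEXT seen on this stream

    def handle_datagram(mode, body, pending):
        # Returns (context_id or None, payload) once parsing is possible,
        # or None if the datagram had to be buffered for later.
        if mode == ContextMode.UNKNOWN:
            pending.append(body)       # buffering; silently dropping is also permitted
            return None
        if mode == ContextMode.NO_CONTEXTS:
            return None, body          # the whole body is application payload
        # WITH_CONTEXTS: the body starts with a variable-length context ID.
        length = 1 << (body[0] >> 6)
        context_id = int.from_bytes(body[:length], "big") & ((1 << (8 * length - 2)) - 1)
        return context_id, body[length:]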
Clients MUST NOT send both REGISTER_DATAGRAM_CONTEXT capsules and REGISTER_DATAGRAM_NO_CONTEXT capsules on the same stream. If a server receives both a REGISTER_DATAGRAM_CONTEXT capsule and a REGISTER_DATAGRAM_NO_CONTEXT capsule on the same stream, the server MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.3. The CLOSE_DATAGRAM_CONTEXT capsule (type=0x01) allows an endpoint to inform its peer that it will no longer send or parse received datagrams associated with a given context ID. Its Capsule Data field"} {"id": "q-en-draft-ietf-masque-h3-datagram-56b6dfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb", "old_text": "MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.3. The DATAGRAM capsule (type=0x02) allows an endpoint to send a datagram frame over an HTTP stream. This is particularly useful when", "comments": "This change allows clients to disable contexts for a given stream. This uses a new REGISTERDATAGRAMNO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. I plan to implement this in my WIP code.\nLooks like there's support for merging this. Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. 
In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall.", "new_text": "MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.4. The DATAGRAM capsule (type=0x02) allows an endpoint to send a datagram frame over an HTTP stream. This is particularly useful when"} {"id": "q-en-draft-ietf-masque-h3-datagram-56b6dfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb", "old_text": "Its Capsule Data field consists of: A variable-length integer indicating the context ID of the datagram (see datagram-contexts). The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty.", "comments": "This change allows clients to disable contexts for a given stream. This uses a new REGISTERDATAGRAMNO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. I plan to implement this in my WIP code.\nLooks like there's support for merging this. 
Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). 
If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall.", "new_text": "Its Capsule Data field consists of: A variable-length integer indicating the context ID of the datagram (see datagram-contexts). Whether or not this field is present depends on which registration capsules were exchanged on the associated stream: if a REGISTER_DATAGRAM_CONTEXT capsule (see register-capsule) has been sent or received on this stream, then the field is present; if a REGISTER_DATAGRAM_NO_CONTEXT capsule (see register-no-context-capsule) has been sent or received, then this field is absent; if neither has been sent or received, then it is not yet possible to parse this datagram and the receiver MUST either drop that datagram silently or buffer it temporarily while awaiting the registration capsule. The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty."} {"id": "q-en-draft-ietf-mops-streaming-opcons-84ab13ebd4a52d98b59a83b508dcdfbace60b7170c50d9246e182f974d08c2a7", "old_text": "3.2.1. There are many reasons why path characteristics might change suddenly, but we can divide these reasons into two categories: If the path topology changes. For example, routing changes, which can happen in normal operation, may result in traffic being", "comments": "This tries to address both of:\nDiscussion from editor's conference call: Can we provide a reference for useful properties? Network analytics, application analytics, or both? Currently application analytics, which is OK. Is that clear enough? Who is supposed to notice changes from an expected baseline? Is mentioning QLOG helpful? Have we said clearly enough that baselines themselves can fluctuate, before the kinds of path characteristic changes we're talking about kick in?\nI think just \"baseline\" is clear enough to me, in context with this section listing some relevant stats. I guess \"baseline pattern of some relevant statistics\" is maybe more precise, but I'm not sure \"baseline pattern\" really gets there.\nfrom NAME 's review (URL) Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder.\nThis comment helpfully points out the changing network architecture (a LOT of wireless), and we should be clearer about what we're talking about in Section 3.2 (our mental was mostly wired) and in 5.5.3 (where we focused on wireless). This material should go in Section 3.2, and should be referenced in 5.5.3..\nRelated to\nFrom NAME (URL) Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful.\nRelated to\nThis works for me. I'll merge it. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. 
What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. 
I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "3.2.1. There are many reasons why path characteristics might change in normal operation, for example: If the path topology changes. For example, routing changes, which can happen in normal operation, may result in traffic being"} {"id": "q-en-draft-ietf-mops-streaming-opcons-84ab13ebd4a52d98b59a83b508dcdfbace60b7170c50d9246e182f974d08c2a7", "old_text": "If cross traffic that also traverses part or all of the same path topology increases or decreases, especially if this new cross traffic is \"inelastic,\" and does not, itself, respond to indications of path congestion. To recognize that a path carrying streaming media is not behaving the way it normally does, having an expected baseline that describes the way it normally does is fundamental. Analytics that aid in that recognition can be more or less sophisticated and can be as simple as noticing that the apparent round trip times for media traffic carried over TCP transport on some paths are suddenly and significantly longer than usual. Passive monitors can detect changes in the elapsed time between the acknowledgements for specific TCP segments from a TCP receiver since TCP octet sequence numbers and acknowledgements for those sequence numbers are carried in the clear, even if the TCP payload itself is encrypted. See reliable-behavior for more information. As transport protocols evolve to encrypt their transport header fields, one side effect of increasing encryption is the kind of passive monitoring, or even \"performance enhancement\" (RFC3135) that was possible with the older transport protocols (UDP, described in unreliable-behavior and TCP, described in reliable-behavior) is no longer possible with newer transport protocols such as QUIC (described in quic-behavior). The IETF has specified a \"latency spin bit\" mechanism in Section 17.4 of RFC9000 to allow passive latency monitoring from observation points on the network path throughout the duration of a connection, but currently chartered work in the IETF is focusing on endpoint monitoring and reporting, rather than on passive monitoring. One example is the \"qlog\" mechanism I-D.ietf-quic-qlog-main-schema, a protocol-agnostic mechanism used to provide better visibility for encrypted protocols such as QUIC (I-D.ietf-quic-qlog-quic-events) and for HTTP/3 (I-D.ietf-quic-qlog-h3-events). 3.3.", "comments": "This tries to address both of:\nDiscussion from editor's conference call: Can we provide a reference for useful properties? Network analytics, application analytics, or both? Currently application analytics, which is OK. Is that clear enough? Who is supposed to notice changes from an expected baseline? Is mentioning QLOG helpful? Have we said clearly enough that baselines themselves can fluctuate, before the kinds of path characteristic changes we're talking about kick in?\nI think just \"baseline\" is clear enough to me, in context with this section listing some relevant stats. I guess \"baseline pattern of some relevant statistics\" is maybe more precise, but I'm not sure \"baseline pattern\" really gets there.\nfrom NAME 's review (URL) Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. 
Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder.\nThis comment helpfully points out the changing network architecture (a LOT of wireless), and we should be clearer about what we're talking about in Section 3.2 (our mental was mostly wired) and in 5.5.3 (where we focused on wireless). This material should go in Section 3.2, and should be referenced in 5.5.3..\nRelated to\nFrom NAME (URL) Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful.\nRelated to\nThis works for me. I'll merge it. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. 
The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. 
If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "If cross traffic that also traverses part or all of the same path topology increases or decreases, especially if this new cross traffic is \"inelastic,\" and does not respond to indications of path congestion. Wireless links (Wi-Fi, 5G, LTE, etc.) often see rapid changes to capacity from changes in radio interference and signal strength as endpoints move. To recognize that a path carrying streaming media has experienced a change, maintaining a baseline that captures its prior properties is fundamental. Analytics that aid in that recognition can be more or less sophisticated and can usefully operate on several different time scales, from milliseconds to hours or days. Useful properties to monitor for changes can include: round-trip times loss rate (and explicit congestion notification (ECN) (RFC3168 when in use) out of order packet rate packet and byte receive rate application level goodput properties of other connections carrying competing traffic, in addition to the connections carrying the streaming media externally provided measurements, for example from network cards or metrics collected by the operating system 3.3."} {"id": "q-en-draft-ietf-mops-streaming-opcons-84ab13ebd4a52d98b59a83b508dcdfbace60b7170c50d9246e182f974d08c2a7", "old_text": "5.5.3. As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detection have emerged from radio interference and signal strength effects. Each of these technologies can experience sudden changes in capacity as the end user device moves from place to place and encounters new sources of interference. Microwave ovens, for example, can cause a throughput degradation of more than a factor of 2 while active Micro. 5G and LTE likewise can easily see rate variation by a factor of 2 or more over a span of seconds as users move around. These swings in actual transport capacity can result in user experience issues that can be exacerbated by insufficiently responsive ABR algorithms. 5.6.", "comments": "This tries to address both of:\nDiscussion from editor's conference call: Can we provide a reference for useful properties? Network analytics, application analytics, or both? Currently application analytics, which is OK. Is that clear enough? Who is supposed to notice changes from an expected baseline? Is mentioning QLOG helpful? 
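The 3.2.1 replacement text quoted in this record lists path properties worth monitoring against a baseline (round-trip time, loss rate, goodput, and so on). As a purely illustrative sketch, not taken from the draft, the snippet below shows one way a client-side analytic could keep an exponentially weighted baseline for a single property such as RTT and flag a significant departure from it; the smoothing weight, deviation factor, and warm-up length are invented values.

```python
# Hypothetical sketch: track an EWMA baseline for one path property (e.g. RTT in ms)
# and flag samples that depart significantly from it. Names and thresholds are
# illustrative only; they are not specified by the draft.

class BaselineTracker:
    def __init__(self, alpha=0.1, deviation_factor=3.0, warmup=4):
        self.alpha = alpha                  # EWMA smoothing weight
        self.deviation_factor = deviation_factor
        self.warmup = warmup                # samples absorbed before flagging
        self.count = 0
        self.mean = 0.0                     # smoothed mean of the property
        self.dev = 0.0                      # smoothed mean absolute deviation

    def update(self, sample):
        """Feed one measurement; return True if it departs from the baseline."""
        self.count += 1
        if self.count == 1:
            self.mean = float(sample)
            return False
        deviation = abs(sample - self.mean)
        departed = (self.count > self.warmup and
                    deviation > self.deviation_factor * max(self.dev, 1e-9))
        # Update the baseline after the comparison so one outlier does not
        # immediately become the new normal.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * sample
        self.dev = (1 - self.alpha) * self.dev + self.alpha * deviation
        return departed


rtt = BaselineTracker()
for sample_ms in [20, 22, 19, 21, 20, 95, 110]:   # made-up RTT samples
    if rtt.update(sample_ms):
        print(f"RTT {sample_ms} ms departs from baseline ~{rtt.mean:.1f} ms")
```

The same pattern could be applied on longer time scales (hours or days) to the other properties listed in the draft text.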
Have we said clearly enough that baselines themselves can fluctuate, before the kinds of path characteristic changes we're talking about kick in?\nI think just \"baseline\" is clear enough to me, in context with this section listing some relevant stats. I guess \"baseline pattern of some relevant statistics\" is maybe more precise, but I'm not sure \"baseline pattern\" really gets there.\nfrom NAME 's review (URL) Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder.\nThis comment helpfully points out the changing network architecture (a LOT of wireless), and we should be clearer about what we're talking about in Section 3.2 (our mental was mostly wired) and in 5.5.3 (where we focused on wireless). This material should go in Section 3.2, and should be referenced in 5.5.3..\nRelated to\nFrom NAME (URL) Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful.\nRelated to\nThis works for me. I'll merge it. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. 
Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). 
I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "5.5.3. As many end devices have moved to wireless connections for the final hop (such as Wi-Fi, 5G, LTE, etc.), new problems in bandwidth detection have emerged. In most real-world operating environments, wireless links can often experience sudden changes in capacity as the end user device moves from place to place or encounters new sources of interference. Microwave ovens, for example, can cause a throughput degradation in Wi-Fi of more than a factor of 2 while active Micro. 5G and LTE likewise can easily see rate variation by a factor of 2 or more over a span of seconds as users move around. These swings in actual transport capacity can result in user experience issues when interacting with ABR algorithms that aren't tuned to handle the capacity variation gracefully. 5.6."} {"id": "q-en-draft-ietf-mops-streaming-opcons-b082ecc0ce0e2fe0bcfc25a977ff2b521ff38a90dc8a6a166cb6eb2b4e4126e1", "old_text": "Operational Considerations for Streaming Media draft-ietf-mops-streaming-opcons-00 Abstract", "comments": "This is based on the post-IETF 107 MOPS meeting where we talked about this. I chased some more references.\nWe should try to improve this reference - it's a long page, additions are at the top, so we have to either scroll for a long time or search for the date to get the relevant part of the page. Maybe reach out to ATT to get fragment tags that work?", "new_text": "Operational Considerations for Streaming Media draft-ietf-mops-streaming-opcons-01 Abstract"} {"id": "q-en-draft-ietf-mops-streaming-opcons-b082ecc0ce0e2fe0bcfc25a977ff2b521ff38a90dc8a6a166cb6eb2b4e4126e1", "old_text": "watch for increasing numbers of end users uploading significant amounts of content. 3. 3.1.", "comments": "This is based on the post-IETF 107 MOPS meeting where we talked about this. 
I chased some more references.\nWe should try to improve this reference - it's a long page, additions are at the top, so we have to either scroll for a long time or search for the date to get the relevant part of the page. Maybe reach out to ATT to get fragment tags that work?", "new_text": "watch for increasing numbers of end users uploading significant amounts of content. 2.6. The causes of unpredictable usage described in sec-unpredictable were more or less the result of human choices, but we were reminded during a post-IETF 107 meeting that humans are not always in control, and forces of nature can cause enormous fluctuations in traffic patterns. In his talk, Sanjay Mishra Mishra reported that after the CoViD-19 pandemic broke out in early 2020, Comcast's streaming and web video consumption rose by 38%, with their reported peak traffic up 32% overall between March 1 to March 30 Comcast, AT&T reported a 28% jump in core network traffic (single day in April, as compared to pre stay-at-home daily average traffic), with video accounting for nearly half of all mobile network traffic, while social networking and web browsing remained the highest percentage (almost a quarter each) of overall mobility traffic ATT, and Verizon reported similar trends with video traffic up 36% over an average day (pre COVID-19) Verizon. We note that other operators saw similar spikes during this time period. Craig Labowitz Labovitz reported Weekday peak traffic increases over 45%-50% from pre-lockdown levels, A 30% increase in upstream traffic over their pre-pandemic levels, and A steady increase in the overall volume of DDoS traffic, with amounts exceeding the pre-pandemic levels by 40%. (He attributed this increase to the significant rise in gaming-related DDoS attacks (LabovitzDDoS), as gaming usage also increased.) 3. 3.1."} {"id": "q-en-draft-ietf-mops-streaming-opcons-56998967af0798ec0d9556c5e297bdc196ea219d6922432d1c594ac7e30b868b", "old_text": "Abstract This document provides an overview of operational networking issues that pertain to quality of experience in delivery of video and other high-bitrate media over the internet. 1.", "comments": "Ali, this PR looks nice. Two meta-issues (which we can talk about whenever). I changed my references in the draft to a more complete format (using ins: can result in odd formatting in the References section). We don't have to do that for all the references in -04, but that would be a good thing to do in an editorial pass. I see that you're making some terminology changes (from \"delivery\" to \"streaming\", stuff like that). I agree with the changes you made in this PR, but we may want to make sure that they are made globally, for consistency.\nI was not sure about the reference format, normally I just use the casual format: Authors, \"Title,\" Journal/Conference, Date. I saw the missing author names and realized \"ins\" would do the trick. By no means, I am tied to that format - don't know even know what \"ins\" stands for here :)\nFrom URL : \"as in the author list, ins is an abbreviation for \"initials/surname\";\" I tend to think an editorial pass on references would be better as a separate commit, where they need cleaning. Anything that somehow made it in and breaks the \"make\" needs fixing, but others maybe later?\nI suggest a definition along these lines: Streaming is transmission of a continuous content from a server to a client and its simultaneous consumption by the client. 
(\"simultaneous\" is the key here) This has two implications: 1) Server transmission rate (loosely or tightly) matches to client consumption rate. That is, no buffer overrun or underrun is desirable/acceptable. 2) Client consumption rate is also limited by real-time constraints as opposed to just bandwidth availability. That is, client cannot fetch the content not available yet.\n+1, this sgtm. But this probably implies a refactor of the intro section and the abstract, to better focus on the relevant use case.\nI can take a crack at this.\nAli has text for this, will create a PR for the issue.", "new_text": "Abstract This document provides an overview of operational networking issues that pertain to quality of experience in streaming of video and other high-bitrate media over the internet. 1."} {"id": "q-en-draft-ietf-mops-streaming-opcons-56998967af0798ec0d9556c5e297bdc196ea219d6922432d1c594ac7e30b868b", "old_text": "compound annual growth rate continuing at 34% (from Appendix D of CVNI). In many contexts, video traffic can be handled transparently as generic application-level traffic. However, as the volume of video traffic continues to grow, it's becoming increasingly important to", "comments": "Ali, this PR looks nice. Two meta-issues (which we can talk about whenever). I changed my references in the draft to a more complete format (using ins: can result in odd formatting in the References section). We don't have to do that for all the references in -04, but that would be a good thing to do in an editorial pass. I see that you're making some terminology changes (from \"delivery\" to \"streaming\", stuff like that). I agree with the changes you made in this PR, but we may want to make sure that they are made globally, for consistency.\nI was not sure about the reference format, normally I just use the casual format: Authors, \"Title,\" Journal/Conference, Date. I saw the missing author names and realized \"ins\" would do the trick. By no means, I am tied to that format - don't know even know what \"ins\" stands for here :)\nFrom URL : \"as in the author list, ins is an abbreviation for \"initials/surname\";\" I tend to think an editorial pass on references would be better as a separate commit, where they need cleaning. Anything that somehow made it in and breaks the \"make\" needs fixing, but others maybe later?\nI suggest a definition along these lines: Streaming is transmission of a continuous content from a server to a client and its simultaneous consumption by the client. (\"simultaneous\" is the key here) This has two implications: 1) Server transmission rate (loosely or tightly) matches to client consumption rate. That is, no buffer overrun or underrun is desirable/acceptable. 2) Client consumption rate is also limited by real-time constraints as opposed to just bandwidth availability. That is, client cannot fetch the content not available yet.\n+1, this sgtm. But this probably implies a refactor of the intro section and the abstract, to better focus on the relevant use case.\nI can take a crack at this.\nAli has text for this, will create a PR for the issue.", "new_text": "compound annual growth rate continuing at 34% (from Appendix D of CVNI). A substantial part of this growth is due to increased use of streaming video, although the amount of video traffic in real-time communications (for example, online videoconferencing) has also grown significantly. 
While both streaming video and videoconferencing have real-time delivery and latency requirements, these requirements vary from one application to another. For example, videoconferencing demands an end-to-end (one-way) latency of a few hundreds of milliseconds whereas live streaming can tolerate latencies of several seconds. This document specifically focuses on streaming applications and defines streaming as follows: Streaming is transmission of continuous media from a server to a client and its simultaneous consumption by the client. Here, continuous media refers to media and associated streams such as video, audio, metadata, etc. In this definition, the critical term is \"simultaneous\", as it is not considered streaming if one downloads a video file and plays it after the download is completed, which would be called download-and-play. This has two implications. First, the server's transmission rate must (loosely or tightly) match the client's consumption rate for uninterrupted playback. That is, the client must not run out of data (buffer underrun) or take more than it can keep (buffer overrun) as any excess media is simply discarded. Second, the client's consumption rate is limited not only by bandwidth availability but also by real-time constraints. That is, the client cannot fetch media that is not available yet. In many contexts, video traffic can be handled transparently as generic application-level traffic. However, as the volume of video traffic continues to grow, it's becoming increasingly important to"} {"id": "q-en-draft-ietf-mops-streaming-opcons-56998967af0798ec0d9556c5e297bdc196ea219d6922432d1c594ac7e30b868b", "old_text": "further. In this case, the immersive content is typically referred to as volumetric media. One way to represent the volumetric media is to use point clouds, where streaming a single object may easily require a bitrate of 30 Mbps or higher. Refer to PCC for more details. 2.2.", "comments": "Ali, this PR looks nice. Two meta-issues (which we can talk about whenever). I changed my references in the draft to a more complete format (using ins: can result in odd formatting in the References section). We don't have to do that for all the references in -04, but that would be a good thing to do in an editorial pass. I see that you're making some terminology changes (from \"delivery\" to \"streaming\", stuff like that). I agree with the changes you made in this PR, but we may want to make sure that they are made globally, for consistency.\nI was not sure about the reference format, normally I just use the casual format: Authors, \"Title,\" Journal/Conference, Date. I saw the missing author names and realized \"ins\" would do the trick. By no means, I am tied to that format - don't even know what \"ins\" stands for here :)\nFrom URL : \"as in the author list, ins is an abbreviation for \"initials/surname\";\" I tend to think an editorial pass on references would be better as a separate commit, where they need cleaning. Anything that somehow made it in and breaks the \"make\" needs fixing, but others maybe later?\nI suggest a definition along these lines: Streaming is transmission of a continuous content from a server to a client and its simultaneous consumption by the client. (\"simultaneous\" is the key here) This has two implications: 1) Server transmission rate (loosely or tightly) matches to client consumption rate. That is, no buffer overrun or underrun is desirable/acceptable.
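The two implications called out in the rewritten introduction above (no buffer underrun or overrun, and no fetching of media that does not exist yet) can be illustrated with a toy playback-buffer model. The sketch below is not part of the draft; the rates, buffer size, and time step are invented for illustration.

```python
# Toy model of the streaming constraint described above: the receive rate must
# roughly match the consumption rate within the limits of a finite buffer.
# All numbers are illustrative only.

def simulate_buffer(arrival_mbps, playback_mbps, buffer_cap_mb, seconds, initial_mb=1.0):
    buffered_mb = initial_mb
    for t in range(seconds):
        buffered_mb += arrival_mbps / 8.0        # media received this second (MB)
        if buffered_mb > buffer_cap_mb:
            print(f"t={t}s overrun: {buffered_mb - buffer_cap_mb:.2f} MB discarded")
            buffered_mb = buffer_cap_mb
        buffered_mb -= playback_mbps / 8.0       # media consumed this second (MB)
        if buffered_mb < 0:
            print(f"t={t}s underrun: playback stalls")
            buffered_mb = 0.0

# Arrival slower than playback eventually stalls; faster arrival eventually
# discards media once the buffer is full.
simulate_buffer(arrival_mbps=4.0, playback_mbps=5.0, buffer_cap_mb=2.0, seconds=20)
simulate_buffer(arrival_mbps=6.0, playback_mbps=5.0, buffer_cap_mb=2.0, seconds=20)
```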
2) Client consumption rate is also limited by real-time constraints as opposed to just bandwidth availability. That is, client cannot fetch the content not available yet.\n+1, this sgtm. But this probably implies a refactor of the intro section and the abstract, to better focus on the relevant use case.\nI can take a crack at this.\nAli has text for this, will create a PR for the issue.", "new_text": "further. In this case, the immersive content is typically referred to as volumetric media. One way to represent the volumetric media is to use point clouds, where streaming a single object may easily require a bitrate of 30 Mbps or higher. Refer to MPEGI and PCC for more details. 2.2."} {"id": "q-en-draft-ietf-mops-streaming-opcons-56998967af0798ec0d9556c5e297bdc196ea219d6922432d1c594ac7e30b868b", "old_text": "3.1. Adaptive BitRate (ABR) is a sort of application-level response strategy in which the receiving media player attempts to detect the available bandwidth of the network path by experiment or by observing the successful application-layer download speed, then chooses a video bitrate (among the limited number of available options) that fits within that bandwidth, typically adjusting as changes in available bandwidth occur in the network or changes in capabilities occur in the player (such as available memory, CPU, display size, etc.). The choice of bitrate occurs within the context of optimizing for some metric monitored by the video player, such as highest achievable video quality, or lowest rate of expected rebuffering events. 3.2. ABR playback is commonly implemented by video players using HLS RFC8216 or DASH DASH to perform a reliable segmented delivery of video data over HTTP. Different player implementations and receiving devices use different strategies, often proprietary algorithms (called rate adaptation or bitrate selection algorithms), to perform available bandwidth estimation/prediction and the bitrate selection. Most players only use passive observations, i.e., they do not generate probe traffic to measure the available bandwidth. This kind of bandwidth-measurement systems can experience trouble in several ways that can be affected by networking design choices.", "comments": "Ali, this PR looks nice. Two meta-issues (which we can talk about whenever). I changed my references in the draft to a more complete format (using ins: can result in odd formatting in the References section). We don't have to do that for all the references in -04, but that would be a good thing to do in an editorial pass. I see that you're making some terminology changes (from \"delivery\" to \"streaming\", stuff like that). I agree with the changes you made in this PR, but we may want to make sure that they are made globally, for consistency.\nI was not sure about the reference format, normally I just use the casual format: Authors, \"Title,\" Journal/Conference, Date. I saw the missing author names and realized \"ins\" would do the trick. By no means, I am tied to that format - don't know even know what \"ins\" stands for here :)\nFrom URL : \"as in the author list, ins is an abbreviation for \"initials/surname\";\" I tend to think an editorial pass on references would be better as a separate commit, where they need cleaning. Anything that somehow made it in and breaks the \"make\" needs fixing, but others maybe later?\nI suggest a definition along these lines: Streaming is transmission of a continuous content from a server to a client and its simultaneous consumption by the client. 
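Sections 3.1 and 3.2 of the draft text quoted in this record describe rate adaptation only in general terms (passive throughput observation, then bitrate selection from a fixed set of options). As a purely illustrative sketch, the bitrate ladder, safety margin, and function names below are invented and do not come from HLS, DASH, or the draft; real players use more elaborate, often proprietary, rate-adaptation algorithms.

```python
# Illustrative throughput-based ABR selection step, mirroring the behavior
# described in the quoted 3.1/3.2 text. Ladder and margin are invented.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]   # available renditions

def estimate_throughput_kbps(segment_bytes, download_seconds):
    """Passive estimate from the most recent segment download."""
    return (segment_bytes * 8 / 1000) / download_seconds

def select_bitrate(throughput_kbps, safety_margin=0.8):
    """Pick the highest rendition that fits under the estimated throughput."""
    budget = throughput_kbps * safety_margin
    candidates = [r for r in LADDER_KBPS if r <= budget]
    return candidates[-1] if candidates else LADDER_KBPS[0]

est = estimate_throughput_kbps(segment_bytes=1_500_000, download_seconds=3.2)
print(f"estimated {est:.0f} kbps, selecting {select_bitrate(est)} kbps rendition")
```

A selection step like this is exactly what the wide and rapid capacity variation discussed elsewhere in these records can defeat, since the passive estimate lags the actual path capacity.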
(\"simultaneous\" is the key here) This has two implications: 1) Server transmission rate (loosely or tightly) matches to client consumption rate. That is, no buffer overrun or underrun is desirable/acceptable. 2) Client consumption rate is also limited by real-time constraints as opposed to just bandwidth availability. That is, client cannot fetch the content not available yet.\n+1, this sgtm. But this probably implies a refactor of the intro section and the abstract, to better focus on the relevant use case.\nI can take a crack at this.\nAli has text for this, will create a PR for the issue.", "new_text": "3.1. Adaptive BitRate (ABR) is a sort of application-level response strategy in which the streaming client attempts to detect the available bandwidth of the network path by observing the successful application-layer download speed, then chooses a bitrate for each of the video, audio, subtitles and metadata (among the limited number of available options) that fits within that bandwidth, typically adjusting as changes in available bandwidth occur in the network or changes in capabilities occur during the playback (such as available memory, CPU, display size, etc.). The choice of bitrate occurs within the context of optimizing for some metric monitored by the client, such as highest achievable video quality or lowest chances for a rebuffering (playback stall). 3.2. ABR playback is commonly implemented by streaming clients using HLS RFC8216 or DASH DASH to perform a reliable segmented delivery of media over HTTP. Different implementations use different strategies ABRSurvey, often proprietary algorithms (called rate adaptation or bitrate selection algorithms) to perform available bandwidth estimation/prediction and the bitrate selection. Most clients only use passive observations, i.e., they do not generate probe traffic to measure the available bandwidth. This kind of bandwidth-measurement systems can experience trouble in several ways that can be affected by networking design choices."} {"id": "q-en-draft-ietf-mops-streaming-opcons-ba041c9484371dbad8ddb2942d5cad49bb5710c8783a678252aba545b4543550", "old_text": "9. Thanks to Mark Nottingham, Glenn Deen, Dave Oran, Aaron Falk, Kyle Rose, Leslie Daigle, Lucas Pardue, Matt Stock, Alexandre Gouaillard, and Mike English for their very helpful reviews and comments. 10. References", "comments": "We've been getting help from people, but the ACKs section hasn't changed much since -03 was submitted in November 2020.\nI found like two names that we had missed, and added them to the ACKs section. I think we can merge this any time.", "new_text": "9. Thanks to Alexandre Gouaillard, Aaron Falk, Dave Oran, Glenn Deen, Kyle Rose, Leslie Daigle, Lucas Pardue, Mark Nottingham, Matt Stock, Mike English, Roni Even, and Will Law for very helpful suggestions, reviews and comments. (If we missed your name, please let us know!) 10. References"} {"id": "q-en-draft-ietf-ppm-dap-ca2f45ac35d14ad725035c7650b5d91ded1b88e28e2177ad509322b7d33eb746", "old_text": "errors. The \"instance\" value MUST be the endpoint to which the request was targeted. The problem document MUST also include a \"taskid\" member which contains the associated PPM task ID, encoded with base64 RFC4648 (this value is always known, see task- configuration). In the remainder of this document, we use the tokens in the table above to refer to error types, rather than the full URNs. 
For", "comments": "Section 3.1 says to include the PPM task ID in a field on the problem details JSON object, but the task ID may be any 32 byte value, and may not be representable as a Unicode string. I propose that we base64-encode the task ID here to resolve this.", "new_text": "errors. The \"instance\" value MUST be the endpoint to which the request was targeted. The problem document MUST also include a \"taskid\" member which contains the associated PPM task ID, encoded with base64 using the standard alphabet RFC4648 (this value is always known, see task-configuration). In the remainder of this document, we use the tokens in the table above to refer to error types, rather than the full URNs. For"} {"id": "q-en-draft-ietf-ppm-dap-8f4aeea0cf523e4f6d9b4ae29d482793a6496b2e4ffb60bda97a9e7e99f4c4e8", "old_text": "This protocol is extensible and allows for the addition of new cryptographic schemes that implement the VDAF interface specified in I-D.draft-cfrg-patton-vdaf. Candidates include: \"prio3\", which allows for aggregate statistics such as sum, mean, histograms, etc. This class of VDAFs is based on Prio CGB17 and", "comments": "Would be helpful to readers if this reference was fixed in the Editor's Copy: [I-D.draft-cfrg-patton-vdaf] \" BROKEN REFERENCE \".\nMaybe change each reference to ? That way it'll render like in the text, which reads a bit nicer.", "new_text": "This protocol is extensible and allows for the addition of new cryptographic schemes that implement the VDAF interface specified in VDAF. Candidates include: \"prio3\", which allows for aggregate statistics such as sum, mean, histograms, etc. This class of VDAFs is based on Prio CGB17 and"} {"id": "q-en-draft-ietf-ppm-dap-8f4aeea0cf523e4f6d9b4ae29d482793a6496b2e4ffb60bda97a9e7e99f4c4e8", "old_text": "client ever sees the values for individual clients. In order to address this problem, the aggregators engage in a secure, multi-party computation specified by the chosen VDAF I-D.draft-cfrg- patton-vdaf in order to prepare a report for aggregation. At the beginning of this computation, each aggregator is in possession of an input share uploaded by the client. At the end of the computation, each aggregator is in posession of either an \"output share\" that is ready to be aggregated or an indication that a valid output share could not be computed. To facilitiate this computation, the input shares generated by the client include information used by the aggregators during aggregation", "comments": "Would be helpful to readers if this reference was fixed in the Editor's Copy: [I-D.draft-cfrg-patton-vdaf] \" BROKEN REFERENCE \".\nMaybe change each reference to ? That way it'll render like in the text, which reads a bit nicer.", "new_text": "client ever sees the values for individual clients. In order to address this problem, the aggregators engage in a secure, multi-party computation specified by the chosen VDAF VDAF in order to prepare a report for aggregation. At the beginning of this computation, each aggregator is in possession of an input share uploaded by the client. At the end of the computation, each aggregator is in posession of either an \"output share\" that is ready to be aggregated or an indication that a valid output share could not be computed. 
To facilitiate this computation, the input shares generated by the client include information used by the aggregators during aggregation"} {"id": "q-en-draft-ietf-ppm-dap-8f4aeea0cf523e4f6d9b4ae29d482793a6496b2e4ffb60bda97a9e7e99f4c4e8", "old_text": "\"vdaf_verify_param\": The aggregator's VDAF verification parameter output by the setup algorithm computed jointly by the aggregators before the start of the PPM protocol I-D.draft-cfrg-patton-vdaf). [OPEN ISSUE: This is yet to be specified. See issue#161.] Finally, the collector is configured with the HPKE secret key corresponding to \"collector_hpke_config\".", "comments": "Would be helpful to readers if this reference was fixed in the Editor's Copy: [I-D.draft-cfrg-patton-vdaf] \" BROKEN REFERENCE \".\nMaybe change each reference to ? That way it'll render like in the text, which reads a bit nicer.", "new_text": "\"vdaf_verify_param\": The aggregator's VDAF verification parameter output by the setup algorithm computed jointly by the aggregators before the start of the PPM protocol VDAF). [OPEN ISSUE: This is yet to be specified. See issue#161.] Finally, the collector is configured with the HPKE secret key corresponding to \"collector_hpke_config\"."} {"id": "q-en-draft-ietf-ppm-dap-f95573723f24e74e04e03115fc42113f2c9bce52c70474109fb422b118a66ea6", "old_text": "The following terms are used: This document uses the presentation language of RFC8446. 2.", "comments": "The only language I can find in the document that prescribes serialization is in Section 1.2 (please point out if I missed anything): We intend this statement to be prescriptive, i.e., encoding/decoding of messages is as defined in RFC8446. Another valid interpretation might be \"we define the fields of each message following the conventions of RFC8446, but leaving encoding/decoding unspecified\". I think it would help to refine this statement a bit. WDYTA:\nSGTM FWIW (this was our interpretation, and I think it makes sense to clarify)\nYeah, this seems like a fine clarification. I don't think it was under specified before, but more information for those unfamiliar with TLS-style encoding doesn't hurt.", "new_text": "The following terms are used: This document uses the presentation language of RFC8446 to define messages in the PPM protocol. Encoding and decoding of these messages as byte strings also follows RFC8446. 2."} {"id": "q-en-draft-ietf-ppm-dap-e5c5d5d558d17abf7e36b3429c3ade2942f14578d77fba590d2bc2bc435f818b", "old_text": "6. Prio assumes a powerful adversary with the ability to compromise an unbounded number of clients. In doing so, the adversary can provide malicious (yet truthful) inputs to the aggregation function. Prio also assumes that all but one server operates honestly, where a dishonest server does not execute the protocol faithfully as specified. The system also assumes that servers communicate over secure and mutually authenticated channels. In practice, this can be done by TLS or some other form of application-layer authentication. In the presence of this adversary, Prio provides two important properties for computing an aggregation function F: Privacy. The aggregators and collector learn only the output of F computed over all client inputs, and nothing else. Robustness. As long as the aggregators execute the input- validation protocol correctly, a malicious client can skew the output of F only by reporting false (untruthful) input. The output cannot be influenced in any other way. 
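One of the records quoted in this line clarifies that PPM messages are both defined and encoded using the TLS presentation language of RFC 8446. As a rough illustration of what that byte-level convention means, the snippet below hand-encodes an invented message with one fixed-width integer and one variable-length opaque field; the message itself is not taken from the PPM spec.

```python
import struct

# Sketch of RFC 8446-style encoding conventions: fixed-width integers are
# big-endian, and variable-length opaque fields carry a length prefix sized
# to the declared maximum. The "ExampleMessage" below is invented.

def encode_opaque16(data: bytes) -> bytes:
    """opaque payload<0..2^16-1>: a 2-byte big-endian length, then the bytes."""
    return struct.pack("!H", len(data)) + data

def encode_example_message(version: int, payload: bytes) -> bytes:
    """struct { uint16 version; opaque payload<0..2^16-1>; } ExampleMessage;"""
    return struct.pack("!H", version) + encode_opaque16(payload)

encoded = encode_example_message(1, b"hello")
assert encoded == b"\x00\x01\x00\x05hello"
```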
There are several additional constraints that a Prio deployment must satisfy in order to achieve these goals: Minimum batch size. The aggregation batch size has an obvious impact on privacy. (A batch size of one hides nothing of the input.) Aggregation function choice. Some aggregation functions leak slightly more than the function output itself. [TODO: discuss these in more detail.] 6.1. In this section, we enumerate the actors participating in the Prio system and enumerate their assets (secrets that are either inherently valuable or which confer some capability that enables further attack", "comments": "This change is primarily editorial and includes the following changes: Replace references to \"Prio\" with references to a generic VDAF. (This section was written long ago when we had Prio in mind.) Elaborate on known issues for collect requests. Discuss Sybil attacks, including enumerating the different types ().\nFrom dkg in Vienna:\nIt seems like there are a bunch of assumptions about Sybil attacks generally that are not well articulated yet. This would certainly help.\nThe list of attacks is really a list of things we ought to fix in the protocol, so I would not even bother including them in the security considerations. If we want to use this as an opportunity to identify open issues that need to be addressed before we consider the protocol secure, I would simply list the open issues and keep discussion in GitHub. (That is, the text here seems redundant with the issue )", "new_text": "6. PPM assumes an active attacker that controls the network and has the ability to statically corrupt any number of clients, aggregators, and collectors. That is, the attacker can learn the secret state of any party prior to the start of its attack. For example, it may coerce a client into providing malicious input shares for aggregation or coerce an aggregator into diverting from the protocol specified (e.g., by divulging its input shares to the attacker). In the presence of this adversary, PPM aims to achieve the following high-level secure aggregation goals: Privacy. Clients trust that some aggregator is honest. That is, as long as at least one aggregator executes the protocol faithfully, the parties learn nothing beyond the aggregate result (i.e., the output of the aggregation function computed over the honest measurements). Correctness. The collector trusts that the aggregators execute the protocol correctly. That is, as long as the aggregators execute the protocol faithfully, a malicious client can skew the aggregate result only by reporting a false (untruthful) measurement. The result cannot be influenced in any other way. Currently, the specification does not achieve these goals. In particular, there are several open issues that need to be addressed before these goals are met. Details for each issue are below. When crafted maliciously, collect requests may leak more information about the measurements than the system intends. For example, the spec currently allows sequences of collect requests to reveal an aggregate result for a batch smaller than the minimum batch size. [OPEN ISSUE: See issue#195. This also has implications for how we solve issue#183.] Even benign collect requests may leak information beyond what one might expect intuitively. For example, the Poplar1 VDAF VDAF can be used to compute the set of heavy hitters among a set of arbitrary bit strings uploaded by clients. 
This requires multiple evaluations of the VDAF, the results of which reveal information to the aggregators and collector beyond what follows from the heavy hitters themselves. Note that this leakage can be mitigated using differential privacy. [OPEN ISSUE: We have not yet specified how to add DP.] The core PPM spec does not defend against Sybil attacks. In this type of attack, the adversary adds to a batch a number of reports that skew the aggregate result in its favor. For example: The result may reveal additional information about the honest measurements, leading to a privacy violation; or the result may have some property that is desirable to the adversary (\"stats poisoning\"). The upload sub-protocol includes an extensions mechanism that can be used to prevent -- or at least mitigate -- these types of attacks. See upload-extensions. [OPEN ISSUE: No such extension has been implemented, so we're not yet sure if the current mechanism is sufficient.] Attacks may also come from the network. Thus, it is required that the aggregators and collector communicate with one another over mutually authenticated and confidential channels. The core PPM spec does not specify such a mechanism beyond requiring server authentication for HTTPS sessions. Note that clients are not required to authenticate themselves. [OPEN ISSUE: It might be better to be prescriptive about leader authentication in leader-helper channels and collector authentication in collector-leader channels. For the latter we have issue#155.] 6.1. [OPEN ISSUE: This subsection is a bit out-of-date.] In this section, we enumerate the actors participating in the Prio system and enumerate their assets (secrets that are either inherently valuable or which confer some capability that enables further attack"} {"id": "q-en-draft-ietf-ppm-dap-dbd68199431972862df22187188bb885ebd6ad2218d66e60a00d5e20717c5160", "old_text": "To generate the report, the client begins by sharding its measurement into a sequence of input shares as specified by the VDAF in use. To encrypt an input share, the client first generates an HPKE RFC9180 context for the aggregator by running where \"pk\" is the aggregator's public key and \"server_role\" is the Role of the intended recipient (\"0x02\" for the leader and \"0x03\" for the helper). In general, the info string for computing the HPKE context is suffixed by two bytes, the first of which identifies the role of the sender and the second of which identifies the role of the intended recipient. \"enc\" is the HPKE encapsulated key and \"context\" is the HPKE context used by the client for encryption. The payload is encrypted as where \"input_share\" is the aggregator's input share and \"nonce\" and \"extensions\" are the corresponding fields of \"Report\". Clients MUST NOT use the same \"enc\" for multiple reports. The leader responds to well-formed requests to \"[leader]/upload\" with HTTP status code 200 OK and an empty body. Malformed requests are", "comments": "Specifically: Specify use of the \"one-shot\" HPKE APIs defined in RFC9180, replacing equivalent use of the \"multi-shot\" APIs. (This change is intended to be a clarification, not a functional change.) Move the task_id parameter from the \"info\" parameter to the \"aad\" parameter. This is intended to protect against key-commitment-related attacks[1]. (This is a functional change.) Clarify that the URL field is an encapsulated key, rather than an encryption context. (This is a clarification, not a functional change.)
[1] See discussion in URL\nPer URL section 8.1: PPM effectively uses the single-shot APIs; we should consider refactoring PPM's usage of HPKE to use only one of applicationinfo & AAD. Following the advice of the HPKE RFC, we would want to use applicationinfo. This would permit slightly simpler implementations. I am not sure how much it would simplify the HPKE computations.\nShould we then formally use the single shot HPKE APIs from ?\nThat would be my recommendation (though, somewhat confusingly given the advice quoted above, those APIs still allow specifying both application info & AAD). Implementations can & do already use the single-shot APIs, so I strongly suspect specifying the single-shot APIs will be feasible. & .]\n+1 to using single-shot APIs, though I'm not sure I agree that using either info or add (but not both) is simpler. What's the reasoning?\nThe reasoning is: a) it follows the advice of the HPKE spec (quoted above) b) AFAICT, in a single-shot setting, application info & AAD serve exactly the same purpose, so using both is arbitrary/confusing/a potential source of mistakes. (in a multi-shot setting, application info is bound to all encryptions/decryptions done by the same context; AAD is bound to a single encryption/decryption)\nTBH I'm not sure I agree with the quoted guidance here. There is a concept in protocol design known as \"domain separation\", the goal of which is to provide some kind of binding of long-term secret key operations to the context in which they were used. Imagine, for example, that someone used an HPKE secret key for PPM and some other protocol. Suppose further that both are using the empty string for . Then the security of our PPM deployment depends on how the other protocol uses the derived AEAD key, since in both protocols may end up deriving the same AEAD key. One way to avoid creating this attack surface is to pick an string that no other application is likely to choose so that derived keys are guaranteed to not collide (except with some negligible probability). That said, I'd go for changing how we use and . Something like this might be simpler: Let be a fixed string that identifies the protocol. It would be good if this string were relatively long, say the SHA-256 hash of \"ppm-00\". Let encode the task ID, sender/receiver roles, etc.\nI definitely agree that application info should include information allowing domain separation, for the reasons you mention. (Indeed, the current PPM spec includes the strings \"ppm input share\" or \"ppm aggregate share\" in its application infos, presumably for this purpose.) But I think that's orthogonal to the question of whether we should also use AAD, since the application info could also include information other than the \"domain separation parameter\". Picking one example to make things concrete, a client upload request sets its application info to and its AAD to . But it could just as easily set the application info to (i.e. the concatenation of the two previous strings) and not use AAD. (Or alternatively, it could place that string in the AAD and not use application info.) My argument is that it's simpler to use only one of the parameters, since in the single-shot setting that PPM is using HPKE, the two parameters serve exactly the same purpose (i.e. including additionally-authenticated data).\nAs I see it, our goal is to make sure that decryption only succeeds if the sender and receiver agree on the \"context\", i.e., the task ID, sender/receiver role, nonce, extensions, and so on. 
This way a MITM cannot force the aggregator to interpret an input share incorrectly. Does this sound reasonable to you? From the point of view of this threat model, I don't think it's immediately clear that sticking the context in or is equivalent. For example, if we stuff the context in , then it influences the derivation of the AEAD key; but depending on the AEAD, it might be feasible for an attacker to find a find two keys that decrypt the same ciphertext. (See URL) More analysis will be needed to say for sure whether this weakness in the AEAD amounts to an attack against our protocol. In the meantime, sticking the \"context\" in seems like a more conservative choice to me.\nGiven your concern, your approach SGTM. It sounds like this may be a concern worth raising with the folks working on the HPKE spec, given the advice to \"use the Setup info parameter for specifying auxiliary authenticated information\", but I'm not going to chase this down currently.\nSo I think there's two tasks for this issue: Move task ID from to Use single-shot APIs for HPKE Anything else?\nYep, I think that's it.\n(reopening for now as the fix PR has been reverted temporarily)\nPerfect, great work!", "new_text": "To generate the report, the client begins by sharding its measurement into a sequence of input shares as specified by the VDAF in use. To encrypt an input share, the client generates an HPKE RFC9180 ciphertext and encapsulated key for the aggregator by running where \"pk\" is the aggregator's public key; \"server_role\" is the Role of the intended recipient (\"0x02\" for the leader and \"0x03\" for the helper); \"task_id\", \"nonce\", and \"extensions\" are the corresponding fields of \"Report\"; and \"input_share\" is the aggregator's input share. The leader responds to well-formed requests to \"[leader]/upload\" with HTTP status code 200 OK and an empty body. Malformed requests are"} {"id": "q-en-draft-ietf-ppm-dap-dbd68199431972862df22187188bb885ebd6ad2218d66e60a00d5e20717c5160", "old_text": "report share as invalid with the error \"hpke-unknown-config-id\". Otherwise, it decrypts the payload with the following procedure: where \"sk\" is the HPKE secret key, \"task_id\" is the task ID, \"nonce\" and \"extensions\" are the nonce and extensions of the report share respectively, and \"server_role\" is 0x02 if the aggregator is the leader and 0x03 otherwise. If decryption fails, the aggregator marks the report share as invalid with the error \"hpke-decrypt-error\". Otherwise, it outputs the resulting \"input_share\". 4.3.1.4.", "comments": "Specifically: Specify use of the \"one-shot\" HPKE APIs defined in RFC9180, replacing equivalent use of the \"multi-shot\" APIs. (This change is intended to be a clarification, not a functional change.) Move the task_id parameter from the \"info\" parameter to the \"aad\" parameter. This is intended to protect against key-commitment-related attacks[1]. (This is a functional change.) Clarify that the URL field is an encapsulated key, rather than an encryption context. (This is a clarification, not a functional change.) [1] See discussion in URL\nPer URL section 8.1: PPM effectively uses the single-shot APIs; we should consider refactoring PPM's usage of HPKE to use only one of applicationinfo & AAD. Following the advice of the HPKE RFC, we would want to use applicationinfo. This would permit slightly simpler implementations. 
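The discussion in this record converges on using the single-shot HPKE seal/open calls and carrying the per-report context (task ID, nonce, extensions) in the AAD rather than in the info parameter. RFC 9180 defines an API rather than a wire library, so the sketch below uses a hypothetical `hpke` module exposing `SealBase`/`OpenBase` with the RFC's single-shot signatures; the info label, role bytes, and the simple concatenation used for the AAD are illustrative assumptions, not the normative encoding.

```python
# Hypothetical wrapper mirroring the RFC 9180 single-shot API, illustrating the
# split discussed above: a fixed application/role-identifying "info" string plus
# the per-report context (task_id || nonce || extensions) as AAD. The hpke
# module and the exact labels/layout here are assumptions.
import hpke  # assumed binding exposing SealBase / OpenBase as in RFC 9180

CLIENT_ROLE = b"\x01"   # assumed client role byte; leader is 0x02 per the excerpt
LEADER_ROLE = b"\x02"

def seal_input_share(pk, task_id, nonce, extensions, input_share):
    info = b"ppm input share" + CLIENT_ROLE + LEADER_ROLE  # sender, receiver roles
    aad = task_id + nonce + extensions                     # illustrative layout
    enc, ciphertext = hpke.SealBase(pk, info, aad, input_share)
    return enc, ciphertext

def open_input_share(sk, enc, task_id, nonce, extensions, ciphertext):
    info = b"ppm input share" + CLIENT_ROLE + LEADER_ROLE
    aad = task_id + nonce + extensions
    return hpke.OpenBase(enc, sk, info, aad, ciphertext)
```

Keeping the long, protocol-specific label in `info` preserves the domain separation argued for in the discussion, while placing the mutable per-report fields in the AAD avoids relying on the AEAD key derivation to commit to them.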
I am not sure how much it would simplify the HPKE computations.\nShould we then formally use the single shot HPKE APIs from ?\nThat would be my recommendation (though, somewhat confusingly given the advice quoted above, those APIs still allow specifying both application info & AAD). Implementations can & do already use the single-shot APIs, so I strongly suspect specifying the single-shot APIs will be feasible. & .]\n+1 to using single-shot APIs, though I'm not sure I agree that using either info or add (but not both) is simpler. What's the reasoning?\nThe reasoning is: a) it follows the advice of the HPKE spec (quoted above) b) AFAICT, in a single-shot setting, application info & AAD serve exactly the same purpose, so using both is arbitrary/confusing/a potential source of mistakes. (in a multi-shot setting, application info is bound to all encryptions/decryptions done by the same context; AAD is bound to a single encryption/decryption)\nTBH I'm not sure I agree with the quoted guidance here. There is a concept in protocol design known as \"domain separation\", the goal of which is to provide some kind of binding of long-term secret key operations to the context in which they were used. Imagine, for example, that someone used an HPKE secret key for PPM and some other protocol. Suppose further that both are using the empty string for . Then the security of our PPM deployment depends on how the other protocol uses the derived AEAD key, since in both protocols may end up deriving the same AEAD key. One way to avoid creating this attack surface is to pick an string that no other application is likely to choose so that derived keys are guaranteed to not collide (except with some negligible probability). That said, I'd go for changing how we use and . Something like this might be simpler: Let be a fixed string that identifies the protocol. It would be good if this string were relatively long, say the SHA-256 hash of \"ppm-00\". Let encode the task ID, sender/receiver roles, etc.\nI definitely agree that application info should include information allowing domain separation, for the reasons you mention. (Indeed, the current PPM spec includes the strings \"ppm input share\" or \"ppm aggregate share\" in its application infos, presumably for this purpose.) But I think that's orthogonal to the question of whether we should also use AAD, since the application info could also include information other than the \"domain separation parameter\". Picking one example to make things concrete, a client upload request sets its application info to and its AAD to . But it could just as easily set the application info to (i.e. the concatenation of the two previous strings) and not use AAD. (Or alternatively, it could place that string in the AAD and not use application info.) My argument is that it's simpler to use only one of the parameters, since in the single-shot setting that PPM is using HPKE, the two parameters serve exactly the same purpose (i.e. including additionally-authenticated data).\nAs I see it, our goal is to make sure that decryption only succeeds if the sender and receiver agree on the \"context\", i.e., the task ID, sender/receiver role, nonce, extensions, and so on. This way a MITM cannot force the aggregator to interpret an input share incorrectly. Does this sound reasonable to you? From the point of view of this threat model, I don't think it's immediately clear that sticking the context in or is equivalent. 
For example, if we stuff the context in , then it influences the derivation of the AEAD key; but depending on the AEAD, it might be feasible for an attacker to find a find two keys that decrypt the same ciphertext. (See URL) More analysis will be needed to say for sure whether this weakness in the AEAD amounts to an attack against our protocol. In the meantime, sticking the \"context\" in seems like a more conservative choice to me.\nGiven your concern, your approach SGTM. It sounds like this may be a concern worth raising with the folks working on the HPKE spec, given the advice to \"use the Setup info parameter for specifying auxiliary authenticated information\", but I'm not going to chase this down currently.\nSo I think there's two tasks for this issue: Move task ID from to Use single-shot APIs for HPKE Anything else?\nYep, I think that's it.\n(reopening for now as the fix PR has been reverted temporarily)\nPerfect, great work!", "new_text": "report share as invalid with the error \"hpke-unknown-config-id\". Otherwise, it decrypts the payload with the following procedure: where \"sk\" is the HPKE secret key, and \"server_role\" is the role of the aggregator (\"0x02\" for the leader and \"0x03\" for the helper). If decryption fails, the aggregator marks the report share as invalid with the error \"hpke-decrypt-error\". Otherwise, it outputs the resulting \"input_share\". 4.3.1.4."} {"id": "q-en-draft-ietf-ppm-dap-dbd68199431972862df22187188bb885ebd6ad2218d66e60a00d5e20717c5160", "old_text": "\"AggregateShareReq\" is done as follows: where \"pk\" is the HPKE public key encoded by the collector's HPKE key, and server_role is is \"0x02\" for the leader and \"0x03\" for a helper. The collector decrypts these aggregate shares using the opposite process. Specifically, given an encrypted input share, denoted \"enc_share\", for a given batch interval, denoted \"batch_interval\", decryption works as follows: where \"sk\" is the HPKE secret key, \"task_id\" is the task ID for a given collect request, and \"server_role\" is the role of the server that sent the aggregate share (\"0x02\" for the leader and \"0x03\" for the helper). 4.4.5.", "comments": "Specifically: Specify use of the \"one-shot\" HPKE APIs defined in RFC9180, replacing equivalent use of the \"multi-shot\" APIs. (This change is intended to be a clarification, not a functional change.) Move the task_id parameter from the \"info\" parameter to the \"aad\" parameter. This is intended to protect against key-commitment-related attacks[1]. (This is a functional change.) Clarify that the URL field is an encapsulated key, rather than an encryption context. (This is a clarification, not a functional change.) [1] See discussion in URL\nPer URL section 8.1: PPM effectively uses the single-shot APIs; we should consider refactoring PPM's usage of HPKE to use only one of applicationinfo & AAD. Following the advice of the HPKE RFC, we would want to use applicationinfo. This would permit slightly simpler implementations. I am not sure how much it would simplify the HPKE computations.\nShould we then formally use the single shot HPKE APIs from ?\nThat would be my recommendation (though, somewhat confusingly given the advice quoted above, those APIs still allow specifying both application info & AAD). Implementations can & do already use the single-shot APIs, so I strongly suspect specifying the single-shot APIs will be feasible. & .]\n+1 to using single-shot APIs, though I'm not sure I agree that using either info or add (but not both) is simpler. 
What's the reasoning?\nThe reasoning is: a) it follows the advice of the HPKE spec (quoted above) b) AFAICT, in a single-shot setting, application info & AAD serve exactly the same purpose, so using both is arbitrary/confusing/a potential source of mistakes. (in a multi-shot setting, application info is bound to all encryptions/decryptions done by the same context; AAD is bound to a single encryption/decryption)\nTBH I'm not sure I agree with the quoted guidance here. There is a concept in protocol design known as \"domain separation\", the goal of which is to provide some kind of binding of long-term secret key operations to the context in which they were used. Imagine, for example, that someone used an HPKE secret key for PPM and some other protocol. Suppose further that both are using the empty string for . Then the security of our PPM deployment depends on how the other protocol uses the derived AEAD key, since in both protocols may end up deriving the same AEAD key. One way to avoid creating this attack surface is to pick an string that no other application is likely to choose so that derived keys are guaranteed to not collide (except with some negligible probability). That said, I'd go for changing how we use and . Something like this might be simpler: Let be a fixed string that identifies the protocol. It would be good if this string were relatively long, say the SHA-256 hash of \"ppm-00\". Let encode the task ID, sender/receiver roles, etc.\nI definitely agree that application info should include information allowing domain separation, for the reasons you mention. (Indeed, the current PPM spec includes the strings \"ppm input share\" or \"ppm aggregate share\" in its application infos, presumably for this purpose.) But I think that's orthogonal to the question of whether we should also use AAD, since the application info could also include information other than the \"domain separation parameter\". Picking one example to make things concrete, a client upload request sets its application info to and its AAD to . But it could just as easily set the application info to (i.e. the concatenation of the two previous strings) and not use AAD. (Or alternatively, it could place that string in the AAD and not use application info.) My argument is that it's simpler to use only one of the parameters, since in the single-shot setting that PPM is using HPKE, the two parameters serve exactly the same purpose (i.e. including additionally-authenticated data).\nAs I see it, our goal is to make sure that decryption only succeeds if the sender and receiver agree on the \"context\", i.e., the task ID, sender/receiver role, nonce, extensions, and so on. This way a MITM cannot force the aggregator to interpret an input share incorrectly. Does this sound reasonable to you? From the point of view of this threat model, I don't think it's immediately clear that sticking the context in or is equivalent. For example, if we stuff the context in , then it influences the derivation of the AEAD key; but depending on the AEAD, it might be feasible for an attacker to find a find two keys that decrypt the same ciphertext. (See URL) More analysis will be needed to say for sure whether this weakness in the AEAD amounts to an attack against our protocol. In the meantime, sticking the \"context\" in seems like a more conservative choice to me.\nGiven your concern, your approach SGTM. 
It sounds like this may be a concern worth raising with the folks working on the HPKE spec, given the advice to \"use the Setup info parameter for specifying auxiliary authenticated information\", but I'm not going to chase this down currently.\nSo I think there's two tasks for this issue: Move task ID from to Use single-shot APIs for HPKE Anything else?\nYep, I think that's it.\n(reopening for now as the fix PR has been reverted temporarily)\nPerfect, great work!", "new_text": "\"AggregateShareReq\" is done as follows: where \"pk\" is the HPKE public key encoded by the collector's HPKE key, \"server_role\" is \"0x02\" for the leader and \"0x03\" for a helper. The collector decrypts these aggregate shares using the opposite process. Specifically, given an encrypted input share, denoted \"enc_share\", for a given batch interval, denoted \"batch_interval\", decryption works as follows: where \"sk\" is the HPKE secret key, \"task_id\" is the task ID for the collect request, and \"server_role\" is the role of the server that sent the aggregate share (\"0x02\" for the leader and \"0x03\" for the helper). 4.4.5."} {"id": "q-en-draft-ietf-ppm-dap-bf3d867901f989e47d01ed511f428c6184a75ba025340fb86bcc207970f4a759", "old_text": "\"batch_interval\" is the batch interval of the request. \"report_count\" is the number of reports included in the aggregation. \"checksum\" is the checksum computed over the set of client reports. \"job_id\" is the ID of the aggregation job. This request MUST be authenticated as described in https-sender-auth. To handle the leader's request, the helper first ensures that the request meets the requirements for batch parameters following the", "comments": "AggregateShareReq should have an aggregation parameter field instead of aggregation job ID.\nThanks NAME for spotting this!", "new_text": "\"batch_interval\" is the batch interval of the request. \"agg_param\", an aggregation parameter for the VDAF being executed. \"report_count\" is the number of reports included in the aggregation. \"checksum\" is the checksum computed over the set of client reports. This request MUST be authenticated as described in https-sender-auth. To handle the leader's request, the helper first ensures that the request meets the requirements for batch parameters following the"} {"id": "q-en-draft-ietf-ppm-dap-83bd53a7e711cf013bb0fed7293d4b3b02a1673d48c04143724dae511a663d61", "old_text": "where \"[leader]\" is the first entry in the task's aggregator endpoints. The payload is structured as follows: This message is called the Client's report. It consists of a \"header\" and the encrypted input share of each of the Aggregators. The header consists of the task ID and report \"metadata\". The metadata consists of the following fields: A timestamp representing the time at which the report was generated. Specifically, the \"time\" field is set to the number of", "comments": "Align with the latest VDAF draft. This requires two minor protocol changes: 02 added a value called the \"publicshare\" to the VDAF syntax. This is to support Poplar1 (and likely other schemes that need a similar extractability property). To accommodate this, the publicshare field has been added to the Report and ReportShare structures and appended to the AAD for input share encryption. (This may end up being useful for privacy.) 02 added the report count to the aggregate result computation. 
This is already passed to the Collector in the CollectResp, so we just need to update the aggregate result computation in the draft to match.\nRebased and pushed a minor update: The public share in Report and ReportStore now both have the same length prefix.\nRebased.\nThis will be VDAF-02 or 03, depending on when we need to cut DAP-02.\nVDAF-03 is out, I will send a PR by end of week. URL", "new_text": "where \"[leader]\" is the first entry in the task's aggregator endpoints. The payload is structured as follows: This message is called the Client's report. It consists of the report metaata, the \"public share\" output by the VDAF's input- distribution algorithm, and the encrypted input share of each of the Aggregators. The header consists of the task ID and report \"metadata\". The metadata consists of the following fields: A timestamp representing the time at which the report was generated. Specifically, the \"time\" field is set to the number of"} {"id": "q-en-draft-ietf-ppm-dap-83bd53a7e711cf013bb0fed7293d4b3b02a1673d48c04143724dae511a663d61", "old_text": "4.4.1.3. Each report share has a corresponding task ID, report metadata (timestamp, nonce, and extensions), and encrypted input share. Let \"task_id\", \"metadata\", and \"encrypted_input_share\" denote these values, respectively. Given these values, an aggregator decrypts the input share as follows. First, the aggregator looks up the HPKE config and corresponding secret key indicated by \"encrypted_input_share.config_id\". If not found, then it marks the report share as invalid with the error \"hpke-unknown-config-id\". Otherwise, it decrypts the payload with the following procedure:", "comments": "Align with the latest VDAF draft. This requires two minor protocol changes: 02 added a value called the \"publicshare\" to the VDAF syntax. This is to support Poplar1 (and likely other schemes that need a similar extractability property). To accommodate this, the publicshare field has been added to the Report and ReportShare structures and appended to the AAD for input share encryption. (This may end up being useful for privacy.) 02 added the report count to the aggregate result computation. This is already passed to the Collector in the CollectResp, so we just need to update the aggregate result computation in the draft to match.\nRebased and pushed a minor update: The public share in Report and ReportStore now both have the same length prefix.\nRebased.\nThis will be VDAF-02 or 03, depending on when we need to cut DAP-02.\nVDAF-03 is out, I will send a PR by end of week. URL", "new_text": "4.4.1.3. Each report share has a corresponding task ID, report metadata (timestamp, nonce, and extensions), the public share sent to each Aggregator, and the recipient's encrypted input share. Let \"task_id\", \"metadata\", \"public_share\", and \"encrypted_input_share\" denote these values, respectively. Given these values, an aggregator decrypts the input share as follows. First, the aggregator looks up the HPKE config and corresponding secret key indicated by \"encrypted_input_share.config_id\". If not found, then it marks the report share as invalid with the error \"hpke-unknown-config-id\". 
Otherwise, it decrypts the payload with the following procedure:"} {"id": "q-en-draft-ietf-ppm-dap-83bd53a7e711cf013bb0fed7293d4b3b02a1673d48c04143724dae511a663d61", "old_text": "\"vdaf_verify_key\" is the VDAF verification key shared by the aggregators; \"agg_id\" is the aggregator ID (\"0x00\" for the Leader and \"0x01\" for the helper); and \"agg_param\" is the opaque aggregation parameter distributed to the aggregtors by the collector. If either step fails, the aggregator marks the report as invalid with error \"vdaf-prep-error\". Otherwise, the value \"out\" is interpreted", "comments": "Align with the latest VDAF draft. This requires two minor protocol changes: 02 added a value called the \"publicshare\" to the VDAF syntax. This is to support Poplar1 (and likely other schemes that need a similar extractability property). To accommodate this, the publicshare field has been added to the Report and ReportShare structures and appended to the AAD for input share encryption. (This may end up being useful for privacy.) 02 added the report count to the aggregate result computation. This is already passed to the Collector in the CollectResp, so we just need to update the aggregate result computation in the draft to match.\nRebased and pushed a minor update: The public share in Report and ReportStore now both have the same length prefix.\nRebased.\nThis will be VDAF-02 or 03, depending on when we need to cut DAP-02.\nVDAF-03 is out, I will send a PR by end of week. URL", "new_text": "\"vdaf_verify_key\" is the VDAF verification key shared by the aggregators; \"agg_id\" is the aggregator ID (\"0x00\" for the Leader and \"0x01\" for the helper); \"agg_param\" is the opaque aggregation parameter distributed to the aggregtors by the collector; \"public_share\" is the public share generated by the client and distributed to each aggregator; and \"input_share\" is the aggregator's input share. If either step fails, the aggregator marks the report as invalid with error \"vdaf-prep-error\". Otherwise, the value \"out\" is interpreted"} {"id": "q-en-draft-ietf-ppm-dap-83bd53a7e711cf013bb0fed7293d4b3b02a1673d48c04143724dae511a663d61", "old_text": "aggregate shares into an aggregate result using the VDAF's \"agg_shares_to_result\" algorithm. In particular, let \"agg_shares\" denote the ordered sequence of aggregator shares, ordered by aggregator index, and let \"agg_param\" be the opaque aggregation parameter. The final aggregate result is computed as follows: 4.5.4.", "comments": "Align with the latest VDAF draft. This requires two minor protocol changes: 02 added a value called the \"publicshare\" to the VDAF syntax. This is to support Poplar1 (and likely other schemes that need a similar extractability property). To accommodate this, the publicshare field has been added to the Report and ReportShare structures and appended to the AAD for input share encryption. (This may end up being useful for privacy.) 02 added the report count to the aggregate result computation. This is already passed to the Collector in the CollectResp, so we just need to update the aggregate result computation in the draft to match.\nRebased and pushed a minor update: The public share in Report and ReportStore now both have the same length prefix.\nRebased.\nThis will be VDAF-02 or 03, depending on when we need to cut DAP-02.\nVDAF-03 is out, I will send a PR by end of week. URL", "new_text": "aggregate shares into an aggregate result using the VDAF's \"agg_shares_to_result\" algorithm. 
In particular, let \"agg_shares\" denote the ordered sequence of aggregator shares, ordered by aggregator index, let \"report_count\" denote the report count sent by the Leader, and let \"agg_param\" be the opaque aggregation parameter. The final aggregate result is computed as follows: 4.5.4."} {"id": "q-en-draft-ietf-ppm-dap-a2a31b7e86e24a541077eeda329646d39df92450907730252caf10a5c7985ad2", "old_text": "participating clients or permit some attacks on robustness. This auxiliary information could be removed by having clients submit reports to an anonymizing proxy server which would then use Oblivious HTTP I-D.thomson-http-oblivious to forward inputs to the DAP leader, without requiring any server participating in DAP to be aware of whatever client authentication or attestation scheme is in use. 7.4.", "comments": "And while we're at it, update an obsolete reference to Oblivious HTTP, which has long since been adopted by its WG. VDAF-05 isn't expected until Monday 3/13, but we can get this change ready to go.\nDAP-04 will need some routine changes: [x] Update version tags for HPKE to () [x] Bump VDAF-03 to 05 () [x] Update change log () [x] Run spell check ()", "new_text": "participating clients or permit some attacks on robustness. This auxiliary information could be removed by having clients submit reports to an anonymizing proxy server which would then use Oblivious HTTP I-D.draft-ietf-ohai-ohttp-07 to forward inputs to the DAP leader, without requiring any server participating in DAP to be aware of whatever client authentication or attestation scheme is in use. 7.4."} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "*Collect:* The collector makes one or more requests to the leader in order to obtain the final output of the protocol. Before the output can be computed, the aggregators (i.e, the leader and helpers) need to have verified and aggregated a sufficient number of inputs. Depending on the PA protocol, it may be possible for the aggregators to do so immediately when reports are uploaded. (See prio.) However, in general it is necessary for them to wait until (a) enough reports have been uploaded and (b) the collector has made a request. (See hits.) [TODO: Say that the protocol involves multiple helpers. The downside of using secret sharing is that the protocol requires at least two servers to be online during the entire data aggregation process. To ameliorate this problem, we run the protocol in parallel with multiple helpers.] 3. This section specifies a protocol for executing generic PA tasks.", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. 
It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "*Collect:* The collector makes one or more requests to the leader in order to obtain the final output of the protocol. Before the output can be computed, the aggregators (i.e, the leader and helper) need to have verified and aggregated a sufficient number of inputs. Depending on the PA protocol, it may be possible for the aggregators to do so immediately when reports are uploaded. (See prio.) However, in general it is necessary for them to wait until (a) enough reports have been uploaded and (b) the collector has made a request. (See hits.) 3. This section specifies a protocol for executing generic PA tasks."} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "\"task\": The PA task. \"helper_urls\": The helpers' endpoint URLs. \"collector_config\": The HPKE configuration of the collector (described in hpke-config). [OPEN ISSUE: Maybe the collector's", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "\"task\": The PA task. \"helper_url\": The helpers endpoint URL. \"collector_config\": The HPKE configuration of the collector (described in hpke-config). [OPEN ISSUE: Maybe the collector's"} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "3.3.2. For each URL \"[helper]\" in \"PAUploadStartResp.helper_urls\", the client sends a GET request to \"[helper]/key_config\". The helper responds with status 200 and an \"HpkeConfig\" message. Next, the client collects the set of helpers it will upload shares to. 
It ignores a helper if: the client and helper failed to establish a secure, helper- authenticated channel;", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "3.3.2. Let \"[helper]\" denote the helper URL encoded by \"PAUploadStartResp.param.helper_url\". When a client sends a GET request to \"[helper]/key_config\", the helper responds with status 200 and an \"HpkeConfig\" message. The client aborts if any of the following happen: the client and helper failed to establish a secure, helper- authenticated channel;"} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "the key config specifies a KEM, KDF, or AEAD algorithm the client doesn't recognize. If the set of supported helpers is empty, then the client aborts and alerts the leader with \"no supported helpers\". Otherwise, for each supported helper the client issues a POST request to \"[leader]/upload_finish\" with a payload constructed as described below. [OPEN ISSUE: Should the request URL encode the PA task? This would be necessary if we make \"upload_start\" an idempotent GET per issue#48.]", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. 
It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "the key config specifies a KEM, KDF, or AEAD algorithm the client doesn't recognize. [OPEN ISSUE: Should the request URL encode the PA task? This would be necessary if we make \"upload_start\" an idempotent GET per issue#48.]"} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "that about a client? I'm not sure, so I'd be inclined to remove this unless we have a concrete use case.] The client begins by setting up an HPKE I-D.irtf-cfrg-hpke context for the helper by running where \"pk\" is the KEM public key encoded by \"HpkeConfig.public_key\". The outputs are the helper's encapsulated context \"enc\" and the", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "that about a client? I'm not sure, so I'd be inclined to remove this unless we have a concrete use case.] Next, the client issues a POST request to \"[leader]/upload_finish\". It begins by setting up an HPKE I-D.irtf-cfrg-hpke context for the helper by running where \"pk\" is the KEM public key encoded by \"HpkeConfig.public_key\". The outputs are the helper's encapsulated context \"enc\" and the"} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "SetupBaseS() will be, as well as the aad for context.Seal(). The aad might be the entire \"transcript\" between the client and helper.] [OPEN ISSUE: Is it safe to generate the proof once, then secret-share between each (leader, helper) pair? Probably not in general, but maybe for Prio?] [OPEN ISSUE: allow server to send joint randomness in UploadStartResp, and then enforce uniqueness via double-spend state or something else (see issue#48).]", "comments": "Partially addresses . 
(We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "SetupBaseS() will be, as well as the aad for context.Seal(). The aad might be the entire \"transcript\" between the client and helper.] [OPEN ISSUE: allow server to send joint randomness in UploadStartResp, and then enforce uniqueness via double-spend state or something else (see issue#48).]"} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "We sometimes refer to this message as the _report_. The message contains the \"task\" fields of the previous request. In addition, it includes the time (in seconds since the beginning of UNIX time) at which the report was generated, the helper's HPKE config id, endpoint URL, and the helper and leader shares. The helper share has the following structure: Field \"enc\" encodes the helper's encapsulated HPKE context. The remainder of the structure contains the share itself, the structure", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. 
Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "We sometimes refer to this message as the _report_. The message contains the \"task\" fields of the previous request. In addition, it includes the time (in seconds since the beginning of UNIX time) at which the report was generated, the helper's HPKE config id, and the helper and leader shares. The helper share has the following structure: Field \"enc\" encodes the helper's encapsulated HPKE context. The remainder of the structure contains the share itself, the structure"} {"id": "q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547", "old_text": "3.4.2.1. The process begins with a PACollectReq. The leader collects a sequence of reports that are all associated with the same PA task, helper URL, and helper HPKE config id. Let \"[helper]\" denote \"PACollectReq.helper_url\". The leader sends a POST request to \"[helper]/aggregate\" with the following message: The structure contains the PA task, the helper's HPKE config id, an opaque _helper state_ string, and a sequence of _sub-requests_, each", "comments": "Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers dropped out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm... Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?", "new_text": "3.4.2.1. The process begins with a PACollectReq. The leader collects a sequence of reports that are all associated with the same PA task. Let \"[helper]\" denote \"PAParam.helper_url\", where \"PAParam\" is the PA parameters structure associated \"PAAggregateReq.task.id\". The leader sends a POST request to \"[helper]/aggregate\" with the following message: The structure contains the PA task, the helper's HPKE config id, an opaque _helper state_ string, and a sequence of _sub-requests_, each"} {"id": "q-en-draft-ietf-rats-reference-interaction-models-0ff26db6348df57abf3959a40daad47ffa987b048e6f1d3a5d0f989c157bf2f3", "old_text": "The term \"Remote Attestation\" is a common expression and often associated or connoted with certain properties. 
The term \"Remote\" in this context does not necessarily refer to a remote entity in the scope of network topologies or the Internet. It rather refers to a decoupled system or entities that exchange the payload of the Conceptual Message type called Evidence I-D.ietf-rats-architecture. This conveyance can also be \"Local\", if the Verifier role is part of the same entity as the Attester role, e.g., separate system components of the same Composite Device (a single RATS entity). Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Readers of this document should be familiar with the concept of Layered Attestation as described in Section 4.3 Two Types of Environments of an Attester in I-D.ietf- rats-architecture and the definition of Attestation as described in I-D.ietf-rats-tpm-based-network-device-attest. 3.", "comments": "Based on URL In Section 1, 3 Disambiguation About \u201cthese types of co-located environments\u201d, The previous sentences are about the Verifier and Attester don\u2019t need to be remote. So \u201cthese\u201d is weird here, because here is about the attesting environment and target environment.\nSo, can this issue then be closed? Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? 
If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. 
Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? 
Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "The term \"Remote Attestation\" is a common expression and often associated or connoted with certain properties. The term \"Remote\" in this context does not necessarily refer to a remote entity in the scope of network topologies or the Internet. It rather refers to decoupled systems or entities that exchange the payload of the Conceptual Message type called Evidence I-D.ietf-rats-architecture. This conveyance can also be \"Local\", if the Verifier role is part of the same entity as the Attester role, e.g., separate system components of the same Composite Device (a single RATS entity). Even if an entity takes on two or more different roles, the functions they provide typically reside in isolated environments that are components of the same entity. Examples of such isolated environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Readers of this document should be familiar with the concept of Layered Attestation as described in Section 4.3 Two Types of Environments of an Attester in I-D.ietf-rats-architecture and the definition of Attestation as described in I-D.ietf-rats-tpm-based-network-device- attest. 3."} {"id": "q-en-draft-ietf-rats-reference-interaction-models-04f270d0a723445d17e5fc1b773981256795f3bbef3b718b2c11d358558264ba", "old_text": "_mandatory_ Reference Claims are components of Reference Values as defined in I-D.ietf-rats-architecture. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? 
Analogously, why is not named Reference Claims in the RATS arch?] Reference Claims are used to appraise the Claims received from an Attester. For example, Reference Claims MAY be Reference Integrity Measurements (RIM) or assertions that are implicitly trusted because they are signed by a trusted authority (see Endorsements in I-D.ietf-rats-architecture). Reference Claims typically represent (trusted) Claim sets about an Attester's intended platform operational state.", "comments": "Based on URL I suggest using Reference Values to keep consistent with the RATS arch.\n+1 for \"Reference Values\"\non it Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. 
Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. 
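The measured-boot chain sketched in the review comment above (the BIOS measures the BootLoader, the BootLoader measures the OS kernel, and each digest is recorded in the TPM) rests on the TPM extend operation. Below is a minimal, non-normative Python sketch of that hash chain; the component names, image contents, and the single simulated SHA-256 PCR are illustrative assumptions, not values taken from the draft or from the review.

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM-style extend: new PCR value = H(old PCR value || measurement)
        return hashlib.sha256(pcr + measurement).digest()

    pcr = bytes(32)          # a SHA-256 PCR bank starts at all zeros
    event_log = []           # records what was measured, for later appraisal

    for component, image in [("bootloader", b"bootloader-image"),
                             ("kernel", b"kernel-image")]:
        measurement = hashlib.sha256(image).digest()
        event_log.append((component, measurement.hex()))
        pcr = extend(pcr, measurement)

    print("final PCR:", pcr.hex())

The event log together with the final PCR value is what a Verifier would later replay; only the PCR itself has to live in the shielded environment.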
From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. 
Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "_mandatory_ Reference Values as defined in I-D.ietf-rats-architecture. This specific type of Claims is used to appraise Claims incorporated in Evidence. For example, Reference Values MAY be Reference Integrity Measurements (RIM) or assertions that are implicitly trusted because they are signed by a trusted authority (see Endorsements in I-D.ietf-rats-architecture). Reference Values typically represent (trusted) Claim sets about an Attester's intended platform operational state."} {"id": "q-en-draft-ietf-rats-reference-interaction-models-04f270d0a723445d17e5fc1b773981256795f3bbef3b718b2c11d358558264ba", "old_text": "signature, the Attester Identity, and the Handle, and then appraises the Claims. Appraisal procedures are application-specific and can be conducted via comparison of the Claims with corresponding Reference Claims, such as Reference Integrity Measurements. The final output of the Verifier are Attestation Results. Attestation Results constitute new Claim Sets about the properties and characteristics of an Attester, which enables Relying Parties, for example, to assess an", "comments": "Based on URL I suggest using Reference Values to keep consistent with the RATS arch.\n+1 for \"Reference Values\"\non it Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. 
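As one way to make the appraisal step described in the record above concrete, here is a minimal Python sketch of appraisal by direct comparison of Evidence Claims against Reference Values. As the review comment notes, direct comparison is only one possible appraisal rule; the claim names and digest strings below are hypothetical placeholders.

    reference_values = {
        "bootloader": "digest-aaa",   # hypothetical expected digests
        "kernel": "digest-bbb",
    }
    evidence_claims = {
        "bootloader": "digest-aaa",
        "kernel": "digest-zzz",       # deviates from the Reference Value
    }

    def appraise(claims, references):
        findings = []
        for name, expected in references.items():
            actual = claims.get(name)
            if actual is None:
                findings.append((name, "missing"))
            elif actual != expected:
                findings.append((name, "mismatch"))
        return findings

    print(appraise(evidence_claims, reference_values))  # [('kernel', 'mismatch')]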
Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? 
Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. 
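To illustrate one reading of "protecting Claims with an Authentication Secret" questioned in the comment above: a minimal sketch in which the serialized Claims are MACed with a pre-established symmetric secret and labelled with its Authentication Secret ID, the whole forming Evidence. The key, identifier, and claim values are hypothetical, and a real Attesting Environment might use an asymmetric signature instead of an HMAC.

    import hashlib, hmac, json

    auth_secret_id = "attesting-env-01"    # hypothetical identifier
    auth_secret = b"0123456789abcdef"      # hypothetical pre-established key

    claims = {"nonce": "abc123", "pcr0": "digest-aaa"}
    serialized = json.dumps(claims, sort_keys=True).encode()

    evidence = {
        "authSecID": auth_secret_id,
        "claims": claims,
        "mac": hmac.new(auth_secret, serialized, hashlib.sha256).hexdigest(),
    }
    print(evidence["mac"])

A Verifier holding the same secret (or the matching public key in the signature case) recomputes the tag over the received Claims before appraising them.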
Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "signature, the Attester Identity, and the Handle, and then appraises the Claims. Appraisal procedures are application-specific and can be conducted via comparison of the Claims with corresponding Reference Values, such as Reference Integrity Measurements. The final output of the Verifier are Attestation Results. Attestation Results constitute new Claim Sets about the properties and characteristics of an Attester, which enables Relying Parties, for example, to assess an"} {"id": "q-en-draft-ietf-sedate-datetime-extended-ce3bfecaba71d5f0762e3f4cc4e47298853989406f0c6ceccad95f6f9ac15533", "old_text": "order to allow the inclusion of an optional suffix. The extended date/time format is described by the rule \"date-time- ext\". 
\"date-time\" is imported from RFC3339, \"ALPHA\" and \"DIGIT\" from RFC5234. Note that a \"time-zone\" is syntactically similar to a \"suffix-tag\", but does not include an equals sign. This special case is only", "comments": "URL supports offset time zones, e.g. . For backwards compatibility, this RFC should support them too. This PR also adds clarifying text to several time-zone-related definitions.\nMakes sense. Should I start one? Yep, we added these in Temporal primarily for compatibility with prior art in URL URL In Java (and currently in Temporal) a time zone ID can either be an IANA name or an offset. The ID is an index into a set of rules ( in Java, in Temporal) for converting between exact time and local time. If the ID is an offset, there's only one trivial rule because the offset is always the same. I don't have a strong opinion about the name of offset time zones.\nNAME - Let me know if you have responses to the questions in my comment above, or how you'd like me to change this PR. Thanks!\nI think we still need the mailing list discussion first. If we get to a certain state of clarity by March 18, we might be able to post yet another update of the draft right before IETF week.\nSounds good. I sent mail to the mailing list. Waiting for responses.\nI plan to merge this within a couple of hours, so we have a merged draft to look at in Monday's WG meeting. Since the discussion hasn't concluded yet, I'll edit the merge so it is useful for those discussions; we can edit some more after Monday's meeting.\nSounds good to me. Based on your comments here and on the mailing list, I think we're in agreement on the important points.\nHi NAME - is the merged draft available? If not, I'm happy to make the edits you and NAME suggested here in this PR, but didn't want to step on changes you're making in parallel. Let me know how I can help. Thanks!\nI merged the PR and made some edits in main. I included you in the contributor list; please advise if that is OK, the information is right and whether you want to include any more contact/affiliation information.\nGreat! Your edits look good to me. I may follow up after the meeting with minor text clarification PRs but nothing important. Info looks right. I'm a volunteer not working on behalf of my employer so no need for additional affiliation. Thanks!\nThe current ABNF for the time zone extension does not support offset time zones, e.g. . These zones are supported in URL, which is the prior art driving the bracketed time zone extension format. Therefore, I'm assuming that it makes sense for the standard to also allow offsets in brackets too. Current ABNF: URL Suggested ABNF for, where is imported from RFC 3339: Example of Java usage:\nPlease excuse my ignorance: What is the meaning of ? We'd need some text explaining the semantics.\nThis format asserts that derived timestamps (e.g. add one day) should also use the offset. It's mostly for backwards compatibility with URL It's not recommended for normal use. Heres the explanatory text I added in : >\nI assume that conflicts between the timestamp offset and a bracketed offset (e.g. ) should be handled the same as other conflicts, per discussion in URL", "new_text": "order to allow the inclusion of an optional suffix. The extended date/time format is described by the rule \"date-time- ext\". \"date-time\" and \"time-numoffset\" are imported from RFC3339, \"ALPHA\" and \"DIGIT\" from RFC5234. Note that a \"time-zone\" is syntactically similar to a \"suffix-tag\", but does not include an equals sign. 
This special case is only"} {"id": "q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94", "old_text": "Each distinct instant in time can be represented in a descriptive text format using a timestamp, and ISO8601 standardizes a widely- adopted timestamp format, which forms the basis of RFC3339. However, this format only allows timestamps to contain very little additional relevant information, which means that, beyond that, any contextual information related to a given timestamp needs to be either handled separately or attached to it in a non-standard manner. This is already a pressing issue for applications that handle each instant with an associated time zone name, to take into account", "comments": "We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n\"Extended Timestamps\" or \"Extended Internet Timestamps\" ?\nRFC 3339 says \"Internet Date/Time Format\". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as \"Extensible Internet\", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks", "new_text": "Each distinct instant in time can be represented in a descriptive text format using a timestamp, and ISO8601 standardizes a widely- adopted timestamp format, which forms the basis of the Internet Date/ Time Format RFC3339. However, this format only allows timestamps to contain very little additional relevant information, which means that, beyond that, any contextual information related to a given timestamp needs to be either handled separately or attached to it in a non-standard manner. This is already a pressing issue for applications that handle each instant with an associated time zone name, to take into account"} {"id": "q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94", "old_text": "The format provides a generalized way to attach any additional information to the timestamp. This document does not address extensions to the format where the semantic result is no longer a fixed timestamp that is referenced to a (past or future) UTC time. For instance, it does not address:", "comments": "We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n\"Extended Timestamps\" or \"Extended Internet Timestamps\" ?\nRFC 3339 says \"Internet Date/Time Format\". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as \"Extensible Internet\", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks", "new_text": "The format provides a generalized way to attach any additional information to the timestamp. We refer to this format as the Internet Extended Date/Time Format (IXDTF). This document does not address extensions to the format where the semantic result is no longer a fixed timestamp that is referenced to a (past or future) UTC time. 
For instance, it does not address:"} {"id": "q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94", "old_text": "For the format defined here, suffix tags are always : They can be added or left out as desired by the generator of the string in Internet Date/Time Format. (An application might require the presence of specific suffix tags, though.) Without further indication, they are also : Even if included in the Internet Date/Time Format string, the recipient is free to ignore the suffix tag. Reasons might include that the recipient does not implement (or know about) the specific suffix key, or that it does recognize the key but cannot act on the value provided. A suffix tag may also indicate that it is : The recipient is advised that it MUST not act on the Internet Date/ Time Format string unless it can process the suffix tag as specified. A critical suffix tag is indicated by following its opening bracket with an exclamation mark (see \"critical-flag\" in abnf). Internet Date/Time Format strings such as: are internally inconsistent, as Europe/Paris does not use a time zone offset of 0 (which is indicated in the \"Z\", an abbreviation for", "comments": "We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n\"Extended Timestamps\" or \"Extended Internet Timestamps\" ?\nRFC 3339 says \"Internet Date/Time Format\". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as \"Extensible Internet\", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks", "new_text": "For the format defined here, suffix tags are always : They can be added or left out as desired by the generator of the string in Internet Extended Date/Time Format (IXDTF). (An application might require the presence of specific suffix tags, though.) Without further indication, they are also : Even if included in the IXDTF string, the recipient is free to ignore the suffix tag. Reasons might include that the recipient does not implement (or know about) the specific suffix key, or that it does recognize the key but cannot act on the value provided. A suffix tag may also indicate that it is : The recipient is advised that it MUST not act on the Internet Extended Date/Time Format (IXDTF) string unless it can process the suffix tag as specified. A critical suffix tag is indicated by following its opening bracket with an exclamation mark (see \"critical-flag\" in abnf). IXDTF strings such as: are internally inconsistent, as Europe/Paris does not use a time zone offset of 0 (which is indicated in the \"Z\", an abbreviation for"} {"id": "q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94", "old_text": "However, all have an internal inconsistency or an unrecognized suffix key/ value, so a recipient MUST treat the Internet Date/Time Format string as erroneous. 
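The internal-consistency rule described above (for example, a "Z" offset combined with [Europe/Paris]) can be checked mechanically. A minimal sketch, assuming Python 3.9+ with the zoneinfo module and using illustrative timestamps: it tests whether the RFC 3339 offset matches what the bracketed IANA zone would use at that instant.

    from datetime import datetime
    from zoneinfo import ZoneInfo

    def consistent(rfc3339: str, zone_name: str) -> bool:
        ts = datetime.fromisoformat(rfc3339.replace("Z", "+00:00"))
        return ts.utcoffset() == ts.astimezone(ZoneInfo(zone_name)).utcoffset()

    print(consistent("2022-07-08T00:14:07+02:00", "Europe/Paris"))  # True (CEST)
    print(consistent("2022-07-08T00:14:07Z", "Europe/Paris"))       # False

A recipient that hits the second case would, per the text above, treat the string as erroneous rather than silently preferring one of the two conflicting indications.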
Note that this does not mean that an application is disallowed to perform additional processing on elective suffix tags, e.g., asking", "comments": "We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n\"Extended Timestamps\" or \"Extended Internet Timestamps\" ?\nRFC 3339 says \"Internet Date/Time Format\". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as \"Extensible Internet\", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks", "new_text": "However, all have an internal inconsistency or an unrecognized suffix key/ value, so a recipient MUST treat the IXDTF string as erroneous. Note that this does not mean that an application is disallowed to perform additional processing on elective suffix tags, e.g., asking"} {"id": "q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94", "old_text": "The following rules extend the ABNF syntax defined in RFC3339 in order to allow the inclusion of an optional suffix. The extended date/time format is described by the rule \"date-time- ext\". \"date-time\" and \"time-numoffset\" are imported from RFC3339, \"ALPHA\" and \"DIGIT\" from RFC5234.", "comments": "We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n\"Extended Timestamps\" or \"Extended Internet Timestamps\" ?\nRFC 3339 says \"Internet Date/Time Format\". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as \"Extensible Internet\", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks", "new_text": "The following rules extend the ABNF syntax defined in RFC3339 in order to allow the inclusion of an optional suffix. The Internet Extended Date/Time Format (IXDTF) is described by the rule \"date-time-ext\". \"date-time\" and \"time-numoffset\" are imported from RFC3339, \"ALPHA\" and \"DIGIT\" from RFC5234."} {"id": "q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94", "old_text": "3.2. Here are some examples of Internet extended date/time format. rfc3339-datetime represents 39 minutes and 57 seconds after the 16th hour of December 19th, 1996 with an offset of -08:00 from UTC. Note", "comments": "We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n\"Extended Timestamps\" or \"Extended Internet Timestamps\" ?\nRFC 3339 says \"Internet Date/Time Format\". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as \"Extensible Internet\", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks", "new_text": "3.2. 
Here are some examples of Internet Extended Date/Time Format (IXDTF). rfc3339-datetime represents 39 minutes and 57 seconds after the 16th hour of December 19th, 1996 with an offset of -08:00 from UTC. Note"} {"id": "q-en-draft-ietf-taps-transport-security-0ea7f8e388599df4ab6fd61190b8c9ecacf5e4c2087f429852fb0d0061717662", "old_text": "5.1. Configuration interfaces are used to configure the security protocols before a handshake begins or the keys are negotiated. Identities and Private Keys (IPK): The application can provide its identity, credentials (e.g., certificates), and private keys, or", "comments": "We got another Secdir review: Reviewer: Paul Wouters Review result: Has Nits I have reviewed this document as part of the security directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written primarily for the benefit of the security area directors. Document editors and WG chairs should treat these comments just like any other last call comments. The summary of the review is Has Nits Compared to my last review (-09) a lot of text has been removed, reducing the technical details and giving a briefer (but arguably cleaner) overview. I do still have a personal preference of dropping CurveCP and MinimalT as I don't really think these are anything but abandoned research items. I have never seen or heard of these being deployed. I would be more tempted to list openconnect (draft-mavrogiannopoulos-openconnect) which actually does see quite some deployment. If the intent is really to show different kind of API's, than there are other esoteric examples that could be included, such as Off-The-Record(OTR) or Signal, which are mostly encrypted chat programs, but with the ability to generate session keys for encrypted bulk transport (eg for video/audio or file transfer) Section 5.1 \"Session Cache Management (CM):\" which does not include IKEv2/IPsec, even though it does have session resumption (RFC 5723). I guess this is because the section limits itself to \"the application\" that can restart the session. But for IKEv2/IPsec the same could be said. An application that after a long idle period sends a packet, triggers a kernel ACQUIRE, which triggers an IKEv2 session resumption. WireGuard does not have this capability as there are no API/hooks for packet trigger tunnel events (AFAIK) \"Pre-Shared Key Import (PSKI):\" lists WireGuard, but AFAIK it does not support PSK based authentication - only public key based authentication. While I understand the IKEv2 entry (PSKs are used for authentication of peers), I am not sure the ESP entry should be here. ESP does not \"authenticate peers\", unless you call \"being able to decrypt and authenticate packets\" as an instance of \"authenticating peer\" Section 5.2 This states \"This can call into the application to offload validation.\". This \"can\" is only not supported by WireGuard, as all of this is happening inside the kernel. Maybe a note could be useful here? \"Source Address Validation\" - It is a little unclear why TCP based protocols are not listed here. They (implicitly) do source address validation. Perhaps the introduction in this paragraph can state this more explicitly. Eg \"for those protocols that do not use TCP and therefor do not have builtin source address validation .......\"\nOn which protocols to include, MinimalT and CurveCP have different feature sets and interfaces exposed, which I think warrants their inclusion. 
Things like Signal and OTR indeed have seen very wide deployment, though they're asynchronous, application-layer protocols akin to MLS, which I think is out of scope. I'd also not like to add draft-mavrogiannopoulos-openconnect, if for no other reason than to draw a line in the sand. (We run the risk of indefinitely delaying this while adding more protocols.) On Session Cache Management, this is an explicit interface, and the example Paul mentions, while correct, does not exercise such an explicit interface. On WireGuard and PSK import, WireGuard uses the IKpsk2 Noise pattern (see URL, and URL), in which a responder uses its PSK to additionally authenticate a connection. On ESP and PSK import, we note that importing a PSK can be used for \"encrypting (and authenticating) communication with a peer,\" which remains true for IPsec. (Right NAME I'll address the issues in 5.2 in a PR!", "new_text": "5.1. Configuration interfaces are used to configure the security protocols before a handshake begins or keys are negotiated. Identities and Private Keys (IPK): The application can provide its identity, credentials (e.g., certificates), and private keys, or"} {"id": "q-en-draft-ietf-taps-transport-security-0ea7f8e388599df4ab6fd61190b8c9ecacf5e4c2087f429852fb0d0061717662", "old_text": "Identity Validation (IV): During a handshake, the security protocol will conduct identity validation of the peer. This can call into the application to offload validation. TLS", "comments": "We got another Secdir review: Reviewer: Paul Wouters Review result: Has Nits I have reviewed this document as part of the security directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written primarily for the benefit of the security area directors. Document editors and WG chairs should treat these comments just like any other last call comments. The summary of the review is Has Nits Compared to my last review (-09) a lot of text has been removed, reducing the technical details and giving a briefer (but arguably cleaner) overview. I do still have a personal preference of dropping CurveCP and MinimalT as I don't really think these are anything but abandoned research items. I have never seen or heard of these being deployed. I would be more tempted to list openconnect (draft-mavrogiannopoulos-openconnect) which actually does see quite some deployment. If the intent is really to show different kind of API's, than there are other esoteric examples that could be included, such as Off-The-Record(OTR) or Signal, which are mostly encrypted chat programs, but with the ability to generate session keys for encrypted bulk transport (eg for video/audio or file transfer) Section 5.1 \"Session Cache Management (CM):\" which does not include IKEv2/IPsec, even though it does have session resumption (RFC 5723). I guess this is because the section limits itself to \"the application\" that can restart the session. But for IKEv2/IPsec the same could be said. An application that after a long idle period sends a packet, triggers a kernel ACQUIRE, which triggers an IKEv2 session resumption. WireGuard does not have this capability as there are no API/hooks for packet trigger tunnel events (AFAIK) \"Pre-Shared Key Import (PSKI):\" lists WireGuard, but AFAIK it does not support PSK based authentication - only public key based authentication. While I understand the IKEv2 entry (PSKs are used for authentication of peers), I am not sure the ESP entry should be here. 
ESP does not \"authenticate peers\", unless you call \"being able to decrypt and authenticate packets\" as an instance of \"authenticating peer\" Section 5.2 This states \"This can call into the application to offload validation.\". This \"can\" is only not supported by WireGuard, as all of this is happening inside the kernel. Maybe a note could be useful here? \"Source Address Validation\" - It is a little unclear why TCP based protocols are not listed here. They (implicitly) do source address validation. Perhaps the introduction in this paragraph can state this more explicitly. Eg \"for those protocols that do not use TCP and therefor do not have builtin source address validation .......\"\nOn which protocols to include, MinimalT and CurveCP have different feature sets and interfaces exposed, which I think warrants their inclusion. Things like Signal and OTR indeed have seen very wide deployment, though they're asynchronous, application-layer protocols akin to MLS, which I think is out of scope. I'd also not like to add draft-mavrogiannopoulos-openconnect, if for no other reason than to draw a line in the sand. (We run the risk of indefinitely delaying this while adding more protocols.) On Session Cache Management, this is an explicit interface, and the example Paul mentions, while correct, does not exercise such an explicit interface. On WireGuard and PSK import, WireGuard uses the IKpsk2 Noise pattern (see URL, and URL), in which a responder uses its PSK to additionally authenticate a connection. On ESP and PSK import, we note that importing a PSK can be used for \"encrypting (and authenticating) communication with a peer,\" which remains true for IPsec. (Right NAME I'll address the issues in 5.2 in a PR!", "new_text": "Identity Validation (IV): During a handshake, the security protocol will conduct identity validation of the peer. This can offload validation or occur transparently to the application. TLS"} {"id": "q-en-draft-ietf-taps-transport-security-0ea7f8e388599df4ab6fd61190b8c9ecacf5e4c2087f429852fb0d0061717662", "old_text": "Source Address Validation (SAV): The handshake protocol may interact with the transport protocol or application to validate the address of the remote peer that has sent data. This involves sending a cookie exchange to avoid DoS attacks. DTLS", "comments": "We got another Secdir review: Reviewer: Paul Wouters Review result: Has Nits I have reviewed this document as part of the security directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written primarily for the benefit of the security area directors. Document editors and WG chairs should treat these comments just like any other last call comments. The summary of the review is Has Nits Compared to my last review (-09) a lot of text has been removed, reducing the technical details and giving a briefer (but arguably cleaner) overview. I do still have a personal preference of dropping CurveCP and MinimalT as I don't really think these are anything but abandoned research items. I have never seen or heard of these being deployed. I would be more tempted to list openconnect (draft-mavrogiannopoulos-openconnect) which actually does see quite some deployment. 
If the intent is really to show different kind of API's, than there are other esoteric examples that could be included, such as Off-The-Record(OTR) or Signal, which are mostly encrypted chat programs, but with the ability to generate session keys for encrypted bulk transport (eg for video/audio or file transfer) Section 5.1 \"Session Cache Management (CM):\" which does not include IKEv2/IPsec, even though it does have session resumption (RFC 5723). I guess this is because the section limits itself to \"the application\" that can restart the session. But for IKEv2/IPsec the same could be said. An application that after a long idle period sends a packet, triggers a kernel ACQUIRE, which triggers an IKEv2 session resumption. WireGuard does not have this capability as there are no API/hooks for packet trigger tunnel events (AFAIK) \"Pre-Shared Key Import (PSKI):\" lists WireGuard, but AFAIK it does not support PSK based authentication - only public key based authentication. While I understand the IKEv2 entry (PSKs are used for authentication of peers), I am not sure the ESP entry should be here. ESP does not \"authenticate peers\", unless you call \"being able to decrypt and authenticate packets\" as an instance of \"authenticating peer\" Section 5.2 This states \"This can call into the application to offload validation.\". This \"can\" is only not supported by WireGuard, as all of this is happening inside the kernel. Maybe a note could be useful here? \"Source Address Validation\" - It is a little unclear why TCP based protocols are not listed here. They (implicitly) do source address validation. Perhaps the introduction in this paragraph can state this more explicitly. Eg \"for those protocols that do not use TCP and therefor do not have builtin source address validation .......\"\nOn which protocols to include, MinimalT and CurveCP have different feature sets and interfaces exposed, which I think warrants their inclusion. Things like Signal and OTR indeed have seen very wide deployment, though they're asynchronous, application-layer protocols akin to MLS, which I think is out of scope. I'd also not like to add draft-mavrogiannopoulos-openconnect, if for no other reason than to draw a line in the sand. (We run the risk of indefinitely delaying this while adding more protocols.) On Session Cache Management, this is an explicit interface, and the example Paul mentions, while correct, does not exercise such an explicit interface. On WireGuard and PSK import, WireGuard uses the IKpsk2 Noise pattern (see URL, and URL), in which a responder uses its PSK to additionally authenticate a connection. On ESP and PSK import, we note that importing a PSK can be used for \"encrypting (and authenticating) communication with a peer,\" which remains true for IPsec. (Right NAME I'll address the issues in 5.2 in a PR!", "new_text": "Source Address Validation (SAV): The handshake protocol may interact with the transport protocol or application to validate the address of the remote peer that has sent data. This involves sending a cookie exchange to avoid DoS attacks. (This list omits protocols which depend on TCP and therefore implicitly perform SAV.) DTLS"} {"id": "q-en-draft-ietf-taps-transport-security-edb604cbbde350b25887a1f93423ad48d4525c248774cb5df0930fd160521ab6", "old_text": "5.2. Cryptographic algorithm negotiation: Transport security protocols should permit applications to configure supported or enabled cryptographic algorithms. Transport dependency: None. 
Application dependency: Application awareness of supported or desired algorithms. Application authentication delegation: Some protocols may completely defer endpoint authentication to applications, e.g., to reduce online protocol complexity.", "comments": "WIP addressing . I don't want to pull this as-is: I just want comments on the format and usefulness of this. This closes the circle created by the list of protocol features in section 3, and indicates where we probably want more consistency and completeness in that list so the derivation of common optional features is more obvious.\nStill a WIP. I have a bunch of comments as a result of doing this exercise: Feature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Wireguard is IP encapsulation within UDP, but this is not stated anywhere in the doc. The set of features we list here should be called out or evident from the protocol features subsections under each protocol description. I think there's still some confusion in sections around symmetric authentication vs. signatures, i.e., AEAD-style authentication (\"packets belong to this session\") vs. PKI endpoint authentication (\"this connection belongs to a peer I trust and is not being MITMed\"). This may be a comment on the unfortunateness of the literature using \"authenticated\" for both.\n\nFeature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Can you give examples? Can you please open an issue to address this? See the other open pull request. :-) Where in particular are these things confused?\nNAME ping!\nSorry, am away for the holiday weekend. The ones I have listed with a \"?\", for instance. I guessed on a few of the others, as well (e.g., configuration extensions for MinimalT). The right answer is for me to go back through this table and enumerate all the areas where the text isn't clear, and fix them. I'll file an issue. Will do. I'm not quite sure what I was thinking here. After reading it through again, I think I'm okay with how this is presented.\nNAME can you please update this PR?\nA few more questions: Is there any existing client puzzle/proof of work standardized for TLS or DTLS, or is URL as far as something's gotten? Same goes for IPsec; I can find no standardized version of any of these things. Should algorithms like TLS, DTLS, etc. that support native authentication but also can export a cryptographic channel binding be listed as supporting authentication delegation? My instinct is yes.\nNAME None that I'm aware of! Yes, I think that's correct.\nNAME Will you be able to complete this PR this week?\nYes, though I could use some guidance on the ?'s in the table, if you have enough context to resolve them. Otherwise, I will do some digging.\nNAME IKEv2+ESP/AD: Yes, via EAP(-TLS). IKEv2+ESP/AFN: Yes, via vendor and configuration payloads. SRTP+DTLS/CM: No. ZRTP: Omit, since it's a variant? WireGuard/CM: No, connections are bound to IP addresses. WireGuard/SC: No, session resumption is not supported. WireGuard/LHP: Yes, for transport packets only. MinimalT/AD: No. MinimalT/AFN: No. MinimalT/LHP: Yes. CurveCP/AD: No.\nRFR.\nShip it.\nIt's probably easier and more consumable to encode the matrix of protocols providing a particular feature as a table.\nI'm assigning this to you, Kyle. Will you have time to take a stab at it?\nYep, I'll work on it this afternoon.", "new_text": "5.2. 
Cryptographic algorithm negotiation (AN): Transport security protocols should permit applications to configure supported or enabled cryptographic algorithms. Transport dependency: None. Application dependency: Application awareness of supported or desired algorithms. Application authentication delegation (AD): Some protocols may completely defer endpoint authentication to applications, e.g., to reduce online protocol complexity."} {"id": "q-en-draft-ietf-taps-transport-security-edb604cbbde350b25887a1f93423ad48d4525c248774cb5df0930fd160521ab6", "old_text": "Application dependency: Application opt-in and policy for endpoint authentication Mutual authentication: Transport security protocols should allow each endpoint to authenticate the other if required by the application. Transport dependency: None.", "comments": "WIP addressing . I don't want to pull this as-is: I just want comments on the format and usefulness of this. This closes the circle created by the list of protocol features in section 3, and indicates where we probably want more consistency and completeness in that list so the derivation of common optional features is more obvious.\nStill a WIP. I have a bunch of comments as a result of doing this exercise: Feature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Wireguard is IP encapsulation within UDP, but this is not stated anywhere in the doc. The set of features we list here should be called out or evident from the protocol features subsections under each protocol description. I think there's still some confusion in sections around symmetric authentication vs. signatures, i.e., AEAD-style authentication (\"packets belong to this session\") vs. PKI endpoint authentication (\"this connection belongs to a peer I trust and is not being MITMed\"). This may be a comment on the unfortunateness of the literature using \"authenticated\" for both.\n\nFeature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Can you give examples? Can you please open an issue to address this? See the other open pull request. :-) Where in particular are these things confused?\nNAME ping!\nSorry, am away for the holiday weekend. The ones I have listed with a \"?\", for instance. I guessed on a few of the others, as well (e.g., configuration extensions for MinimalT). The right answer is for me to go back through this table and enumerate all the areas where the text isn't clear, and fix them. I'll file an issue. Will do. I'm not quite sure what I was thinking here. After reading it through again, I think I'm okay with how this is presented.\nNAME can you please update this PR?\nA few more questions: Is there any existing client puzzle/proof of work standardized for TLS or DTLS, or is URL as far as something's gotten? Same goes for IPsec; I can find no standardized version of any of these things. Should algorithms like TLS, DTLS, etc. that support native authentication but also can export a cryptographic channel binding be listed as supporting authentication delegation? My instinct is yes.\nNAME None that I'm aware of! Yes, I think that's correct.\nNAME Will you be able to complete this PR this week?\nYes, though I could use some guidance on the ?'s in the table, if you have enough context to resolve them. Otherwise, I will do some digging.\nNAME IKEv2+ESP/AD: Yes, via EAP(-TLS). IKEv2+ESP/AFN: Yes, via vendor and configuration payloads. SRTP+DTLS/CM: No. ZRTP: Omit, since it's a variant? 
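One concrete illustration of the cryptographic algorithm negotiation (AN) interface described in this record, using Python's ssl module (OpenSSL underneath); the version floor and cipher string are example policy choices, not requirements from the draft.

    import ssl

    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # refuse older protocol versions
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")   # restrict TLS 1.2 cipher suites

    print(ctx.minimum_version)
    print([c["name"] for c in ctx.get_ciphers()][:5])

The application only supplies policy; which algorithm is actually used is still decided by the handshake negotiation with the peer.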
WireGuard/CM: No, connections are bound to IP addresses. WireGuard/SC: No, session resumption is not supported. WireGuard/LHP: Yes, for transport packets only. MinimalT/AD: No. MinimalT/AFN: No. MinimalT/LHP: Yes. CurveCP/AD: No.\nRFR.\nShip it.\nIt's probably easier and more consumable to encode the matrix of protocols providing a particular feature as a table.\nI'm assigning this to you, Kyle. Will you have time to take a stab at it?\nYep, I'll work on it this afternoon.", "new_text": "Application dependency: Application opt-in and policy for endpoint authentication Mutual authentication (MA): Transport security protocols should allow each endpoint to authenticate the other if required by the application. Transport dependency: None."} {"id": "q-en-draft-ietf-taps-transport-security-edb604cbbde350b25887a1f93423ad48d4525c248774cb5df0930fd160521ab6", "old_text": "Application dependency: Mutual authentication required for application support. DoS mitigation: Transport security protocols may need to support volumetric DoS prevention via, e.g., cookies or initiator-side puzzles. Transport dependency: None. Application dependency: None. Connection mobility: Sessions should not be bound to a network connection (or 5-tuple). This allows cryptographic key material and other state information to be reused in the event of a connection change. Examples of this include a NAT rebinding that occurs without a client's knowledge. Transport dependency: Connections are unreliable or can change due to unpredictable network events, e.g., NAT re-bindings. Application dependency: None. Source validation: Source validation must be provided to mitigate server-targeted DoS attacks. This can be done with puzzles or cookies. Transport dependency: Packets may arrive as datagrams instead of streams from unauthenticated sources. Application dependency: None. Application-layer feature negotiation: The type of application using a transport security protocol often requires features configured at the connection establishment layer, e.g., ALPN RFC7301. Moreover, application-layer features may often be used to offload the session to another server which can better handle the request. (The TLS SNI is one example of such a feature.) As such, transport security protocols should provide a generic mechanism to allow for such application-specific features and options to be configured or otherwise negotiated. Transport dependency: None. Application dependency: Specification of application-layer features or functionality. Configuration extensions: The protocol negotiation should be extensible with addition of new configuration options. Transport dependency: None.", "comments": "WIP addressing . I don't want to pull this as-is: I just want comments on the format and usefulness of this. This closes the circle created by the list of protocol features in section 3, and indicates where we probably want more consistency and completeness in that list so the derivation of common optional features is more obvious.\nStill a WIP. I have a bunch of comments as a result of doing this exercise: Feature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Wireguard is IP encapsulation within UDP, but this is not stated anywhere in the doc. The set of features we list here should be called out or evident from the protocol features subsections under each protocol description. I think there's still some confusion in sections around symmetric authentication vs. 
signatures, i.e., AEAD-style authentication (\"packets belong to this session\") vs. PKI endpoint authentication (\"this connection belongs to a peer I trust and is not being MITMed\"). This may be a comment on the unfortunateness of the literature using \"authenticated\" for both.\n\nFeature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Can you give examples? Can you please open an issue to address this? See the other open pull request. :-) Where in particular are these things confused?\nNAME ping!\nSorry, am away for the holiday weekend. The ones I have listed with a \"?\", for instance. I guessed on a few of the others, as well (e.g., configuration extensions for MinimalT). The right answer is for me to go back through this table and enumerate all the areas where the text isn't clear, and fix them. I'll file an issue. Will do. I'm not quite sure what I was thinking here. After reading it through again, I think I'm okay with how this is presented.\nNAME can you please update this PR?\nA few more questions: Is there any existing client puzzle/proof of work standardized for TLS or DTLS, or is URL as far as something's gotten? Same goes for IPsec; I can find no standardized version of any of these things. Should algorithms like TLS, DTLS, etc. that support native authentication but also can export a cryptographic channel binding be listed as supporting authentication delegation? My instinct is yes.\nNAME None that I'm aware of! Yes, I think that's correct.\nNAME Will you be able to complete this PR this week?\nYes, though I could use some guidance on the ?'s in the table, if you have enough context to resolve them. Otherwise, I will do some digging.\nNAME IKEv2+ESP/AD: Yes, via EAP(-TLS). IKEv2+ESP/AFN: Yes, via vendor and configuration payloads. SRTP+DTLS/CM: No. ZRTP: Omit, since it's a variant? WireGuard/CM: No, connections are bound to IP addresses. WireGuard/SC: No, session resumption is not supported. WireGuard/LHP: Yes, for transport packets only. MinimalT/AD: No. MinimalT/AFN: No. MinimalT/LHP: Yes. CurveCP/AD: No.\nRFR.\nShip it.\nIt's probably easier and more consumable to encode the matrix of protocols providing a particular feature as a table.\nI'm assigning this to you, Kyle. Will you have time to take a stab at it?\nYep, I'll work on it this afternoon.", "new_text": "Application dependency: Mutual authentication required for application support. DoS mitigation (DM): Transport security protocols may need to support volumetric DoS prevention via, e.g., cookies or initiator- side puzzles. Transport dependency: None. Application dependency: None. Connection mobility (CM): Sessions should not be bound to a network connection (or 5-tuple). This allows cryptographic key material and other state information to be reused in the event of a connection change. Examples of this include a NAT rebinding that occurs without a client's knowledge. Transport dependency: Connections are unreliable or can change due to unpredictable network events, e.g., NAT re-bindings. Application dependency: None. Source validation (SV): Source validation must be provided to mitigate server-targeted DoS attacks. This can be done with puzzles or cookies. Transport dependency: Packets may arrive as datagrams instead of streams from unauthenticated sources. Application dependency: None. 
Application-layer feature negotiation (AFN): The type of application using a transport security protocol often requires features configured at the connection establishment layer, e.g., ALPN RFC7301. Moreover, application-layer features may often be used to offload the session to another server which can better handle the request. (The TLS SNI is one example of such a feature.) As such, transport security protocols should provide a generic mechanism to allow for such application-specific features and options to be configured or otherwise negotiated. Transport dependency: None. Application dependency: Specification of application-layer features or functionality. Configuration extensions (CX): The protocol negotiation should be extensible with addition of new configuration options. Transport dependency: None."} {"id": "q-en-draft-ietf-taps-transport-security-edb604cbbde350b25887a1f93423ad48d4525c248774cb5df0930fd160521ab6", "old_text": "Application dependency: Specification of application-specific extensions. Session caching (and management): Sessions should be cacheable to enable reuse and amortize the cost of performing session establishment handshakes. Transport dependency: None. Application dependency: None. Length-hiding padding: Applications may wish to defer traffic padding to the security protocol to deter traffic analysis attacks. Transport dependency: None. Application dependency: Knowledge of desired padding policies. 6. This section describes the interface surface exposed by the security", "comments": "WIP addressing . I don't want to pull this as-is: I just want comments on the format and usefulness of this. This closes the circle created by the list of protocol features in section 3, and indicates where we probably want more consistency and completeness in that list so the derivation of common optional features is more obvious.\nStill a WIP. I have a bunch of comments as a result of doing this exercise: Feature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Wireguard is IP encapsulation within UDP, but this is not stated anywhere in the doc. The set of features we list here should be called out or evident from the protocol features subsections under each protocol description. I think there's still some confusion in sections around symmetric authentication vs. signatures, i.e., AEAD-style authentication (\"packets belong to this session\") vs. PKI endpoint authentication (\"this connection belongs to a peer I trust and is not being MITMed\"). This may be a comment on the unfortunateness of the literature using \"authenticated\" for both.\n\nFeature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Can you give examples? Can you please open an issue to address this? See the other open pull request. :-) Where in particular are these things confused?\nNAME ping!\nSorry, am away for the holiday weekend. The ones I have listed with a \"?\", for instance. I guessed on a few of the others, as well (e.g., configuration extensions for MinimalT). The right answer is for me to go back through this table and enumerate all the areas where the text isn't clear, and fix them. I'll file an issue. Will do. I'm not quite sure what I was thinking here. 
After reading it through again, I think I'm okay with how this is presented.\nNAME can you please update this PR?\nA few more questions: Is there any existing client puzzle/proof of work standardized for TLS or DTLS, or is URL as far as something's gotten? Same goes for IPsec; I can find no standardized version of any of these things. Should algorithms like TLS, DTLS, etc. that support native authentication but also can export a cryptographic channel binding be listed as supporting authentication delegation? My instinct is yes.\nNAME None that I'm aware of! Yes, I think that's correct.\nNAME Will you be able to complete this PR this week?\nYes, though I could use some guidance on the ?'s in the table, if you have enough context to resolve them. Otherwise, I will do some digging.\nNAME IKEv2+ESP/AD: Yes, via EAP(-TLS). IKEv2+ESP/AFN: Yes, via vendor and configuration payloads. SRTP+DTLS/CM: No. ZRTP: Omit, since it's a variant? WireGuard/CM: No, connections are bound to IP addresses. WireGuard/SC: No, session resumption is not supported. WireGuard/LHP: Yes, for transport packets only. MinimalT/AD: No. MinimalT/AFN: No. MinimalT/LHP: Yes. CurveCP/AD: No.\nRFR.\nShip it.\nIt's probably easier and more consumable to encode the matrix of protocols providing a particular feature as a table.\nI'm assigning this to you, Kyle. Will you have time to take a stab at it?\nYep, I'll work on it this afternoon.", "new_text": "Application dependency: Specification of application-specific extensions. Session caching and management (SC): Sessions should be cacheable to enable reuse and amortize the cost of performing session establishment handshakes. Transport dependency: None. Application dependency: None. Length-hiding padding (LHP): Applications may wish to defer traffic padding to the security protocol to deter traffic analysis attacks. Transport dependency: None. Application dependency: Knowledge of desired padding policies. 5.3. The following table lists the availability of the above-listed optional features in each of the analyzed protocols. \"Mandatory\" indicates that the feature is intrinsic to the protocol and cannot be disabled. \"Supported\" indicates that the feature is optionally provided natively or through a (standardized, where applicable) extension. M=Mandatory S=Supported but not required U=Unsupported *=On TCP; MPTCP would provide this ability **=TCP provides SYN cookies natively, but these are not cryptographically strong +=For transport packets only 6. This section describes the interface surface exposed by the security"} {"id": "q-en-draft-ietf-tls-ctls-baf94badb4711777a705aae2874ed5be318dd50cd20221d6084e0dfdb7edecc9", "old_text": "4.2.2. [[TODO]] 4.3.", "comments": "everything else is self-delimiting and is just a source of confusion (5) Replace PSK TODO with an open issue marker (6) use varints instead of remainder of message. Got a bit carried away", "new_text": "4.2.2. [[OPEN ISSUE: Limiting this to one value would potentially save some bytes here, at the cost of generality.]] 4.3."} {"id": "q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940", "old_text": "7.2. I-D.ietf-tls-sni-encryption lists several requirements for SNI encryption. In this section, we re-iterate these requirements and assess the ESNI design against them. 7.2.1. 
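As a rough, non-normative illustration of the optional-feature matrix described in the taps-transport-security records above, the following Python sketch encodes only the per-protocol values that the discussion itself states (the abbreviations and the M/S/U legend are the draft's; everything else here is a placeholder):

    # Sketch of the per-protocol optional-feature matrix discussed above.
    # Legend: M=Mandatory, S=Supported but not required, U=Unsupported.
    MATRIX = {
        "IKEv2+ESP": {"AD": "S", "AFN": "S"},
        "SRTP+DTLS": {"CM": "U"},
        "WireGuard": {"CM": "U", "SC": "U", "LHP": "S"},  # LHP for transport packets only
        "MinimalT":  {"AD": "U", "AFN": "U", "LHP": "S"},
        "CurveCP":   {"AD": "U"},
    }

    def supports(protocol: str, feature: str) -> bool:
        # Treat both mandatory and optional support as "available".
        return MATRIX.get(protocol, {}).get(feature, "U") in ("M", "S")

    print(supports("WireGuard", "LHP"))  # True
    print(supports("WireGuard", "CM"))   # False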
Since the SNI encryption key is derived from a (EC)DH operation between the client's ephemeral and server's semi-static ESNI key, the", "comments": "Since OCSP is the server name, ESNI should probably enforce OCSP stapling (similar to ) or any other way of encrypted OCSP checks. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if server don't send stapled responses? It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM.", "new_text": "7.2. ESNI requires encrypted DNS to be an effective privacy protection mechanism. However, verifying the server's identity from the Certificate message, particularly when using the X509 CertificateType, may result in additional network traffic that may reveal the server identity. Examples of this traffic may include requests for revocation information, such as OCSP or CRL traffic, or requests for repository information, such as authorityInformationAccess. It may also include implementation- specific traffic for additional information sources as part of verification. Implementations SHOULD avoid leaking information that may identify the server. Even when sent over an encrypted transport, such requests may result in indirect exposure of the server's identity, such as indicating a specific CA or service being used. To mitigate this risk, servers SHOULD deliver such information in-band when possible, such as through the use of OCSP stapling, and clients SHOULD take steps to minimize or protect such requests during certificate validation. 7.3. I-D.ietf-tls-sni-encryption lists several requirements for SNI encryption. In this section, we re-iterate these requirements and assess the ESNI design against them. 7.3.1. Since the SNI encryption key is derived from a (EC)DH operation between the client's ephemeral and server's semi-static ESNI key, the"} {"id": "q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940", "old_text": "Hello, with a different ephemeral key share, as the terminating server will fail to decrypt and verify the ESNI value. 7.2.2. This design depends upon DNS as a vehicle for semi-static public key distribution. Server operators may partition their private keys", "comments": "Since OCSP is the server name, ESNI should probably enforce OCSP stapling (similar to ) or any other way of encrypted OCSP checks. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if server don't send stapled responses? It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM.", "new_text": "Hello, with a different ephemeral key share, as the terminating server will fail to decrypt and verify the ESNI value. 7.3.2. This design depends upon DNS as a vehicle for semi-static public key distribution. 
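The revocation-traffic concern in the esni record above can be captured as a small client-side policy: when ESNI is in use, skip live OCSP/CRL fetches and insist on in-band (stapled) revocation information instead. A minimal sketch with hypothetical field names (a policy outline only, not a TLS stack API):

    from dataclasses import dataclass

    @dataclass
    class RevocationPolicy:
        allow_network_fetch: bool    # live OCSP/CRL lookups during verification
        require_stapled_ocsp: bool   # insist on revocation info delivered in-band

    def policy_for(esni_in_use: bool) -> RevocationPolicy:
        if esni_in_use:
            # Out-of-band OCSP/CRL requests can indirectly reveal which
            # server is being contacted, so rely on stapled responses only.
            return RevocationPolicy(allow_network_fetch=False, require_stapled_ocsp=True)
        return RevocationPolicy(allow_network_fetch=True, require_stapled_ocsp=False)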
Server operators may partition their private keys"} {"id": "q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940", "old_text": "by sending different Resource Records containing ESNIRecord and ESNIKeys values with different keys using a short TTL. 7.2.3. This design requires servers to decrypt ClientHello messages with ClientEncryptedSNI extensions carrying valid digests. Thus, it is", "comments": "Since OCSP is the server name, ESNI should probably enforce OCSP stapling (similar to ) or any other way of encrypted OCSP checks. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if server don't send stapled responses? It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM.", "new_text": "by sending different Resource Records containing ESNIRecord and ESNIKeys values with different keys using a short TTL. 7.3.3. This design requires servers to decrypt ClientHello messages with ClientEncryptedSNI extensions carrying valid digests. Thus, it is"} {"id": "q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940", "old_text": "server. This attack is bound by the number of valid TCP connections an attacker can open. 7.2.4. As more clients enable ESNI support, e.g., as normal part of Web browser functionality, with keys supplied by shared hosting", "comments": "Since OCSP is the server name, ESNI should probably enforce OCSP stapling (similar to ) or any other way of encrypted OCSP checks. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if server don't send stapled responses? It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM.", "new_text": "server. This attack is bound by the number of valid TCP connections an attacker can open. 7.3.4. As more clients enable ESNI support, e.g., as normal part of Web browser functionality, with keys supplied by shared hosting"} {"id": "q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940", "old_text": "ESNI extensions (see grease-extensions), which helps ensure the ecosystem handles the values correctly. 7.2.5. This design is not forward secret because the server's ESNI key is static. However, the window of exposure is bound by the key lifetime. It is RECOMMENDED that servers rotate keys frequently. 7.2.6. This design permits servers operating in Split Mode to forward connections directly to backend origin servers, thereby avoiding unnecessary MiTM attacks. 7.2.7. Assuming ESNI records retrieved from DNS are validated, e.g., via DNSSEC or fetched from a trusted Recursive Resolver, spoofing a", "comments": "Since OCSP is the server name, ESNI should probably enforce OCSP stapling (similar to ) or any other way of encrypted OCSP checks. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if server don't send stapled responses? 
It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM.", "new_text": "ESNI extensions (see grease-extensions), which helps ensure the ecosystem handles the values correctly. 7.3.5. This design is not forward secret because the server's ESNI key is static. However, the window of exposure is bound by the key lifetime. It is RECOMMENDED that servers rotate keys frequently. 7.3.6. This design permits servers operating in Split Mode to forward connections directly to backend origin servers, thereby avoiding unnecessary MiTM attacks. 7.3.7. Assuming ESNI records retrieved from DNS are validated, e.g., via DNSSEC or fetched from a trusted Recursive Resolver, spoofing a"} {"id": "q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940", "old_text": "client validates the server certificate against the public name before retrying. 7.2.8. This design has no impact on application layer protocol negotiation. It may affect connection routing, server certificate selection, and client certificate verification. Thus, it is compatible with multiple protocols. 7.3. Note that the backend server has no way of knowing what the SNI was, but that does not lead to additional privacy exposure because the", "comments": "Since OCSP is the server name, ESNI should probably enforce OCSP stapling (similar to ) or any other way of encrypted OCSP checks. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if server don't send stapled responses? It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM.", "new_text": "client validates the server certificate against the public name before retrying. 7.3.8. This design has no impact on application layer protocol negotiation. It may affect connection routing, server certificate selection, and client certificate verification. Thus, it is compatible with multiple protocols. 7.4. Note that the backend server has no way of knowing what the SNI was, but that does not lead to additional privacy exposure because the"} {"id": "q-en-draft-ietf-tls-esni-433564eb5fd093a5ea42c2b0210108a85ec53006646801203af261c5bb505b94", "old_text": "nonce re-use since the client's ESNI key share, and thus the value of Zx, does not change across ClientHello retries.) [[TODO: label swapping fixes a bug in the spec, though this may not be the best way to deal with HRR. See https://github.com/tlswg/ draft-ietf-tls-esni/issues/121 and https://github.com/tlswg/draft- ietf-tls-esni/pull/170 for more details.]] The client then creates a ClientESNIInner structure: A random 16-octet value to be echoed by the server in the", "comments": "I got a little tripped up on this point! Maybe you have a better way to explain it.\nThanks, NAME", "new_text": "nonce re-use since the client's ESNI key share, and thus the value of Zx, does not change across ClientHello retries.) Note that ESNIContents will not be directly transmitted to the server in the ClientHello. The server will instead reconstruct the same object by obtaining its values from ClientEncryptedSNI and ClientHello. 
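The record above notes that ESNIContents is never sent directly; the server rebuilds it from values it already holds. A minimal sketch of that reconstruction, assuming the field names used in the draft text (record_digest and key_share from ClientEncryptedSNI, the nonce from ClientHello.random) and stand-in objects for the parsed messages:

    from dataclasses import dataclass
    from types import SimpleNamespace

    @dataclass
    class ESNIContents:
        record_digest: bytes        # same value as ClientEncryptedSNI.record_digest
        key_share: bytes            # same value as ClientEncryptedSNI.key_share
        client_hello_random: bytes  # same nonce as ClientHello.random

    def reconstruct_esni_contents(client_encrypted_sni, client_hello) -> ESNIContents:
        # Nothing extra goes on the wire: the server reassembles the structure
        # from fields it already received, so both sides derive identical bytes.
        return ESNIContents(
            record_digest=client_encrypted_sni.record_digest,
            key_share=client_encrypted_sni.key_share,
            client_hello_random=client_hello.random,
        )

    # Toy usage with stand-in parsed messages:
    esni_ext = SimpleNamespace(record_digest=b"\x01" * 32, key_share=b"\x02" * 32)
    hello = SimpleNamespace(random=b"\x03" * 32)
    print(reconstruct_esni_contents(esni_ext, hello))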
[[TODO: label swapping fixes a bug in the spec, though this may not be the best way to deal with HRR. See https://github.com/tlswg/ draft-ietf-tls-esni/issues/121 and https://github.com/tlswg/draft- ietf-tls-esni/pull/170 for more details.]] Same value as ClientEncryptedSNI.record_digest. Same value as ClientEncryptedSNI.key_share. Same nonce as ClientHello.random. The client then creates a ClientESNIInner structure: A random 16-octet value to be echoed by the server in the"} {"id": "q-en-draft-ietf-tls-esni-cb6f3665c13a427d50195b815ecb45a81055e3de22ef25d4d42ac0257d4e8cb8", "old_text": "receipt of a second ClientHello message with a ClientECH value, servers set up their HPKE context and decrypt ClientECH as follows: It is an error for the client to offer ECH before the HelloRetryRequest but not after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If the client-facing server accepts ECH for the first ClientHello but not the second, or it accepts ECH for the second ClientHello but not the first, then the client MUST abort the handshake with an \"illegal_parameter\" alert. [[OPEN ISSUE: Should we be using the PSK input or the info input? On the one hand, the requirements on info seem weaker, but maybe", "comments": "It is an error for the client to offer ECH after HRR but not before. Likewise, it is an error for the client to offer ECH before HRR but not after. This change ensures the server aborts in this case.\nSuppose we hit one of the messy HRR cases (URL), where the HRR is valid for the outer ClientHello, but not the inner ClientHello. The client then cannot construct a valid inner ClientHello. How would that square with this text? Are you envisioning the client just generates a random ECH extension, or...?\nI don't think this is a new problem. I see the PR as clarifying existing behavior. Can you please propose text for the HRR nightmare in ? :-)\nAh, you're right. Sorry, I misread this PR as introducing this requirement. In principle, we only need the server to require ECH in CH2 if it decided to accept ECH on CH1. That would allow the client to not send ECH the second time, but maybe we don't want to do that, depending on what we do for ... Do we know what we want to do for ? Simplest seems to be saying clients MUST align them. (Though how does that work with later HRR-sensitive extensions? Are funny ECH-adding proxies a thing?) Or maybe it's just a SHOULD? Or maybe we come up with some other protocol trick so ECH acceptance is visible at HRR (which would probably reopen the don't-stick-out can of worms yet again). Of these, only the MUST avoids collision with this PR's (existing) client requirement.\nYeah, that's my preference. This logic is already complicated enough. The primary pain point I see is enumerating (and prescribing) what is a HRR-sensitive field.\nSame, with that pain point being the big question in my head. Whether an HRR is valid for a ClientHello isn't an implementation decision, so in theory it's enumerable. But more extensions may come in later. And in the other direction, even a very strong SHOULD means we need to pick a client behavior for the weird case and allow for it in server behavior.\nRegardless of where we land on , this PR seems appropriate to me.\nIn a normal ECHO handshake with HelloRetryRequest there are 4 ClientHellos sent: ClientHelloOuter1, ClientHelloInner1, ClientHelloOuter2, ClientHelloInner2. 
Encryption of ClientHelloInner2 is bound to echohrrkey so the server must have the ECHOConfig used for ClientHelloInner1 in order to process ECHO the second time. And by the time the client receives encrypted handshake traffic it is not useful to signal if ClientHelloInner1 was successfully decrypted, what matters is to know if ECHO is globally successful or not. So one aspect that I find not explained enough is whether the server replying with HRR should use ClientHelloInner1 in the transcript, or instead always takes ClientHelloOuter1 (data on the wire) regardless of the first decryption status. The ECHO status accept/reject can be conveyed based on distinction ClientHello2 inner/outer only. This point impacts the number of transcripts the client may have to try, i.e. can be sure the combination (ClientHelloInner1, ClientHelloOuter2) is never valid.\nHi NAME the spec has changed quite a bit since this question was first posed. In particular, we've landed , which changes the server behavior so that it provides an explicit signal of ECH acceptance in its SH. Hence, no more trial decryption. I'm wondering if your question still applies? I believe what to do in case of HRR is well-specified at this point. There is no such ECH accdeptance signal provided in the HRR, so to determine if ECH was accepted, the client must wait until the SH. Until that point, the client needs to compute two transcripts in parallel: one assuming the ClientHelloOuter was used, and another assuming the ClientHelloInner was used.\nI don't think resolves this. If the client has different HRR-relevant preferences (key share and cipher suite) between CHInner and CHOuter, I believe HRR is still a mess because the HRR message doesn't contain a signal for whether ECH was accepted. Though I suppose it is a little less complex now that you don't need to go as far as trial decrypting. Different HRR-relevant preferences means the HRR may be good for one ClientHello but not the other, yet the client needs to manage that state and defer the actual error-handling to when ECH acceptance is known. An option where the HRR message included an ECH acceptance signal would close this, but then we have sticking out woes. The draft touches on this, but somewhat vaguely. URL We've been intending that our implementation would always match inner and outer preferences for HRR-sensitive fields, to avoid this case. But having this odd complexity cliff hidden in the spec is poor. I'd advocate we either require or at least strongly recommend clients do this.\nsuggests to add the following text to the HRR section: after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If the client-facing server accepts ECH for the first ClientHello but not the second, or it accepts ECH for the second ClientHello but not the first, then it MUST abort the handshake with an \"illegal_parameter\" alert. I think this solves the problem, at least partially. It ensures that the HRR path aligns with ECH acceptance/rejection.\nThat doesn't address the problem. This is about complexity for the client, not the server. Yes, the server needs to enforce consistency between the two modes, but it is easy for it to do this. 
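The point about computing two transcripts in parallel until the ServerHello reveals ECH acceptance can be sketched as simple bookkeeping. This is only an outline: it uses SHA-256 as a placeholder and omits the TLS 1.3 message_hash substitution that a real transcript would apply after HelloRetryRequest:

    import hashlib

    class DualTranscript:
        # Track both candidate transcripts until ECH acceptance is known.
        def __init__(self, client_hello_outer: bytes, client_hello_inner: bytes):
            self.outer = hashlib.sha256(client_hello_outer)
            self.inner = hashlib.sha256(client_hello_inner)

        def update_both(self, handshake_msg: bytes) -> None:
            # Messages received before the acceptance signal (for example a
            # HelloRetryRequest) are fed to both candidates.
            self.outer.update(handshake_msg)
            self.inner.update(handshake_msg)

        def select(self, ech_accepted: bool) -> bytes:
            # Once the ServerHello shows the result, keep only one transcript.
            return (self.inner if ech_accepted else self.outer).digest()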
Imagine you're the client and your CHOuter has: keyshares = {X25519} supportedgroups = {X25519, P-256, P-384} ciphersuites = {TLSAES128GCMSHA256, TLSAES256GCMSHA384} And your CHInner has: keyshares = {P-256} supportedgroups = {X25519, P-256, P-384} ciphersuites = {TLSAES256GCMSHA384, TLSCHACHA20POLY1305SHA256} Now consider how you have to respond to each of these HRRs. Remember that at the time you process the HRR, you don't know if ECH was accepted. ciphersuite = TLSAES256GCMSHA384; keyshare = P-384 ciphersuite = TLSAES128CCMSHA256; keyshare = P-521 ciphersuite = TLSCHACHA20POLY1305SHA256; keyshare = X25519 ciphersuite = TLSAES128GCMSHA256; keyshare = P-256 ciphersuite = TLSCHACHA20POLY1305SHA256; keyshare = X25519 1 is valid for both ClientHellos. This is the easy case and you can compute new CHInner and CHOuter values. 2 is valid for neither ClientHello. If you can detect this, you can error immediately. 3 is valid for neither ClientHello, but for messy reasons. The key share is valid for CHOuter but the cipher suite is not. The cipher suite is valid for CHInner but the key share is not. (Recall that a key share in HRR is allowed if it is in CH.supportedgroups and not CH.keyshares. If it were in CH.keyshares, you shouldn't have sent HRR.) 4 is valid for only CHOuter, so you can't error. It is not possible to compute a CHInner, so I guess you drop the extension? But you still need to make a note to raise an error later if you see SH which claims it did accept ECH. 5 is valid for only CHInner, so you can't error. It is not possible to compute a CHOuter, but you have to, so I guess you send something garbage? But you still need to make a note to raise an error if you see SH which claims it rejected ECH. This is a huge mess. The client can avoid this mess by always matching keyshares, supportedgroups, and cipher_suites between CHInner and CHOuter. That means it can process HRR without knowing which CH to use. It is not obvious in the spec that you should do this, and the spec allows you to not do this.\nI see, I wasn't thinking about client complexity. Thanks for laying out the various edge cases. I would favor being stricter about how the keyshares, supportedgroups, and ciphersuites are chosen. At the very least, we should guide implementations towards ensuring they are the same in the CHInner and CHOuter. An alternative would be to add an ECH signal to HRR. We can't do the same trick we did with the URL, as this would stick out. But maybe there's another way? I thought of using HRR.sessionid, but that could get messy.\nI think there is still ambiguity in the design about what CH is used by the client-facing server in its own transcript when forwarding CHInner to the backend server. Or else the combination (ECH accepted before HRR, ECH rejected after HRR) can be explicitly forbidden, but I don't see why/how.\nIncidentally, this is addressed by URL This adds the following to client-facing server behavior: after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If either of these conditions occurs, then the client-facing server MUST abort the handshake with an \"illegal_parameter\" alert. It's not hard for the client-facing server to enforce this. It just needs to remember if an HRR was triggered and whether ECH was offered in in the first CH.\nOk that will do it. 
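The case analysis above ends with the strategy the thread converges on: keep the HRR-sensitive preferences identical in ClientHelloInner and ClientHelloOuter so the client can process a HelloRetryRequest without yet knowing whether ECH was accepted. A sketch of that client-side check, using plain dictionaries as stand-ins for parsed ClientHellos and the underscored standard names for the values quoted above:

    HRR_SENSITIVE_FIELDS = ("key_shares", "supported_groups", "cipher_suites")

    def hrr_preferences_match(ch_inner: dict, ch_outer: dict) -> bool:
        # True if both ClientHellos would react identically to any HRR.
        return all(ch_inner.get(f) == ch_outer.get(f) for f in HRR_SENSITIVE_FIELDS)

    ch_outer = {"key_shares": ["X25519"],
                "supported_groups": ["X25519", "P-256", "P-384"],
                "cipher_suites": ["TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384"]}
    ch_inner = {"key_shares": ["P-256"],
                "supported_groups": ["X25519", "P-256", "P-384"],
                "cipher_suites": ["TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256"]}

    print(hrr_preferences_match(ch_inner, ch_outer))  # False: this is the messy case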
Assuming \"ECH not offered\" is same as \"ECH not accepted\", due to greasing.\nWith merged, I wonder if the ESNIKeys struct should just be renamed to something not specific to ESNI, but something more like \"generic TLS config in DNS\" so that it can be more cleanly reused by unrelated features. \"ServerConfiguration\" comes to mind from the early TLS 1.3 days.\nIf we were to re-use these keys for other purposes, that seems fine. However, I'm not sure we want to do that. Semi-static keys, for example, should ideally be separate.\nThe keys yes, but I imagine the extensions field can hold \"arbitrary\" data, so while the keys would be ESNI-specific, you could also include extensions that have nothing to do with ESNI.\nThinking about this some more, this would also require changing the \"_esni.\" prefix to something else. In any case I can see how this might be out of scope for this spec. In the end it's a matter of deciding whether future TLS extensions that need similar DNS records, should be able to reuse the same structure (so no additional TXT record with its own prefix would be required), or if they should define their own structure (and have a separate TXT record). I'm happy to close this if it goes too much out of scope.\nRight, I think it's verging on being out of scope. What do others think? NAME NAME\nMy preference goes to keeping the name as-is, considering the fact that the role of ESNIKeys is (at least for the moment) to negotiate properties between the client and the fronting server (not the hidden server), and that the only property we need to negotiate between the two is the information necessary for ESNI protection. It is true that the DNS record can covey properties related to the hidden server (as you know, I've argued for using it to carry the server certificate chain). But IMO that's a change of concept, and I prefer keeping the concept simple for the time being.\nOk.", "new_text": "receipt of a second ClientHello message with a ClientECH value, servers set up their HPKE context and decrypt ClientECH as follows: If the client offered ECH in the first ClientHello, then it MUST offer ECH in the second. Likewise, if the client did not offer ECH in the first ClientHello, then it MUST NOT not offer ECH in the second. [[OPEN ISSUE: Should we be using the PSK input or the info input? On the one hand, the requirements on info seem weaker, but maybe"} {"id": "q-en-draft-ietf-tls-esni-cb6f3665c13a427d50195b815ecb45a81055e3de22ef25d4d42ac0257d4e8cb8", "old_text": "negotiating ECH, including servers which do not implement this specification. 7.2. When the client-facing server accepts ECH, it forwards the", "comments": "It is an error for the client to offer ECH after HRR but not before. Likewise, it is an error for the client to offer ECH before HRR but not after. This change ensures the server aborts in this case.\nSuppose we hit one of the messy HRR cases (URL), where the HRR is valid for the outer ClientHello, but not the inner ClientHello. The client then cannot construct a valid inner ClientHello. How would that square with this text? Are you envisioning the client just generates a random ECH extension, or...?\nI don't think this is a new problem. I see the PR as clarifying existing behavior. Can you please propose text for the HRR nightmare in ? :-)\nAh, you're right. Sorry, I misread this PR as introducing this requirement. In principle, we only need the server to require ECH in CH2 if it decided to accept ECH on CH1. 
That would allow the client to not send ECH the second time, but maybe we don't want to do that, depending on what we do for ... Do we know what we want to do for ? Simplest seems to be saying clients MUST align them. (Though how does that work with later HRR-sensitive extensions? Are funny ECH-adding proxies a thing?) Or maybe it's just a SHOULD? Or maybe we come up with some other protocol trick so ECH acceptance is visible at HRR (which would probably reopen the don't-stick-out can of worms yet again). Of these, only the MUST avoids collision with this PR's (existing) client requirement.\nYeah, that's my preference. This logic is already complicated enough. The primary pain point I see is enumerating (and prescribing) what is a HRR-sensitive field.\nSame, with that pain point being the big question in my head. Whether an HRR is valid for a ClientHello isn't an implementation decision, so in theory it's enumerable. But more extensions may come in later. And in the other direction, even a very strong SHOULD means we need to pick a client behavior for the weird case and allow for it in server behavior.\nRegardless of where we land on , this PR seems appropriate to me.\nIn a normal ECHO handshake with HelloRetryRequest there are 4 ClientHellos sent: ClientHelloOuter1, ClientHelloInner1, ClientHelloOuter2, ClientHelloInner2. Encryption of ClientHelloInner2 is bound to echohrrkey so the server must have the ECHOConfig used for ClientHelloInner1 in order to process ECHO the second time. And by the time the client receives encrypted handshake traffic it is not useful to signal if ClientHelloInner1 was successfully decrypted, what matters is to know if ECHO is globally successful or not. So one aspect that I find not explained enough is whether the server replying with HRR should use ClientHelloInner1 in the transcript, or instead always takes ClientHelloOuter1 (data on the wire) regardless of the first decryption status. The ECHO status accept/reject can be conveyed based on distinction ClientHello2 inner/outer only. This point impacts the number of transcripts the client may have to try, i.e. can be sure the combination (ClientHelloInner1, ClientHelloOuter2) is never valid.\nHi NAME the spec has changed quite a bit since this question was first posed. In particular, we've landed , which changes the server behavior so that it provides an explicit signal of ECH acceptance in its SH. Hence, no more trial decryption. I'm wondering if your question still applies? I believe what to do in case of HRR is well-specified at this point. There is no such ECH accdeptance signal provided in the HRR, so to determine if ECH was accepted, the client must wait until the SH. Until that point, the client needs to compute two transcripts in parallel: one assuming the ClientHelloOuter was used, and another assuming the ClientHelloInner was used.\nI don't think resolves this. If the client has different HRR-relevant preferences (key share and cipher suite) between CHInner and CHOuter, I believe HRR is still a mess because the HRR message doesn't contain a signal for whether ECH was accepted. Though I suppose it is a little less complex now that you don't need to go as far as trial decrypting. Different HRR-relevant preferences means the HRR may be good for one ClientHello but not the other, yet the client needs to manage that state and defer the actual error-handling to when ECH acceptance is known. 
An option where the HRR message included an ECH acceptance signal would close this, but then we have sticking out woes. The draft touches on this, but somewhat vaguely. URL We've been intending that our implementation would always match inner and outer preferences for HRR-sensitive fields, to avoid this case. But having this odd complexity cliff hidden in the spec is poor. I'd advocate we either require or at least strongly recommend clients do this.\nsuggests to add the following text to the HRR section: after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If the client-facing server accepts ECH for the first ClientHello but not the second, or it accepts ECH for the second ClientHello but not the first, then it MUST abort the handshake with an \"illegal_parameter\" alert. I think this solves the problem, at least partially. It ensures that the HRR path aligns with ECH acceptance/rejection.\nThat doesn't address the problem. This is about complexity for the client, not the server. Yes, the server needs to enforce consistency between the two modes, but it is easy for it to do this. Imagine you're the client and your CHOuter has: keyshares = {X25519} supportedgroups = {X25519, P-256, P-384} ciphersuites = {TLSAES128GCMSHA256, TLSAES256GCMSHA384} And your CHInner has: keyshares = {P-256} supportedgroups = {X25519, P-256, P-384} ciphersuites = {TLSAES256GCMSHA384, TLSCHACHA20POLY1305SHA256} Now consider how you have to respond to each of these HRRs. Remember that at the time you process the HRR, you don't know if ECH was accepted. ciphersuite = TLSAES256GCMSHA384; keyshare = P-384 ciphersuite = TLSAES128CCMSHA256; keyshare = P-521 ciphersuite = TLSCHACHA20POLY1305SHA256; keyshare = X25519 ciphersuite = TLSAES128GCMSHA256; keyshare = P-256 ciphersuite = TLSCHACHA20POLY1305SHA256; keyshare = X25519 1 is valid for both ClientHellos. This is the easy case and you can compute new CHInner and CHOuter values. 2 is valid for neither ClientHello. If you can detect this, you can error immediately. 3 is valid for neither ClientHello, but for messy reasons. The key share is valid for CHOuter but the cipher suite is not. The cipher suite is valid for CHInner but the key share is not. (Recall that a key share in HRR is allowed if it is in CH.supportedgroups and not CH.keyshares. If it were in CH.keyshares, you shouldn't have sent HRR.) 4 is valid for only CHOuter, so you can't error. It is not possible to compute a CHInner, so I guess you drop the extension? But you still need to make a note to raise an error later if you see SH which claims it did accept ECH. 5 is valid for only CHInner, so you can't error. It is not possible to compute a CHOuter, but you have to, so I guess you send something garbage? But you still need to make a note to raise an error if you see SH which claims it rejected ECH. This is a huge mess. The client can avoid this mess by always matching keyshares, supportedgroups, and cipher_suites between CHInner and CHOuter. That means it can process HRR without knowing which CH to use. It is not obvious in the spec that you should do this, and the spec allows you to not do this.\nI see, I wasn't thinking about client complexity. Thanks for laying out the various edge cases. I would favor being stricter about how the keyshares, supportedgroups, and ciphersuites are chosen. At the very least, we should guide implementations towards ensuring they are the same in the CHInner and CHOuter. An alternative would be to add an ECH signal to HRR. 
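The server-side half of this is cheap, as the later comments note: the client-facing server keeps one bit across the HelloRetryRequest (whether ECH was offered in the first ClientHello) and aborts with an illegal_parameter alert on a mismatch. A minimal sketch with a stand-in abort exception:

    class HandshakeAbort(Exception):
        pass  # stand-in for sending a TLS alert and tearing down the connection

    def check_ech_offer_consistency(offered_before_hrr: bool, offered_after_hrr: bool) -> None:
        # It is an error to offer ECH before the HRR but not after, or
        # after the HRR but not before.
        if offered_before_hrr != offered_after_hrr:
            raise HandshakeAbort("illegal_parameter")

    check_ech_offer_consistency(True, True)     # consistent: no error
    # check_ech_offer_consistency(True, False)  # would raise HandshakeAbort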
We can't do the same trick we did with the URL, as this would stick out. But maybe there's another way? I thought of using HRR.sessionid, but that could get messy.\nI think there is still ambiguity in the design about what CH is used by the client-facing server in its own transcript when forwarding CHInner to the backend server. Or else the combination (ECH accepted before HRR, ECH rejected after HRR) can be explicitly forbidden, but I don't see why/how.\nIncidentally, this is addressed by URL This adds the following to client-facing server behavior: after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If either of these conditions occurs, then the client-facing server MUST abort the handshake with an \"illegal_parameter\" alert. It's not hard for the client-facing server to enforce this. It just needs to remember if an HRR was triggered and whether ECH was offered in in the first CH.\nOk that will do it. Assuming \"ECH not offered\" is same as \"ECH not accepted\", due to greasing.\nWith merged, I wonder if the ESNIKeys struct should just be renamed to something not specific to ESNI, but something more like \"generic TLS config in DNS\" so that it can be more cleanly reused by unrelated features. \"ServerConfiguration\" comes to mind from the early TLS 1.3 days.\nIf we were to re-use these keys for other purposes, that seems fine. However, I'm not sure we want to do that. Semi-static keys, for example, should ideally be separate.\nThe keys yes, but I imagine the extensions field can hold \"arbitrary\" data, so while the keys would be ESNI-specific, you could also include extensions that have nothing to do with ESNI.\nThinking about this some more, this would also require changing the \"_esni.\" prefix to something else. In any case I can see how this might be out of scope for this spec. In the end it's a matter of deciding whether future TLS extensions that need similar DNS records, should be able to reuse the same structure (so no additional TXT record with its own prefix would be required), or if they should define their own structure (and have a separate TXT record). I'm happy to close this if it goes too much out of scope.\nRight, I think it's verging on being out of scope. What do others think? NAME NAME\nMy preference goes to keeping the name as-is, considering the fact that the role of ESNIKeys is (at least for the moment) to negotiate properties between the client and the fronting server (not the hidden server), and that the only property we need to negotiate between the two is the information necessary for ESNI protection. It is true that the DNS record can covey properties related to the hidden server (as you know, I've argued for using it to carry the server certificate chain). But IMO that's a change of concept, and I prefer keeping the concept simple for the time being.\nOk.", "new_text": "negotiating ECH, including servers which do not implement this specification. 7.1.1. It is an error for the client to offer ECH before the HelloRetryRequest but not after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If either of these conditions occurs, then the client-facing server MUST abort the handshake with an \"illegal_parameter\" alert. 7.2. 
When the client-facing server accepts ECH, it forwards the"} {"id": "q-en-draft-ietf-tls-esni-79cbf1589463581d3c421eaf77480a48703da5b86509f985299adbb7c1a811af", "old_text": "If the server sends an \"encrypted_client_hello\" extension, the client MUST check the extension syntactically and abort the connection with a \"decode_error\" alert if it is invalid. Offering a GREASE extension is not considered offering an encrypted ClientHello for purposes of requirements in client-behavior. In", "comments": "The intent was that the server always sends the retry keys (it cannot distinguish stale keys from GREASE when the client is actually trying to connect to the public name), while the client just ignores them, but I did a poor job of describing this, so add a key sentence. This addresses .\n(EDITED.) To recap the discussion on , your point is that the client-facing server needn't attempt to distinguish GREASE from non-GREASE based on outer SNI, because if the outer SNI != public name, then the client knows to ignore the retry config and continue as usual. I think this is weird because if rejection was the intent, then the client is supposed to abort with \"ech_required\". So the server only knows that non-GREASE was intended once it receives this signal.\nI mean, yeah, that's how this was designed and is what we merged to the spec. :-) This formulation, in particular, does not break if the client tries to connect to the public name (it's a perfectly valid URL) without knowing the ECH config (maybe it the HTTPS record didn't get through Do53) but still sends GREASE (because the intent is that every client can GREASE). It seems to me breaks this. Do you have an alternate formulation which keeps this working? Connecting to the public name is especially important if the public name is a non-throwaway domain. The only requirement on the public name is that it's some name which knows about and serves the client-facing server's config. (I.e. this particular backend server is the same entity as the client-facing server.) doesn't relax this requirement, since the public name continues to be the entity trusted to replace, and possibly disable, ECH on config mismatch.\nI don't see this as an issue. If the client sends GREASE, it doesn't expect to use ECH, so it won't abort. The text here only requires that the client check the extension if present, and ignore it if absent.\nI don't have an alternative formulation. I was evnisioning the public name only being verified on the ECH rejection path, or when neither real nor dummy ECH was offered to the client-facing server. I'm fine with rejection not being known until \"ech_required\". I support this PR as the resolution to .\nNAME and I were sitting down and thinking and we realized the following scenario can happen: URL is on URL, which supports ECH URL has an HTTPS record, an A record, AAAA record pointing at the IP for URL A user of a browsers latest release that supports ECH and greases the extension is using a resolver that doesn't resolve HTTPS records This results in an ECH request being made with URL, and a rejection of the ECH. So now the question is what the client should do and what should servers expect? Try again, with the outer SNI the one matching the one in the new config Use the connection authoritatively for URL since the connection has already revealed the issue.\nThe intention was the latter, but the text could probably be clearer. I don't think there's much point in trying again since the name has already been revealed. 
See URL\nNAME is this still an issue now that is merged?\nI think the client behavior remains equally unspecified. In particular the lengthy argument NAME and I had about server enforcement of the client behavior I think isn't changed.\nCare to file a PR, presuming the latter behavior is intended?\nA proposed resolution is here: .\nNAME a proposed fix has been merged. See .\nI think this text works.", "new_text": "If the server sends an \"encrypted_client_hello\" extension, the client MUST check the extension syntactically and abort the connection with a \"decode_error\" alert if it is invalid. It otherwise ignores the extension and MUST NOT use the retry keys. Offering a GREASE extension is not considered offering an encrypted ClientHello for purposes of requirements in client-behavior. In"} {"id": "q-en-draft-ietf-tls-esni-0ba7625b962dc6393ecda3646532580fd0dd09c4d578c44f4a01100d0ecd4115", "old_text": "10.8.6. This design permits servers operating in Split Mode to forward connections directly to backend origin servers, thereby avoiding unnecessary MiTM attacks. 10.8.7. Assuming ECH records retrieved from DNS are authenticated, e.g., via DNSSEC or fetched from a trusted Recursive Resolver, spoofing a server operating in Split Mode is not possible. See plaintext-dns for more details regarding plaintext DNS. Authenticating the ECHConfigs structure naturally authenticates the included public name. This also authenticates any retry signals from the server because the client validates the server certificate against the public name before retrying. 10.8.8. This design has no impact on application layer protocol negotiation. It may affect connection routing, server certificate selection, and client certificate verification. Thus, it is compatible with multiple protocols. 10.9.", "comments": "We use a mix of sentence case and title case in the document. The RFC style guide says to use title case. This PR aligns with that. URL Additionally, the ESNI criteria document has changed slightly since the comparison against criteria section was written. I've aligned the titles with the final wording in RFC8744. RFC8744 has also merged \"Proper security context\" and \"Split server spoofing\", so I've done the same here. \"Support Multiple Protocols\" also now talks about transport protocols and ALPN, so I've added a few words about that.", "new_text": "10.8.6. This design permits servers operating in Split Mode to forward connections directly to backend origin servers. The client authenticates the identity of the backend origin server, thereby avoiding unnecessary MiTM attacks. Conversely, assuming ECH records retrieved from DNS are authenticated, e.g., via DNSSEC or fetched from a trusted Recursive Resolver, spoofing a client-facing server operating in Split Mode is not possible. See plaintext-dns for more details regarding plaintext DNS. Authenticating the ECHConfigs structure naturally authenticates the included public name. This also authenticates any retry signals from the client-facing server because the client validates the server certificate against the public name before retrying. 10.8.7. This design has no impact on application layer protocol negotiation. It may affect connection routing, server certificate selection, and client certificate verification. Thus, it is compatible with multiple application and transport protocols. By encrypting the entire ClientHello, this design additionally supports encrypting the ALPN extension. 
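For the GREASE behavior captured in this record, the client's handling of a server-sent encrypted_client_hello extension reduces to a syntax check followed by ignoring the payload. A sketch under stated assumptions: parse_ech_configs stands for whatever ECHConfig decoder the stack uses, and HandshakeAbort for raising the alert.

    class HandshakeAbort(Exception):
        pass  # stand-in for sending a TLS alert and closing the connection

    def handle_server_ech_extension(ext_body, parse_ech_configs, sent_real_ech):
        try:
            retry_configs = parse_ech_configs(ext_body)
        except ValueError:
            # Syntactically invalid extension: abort with decode_error.
            raise HandshakeAbort("decode_error")
        if not sent_real_ech:
            # GREASE only: the extension had to parse, but the retry keys
            # are ignored and never used.
            return None
        # A real ECH offer was rejected: the caller may retry with these configs.
        return retry_configs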
10.9."} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "\"encrypted_client_hello\" extension and proceeds with the handshake as usual, per RFC8446, Section 4.1.2. If it supports ECH but does not recognize the configuration specified by the client, then it ignores the extension and terminates the handshake using the ClientHelloOuter. This is referred to as \"ECH rejection\". When ECH is rejected, the server sends an acceptable ECH configuration in its EncryptedExtensions message. If it supports ECH and recognizes the configuration, then it attempts to decrypt the ClientHelloInner. It aborts the handshake if decryption fails; otherwise it forwards the ClientHelloInner to the backend, who terminates the connection. This is referred to as \"ECH acceptance\". Upon receiving the server's response, the client determines whether or not ECH was accepted and proceeds with the handshake accordingly.", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "\"encrypted_client_hello\" extension and proceeds with the handshake as usual, per RFC8446, Section 4.1.2. If it supports ECH but cannot decrypt the extension, then it terminates the handshake using the ClientHelloOuter. This is referred to as \"ECH rejection\". When ECH is rejected, the server sends an acceptable ECH configuration in its EncryptedExtensions message. If it supports ECH and decrypts the extension, it forwards the ClientHelloInner to the backend, who terminates the connection. This is referred to as \"ECH acceptance\". Upon receiving the server's response, the client determines whether or not ECH was accepted and proceeds with the handshake accordingly."} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "\"ECHConfig.cipher_suites\" list. The configuration identifier, equal to \"Expand(Extract(\"\", config), \"tls13 ech config id\", Nh)\", where \"config\" is the \"ECHConfig\" structure and \"Extract\", \"Expand\", and \"Nh\" are as specified by the cipher suite KDF. (Passing the literal \"\"\"\" as the salt is interpreted by \"Extract\" as no salt being provided.)", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. 
Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "\"ECHConfig.cipher_suites\" list. The configuration identifier, equal to \"Expand(Extract(\"\", config), \"tls ech config id\", Nh)\", where \"config\" is the \"ECHConfig\" structure and \"Extract\", \"Expand\", and \"Nh\" are as specified by the cipher suite KDF. (Passing the literal \"\"\"\" as the salt is interpreted by \"Extract\" as no salt being provided.)"} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "Note that the HPKE functions Deserialize and SetupBaseS are those which match \"ECHConfig.kem_id\" and the AEAD/KDF used with \"context\" are those which match the client's chosen preference from \"ECHConfig.cipher_suites\". The value of the \"encrypted_client_hello\" extension in the ClientHelloOuter is a \"ClientECH\" with the following values:", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "Note that the HPKE functions Deserialize and SetupBaseS are those which match \"ECHConfig.kem_id\" and the AEAD/KDF used with \"context\" are those which match the client's chosen preference from \"ECHConfig.cipher_suites\". The \"info\" parameter to SetupBaseS is the concatenation of \"tls ech\", a zero byte, and the serialized ECHConfig. The value of the \"encrypted_client_hello\" extension in the ClientHelloOuter is a \"ClientECH\" with the following values:"} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "\"payload\", as computed above. 6.2. This section describes a deterministic padding mechanism based on the", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) 
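The configuration identifier in the preceding record is a plain HKDF Extract-then-Expand over the serialized ECHConfig. The sketch below uses HMAC-SHA256 and the updated label from this record's new text; a real implementation would use whichever KDF the selected cipher suite specifies:

    import hashlib
    import hmac

    HASH = hashlib.sha256
    NH = HASH().digest_size  # output length of the KDF hash

    def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
        # An empty salt is treated as "no salt provided", i.e. NH zero bytes.
        return hmac.new(salt or b"\x00" * NH, ikm, HASH).digest()

    def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
            out += block
            counter += 1
        return out[:length]

    def ech_config_id(serialized_config: bytes) -> bytes:
        # config_id = Expand(Extract("", config), "tls ech config id", Nh)
        return hkdf_expand(hkdf_extract(b"", serialized_config), b"tls ech config id", NH)

    print(ech_config_id(b"\x00" * 10).hex())  # example with dummy config bytes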
Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "\"payload\", as computed above. If optional configuration identifiers (see optional-configs) are used, the \"config_id\" field MAY be empty or randomly generated. Unless specified by the application using (D)TLS or externally configured on both sides, implementations MUST compute the field as specified in encrypted-client-hello. 6.2. This section describes a deterministic padding mechanism based on the"} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "of the first ClientHelloInner via the derived ech_hrr_key by modifying HPKE setup as follows: Clients then encrypt the second ClientHelloInner using this new HPKE context. In doing so, the encrypted value is also authenticated by ech_hrr_key. The rationale for this is described in flow-hrr-hijack. Client-facing servers perform the corresponding process when decrypting second ClientHelloInner messages. In particular, upon", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. 
It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "servers set up their HPKE context and decrypt ClientECH as follows: ClientHelloOuterAAD is computed from the second ClientHelloOuter as described in authenticating-outer. The \"info\" parameter to SetupPSKR is computed as above. If the client offered ECH in the first ClientHello, then it MUST offer ECH in the second. Likewise, if the client did not offer ECH"} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "extension will result in a new ClientHello to process, so even the client's TLS version preferences may have changed. The ClientECH value is said to match a known ECHConfig if there exists an ECHConfig that can be used to successfully decrypt ClientECH.payload. This matching procedure should be done using one of the following two checks: Compare ClientECH.config_id against identifiers of known ECHConfig and choose the one that matches. Use trial decryption of ClientECH.payload with known ECHConfig and choose the one that succeeds. Some uses of ECH, such as local discovery mode, may omit the ClientECH.config_id since it can be used as a tracking vector. In such cases, trial decryption should be used for matching ClientECH to known ECHConfig. Unless specified by the application using (D)TLS or externally configured on both sides, implementations MUST use the first method. If the ClientECH value does not match any known ECHConfig structure, it MUST ignore the extension and proceed with the connection, with the following added behavior: It MUST include the \"encrypted_client_hello\" extension in its EncryptedExtensions with the \"retry_configs\" field set to one or more ECHConfig structures with up-to-date keys. Servers MAY supply multiple ECHConfig values of different versions. This allows a server to support multiple versions at once. If offered, the server MUST ignore the \"pre_shared_key\" extension in the ClientHello. Note that an unrecognized ClientECH.config_id value may be a GREASE ECH extension (see grease-extensions), so it is necessary for servers to proceed with the connection and rely on the client to abort if ECH was required. In particular, the unrecognized value alone does not indicate a misconfigured ECH advertisement (misconfiguration). Instead, servers can measure occurrences of the \"ech_required\" alert to detect this case. Once a suitable ECHConfig is found, the server verifies that the ECHConfig supports the cipher suite indicated by ClientECH.cipher_suite and that the version of ECH indicated by the client matches the ECHConfig.version. If not, then the server MUST abort with an \"illegal_parameter\" alert. Otherwise, the server decrypts ClientECH.payload, using the private key skR corresponding to ECHConfig, as follows: ClientHelloOuterAAD is computed from ClientHelloOuter as described in authenticating-outer. If decryption fails, the server MUST abort the connection with a \"decrypt_error\" alert. 
Otherwise, the server reconstructs ClientHelloInner from EncodedClientHelloInner, as described in encoding-inner. Upon determining the ClientHelloInner, the client-facing server then forwards the ClientHelloInner to the appropriate backend server,", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "extension will result in a new ClientHello to process, so even the client's TLS version preferences may have changed. First, the server collects a set of candidate ECHConfigs. This set is determined by one of the two following methods: Compare ClientECH.config_id against identifiers of known ECHConfigs and select the one that matches, if any, as a candidate. Collect all known ECHConfigs as candidates, with trial decryption below determining the final selection. Some uses of ECH, such as local discovery mode, may omit the ClientECH.config_id since it can be used as a tracking vector. In such cases, the second method should be used for matching ClientECH to known ECHConfig. See optional-configs. Unless specified by the application using (D)TLS or externally configured on both sides, implementations MUST use the first method. The server then iterates over all candidate ECHConfigs, attempting to decrypt the \"encrypted_client_hello\" extension: The server verifies that the ECHConfig supports the cipher suite indicated by the ClientECH.cipher_suite and that the version of ECH indicated by the client matches the ECHConfig.version. If not, the server continues to the next candidate ECHConfig. Next, the server decrypts ClientECH.payload, using the private key skR corresponding to ECHConfig, as follows: ClientHelloOuterAAD is computed from ClientHelloOuter as described in authenticating-outer. The \"info\" parameter to SetupBaseR is the concatenation of \"tls ech\", a zero byte, and the serialized ECHConfig. If decryption fails, the server continues to the next candidate ECHConfig. Otherwise, the server reconstructs ClientHelloInner from EncodedClientHelloInner, as described in encoding-inner. It then stops considering candidate ECHConfigs. Upon determining the ClientHelloInner, the client-facing server then forwards the ClientHelloInner to the appropriate backend server,"} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "ClientHelloInner. The client-facing server forwards all other TLS messages between the client and backend server unmodified. 7.1.1. In case a HelloRetryRequest (HRR) is sent, the client-facing server", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. 
This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "ClientHelloInner. The client-facing server forwards all other TLS messages between the client and backend server unmodified. Otherwise, if all candidate ECHConfigs fail to decrypt the extension, the client-facing server MUST ignore the extension and proceed with the connection using ClientHelloOuter. This connection proceeds as usual, except the server MUST include the \"encrypted_client_hello\" extension in its EncryptedExtensions with the \"retry_configs\" field set to one or more ECHConfig structures with up-to-date keys. Servers MAY supply multiple ECHConfig values of different versions. This allows a server to support multiple versions at once. Note that decryption failure could indicate a GREASE ECH extension (see grease-extensions), so it is necessary for servers to proceed with the connection and rely on the client to abort if ECH was required. In particular, the unrecognized value alone does not indicate a misconfigured ECH advertisement (misconfiguration). Instead, servers can measure occurrences of the \"ech_required\" alert to detect this case. 7.1.1. In case a HelloRetryRequest (HRR) is sent, the client-facing server"} {"id": "q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e", "old_text": "extension but CH1 does not, then the server MUST abort the handshake with an \"illegal_parameter\" alert. Suppose the \"encrypted_client_hello\" extension is sent in both CH1 and CH2, If the configuration identifier (see ech-configuration) differs between CH1 and CH2, then the server MUST abort with an \"illegal_parameter\" alert. [[OPEN ISSUE: If the client-facing server implements stateless HRR, it has no way to send a cookie, short of as-yet-unspecified", "comments": "The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypterror to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking out attack for free. It also makes easier because GREASE configid collisions with real configids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the configid, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE. 
Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision.", "new_text": "extension but CH1 does not, then the server MUST abort the handshake with an \"illegal_parameter\" alert. If the \"encrypted_client_hello\" extension is sent in CH2, the server follows the procedure in client-facing-server to decrypt the extension, but it uses the previously-selected ECHConfig as the set of candidate ECHConfigs. If decryption fails, the server aborts the connection with a \"decrypt_error\" alert rather than continuing the handshake with the second ClientHelloOuter. [[OPEN ISSUE: If the client-facing server implements stateless HRR, it has no way to send a cookie, short of as-yet-unspecified"} {"id": "q-en-draft-ietf-tls-esni-8824ed4577342af9ee46b6aa8efcf4649bf095eb199dada318802f133a0db4af", "old_text": "The non-empty name of the client-facing server, i.e., the entity trusted to update the ECH configuration. This is used to correct misconfigured clients, as described in handle-server-response. A list of extensions that the client must take into consideration when generating a ClientHello message. These are described below", "comments": "[Edit: deleted \"Fixes\" line for ]\nFrom , we also talked about IP addresses and such. Do we still want to do that? Whether the client validation is SHOULD or MUST (TBH I kinda prefer MUST, given the implications on private services... if it's a SHOULD, we should talk about why so implementations can make an informed decision), we do also need to say what values are valid for the server.\nOK, switched to a MUST. Added IP addresses as well. I'm calling out to for IPv4/IPv6 parsing. I'd like to avoid any ambiguities that might creep in if I were allowed to define the grammar, like allowing octal numbers in an IPv4 address, e.g. != . For hostname parsing, I'm borrowing from so we can parse IDNs.\nI think that is a mistake.\nWhy is that?\nI think IP addresses are a mistake because they can introduce parsing ambiguities (as pointed out above), and sometimes those cause problems (URL). Also, SNI is really limited to hostname (as NAME has pointed out before, referencing AGL's post), and having the two mismatch in their capabilities bothers me.\nAgreed that the capability mismatch between SNI and ECHConfig.publicname is odd. However, there is already a capability mismatch between ECHConfig.publicname and names allowable on server certificates. At least in theory, the server's certificate might not have a hostname. That being said, I don't have a strong opinion either way. If we do decide to disallow IP addresses, I think we'd still want the client to look for them so it can reject those ECHConfigs that use IP addresses. This is about stopping wacky values from bubbling up to the application.\nThe client sending the ECH or the server receiving it? Current TLS stacks don't seem to check, on either side. (At least OpenSSL and its major forks)\nI was referring to the client, but yeah, scratch that idea. If public_name can only be a hostname, the client should only validate that it looks like a hostname.\nI don't think the capability mismatch between SNI and ECHConfig.publicname is a fatal. We can move ECHConfig.publicname up an abstraction level. I think we should do this whether we allow IPs or not. The public name isn't \"put this in ClientHelloOuter.servername\" but \"the name you are trying to connect to in the outer handshake\". 
In particular, this encompasses both ClientHelloOuter.servername and certificate verification. To fill in ClientHelloOuter.servername, you use the rules in RFC6066: servername is the name you are trying to connect to, provided it's a DNS name. Under this model, it is perfectly coherent for \"the name you are trying to connect to\" to be an IP address, if we want to do that. If we wish to do that then, yeah, we'll need to spell out the syntax here. That syntax clearly does already exist (URLs, etc.), but not in TLS. I don't particular care whether we do it, though. How about this: Right now, IP addresses are implicitly not okay by way of CHOuter.servername = ECHConfig.publicname. So let's not introduce them in this PR and merely iron out the DNS name validation. This PR adds to, but does not We then rework the text to redefine public_name as above. This is a prerequisite to allowing IPs, but I think it's worth doing either way. We decide the IP question (interim?) and with the result. (Sorry, I think some of this confusion is my fault. I filed two related, but separate, discussion points in , which was confusing. My comment in URL was most in reaction to \"; that we shouldn't close it out without resolving both points.)\nI think I'd mostly be against supporting IP addresses in ECHConfig.publicname. I suspect including an IP address in URL would break some things. The comment below is from OpenSSL's ssl/statem/extensionssrvr.c: / Although the intent was for servername to be extensible, RFC 4366 was not clear about it; and so OpenSSL among other implementations, always and only allows a 'hostname' name types. RFC 6066 corrected the mistake but adding new name types is nevertheless no longer feasible, so act as if no other SNI types can exist, to simplify parsing. Also note that the RFC permits only one SNI value per type, i.e., we can only have a single hostname. / Separately, we could land ourselves with yet more confusion related to the address to which one ought send a ClientHello, (the address in the ECHConfig or in the URL or the A/AAAA of one of the names involved, and which to prefer if some things have DNSSEC etc etc.). Cheers, S.\nNAME Right, I think it's unambiguous, whatever we do with IP addresses, they won't go in ClientHelloOuter.servername. Any use of this, if we want it, would be for public name verification, not the servername value. See my comment above with how to think about public_name.\nSGTM, folks. Just dropped the parts about IP addresses.\nLooks like I was a little overzealous removing the IP address text. We still need to check that publicname is not an IPv4 address because dotted-decimal IPv4 addresses would slip through the cracks of hostname validation. This is bad because a malicious/misconfigured DNS would cause the ECH client to break 's rule on the contents of \"servername\": (Re-added this IPv4 check in the last commit.)\nI would rather make \"validate the host name\" be a SHOULD and remove all these requirements.\nECH implementations not validating that string risks surprising behavior for a lot of applications, and potentially more attack surface to private services than before. Have you looked at the discussion in issue , in particular: Sadly, the thing that makes that work (logic earlier in the application) doesn't apply here. See URL\nCan you explain your concerns in more detail? 
I think the application should be responsible for validation.\nThe application should never see the value of ECHConfig.public_name, right?\nSee , specifically URL and URL Well, there are several things going on here: First, different systems may differently divide the \"TLS\" part and the \"application\" part. The spec mostly captures behavior of the overall client system, because that's all we can usefully talk about. (For example, ECH specifies a retry flow that involves making a new connection. At the level of many TLS libraries, including OpenSSL + derivatives, the caller makes connections. So instead the TLS library can expose APIs and document the caller's responsibilities as part of enabling ECH. Other interfaces may be create their own connections and handle this internally.) So whether it's the application or TLS, at the implementation level, isn't a hard deciding factor. Second, with that said, yes, I expect that the majority of TLS application interfaces will treat the ECHConfigList as an opaque blob. That is a good division of responsibilities because the TLS half already needs to process individual ECHConfigs in the ECHConfigList and select one based on its internal capabilities. Having to thread through application-specific DNS validation seems like it'd be complex. (But this is just my view as an implementor. The spec merely describes the overall behavior of the client system. How you choose to layer it is an implementation decision.) Finally, there's the question of whether TLS as a spec should have opinions here or if it should punt to some upper layer spec profile, as we did with 0-RTT. I don't think that's warranted here. RFC6066 already decided what can go in server_name, so TLS already is plenty opinionated on the shape of strings meant to be DNS names. Thus, IMO, the simplest and most practical story is to have TLS handle this, rather than thread it up through all the layers.\nI am willing to accept that I am in the rough here.\nAs noted in URL, I don't think we should be putting IP addresses in .\nIn the interest of dealing with one problem at a time, this PR no longer attempts to spell out how to recognize IP addresses. Roughly: The client MUST validate as a LDH hostname. The client SHOULD exclude IP address literals, which may come in many strange forms, or else it may send a non-compliant \"servername\" extension. If we care to make excluding IP addresses a MUST, let's work out the details in another PR. Same goes for if we want to explicitly support IP address literals \u2014 we still need to validate them to protect server applications from receiving IP addresses in non-standard notations, e.g. octal.\nI think there are a couple things going on here: ECHConfig.publicname, as currently written, is described as what goes in CHOuter.servername. That wasn't right and we should have made it outer reference identity. I believe NAME is going to do a separate PR after this goes through. SNI or reference identity, we need to define what the type of the thing that goes in ECHConfig.publicname and what validation the client does. In particular, without validation, it gives the network more control over what goes into CHOuter.servername than would otherwise be the case. Normally, in a typical HTTPS client, TLS can just assume the hostname has passed whatever checks earlier in the stack (URL parsing, DNS lookup, etc.) and so checks in TLS are not load-bearing. The public name breaks this because the TLS layer is fabricating a hostname on its own. 
So we should have TLS validate it before running too far with it. This PR addresses (2).\nFollowing up on NAME comment, I was indeed planning on clarifying that ECHConfig.publicname is a reference identity, and planned on renaming this to something other than to match. Basically: may be either a name or IP address, validated according to the rules in this PR, and clients will use that identity when constructing the ClientHello per the normal rules. (If it's a name, put it in \"servername,\" else don't put it in that name.) I think we can probably land this PR as-is. Is everyone else OK with this plan?\nIt's a nit, but I don't think I've ever seen a use of the term \"identity\" that didn't eventually cause confusion. I think public_name was better TBH. Sorry if I missed the discussion but: Are one or more IPv4/IPv6 addresses allowed and if so with what syntax? If only 1 is allowed, why? (Syntax still needs a reference.) I don't know how an IP address here ought be handled if it differs from those in the SVCB address hints fields. (Not to mention the origin's A/AAAA records.) Lastly, I forget why we even want an IP address there at all. Thanks, S.\nI assumed there's at most one identity in this field, be it a name or address, but NAME can clarify. Also, I think the identity here only pertains to how the client-facing server is authenticated. If a server uses an IP-address certificate for the rejection path then it might specify the single IP address of that certificate. (Clients don't use SVCB hints for this, so they're orthogonal.)\nThere can be many SANs even if those are IP addresses. So I think you mean that you want exactly one IP address here and if one is here that MUST match one SAN iPaddress from the cert? Personally I don't think that's useful enough to support but maybe I missed why it was really good. TBH, I doubt implementers will notice that orthogonality. And those deploying even less would be my guess. If one is allowed put a single IP address in an ECHConfig and others in SVCB hints then someone needs to say what to do when those aren't the same. S.\nTrue, but I'm just commenting on parity with what's there right now. (ECHConfig.public_name is just one name.) We may change that later if people want it? I don't see a reason to, though. Yeah, that's what I'm thinking. Hmm, yeah, we may be able to clarify this, but given that we don't discuss anything about the SVCB record (beyond the config) currently, it may not be needed.\nCurrently, this PR explicitly states that the client SHOULD ignore the ECHConfig if its contains an IP address. Unfortunately, what constitutes a textual representation of an IPv4 address is pretty underspecified, so I didn't manage to come up with a foolproof validation algorithm, otherwise I would make it a MUST. If we want to allow IP addresses, let's figure it out in another PR.\nSo I don't think that we need to worry about IPv6 here. That will never match LDH. The IPv4 thing is unfortunate. I think that the risk is that the public name is passed to a certificate validation library that subsequently treats the name as an IP address (matching against an ipAddress SAN rather than dNSName SAN) when doing fallback process. If there is any potential for confusion about this string (or look-alikes if we consider the octal mess) then it might happen that there is variation in what certificates are considered OK. That leads to a reliance on the certification authorities being careful about issuance for names that look like IP addresses. We don't want that. 
Either way, I don't like \"SHOULD\" here. It retains the potential confusion. If one client component is sloppy and doesn't bother other components are then forced to contend with that choice. I think that I've come around to NAME viewpoint on this. It's reasonable to require a domain name for fallback in all cases, even where a server is identified by IP address alone. I think that means prohibiting IPv4 addresses, just to avoid the potential confusion. That is, if the name contains exactly 4 labels that are only digits, then the record is invalid. IPv6 addresses will look after themselves as they aren't valid LDH A-labels. Note: I don't know if there are rules that might allow 172.0.2.2045 to be registered, which is clearly not an IP address. I believe that the main protection we have against valid DNS names appearing to be IPv4 addresses is the gTLD registration rules (from URL Section 2.2.1.3.1 clause 1.2.1 this is currently assured as only a-z is permitted from the ASCII range).\nNAME can you drill into this? I must be misunderstanding you (and maybe the problem), but are you suggesting that strings which look like IP addresses might be parsed incorrectly by the thing validating certificates against such look-alike addresses? They also have to be careful about names that aren't actually domain names (\"DROP TABLES\" or whatever). How is this a substantially new requirement? Prohibiting IP addresses here seems overly restrictive. Given that it doesn't seem challenging to support, and does not, to my knowledge, introduce any new requirements for CAs or certificate validation APIs, I'd like to hear more reasons why we ought to definitely rule them out. And as NAME says, we should hash this out in a separate issue and PR. In the interest of making forward progress, I'm going to merge this PR as-is, and we can discuss more in .\nIn most cases, I don't anticipate problems: there will be one certificate validator used during fallback and that will make a single determination. IP address lookalikes in that case probably won't be a problem. However, in cases where there are multiple components involved, there might be IP address lookalikes that pass one validator but not another. That might be used to attack clients.\nGiven how IPv4 literals can get in some applications, I suspect the TLS library classifying and rejecting IPv4 literals in a way that ensures the calling application always interprets it as a DNS name is hopeless. I'm now inclined to say we should: Do and say publicname is exclusively DNS. If anyone adds IP in the future, the expectation will be a separate extension and thus not require any parsing to pick semantics. The exclusively-DNS publicname field is validated according to DNS rules and leave it at that. That at least avoids completely unbounded DNS control on the ClientHello. Concede that we can't reliably exclude IPv4 literals at this layer. Call this out and say that clients MUST NOT validate the string as IP addresses. If your verifier has separate APIs for configuring DNS vs IP names for subjectAltName, feel free to pass the string as a DNS input and move on. If your verifier's API takes a more URL-like hostname, you should use whatever logic you used to distinguish DNS vs. IP and fail the verification if you think the string is an IP. (Effectively that verifier believes \"DNS:1.2.3.4\" is never valid because it has no way to spell it.) 
This does mean the DNS can cause ClientHelloOuter to violate the RFC6066 prohibition on IPv4 literals, but since RFC6066 failed to specify what syntax it was prohibiting, the meaning of that rule is somewhat unclear.\nIn , there's been a lot of discussion around how and whether or not IP addresses should be allowed as a reference identity of the client-facing server. This issue tracks sorting this particular question out. NAME suggested in the case the reference identity is an IP address. (Whether or not this address is in the ECHConfig is separate.) NAME suggested . (Whether or not this allows the outer SNI to be empty is separate.) So there seems to be two questions we need to sort out, in relation to : Should the ECHConfig be allowed to supply IP addresses? And if so, how do client stacks validate them? Should clients be allowed to validate servers using IP addresses as a reference identity? (If yes, then the ClientHelloOuter.SNI must be allowed to be empty/omitted.) I think (2) should be 'yes', given that's currently possible today introduces no new complexity. I think it should be feasible to spell out the requirements to make (1) feasible, so I also think that should be 'yes'. What do others think?\nYes and yes. RFC 8738 using reverse-IP notation, which seems like a halfway-reasonable option. An \"empty\" public name seems a little more flexible (doesn't require servers to carry certificates for each other's IP addresses), and ought to be easy enough to support on clients.\nNAME to clarify this: You're suggesting that if ECHConfig is empty then the client should use the IP address of the client-facing server as the reference identity, right? That seems like a reasonable way around the language we might need to specify what is a valid address in ECHConfig. (If the reference identity is not empty, then it MUST be a name, otherwise it's the IP address, and clients should connect and validate based on that identity accordingly.)\nNAME Correct. NAME previously raised a concern that this could complicate client implementations: the TLS stack can no longer make use of a connected socket given solely on the ECHConfig and inner reference identity; it also needs the remote IP in some cases. This is true, but I think the remote IP is essentially always \"at hand\" so perhaps it can be passed in unconditionally without too much difficulty.\nI don't think that's true when the client is using a proxy, depending on how much of DNS is done on the client and how much is done on the proxy. If the client does the SVCB lookup itself but asks the proxy to look up TargetName, as the SVCB draft , the client won't have the remote IP of the client-facing server available. (Are there weird networks that rewrite addresses at the DNS level? That would also break with this idea.)\nTo try and sharpen NAME point: will the client always have access to the client-facing server IP? I don't see how this won't always be true, since the client connects to something, but maybe I'm misunderstanding the proxy scenario.\nNAME is correct about the behavior when using a SOCKS5 or HTTP CONNECT proxy in domain-oriented mode. The proxy performs A/AAAA resolution and TCP connection establishment, and does not provide this information back to the client, so the client doesn't know which server IP it is using. That does suggest that \"empty\" won't work.\nOh! I see. 
The proxy is the one establishing the transport connection.\nThere's probably at least three modes here, if not more: Proxy gets the origin DNS name and takes over the whole DNS lookup. Client doesn't do SVCB at all. The only way to salvage SVCB-based features is to invent a header for the proxy to tell the client what SVCB path it followed in the CONNECT response. Client does SVCB lookup and gives TargetName to the proxy. Client does the whole lookup and gives the IP address to the proxy. The one that breaks empty names is (2). Though, given that proxies today are often used to access internal services (i.e. without the proxy, you may not even be able to resolve the name), I expect (1) to be the default for existing schemes. Whereas (3) might make sense for other kinds of proxy use cases. (I'm actually not entirely sure when you'd want (2) despite the spec recommendation... maybe if you're relying on the proxy to rewrite IPs, but the proxy needs to a DNS name to do so?)\nThis is presumably not the right forum for this topic, but mode (2) is the one that gives you correct Happy Eyeballs, geo-DNS, etc.\nI seem to recall some DNS IPv6 transition thing that maybe has that property. S.\nYes, there are transition technologies that rewrite addresses. Clients get a v6 address because they only have v6, but the server address is v4. As for the general proxy problem, there is no value in any design that does not involve the client having the ECHConfig. It can't use the public IP (because it doesn't know it always), but it can still populate servername according to the value of publicname. I might have missed something there though. From David's , option 1 is a bit silly (it adds unnecessary latency), but it also gives you the best locality if your goal is to use the proxy egress as the client network location (option 2 as Ben only works if you care about the network location of the client, at which point you might as well not use a proxy). The only difference between 2 and 3 there is whether the proxy needs to perform its own name resolution, which might differ from the client (and cause problems as a result; but that's mostly just a case of insisting that clients don't choose option 2). I don't understand why it's so hard to require the use of a domain name. It shouldn't be \"\". Adding other forms of reference identity adds complexity to what is already a morass of horrifying complexity, so we should insist on justification for every addition. And I'm not seeing any concrete arguments in favour of the complex option. So my answer is No to both questions.\nI don't agree with Martin's characterization of the proxy behaviors, but I don't think this is the place to discuss that topic anyway. To me, the main advantage of supporting IP reference identities is for origins that are not using a large-scale hosting provider, and thus cannot rely on a widely shared . Domain names are typically more closely linked to an organization than IP addresses (especially with VM hosting), and more expensive to change. An IP address ensures that the client does not populate the outer SNI, and supports a configuration where the default certificate only contains an IP address SAN. This minimizes the information visible to an adversary unless they can also identify the service that runs on this IP address (e.g. by scanning the whole DNS). 
An alternative to IP address in would be a flag in the ECHConfig indicating that the specified reference identity is in the default certificate (which is implicitly true for IP reference identities). This would allow clients to omit the SNI, improving small services' privacy against a passive adversary, but active adversaries could fetch the default certificate to learn the .\nFor everyone's consideration, we now have as an alternate solution to this problem (thanks to NAME which does two things: (1) avoids overloading a single field for two types of values (names and addresses), and (2) punts IP address validation to a future change. While I sympathize with Martin's view, it seems overly restrictive to just prohibit IP addresses outright. This change allows us to sort that out in a future change, and should make everyone happy. Thoughts?\nRe , the idea is largely me going \"oops, I'm sorry for the mess\". :-) When I filed , I was mostly interested in the client validation. But most places with DNS strings have a semi-overlap with IP literal syntax, and RFC6066 already prohibits of IP literals. (Without declaring an actual syntax being prohibited!) So it seemed hard to resolve those questions without going one way or another on IPs. So I brought up that question too as a seemingly minor (hah) side question on the main issue. That side question has turned out to be way more messy than I anticipated. Since there isn't a strong need for IPs as public identities, but also unease to completely close that door, I propose we tweak the encoding so it's easy enough to add later (reuse the extension mechanism we're already putting in), but not actually bother in this draft.\nIsn't it the case that the extension scheme would allow us to add IP addresses later, even without ?\nThere's lots of different ways we can accomplish this, and it'll come down to personal taste. For example: Make publicname optional in the main ECHConfig (and waste the two byte length field), and allow an extension to specify the IP address. Move publicname to a mandatory extension (as 426 does), and allow future extensions to specify addresses override the name. etc. Given the bikeshed that's happening here, it seems prudent to at least separate these things (names and addresses). If we can agree on that, then we just need to decide where we put those bits. does the job one way. If someone wants to propose an alternative PR to put the bits somewhere else, please do so!\nIf we're really talking about a bikeshed, then the \"no change\" option works for me. It's fewer bits on the wire (by 4), it is less complicated to enforce it being mandatory, and we already have it implemented. I do think that public_name being optional has some impact here. It means that you don't have a reference identity for fallback handling. I think that's probably legitimate. There might be cases where fallback is not needed.\nHmm... how do you see this working if a future extension wants to specify IP addresses as a fallback but the publicname is not allowed to be optional? I agree that there will be cases where fallback is not needed. DNS-SD is one of them. And in that case, removing the publicname field (or allowing it to be empty, or whatever) will be needed. I think addresses that pretty nicely.\nAhh, I misunderstood. It's not that the extension is mandatory to use, but it is mandatory to understand when present. Let's be really crisp about the words we use. Of course, an empty value is an equally good way to indicate that the value isn't operative. 
It's one byte in the less-common case to save 4 bytes in the common case. The question occurs to me: is there any possibility that you could have both a publicname extension and this hypothetical ipreference_identity extension? What would that mean? I want to be really open about my priors here: I think that extensibility along this axis is an anti-feature for ECH. The mandatory-to-understand extensions are part of what really confirmed that for me. I think that the current format would be improved by removing the extensions block.\nUnclear -- I think it'd be up to the hypothetical extension to define how those things interact. I hear you. If we didn't care about IP addresses for the fallback certificate, much of this would be simplified. (And admittedly the desire to remove the extensions block would increase for me, too.) I'm going to take this to the list, since we don't seem to be getting anywhere without consensus on that point.\nThe draft already defines a notion of \"mandatory\". Although does both (let's call the other one \"required\"?), which I think is the minimal encoding change because... I don't think we should open that can of worms. Even though not having a fallback flow is really the server's decision, I think the client has a stake in this too. It's useful to nudge the ecosystem towards things that work well. If a server deploys without fallback, underestimating the predictability of DNS caching, it will break rarely in some clients. That sort of thing is bad UX and increases the pressure for clients to adopt insecure fallbacks. Unless there is a clear need, I don't think we should allow that. (Otherwise not setting it up is deceptively attractive for a naive server since they make fewer decision.) The extensibility story for ECH is a little funny. You send an ECHConfigList and the client ignores any ECHConfigs it can't understand. So if we say today's ECH clients require a publicname extension, future ECH clients can still support a publicip extension that relaxes the publicname requirement. Today's ECH clients will just ignore the publicip-using configs, which is the desired behavior. But I should say I have no stake in this IP mess and do not care whether we allow those. Just pick something that everyone's happy with, please. was an attempt to punt this out of the draft while keeping all the existing semantics unchanged (hence required extension; the correct reading is that it is purely an encoding change).\nThat point about making it hard NOT to provide a fallback makes me more inclined to reject . If the effect is just a change in encoding, it's a less efficient encoding and one that is harder to validate properly. If the effect is to make it optional under certain conditions, you have just argued effectively for why that has negative consequences.\nIt's really a different point and maybe deserves it's own issue but since I'm not logged into github and email works... The \"critical\" bit in x.509 extensions IMO didn't work. IIRC we really designed it for the basicConstraints extension, as that had to be understood by relying parties, but the result was that we more or less only had one shot at introducing any critical extensions - unless I'm forgetting stuff (which is entirely possible:-) pretty much all later attempts to add new critical extensions failed as they'd break deployed clients. We'd have been better off adding the is-a-ca flag and anything else we knew to be critical the TBSCertificate. 
(Again IIRC, we put both critical and non-critical stuff into extensions because the UniqueID fields in TBSCertificate that were added in x.509v2 hadn't been useful so there was a reluctance to put new fields in there at that point.) I don't see why criticality is different for ECH extensions. My bet is that any new critical extension will need a new TLS extension code point once the first ECH RFC is issued. So I'd remove the extensions fields we've defined and if not then at least get rid of the idea of criticality. Cheers, S.\nNAME I filed to discuss ECHConfig.extensions -- let's try and keep this issue focused on the IP address question(s).\nAddressed in , closing.\nSome edge cases around the publicname: What if the public name is some random string which is not a valid DNS name? (What's the right terminology here? Same from RFC6066?) Does the client reject the ECHConfig? My feeling is yes. Imagine you have some random service accessible on a private address. Today, it can assume it'll never see in the servername field. Even if an attacker pointed at that private address, the attacker is constrained by valid DNS strings. But the public name is never directly queried in DNS. Is a server deployment allowed to use an IP certificate for the ECH rejection identity? (That was actually the original idea for the fallback flow.) If so, how do they specify it? Do they use the ASCII serialization of the IPv4 or IPv6 address? Note that IP addresses do not go in server name, so the client would need to recognize this and leave server_name empty. NAME\nI imagined the outer name would be set from the config similar to how an application sets the inner name. That is, one provides a string to and, internally, the library determines whether or not that string is a valid domain name (and thus goes in the server name) or is an IP address (and thus is used to verify the IP-based cert that might come back from the server). ASCII serialization of addresses seems fine here, if one just takes that string and passes it to the function (or equivalent).\nWell, OpenSSL and derivatives have a lower-level API and just take the server name as an opaque string. :-) But yeah, that seems a reasonable model. Whatever we pick, we should write it down. What do you think should happen if the public name is neither a valid DNS name, nor a valid IP address? Like . ECHConfig parse error would be my inclination.\nYeah, that seems like a configuration error on behalf of the server.\nThat seems like it's already an unsafe assumption, right? Malicious clients could send SQL-injection strings to vulnerable servers without ECH. I suppose ECH introduces some asymmetry; one malicious DNS server could cause N clients to send the malicious payload.\nBacking up a step, if we care to protect applications from malicious/malformed DNS hostnames, it seems prudent to validate all incoming server_name values on the server, whether they came from ClientHelloInner/ClientHelloOuter or even a non-ECH ClientHello. (Assuming we're using the RFC6066 HostName definition.)\nNAME yeah, that's basically what I was thinking the function would do. Would you have cycles to prep a PR to resolve this issue?\nIt depends. Services listening on private addresses often make a lot of assumptions. These assumptions are pretty questionable, but we put in effort to anyway because that is reality. Yes, any server listening on a network should tolerate arbitrary input and behave well. Nonetheless, the reality is private services are often buggy. 
Where we break soft assumptions that might have previously (mostly) worked, we should at least think about it. Here, it seems there's no point in breaking the assumption.\nAh, servers at private addresses was not even on my radar. I support client validation of ECHConfig.publicname at parse time AND server validation of all ClientHello* servername extensions. NAME Happy to do a PR.\n(I don't think the ECH spec should say anything about server validation here. The server looks at whichever ClientHello it selects and does whatever it would normally do with the server name in the selected one.)\nThere doesn't seem to be a standard that reflects actual current practice. Chrome and Firefox accept SNIs with underscores, which are not allowed in valid DNS names (URL).\nYep, thanks for raising that, Carrick. Perhaps we should make hostname validation a SHOULD rather than MUST?\nThe issue doesn't sound like a MUST vs. a SHOULD but rather what they SHOULD or MUST do. I guess we could say \"do whatever you usually do with the SNI.\"\nOh and yeah, SHOULD seems like a better idea than MUST since there are conflicting opinions about the right way to go about it.\nIt's a little tricky due to what layers we're talking about. Chrome doesn't particularly validate the SNI at the TLS level. Rather, we leave that to URL parsing and DNS logic earlier in the pipeline. If we've gotten far enough to make a TLS connection, we assume we're happy with the string we're passing to SNI and certificate validation. I imagine this is reasonably common: OpenSSL likewise treats the setservername API as an opaque byte string. ECH breaks this assumption by bypassing all the earlier layers. Suddenly a TLS-level construct (ECHConfig) is fabricating a DNS name on its own, and now TLS needs to pick up all the validation.\nTechnically couldn't DNS logic validate the DNS name in the ECHConfig?\nHi folks, PR is a little more complete now, if you get a chance :)\nTo close the loop on this: it could, but we're treating the ECH parts of SVCB as opaque to DNS. [edit: I miswrote \"opaque to TLS\" at first] I think that's desirable because TLS may introduce new ECH extensions or ECH versions that the DNS stack may not be aware of. Also, any validation on the URL parsing / DNS query path happens in the process of making a request to the DNS server at all, whereas any validation here would be a check afterwards on the result. Whatever we do, it'll end up looking like someone, DNS or TLS, implementing a check we need to write down.\nTo come back to this: because SNI does not support IP addresses, IP certs must be the server's default-cert, i.e. they must be served to requests that omit the SNI. Thus, we merely need a way to tell clients to omit . My suggestion is to allow to be empty, meaning \"omit outer SNI\".\nKeeping in mind that the client uses to authenticate the server's , the client now needs to be aware of which IP address it's connecting to. I suppose this isn't a protocol concern, but it's more ergonomic when the ECHConfig contains all the information TLS needs to do ECH.\nOK. As I mentioned yesterday, I think should probably be an X.509 .\nI do not think GeneralName will work. There's somewhat a mismatch between what the application thinks of as identities and how those identities are encoded into X.509. GeneralName is part of the latter. What does a browser certificate verifier, which only believes in things that fit in a URL, do if it's told the identity is now an EDIPartyName? 
What SNI, etc., does that translate into, given that our identity-to-TLS-config algorithm takes an HTTPS origin, not an X.509 name? (Note that the type field in SNI is fiction. There is and will be .) More importantly, this glomming too many things into an already overloaded issue. If you want to pursue this, can you please fork GeneralName into a separate topic? The status quo in the existing text is that the public name is clearly only good for DNS or DNS+IP, depending on how much you believe in URL syntax. We already have a need to better specify that case, including client validation. Let's fix that, and then we can separate tackle more complex cases.\nStrongly disagree. GeneralName was a reasonable guess at a thing to do 20 years ago but is no longer a structure that ought be leveraged - we have no need for EDI nor X.400 names for ECH and even tempting someone to write code for ORAddress handling is risky - there's >1 can of worms there all of which are best avoided. S.\nAgree with Stephen. Using GeneralName is like hunting butterflies with an elephant gun. :)\nAddressed in , closing.\nThis seems like it would prohibit URL NAME NAME would this be a problem for the DNS-SD case?High-level, I think that we need to agree on what this value is for first. Editorially, I think that this text needs more space to move. Trying to cram all this stuff into a single paragraph isn't doing us any favours. Make a new section for this. It's certainly important enough to do properly. My sense is that the right answer to the underlying issues is that this is a reference identity rather than something we put in servername in ClientHelloOuter. In that case, the fallback is achieved by establishing a connection and validating that the entity providing the updated configuration is able to answer for this reference identity. You then construct ClientHelloOuter in such a way as to ensure that you are able to successfully connect to that reference identity if ClientHelloInner cannot be accepted. If this is just a name, then this is over-specified. If the goal is to describe what goes in ClientHelloOuter server_name, none of the validation is necessary. If the server wants you to put junk in the field, then I don't see how that junk would be any more identifying than an identifying DNS name. You really only need special IP address rules if you intend for this to be a reference identity. The Server Name Indication (SNI) extension in TLS has a provision to provide names other than host names[1]. None have even been defined to my knowledge, but it's there. OpenSSL (and possibly others) have had a long-standing bug[2] (fixed in master) that means that different types of names will cause an error. To be clear: I live in a glass house and am not throwing stones; these things happen. However, it means that a huge fraction of the TLS deployment will not be able to accept a different name type should one ever be defined. (This issue might have been caused by the fact that the original[3] spec didn't define the extension in such a way that unknown name types could be skipped over.) Therefore we (i.e. BoringSSL, and thus Google) are proposing to give up on this and implement our parser such that the SNI extension is only allowed to contain a single host name value. (This is compatible with all known clients.) We're assuming that since this is already the de-facto reality that there will be little objection. 
I'm sending this mostly to record the fact so that, if someone tries to define a new name type in the future, they won't waste their time. If the community wishes to indicate a different type of name in the future, a new extension can be defined. This is already effectively the case because we wouldn't fight this level of incompatibility when there's any other option. (I think the lesson here is that protocols should have a single joint, and that it should be kept well oiled. For TLS, that means that extensions should have minimal extensionality in themselves and that we should generally rely on the main extensions mechanism for these sorts of things.) [1] URL [2] URL \u2013 note that the data pointer is not updated. [3] URL Cheers AGL Adam Langley EMAIL URL", "new_text": "The non-empty name of the client-facing server, i.e., the entity trusted to update the ECH configuration. This is used to correct misconfigured clients, as described in handle-server-response. This value MUST NOT begin or end with an ASCII dot and MUST be parsable as a dot-separated sequence of LDH labels, as defined in RFC5890, Section 2.3.1. Clients MUST ignore any \"ECHConfig\" structure whose \"public_name\" does not meet these criteria. Note that these criteria are incomplete; they incidentally rule out textual representations of IPv6 addresses (see RFC3986, Section 3.2.2), but do not exclude IPv4 addresses in standard dotted-decimal or other non-standard notations such as octal and hexadecimal (see RFC3986, Section 7.4). If \"public_name\" contains a literal IPv4 or IPv6 address, the client SHOULD ignore the \"ECHConfig\" to avoid sending a non-compliant \"server_name\" extension on the ClientHelloOuter (see RFC6066, Section 3). A list of extensions that the client must take into consideration when generating a ClientHello message. These are described below"} {"id": "q-en-draft-ietf-tls-esni-96e4deec6af2269676d05dbef0cc19ec289656061425f2e564302dac1f47edaf", "old_text": "\"psk_key_exchange_modes\" from the ClientHelloInner into the ClientHelloOuter. [[OPEN ISSUE: We currently require HRR-sensitive parameters to match in ClientHelloInner and ClientHelloOuter in order to simplify client- side logic in the event of HRR. See https://github.com/tlswg/draft-", "comments": "This should probably land after . cc NAME\nI don't know the answer to this question, but, from the TLS 1.3 RFC: I noticed this while testing with a pre-baked ClientHello that had both extensions. The ECH draft clearly required that the extension be removed, but I wondered about .\nGiven URL, URL, and URL, I'd say quite the opposite.\nAh, I see text has been added saying it must appear in the ClientHelloOuter. Thanks for the references. I was working from draft-10.\nHere's a fun case. Suppose the client talks to a 0-RTT-capable, ECH-capable server. It'll send earlydata + presharedkey in ClientHelloInner, wrap that in some ClientHelloOuter, and follow it up with early data records. Now the server rejects ECH and handshakes with the ClientHelloOuter. But the client has already sent early data, so the server needs to know to skip past it. I believe this means we need to require the client mirror ClientHelloInner.earlydata in ClientHelloOuter, even though it's not actually offering to resume anything. Sending earlydata without presharedkey is slightly weird (if tolerated at all), so we might even want to recommend a fake outer presharedkey. 
We almost_ don't care about this: the client may as well stop the handshake at server Finished and skip the client Finished flight anyway, for purposes of the recovery flow. However: What's not what the draft currently says If the server rejects ECH, handshakes with ClientHelloOuter, and sends HelloRetryRequest, we do care about the client's second flight getting through. That second case is extra fun because the client-facing server must look for the second ClientHello too.\nWasn't this already the outcome from ? (That is, if the inner CH has a PSK, then the outer one ought have a dummy PSK, too.) As for mirroring earlydata, why can the server not just ignore the application data (as if it received earlydata and decided to ignore it)? Taking a quick look back at RFC8446, the server behavior where clients send application data without early_data is not defined, so are you concerned about that, or something else?\nWell, I think we only got as far as MAY in , but yeah I think this is another reason. Regarding ignoring application data, our server implementation only skips early data if there was an early_data extension that we rejected. I believe that matches RFC8446 since it doesn't say to skip application data in general, just that case. And, yeah, that's the concern.\nSince this is behavior for the client-facing server, which supports ECH, can't we specify the desired behavior in this spec? That is, we might say that a client-facing server which rejects ECH MUST ignore any early data sent by the server, or something. Would that work?\nWe could, but then it'd break the nice property that a plain RFC8446 server will handshake ClientHelloOuter without any fuss. This property is great for rollback safety. (Otherwise any ECH server deployment needs to be two-stage, and introduces a lower bound that you cannot rollback beyond.) And elsewhere we've tried to avoid contradicting RFC8446. Yet another option would be to tell the client to retry on certain kinds of errors if it offered 0-RTT in ClientHelloInner, but that's a bit more fuss since retries are often outside the TLS stack. (Besides, the fact that you offer 0-RTT in ClientHelloInner is all but public anyway. While we don't have a cleartext 0-RTT marker[], it's pretty obvious from timing that some post-ClientHello record came before ServerHello.) [] In TCP... I want to say QUIC does, but I could be misremembering.\nTrue! This is fun. :-)\nI'm not sure why ignoring stuff is fuss. What's the alternative? To terminate the handshake if the server receives early data? Wouldn't that be more fuss? If RFC8446 doesn't specify what to do in this situation, I'm not sure why specifying it is in contradiction to RFC8446 (as opposed to expanding on it).\nI think RFC8446 does say what to do in this situation. Section 5.2 says at the bottom: URL The default for the record layer is that decryption errors are fatal (as it should be). Early data skipping is an exception to this described in section 4.2.10, and the exception only applies when the server declines an earlydata extension. Oh no, the goal is to ignore stuff. That's what makes the recovery flow work. The issue is that, in RFC8446, servers only ignore the early data if there was an earlydata extension in ClientHello which they declined. Otherwise, it's just a fatal decryption error. 
In order to meet that condition, ClientHelloOuter needs the early_data extension whenever ClientHelloInner does.\nACK -- thanks for closing the loop!", "new_text": "\"psk_key_exchange_modes\" from the ClientHelloInner into the ClientHelloOuter. When the client offers the \"early_data\" extension in ClientHelloInner, it MUST also include the \"early_data\" extension in ClientHelloOuter. This allows servers that reject ECH and use ClientHelloOuter to safely ignore any early data sent by the client per RFC8446, Section 4.2.10. [[OPEN ISSUE: We currently require HRR-sensitive parameters to match in ClientHelloInner and ClientHelloOuter in order to simplify client- side logic in the event of HRR. See https://github.com/tlswg/draft-"} {"id": "q-en-draft-ietf-tls-esni-54dc0e344e46f27b139f6d9ae30ef83379097042dea318c0b5e2bef4e9d4cd3d", "old_text": "ClientHelloInner. The list of HPKE KDF and AEAD identifier pairs clients can use for encrypting ClientHelloInner. The client-facing server advertises a sequence of ECH configurations to clients, serialized as follows.", "comments": "Not sure if this is needed, but I guess it would be another way to generate failures;-)\nGood clarification! This was already somewhat in the real-ech section, so I just replaced this with a forward pointer there.", "new_text": "ClientHelloInner. The list of HPKE KDF and AEAD identifier pairs clients can use for encrypting ClientHelloInner. See real-ech for how clients choose from this list. The client-facing server advertises a sequence of ECH configurations to clients, serialized as follows."} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-feb7a78641fc2426a0ce9fa728380f8b2f357713673065a8e2bc082e9570326c", "old_text": "Deprecating MD5 and SHA-1 signature hashes in TLS 1.2 draft-ietf-tls-md5-sha1-deprecate-04 Abstract The MD5 and SHA-1 hashing algorithms are steadily weakening in strength and their deprecation process should begin for their use in TLS 1.2 digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. This document updates RFC 5246 and RFC 7525. 1.", "comments": "NAME thanks, updated with the additional changes.\nNAME NAME NAME Feel free to merge when you get a chance\nGreat! Now on to Daniel's comments.", "new_text": "Deprecating MD5 and SHA-1 signature hashes in TLS 1.2 draft-ietf-tls-md5-sha1-deprecate-05 Abstract The MD5 and SHA-1 hashing algorithms are increasingly vulnerable to attack and this document deprecates their use in TLS 1.2 digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. This document updates RFC 5246 and RFC 7525. 1."} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-feb7a78641fc2426a0ce9fa728380f8b2f357713673065a8e2bc082e9570326c", "old_text": "recommendation to use stronger language deprecating use of both SHA-1 and MD5. The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated.. Section 4.3:", "comments": "NAME thanks, updated with the additional changes.\nNAME NAME NAME Feel free to merge when you get a chance\nGreat! Now on to Daniel's comments.", "new_text": "recommendation to use stronger language deprecating use of both SHA-1 and MD5. The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated. 
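Tying back to the ECH records above: the rule that ClientHelloOuter must carry "early_data" (and copy "psk_key_exchange_modes") whenever ClientHelloInner offers them amounts to mirroring a small set of extensions when the outer hello is assembled. A minimal sketch follows; the function and extension names are hypothetical stand-ins, not any TLS stack's API.

# Hypothetical helper: mirror the extensions that must stay consistent
# between ClientHelloInner and ClientHelloOuter so that a server which
# rejects ECH and handshakes with the outer hello still knows to skip
# any 0-RTT records the client already sent.
MIRRORED_EXTENSIONS = ("psk_key_exchange_modes", "early_data")

def outer_extensions(inner_exts: dict, outer_only_exts: dict) -> dict:
    outer = dict(outer_only_exts)
    for name in MIRRORED_EXTENSIONS:
        if name in inner_exts:
            outer[name] = inner_exts[name]
    return outer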
Section 4.3:"} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-feb7a78641fc2426a0ce9fa728380f8b2f357713673065a8e2bc082e9570326c", "old_text": "recommended) as defined by RFC8447. The following entries are to be updated: Other entries of the resgistry remain the same. 9. Concerns with TLS 1.2 implementations falling back to SHA-1 is an issue. This draft updates the TLS 1.2 specification to deprecate support for MD5 and SHA-1 for digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection.", "comments": "NAME thanks, updated with the additional changes.\nNAME NAME NAME Feel free to merge when you get a chance\nGreat! Now on to Daniel's comments.", "new_text": "recommended) as defined by RFC8447. The following entries are to be updated: Other entries of the registry remain the same. 9. Concerns with TLS 1.2 implementations falling back to SHA-1 is an issue. This document updates the TLS 1.2 specification to deprecate support for MD5 and SHA-1 for digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection."} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-820e8a4d73fdcc5e8b30b85d8bcae2e65f4ea967b60a2a6d8484e7453d751c9d", "old_text": "2. Clients MUST NOT include MD5 and SHA-1 in the signature_algorithms extension. If a client does not send a signature_algorithms extension, then the server MUST abort the handshake and send a handshake_failure alert, except when digital signatures are not used (for example, when using PSK ciphers). 3.", "comments": "cc NAME NAME NAME\nI believe this I-D should still include the \"update: 5246 (if approved)\" header, even though we took out the 5246 OLD/NEW section. We are still proposing that 5246 be changed to require signature_algorithms extension be sent and that MD5/SHA1 never be sent. In other words, I think you can merge this PR assuming there are no conflicts.\nVerifying that this is okay on list, but Daniel suggested the following change for s6: Add the following immediately bfore the OLD/NEW, i.e., right after \"due to MD5 and SHA-1 being deprecated.\": In Section 7.1.4.1: the following text is removed: If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the signature_algorithms extension.\nResulted in a number of changes - see .\nI think you also need to update the abstract and introduction sections, as they both mention \"This document updates ...\" (same as URL). Will probably conflict with URL so we should merge that first.I believe this can be merged as is. I.e., don't remove the \"udpates\" header, change the abstract, or intro.", "new_text": "2. Clients MUST include the signature_algorithms extension. Clients MUST NOT include MD5 and SHA-1 in this extension. 3."} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-820e8a4d73fdcc5e8b30b85d8bcae2e65f4ea967b60a2a6d8484e7453d751c9d", "old_text": "4. Servers MUST NOT include MD5 and SHA-1 in ServerKeyExchange messages. If a client receives a MD5 or SHA-1 signature in a ServerKeyExchange message it MUST abort the connection with the illegal_parameter alert. 5.", "comments": "cc NAME NAME NAME\nI believe this I-D should still include the \"update: 5246 (if approved)\" header, even though we took out the 5246 OLD/NEW section. We are still proposing that 5246 be changed to require signature_algorithms extension be sent and that MD5/SHA1 never be sent. 
In other words, I think you can merge this PR assuming there are no conflicts.\nVerifying that this is okay on list, but Daniel suggested the following change for s6: Add the following immediately bfore the OLD/NEW, i.e., right after \"due to MD5 and SHA-1 being deprecated.\": In Section 7.1.4.1: the following text is removed: If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the signature_algorithms extension.\nResulted in a number of changes - see .\nI think you also need to update the abstract and introduction sections, as they both mention \"This document updates ...\" (same as URL). Will probably conflict with URL so we should merge that first.I believe this can be merged as is. I.e., don't remove the \"udpates\" header, change the abstract, or intro.", "new_text": "4. Servers MUST NOT include MD5 and SHA-1 in ServerKeyExchange messages. If no other signature algorithms are available (for example, if the client does not send a signature_algorithms extension), the server MUST abort the handshake with a handshake_failure alert or select a different cipher suite. 5."} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-820e8a4d73fdcc5e8b30b85d8bcae2e65f4ea967b60a2a6d8484e7453d751c9d", "old_text": "6. RFC5246, The Transport Layer Security (TLS) Protocol Version 1.2, suggests that implementations can assume support for MD5 and SHA-1 by their peer. This update changes the suggestion to assume support for SHA-256 instead, due to MD5 and SHA-1 being deprecated. In Section 7.4.1.4.1: the text should be revised from: OLD: \"Note: this is a change from TLS 1.1 where there are no explicit rules, but as a practical matter one can assume that the peer supports MD5 and SHA- 1.\" NEW: \"Note: This is a change from TLS 1.1 where there are no explicit rules, but as a practical matter one can assume that the peer supports SHA-256.\" 7. The document updates the \"TLS SignatureScheme\" registry to change the recommended status of SHA-1 based signature schemes to N (not recommended) as defined by RFC8447. The following entries are to be", "comments": "cc NAME NAME NAME\nI believe this I-D should still include the \"update: 5246 (if approved)\" header, even though we took out the 5246 OLD/NEW section. We are still proposing that 5246 be changed to require signature_algorithms extension be sent and that MD5/SHA1 never be sent. In other words, I think you can merge this PR assuming there are no conflicts.\nVerifying that this is okay on list, but Daniel suggested the following change for s6: Add the following immediately bfore the OLD/NEW, i.e., right after \"due to MD5 and SHA-1 being deprecated.\": In Section 7.1.4.1: the following text is removed: If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the signature_algorithms extension.\nResulted in a number of changes - see .\nI think you also need to update the abstract and introduction sections, as they both mention \"This document updates ...\" (same as URL). Will probably conflict with URL so we should merge that first.I believe this can be merged as is. I.e., don't remove the \"udpates\" header, change the abstract, or intro.", "new_text": "6. The document updates the \"TLS SignatureScheme\" registry to change the recommended status of SHA-1 based signature schemes to N (not recommended) as defined by RFC8447. 
The following entries are to be"} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-820e8a4d73fdcc5e8b30b85d8bcae2e65f4ea967b60a2a6d8484e7453d751c9d", "old_text": "[RFC5246][RFC8447][RFC-to-be] 8. Concerns with TLS 1.2 implementations falling back to SHA-1 is an issue. This document updates the TLS 1.2 specification to deprecate support for MD5 and SHA-1 for digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. 9. The authors would like to thank Hubert Kario for his help in writing the initial draft. We are also grateful to Daniel Migault, Martin", "comments": "cc NAME NAME NAME\nI believe this I-D should still include the \"update: 5246 (if approved)\" header, even though we took out the 5246 OLD/NEW section. We are still proposing that 5246 be changed to require signature_algorithms extension be sent and that MD5/SHA1 never be sent. In other words, I think you can merge this PR assuming there are no conflicts.\nVerifying that this is okay on list, but Daniel suggested the following change for s6: Add the following immediately bfore the OLD/NEW, i.e., right after \"due to MD5 and SHA-1 being deprecated.\": In Section 7.1.4.1: the following text is removed: If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the signature_algorithms extension.\nResulted in a number of changes - see .\nI think you also need to update the abstract and introduction sections, as they both mention \"This document updates ...\" (same as URL). Will probably conflict with URL so we should merge that first.I believe this can be merged as is. I.e., don't remove the \"udpates\" header, change the abstract, or intro.", "new_text": "[RFC5246][RFC8447][RFC-to-be] 7. Concerns with TLS 1.2 implementations falling back to SHA-1 is an issue. This document updates the TLS 1.2 specification to deprecate support for MD5 and SHA-1 for digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. 8. The authors would like to thank Hubert Kario for his help in writing the initial draft. We are also grateful to Daniel Migault, Martin"} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-6eb0c99a0ce951650a87d5230b2fd09367f7be93fbc3fd7e9b297f28c9c7b300", "old_text": "The MD5 and SHA-1 hashing algorithms are steadily weakening in strength and their deprecation process should begin for their use in TLS 1.2 digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. 1. The usage of MD5 and SHA-1 for signature hashing in TLS 1.2 is specified in RFC5246. MD5 and SHA-1 have been proven to be insecure, subject to collision attacks. RFC6151 details the security considerations, including collision attacks for MD5, published in 2011. NIST formally deprecated use of SHA-1 in 2011 NISTSP800-131A-R2 and disallowed its use for digital signatures at the end of 2013, based on both the Wang, et. al, attack and the potential for brute-force attack. In 2016, researchers from INRIA identified a new class of transcript collision attacks on TLS (and other protocols) that rely on efficient collision-finding algorithms on the underlying hash constructions Transcript-Collision. Further, in 2017, researchers from Google and CWI Amsterdam SHA-1-Collision proved SHA-1 collision attacks were practical. This document updates RFC5246 and RFC7525 in such a way that MD5 and SHA-1 MUST NOT be used for digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. 
1.1. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in RFC2119. 2.", "comments": "Addressing Roman's AD comments.\nhas 2 conflicts, not sure how to resolve, the updates look fine. Please merge.\nThere was a conflict with the 8174 reference being in the other branch. I added it in because it's now part of the terminology paragraph.\nPlease address the following IDNits: -- The document seems to lack an IANA Considerations section. (See Section 2.2 of URL for how to handle the case when there are no actions for IANA.) -- The draft header indicates that this document updates RFC5246, but the abstract doesn't seem to mention this, which it should. -- The draft header indicates that this document updates RFC7525, but the abstract doesn't seem to mention this, which it should. Section 1. Editorial. -- s/RFC 5246 [RFC5246]/[RFC5246]/ -- s/RFC 6151 [RFC6151]/[RFC6151]/ -- s/RFC7525 [RFC7525]/[RFC7525]/ Section 1. Editorial. For symmetry with the rest of the text: OLD RFC 6151 [RFC6151] details the security considerations, including collision attacks for MD5, published in 2011. NEW In 2011, [RFC6151] detailed the security considerations, including collision attacks for MD5. Section 1. Please provide a reference for \"Wang, et al\". Is there a reference to provide for the \"the potential for brute-force attack\" Section 6. Editorial Nit. s/RFC5246 [RFC5246]/[RFC5246]/ Section 6. Move the text \"In Section 7.4.1.4.1: the text should be revised from\" out of the \"OLD\" block of text to be its own intro paragraph so that the OLD vs. NEW is a clear cut-and-paste. Section 7. Editorial. s/ RFC7525 [RFC7525]/[RFC7525]/ Section 7. SHA-1 is also not mentioned in RFC7525. Recommend: OLD The prior text did not explicitly include MD5 and this text adds it to ensure it is understood as having been deprecated. NEW The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated. Section 7. Editorial. Grammar. OLD In addition, the use of the SHA-256 hash algorithm is RECOMMENDED, SHA-1 or MD5 MUST NOT be used NEW In addition, the use of the SHA-256 hash algorithm is RECOMMENDED; and SHA-1 or MD5 MUST NOT be used Section 10.2 Please make RFC5246 a normative reference.", "new_text": "The MD5 and SHA-1 hashing algorithms are steadily weakening in strength and their deprecation process should begin for their use in TLS 1.2 digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. This document updates RFC 5246 and RFC 7525. 1. The usage of MD5 and SHA-1 for signature hashing in TLS 1.2 is specified in RFC5246. MD5 and SHA-1 have been proven to be insecure, subject to collision attacks Wang. In 2011, RFC6151 detailed the security considerations, including collision attacks for MD5. NIST formally deprecated use of SHA-1 in 2011 NISTSP800-131A-R2 and disallowed its use for digital signatures at the end of 2013, based on both the Wang, et. al, attack and the potential for brute-force attack. In 2016, researchers from INRIA identified a new class of transcript collision attacks on TLS (and other protocols) that rely on efficient collision-finding algorithms on the underlying hash constructions Transcript-Collision. Further, in 2017, researchers from Google and CWI Amsterdam SHA-1-Collision proved SHA-1 collision attacks were practical. 
This document updates RFC5246 and RFC7525 in such a way that MD5 and SHA-1 MUST NOT be used for digital signatures. However, this document does not deprecate SHA-1 in HMAC for record protection. 1.1. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. 2."} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-6eb0c99a0ce951650a87d5230b2fd09367f7be93fbc3fd7e9b297f28c9c7b300", "old_text": "their peer. This update changes the suggestion to assume support for SHA-256 instead, due to MD5 and SHA-1 being deprecated. OLD: In Section 7.4.1.4.1: the text should be revised from \" Note: this is a change from TLS 1.1 where there are no explicit rules, but as a practical matter one can assume that the peer supports MD5 and SHA- 1.\" NEW:", "comments": "Addressing Roman's AD comments.\nhas 2 conflicts, not sure how to resolve, the updates look fine. Please merge.\nThere was a conflict with the 8174 reference being in the other branch. I added it in because it's now part of the terminology paragraph.\nPlease address the following IDNits: -- The document seems to lack an IANA Considerations section. (See Section 2.2 of URL for how to handle the case when there are no actions for IANA.) -- The draft header indicates that this document updates RFC5246, but the abstract doesn't seem to mention this, which it should. -- The draft header indicates that this document updates RFC7525, but the abstract doesn't seem to mention this, which it should. Section 1. Editorial. -- s/RFC 5246 [RFC5246]/[RFC5246]/ -- s/RFC 6151 [RFC6151]/[RFC6151]/ -- s/RFC7525 [RFC7525]/[RFC7525]/ Section 1. Editorial. For symmetry with the rest of the text: OLD RFC 6151 [RFC6151] details the security considerations, including collision attacks for MD5, published in 2011. NEW In 2011, [RFC6151] detailed the security considerations, including collision attacks for MD5. Section 1. Please provide a reference for \"Wang, et al\". Is there a reference to provide for the \"the potential for brute-force attack\" Section 6. Editorial Nit. s/RFC5246 [RFC5246]/[RFC5246]/ Section 6. Move the text \"In Section 7.4.1.4.1: the text should be revised from\" out of the \"OLD\" block of text to be its own intro paragraph so that the OLD vs. NEW is a clear cut-and-paste. Section 7. Editorial. s/ RFC7525 [RFC7525]/[RFC7525]/ Section 7. SHA-1 is also not mentioned in RFC7525. Recommend: OLD The prior text did not explicitly include MD5 and this text adds it to ensure it is understood as having been deprecated. NEW The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated. Section 7. Editorial. Grammar. OLD In addition, the use of the SHA-256 hash algorithm is RECOMMENDED, SHA-1 or MD5 MUST NOT be used NEW In addition, the use of the SHA-256 hash algorithm is RECOMMENDED; and SHA-1 or MD5 MUST NOT be used Section 10.2 Please make RFC5246 a normative reference.", "new_text": "their peer. This update changes the suggestion to assume support for SHA-256 instead, due to MD5 and SHA-1 being deprecated. 
In Section 7.4.1.4.1: the text should be revised from: OLD: \"Note: this is a change from TLS 1.1 where there are no explicit rules, but as a practical matter one can assume that the peer supports MD5 and SHA- 1.\" NEW:"} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-6eb0c99a0ce951650a87d5230b2fd09367f7be93fbc3fd7e9b297f28c9c7b300", "old_text": "(TLS) and Datagram Transport Layer Security (DTLS) recommends use of SHA-256 as a minimum requirement. This update moves the minimum recommendation to use stronger language deprecating use of both SHA-1 and MD5. The prior text did not explicitly include MD5 and this text adds it to ensure it is understood as having been deprecated. Section 4.3:", "comments": "Addressing Roman's AD comments.\nhas 2 conflicts, not sure how to resolve, the updates look fine. Please merge.\nThere was a conflict with the 8174 reference being in the other branch. I added it in because it's now part of the terminology paragraph.\nPlease address the following IDNits: -- The document seems to lack an IANA Considerations section. (See Section 2.2 of URL for how to handle the case when there are no actions for IANA.) -- The draft header indicates that this document updates RFC5246, but the abstract doesn't seem to mention this, which it should. -- The draft header indicates that this document updates RFC7525, but the abstract doesn't seem to mention this, which it should. Section 1. Editorial. -- s/RFC 5246 [RFC5246]/[RFC5246]/ -- s/RFC 6151 [RFC6151]/[RFC6151]/ -- s/RFC7525 [RFC7525]/[RFC7525]/ Section 1. Editorial. For symmetry with the rest of the text: OLD RFC 6151 [RFC6151] details the security considerations, including collision attacks for MD5, published in 2011. NEW In 2011, [RFC6151] detailed the security considerations, including collision attacks for MD5. Section 1. Please provide a reference for \"Wang, et al\". Is there a reference to provide for the \"the potential for brute-force attack\" Section 6. Editorial Nit. s/RFC5246 [RFC5246]/[RFC5246]/ Section 6. Move the text \"In Section 7.4.1.4.1: the text should be revised from\" out of the \"OLD\" block of text to be its own intro paragraph so that the OLD vs. NEW is a clear cut-and-paste. Section 7. Editorial. s/ RFC7525 [RFC7525]/[RFC7525]/ Section 7. SHA-1 is also not mentioned in RFC7525. Recommend: OLD The prior text did not explicitly include MD5 and this text adds it to ensure it is understood as having been deprecated. NEW The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated. Section 7. Editorial. Grammar. OLD In addition, the use of the SHA-256 hash algorithm is RECOMMENDED, SHA-1 or MD5 MUST NOT be used NEW In addition, the use of the SHA-256 hash algorithm is RECOMMENDED; and SHA-1 or MD5 MUST NOT be used Section 10.2 Please make RFC5246 a normative reference.", "new_text": "(TLS) and Datagram Transport Layer Security (DTLS) recommends use of SHA-256 as a minimum requirement. This update moves the minimum recommendation to use stronger language deprecating use of both SHA-1 and MD5. The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated.. Section 4.3:"} {"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-6eb0c99a0ce951650a87d5230b2fd09367f7be93fbc3fd7e9b297f28c9c7b300", "old_text": "Servers SHOULD authenticate using certificates with at least a 2048-bit modulus for the public key. 
In addition, the use of the SHA-256 hash algorithm is RECOMMENDED, SHA-1 or MD5 MUST NOT be used (see CAB-Baseline for more details). Clients MUST indicate to servers that they request SHA-256, by using the \"Signature Algorithms\" extension defined in TLS 1.2. 8.", "comments": "Addressing Roman's AD comments.\nhas 2 conflicts, not sure how to resolve, the updates look fine. Please merge.\nThere was a conflict with the 8174 reference being in the other branch. I added it in because it's now part of the terminology paragraph.\nPlease address the following IDNits: -- The document seems to lack an IANA Considerations section. (See Section 2.2 of URL for how to handle the case when there are no actions for IANA.) -- The draft header indicates that this document updates RFC5246, but the abstract doesn't seem to mention this, which it should. -- The draft header indicates that this document updates RFC7525, but the abstract doesn't seem to mention this, which it should. Section 1. Editorial. -- s/RFC 5246 [RFC5246]/[RFC5246]/ -- s/RFC 6151 [RFC6151]/[RFC6151]/ -- s/RFC7525 [RFC7525]/[RFC7525]/ Section 1. Editorial. For symmetry with the rest of the text: OLD RFC 6151 [RFC6151] details the security considerations, including collision attacks for MD5, published in 2011. NEW In 2011, [RFC6151] detailed the security considerations, including collision attacks for MD5. Section 1. Please provide a reference for \"Wang, et al\". Is there a reference to provide for the \"the potential for brute-force attack\" Section 6. Editorial Nit. s/RFC5246 [RFC5246]/[RFC5246]/ Section 6. Move the text \"In Section 7.4.1.4.1: the text should be revised from\" out of the \"OLD\" block of text to be its own intro paragraph so that the OLD vs. NEW is a clear cut-and-paste. Section 7. Editorial. s/ RFC7525 [RFC7525]/[RFC7525]/ Section 7. SHA-1 is also not mentioned in RFC7525. Recommend: OLD The prior text did not explicitly include MD5 and this text adds it to ensure it is understood as having been deprecated. NEW The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated. Section 7. Editorial. Grammar. OLD In addition, the use of the SHA-256 hash algorithm is RECOMMENDED, SHA-1 or MD5 MUST NOT be used NEW In addition, the use of the SHA-256 hash algorithm is RECOMMENDED; and SHA-1 or MD5 MUST NOT be used Section 10.2 Please make RFC5246 a normative reference.", "new_text": "Servers SHOULD authenticate using certificates with at least a 2048-bit modulus for the public key. In addition, the use of the SHA-256 hash algorithm is RECOMMENDED; and SHA-1 or MD5 MUST NOT be used (see CAB-Baseline for more details). Clients MUST indicate to servers that they request SHA- 256, by using the \"Signature Algorithms\" extension defined in TLS 1.2. 8."} {"id": "q-en-draft-ietf-tls-ticketrequest-7770c27cd08564df50446692cf19dddf45f59a062d413b7e72fd4e9527a9e80a", "old_text": "will typically be the minimum of the server's self-imposed limit and TicketRequestContents.count. Servers that support ticket requests MUST NOT echo \"ticket_request\" in the EncryptedExtensions message. A client MUST abort the connection with an \"illegal_parameter\" alert if the \"ticket_request\" extension is present in the EncryptedExtensions message. If a client receives a HelloRetryRequest, the presence (or absence) of the \"ticket_request\" extension MUST be maintained in the second", "comments": "Addresses and . 
The server hint came up in the context of QUIC, e.g., as a possible optimization to let clients know if they should expect any post handshake messages. cc NAME NAME\nLooks like this got merged without any business days to review it :) but otherwise the text looks good, approved.\nEvery day is a business day! ;-)\nFrom NAME\nFixed by .\nYou probably want to be clear that ticket_request cannot appear in HRR, given that you have a section on it.\nFixed in .\nI'm not a huge fan of sending extensions that aren't really needed, but this seems fine. I missed the QUIC discussion (or maybe it's still in my inbox). BTW, I still like the idea of hoisting tickets to the application layer, but ekr seems to think that they belong at the TLS layer. I guess we'll just have to keep disagreeing on that point :)", "new_text": "will typically be the minimum of the server's self-imposed limit and TicketRequestContents.count. A server that supports ticket requests MAY echo the \"ticket_request\" extension in the EncryptedExtensions message. If present, it contains a TicketRequestContents structure, where TicketRequestContents.count indicates the number of tickets the server expects to send to the client. Servers MUST NOT send the \"ticket_request\" extension in ServerHello or HelloRetryRequest messages. A client MUST abort the connection with an \"illegal_parameter\" alert if the \"ticket_request\" extension is present in either of these messages. If a client receives a HelloRetryRequest, the presence (or absence) of the \"ticket_request\" extension MUST be maintained in the second"} {"id": "q-en-draft-ietf-tls-ticketrequest-7770c27cd08564df50446692cf19dddf45f59a062d413b7e72fd4e9527a9e80a", "old_text": "IANA is requested to Create an entry, ticket_request(TBD), in the existing registry for ExtensionType (defined in RFC8446), with \"TLS 1.3\" column values being set to \"CH\", and \"Recommended\" column being set to \"Yes\". 5.", "comments": "Addresses and . The server hint came up in the context of QUIC, e.g., as a possible optimization to let clients know if they should expect any post handshake messages. cc NAME NAME\nLooks like this got merged without any business days to review it :) but otherwise the text looks good, approved.\nEvery day is a business day! ;-)\nFrom NAME\nFixed by .\nYou probably want to be clear that ticket_request cannot appear in HRR, given that you have a section on it.\nFixed in .\nI'm not a huge fan of sending extensions that aren't really needed, but this seems fine. I missed the QUIC discussion (or maybe it's still in my inbox). BTW, I still like the idea of hoisting tickets to the application layer, but ekr seems to think that they belong at the TLS layer. I guess we'll just have to keep disagreeing on that point :)", "new_text": "IANA is requested to Create an entry, ticket_request(TBD), in the existing registry for ExtensionType (defined in RFC8446), with \"TLS 1.3\" column values being set to \"CH, EE\", and \"Recommended\" column being set to \"Yes\". 5."} {"id": "q-en-draft-ietf-webtrans-http3-7954fb6457feccf3bb1f575e1276f7e99c130b8a8afe1f93864d18746ad0ec77", "old_text": "4.2. WebTransport clients can initiate bidirectional streams by opening an HTTP/3 bidirectional stream and sending an HTTP/3 frame with type \"WEBTRANSPORT_STREAM\" (type=0x41). The format of the frame SHALL be the frame type, followed by the session ID, encoded as a variable- length integer, followed by the user-specified stream data (fig-bidi- client). The frame SHALL last until the end of the stream. 4.3. 
WebTransport servers can initiate bidirectional streams by opening a bidirectional stream within the HTTP/3 connection. Note that since HTTP/3 does not define any semantics for server-initiated bidirectional streams, this document is a normative reference for the semantics of such streams for all HTTP/3 connections in which the SETTINGS_ENABLE_WEBTRANSPORT option is negotiated. The format of those streams SHALL be the session ID, encoded as a variable-length integer, followed by the user-specified stream data (fig-bidi- server). 4.4. Datagrams can be sent using the DATAGRAM frame as defined in QUIC- DATAGRAM and HTTP3-DATAGRAM. For all HTTP/3 connections in which the", "comments": "This matches agreement at the last interim. We should separate this into a different draft, but for now this will make the draft match implementation.\nPartially addresses . cc NAME NAME\nHttp3Transport currently assumes that server-initiated bidirectional streams and datagrams can only be used for WebTransport. I suggest we prefix each server-initiated bidirectional stream and datagram with a new HTTP/3 frame type for WebTransport. This way, a future standard could define it's own frame type to run alongside WebTransport.\nUsing an H3 frame type for server initiated bidi streams would also make the design more consistent, instead of the asymmetric design there is today. It costs a byte per bidi stream, but it seems worth it for consistency and flexibility. For datagrams, I believe that we'd want to be able to coexist with other H3 extensions, such as MASQUE, and I believe that's possible without any extra overhead by adding a Session -> Datagram flow ID mapping when a WebTransport session is created.\nIf it's a single byte, it's more of a stream type (like the unidirectional stream-type) than a frame type, right? There's a discussion Victor pointed me to in , where it's suggested that datagrams belonging to a WebTransport session use the Flow ID = to the Session ID. It would be nice to avoid the extra state and lookup.\nAh yes, it's more of a stream type. In that case, I'm unsure of whether there's a good way of eliminating the asymmetry. The only option I can think of is to not use a stream type for server bidi streams and only use the frame, like for the client initiated bidi streams? Thanks for the pointer to , it looks like there are a few potential designs for datagrams flow IDs. If in doubt, simpler is probably better. I'll think more about this one.\nI agree we should use the same mechanics for client and server bidi streams.\nAfter thinking about this more, not having a stream type doesn't seem so bad to me and it does provide symmetry. Stream types seem like an obvious design choice, but I believe HTTP/3 is sufficiently constrained that one could infer the stream type based on the first frame on the stream. For example, the control stream MUST start with SETTINGS. In retrospect, having both stream types and frame types is a bit awkward, and it likely would have been cleaner to only have frame types, but that ship has sailed.\nAfter having implemented this for hackathon, I can confirm that the lack of stream preface on server-bidi streams is annoying and requires special casing (I actually simulate receiving one). Let's please add one. I took a bit of a shortcut and treated the stream preface as both a stream type AND a frame type. After peeking at the unidirectional stream input to discover that it's WT, I let the parser run on that and treat it as a frame type, making it symmetric with the bidi case. 
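For context on the frame-versus-stream-type discussion in this record: the frame-based encoding that the new_text settles on opens a WebTransport bidirectional stream with the WEBTRANSPORT_STREAM type (0x41) followed by the session ID, both encoded as QUIC variable-length integers. The sketch below is illustrative only and is not taken from any HTTP/3 implementation; the varint encoder follows RFC 9000, Section 16.

# Illustrative sketch of the stream prefix under the frame-based design.
def quic_varint(value: int) -> bytes:
    if value < 0x40:
        return value.to_bytes(1, "big")
    if value < 0x4000:
        return (value | 0x4000).to_bytes(2, "big")
    if value < 0x40000000:
        return (value | 0x80000000).to_bytes(4, "big")
    if value < 0x4000000000000000:
        return (value | 0xC000000000000000).to_bytes(8, "big")
    raise ValueError("value does not fit in a QUIC varint")

def webtransport_stream_prefix(session_id: int) -> bytes:
    # WEBTRANSPORT_STREAM frame type, then the session ID.
    return quic_varint(0x41) + quic_varint(session_id)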
Is it possible to define a reserved frame type with the same value as the stream type?\nGiven HTTP/3 does not have a stream type varint for bidi streams, I think a potential way forward is to say that the first frame on the bidi stream can indicate the 'type' of the stream, but that there is no explicit type identifier on the wire. Feel free to ignore the below comment, as it's apparently unchanged from January Looking back, SETTINGS is required to be the first frame on the control stream, so the control stream type is a bit redundant and the same is true for QPACK and I believe could be true for server push? Obviously we're not going to redesign HTTP/3 stream and frame types at this stage in the process, but making bidi streams consistent for both perspectives seems preferable to me.\nDiscussed at IETF 110. There seems to be a general agreement that we should make server-initiated bidi streams use the same encoding as client-initiated ones (that is to say, frames). There was some interest expressed in splitting definition of server-initiated bidi streams into their own draft; NAME and NAME are to reach out to httpbis about the appropriate venue for that.\nJust bringing some of the discussion from here: HTTP should provide some consistent guidance for the format of HTTP/3 extension streams. Client-bidi extension streams have to start with a frame or series of frames, to be compatible for H3. One of these frames has to identify the type of stream. This is how WT is currently specified. Unidirectional streams all start with a Stream Type, but after that are inconsistent in H3 the control stream is: Stream Type + Frames QPACK Streams are: Stream Type + QPACK instructions Push streams are: Stream Type + Preface (Push ID) + Frames Stream Type seems like a more general extension mechanism, since it can be used to completely decouple from HTTP/3 frame parsing.\nI find stream types to be less extensible than frames: frames allow you to do anything that stream types do, while additionally providing more opportunity for extensibility. I would prefer server-initiated bidirectional streams to behave like client-initiated bidirectional streams instead of unidirectional streams, as that makes implementations simpler.\nJanuary Me said: So I think I'm arguing against myself and I'm going to stop. My other comments on this thread we that I didn't like was the asymmetry of WT uni streams, which currently are . My past self seemed to be advocating that we change WT uni streams to or perhaps . Do we want general guidance about how extensions should use Stream Type vs Frames on unidirectional streams -- specifically that there should be at least one frame?\nThe place to put that guidance would have been draft-ietf-quic-http, and I think that ship has sailed. For unidirectional streams, I like your framing proposal as that enables extension frames down the road.\nWell, not too late to put out a BCP or something, especially if there's going to be some kind of formal guidance on server-initiated bidi streams. I'll open a separate issue about changing the WT uni framing.\nI think this specific issue was fixed by . Please reopen if I'm wrong.", "new_text": "4.2. WebTransport endpoints can initiate bidirectional streams by opening an HTTP/3 bidirectional stream and sending an HTTP/3 frame with type \"WEBTRANSPORT_STREAM\" (type=0x41). The format of the frame SHALL be the frame type, followed by the session ID, encoded as a variable- length integer, followed by the user-specified stream data (fig-bidi- client). 
The frame SHALL last until the end of the stream. HTTP/3 does not by itself define any semantics for server-initiated bidirectional streams. If WebTransport setting is negotiated by both endpoints, the syntax of the server-initiated bidirectional streams SHALL be the same as the syntax of client-initated bidirectional streams, that is, a sequence of HTTP/3 frames. The only frame defined by this document for use within server-initiated bidirectional streams is WEBTRANSPORT_STREAM. TODO: move the paragraph above into a separate draft; define what happens with already existing HTTP/3 frames on server-initiated bidirectional streams. 4.3. Datagrams can be sent using the DATAGRAM frame as defined in QUIC- DATAGRAM and HTTP3-DATAGRAM. For all HTTP/3 connections in which the"} {"id": "q-en-draft-ietf-webtrans-http3-7954fb6457feccf3bb1f575e1276f7e99c130b8a8afe1f93864d18746ad0ec77", "old_text": "MTUs can vary. TODO: Describe how the path MTU can be computed, specifically propagation across HTTP proxies. 4.5. In WebTransport over HTTP/3, the client MAY send its SETTINGS frame, as well as multiple WebTransport CONNECT requests, WebTransport data", "comments": "This matches agreement at the last interim. We should separate this into a different draft, but for now this will make the draft match implementation.\nPartially addresses . cc NAME NAME\nHttp3Transport currently assumes that server-initiated bidirectional streams and datagrams can only be used for WebTransport. I suggest we prefix each server-initiated bidirectional stream and datagram with a new HTTP/3 frame type for WebTransport. This way, a future standard could define it's own frame type to run alongside WebTransport.\nUsing an H3 frame type for server initiated bidi streams would also make the design more consistent, instead of the asymmetric design there is today. It costs a byte per bidi stream, but it seems worth it for consistency and flexibility. For datagrams, I believe that we'd want to be able to coexist with other H3 extensions, such as MASQUE, and I believe that's possible without any extra overhead by adding a Session -> Datagram flow ID mapping when a WebTransport session is created.\nIf it's a single byte, it's more of a stream type (like the unidirectional stream-type) than a frame type, right? There's a discussion Victor pointed me to in , where it's suggested that datagrams belonging to a WebTransport session use the Flow ID = to the Session ID. It would be nice to avoid the extra state and lookup.\nAh yes, it's more of a stream type. In that case, I'm unsure of whether there's a good way of eliminating the asymmetry. The only option I can think of is to not use a stream type for server bidi streams and only use the frame, like for the client initiated bidi streams? Thanks for the pointer to , it looks like there are a few potential designs for datagrams flow IDs. If in doubt, simpler is probably better. I'll think more about this one.\nI agree we should use the same mechanics for client and server bidi streams.\nAfter thinking about this more, not having a stream type doesn't seem so bad to me and it does provide symmetry. Stream types seem like an obvious design choice, but I believe HTTP/3 is sufficiently constrained that one could infer the stream type based on the first frame on the stream. For example, the control stream MUST start with SETTINGS. 
In retrospect, having both stream types and frame types is a bit awkward, and it likely would have been cleaner to only have frame types, but that ship has sailed.\nAfter having implemented this for hackathon, I can confirm that the lack of stream preface on server-bidi streams is annoying and requires special casing (I actually simulate receiving one). Let's please add one. I took a bit of a shortcut and treated the stream preface as both a stream type AND a frame type. After peeking at the unidirectional stream input to discover that it's WT, I let the parser run on that and treat it as a frame type, making it symmetric with the bidi case. Is it possible to define a reserved frame type with the same value as the stream type?\nGiven HTTP/3 does not have a stream type varint for bidi streams, I think a potential way forward is to say that the first frame on the bidi stream can indicate the 'type' of the stream, but that there is no explicit type identifier on the wire. Feel free to ignore the below comment, as it's apparently unchanged from January Looking back, SETTINGS is required to be the first frame on the control stream, so the control stream type is a bit redundant and the same is true for QPACK and I believe could be true for server push? Obviously we're not going to redesign HTTP/3 stream and frame types at this stage in the process, but making bidi streams consistent for both perspectives seems preferable to me.\nDiscussed at IETF 110. There seems to be a general agreement that we should make server-initiated bidi streams use the same encoding as client-initiated ones (that is to say, frames). There was some interest expressed in splitting definition of server-initiated bidi streams into their own draft; NAME and NAME are to reach out to httpbis about the appropriate venue for that.\nJust bringing some of the discussion from here: HTTP should provide some consistent guidance for the format of HTTP/3 extension streams. Client-bidi extension streams have to start with a frame or series of frames, to be compatible for H3. One of these frames has to identify the type of stream. This is how WT is currently specified. Unidirectional streams all start with a Stream Type, but after that are inconsistent in H3 the control stream is: Stream Type + Frames QPACK Streams are: Stream Type + QPACK instructions Push streams are: Stream Type + Preface (Push ID) + Frames Stream Type seems like a more general extension mechanism, since it can be used to completely decouple from HTTP/3 frame parsing.\nI find stream types to be less extensible than frames: frames allow you to do anything that stream types do, while additionally providing more opportunity for extensibility. I would prefer server-initiated bidirectional streams to behave like client-initiated bidirectional streams instead of unidirectional streams, as that makes implementations simpler.\nJanuary Me said: So I think I'm arguing against myself and I'm going to stop. My other comments on this thread we that I didn't like was the asymmetry of WT uni streams, which currently are . My past self seemed to be advocating that we change WT uni streams to or perhaps . Do we want general guidance about how extensions should use Stream Type vs Frames on unidirectional streams -- specifically that there should be at least one frame?\nThe place to put that guidance would have been draft-ietf-quic-http, and I think that ship has sailed. 
For unidirectional streams, I like your framing proposal as that enables extension frames down the road.\nWell, not too late to put out a BCP or something, especially if there's going to be some kind of formal guidance on server-initiated bidi streams. I'll open a separate issue about changing the WT uni framing.\nI think this specific issue was fixed by . Please reopen if I'm wrong.", "new_text": "MTUs can vary. TODO: Describe how the path MTU can be computed, specifically propagation across HTTP proxies. 4.4. In WebTransport over HTTP/3, the client MAY send its SETTINGS frame, as well as multiple WebTransport CONNECT requests, WebTransport data"} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "Coding is a reliability mechanism that is distinct and separated from the loss detection of congestion controls. Using coding can be a useful way to better deal with tail losses or with networks with non- congestion losses. However, coding mechanisms could hide congestion signals to the server. This memo proposes a discussion on how coding and congestion signals could interact and proposes best current practice. 1.", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "Coding is a reliability mechanism that is distinct and separated from the loss detection of congestion controls. Using coding can be a useful way to better deal with tail losses or with networks with non- congestion losses. However, coding mechanisms should not hide congestion signals. This memo offers a discussion of how coding and congestion control interact. 1."} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "it is not an IETF product and is not a standard. There are cases where deploying coding improves the quality of the transmission. As example, the server may hardly detect tail losses that impact may impact the application layer. Another example may be the networks where non-congestion losses are persistent and prevent the server from exploiting the link capacity. RFC5681 defines TCP as a loss-based congestion control and coding mechanisms can hide congestion signals to the server. Coding is a reliability mechanism that is distinct and separated from the loss detection of congestion controls. This memo discusses", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "it is not an IETF product and is not a standard. There are cases where deploying coding improves the quality of the transmission. As an example, the server may hardly detect tail losses that impact may impact the application layer [MICHAEL: I don't understand this. Why would it not detect them? And why is detecting them the problem?]. Another example are networks where non- congestion losses are persistent and prevent a sender from exploiting the link capacity. 
RFC5681 defines TCP as a loss-based congestion control; because coding repairs such losses, blindly applying it may easily lead to an implementation that also hides a congestion signal to the sender. It is important to ensure that such information hiding does not occur. Coding is a reliability mechanism that is distinct and separated from the loss detection of congestion controls. This memo discusses"} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "consider congestion control aspects when proposing coding solutions. The proposed recommendations apply for coding at the transport or application layer and coding for tunnels is out-of-the scope of the document. 2. fig:sep-channel presents the notations that will be used in this document and introduce the Congestion Control (CC) and Forward Erasure Correction (FEC) channels. Congestion Control channel carries data packets (from the server to the client) and a potential information signaling the packets that have been received (from the client to the server). Forward Erasure Correction channel carries coded packets (from the server to the client) and a potiential information signaling the packets that have been repaired (from the client to the server). It is worth pointing out that there are cases where these channels are not separated. 3.", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "consider congestion control aspects when proposing coding solutions. The proposed recommendations apply for coding at the transport or application layer. Coding for tunnels is out of scope for the document. 2. fig:sep-channel presents the notations that will be used in this document and introduce the Congestion Control (CC) and Forward Erasure Correction (FEC) channels. The Congestion Control channel carries data packets (from a server to a client) and a potential information signaling the packets that have been received (from the client to the server). The Forward Erasure Correction channel carries coded packets (from the server to the client) and a potiential information signaling the packets that have been repaired (from the client to the server). It is worth pointing out that there are cases where these channels are not separated. Inside a host, the CC and FEC entities can be regarded as conceptually separate: As this diagram shows, the inputs to FEC (data to work upon, and signaling from the receiver about losses and/or repaired blocks) are distinct from the inputs to CC. The latter calculates a sending rate or window from network measurements, and it takes the data to send as input, sometimes along with application requirements such as upper/ lower rate bounds, periods of quiescence, or a priority. It is not clear that the ACK signals feeding into a congestion control algorithm are useful to FEC in their raw form, and vice versa - information about repaired blocks may be quite irrelevant to a CC algorithm. However, there can be meaningful other interactions (indicated by the horizontal double arrow) between the two entities, usually as a result of their operation rather than by relaying their own raw inputs. 
For example, the network measurements carried out by CC can yield a longer-term statistical measure such as a loss ratio which is useful input for a FEC coding scheme. Similarly, unequal error protection using fountain codes can be used to assign different priorities to blocks of data, and these priorities can be honored by a CC mechanism. 3."} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "3.2. The document focuses on end-to-end coding, i.e. in cases where coding is added at the server and client end points. The discussions should then consider fairness with non-coding solutions. 4. The solution can be described as follows: The client MUST indicate to the server that one or multiple packets have been repaired using a coding scheme. The \"repaired packet\" signal does not guarantee that the packet actually needed to be repaired, since it could have been delayed but not lost. The server MUST be able to detect the \"repaired packet\" signal. The base solution does not describe how the congestion control reacts to such signal. The rationale behind this solution is the following: the server has more information on the congestion status and the application characteristics. Moreover, congestion control should not be splitted between multiple entities otherwise non-optimized decisions may be taken. The proposed solution applies for coding in the transport and application layers. The proposed approach is inline with the one in I-D.swett-nwcrg-coding-for-quic. The proposed solution does not applies for the interaction between coding under the transport layer (i.e. not end-to-end), such as coding for tunnels. 5.", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "3.2. The document focuses on end-to-end coding, i.e. cases where coding is added at the server and client end points. The discussions should then consider fairness with non-coding solutions. 4. The solution can be described as follows: [MICHAEL: So far, it hasn't been clear that we're going to describe a \"solution\". A solution to which problem? I thought we're only discussing how FEC and CC relate.] The client indicates to the server that one or multiple packets have been repaired using a coding scheme. The \"repaired packet\" signal does not guarantee that the packet actually needed to be repaired, since it could have been delayed but not lost. The server must be able to detect the \"repaired packet\" signal. The base solution does not describe how the congestion control reacts to such signal. The rationale behind this solution is the following: the server has more information on the congestion status and the application characteristics. Moreover, congestion control should not be split between multiple entities, as otherwise non-optimized decisions may be taken. The proposed solution applies for coding in the transport and application layers. The proposed approach is in-line with the one in I-D.swett-nwcrg-coding-for-quic. The proposed solution does not apply for the interaction between coding under the transport layer (i.e. not end-to-end), such as coding for tunnels. 
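One concrete way to read the "loss ratio as FEC input" interaction described above is to let the congestion controller's own delivery accounting drive the amount of redundancy the coder adds. The sketch below is purely illustrative: the smoothing weight and the 1.5 safety margin are made-up parameters, not values from the memo.

import math

def update_loss_ratio(prev_ratio, lost, delivered, weight=0.1):
    # Exponentially weighted moving average of the per-round loss ratio.
    sample = lost / max(lost + delivered, 1)
    return (1 - weight) * prev_ratio + weight * sample

def repair_symbols_per_block(source_symbols, loss_ratio, margin=1.5):
    # Provision repair symbols slightly above the measured loss ratio.
    return math.ceil(source_symbols * loss_ratio * margin)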
5."} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "In this solution, the coded packets are sent on top of what is allowed by a congestion window. Examples of the solution could be adding a given pourcentage of the congestion window as supplementary packets or sending a given amount of coded packets at a given rate. The redundancy flow can be decorrelated from the congestion control that manages source packets : a secondary congestion control can be introduced, such as in coupled congestion control for RTP media I- D.ietf-rmcat-coupled-cc. An example would be to exploit a lower than best-effort congestion control RFC6297. The advantage of such solution is that coding would help in challenges cases where transmission losses are persistent. The drawback of such solution is that it may result in coding solutions being unfair towards non-coding solutions. This solutions may result in adding congestion in congested networks.", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "In this solution, the coded packets are sent on top of what is allowed by a congestion window. Examples of the solution could be adding a given percentage of the congestion window as supplementary packets or sending a given amount of coded packets at a given rate. The redundancy flow can be decorrelated from the congestion control that manages source packets: a secondary congestion control can be introduced, such as in coupled congestion control for RTP media I- D.ietf-rmcat-coupled-cc. An example would be to exploit a lower than best-effort congestion control RFC6297. The advantage of such a solution is that coding would help in challenging cases where transmission losses are persistent. The drawback of such a solution is that it may result in coding solutions being unfair towards non-coding solutions. This solutions may result in adding congestion in congested networks."} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "or preferably send coded packets instead of the following packets in the send buffer. The advantage of this solution is that it does not contribute in adding more congestion than the congestion window allows. Indeed, all traffic (source and redundancy) is controlled by one congestion control only and TCP metrics for fairness can be indifferently applied in this case. The main drawback is the decrease of goodput if coded packets are sent but are not used at the client side. 6. Delay-based congestion controls ignore packets that have been repaired with coding. There is no need to define best current pratices in this case. However, more discussions are required for congestion controls that use loss as congestion signals (potentially among other congestion detection mechanism). 6.1. In this solution, the server reacts to repaired packet signals as to congestion-implied packet losses. That being said, this does not necessarily means that the packets have actually been lost. The server may have other means to identify that the packet was just out- of-ordered and ignore the repaired packet signals. 
The advantages of the solution are (1) that coding mechanisms do not hide congestion signals, such as packets voluntary dropped by a AQM RFC7567 and (2) packets may be repaired faster than with traditionnal retransmission mechanisms. The drawback of this solution is that, if there is a high non- congestion loss rate, the congestion control throughput may decrease", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "or preferably send coded packets instead of the following packets in the send buffer. The advantage of this solution is that it does not add more traffic than the congestion window allows. Indeed, all traffic (source and redundancy) is controlled by one congestion control only and TCP metrics for fairness can be indifferently applied in this case. The main drawback is the decrease of goodput if coded packets are sent but are not used at the client side. 6. [MICHAEL: I would remove this section entirely: it only adds to confusion. If the inputs are seen as distinct, and we say that the raw inputs shouldn't be shared, then why would one even think about what delay-based CC does with this input? It's just not the right input.] Delay-based congestion controls ignore packets that have been repaired with coding. There is no need to define best current pratices in this case. However, more discussions are required for congestion controls that use loss as congestion signals (potentially among other congestion detection mechanism). 6.1. [MICHAEL: As above, lets remove all of this please. This text gets in the way of a clean separation.] In this solution, the server reacts to repaired packet signals as to congestion-implied packet losses. That being said, this does not necessarily mean that the packets have actually been lost. The server may have other means to identify that the packet was just out-of-order and ignore the repaired packet signals. The advantages of the solution are (1) that coding mechanisms do not hide congestion signals, such as packets voluntary dropped by a AQM RFC7567 and (2) packets may be repaired faster than with a traditional retransmission mechanisms. The drawback of this solution is that, if there is a high non- congestion loss rate, the congestion control throughput may decrease"} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "6.2. In this solution, the server does not reduce the congestion window with the same amount when the \"repaired packet\" signal is received, i.e. when a packet has been lost but repaired. Example of this solution could be based on RFC8511 or considering that recovering an isolated packet is not an actual sign of congestion. The advantage of the solution is that in cases where there is no actual congestion, coding could help in improving the transmission", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "6.2. 
[MICHAEL: this is even just a terrible idea. There's no good rationale to back off less when a packet is repaired.] In this solution, the server does not reduce the congestion window with the same amount when the \"repaired packet\" signal is received, i.e. when a packet has been lost but repaired. Example of this solution could be based on RFC8511 or considering that recovering an isolated packet is not an actual sign of congestion. The advantage of the solution is that in cases where there is no actual congestion, coding could help in improving the transmission"} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "6.3. This is the case for delay-based congestion controls. The interaction between delay-based congestion controls and the delay induced by a coding mechanisms is an open research activity. That being said, a potential approach would be that loss-based congestion control ignores the \"repaired packet\" signal. The advantage of this solution is that coding would provided substantial benefits in cases where there are transmission losses.", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "6.3. [MICHAEL: as above, also here, the signals are discussed as going into both blocks, leading to a strange unification of things that shouldn't be together. Let's remove this. Also, the idea of ignoring packet loss for CC is appalling anyway.] This is the case for delay-based congestion controls. The interaction between delay- based congestion controls and the delay induced by a coding mechanisms is an open research activity. That being said, a potential approach would be that loss-based congestion control ignores the \"repaired packet\" signal. The advantage of this solution is that coding would provided substantial benefits in cases where there are transmission losses."} {"id": "q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb", "old_text": "7. This section provides a summary on the content in previous sections. The fig:summary sums up some recommendations. It is worth pointing out that the \"coding without congestion\" considers that coded packets are sent along with original data packets, in opposition with the solution where coded packets are transmitted only when there is no more original packets to transmit. Moreover, the values indicated in this Figure consider a channel that does not exhibit a high loss pattern. 8.", "comments": "This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nLe 17/01/2020 \u00e0 09:28, mwelzl a \u00e9crit : Yes ! Definitely agree.\nLe 17/01/2020 \u00e0 13:50, francoismichel a \u00e9crit : OK for me.", "new_text": "7. [MICHAEL: I'm sorry to say that I find this table to be in the same style, of wrongly mixing things together, and would like to see it removed altogether.] This section provides a summary on the content in previous sections. The fig:summary sums up some recommendations. 
It is worth pointing out that the \"coding without congestion\" considers that coded packets are sent along with original data packets, in opposition with the solution where coded packets are transmitted only when there is no more original packets to transmit. Moreover, the values indicated in this Figure consider a channel that does not exhibit a high loss pattern. 8."} {"id": "q-en-draft-webtransport-http2-1bd1fc007018974e6ed616a669abcbcf65e6909665b7891c121a4b0a788da314", "old_text": "5.4. frames implicitly create a stream and carry stream data. The Type field in the frame takes the form 0b00001XXX (or the set of values from 0x08 to 0x0f) to maximize compatibility with QUIC. However, unlike QUIC, there are only one bit used to determine the fields that are present in the frame: The FIN bit (0x01) indicates that the frame marks the end of the stream. The final size of the stream is the sum of the length, in bytes, of all data previously sent on this stream and the length of this frame without the length of the Stream ID. 5.5.", "comments": "The definition was a little loose, so I rewrote it. We only need to use 0x0a and 0x0b, which is lucky and it lets us avoid much of the mess that QUIC has around bits in the type. I've also added definitions for the fields in the frame.", "new_text": "5.4. frames implicitly create a stream and carry stream data. The Type field in the frame is either 0x0a or 0x0b. This uses the same frame types as a QUIC STREAM frame with the OFF bit clear and the LEN bit set. The FIN bit (0x01) in the frame type indicates that the frame marks the end of the stream in one direction. Stream data consists of any number of 0x0a frames followed by a terminal 0x0b frame. frames contain the following fields: The stream ID for the stream. Zero or more bytes of data for the stream. Empty frames MUST NOT be used unless they open or close a stream; an endpoint MAY treat an empty frame that neither starts nor ends a stream as a session error. 5.5."} {"id": "q-en-dtls-conn-id-eed97ec2f6bc1387adc82e9592ad18272bafc07cc11512b68336d0af7748c5f9", "old_text": "ClientHello, MUST contain the ConnectionId structure. This structure contains the CID value the client wishes the server to use when sending messages to the client. A zero-length CID value indicates that the client is prepared to send with a CID but does not wish the server to use one when sending. A server willing to use CIDs will respond with a \"connection_id\" extension in the ServerHello, containing the CID it wishes the client to use when sending messages towards it. A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID. Because each party sends the value in the \"connection_id\" extension it wants to receive as a CID in encrypted records, it is possible for", "comments": "URL Francesca Palombini has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work on this document. I only have minor comments and nits below. Francesca sending messages to the client. 
A zero-length CID value indicates that the client is prepared to send with a CID but does not wish the server to use one when sending. ... to use when sending messages towards it. A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID. FP: clarification question: I am not sure the following formulation is very clear to me: \"to send with a(/the client's) CID\". Could \"send with\" be rephrased to clarify? The previous paragraph uses \"using a CID value\", that would be better IMO. the record format defined in {{dtls-ciphertext} with the new MAC FP: nit - missing \"}\" in markdown. The following MAC algorithm applies to block ciphers that use the with Encrypt-then-MAC processing described in [RFC7366]. FP: remove \"with\" Section 10.1 FP: I believe you should specify 1. what allowed values are for this column (i.e. Y or N, and what they mean) and 2. what happens to the existing entries - namely that they all get \"N\" value. Section 10.2 FP: Just checking - why is 53 \"incompatible with this document\"? Value Extension Name TLS 1.3 DTLS Only Recommended Reference FP: nit- s/DTLS Only/DTLS-Only to be consistent with 10.1", "new_text": "ClientHello, MUST contain the ConnectionId structure. This structure contains the CID value the client wishes the server to use when sending messages to the client. A zero-length CID value indicates that the client is prepared to send using a CID but does not wish the server to use one when sending. A server willing to use CIDs will respond with a \"connection_id\" extension in the ServerHello, containing the CID it wishes the client to use when sending messages towards it. A zero-length value indicates that the server will send using the client's CID but does not wish the client to include a CID when sending. Because each party sends the value in the \"connection_id\" extension it wants to receive as a CID in encrypted records, it is possible for"} {"id": "q-en-dtls-conn-id-eed97ec2f6bc1387adc82e9592ad18272bafc07cc11512b68336d0af7748c5f9", "old_text": "5.2. The following MAC algorithm applies to block ciphers that use the with Encrypt-then-MAC processing described in RFC7366. 5.3.", "comments": "URL Francesca Palombini has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work on this document. I only have minor comments and nits below. Francesca sending messages to the client. A zero-length CID value indicates that the client is prepared to send with a CID but does not wish the server to use one when sending. ... to use when sending messages towards it. A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID. FP: clarification question: I am not sure the following formulation is very clear to me: \"to send with a(/the client's) CID\". Could \"send with\" be rephrased to clarify? The previous paragraph uses \"using a CID value\", that would be better IMO. the record format defined in {{dtls-ciphertext} with the new MAC FP: nit - missing \"}\" in markdown. 
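To make the negotiation semantics discussed above easier to follow, here is a small illustrative sketch (the function name and return convention are assumptions for exposition, not part of the extension definition): each peer advertises in "connection_id" the CID it wishes to receive, and an empty value means that records sent towards that peer carry no CID.

    # Illustrative sketch (hypothetical helper): derive per-direction CID
    # behaviour from the peer's "connection_id" extension and our own offer.
    def cid_directions(local_rx_cid, peer_extension):
        """Return (cid_to_prepend_when_sending, expect_cid_on_receive)."""
        if peer_extension is None:
            # Peer did not negotiate the extension: no CIDs either way.
            return None, False
        # The peer told us which CID it wants to receive; empty means none.
        cid_to_send = peer_extension if len(peer_extension) > 0 else None
        # We receive with a CID only if we advertised a non-empty value.
        return cid_to_send, len(local_rx_cid) > 0

    # Example: client offered b"\x64" (wants CID 100 on incoming records),
    # server offered an empty CID (so the client sends without a CID).
    assert cid_directions(b"\x64", b"") == (None, True)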
The following MAC algorithm applies to block ciphers that use the with Encrypt-then-MAC processing described in [RFC7366]. FP: remove \"with\" Section 10.1 FP: I believe you should specify 1. what allowed values are for this column (i.e. Y or N, and what they mean) and 2. what happens to the existing entries - namely that they all get \"N\" value. Section 10.2 FP: Just checking - why is 53 \"incompatible with this document\"? Value Extension Name TLS 1.3 DTLS Only Recommended Reference FP: nit- s/DTLS Only/DTLS-Only to be consistent with 10.1", "new_text": "5.2. The following MAC algorithm applies to block ciphers that use the Encrypt-then-MAC processing described in RFC7366. 5.3."} {"id": "q-en-dtls-conn-id-eed97ec2f6bc1387adc82e9592ad18272bafc07cc11512b68336d0af7748c5f9", "old_text": "publication, the early allocation will be deprecated in favor of this assignment. Note: The value \"N\" in the Recommended column is set because this extension is intended only for specific use cases. This document describes the behavior of this extension for DTLS 1.2 only; it is not", "comments": "URL Francesca Palombini has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work on this document. I only have minor comments and nits below. Francesca sending messages to the client. A zero-length CID value indicates that the client is prepared to send with a CID but does not wish the server to use one when sending. ... to use when sending messages towards it. A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID. FP: clarification question: I am not sure the following formulation is very clear to me: \"to send with a(/the client's) CID\". Could \"send with\" be rephrased to clarify? The previous paragraph uses \"using a CID value\", that would be better IMO. the record format defined in {{dtls-ciphertext} with the new MAC FP: nit - missing \"}\" in markdown. The following MAC algorithm applies to block ciphers that use the with Encrypt-then-MAC processing described in [RFC7366]. FP: remove \"with\" Section 10.1 FP: I believe you should specify 1. what allowed values are for this column (i.e. Y or N, and what they mean) and 2. what happens to the existing entries - namely that they all get \"N\" value. Section 10.2 FP: Just checking - why is 53 \"incompatible with this document\"? Value Extension Name TLS 1.3 DTLS Only Recommended Reference FP: nit- s/DTLS Only/DTLS-Only to be consistent with 10.1", "new_text": "publication, the early allocation will be deprecated in favor of this assignment. A new column \"DTLS-Only\" is added to the registry. The valid entries are \"Y\" if the extension is only applicable to DTLS, \"N\" otherwise. All the pre-existing entries are given the value \"N\". Note: The value \"N\" in the Recommended column is set because this extension is intended only for specific use cases. This document describes the behavior of this extension for DTLS 1.2 only; it is not"} {"id": "q-en-dtls-conn-id-491ca7522e0b2f51c31390861190aafa04e06ed259110342a483e2d9795db4eb", "old_text": "6. 
dtls-example2 shows an example exchange where a connection id is used uni-directionally from the client to the server. 7.", "comments": "URL The example shows in flight 5 Certificate --------> ClientKeyExchange CertificateVerify [ChangeCipherSpec] Finished (cid=100) Does this imply that the flight 5 is send with cid = 100 or only the Finished? I would prefer to be able to send also the other handshake messages of the flight with a cid, that would make it harder to disturb handshakes by spoofed handshake messages.\nIn DTLS 1.2 only the Finished message is encrypted. In the current design we make use of the CIDs only once encryption is enabled. I should explain that only the Finished message contains a CID here.\nSee additional explanation in PR : URL\nI would enjoy it, if the cid could be extended into the not encrypted handshake messages also. Is there any reason, not to do so? My consideration is to make the handshake \"slightly more\" protected against spoofing. The current protection in RFC6347 is limited to URL and URL . The first applies only to the CLIENTHELLO (\"cookie\" protection again spoofed CLIENTHELLOs). The second (MAC and Anti-Replay) is only usable after a successful handshake. If an attacker spoofs other handshake messages, the attacker may block a handshake from being successful (even if the time window for that is very small and so it would be a rather expensive attack). Using a CID would enable a implementation either to handshake even when addresses are changing during the handshake or to protect the handshake slightly more (not supporting address changes but) use the CID to validate the handshake message belonging to this handshake. I would leave the chosen strategy to the implementation, enabling implementations to flexible chose the strategy depending on the current handshake load (means: supporting address changes in handshakes as normal operation, but switch dynamically to the more protective one, if a certain relative number of handshake fails.) My current work in progress: URL contains already this feature as experimental extension. It's easy to be added to a cid implementation.\nSection 4.1.2 describes an attack where an attacker injects traffic that cannot be correctly processed due to a MAC failure. The reason why MAC failures in DTLS do not lead to a connection termination (unlike in TLS) is that they are much easier to mount: an attacker only needs to know the source and destination IP and port. Section 4.2.1 talks about the cookie exchange, which has been added to prevent amplification attacks against third parties. I would argue that these two attacks are very different from what you are trying to tackle. You would like to add resiliency against on-path (?) attacks sending spoofed handshake messages. I cannot see how the CID provides that extra protection. I understand the use case of address changes during the handshake. The question is whether this is actually something real. If the IP address & port changes during the handshake then this is indeed annoying but how often does this happen?\nI mentioned 4.1.2 and 4.2.1 just to sum up the current protection mechanisms. The attack, I consider, is a off-path. e.g. using PSK, someone spoofs a CLIENTKEYEXCHANGE with the identity \"ghost\". Now it's hard for the server to differentiate the wrong from the right CLIENTKEYEXCHANGE with identity \"me\". With the CID in the handshake, spoofing into the handshake will be much harder. I don't assume, that such attacks will be easy or happen frequently. 
But assuming something as a \"theft detection\" for something valuable, it may be used to block the reporting message.\nI don't understand why we'd need to enable CID during handshake, and in particular on Finished? Any sane handshake will complete well before any NAT rebinding has a chance to mess with the path (note that in-flight handshake messages refresh the port bindings...). What kind of use case would that address? Is it connection migrations while handshaking? I'm not sure the \"increased robustness against spoofing\" argument is a very strong one: the handshake is either robust as-is or we are in trouble :-) Also, would that imply reconsidering the security analysis of the handshake? My preference instead would be to send tls12cid records only after the handshake has completed successfully, i.e. on the first application_data.\nFMPOV: protects in the long period before the handshake. With the cookie you could not only spoof CLIENTHELLOs an trigger heavy functions. protects in the long period after the handshake. With the MAC filter it's also not possible to spoof messages at that period (or even repeat them). But in the short period (few seconds) of the handshake itself, I couldn't find a straight forward method, to distinguish between the right inner handshake messages and spoofed ones. Using the CID would offer a similar protection for those inner handshake messages as the cookie for the CLIENTHELLO. I don't assume, that such attack will happen too frequently. But I could assume, if someone has a high interest, such an attack may be used to \"deny\" a handshake of a certain peer for a certain time period, e.g. \"theft detection\" for one device may be blocked for a couple of minutes. Only because the CID would offer protection against this \"for free\", if used during the handshake, I started to try to introduce that.\nLet me admit, that the attack in the inner handshake is not likely. Some implementations seems to use the 4.1.2 mechanism even in epoch 0, that may help, but introduces other risks (attack possibilities for the inner handshake). FMPOV, currently you would need too much implementation knowledge to execute such an attack. But, though in my opinion, using the CID within the handshake, offers a protection \"for free\", I think it's worth to be considered.\nSorry, I'm not sure I understand which kind of extra protection we'd get: when handshaking the CID is spoofable as much as the source address in the UDP packet.\nAnother argument for not changing the wire image of the record layer during the handshake phase is possible middleboxes interference.\nThe cookie of 4.2.1 is also spoofable, but for successful spoofing it must be the right cookie for that time period and address. The same applies for the CID. If combined with the address, then it protects, because a spoofer must know the CID. (And sure, it's visible and .. but the cookie is also visible. The point is also the time period, when this CID could be used for spoofing. And that's very short, comparable with the short valid time-period of the cookie.)\nI'm still confused, could you characterise precisely your attacker in terms of its goal and position in the network? Thanks!\nI don't want to change the wire image more, than this draft already does! With outer type and inner types nothing additional to the already defined additions is introduces, except that it is used already within the handshake. If you want, I update my wip URL and document, how to use a CID in the handshake. 
Then you see in wireshark, how it works.\nWith outer type tls12cid(25) and inner types changecipher_spec(20), alert(21), handshake(22) nothing additional to the already defined additions is introduced Yep That's the trouble. Most \"enterprise security\" middleboxes want to be able to look into the handshake to determine whether a connection is ok according to their policies. After the handshake completes and the connection is considered alright, they just act as blind packet forwarders. To do the handshake inspection they depend on the established wire image of the TLS/DTLS protocol. If we introduce the ability to change the wire format during handshake (as opposed to the post-handshake phase only) we might end up impairing clients that sit behind those boxes.\nGoal: Deny successful handshake of a selected peer. Position: off-path, but know the peer's address and the interesting time period. Something missing?\nSmells for me, that a \"list of typed cid\" (see ) may be used, to enable a peer behind such a box to inform the other, not to use the CID during the handshake. types: \"plain\" - cid only after handshake \"plain+\" - cid also during handshake\nJust for my personal interest: can you name a middlebox, which supports DTLS?\nSorry, now you triggered me: Why should those middleboxes forward my tlscid record, when my address has changed, but block my tlscid record, when it contains a handshake? If the middleboxes are unaware of this draft, but forward my tlscid record even with changed address, I would assume, they forward my tlscid record also with a inner handshake message.\nDeny successful handshake of a selected peer. Position: off-path, but know the peer's address and the interesting time period. OK, thanks very much. Frankly, the combination of perfect timing as well as the randomization provided by the source port makes the attack window pretty narrow. On the other hand, allocating state too early (i.e., allocating CIDs without strong commitment from client side) opens up an interesting amount of attack surface on server side.\nThis is a tricky question. Boxes like typically have scripting capabilities which give their admins lots of leeway. I do not build nor operate any of these so my direct experience is ~0, but speaking with the Symantec guys exactly about how to modify the DTLS wire image to signal the presence of CID in a middlebox-tolerant way, their feedback was \"to be sure, just leave the handshake alone as much as possible\"... I have no reason to doubt their assessment, also given the ways TLS 1.3 was hit in this same space.\nAbove I wrote nothing else. ??? what's that ??? Why should the use of cid in the inner handshake messages as well allocate anything additionally? The cid is provided in the CLIENTHELLO and/or SERVERHELLO, it's currently just not send in the inner handshake messages. But there is not more or less to allocate, if they are used with other handshake message or not.\nI have also no reason to doubt them, but I'm very unsure, if an address change will then work at all with such boxes. I would really prefer to test such an address change. I couldn't find any hint, that that box supports DTLS at all.\nSo the critical issue seems: with TLS there are bad experience changing the records with DTLS this experience seems to be missing. Some expect them to be similar bad. I unfortunately suspect them to be either a real disaster (address change not working at all), or not that hard. 
Without boxes to test and/or people with DTLS-boxes experience, it will be hard to find a solution. So, try to use the mailing list to get more feedback?\nSorry, I fail to understand your argument. What to do with an unknown 5-tuple vs what to do with an unknown C-T in a context where you expect to understand the semantics of the flow seem orthogonal to me.\nI don't really understand the above sentence so I know a reply is quite risky :-) Anyway, what I wanted to say is that if you want to use CID during handshake in any meaningful way, it means the party requiring it (e.g., server) needs to allocate state internally to be able to check it on receive. If this allocation is triggered only by one (or two) CH(s), it's pretty easy to mount a \"CID exhaustion\" attack against the server. Requiring the client to complete the handshake before promoting a \"2-tuple\" connection into a \"CID\" connection makes the attack much more demanding and therefore a lot less likely.\n(some overlap in the discussion :-) so a updated my comment with the question above.) Try to explain it: Which rules should a box apply to a tls_cid record? It's only valid after a successful handshake? But, how should such a record then be assigned to that successful handshake? Either by knowing the CID (which requires to be aware of this draft), or much weaker by the address, but then it will not work with address changes.\nOK, so you don't allocate something on the server side for the CID in the CLIENTHELLO until the handshake is finished? Or you send a CID in the SERVERHELLO without allocating it before the handshake is finished? May be I'm too focused on my implementation, but without an other implementation, which shows what you wrote, it's also hard for me to understand your comments.\nAt least, I think you target the CID the server side is using. How is it ensured, that the CID in the SERVER_HELLO is unique at least for all established connections and all ongoing handshakes, which are not terminated at that time point?\nIn my message above s/CID exhaustion/resource exhaustion/ sorry... And yes, you can certainly separate the act of reserving the CID when you decide to send it in the CH from creating state associated with the CID-based session lookup paraphernalia. For example you could use a big enough CID space (say 8 bytes) and reserve your CIDs by means of a monotonically increasing function. And at a later time (my proposal is after the handshake is completed successfully) you grab the memory needed for the CID based session lookup stuff...\nAnother point is that we'd put more bytes on the wire, increasing the risk of message fragmentation with small MTUs in a situation, the handshake, where the timing of the messaging is such that NAT timeouts are very unlikely and therefore the 5-tuple based demuxing more than adequate.\nI can't see a relation of \"resource exhaustion\" with the usage of the CID in the handshake.\nThat's right. In my experience, \"small MTUs\" are rarely used. The highest risk is caused by a larger CLIENTHELLO, which then gets fragmented (see \"resource exhaustion\"). Though CID makes the CLIENTHELLO larger, it's more an argument, that CID may not be support on such path. FMPOV, a \"optimal support\" for peers with a very narrow link, should be moved into a other discussion.\nThe part with having the choice to either \"protect the handshake\" or \"support address changes\" should just leave it to the implementation, which strategy is chosen. 
I would go for a configurable solution, which enables a \"dynamic protection\", if too many handshakes are ongoing.\nI had a chat with Achim and he agreed that we can leave the feature of using CIDs for handshake messages for DTLS 1.3. This will allow us to get this work done faster. DTLS 1.3 encrypts handshake messages earlier and hence already the handshake itself utilize CIDs.\nNAME Thanks for your time!\nHi Hannes, Achim, Good, thanks both for taking the time to sort this out. I think it's a very reasonable choice to not introduce scope inflation at this (late) stage. One thing we still need to do though is tighten up a bit the language around when exactly the CID is put on the wire. Something like: \"Once a non-empty CID is successfully negotiated, it MUST be sent on the XYZ message and on all subsequent records of the connection\" would remove all the uncertainty I guess. I personally have an inclination for XYZ to be the first application data, mainly because I'd like to commit any resources only after I have full confirmation that the client is a good one. However, I don't care whatever choice we end up with, as long as it's written down precisely in 2119 language.\nI'm not sure, what \"commit\" includes or means exactly. To verify the FINISH you will need the \"keying material from the crypto context (derived from the specific mastersecret)\". According URL a good estimation will be about 128 bytes. In your wording, you use them before you \"commit\" them? So adding a couple of bytes for CID to that storage, should not make a difference, or?\nI mean making a certain CID \"used\" (not just reserved) and making room for the CID in a lookup table. But, as I said, I won't be unhappy if the \"first CID record\" happens to be something different than the application data.\nFrom my side the specification is also free to chose, when the peer should start to use the tls_cid records.\nNAME I'm still not sure, if I really understand the requirements of your approach. My current feeling is, that you store all the keying and cid stuff in a \"context\" and provide a address:port-lookup-table for that stored context. One finish, you then want to store this context into a cid-lookup-table. If that's the plan, a cid usage even in the handshake doesn't matter. For epoch 0 records, just still use the address:port lookup and verify the record cid with the stored one. I personally prefer a \"relaxed definition\", but your case will not be wrong. It just doesn't support address changes during handshake. I guess, after a period of experience, the most user will select that option and use it to secure the handshake without supporting an address change. But that would be left to future experience. After the call with NAME I think it's better not to extend this draft with that idea and instead focus on DTLS 1.3. Therefore I removed the code for this extended usage from my wip and closed this issue. In the meantime, it's really hard to collect all statements over several issues. So, if you still interested in using the RFC6347 record for the finish and start with the tls12_cid records after that, I think, open a new issues (or PR) would make it easier to fetch up this idea.\nIf this feature is negotiated on, then the MAC calculation changes, even if no connection ID is sent, because of . This needs to be clearly sign-posted.\nMy preferred interpretation is: uses the original RFC6347, section 4.1 record format URL indicated by the already used content type ids. 
That includes also the unchanged MAC.\nCreated a pull request to add clarifying text: URL\nNAME it would be great, if you could explain, if you want to use the new record type with an empty CID in order to obfuscate the DTLS payload length. Please note, that currently the specification of this draft turns more into a \"strict\" usage (see issue ) which will not support the new record type with an empty CID.\nInteresting question. Yes, I would like to be able to use the new record type to pad records, but I don't think that it is an especially important consideration. If the design is easier without that, and I suspect that it is, then I am OK with not doing that. A one-octet connection ID is not that much of a burden to pay to get this feature.\nSo you would then got for adding a 1 byte cid, but still use the ip-address:port to map it to the security context (keying stuff)? Or do you consider only a few connections? For me that is just one of the scenario's, I'm really afraid of! It makes the usage of cid more indeterministic, if the spec \"ignores\" that kind of usage instead of explicitly considering it! FMPOV, it will be easier, to support then a empty CID record indicating, that still the ip-address:port must be used instead of a 1 byte cid, but then being not strict on the mapping.\nAddressed in submitted version -04\nIn Figure 2, parentheses are used to signify both extensions (connection_id=100) and the addition to the record header (cid=100). I suggest using a different notation here.\nNAME can you take a crack at fixing this\nHere is an update: URL\nUpdated diagram merged into the draft.", "new_text": "6. dtls-example2 shows an example exchange where a connection id is used uni-directionally from the client to the server. To indicate that a connection_id has zero length we use the term 'connection_id=empty'. Note: In the example exchange the CID is included in the record layer once encryption is enabled. In DTLS 1.2 only one handshake message is encrypted, namely the Finished message. Since the example shows how to use the CID for payloads sent from the client to the server only the record layer payload containing the Finished messagen contains a CID. Application data payloads sent from the client to the server contain a CID in this example as well. 7."} {"id": "q-en-dtls-rrc-2a06dc674e51c4ca96c43725e2f1282d4b62acaf31d06b5dd87302de7aca4476", "old_text": "10. IANA is requested to allocate an entry to the TLS \"ContentType\" registry, for the \"return_routability_check(TBD2)\" message defined in this document. The \"return_routability_check\" content type is only applicable to DTLS 1.2 and 1.3. IANA is requested to allocate the extension code point (TBD1) for the \"rrc\" extension to the \"TLS ExtensionType Values\" registry as described in tbl-ext. 11. Issues against this document are tracked at https://github.com/tlswg/", "comments": "This PR is on top of\nWe probably need to create a new IANA registry for RRC message types. The registry should be initialised with the 3 message types we define in the draft.\nOk.", "new_text": "10. [[to-be-removed: RFC Editor: please replace RFCthis with this RFC number and remove this note.]] 10.1. IANA is requested to allocate an entry to the TLS \"ContentType\" registry, for the \"return_routability_check(TBD2)\" message defined in this document. The \"return_routability_check\" content type is only applicable to DTLS 1.2 and 1.3. 10.2. 
IANA is requested to allocate the extension code point (TBD1) for the \"rrc\" extension to the \"TLS ExtensionType Values\" registry as described in tbl-ext. 10.3. IANA is requested to create a new sub-registry for RRC Message Types in the TLS Parameters registry IANA.tls-parameters, with the policy \"expert review\" RFC8126. Each entry in the registry must include: The initial state of this sub-registry is as follows: 11. Issues against this document are tracked at https://github.com/tlswg/"} {"id": "q-en-dtls-rrc-a4ea99957dde331ed7b19732972ce76c378698861bdf74ba361d4d9e5d956bea", "old_text": "those datagrams are cryptographically authenticated). On-path adversaries can, in general, pose a harm to connectivity. 10. [[to-be-removed: RFC Editor: please replace RFCthis with this RFC", "comments": "Signed-off-by: Thomas Fossati\nCC NAME\nWe don't currently discuss any privacy implication associated with CID reuse. We should at least: document how to ensure a new CID is used during path validation in 1.3 flag that 1.2 has no way to work around linkability because CID have the same lifetime as the session", "new_text": "those datagrams are cryptographically authenticated). On-path adversaries can, in general, pose a harm to connectivity. When using DTLS 1.3, peers SHOULD avoid using the same CID on multiple network paths, in particular when initiating connection migration or when probing a new network path, as described in path- validation, as an adversary can otherwise correlate the communication interaction across those different paths. DTLS 1.3 provides mechanisms to ensure that a new CID can always be used. In general, an endpoint should proactively send a RequestConnectionId message to ask for new CIDs as soon as the pool of spare CIDs is depleted (or goes below a threshold). Also, in case a peer might have exhausted available CIDs, a migrating endpoint could include NewConnectionId in packets sent on the new path to make sure that the subsequent path validation can use fresh CIDs. Note that DTLS 1.2 does not offer the ability to request new CIDs during the session lifetime since CIDs have the same life-span of the connection. Therefore, deployments that use DTLS in multihoming environments SHOULD refuse to use CIDs with DTLS 1.2 and switch to DTLS 1.3 if the correlation privacy threat is a concern. 10. [[to-be-removed: RFC Editor: please replace RFCthis with this RFC"} {"id": "q-en-dtls-rrc-a301d8ad3868bbd1da3d6a731144440ee49aa382def826f4ae319cffbda797db", "old_text": "active path, implementations SHOULD use T = 3xRTT. If an implementation has no way to obtain information regarding the RTT of the active path, a value of 1s SHOULD be used. Profiles for specific deployment environments - for example, constrained networks I-D.ietf-uta-tls13-iot-profile - MAY specify a", "comments": "/cc NAME Signed-off-by: Thomas Fossati\nIn this second paragraph, does this mean T=1s or RTT=1s ?\nFMPOV: (because that refers to the default timeout of DTLS 1.2 RFC6347)", "new_text": "active path, implementations SHOULD use T = 3xRTT. If an implementation has no way to obtain information regarding the RTT of the active path, T SHOULD be set to 1s. Profiles for specific deployment environments - for example, constrained networks I-D.ietf-uta-tls13-iot-profile - MAY specify a"} {"id": "q-en-dtls13-spec-1684f288f41e73a766c2f8f8cd0a0c0761b5304183ffea2bf5e136d6bea9a226", "old_text": "Handshake messages are potentially larger than any given datagram, thus creating the problem of IP fragmentation. 
Datagram transport protocols, like UDP, are more vulnerable to denial of service attacks and require a return-routability check with the help of cookies to be integrated into the handshake. A detailed discussion of countermeasures can be found in dos. 3.1.", "comments": "Attempt to address though I'm not entirely happy with this text. Having gone through the existing text looking at \"vulnerable\" and \"susceptible\", it doesn't seem in particularly bad shape, so my comment there may have been stronger than needed.\nCopied from EKR's repo Ben Kaduk wrote: \"We talk in a couple places about datagram protocols being \u201cvulnerable\u201d or \u201csusceptible\u201d to DoS attacks, which leads me to at least partially read that as meaning that the protocol\u2019s own service will be disrupted; as we know, this is not the whole story, as the reflection/amplification part can facilitate DoS attacks targeted at other services/networks. So perhaps some rewording is in order. \"\nNAME I would welcome a PR here.\nNAME did this,", "new_text": "Handshake messages are potentially larger than any given datagram, thus creating the problem of IP fragmentation. Datagram transport protocols, like UDP, are susceptible to abusive behavior effecting denial of service attacks against nonparticipants, and require a return-routability check with the help of cookies to be integrated into the handshake. A detailed discussion of countermeasures can be found in dos. 3.1."} {"id": "q-en-dtls13-spec-1684f288f41e73a766c2f8f8cd0a0c0761b5304183ffea2bf5e136d6bea9a226", "old_text": "error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports. If DTLS is being carried over a transport that is resistant to forgery (e.g., SCTP with SCTP-AUTH), then it is safer to send alerts", "comments": "Attempt to address though I'm not entirely happy with this text. Having gone through the existing text looking at \"vulnerable\" and \"susceptible\", it doesn't seem in particularly bad shape, so my comment there may have been stronger than needed.\nCopied from EKR's repo Ben Kaduk wrote: \"We talk in a couple places about datagram protocols being \u201cvulnerable\u201d or \u201csusceptible\u201d to DoS attacks, which leads me to at least partially read that as meaning that the protocol\u2019s own service will be disrupted; as we know, this is not the whole story, as the reflection/amplification part can facilitate DoS attacks targeted at other services/networks. So perhaps some rewording is in order. \"\nNAME I would welcome a PR here.\nNAME did this,", "new_text": "error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. If DTLS is being carried over a transport that is resistant to forgery (e.g., SCTP with SCTP-AUTH), then it is safer to send alerts"} {"id": "q-en-dtls13-spec-cab61e949a5ef26427246668f320dc5c039603672154c4d296f99edde163691f", "old_text": "datagram. Omitting the length field MUST only be used for the last record in a datagram. 
Implementations which send multiple records in the same datagram SHOULD omit the connection id from all but the first record; receiving implementations MUST assume that any subsequent records without connection IDs belong to the same association. Sending implementations MUST NOT mix records from multiple DTLS associations in the same datagram. If the second or later record has a connection ID which does not correspond to the same association used for previous records, the rest of the datagram MUST be discarded. When expanded, the epoch and sequence number can be combined into an unpacked RecordNumber structure, as shown below:", "comments": "As proposed on list.\nHanno Becker points out that the current text does not cryptographically protect the CID when we have coalesced packets. I created PR to add this feature, but I think instead we should simply require that all packets have CIDs once the CID is established. This seems like a net reduction in complexity. There would be two changes here: Re-emphasize the point already in the DTLS CID spec that you abort if a CID is expected and none is found. Remove the text about allowing implicit CIDs.\nNAME NAME\nThis was resolved by . Please re-open if that's not the case!", "new_text": "datagram. Omitting the length field MUST only be used for the last record in a datagram. If a connection ID is negotiated, then it MUST be contained in all datagrams. Sending implementations MUST NOT mix records from multiple DTLS associations in the same datagram. If the second or later record has a connection ID which does not correspond to the same association used for previous records, the rest of the datagram MUST be discarded. When expanded, the epoch and sequence number can be combined into an unpacked RecordNumber structure, as shown below:"} {"id": "q-en-dtls13-spec-5c131a325c54c0458a1923be5a1142d1e93dc7f55b8674d1c09c6e41df585949", "old_text": "5.7.3. DTLS 1.3 makes use of the following categories of post-handshake messages:", "comments": "NAME NAME\n(Probably want to fix the typo for \"Fixes\" in the PR title, but the content looks good to me)\nDTLS 1.3 does not really change this aspect compared to earlier DTLS versions. We could reference a number of the other TLS WG documents that attempt to reduce the size of the certificate message, such as Client Certificate URL Cached Info Certificate Compression Following the guidelines in RFC 7925 Other certificate types It may be possible to use the fragmentation mechanism to send one certificate after the other in the certificate chain. Is this what you had in mind?", "new_text": "5.7.3. DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension RFC7924 and certificate compression RFC8879. 5.7.4. DTLS 1.3 makes use of the following categories of post-handshake messages:"} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". 
Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. TLS cannot be used directly in datagram environments for the following five reasons:", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. 
Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. 
Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications, such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. 
The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. Note that while low-latency streaming and gaming use DTLS to protect data (e.g. for protection of a WebRTC data channel), telephony utilizes DTLS for key establishment, and Secure Real-time Transport Protocol (SRTP) for protection of data RFC5763. TLS cannot be used directly in datagram environments for the following five reasons:"} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "thus avoiding concerns about interleaving multiple HelloRetryRequests. 3.2. In DTLS, each handshake message is assigned a specific sequence", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. 
Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. 
for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "thus avoiding concerns about interleaving multiple HelloRetryRequests. For more detail on timeouts and retransmission, see timeout- retransmissions. 3.2. 
In DTLS, each handshake message is assigned a specific sequence"} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. Multiple DTLS records MAY be placed in a single datagram. Records are encoded consecutively. The length field from DTLS records", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. 
[BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. 
The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. For more information about PMTU issues see pmtu- issues. Multiple DTLS records MAY be placed in a single datagram. Records are encoded consecutively. The length field from DTLS records"} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "probes the implementation to see how it responds to various types of error. 
Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. If DTLS is being carried over a transport that is resistant to forgery (e.g., SCTP with SCTP-AUTH), then it is safer to send alerts", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. 
[BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. 
The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, the practice of generating fatal alerts is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. 
If DTLS is being carried over a transport that is resistant to forgery (e.g., SCTP with SCTP-AUTH), then it is safer to send alerts"} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "5.8.2. Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 RFC6298) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted,", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. 
Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. 
Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. 
For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "5.8.2. The configuration of timer settings varies with implementations and certain deployment environments require timer value adjustments. Mishandling of the timer can lead to serious congestion problems, for example if many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 RFC6298) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time-sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted,"} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension RFC7924 and certificate compression RFC8879. 5.8.4.", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. 
Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. 
[BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? 
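Before the Section 7.3 comments below, a small, hypothetical sketch of the flight-pacing strategy mentioned under "Large Flight Sizes" earlier in this review: send only part of a large flight and release more records as ACKs arrive. The record and transmit interfaces here are assumptions made for illustration, not part of any DTLS API.

```python
# Hypothetical sketch of pacing a large DTLS flight by acknowledgements, as
# suggested in the "Large Flight Sizes" comment earlier in this review.
# The record/ACK interface is illustrative, not a real DTLS API.

MAX_UNACKED_RECORDS = 4  # illustrative cap on records in flight at once

class PacedFlight:
    def __init__(self, records, transmit):
        self.pending = list(records)   # records of the flight not yet sent
        self.unacked = {}              # record number -> record awaiting an ACK
        self.transmit = transmit       # callable that puts one record on the wire

    def send_more(self):
        """Send records until the illustrative in-flight cap is reached."""
        while self.pending and len(self.unacked) < MAX_UNACKED_RECORDS:
            record = self.pending.pop(0)
            self.transmit(record)
            self.unacked[record.number] = record

    def on_ack(self, acked_record_numbers):
        """Drop acknowledged records and release more of the flight."""
        for number in acked_record_numbers:
            self.unacked.pop(number, None)
        self.send_more()
```

The cap of four in-flight records is purely illustrative; how much of the flight to send at once is exactly the open question raised in that comment.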
Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. Several extensions have been standardized to reduce the size of the certificate message, for example the cached information extension RFC7924, certificate compression RFC8879 and RFC6066, which defines the \"client_certificate_url\" extension allowing DTLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. 5.8.4."} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId message if an earlier message of the same type has not yet been acknowledged.", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. 
Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. 
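The draft text quoted above leaves open how much of a large flight to release before waiting for acknowledgements (the reviewer's question in this passage). The Python sketch below shows one simple heuristic, not anything the draft mandates: keep a small fixed number of records outstanding and let incoming record ACKs clock out the remainder of the flight. The class name, the window size of 3, and the helper methods are illustrative assumptions.

from collections import deque

class PacedFlight:
    """Release at most max_in_flight unacknowledged records at a time."""

    def __init__(self, records, max_in_flight=3):
        self.pending = deque(records)
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def next_to_send(self):
        """Return the records that may be sent right now."""
        out = []
        while self.pending and self.in_flight < self.max_in_flight:
            out.append(self.pending.popleft())
            self.in_flight += 1
        return out

    def on_ack(self, n=1):
        """n records were acknowledged; more of the flight may now be released."""
        self.in_flight = max(0, self.in_flight - n)

flight = [("cert-fragment-%d" % i).encode() for i in range(7)]
p = PacedFlight(flight)
assert len(p.next_to_send()) == 3   # initial burst is capped
p.on_ack(2)
assert len(p.next_to_send()) == 2   # ACKs clock out the rest of the flight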
[BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? 
Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have sent KeyUpdate, NewConnectionId or RequestConnectionId message if an earlier message of the same type has not yet been acknowledged."} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. ack-msg provides the necessary background details for this interaction.", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? 
Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? 
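As a concrete reading of the timer recommendation quoted above (100 ms initial value, doubling on each retransmission up to the RFC 6298 maximum of 60 seconds, and resetting only once a flight is acknowledged without having been retransmitted), here is a small self-contained Python sketch; the class and method names are invented for illustration and are not from the draft.

INITIAL_TIMEOUT = 0.1   # 100 ms, the minimum recommended above (RFC 6298 minimum)
MAX_TIMEOUT = 60.0      # the RFC 6298 maximum

class RetransmitTimer:
    """Exponential back-off for handshake flights, reset only after a clean exchange."""

    def __init__(self):
        self.timeout = INITIAL_TIMEOUT
        self.needed_retransmit = False

    def on_timeout(self):
        """The flight timed out: retransmit it and double the timer, capped at 60 s."""
        self.needed_retransmit = True
        self.timeout = min(self.timeout * 2, MAX_TIMEOUT)
        return self.timeout

    def on_flight_acknowledged(self):
        """Keep the backed-off value unless this flight got through without retransmission."""
        if not self.needed_retransmit:
            self.timeout = INITIAL_TIMEOUT
        self.needed_retransmit = False

t = RetransmitTimer()
assert t.on_timeout() == 0.2 and t.on_timeout() == 0.4
t.on_flight_acknowledged()   # this flight needed retransmission: keep the 0.4 s value
assert t.timeout == 0.4
t.on_flight_acknowledged()   # a later flight succeeded on the first try: reset
assert t.timeout == INITIAL_TIMEOUT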
Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? 
The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt or decrypt messages found in Record 1. ack-msg provides the necessary background details for this interaction."} {"id": "q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814", "old_text": "peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message when it received record 1 indicating loss of record 0, the entire flight would be retransmitted. When DTLS 1.3 is used in deployments with lossy networks, such as low-power, long range radio networks as well as low-power mesh networks, the use of ACKs is recommended. The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. 8.", "comments": "This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. 
[BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? 
ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. 
For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in", "new_text": "peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in dtls-key-update if the client does not send the ACK message when it received record 1 indicating loss of record 0, the entire flight would be retransmitted. When DTLS 1.3 is used in deployments with lossy networks, such as low-power, long range radio networks as well as low-power mesh networks, the use of ACKs is recommended. The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its maximum retransmission timemout value is reached. 8."} {"id": "q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a", "old_text": "sequence number fields in the DTLSCiphertext structure have been reduced from those in previous versions. The DTLSCiphertext structure has a variable length header. DTLSPlaintext records are used to send unprotected records and", "comments": "Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. 
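To make the figures in this thread easy to check, here is a tiny Python calculation reproducing the commenter's arithmetic. The 2^24.5 AES-GCM per-key figure and the 2^16 epoch count are the numbers quoted in the issue, not values derived here: the per-epoch record count is capped by the AEAD usage limit or by the 48-bit sequence number, whichever is smaller, and the epoch field bounds how many times rekeying can extend that.

import math

EPOCHS = 2 ** 16      # 16-bit epoch field, as the issue assumes
SEQ_LIMIT = 2 ** 48   # 48-bit per-epoch record sequence number

def total_records_log2(per_key_limit):
    """log2 of the records one association can protect before running out of epochs."""
    per_epoch = min(per_key_limit, SEQ_LIMIT)
    return math.log2(per_epoch * EPOCHS)

print(total_records_log2(2 ** 24.5))   # AES-GCM figure quoted in the issue -> 40.5
print(total_records_log2(2 ** 48))     # ChaCha20-Poly1305, sequence-number bound -> 64.0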
I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael T\u00fcxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 
5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change.", "new_text": "sequence number fields in the DTLSCiphertext structure have been reduced from those in previous versions. The DTLS epoch serialized in DTLSPlaintext is 2 octets long for compatibility with DTLS 1.2. However, this value is set as the least significant 2 octets of the connection epoch, which is an 8 octet counter incremented on every KeyUpdate. See seq-and-epoch for details. The sequence number is set to be the low order 48 bits of the 64 bit sequence number. Plaintext records MUST NOT be sent with sequence numbers that would exceed 2^48-1, so the upper 16 bits will always be 0. The DTLSCiphertext structure has a variable length header. DTLSPlaintext records are used to send unprotected records and"} {"id": "q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a", "old_text": "compatibility purposes. It MUST be ignored for all purposes. See TLS13; Appendix D.1 for the rationale for this. The unified header (unified_hdr) is a structure of variable length, as shown in cid_hdr.", "comments": "Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. 
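The new text in the record above keeps the 2-octet epoch field on the wire but defines it as the least significant 2 octets of a wider connection epoch counter. A minimal Python sketch of that serialization follows; the function name and framing are illustrative, not the draft's.

import struct

def plaintext_epoch_field(connection_epoch):
    """Low-order 2 octets of the (wider) connection epoch counter, network byte order."""
    return struct.pack("!H", connection_epoch & 0xFFFF)

assert plaintext_epoch_field(3) == b"\x00\x03"
assert plaintext_epoch_field(0x10002) == b"\x00\x02"   # only the low 16 bits go on the wire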
I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael T\u00fcxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 
5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change.", "new_text": "compatibility purposes. It MUST be ignored for all purposes. See TLS13; Appendix D.1 for the rationale for this. The least significant 2 bytes of the connection epoch value. The unified header (unified_hdr) is a structure of variable length, as shown in cid_hdr."} {"id": "q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a", "old_text": "When expanded, the epoch and sequence number can be combined into an unpacked RecordNumber structure, as shown below: This 64-bit value is used in the ACK message as well as in the \"record_sequence_number\" input to the AEAD function. The entire header value shown in hdr_examples (but prior to record number encryption, see rne) is used as as the additional data value for the AEAD function. For instance, if the minimal variant is used, the AAD is 2 octets long. Note that this design is different from the additional data calculation for DTLS 1.2 and for DTLS 1.2 with Connection ID. 4.1.", "comments": "Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. 
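The RecordNumber structure discussed in these diffs pairs an epoch with a sequence number; with the widened epoch, the acknowledged record number becomes a 128-bit value made of two 64-bit fields, as the updated text a little further on states. Below is a rough Python sketch of encoding and decoding such a value, assuming big-endian 64-bit fields in the order epoch then sequence number; treat the layout as an assumption for illustration rather than wire-format reference code.

import struct
from dataclasses import dataclass

@dataclass
class RecordNumber:
    epoch: int
    sequence_number: int

    def encode(self):
        return struct.pack("!QQ", self.epoch, self.sequence_number)

    @classmethod
    def decode(cls, data):
        epoch, seq = struct.unpack("!QQ", data)
        return cls(epoch, seq)

rn = RecordNumber(epoch=4, sequence_number=17)
assert len(rn.encode()) == 16            # 128 bits in total
assert RecordNumber.decode(rn.encode()) == rn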
I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael T\u00fcxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 
5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change.", "new_text": "When expanded, the epoch and sequence number can be combined into an unpacked RecordNumber structure, as shown below: This 128-bit value is used in the ACK message. The entire header value shown in hdr_examples (but prior to record number encryption, see rne) is used as as the additional data value for the AEAD function. For instance, if the minimal variant is used, the AAD is 2 octets long. Note that this design is different from the additional data calculation for DTLS 1.2 and for DTLS 1.2 with Connection ID. In DTLS 1.3 the 64-bit sequence_number is used as the sequence number for the AEAD computation; unlike DTLS 1.2, the epoch is not included. 4.1."} {"id": "q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a", "old_text": "Implementations MUST either abandon an association or re-key prior to allowing the sequence number to wrap. Implementations MUST NOT allow the epoch to wrap, but instead MUST establish a new association, terminating the old association. 4.2.2. When receiving protected DTLS records, the recipient does not have a full epoch or sequence number value in the record and so there is some opportunity for ambiguity. Because the full epoch and sequence number are used to compute the per-record nonce, failure to reconstruct these values leads to failure to deprotect the record, and so implementations MAY use a mechanism of their choice to determine the full values. This section provides an algorithm which is comparatively simple and which implementations are RECOMMENDED to follow. If the epoch bits match those of the current epoch, then implementations SHOULD reconstruct the sequence number by computing", "comments": "Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. 
I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael T\u00fcxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. 
We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change.", "new_text": "Implementations MUST either abandon an association or re-key prior to allowing the sequence number to wrap. 4.2.2. When receiving protected DTLS records, the recipient does not have a full epoch or sequence number value in the record and so there is some opportunity for ambiguity. Because the full sequence number is used to compute the per-record nonce and the epoch determines the keys, failure to reconstruct these values leads to failure to deprotect the record, and so implementations MAY use a mechanism of their choice to determine the full values. This section provides an algorithm which is comparatively simple and which implementations are RECOMMENDED to follow. If the epoch bits match those of the current epoch, then implementations SHOULD reconstruct the sequence number by computing"} {"id": "q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a", "old_text": "include handshake messages, such as post-handshake messages (e.g., a NewSessionTicket message). epoch value (4 to 2^16-1) is used for payloads protected using keys from the [sender]_application_traffic_secret_N (N>0). Using these reserved epoch values a receiver knows what cipher state", "comments": "Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? 
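The updated text on reconstructing the full sequence number is cut off in this excerpt before the formula, so the Python sketch below assumes the usual approach (the same idea QUIC uses for packet-number decoding): among the values whose low-order bits match the record, pick the one closest to one plus the highest sequence number deprotected so far. Treat it as an illustration of the idea, not as the draft's normative algorithm.

def reconstruct_sequence_number(truncated, bits, highest_deprotected):
    """Expand the low-order `bits` of a sequence number to a full value."""
    expected = highest_deprotected + 1
    window = 1 << bits
    half = window // 2
    candidate = (expected & ~(window - 1)) | truncated
    if candidate <= expected - half and candidate + window < (1 << 64):
        candidate += window
    elif candidate > expected + half and candidate >= window:
        candidate -= window
    return candidate

# 8-bit truncation shown for readability; DTLS 1.3 carries 8 or 16 bits on the wire.
assert reconstruct_sequence_number(0x02, 8, highest_deprotected=0x0FF) == 0x102
assert reconstruct_sequence_number(0xFE, 8, highest_deprotected=0x100) == 0x0FE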
NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael T\u00fcxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. 
We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change.", "new_text": "include handshake messages, such as post-handshake messages (e.g., a NewSessionTicket message). epoch value (4 to 2^64-1) is used for payloads protected using keys from the [sender]_application_traffic_secret_N (N>0). Using these reserved epoch values a receiver knows what cipher state"} {"id": "q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a", "old_text": "successful ACK processing updates the keys of the KeyUpdate message sender, which is reflected in the change of epoch values. 9. If the client and server have negotiated the \"connection_id\"", "comments": "Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. 
most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael T\u00fcxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. 
We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change.", "new_text": "successful ACK processing updates the keys of the KeyUpdate message sender, which is reflected in the change of epoch values. With a 128-bit key as in AES-256, rekeying 2^64 times has a high probability of key reuse within a given connection. Note that even if the key repeats, the IV is also independently generated. In order to provide an extra margin of security, sending implementations MUST NOT allow the epoch to exceed 2^48-1. In order to allow this value to be changed later, receiving implementations MUST NOT enforce this rule. If a sending implementation receives a KeyUpdate with request_update set to \"update_requested\", it MUST NOT send its own KeyUpdate if that would cause it to exceed these limits. 9. If the client and server have negotiated the \"connection_id\""} {"id": "q-en-dtls13-spec-fb3ad54c2ea0d8ebe209664b60ddb087c776aaa8df4785251961477d07f9f78a", "old_text": "the \"TLS Alerts\" registry with value 52. IANA is requested to allocate two values in the \"TLS Handshake Type\" registry, defined in TLS13, for RequestConnectionId (TBD), and NewConnectionId (TBD), as defined in this document. The value for the \"DTLS-OK\" columns are \"Y\". IANA is requested to add this RFC as a reference to the TLS Cipher", "comments": "As far as referring to the snake case in the main body, I went and looked at RFC 8446 and it does not refer to the snake case. I tend to think that if we make the change in 5.2 and A.2, that should be enough breadcrumbs for people to figure it out. I put them in the same order as they are in the registry and moved up newsessionticket. 
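The replacement text earlier in this record caps the sender at epoch 2^48-1 and forbids answering a KeyUpdate with request_update set to "update_requested" if doing so would exceed that cap, while receivers deliberately do not enforce the limit. A toy sender-side guard along those lines; the class and method names are invented purely for illustration:

MAX_SENDER_EPOCH = (1 << 48) - 1   # sender-side cap from the text above

class SenderKeySchedule:
    def __init__(self):
        self.epoch = 3             # first application-data epoch

    def can_rekey(self):
        return self.epoch < MAX_SENDER_EPOCH

    def send_key_update(self, request_update=False):
        if not self.can_rekey():
            raise RuntimeError("epoch limit reached; re-key is no longer allowed")
        self.epoch += 1            # new traffic keys imply the next epoch
        # ... derive the next-generation traffic secrets here ...

    def on_peer_key_update(self, request_update):
        # The peer asked us to update as well, but only if the cap allows it.
        if request_update and self.can_rekey():
            self.send_key_update()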
Triple check this change please.\nThis I-D registers two TLS HandshakeTypes. All of the other fields in the registry use snake_case.", "new_text": "the \"TLS Alerts\" registry with value 52. IANA is requested to allocate two values in the \"TLS Handshake Type\" registry, defined in TLS13, for request_connection_id (TBD), and new_connection_id (TBD), as defined in this document. The value for the \"DTLS-OK\" columns are \"Y\". IANA is requested to add this RFC as a reference to the TLS Cipher"} {"id": "q-en-dtls13-spec-4381dad45da18f45cf8010565e93228ee58bd31a88f549e5b75aa743f214b692", "old_text": "the connection establishment. With these exceptions, the DTLS message formats, flows, and logic are the same as those of TLS 1.3. 5.1.", "comments": "Right now, my NSS patch for compatibility mode in TLS also includes changes to DTLS. Those changes include the ServerHello format changes, but NSS will ignore the session_id and never send a ChangeCipherSpec from either client or server.\nNote that we probably still want to ignore CCS in DTLS - DTLS discards things that it doesn't want to see - this is about generating CCS.", "new_text": "the connection establishment. With these exceptions, the DTLS message formats, flows, and logic are the same as those of TLS 1.3. DTLS implementations SHOULD not use the TLS 1.3 \"compatibility mode\" described in I-D.ietf-tls-tls13, Section D.4 and DTLS servers SHOULD ignore the \"session_id\" value generated by the client rather than sending ChangeCipherSpec messages. Implementations MUST still ignore ChangeCipherSpec messages received during the handshake and at all other times SHOULD treat them as if they had failed to deprotect. 5.1."} {"id": "q-en-dtls13-spec-b7b0f024b7a9cc4a0aaf6d8ae7b9d4eaeb96147a5fe4697c9700bb356b17087c", "old_text": "Same as for TLS 1.3. A DTLS 1.3-only client MUST set the legacy_cookie field to zero length. Same as for TLS 1.3.", "comments": "Addressing issue raised by Martin at URL\nShould the server reject the ClientHello if it appears as though the client is responding to a HelloVerifyRequest? What if it also contains a cookie extension?\nI don't think the server should reject a ClientHello if it thinks the Client should provide a cookie previously sent with the HelloVerifyRequest. This would require the server to keep state and to implement more complex logic in the server-side. If the ClientHello contains a cookie in the ClientHello.legacycookie field in the DTLS 1.3 field then IMHO an \"illegalparameter\" alert has to be sent. I created a PR to add a note about this: URL There is, however, the question why whether we should omit some of the legacy fields for use with DTLS 1.3 (not in the ClientHello because it would be needed for backwards compatibility) but for the ServerHello/HelloRetryRequest.\nFixed", "new_text": "Same as for TLS 1.3. A DTLS 1.3-only client MUST set the legacy_cookie field to zero length. If a DTLS 1.3 ClientHello is received with any other value in this field, the server MUST abort the handshake with an \"illegal_parameter\" alert. Same as for TLS 1.3."} {"id": "q-en-dynlink-c60cb4d22b3d0fc0e790f4866d7f825a53771acd559666e4d3c74c3761e5a0d3", "old_text": "4.1.3. When the Push method is assigned to a binding, the source endpoint sends PUT requests to the destination resource when the Conditional Notification Attributes are satisfied for the source resource. The source endpoint SHOULD only send a notification request if any included Conditional Notification Attributes are met. 
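Two of the records above adjust how a DTLS 1.3 server treats the TLS 1.2-era ClientHello fields: it should ignore the legacy session ID instead of using compatibility mode and ChangeCipherSpec, but it must abort with an illegal_parameter alert if legacy_cookie is non-empty. A schematic check, with a made-up parsed ClientHello type used only for illustration:

from dataclasses import dataclass

@dataclass
class ClientHello:                  # hypothetical parsed message, not a real library type
    legacy_session_id: bytes = b""
    legacy_cookie: bytes = b""

def check_legacy_fields(hello: ClientHello):
    if hello.legacy_cookie:
        # DTLS 1.3: legacy_cookie must be zero length, otherwise abort the
        # handshake with a fatal illegal_parameter alert.
        raise ValueError("abort handshake: illegal_parameter (non-empty legacy_cookie)")
    # Compatibility mode is not used: the legacy_session_id is simply ignored
    # and no ChangeCipherSpec messages are scheduled because of it.
    return None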
The binding entry for this method MUST be stored on the source endpoint. 4.2.", "comments": "NAME NAME I've created a pull request to incorporate the exec binding method in the draft. This brings it in line with and also now has a reference to the draft itself. It would be great if you can quickly review the proposed text and see if you are ok with it\nNow merged, after review comments received from NAME to highlight differences between the Push and Exec binding methods", "new_text": "4.1.3. The Push method can be used to allow a source endpoint to replace an outdated resource state at the destination with a newer representation. When the Push method is assigned to a binding, the source endpoint sends PUT requests to the destination resource when the Conditional Notification Attributes are satisfied for the source resource. The source endpoint SHOULD only send a notification request if any included Conditional Notification Attributes are met. The binding entry for this method MUST be stored on the source endpoint. 4.1.4. An alternative means for a source endpoint to deliver change-of-state notifications to a destination resource is to use the Execute Method. While the Push method simply updates the state of the destination resource with the representation of the source resource, Execute can be used when the destination endpoint wishes to receive all state changes from a source. This allows, for example, the existence of a resource collection consisting of all the state changes at the destination endpoint. When the Execute method is assigned to a binding, the source endpoint sends POST requests to the destination resource when the Conditional Notification Attributes are satisfied for the source resource. The source endpoint SHOULD only send a notification request if any included Conditional Notification Attributes are met. The binding entry for this method MUST be stored on the source endpoint. Note: Both the Push and the Execute methods are examples of Server Push mechanisms that are being researched in the Thing-to-Thing Research Group (T2TRG) I-D.irtf-t2trg-rest-iot. 4.2."} {"id": "q-en-dynlink-c60cb4d22b3d0fc0e790f4866d7f825a53771acd559666e4d3c74c3761e5a0d3", "old_text": "9. draft-ietf-core-dynlink-09 Corrections in Table 1, Table 2, Figure 2.", "comments": "NAME NAME I've created a pull request to incorporate the exec binding method in the draft. This brings it in line with and also now has a reference to the draft itself. It would be great if you can quickly review the proposed text and see if you are ok with it\nNow merged, after review comments received from NAME to highlight differences between the Push and Exec binding methods", "new_text": "9. draft-ietf-core-dynlink-10 Binding methods now support both POST and PUT operations for server push. draft-ietf-core-dynlink-09 Corrections in Table 1, Table 2, Figure 2."} {"id": "q-en-dynlink-2019adef54feecc9e8a7945d5fc6048498c2168bbc36bc538acd322d853837d3", "old_text": "seconds, between two consecutive notifications (whether or not the resource value has changed). In the absence of this parameter, the maximum period is up to the server. The maximum period MUST be greater than zero and MUST be greater than the minimum period parameter (if present) otherwise the receiver MUST return a CoAP error code 4.00 \"Bad Request\" (or equivalent). 3.1.3.", "comments": "Signed-off-by: David Navarro For the reasons stated in .\nHi, OMA LwM2M is using Minimum Period (pmin) and Maximum Period (pmax) attributes. 
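The revised binding text in the record above distinguishes Push (a PUT that replaces the destination state with the current representation) from Execute (a POST delivering every state change, for example into a collection), with both gated on the conditional notification attributes. A small sketch of that dispatch; the binding dictionary and the send_coap() transport hook are placeholders rather than the API of any particular CoAP stack:

def notify(binding, representation, conditions_met, send_coap):
    # Only send a notification when the conditional notification attributes
    # for the source resource are satisfied.
    if not conditions_met:
        return
    if binding["method"] == "push":
        # Push: replace the destination resource state with the representation.
        send_coap("PUT", binding["destination"], representation)
    elif binding["method"] == "execute":
        # Execute: deliver each state change to the destination resource.
        send_coap("POST", binding["destination"], representation)

# Example with a stand-in transport function:
notify({"method": "push", "destination": "coap://sink/temp"},
       b"22.5", conditions_met=True,
       send_coap=lambda m, uri, body: print(m, uri, body))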
The DynLink draft has this requirement: In its version 1.0, LwM2M had a different version of this requirement in the form: Thus having pmax equal to pmin was allowed. In LwM2M 1.1 we aligned this requirement with the DynLink one, making pmin equal to pmax forbidden. However, it was that having pmin equal pmax is a valid use case if the client wants the notification to be sent exactly every N seconds. Is there a strong opinion on forbidding pmax to be equal to pmin, or are people open to changing the Dynlink requirement ? Regards\nNAME After the discussion at IETF 109, I think we can agree that there are no strong opinions against allowing pmax to be greater than or equal to pmin in the draft. However in order to avoid any ambiguity in interpretation, do we want reflect the discussion so far, so that the draft has pmax == pmin, specifically for this use case, ie when this condition is met, then the CoAP server sends (is it a MUST/SHOULD/MAY?) the resource representation every N seconds? Also how does the client then cancel this request?\nClosing this issue, with the draft allowing pmax to have equality to pmin", "new_text": "seconds, between two consecutive notifications (whether or not the resource value has changed). In the absence of this parameter, the maximum period is up to the server. The maximum period MUST be greater than zero and MUST be greater than, or equal to, the minimum period parameter (if present) otherwise the receiver MUST return a CoAP error code 4.00 \"Bad Request\" (or equivalent). 3.1.3."} {"id": "q-en-echo-request-tag-64f273f72342570165cba971b6da6ecb1f205c517364756dcb80064deabad9d3", "old_text": "the request does not contain a fresh Echo option value, and the server cannot verify the freshness of the request in some other way, the server MUST NOT process the request further and SHOULD send a 4.01 Unauthorized response with an Echo option. The application decides under what conditions a CoAP request to a resource is required to be fresh. These conditions can for example", "comments": "\"The server MAY include the same Echo option value in several different responses and to different clients.\" Clarification based on Jim's review.", "new_text": "the request does not contain a fresh Echo option value, and the server cannot verify the freshness of the request in some other way, the server MUST NOT process the request further and SHOULD send a 4.01 Unauthorized response with an Echo option. The server MAY include the same Echo option value in several different responses and to different clients. The application decides under what conditions a CoAP request to a resource is required to be fresh. These conditions can for example"} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "Cryptographically, EDHOC does not put requirements on the lower layers. EDHOC is not bound to a particular transport layer, and can be used in environments without IP. The transport is responsible to handle message loss, reordering, message duplication, fragmentation, and denial of service protection, where necessary. The Initiator and the Responder need to have agreed on a transport to be used for EDHOC, see applicability. It is recommended to transport", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. 
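The revised pmin/pmax text above relaxes the rule so that the maximum period only has to be greater than zero and greater than or equal to the minimum period, with equality covering the "notify exactly every N seconds" use case. A trivial receiver-side check mirroring that rule; the function and parameter names are only illustrative:

def validate_periods(pmin=None, pmax=None):
    # Returns None if the attributes are acceptable, otherwise the error to
    # report to the client (CoAP 4.00 Bad Request or equivalent).
    if pmax is not None:
        if pmax <= 0:
            return "4.00 Bad Request: pmax must be greater than zero"
        if pmin is not None and pmax < pmin:
            return "4.00 Bad Request: pmax must be greater than or equal to pmin"
    return None

assert validate_periods(pmin=10, pmax=10) is None    # equality is now allowed
assert validate_periods(pmin=10, pmax=5) is not None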
This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. 
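The direction of this thread, later reflected in the replacement texts below, is to soften the hard MUST: on a failed processing step the Responder normally sends an EDHOC error and discontinues the session, but may skip the error, for example when the message cannot be matched to a session or while under denial-of-service load. A rough sketch of such a policy under those assumptions; the session and transport objects are placeholders, not part of any EDHOC implementation:

def on_processing_failure(session, transport, error_msg, under_dos_load=False):
    # Sending an error is valuable for debugging, but is optional when the
    # message cannot be attributed to a session or when replying would only
    # amplify a denial-of-service attack.
    if session is not None and not under_dos_load:
        transport.send(error_msg)
    if session is not None:
        session.discontinue()      # a failed processing step ends this EDHOC session

class _StubTransport:
    def send(self, msg): print("error sent:", msg)

class _StubSession:
    def discontinue(self): print("session discontinued")

on_processing_failure(_StubSession(), _StubTransport(), b"\x01")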
Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. Closing", "new_text": "Cryptographically, EDHOC does not put requirements on the lower layers. EDHOC is not bound to a particular transport layer, and can be used in environments without IP. The application using EDHOC is responsible to handle message loss, reordering, message duplication, fragmentation, demultiplex EDHOC messages from other types of messages, and denial of service protection, where necessary. The Initiator and the Responder need to have agreed on a transport to be used for EDHOC, see applicability. It is recommended to transport"} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "Pass AD_1 to the security application. If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in error, and the protocol MUST be discontinued. 5.4.", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. 
NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. 
Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. Closing", "new_text": "Pass AD_1 to the security application. If any processing step fails, the Responder SHOULD send an EDHOC error message back, formatted as defined in error, and the session MUST be discontinued. Sending error messages is essential for debugging but MAY e.g. be skipped due to denial of service reasons, see security. 5.4."} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "Pass AD_2 to the security application. If any verification step fails, the Initiator MUST send an EDHOC error message back, formatted as defined in error, and the protocol MUST be discontinued. 5.5.", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. 
receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. Closing", "new_text": "Pass AD_2 to the security application. If any processing step fails, the Initiator SHOULD send an EDHOC error message back, formatted as defined in error. 
Sending error messages is essential for debugging but MAY e.g.be skipped if a session cannot be found or due to denial of service reasons, see security. If an error message is sent, the session MUST be discontinued. 5.5."} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "security application. The application can now derive application keys using the EDHOC-Exporter interface. If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in error, and the protocol MUST be discontinued. After verifying message_3, the Responder is assured that the Initiator has calculated the key PRK_4x3m (explicit key confirmation)", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. 
Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. Closing", "new_text": "security application. The application can now derive application keys using the EDHOC-Exporter interface. If any processing step fails, the Responder SHOULD send an EDHOC error message back, formatted as defined in error. Sending error messages is essential for debugging but MAY e.g.be skipped if a session cannot be found or due to denial of service reasons, see security. If an error message is sent, the session MUST be discontinued. After verifying message_3, the Responder is assured that the Initiator has calculated the key PRK_4x3m (explicit key confirmation)"} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "transported depends on lower layers, which need to enable error messages to be sent and processed as intended. All error messages in EDHOC are fatal. After sending an error message, the sender MUST discontinue the protocol. The receiver SHOULD treat an error message as an indication that the other party likely has discontinued the protocol. 
But as the error message is not authenticated, a received error messages might also have been sent by an attacker and the receiver MAY therefore try to continue the protocol. error SHALL be a CBOR Sequence (see CBOR) as defined below", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. 
[BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. Closing", "new_text": "transported depends on lower layers, which need to enable error messages to be sent and processed as intended. Errors in EDHOC are fatal. After sending an error message, the sender MUST discontinue the protocol. The receiver SHOULD treat an error message as an indication that the other party likely has discontinued the protocol. But as the error message is not authenticated, a received error message might also have been sent by an attacker and the receiver MAY therefore try to continue the protocol. error SHALL be a CBOR Sequence (see CBOR) as defined below"} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "cipher suite, and the parameters defined in asym-msg4-proc. If any verification step fails the Initiator MUST send an EDHOC error message back, formatted as defined in error, and the protocol MUST be discontinued. 7.2.", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. 
Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. 
Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. Closing", "new_text": "cipher suite, and the parameters defined in asym-msg4-proc. If any verification step fails the Initiator MUST send an EDHOC error message back, formatted as defined in error, and the session MUST be discontinued. 7.2."} {"id": "q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf", "old_text": "the initiator to demonstrate reachability at its apparent network address. 8.6. The availability of a secure random number generator is essential for", "comments": "Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down imidiatly. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noice could be mistaken for an actual message and forwarded to EDHOC or do all constrained radio have strong enough CRC to make sure that that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). Made me think about that I don't think the group has discussed if some error messages should be authenticated. Some error messages to message2 could include a MAC to prove that the error was sent by the same party that sent message3. error messages to message3 and message4 could be authenticated. I am uncertain if this is worth specifying. 
At least not as long as there are easier ways to an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol, the received message can have been an attack. receiveing an error does not need to discontinue the protocol, the received error can have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed procession so implementation are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port.. Added more security consideration on Denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal as well as downgrade protection. The \"\"Wrong selected cipher suite\" error is fatal because it is sent in repsponse to message1. You could consider similar types of error sent in response to message2 and message3. E.g. IDCRED format not supported or CRED format not supported or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it a informal section describing things the parties need to both support to make the protocol work or is a normative security section relevant for policy and downgrade. If we want downgrade protection over multiple sessions, the parties need to agree or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology, we use both \"session\" and \"instance\" in the draft?\nYes, I just changed \"protocol\" to \"session\" not remembering that the draft use instance in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nCurrent specification mandates sending error messages upon certain failures. But in cases when message processing fails when no state has been allocated or when no connection identifier has been found or then a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We now other protocols that it is good to mandate error messages. It is kind of obvious that can not send an error if you don't now where/how to send it. Don't know if that has to be pointed out....\nMigth be good to not sent error if you are under a DoS attack.\nThis issue is addressed by pull request that also address availability\nThe PR has been discussed and merged. 
Closing", "new_text": "the initiator to demonstrate reachability at its apparent network address. An attacker can also send faked message_2, message_3, message_4, or error in an attempt to trick the receiving party to send an error message and discontinue the session. EDHOC implementations MAY evaluate if a received message is likely to have be forged by and attacker and ignore it without sending an error message or discontinuing the session. 8.6. The availability of a secure random number generator is essential for"} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "Application keys and other application specific data can be derived using the EDHOC-Exporter interface defined as: where label is a tstr defined by the application and length is a uint defined by the application. The label SHALL be different for each different exporter value. The transcript hash TH_4 is a CBOR encoded bstr and the input to the hash function is a CBOR Sequence. where H() is the hash function in the selected cipher suite. Example use of the EDHOC-Exporter is given in I-D.ietf-core-oscore-edhoc. To provide forward secrecy in an even more efficient way than re- running EDHOC, EDHOC provides the function EDHOC-KeyUpdate. When EDHOC-KeyUpdate is called the old PRK_4x3m is deleted and the new PRk_4x3m is calculated as a \"hash\" of the old key using the Extract function as illustrated by the following pseudocode: 5.", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? 
Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. 
Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. 
It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "Application keys and other application specific data can be derived using the EDHOC-Exporter interface defined as: label_context is a CBOR sequence: where label is a registered tstr from the EDHOC Exporter Label registry (exporter-label), context is a bstr defined by the application, and length is a uint defined by the application. The (label, context) pair must be unique, i.e. a (label, context) MUST NOT be used for two different purposes. However an application can re-derive the same key several times as long as it is done in a secure way. For example, in most encryption algorithms the same (key, nonce) pair must not be reused. The transcript hash TH_4 is a CBOR encoded bstr and the input to the hash function is a CBOR Sequence. where H() is the hash function in the selected cipher suite. Examples of use of the EDHOC-Exporter are given in asym-msg4-proc and I-D.ietf-core-oscore-edhoc. To provide forward secrecy in an even more efficient way than re- running EDHOC, EDHOC provides the function EDHOC-KeyUpdate. When EDHOC-KeyUpdate is called the old PRK_4x3m is deleted and the new PRK_4x3m is calculated as a \"hash\" of the old key using the Extract function as illustrated by the following pseudocode: 5."} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "9.1. IANA has created a new registry titled \"EDHOC Cipher Suites\" under the new heading \"EDHOC\". The registration procedure is \"Expert Review\". The columns of the registry are Value, Array, Description, and Reference, where Value is an integer and the other columns are text strings. The initial contents of the registry are: 9.2. IANA has created a new registry entitled \"EDHOC Method Type\" under the new heading \"EDHOC\". The registration procedure is \"Expert", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. 
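For implementers in this position, the COSE surface actually needed is small; as a rough sketch (assuming the cbor2 package, not a full COSE library), the additional authenticated data that COSE_Encrypt0 feeds to the AEAD is just a short CBOR array per RFC 8152:

```python
# Sketch of the COSE_Encrypt0 AAD ("Enc_structure") from RFC 8152, Section 5.3.
# Assumes cbor2; protected is the serialized protected header (possibly empty).
import cbor2

def encrypt0_aad(protected: bytes, external_aad: bytes) -> bytes:
    """Enc_structure = ["Encrypt0", protected, external_aad], CBOR-encoded."""
    return cbor2.dumps(["Encrypt0", protected, external_aad])

# Example: empty protected header, a 32-byte transcript hash as external AAD.
aad = encrypt0_aad(protected=b"", external_aad=bytes(32))
```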
If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. 
or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. 
Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "9.1. IANA has created a new registry titled \"EDHOC Exporter Label\" under the new heading \"EDHOC\". The registration procedure is \"Expert Review\". The columns of the registry are Label, Description, and Reference. All columns are text strings. The initial contents of the registry are: 9.2. IANA has created a new registry titled \"EDHOC Cipher Suites\" under the new heading \"EDHOC\". The registration procedure is \"Expert Review\". The columns of the registry are Value, Array, Description, and Reference, where Value is an integer and the other columns are text strings. The initial contents of the registry are: 9.3. IANA has created a new registry entitled \"EDHOC Method Type\" under the new heading \"EDHOC\". The registration procedure is \"Expert"} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "strings. The initial contents of the registry is shown in fig- method-types. 9.3. IANA has created a new registry entitled \"EDHOC Error Codes\" under the new heading \"EDHOC\". The registration procedure is", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. 
Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. 
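To make the key-reuse concern concrete, here is a rough HKDF-Expand sketch (RFC 5869, SHA-256) showing how distinct labels keep exported keys independent; the real EDHOC-KDF additionally binds TH_4 and the selected cipher suite, which is omitted here, and the PRK below is a placeholder rather than a test vector:

```python
# Minimal HKDF-Expand (RFC 5869) illustrating why each exporter call needs a
# distinct label/context: different "info" inputs yield independent outputs.
import hashlib
import hmac

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

prk_4x3m = bytes(32)  # placeholder PRK, not a real EDHOC test vector
master_secret = hkdf_expand(prk_4x3m, b"OSCORE Master Secret", 16)
master_salt = hkdf_expand(prk_4x3m, b"OSCORE Master Salt", 8)
```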
Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. 
Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "strings. The initial contents of the registry is shown in fig- method-types. 9.4. IANA has created a new registry entitled \"EDHOC Error Codes\" under the new heading \"EDHOC\". The registration procedure is"} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "is a CDDL defined type, and Description is a text string. The initial contents of the registry is shown in fig-error-codes. 9.4. IANA has added the well-known URI 'edhoc' to the Well-Known URIs registry.", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. 
Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. 
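Purely to contrast the call shapes of the two alternatives (hypothetical edhoc_kdf helper, no claim about the final encoding):

```python
# Sketch of the two exporter shapes under discussion; edhoc_kdf is a
# hypothetical callable standing in for EDHOC-KDF(PRK_4x3m, TH_4, ., length).

def exporter_alt1(edhoc_kdf, label_prefix: str, app_suffix: str, length: int) -> bytes:
    # Alt 1: the application appends its own data to a registered label prefix.
    return edhoc_kdf((label_prefix + " " + app_suffix).encode(), length)

def exporter_alt2(edhoc_kdf, label: str, context: bytes, length: int) -> bytes:
    # Alt 2: a registered label plus an explicit context argument; how the
    # pair is combined into the KDF input is exactly the open question here.
    return edhoc_kdf((label, context), length)
```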
or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. 
Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "is a CDDL defined type, and Description is a text string. The initial contents of the registry is shown in fig-error-codes. 9.5. IANA has added the well-known URI 'edhoc' to the Well-Known URIs registry."} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "Related information: None 9.5. IANA has added the media type 'application/edhoc' to the Media Types registry.", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. 
Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. 
or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. 
Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "Related information: None 9.6. IANA has added the media type 'application/edhoc' to the Media Types registry."} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "Change Controller: IESG 9.6. IANA has added the media type 'application/edhoc' to the CoAP Content-Formats registry.", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. 
This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. 
but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. 
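(A hedged illustration of how small the needed COSE subset can be: the Enc_structure that a COSE_Encrypt0 is computed over is just a three-element CBOR array per RFC 8152, so the AAD bytes can be produced with a generic CBOR encoder alone. The input values below are dummies.)

    import cbor2

    def enc_structure_aad(protected: bytes, external_aad: bytes) -> bytes:
        # Enc_structure for COSE_Encrypt0 (RFC 8152, Section 5.3):
        # [ "Encrypt0", protected (bstr), external_aad (bstr) ]
        return cbor2.dumps(["Encrypt0", protected, external_aad])

    aad = enc_structure_aad(protected=b"\xa1\x01\x0a", external_aad=b"\x00" * 8)  # dummy inputs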
This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "Change Controller: IESG 9.7. IANA has added the media type 'application/edhoc' to the CoAP Content-Formats registry."} {"id": "q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef", "old_text": "Reference: [[this document]] 9.7. The IANA Registries established in this document is defined as \"Expert Review\". This section gives some general guidelines for what", "comments": "(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? 
Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. We make an IANA register for label prefixes: \"OSCORE Master Secret\" \"OSCORE Master Salt\" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK4x3m, TH4, label, length) An application is allowed to append things to the label prefix: \"OSCORE Master Secret\" \"OSCORE Master Secret PickleRick\" Alt 2. We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, [label, ctx] , length) An application is not allowed to append things to the label and uses the ctx instead: label = \"OSCORE Master Secret\" ctx = \"\" label = \"OSCORE Master Secret\" ctx = \"PickleRick\"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf desired the CTX is more convenient it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string. While alt 2 use an CBOR array. Definging alt2 as a wrapper seems more complication in my view.\nWould be good if people stated alt 1. or alt 2. or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = \"OSCORE Master Secret\" label = \"OSCORE Master Salt\" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK4x3m, TH4, labelcontext, length) where is a CBOR sequence: If you want to keep the test vectors intact, then you could make optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3. but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later post do differentiate the type of \"label\" and \"ctx\" and in light of that and changing test vectors Alt3 seems clean.\nI am fine with alt 3. I can start making PR for that unless somebody has different ideas. I don't alt 1 is a good alternative anymore as is restricts context to being a unicode string. 
Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL \"The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed.\" Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority. In any case, the draft has made great strides already in terms of accessibility and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EHDOC, and are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EHDOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. 
It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area, it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin", "new_text": "Reference: [[this document]] 9.8. The IANA Registries established in this document is defined as \"Expert Review\". This section gives some general guidelines for what"} {"id": "q-en-edhoc-c88b88987464c8005000518bf3fcf6ba5750aef9e9771cec68ff0f861ab109cb", "old_text": "Responder, respectively, see m3 and m2. When the credential is a certificate, CRED_x is an end-entity certificate (i.e., not the certificate chain), encoded as a CBOR bstr. In X.509 and C509 certificates, signature keys typically have key usage \"digitalSignature\" and Diffie-Hellman public keys typically have key usage \"keyAgreement\". To prevent misbinding attacks in systems where an attacker can register public keys without proving knowledge of the private key,", "comments": "CRED_R - bstr containing the credential This is not true for C509 which is an CBOR array", "new_text": "Responder, respectively, see m3 and m2. When the credential is a certificate, CRED_x is an end-entity certificate (i.e., not the certificate chain). In X.509 and C509 certificates, signature keys typically have key usage \"digitalSignature\" and Diffie-Hellman public keys typically have key usage \"keyAgreement\". To prevent misbinding attacks in systems where an attacker can register public keys without proving knowledge of the private key,"} {"id": "q-en-edhoc-c88b88987464c8005000518bf3fcf6ba5750aef9e9771cec68ff0f861ab109cb", "old_text": "external_aad = << TH_2, CRED_R, ? EAD_2 >> CRED_R - bstr containing the credential of the Responder, see EAD_2 = unprotected external authorization data, see", "comments": "CRED_R - bstr containing the credential This is not true for C509 which is an CBOR array", "new_text": "external_aad = << TH_2, CRED_R, ? EAD_2 >> CRED_R - CBOR item containing the credential of the Responder, see EAD_2 = unprotected external authorization data, see"} {"id": "q-en-edhoc-c88b88987464c8005000518bf3fcf6ba5750aef9e9771cec68ff0f861ab109cb", "old_text": "external_aad = << TH_3, CRED_I, ? EAD_3 >> CRED_I - bstr containing the credential of the Initiator, see id_cred. EAD_3 = protected external authorization data, see", "comments": "CRED_R - bstr containing the credential This is not true for C509 which is an CBOR array", "new_text": "external_aad = << TH_3, CRED_I, ? EAD_3 >> CRED_I - CBOR item containing the credential of the Initiator, see id_cred. EAD_3 = protected external authorization data, see"} {"id": "q-en-edhoc-f00d218684a9a13d2bbeea0aff2b1713a3a711564ef48d4034bb422db34b6ec3", "old_text": "algorithms (AEAD, hash) in the selected cipher suite (see cs) and the application can make use of the established connection identifiers C_I and C_R (see ci). EDHOC may be used with the media type application/edhoc defined in media-type. The Initiator can derive symmetric application keys after creating EDHOC message_3, see exporter. Protected application data can", "comments": "Signed-off-by: David Navarro\nNAME NAME and others: Any response to John's comment?\nNAME I included John's suggestion in the PR.\nLooks good to me. 
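(To illustrate the external_aad construction quoted in the records above, external_aad = << TH_2, CRED_R, ? EAD_2 >>: the << >> notation denotes the byte string containing the concatenated CBOR encodings of the items, which a generic CBOR encoder can produce directly. TH_2 and CRED_R below are dummy placeholders; this is a sketch, not normative.)

    import cbor2
    from typing import Optional

    def external_aad_2(th_2: bytes, cred_r, ead_2: Optional[bytes] = None) -> bytes:
        # << TH_2, CRED_R, ? EAD_2 >> : concatenation of the CBOR-encoded items.
        # CRED_R may be a bstr or, e.g., a C509 certificate encoded as a CBOR array.
        seq = cbor2.dumps(th_2) + cbor2.dumps(cred_r)
        if ead_2 is not None:
            seq += ead_2  # EAD is itself a CBOR sequence, appended as-is
        return seq

    aad_2 = external_aad_2(th_2=b"\x02" * 32, cred_r=b"dummy credential bytes")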
Other opinions?\nAs specified in section , the payload in some CoAP messages can be an EDHOC message prepended by either a C_x identifier or \"true\". The draft states: However, the payload is actually a CBOR Sequence as defined by . Thus, we could define and use a new media type \"application/edhoc+cbor-seq\". Note that the usage of the Content-Format option is optional.\nMakes sense to me. NAME NAME others: Comments? Good that you bring up the optionality of Content-Format, we should stress that in this text. The discussion in Hackathon also reminded me about the common-option-compression which NAME presented in the May 12 interim 2021 - how to replace the Uri-Path: \"/.well-known/edhoc\" with a, say, integer valued CoAP option.\nMakes a lot of sense to me, too.\nI find this distinction good and the solution consequential, still open for comments.", "new_text": "algorithms (AEAD, hash) in the selected cipher suite (see cs) and the application can make use of the established connection identifiers C_I and C_R (see ci). EDHOC may be used with the media type application/edhoc+cbor-seq defined in media-type. The Initiator can derive symmetric application keys after creating EDHOC message_3, see exporter. Protected application data can"} {"id": "q-en-edhoc-f00d218684a9a13d2bbeea0aff2b1713a3a711564ef48d4034bb422db34b6ec3", "old_text": "9.8. IANA has added the media type \"application/edhoc\" to the \"Media Types\" registry. Type name: application Subtype name: edhoc Required parameters: N/A", "comments": "Signed-off-by: David Navarro\nNAME NAME and others: Any response to John's comment?\nNAME I included John's suggestion in the PR.\nLooks good to me. Other opinions?\nAs specified in section , the payload in some CoAP messages can be an EDHOC message prepended by either a C_x identifier or \"true\". The draft states: However, the payload is actually a CBOR Sequence as defined by . Thus, we could define and use a new media type \"application/edhoc+cbor-seq\". Note that the usage of the Content-Format option is optional.\nMakes sense to me. NAME NAME others: Comments? Good that you bring up the optionality of Content-Format, we should stress that in this text. The discussion in Hackathon also reminded me about the common-option-compression which NAME presented in the May 12 interim 2021 - how to replace the Uri-Path: \"/.well-known/edhoc\" with a, say, integer valued CoAP option.\nMakes a lot of sense to me, too.\nI find this distinction good and the solution consequential, still open for comments.", "new_text": "9.8. IANA has added the media types \"application/edhoc+cbor-seq\" and \"application/cid-edhoc+cbor-seq\" to the \"Media Types\" registry. 9.8.1. Type name: application Subtype name: edhoc+cbor-seq Required parameters: N/A"} {"id": "q-en-edhoc-f00d218684a9a13d2bbeea0aff2b1713a3a711564ef48d4034bb422db34b6ec3", "old_text": "Change Controller: IESG 9.9. IANA has added the media type \"application/edhoc\" to the \"CoAP Content-Formats\" registry under the group name \"Constrained RESTful Environments (CoRE) Parameters\". Media Type: application/edhoc Encoding: ID: TBD42 Reference: [[this document]] 9.10.", "comments": "Signed-off-by: David Navarro\nNAME NAME and others: Any response to John's comment?\nNAME I included John's suggestion in the PR.\nLooks good to me. Other opinions?\nAs specified in section , the payload in some CoAP messages can be an EDHOC message prepended by either a C_x identifier or \"true\". 
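(A small, non-normative sketch of the payload construction just mentioned, i.e. an EDHOC message prepended by either a connection identifier C_x or CBOR true, which is what makes the CoAP payload a CBOR sequence; the cbor2 package and the example identifier value are assumptions.)

    import cbor2
    from typing import Optional

    def edhoc_coap_payload(edhoc_message: bytes, c_x: Optional[bytes] = None) -> bytes:
        # CBOR true (0xF5) before message_1 sent by the Initiator,
        # or the connection identifier C_x before later messages.
        prefix = cbor2.dumps(True) if c_x is None else cbor2.dumps(c_x)
        return prefix + edhoc_message

    payload_1 = edhoc_coap_payload(b"...message_1 bytes...")               # starts with 0xF5
    payload_3 = edhoc_coap_payload(b"...message_3 bytes...", c_x=b"\x27")  # example bstr identifier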
The draft states: However, the payload is actually a CBOR Sequence as defined by . Thus, we could define and use a new media type \"application/edhoc+cbor-seq\". Note that the usage of the Content-Format option is optional.\nMakes sense to me. NAME NAME others: Comments? Good that you bring up the optionality of Content-Format, we should stress that in this text. The discussion in Hackathon also reminded me about the common-option-compression which NAME presented in the May 12 interim 2021 - how to replace the Uri-Path: \"/.well-known/edhoc\" with a, say, integer valued CoAP option.\nMakes a lot of sense to me, too.\nI find this distinction good and the solution consequential, still open for comments.", "new_text": "Change Controller: IESG 9.8.2. Type name: application Subtype name: cid-edhoc+cbor-seq Required parameters: N/A Optional parameters: N/A Encoding considerations: binary Security considerations: See Section 7 of this document. Interoperability considerations: N/A Published specification: [[this document]] (this document) Applications that use this media type: To be identified Fragment identifier considerations: N/A Additional information: Magic number(s): N/A File extension(s): N/A Macintosh file type code(s): N/A Person & email address to contact for further information: See \"Authors' Addresses\" section. Intended usage: COMMON Restrictions on usage: N/A Author: See \"Authors' Addresses\" section. Change Controller: IESG 9.9. IANA has added the media types \"application/edhoc+cbor-seq\" and \"application/cid-edhoc+cbor-seq\" to the \"CoAP Content-Formats\" registry under the group name \"Constrained RESTful Environments (CoRE) Parameters\". 9.10."} {"id": "q-en-edhoc-269ae1a707f2b310b2f8ed05977da108442446c03ae58df4ade916a4b40f6a24", "old_text": "of SP-800-56A. For secp256r1, secp384r1, and secp521r1, at least partial public-key validation MUST be done. So-called selfie attacks are mitigated as long as the Initiator does not have its own identity in the set of Responder identities it is allowed to communicate with. In trust on first use (TOFU) use cases", "comments": "A static key should not be used both for X25519 and Ed25519. This PR corresponds to .\nPlease review the proposed change.\nThe standard does not restrict private key usages. In particular, it could be tempting to use the public/private keys for the ed25519 signature scheme as a public and private key pair for the static authentication mode with the elliptic curve X25519. This does not have clear direct and practical implications. However, this usage falls outside the existing cryptanalysis for the signature scheme, and is a bad practice from a theoretical point of view. This would require a dedicated cryptographic analysis, similar to [2]. [1, Section 12] typically recommends that different keys are used for different algorithms, but it may be unclear if the EDHOC use case falls under this recommandation, or if an extra one could be pertinent. We recommend to mention this issue somewhere in the standard, or explicitly forbid such a shared key usage. [1]: URL [2]: URL On using the same key pair for Ed25519 and an X25519 based KEM. Erik Thormarker.\nI've seen single keys used for both ed25519 and x25519 during plugtests; IIUC the agreement among the plugtesters back then was that this was not intended use and should generally not be done. This might serve as a data point that illustrates the need for a clear statement.\nMentioning this issue and refering to [1, Section 12] seems like a good idea. 
The text would need to mention EDHOC methods as [1, Section 12] only talks about algorithms and the ECDHE in EDHOC is not strictly a COSE algorithm. Writing that using the same key requires a dedicated cryptographic analysis, similar to [2] also seems like a good idea. (As a comparision, I don't think TLS 1.3 states anything about forbidding the public key in the certificate to be used for Elgamal, ECIES, or ECDHE outside of TLS).\nShould we propose a PR for this?\nNAME Please go ahead. Prepare for the option to have a reference to an analysis showing this is secure under certain conditions.", "new_text": "of SP-800-56A. For secp256r1, secp384r1, and secp521r1, at least partial public-key validation MUST be done. A static Diffie-Hellman key for authentication in method 1, 2 or 3 MUST NOT not be used as a digital signature key for authentication in method 0, unless proven secure by a dedicated cryptographic analysis. A preliminary conjecture is that a minor change to EDHOC may be sufficient to fit the analysis of secure shared signature and ECDH key usage in [Degabriele2011] and [[Thormarker2021]]. So-called selfie attacks are mitigated as long as the Initiator does not have its own identity in the set of Responder identities it is allowed to communicate with. In trust on first use (TOFU) use cases"} {"id": "q-en-edhoc-f0fbe4dfaa52da4a3ce2d246f8cd2d89337fc4e47d7e88f79b95d194a2adbfbf", "old_text": "protect against passive attacker as active attackers can always get the Responder's identity by sending their own message_1. EDHOC uses the Expand function (typically HKDF-Expand) as a binary additive stream cipher. HKDF-Expand is not often used as it is slow on long messages, and most applications require both IND-CCA confidentiality as well as integrity protection. For the encryption of message_2, any speed difference is negligible, IND-CCA does not increase security, and integrity is provided by the inner MAC (and signature depending on method). Requirements for how to securely generate, validate, and process the ephemeral public keys depend on the elliptic curve. For X25519 and", "comments": "Reformulating security of expand as stream cipher instead of removing. HKDF-Expand is very often used.Added note of HKDF-Expand length restriction.Key use for different algorithms is already covered by COSE. Nothing is special with use inside of EDHOC. The text is clearly not correct at as signate can also be used in other methods.\nChanneling from Erik. Use of the HKDF as EDHOC-KDF for generation of KEYSTREAM2 and encryption of message2 induces a max size. For example, HKDF-SHA256 has max output size 255*32 = 8160 bytes.", "new_text": "protect against passive attacker as active attackers can always get the Responder's identity by sending their own message_1. EDHOC uses the Expand function (typically HKDF-Expand) as a binary additive stream cipher which is proven secure as long as the expand function is a PRF. HKDF-Expand is not often used as a stream cipher as it is slow on long messages, and most applications require both IND-CCA confidentiality as well as integrity protection. For the encryption of message_2, any speed difference is negligible, IND-CCA does not increase security, and integrity is provided by the inner MAC (and signature depending on method). Requirements for how to securely generate, validate, and process the ephemeral public keys depend on the elliptic curve. 
For X25519 and"} {"id": "q-en-edhoc-f0fbe4dfaa52da4a3ce2d246f8cd2d89337fc4e47d7e88f79b95d194a2adbfbf", "old_text": "of SP-800-56A. For secp256r1, secp384r1, and secp521r1, at least partial public-key validation MUST be done. A static Diffie-Hellman key for authentication in method 1, 2 or 3 MUST NOT not be used as a digital signature key for authentication in method 0, unless proven secure by a dedicated cryptographic analysis. A preliminary conjecture is that a minor change to EDHOC may be sufficient to fit the analysis of secure shared signature and ECDH key usage in Degabriele11 and Thormarker21.", "comments": "Reformulating security of expand as stream cipher instead of removing. HKDF-Expand is very often used.Added note of HKDF-Expand length restriction.Key use for different algorithms is already covered by COSE. Nothing is special with use inside of EDHOC. The text is clearly not correct at as signate can also be used in other methods.\nChanneling from Erik. Use of the HKDF as EDHOC-KDF for generation of KEYSTREAM2 and encryption of message2 induces a max size. For example, HKDF-SHA256 has max output size 255*32 = 8160 bytes.", "new_text": "of SP-800-56A. For secp256r1, secp384r1, and secp521r1, at least partial public-key validation MUST be done. As noted in Section 12 of I-D.ietf-cose-rfc8152bis-struct the use of a single key for multiple algorithms is strongly disencouraged unless proven secure by a dedicated cryptographic analysis. In particular this recommendation applies to using the same private key for static Diffie-Hellman authentication and digital signature authentication. A preliminary conjecture is that a minor change to EDHOC may be sufficient to fit the analysis of secure shared signature and ECDH key usage in Degabriele11 and Thormarker21."} {"id": "q-en-edhoc-f0fbe4dfaa52da4a3ce2d246f8cd2d89337fc4e47d7e88f79b95d194a2adbfbf", "old_text": "such code to be read or tampered with by code outside that environment. The sequence of transcript hashes in EHDOC (TH_2, TH_3, TH_4) do not make use of a so called running hash, this is a design choice as running hashes are often not supported on constrained platforms.", "comments": "Reformulating security of expand as stream cipher instead of removing. HKDF-Expand is very often used.Added note of HKDF-Expand length restriction.Key use for different algorithms is already covered by COSE. Nothing is special with use inside of EDHOC. The text is clearly not correct at as signate can also be used in other methods.\nChanneling from Erik. Use of the HKDF as EDHOC-KDF for generation of KEYSTREAM2 and encryption of message2 induces a max size. For example, HKDF-SHA256 has max output size 255*32 = 8160 bytes.", "new_text": "such code to be read or tampered with by code outside that environment. Note that HKDF-Expand has a relativly small maximum output length of 255 * hash_length. This means that when when SHA-256 is used as hash algorithm, message_2 cannot be longer than 8160 bytes. The sequence of transcript hashes in EHDOC (TH_2, TH_3, TH_4) do not make use of a so called running hash, this is a design choice as running hashes are often not supported on constrained platforms."} {"id": "q-en-edhoc-f0aba156749dc0726c890222f94977367f8f6bdc42e3937cdc55b7db3a5a7761", "old_text": "much previous information as possible. EDHOC is furthermore designed to be as compact and lightweight as possible, in terms of message sizes, processing, and the ability to reuse already existing CBOR, COSE, and CoAP libraries. 
To simplify for implementors, the use of CBOR and COSE in EDHOC is summarized in CBORandCOSE. Test vectors including CBOR diagnostic", "comments": "I think it would be good to point out early that the EDHOC protocol (just like TLS) only provides proof-of-possesion. Authentication has to be done by the application. I have seen horrible uses of TLS where the implementors believe RFC 8446 when it says it provides authentication.Added text for \"Unauthenticated Operation\". As pointed out by Marco, this is more general than just TOFU. \"Unauthenticated operation\" is the term used by TLS 1.3\nI think this is ready to merge.\nThe selfie attack mitigation implies that an initiator is stopping the exchange if it receives its own identity. An active attacker can then use this behavior to test if an initiator identity is the same one as a responder identity. In turn, it implies that the initiator anonymity does not hold against an active attacker when this mitigation is enabled. To judge if this is an issue in practice, the following question should be answered: Are there scenarios where an identity is shared between an initiator and a receiver, but the initiator identity should still be protected? One could maybe think about the case where multiple initiators and receivers are on the same server, sometimes sharing public keys and using distinct ports, but it is expected that the initiators identities are still protected. Possible actions: mention privacy loss due to the selfie attack mitigation. update the selfie attack mitigation, by mentioning that initiators should conclude the exchange as usual, but can check at the end whether they are talking to themselves or not. remove the selfie attack mitigation. Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, and the initiator will be talking to itself while it believes to be talking to the receiver. When there are only public keys, as in EDHOC, it is unclear to me if what is currently called in the draft is indeed a selfie attack. An initiator talking to itself knows it and does not believe to be talking to some other identity. As such this is why it could be possible to either remove the mention of selfie attacks in the draft, or update it.\nI and R are expected to stop the exchange it they receive any identity they do not want to talk to. This likely includes their own identity. Good point that this leaks information. I think that should be added, but I don't see that the own identity is special. >Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, >and the initiator will be talking to itself while it believes to be talking to the receiver. >When there are only public keys, as in EDHOC, it is unclear to me if what is currently >called in the draft is indeed a selfie attack. The selfie attack mitigation is intended for Trust On First Use (TOFU) use cases, where there is no authentication in the initial setup. I don't think an initiator talking to itself in a TOFU use case would not know it except if they checked R's public key. Do you some better suggestion than \"Selfie attack\". I agree that it is not perfect. \"Selfie attacks\" has in older papers been called reflection attacks, but the term reflection attacks has also been used for very different attacks. One option is to not give the attack any name and instead just describe the attack.\nI think I like to not give any name, as it does not perfectly match the classical definition. 
Here, the property which is violated is \"If an EDHOC exchange is completed, two distinct identities/computers/agents were active\". Which is different to violating the authentication of the protocol as is the case in classical selfie attacks. Good point. So what we are talking about in this issue is a slightly different privacy leak, which happens when the set of parties you are willing to talk to is equal to all responders minus yourself. To be more concrete, let me try to put it in a kind of drafty formal description that could replace the existing formulation. First the existing formulation: Now a possible replacement:\nThanks for the input! This issue is related to the TOFU use case which is still to be detailed (Appendix D.5). We probably should wait with the update until we have progressed that.\nRelates to\nI made PR for this issue. This is also related to PR and PR . The suggested text in is largely based on the suggested text above but I made some changes. I did not think this fitted well with the rest of the draft. The sentence was also not really needed. This is only true in the case where the set of identifiers is a small finite set. In the general case the set of identifiers the Initiator is not willing to communicate with is infinite. I suggest to add the following recommendation for TOFU To not leak any long-term identifiers, it is recommended to use a freshly generated authentication key as identity in each initial TOFU exchange.\nI merged the PR to master. Keeping the issue open for a while.\nThe current text is not really explaining how TOFU works and the security considerations. Would be good with some more text. TOFU might also go agains requirements in other parts of the document that says that \"EDHOC is mutually authenticated.\nThis is conflicting with present use of COSE. Requires a separate section in the draft, preferably an appendix. We need to describe the use case, trust model, how to use EDHOC and the security properties. Can we reference some other RFC?\nLooking at adding text on TOFU I think there is some more work to do. I don't TOFU should require anything specicial. The Section \"Identities and trust anchors {#identities}\" is fluffy. It is not clear that this is just general recommendations that is outside the scope of the EDHOC protocol and the reposability of the application. EDHOC provides proof-of-possesion and transfers a credential that enables authentication. Thats it. An EDHOC implementation might implemend Authentication for a specific use case, but that is still outside the EDHOC protocol. Any description of the interface between EDHOC and the application when it comes to credential is missing. In general EDHOC provides proof-of-possesion of the private key and then gives CRED_x to the application that says YES/NO. How the application does chain validation and identity validation (or the lack of it like in TOFU) is outside of EDHOC.\nStatus: This issue is waiting on a restructure, shortening, clarification on the responsibility of the EDHOC protocol and the application. and\nThe restructuring is done and merged with the master. Appendix D.5 is placeholder for TOFU.\nI made PR for TOFU. As Marco pointed out this should be more general than TOFU. I changed the heading to \"unauthenticated opearation\" which is the term used by TLS 1.3\nMerged adding text to the empty TOFU section. Closing\nI made some minor updates, have a look.", "new_text": "much previous information as possible. 
EDHOC is furthermore designed to be as compact and lightweight as possible, in terms of message sizes, processing, and the ability to reuse already existing CBOR, COSE, and CoAP libraries. Like (D)TLS, authentication is the responsibility of the application, EDHOC identifies (and optionally transports) authentication credentials, and provides proof-of- possession of the private authentication key. To simplify for implementors, the use of CBOR and COSE in EDHOC is summarized in CBORandCOSE. Test vectors including CBOR diagnostic"} {"id": "q-en-edhoc-f0aba156749dc0726c890222f94977367f8f6bdc42e3937cdc55b7db3a5a7761", "old_text": "So-called selfie attacks are mitigated as long as the Initiator does not have its own identity in the set of Responder identities it is allowed to communicate with. In trust on first use (TOFU) use cases the Initiator should verify that the Responder's identity is not equal to its own. Any future EHDOC methods using e.g., pre-shared keys might need to mitigate this in other ways. 8.3.", "comments": "I think it would be good to point out early that the EDHOC protocol (just like TLS) only provides proof-of-possesion. Authentication has to be done by the application. I have seen horrible uses of TLS where the implementors believe RFC 8446 when it says it provides authentication.Added text for \"Unauthenticated Operation\". As pointed out by Marco, this is more general than just TOFU. \"Unauthenticated operation\" is the term used by TLS 1.3\nI think this is ready to merge.\nThe selfie attack mitigation implies that an initiator is stopping the exchange if it receives its own identity. An active attacker can then use this behavior to test if an initiator identity is the same one as a responder identity. In turn, it implies that the initiator anonymity does not hold against an active attacker when this mitigation is enabled. To judge if this is an issue in practice, the following question should be answered: Are there scenarios where an identity is shared between an initiator and a receiver, but the initiator identity should still be protected? One could maybe think about the case where multiple initiators and receivers are on the same server, sometimes sharing public keys and using distinct ports, but it is expected that the initiators identities are still protected. Possible actions: mention privacy loss due to the selfie attack mitigation. update the selfie attack mitigation, by mentioning that initiators should conclude the exchange as usual, but can check at the end whether they are talking to themselves or not. remove the selfie attack mitigation. Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, and the initiator will be talking to itself while it believes to be talking to the receiver. When there are only public keys, as in EDHOC, it is unclear to me if what is currently called in the draft is indeed a selfie attack. An initiator talking to itself knows it and does not believe to be talking to some other identity. As such this is why it could be possible to either remove the mention of selfie attacks in the draft, or update it.\nI and R are expected to stop the exchange it they receive any identity they do not want to talk to. This likely includes their own identity. Good point that this leaks information. I think that should be added, but I don't see that the own identity is special. 
>Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, >and the initiator will be talking to itself while it believes to be talking to the receiver. >When there are only public keys, as in EDHOC, it is unclear to me if what is currently >called in the draft is indeed a selfie attack. The selfie attack mitigation is intended for Trust On First Use (TOFU) use cases, where there is no authentication in the initial setup. I don't think an initiator talking to itself in a TOFU use case would not know it except if they checked R's public key. Do you some better suggestion than \"Selfie attack\". I agree that it is not perfect. \"Selfie attacks\" has in older papers been called reflection attacks, but the term reflection attacks has also been used for very different attacks. One option is to not give the attack any name and instead just describe the attack.\nI think I like to not give any name, as it does not perfectly match the classical definition. Here, the property which is violated is \"If an EDHOC exchange is completed, two distinct identities/computers/agents were active\". Which is different to violating the authentication of the protocol as is the case in classical selfie attacks. Good point. So what we are talking about in this issue is a slightly different privacy leak, which happens when the set of parties you are willing to talk to is equal to all responders minus yourself. To be more concrete, let me try to put it in a kind of drafty formal description that could replace the existing formulation. First the existing formulation: Now a possible replacement:\nThanks for the input! This issue is related to the TOFU use case which is still to be detailed (Appendix D.5). We probably should wait with the update until we have progressed that.\nRelates to\nI made PR for this issue. This is also related to PR and PR . The suggested text in is largely based on the suggested text above but I made some changes. I did not think this fitted well with the rest of the draft. The sentence was also not really needed. This is only true in the case where the set of identifiers is a small finite set. In the general case the set of identifiers the Initiator is not willing to communicate with is infinite. I suggest to add the following recommendation for TOFU To not leak any long-term identifiers, it is recommended to use a freshly generated authentication key as identity in each initial TOFU exchange.\nI merged the PR to master. Keeping the issue open for a while.\nThe current text is not really explaining how TOFU works and the security considerations. Would be good with some more text. TOFU might also go agains requirements in other parts of the document that says that \"EDHOC is mutually authenticated.\nThis is conflicting with present use of COSE. Requires a separate section in the draft, preferably an appendix. We need to describe the use case, trust model, how to use EDHOC and the security properties. Can we reference some other RFC?\nLooking at adding text on TOFU I think there is some more work to do. I don't TOFU should require anything specicial. The Section \"Identities and trust anchors {#identities}\" is fluffy. It is not clear that this is just general recommendations that is outside the scope of the EDHOC protocol and the reposability of the application. EDHOC provides proof-of-possesion and transfers a credential that enables authentication. Thats it. 
An EDHOC implementation might implemend Authentication for a specific use case, but that is still outside the EDHOC protocol. Any description of the interface between EDHOC and the application when it comes to credential is missing. In general EDHOC provides proof-of-possesion of the private key and then gives CRED_x to the application that says YES/NO. How the application does chain validation and identity validation (or the lack of it like in TOFU) is outside of EDHOC.\nStatus: This issue is waiting on a restructure, shortening, clarification on the responsibility of the EDHOC protocol and the application. and\nThe restructuring is done and merged with the master. Appendix D.5 is placeholder for TOFU.\nI made PR for TOFU. As Marco pointed out this should be more general than TOFU. I changed the heading to \"unauthenticated opearation\" which is the term used by TLS 1.3\nMerged adding text to the empty TOFU section. Closing\nI made some minor updates, have a look.", "new_text": "So-called selfie attacks are mitigated as long as the Initiator does not have its own identity in the set of Responder identities it is allowed to communicate with. In Trust on first use (TOFU) use cases, see tofu, the Initiator should verify that the Responder's identity is not equal to its own. Any future EHDOC methods using e.g., pre- shared keys might need to mitigate this in other ways. 8.3."} {"id": "q-en-edhoc-c89125385b5fe84d1696c113efe15f44ea5c18b02e6f68f23a8fbc3a139dd205", "old_text": "3.8. In order to reduce round trips and the number of messages or to simplify processing, external security applications may be integrated into EDHOC by transporting authorization related data in the messages. EDHOC allows opaque external authorization data (EAD) to be sent in each of the four EDHOC messages (EAD_1, EAD_2, EAD_3, EAD_4). External authorization data is a CBOR sequence (see CBOR) consisting of one or more (ead_label, ead_value) pairs as defined below: A security application using external authorization data need to register an ead_label, specify the ead_value format for each message (see iana-ead), and describe processing and security considerations. The EAD fields of EDHOC must not be used for generic application data. Examples of the use of EAD is provided in ead-appendix. 3.9.", "comments": "Addressing\nIssue needed.\nShould we allow or forbid that eadlabel = k and eadlabel = -k correspond to different ead-values? For example, if an (eadlabel, eadvalue) is always critical: Say, eadvalue is an ACE Access Token with eadlabel = -4, always critical. Then we can either: forbid the use of eadlabel = 4 altogether, or allocate an always non-critical EAD to eadlabel = 4, say, eadvalue is \"service indication\" as in In case 1 we will have some \"holes\" in the register. One positive integer blocked for each always critical (eadlabel, eadvalue) , one negative blocked for each always non-critical (eadlabel, eadvalue), and 0 cannot be used. (It is not clear to me that there are many always non-critical, but always critical is probably common.) In case 2, we could still have the registration policy that: if an EAD can be either critical or non-critical then two eadlabel with same absolute value should be reserved for it. What is the main risk with case 2? If someone misunderstands or by accident changes sign on an always /non-/critical EAD this results most likely in an error due to wrong eadvalue (like in the example above), but potentially in an unintended EAD message. 
(Another registration policy could be that the eadvalues corresponding to ead_labels k and -k should have incompatible CDDL so the error is discovered ...) The EAD specification needs anyway to specify what are the processing rules and if you violate those more or less anything could happen. Comments?\nBTW, we may want a term for the ordered pair (eadlabel, eadvalue). The term \"EAD\" or \"EAD field\" is referring to the message field, which may consist of multiple (eadlabel, eadvalue).\nJust seems like overoptimization to me. I would prefer if we follow the model that we agreed on for C509 which is similar to what CoAP Options is using. If you think it is important to use the complete code point space I think it would be better not assign any functionality to the minus sign and register critical and non-critical idenpendently. The original proposal seems to give two different meanings to the minus sign. The minus value is either a criticallity toggle or a completly different EAD.\nIntroducing the term \"EAD item\" for (eadlabel, eadvalue)\nEdit: Let's include the main points of this PR in -15 as a input to the design team meeting.\nI made a proposal. The text now include the concepts \"EAD item\" and \"critical\"/\"non-critical\" EAD item. There is also more details about how applications makes use of EAD: each application registers its own eadlabels eadlabel is a positive integer associated to a particular eadvalue if the application transports different eadvalues then multiple eadlabels need to be registered the application specifies how EAD items are transported in EDHOC messages, a particular EAD item may be transported in multiple messages (i.e. different EADx). I'll merge this into -15 and look forward to comments!\nThere is a new PR suggesting this. There is no background discussion in the PR . There would be good with some use case. The definition of what critical need and how the recipient handles critical would need a lot of discussion and fine-tuning.The handling of critical extensions in C.509 is to my understanding confusing and problematic. Both in the specification and in implementations.\nPlease see for background discussion and use cases.\nSeems very unclear that the current PR can be the basis for a ECH type mechanism. PR should motivate why it is the basis for an ECH type mechanism or we should have some discussion regarding use cases for a general critical non-critical EAD mechanism. I do not see that follow from .\nHaving critical/non-critical EAD seems like a reasonable thing in its own, but it is not clear to me that it is a step on the way to a ECH mechanism in EDHOC.\nShould we add another column in IANA register? Type: always critical / always non-critical / application determines criticality\nIf we define such a Type, the following text can be removed from specification and replaced by IANA consideration: \"A specification registring a new EAD label MUST describe if the EAD item is always critical, always non-critical, or if it can be decided by the application in the sending endpoint.\"\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing. The consequences of EDHOC processing of critical / non-critical EAD items is described, but how the security application makes use of this property is left to the associated specification. 
(In particular, no registration of criticality in the IANA register.)\nThis is motivated by two considerations, the concerns for clients selecting a hidden service in a protected way (pinging NAME do you have a pointer to your hackathon project), and TLS's encrypted Client Hello (superseding eSNI, encrypted server name indication). The rough situation for either case is that a client has some out-of-band information about the server that includes a public key for encryption (even if the encryption key on its own may be insufficient to authenticate the server). As a straw man proposal (mainly as scaffolding for the \"how to enable it\" steps later), consider the following: (I bet that reading the will give valuable input for making this complete). To ensure that responders (who are, in their identity selection, the more exposed side) can't be probed for their support of encrypted EAD, encrypted EAD should be marked as \"elective\" in the sense that a server may ignore it if it does not support it. (It'd still become part of the transcript). If a server receives an EAD it can not decrypt (or which, after decryption, contains nothing valid, eg. because the wrong GE was used), it needs to just continue and pick the CREDR / IDCREDR it would always have picked. AIU we do not yet distinguish between EAD that are elective and those that are critical. I do not know whether we'll even have any \"critical\" ones (as the peer's lack of action on them should be enough indication that they were unsupported), but if we do distinguish (eg. along the positive/negative number line), encrypted EAD should be optional. This has the upside that initiators that can spare the few bytes can always include some random bytes in the encrypted EAD, and thus ensure that actual users of encrypted EAD do not stand out (or that connections with encrypted EAD would be blocked). It should be a design goal of encrypted EAD that any use of EAD can always be plausibly claimed to just be random bytes added because the specification says one should do that ever now and then. For a practical example, a vulnerable service (say, anything civil rights related) might be operated at the same IP address as a more wide-spread service (say, a large public LwM2M provider). The civil rights service would publish its address together with the public key GE and its service name in DNS records similar to . A client contacting it would start an EDHOC exchange with the public address and encrypt the service name with GEX in the EAD1. The server would respond with credentials for the civil rights service, to which the client then may either present an anonymous (CCS) identity, or (now that it knows it's speaking to the right peer) its own CREDI / IDCREDI. If the CRED sizes are always the same, this exchange should be indistinguishable from an exchange done between an LwM2M client and the LwM2M backend when the client has chosen to send some more garbage bytes. (For application such as eighthalve's, it should be noted that there is probably no need for any further proof of identity inside the EAD1 plaintext: Possession of GE in combination with the suitable plaintext should suffice to open the present alternative server credentials, any secret whose possession is proven in the plaintext part would be exactly as secret as GE, as no attacker without E can obtain the plaintext.). To condense this all into a few actionable points: We should have a code point in eadlabel for \"encrypted EAD\" data. 
(We may not even have to specify how it works, that can be specified later or even be guided by the out-of-band information that needs to be present for all of this anyway). That code point either needs to be described explicitly as \"MUST ignore if not supported / content not understood\" for all of EDHOC from the start, or there needs to be general wording about which ead_labels are to be ignored if unprocessable. We should encourage initiators to send some random garbage there if they can afford it. (A LoRa device certainly can't, a 6lo device probably neither, but a guard proxy establishing a connection probably can). This is not only to protect initiators that need it, but also to ensure that it is not being blocked, and that responders don't reveal whether they support it.\nThis has parallels to , which will take some work to pick out in detail.\nI'm trying to understand what would need to be changed in EDHOC to support this functionality. The distinction between critical and elective EADs. This seems like a good general construct. Perhaps by designated subsets of labels, e.g. negative or even eadlabel values are elective. The use of a designated eadlabel for encrypted EAD1 seems less general and is somewhat breaking with the mindset that one application is reserving a label and defines the associated processing. Is there a problem with specifying a particular (elective) EAD for this application and associating a policy that it is used by default, with garbage when there is nothing to send? (I couldn't see obfuscating eadlabel really achieved much in practice, but I may have missed something.)\nThe first, yes. As for the labels, if the application reserves a label and defines the processing, it is showing a passive attacker that it is being used; this may be undesirable. Applications should have some label behind which they can hide the fact that this application is being used; I think it is preferable if that is shared by as many applications as practical (including the \"it's really just garbage\" application that some hosts may want to run). Whether that requires that they also share an encryption mechanism or not, I am unsure.\nNAME at the IETF Hackathon, we put together a working proof of concept for URL Is that what you mean? That's not really related to ECH, but came out of MASQUE.\nNAME The three-party setup seems very useful in different settings, so there may be multiple use cases where R receives EAD1 intended for a third party and needs to know what to do with it. Currently eadlabel is filling that function. If we can assume that R by default has a designated third party, or sufficient information about the third party is carried in EAD1, then eadlabel could probably be encrypted. (In principle, EAD1 could be a COSEEncrypt0 or any tagged COSE object, with information needed by R in the unprotected header.) So how do we handle the case when there are multiple candidate third parties and you want to hide both the intent and the third party identity? A designated eadlabel is possible (or two, if critical / elective is baked into the label) but I don\u2019t see how this label will be used by other applications (so as to achieve uncertainty about what is intended) because the co-existence of different candidate applications using EAD1 requires a method for R to distinguish between them.
Maybe I'm missing something?\nFor the attacker to be unable to know which service the user is contacting, it would be important that an initiator receiving the credentials to the wrong service name fails in the same way as an initiator for which verification of the responder signature would fail. Otherwise, an active attacker could test if I intended to use the protected service by xoring the encrypted EAD1 with some random bytes. The responder would always see EAD1 as some random bytes and answer with the unprotected service, and then, if the initiator wanted to talk to the protected service, it will fail after decrypting the message and seeing the wrong service credentials; if the initiator wants to talk to the unprotected service, it will fail during the verification of the signature as the two transcripts will not match. Those two failures should then be indistinguishable from the attacker's point of view (even with timing attacks).\nAt least critical / elective EAD should be included in -15\nThe critical / elective logic could in principle be implemented in each application. It would be good to provide more use cases to support the case of making this a standardized feature. Clearly there is a benefit with tagging an EAD as \"critical\" since some operations require the information carried in the EAD to complete, e.g. authorization information from a trusted third party may be required for the Initiator to make any use of the authentication. But what EAD content is not critical? Is the following a good example for elective EAD? A service provider is allowing a variety of devices in its network to authenticate using EDHOC and grants different access rights depending on the capabilities and trustworthiness of the device. Later, the IETF specifies the EAD for remote attestation (see Appendix E). New devices which implement this EAD and appropriately attest to the attestation request carried in the EAD field get certain access which old devices don't get. But the old devices should still get basic access. If all EAD were critical then the old device which doesn't understand the new EAD would be forced to discontinue the protocol at the arrival of the remote attestation EAD. If the EAD is elective, then both old and new devices can be supported. Question: With this example in mind, is critical / elective a property of the EAD type, or isn't it rather a boolean set by the application? I can think of another use case where remote attestation is mandatory and it doesn't make sense to define different EADs for these two use cases. (Then we are somehow back at the application specified property. But it may still make sense to have a standardized way of expressing criticality.) Thoughts?\nThe critical/elective property always needs to be encoded in the message (as an LSB, or as the number's sign) no matter whether criticality is the sender's choice or whether it's part of the registration. From a recent chat, I think that registering one number, and then allowing it to be used in a positive (elective) or negative (critical) role at the sender's choice, would work well. Application authors would describe in which criticality/-ies it makes sense to use their number, just as they describe in which of the messages it makes sense. (The alternative is to allow registering elective and critical options independently, and that application authors who do need to give the senders a choice of using either would register a pair of options; really would work just as well).
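As a rough, non-normative sketch of the sign-of-label convention discussed above (negative ead_label = critical, positive = non-critical), receiver-side handling could look roughly like the following Python; the names KNOWN_LABELS, process_ead_field, and handle_ead are purely illustrative and not taken from the draft:

KNOWN_LABELS = {1, 2}  # labels this endpoint understands (illustrative values)

def handle_ead(label, value, critical):
    pass  # application-specific processing (placeholder)

def process_ead_field(ead_items):
    # ead_items: list of (ead_label, ead_value) pairs decoded from the CBOR sequence
    for ead_label, ead_value in ead_items:
        if ead_label == 0:
            raise ValueError("ead_label 0 is not allowed")
        critical = ead_label < 0
        if abs(ead_label) not in KNOWN_LABELS:
            if critical:
                raise ValueError("unrecognized critical EAD item: discontinue EDHOC")
            continue  # unrecognized but non-critical: ignore
        handle_ead(abs(ead_label), ead_value, critical)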
Introducing RATS data seems like a very good use case to illustrate EAD examples. (In this particular case I don't know why a sender would insist that the attestation be processed into its network joining). To rephrase the original use cases with recently published terminology, we can refer to . That reference might go with a suggestion that devices that can afford sending a few more bytes should occasionally do so in a grease-style non-critical option, or maybe in a non-critical Encrypted-EAD1 option right away. I'd welcome it if ace-ake-authz already defined an EAD field for what could resemble Encrypted-Client-Hello; AIU it currently refers to an older version of EDHOC. It could be updated to place LOCW and ENCID into separate EAD items, and the latter could be more generally defined as \"a container for encrypted data which the responder, possibly guided by other EAD1 fields, can use to come up with a suitable IDR\". (How encid is then constructed would not be a property of using that particular option, but follow from using the LOCW option). Then, there would be no need for a defined Grease field, as that ENCID field could serve the very same purpose.\nSince there seems to be a case for non-critical EAD I made PR . Please review. (Yes, ace-ake-authz needs to be updated, following the closing of this issue.)\nI did not read this discussion before now as it did not seem relevant for the base EDHOC protocol. The PR seems quite far from the use case described here. PR described a general critical, non-critical mechanism similar to X.509 extensions while the discussion in this issue is about an ECH type of mechanism..... PR does not seem to be the basis of a good ECH mechanism.... NAME could you have a short presentation at LAKE IETF 114? (edit: fixed Christian's github name)\nI think it is clear that PR does not solve the use case outlined here. In particular, the comment from NAME above is not considered as far as I can see. But the point about critical / non-critical (eadlabel, eadvalue) appeared in this discussion and merits to be specified as a general feature.\nI think it would be good with a longer discussion about ECH type mechanisms in EDHOC. It might be added on later but it could also benefit from being added already in the base EDHOC specification. For a use case where the Initiator knows the long term key of the Responder, the TLS ECH solution is a quite ugly hack that uses two different keys and an inefficient message flow. The use of a single key as discussed above is an improvement, but the use of a noise XX pattern is not optimal. If you want to design something for the use case where the Initiator knows the long term key of the Responder it makes more sense with noise IK which is used by WireGuard. If the use case where the Initiator knows the long term key of the Responder is small the above approach makes sense. If it is big it might make more sense to define new method(s) based on the noise IK pattern.\nCould we have a design team meeting or an interim to discuss this issue?\nIn the use case that the Initiator gets the Responder's static public key beforehand, the Responder would in this case have a static DH key. The Initiator could use Static DH or a signature.\nPerhaps a breakout meeting during/just before IETF 114?\nClosing this since NAME agreed that the introduction of critical/non-critical EAD (PR , included in -15) in combination with an example using third party encrypted EAD (as in ace-ake-authz) resolves the issue.", "new_text": "3.8.
In order to reduce round trips and the number of messages, or to simplify processing, external security applications may be integrated into EDHOC by transporting authorization related data in the messages. EDHOC allows processing of external authorization data (EAD) to be defined in a separate specification, and sent in dedicated fields of the four EDHOC messages (EAD_1, EAD_2, EAD_3, EAD_4). EAD is opaque data to EDHOC. Each EAD field is a CBOR sequence (see CBOR) consisting of one or more EAD items (ead_label, ead_value) as defined below: A security application using external authorization data needs to register a positive ead_label and the associated ead_value format for each EAD item it uses (see iana-ead), and describe processing and security considerations. Each application registers its own EAD items and defines associated operations. The application may define multiple uses of certain EAD items, e.g., the same EAD item may be used in different EDHOC messages with the same application. An EAD item can be either critical or non-critical, determined by the sign of the ead_label in the transported EAD item included in the EDHOC message. Using the registered positive value indicates that the EAD item is non-critical. The corresponding negative value indicates that the EAD item is critical. ead_label = 0 MUST NOT be used. If an endpoint receives a critical EAD item it does not recognize or a critical EAD item that contains information that it cannot process, the EDHOC protocol MUST be discontinued. A non-critical EAD item can be ignored. The specification registering a new EAD label needs to describe under what conditions the EAD item is critical or non-critical. The EAD fields of EDHOC must not be used for generic application data. Examples of the use of EAD are provided in ead-appendix. 3.9."} {"id": "q-en-edhoc-c89125385b5fe84d1696c113efe15f44ea5c18b02e6f68f23a8fbc3a139dd205", "old_text": "Authorization Data\" under the new group name \"Ephemeral Diffie- Hellman Over COSE (EDHOC)\". The registration procedure is \"Specification Required\". The columns of the registry are Label, Message, Description, and Reference, where Label is an integer and the other columns are text strings. 9.6.", "comments": "Addressing\nIssue needed.\nShould we allow or forbid that eadlabel = k and eadlabel = -k correspond to different ead-values? For example, if an (eadlabel, eadvalue) is always critical: Say, eadvalue is an ACE Access Token with eadlabel = -4, always critical. Then we can either: forbid the use of eadlabel = 4 altogether, or allocate an always non-critical EAD to eadlabel = 4, say, eadvalue is \"service indication\" as in In case 1 we will have some \"holes\" in the register. One positive integer blocked for each always critical (eadlabel, eadvalue) , one negative blocked for each always non-critical (eadlabel, eadvalue), and 0 cannot be used. (It is not clear to me that there are many always non-critical, but always critical is probably common.) In case 2, we could still have the registration policy that: if an EAD can be either critical or non-critical then two eadlabel with same absolute value should be reserved for it. What is the main risk with case 2? If someone misunderstands or by accident changes sign on an always /non-/critical EAD this results most likely in an error due to wrong eadvalue (like in the example above), but potentially in an unintended EAD message.
(Another registration policy could be that the eadvalues corresponding to ead_labels k and -k should have incompatible CDDL so the error is discovered ...) The EAD specification needs anyway to specify what are the processing rules and if you violate those more or less anything could happen. Comments?\nBTW, we may want a term for the ordered pair (eadlabel, eadvalue). The term \"EAD\" or \"EAD field\" is referring to the message field, which may consist of multiple (eadlabel, eadvalue).\nJust seems like overoptimization to me. I would prefer if we follow the model that we agreed on for C509 which is similar to what CoAP Options is using. If you think it is important to use the complete code point space I think it would be better not assign any functionality to the minus sign and register critical and non-critical idenpendently. The original proposal seems to give two different meanings to the minus sign. The minus value is either a criticallity toggle or a completly different EAD.\nIntroducing the term \"EAD item\" for (eadlabel, eadvalue)\nEdit: Let's include the main points of this PR in -15 as a input to the design team meeting.\nI made a proposal. The text now include the concepts \"EAD item\" and \"critical\"/\"non-critical\" EAD item. There is also more details about how applications makes use of EAD: each application registers its own eadlabels eadlabel is a positive integer associated to a particular eadvalue if the application transports different eadvalues then multiple eadlabels need to be registered the application specifies how EAD items are transported in EDHOC messages, a particular EAD item may be transported in multiple messages (i.e. different EADx). I'll merge this into -15 and look forward to comments!\nThere is a new PR suggesting this. There is no background discussion in the PR . There would be good with some use case. The definition of what critical need and how the recipient handles critical would need a lot of discussion and fine-tuning.The handling of critical extensions in C.509 is to my understanding confusing and problematic. Both in the specification and in implementations.\nPlease see for background discussion and use cases.\nSeems very unclear that the current PR can be the basis for a ECH type mechanism. PR should motivate why it is the basis for an ECH type mechanism or we should have some discussion regarding use cases for a general critical non-critical EAD mechanism. I do not see that follow from .\nHaving critical/non-critical EAD seems like a reasonable thing in its own, but it is not clear to me that it is a step on the way to a ECH mechanism in EDHOC.\nShould we add another column in IANA register? Type: always critical / always non-critical / application determines criticality\nIf we define such a Type, the following text can be removed from specification and replaced by IANA consideration: \"A specification registring a new EAD label MUST describe if the EAD item is always critical, always non-critical, or if it can be decided by the application in the sending endpoint.\"\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing. The consequences of EDHOC processing of critical / non-critical EAD items is described, but how the security application makes use of this property is left to the associated specification. 
(In particular, no registration of criticality in the IANA register.)\nThis is motivated by two considerations, the concerns for clients selecting a hidden service in a protected way (pinging NAME do you have a pointer to your hackathon project), and TLS's encrypted Client Hello (superseding eSNI, encrypted server name indication). The rough situation for either case is that a client has some out-of-band information about the server that includes a public key for encryption (even if the encryption key on its own may be insufficient to authenticate the server). As a straw man proposal (mainly as scaffolding for the \"how to enable it\" steps later), consider the following: (I bet that reading the will give valuable input for making this complete). To ensure that responders (who are, in their identity selection, the more exposed side) can't be probed for their support of encrypted EAD, encrypted EAD should be marked as \"elective\" in the sense that a server may ignore it if it does not support it. (It'd still become part of the transcript). If a server receives an EAD it can not decrypt (or which, after decryption, contains nothing valid, eg. because the wrong GE was used), it needs to just continue and pick the CREDR / IDCREDR it would always have picked. AIU we do not yet distinguish between EAD that are elective and those that are critical. I do not know whether we'll even have any \"critical\" ones (as the peer's lack of action on them should be enough indication that they were unsupported), but if we do distinguish (eg. along the positive/negative number line), encrypted EAD should be optional. This has the upside that initiators that can spare the few bytes can always include some random bytes in the encrypted EAD, and thus ensure that actual users of encrypted EAD do not stand out (or that connections with encrypted EAD would be blocked). It should be a design goal of encrypted EAD that any use of EAD can always be plausibly claimed to just be random bytes added because the specification says one should do that ever now and then. For a practical example, a vulnerable service (say, anything civil rights related) might be operated at the same IP address as a more wide-spread service (say, a large public LwM2M provider). The civil rights service would publish its address together with the public key GE and its service name in DNS records similar to . A client contacting it would start an EDHOC exchange with the public address and encrypt the service name with GEX in the EAD1. The server would respond with credentials for the civil rights service, to which the client then may either present an anonymous (CCS) identity, or (now that it knows it's speaking to the right peer) its own CREDI / IDCREDI. If the CRED sizes are always the same, this exchange should be indistinguishable from an exchange done between an LwM2M client and the LwM2M backend when the client has chosen to send some more garbage bytes. (For application such as eighthalve's, it should be noted that there is probably no need for any further proof of identity inside the EAD1 plaintext: Possession of GE in combination with the suitable plaintext should suffice to open the present alternative server credentials, any secret whose possession is proven in the plaintext part would be exactly as secret as GE, as no attacker without E can obtain the plaintext.). To condense this all into a few actionable points: We should have a code point in eadlabel for \"encrypted EAD\" data. 
(We may not even have to specify how it works, that can be specified later or even be guided by the out-of-band information that needs to be present for all of this anyway). That code point either needs to be described explicitly as \"MUST ignore if not supported / content not understood\" for all of EDHOC from the start, or there needs to be general wording about which ead_labels are to be ignored if unprocessable. We should encourage initiators to send some random garbage there if they can afford it. (A LoRA device certainly can't, a 6lo device probably neigther, but a guard proxy establishing a connection probably can). This is not only to protect initiators that need it, but also to ensure that it is not being blocked, and that responders don't reveal whether they support it.\nThis has parallels to , which will take some work to pick out in detail.\nI'm trying to understand what would need to be changed in EDHOC to support this functionality. The distinction between critical and elective EADs. This seems like a good general construct. Perhaps by designated subsets of labels, e.g. negative or even eablabel are elective. The use of a designated eablabel for encrypted EAD1 seems less general and is somewhat breaking with the mindset that one application is reserving a label and defines the associated processing. Is there a problem with specifying a particular (elective) EAD for this application and associate a policy that it is used by default, with garbage when there is nothing to send? (I couldn't see obfuscating eadlabel really achieved much in practice, but I may have missed something.)\nThe first, yes. As for the labels, if the application reserves a label and defines the processing, it is showing a passive attacker that it is being used; this may be undesirable. Applications should have some label behind which they can hide the fact that this application is being used; I think it is preferable if that is shared by as many applications as practical (including the \"it's really just garbage\" application that some hosts may want to run). Whether that requires that they also share an encryption mechanism or not, I am unsure.\nNAME at the IETF Hackathon, we put together a working proof of concept for URL Is that what you mean? That's not really related to ECH, but came out of MASQUE.\nNAME The three-party setup seems very useful in different settings, so there may be multiple use cases where R receives EAD1 intended for a third party and needs to know that to do with it. Currently eadlabel is filling that function. If we can assume that R by default has designated third party, or sufficient information about the third party is carried in EAD1, then eadlabel could probably be encrypted. (In principle, EAD1 could be a COSEEncrypt0 or any tagged COSE object, with information needed by R in the unprotected header.) So how do we handle the case when there are multiple candidate third parties and you want to hide both the intent and the third party identity? A designated eadlabel is possible (or two, if critical / elective is baked into the label) but I don\u2019t see how this label will be used other applications (so to acheive uncertainty about what is intended) because the co-existence of different candidate applications using EAD1 requires a method for R to distinguish between them. 
Maybe I'm missing something?\nFor the attacker to be unable to know which service the user is contacting, it would be important that an initiator receiving the credentials to the wrong service name fails in the same way has an initiator for which verification of the responder signature would fail. Otherwise, an active attacker could test if I intended to use the protected service by xoring the encrypted EAD1 with some random bytes. The responder would always see EAD1 as some random bytes and answer with the unprotected service, and then, if the initiator wanted to talk to the protected service, it will fail after decrypting the message and seeing the wrong service credentials if the initiator want to talk to the unprotected service, it will fail during the verification of the signature as the two transcript will not match. Those two failures should then be indistinguishable from the attacker point of view (even with timing attacks).\nAt least critical / elective EAD should be included in -15\nThe critical / elective logic could in principle be implemented in each application. It would be good to provide more use cases to support the case of making this a standardized feature. Clearly there is a benefit with tagging an EAD as \"critical\" since some operations require the information carried in the EAD to complete, e.g. authorization information from a trusted third party may be required for the Initiator to make any use of the authentication. But what EAD content is not critical? Is the following a good example for elective EAD? A service provider is allowing a variety of devices in its network to authenticate using EDHOC and grants different access rights depending on the capabilities and trustworthiness of the device. Later, the IETF specifies the EAD for remote attestation (see Appendix E). New devices which implements this EAD and appropriately attests to the attestation request carried in the EAD field get certain access which old devices doesn't get. But the old devices should still get basic access. If all EAD were critical then the old device which doesn't understand the new EAD would be forced to discontinue the protocol at the arrival of the remote attestation EAD. If the EAD is elective, then both old and new devices can be supported. Question: With this example in mind, is critical / elective a property of the EAD type, or isn't it rather a boolean set by the application? I can think of another use case where remote attestation is mandatory and it doesn't make sense to define different EADs for these two use cases. (Then we are somehow back at the application specified property. But it may still make sense to have a standardized way of expressing criticality.) Thoughts?\nThe critical/elective property always needs to be encoded in the message (as an LSB, or as the number's sign) no matter whether criticality is the sender's choice or whether it's part of the registration. From a recent chat I think that registering one number, and then allowing it to be used in positive (elective) or negative (critical) role as by the sender's choice. Application authors would describe in which criticality/-ies it makes sense to use their number, just as they describe in which of the messages it makes sense. (The alternative is to allow registering elective and critical options independently, and that application authors who do need to give the senders a choice of using either would register a pair of options; really would work just as well). 
Introducing RATS data seems like a very good use case to illustrate EAD examples. (In this particular case I don't know why a sender would insist the attestation to be processed into its network joining). To rephrase the original use cases with recently published terminology, we can refer to . That reference might go with a suggestion that devices that can afford sending a few more bytes should occasionally do so in a grease-style non-critical option, or maybe in a non-critical Encrypted-EAD1 option right away. I'd welcome if ace-ake-autz already defined an EAD field for what could resemble Encrypted-Client-Hello; AIU it currently refers to an older version of EDHOC. It could be updated to place LOCW and ENCID into separate EAD items, and the latter could be more generally defined as \"a container for encrypted data which the responder, possibly guided by other EAD1 fields, can use to come up with a suitable IDR\". (How encid is then constructed would not be a property of using that particular option, but follow from using the LOCW option). Then, there would be no need for a defined Grease field, as that ENCID field could serve the very same purpose.\nSince there seems to be a case for non-critical EAD I made PR . Please review. (Yes, ace-ake-authz needs to be updated, following the closing of this issue.)\nI did not read this discussion before now as it did not seem relevant for the base EDHOC protocol. The PR seems quite far from the use case described here. PR described a general critical, non-critical mechanism similar to X.509 extensions while the discussion in this issue is about an ECH type of mechanism..... PR does not seem to be the basis of a good ECH mechanism.... NAME could you have a short presentation at LAKE IETF 114? (edit: fixed Christian's github name)\nI think it is clear that PR does not solve the use case outlined here. In particular, the comment from NAME above is not considered as far as I can see. But the point about critical / non-critical (eadlabel, eadvalue) appeared in this discussion and merits to be specified as a general feature.\nI think it would be good with longer discussion about ECH type mechanisms in EDHOC. Might be added on later but it could also benefit from being added alrady in the base EDHOC specification. For a use case where the Initiator knows the long term key of the Responder the TLS ECH solution is a quite ugly hack that uses two different keys and an ineficient message flow. The use of a single key as discussed above is an improvement, but the use of a noise XX pattern is not optimal. If you want to design something for the use case where the Initiator knows the long term key of the Responder it makes more sense with noise IK which is used by WireGuard. If the use case where Initiator knows the long term key of the Responder is small the above approach makes sense. If is is big it might make more sense to define new method(s) based on the noise IK pattern.\nCould we have a design team meeting or an interim to discuss this issue?\nIn the use case that the Initiator gets the Responders static public key beforehand The Responder would in this case have a static DH key. 
The Initiator could use Static DH or a singature.\nPerhaps a breakout meeting during/just before IETF 114?\nClosing this since NAME agreed that the introduction of critical/non-critical EAD (PR , included in -15) in combination with an example using third party encrypted EAD (as in ace-ake-authz) resolves the issue.", "new_text": "Authorization Data\" under the new group name \"Ephemeral Diffie- Hellman Over COSE (EDHOC)\". The registration procedure is \"Specification Required\". The columns of the registry are Label, Description, and Reference, where Label is a positive integer and the other columns are text strings. 9.6."} {"id": "q-en-edhoc-e199f2edf9f10cf92a0c9f84bd86459f1cd718b8d3490ed82ed9aae17f065e4e", "old_text": "3.1. The EDHOC protocol consists of three mandatory messages (message_1, message_2, message_3) between Initiator and Responder, an optional fourth message (message_4), and an error message. All EDHOC messages are CBOR Sequences RFC8742, and are deterministically encoded. fig- flow illustrates an EDHOC message flow with the optional fourth message as well as the content of each message. The protocol elements in the figure are introduced in overview and asym. Message formatting and processing are specified in asym and error.", "comments": "URL I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement?\nYes. The document should talk about RPK a bit more as that is term that people might search for. Should be explained that CWT is the format EDHOC uses for raw public keys (RPK). Hi all: I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. 
For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement? Beyond these clarifications, I think the document is ready. Best Regards. Rafa Marin-Lopez, PhD Dept. Information and Communications Engineering (DIIC) Faculty of Computer Science-University of Murcia 30100 Murcia - Spain Telf: +34868888501 Fax: +34868884151 e-mail: EMAIL", "new_text": "3.1. The EDHOC protocol consists of three mandatory messages (message_1, message_2, message_3) between an Initiator and a Responder, an optional fourth message (message_4), and an error message. The roles have slightly different security properties which should be considered when the roles are assigned, see sec-prop. All EDHOC messages are CBOR Sequences RFC8742, and are deterministically encoded. fig-flow illustrates an EDHOC message flow with the optional fourth message as well as the content of each message. The protocol elements in the figure are introduced in overview and asym. Message formatting and processing are specified in asym and error."} {"id": "q-en-edhoc-e199f2edf9f10cf92a0c9f84bd86459f1cd718b8d3490ed82ed9aae17f065e4e", "old_text": "respectively, and the public authentication keys are denoted G_I and G_R, respectively. For X.509 certificates the authentication key is represented with a SubjectPublicKeyInfo field. For CWT and CCS (see auth-cred)) the authentication key is represented with a 'cnf' claim RFC8747 containing a COSE_Key RFC9052. 3.5.2. The authentication credentials, CRED_I and CRED_R, contain the public authentication key of the Initiator and the Responder, respectively. EDHOC relies on COSE for identification of credentials (see id_cred), for example X.509 certificates RFC5280, C509 certificates I-D.ietf- cose-cbor-encoded-cert, CWTs RFC8392 and CWT Claims Sets (CCS) RFC8392. 
When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. Since CRED_R is used in the integrity verification, see asym- msg2-proc, it needs to be specified such that it is identical when", "comments": "URL I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement?\nYes. The document should talk about RPK a bit more as that is term that people might search for. Should be explained that CWT is the format EDHOC uses for raw public keys (RPK). Hi all: I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. 
It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement? Beyond these clarifications, I think the document is ready. Best Regards. Rafa Marin-Lopez, PhD Dept. Information and Communications Engineering (DIIC) Faculty of Computer Science-University of Murcia 30100 Murcia - Spain Telf: +34868888501 Fax: +34868884151 e-mail: EMAIL", "new_text": "respectively, and the public authentication keys are denoted G_I and G_R, respectively. For X.509 certificates the authentication key is represented by a SubjectPublicKeyInfo field. For CWT and CCS (see auth-cred)) the authentication key is represented by a 'cnf' claim RFC8747 containing a COSE_Key RFC9052. In EDHOC, a raw public key (RPK) is an authentication key encoded as a COSE_Key wrapped in a CCS. 3.5.2. The authentication credentials, CRED_I and CRED_R, contain the public authentication key of the Initiator and the Responder, respectively. The authentication credential typically also contains other parameters that needs to be verified by the application, see auth- validation, and in particular information about the identity (\"subject\") of the endpoint to prevent misbinding attacks, see identities. EDHOC relies on COSE for identification of credentials (see id_cred), for example X.509 certificates RFC5280, C509 certificates I-D.ietf- cose-cbor-encoded-cert, CWTs RFC8392 and CWT Claims Sets (CCS) RFC8392. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. The Initiator and the Responder MAY use different types of authentication credentials, e.g., one uses an RPK and the other uses a public key certificate. Since CRED_R is used in the integrity verification, see asym- msg2-proc, it needs to be specified such that it is identical when"} {"id": "q-en-edhoc-e199f2edf9f10cf92a0c9f84bd86459f1cd718b8d3490ed82ed9aae17f065e4e", "old_text": "SHALL be the C509Certificate I-D.ietf-cose-cbor-encoded-cert. When the authentication credential is a CWT including a COSE_Key, then CRED_x SHALL be the untagged CWT. When the authentication credential includes a COSE_Key but is not in a CWT, CRED_x SHALL be an untagged CCS. Naked COSE_Keys are thus dressed as CCS when used in EDHOC, which is done by prefixing the COSE_Key with 0xA108A101. An example of a CRED_x is shown below:", "comments": "URL I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. 
However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement?\nYes. The document should talk about RPK a bit more as that is term that people might search for. Should be explained that CWT is the format EDHOC uses for raw public keys (RPK). Hi all: I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement? 
Beyond these clarifications, I think the document is ready. Best Regards. Rafa Marin-Lopez, PhD Dept. Information and Communications Engineering (DIIC) Faculty of Computer Science-University of Murcia 30100 Murcia - Spain Telf: +34868888501 Fax: +34868884151 e-mail: EMAIL", "new_text": "SHALL be the C509Certificate I-D.ietf-cose-cbor-encoded-cert. When the authentication credential is a CWT including a COSE_Key, CRED_x SHALL be the untagged CWT. When the authentication credential includes a COSE_Key but is not in a CWT, CRED_x SHALL be an untagged CCS. This is how RPKs are encoded, see fig-ccs for an example. Naked COSE_Keys are thus dressed as CCS when used in EDHOC, in its simplest form by prefixing the COSE_Key with 0xA108A101 (a map with a 'cnf' claim). In that case the resulting authentication credential contains no other identity than the public key itself, see identities. An example of a CRED_x is shown below:"} {"id": "q-en-edhoc-e199f2edf9f10cf92a0c9f84bd86459f1cd718b8d3490ed82ed9aae17f065e4e", "old_text": "protocol execution (specifically, cipher suite, see cs) but other parameters are only communicated and may not be negotiated (e.g., which authentication method is used, see method). Yet other parameters need to be known out-of-band. The purpose of an application profile is to describe the intended use of EDHOC to allow for the relevant processing and verifications to be", "comments": "URL I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement?\nYes. The document should talk about RPK a bit more as that is term that people might search for. Should be explained that CWT is the format EDHOC uses for raw public keys (RPK). 
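As a minimal, non-normative sketch of the CCS dressing mentioned in the new text above (a naked COSE_Key wrapped in a one-entry map with a 'cnf' claim by prefixing it with 0xA108A101); cose_key_bytes is assumed to already hold the deterministically encoded COSE_Key map, and the function name is illustrative only:

def naked_cose_key_to_ccs(cose_key_bytes: bytes) -> bytes:
    # 0xA1 0x08: map(1) with key 8 ('cnf' claim)
    # 0xA1 0x01: map(1) with key 1 ('COSE_Key' confirmation method)
    return bytes.fromhex("A108A101") + cose_key_bytes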
Hi all: I have been reading v17 and sent some comments to the authors, that I summarize here: 1) - Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had \"The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.\u201d So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems \u201ccredentials\" is defined as follows: \"EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthy. 2) I assume it is possible that any two IoT devices can act as initiator but also as a responder, even between same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems the EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchange some messages as \"some event that triggered the key update.\u201d Then the application call to EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because looking at IKEv2, for example, and for the rekey, an exchange is required. Although it would add a little bit more complexity, wouldn\u2019t it be sensical that EDHOC defines a 1 RTT to exchange a couple of nonces for the rekeying. Moreover, this function is defined in an Appendix. Is it mandatory to implement? Beyond these clarifications, I think the document is ready. Best Regards. Rafa Marin-Lopez, PhD Dept. Information and Communications Engineering (DIIC) Faculty of Computer Science-University of Murcia 30100 Murcia - Spain Telf: +34868888501 Fax: +34868884151 e-mail: EMAIL", "new_text": "protocol execution (specifically, cipher suite, see cs) but other parameters are only communicated and may not be negotiated (e.g., which authentication method is used, see method). Yet other parameters need to be known out-of-band. The application decides which endpoint is Initiator and which is Responder. The purpose of an application profile is to describe the intended use of EDHOC to allow for the relevant processing and verifications to be"} {"id": "q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87", "old_text": "Transport of external authorization data. EDHOC is designed to encrypt and integrity protect as much information as possible. Symmetric keys and random material derived using EDHOC_KDF are derived with as much previous information as possible, see fig-edhoc-kdf. EDHOC is furthermore designed to be as compact and lightweight as possible, in terms of message sizes, processing, and the ability to reuse already existing CBOR, COSE, and CoAP libraries. Like in (D)TLS, authentication is the responsibility of the application. EDHOC identifies (and optionally transports)", "comments": "I strongly think that all 10 EDHOC-KDF should be in the same table The text about maclengthx need to use the defined term hash_length\nI made one commit for review.", "new_text": "Transport of external authorization data. 
EDHOC is designed to encrypt and integrity protect as much information as possible. Symmetric keys and random material used in EDHOC are derived using EDHOC_KDF with as much previous information as possible, see fig-edhoc-kdf. EDHOC is furthermore designed to be as compact and lightweight as possible, in terms of message sizes, processing, and the ability to reuse already existing CBOR, COSE, and CoAP libraries. Like in (D)TLS, authentication is the responsibility of the application. EDHOC identifies (and optionally transports)"} {"id": "q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87", "old_text": "length, the output length in bits. fig-edhoc-kdf lists derivations made with EDHOC_KDF during message processing, where hash_length - length of output size of the EDHOC hash algorithm of the selected cipher suite", "comments": "I strongly think that all 10 EDHOC-KDF should be in the same table The text about maclengthx need to use the defined term hash_length\nI made one commit for review.", "new_text": "length, the output length in bits. fig-edhoc-kdf lists derivations made with EDHOC_KDF, where hash_length - length of output size of the EDHOC hash algorithm of the selected cipher suite"} {"id": "q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87", "old_text": "4.1.3. The pseudorandom key PRK_out, derived as shown in fig-edhoc-kdf is the output session key of a successful EDHOC exchange. Keys for applications are derived using EDHOC_Exporter from PRK_exporter (see exporter) which in turn is derived from PRK_out. For the purpose of generating application keys, it is sufficient to store PRK_out or PRK_exporter. (Note that the word \"store\" used here does not imply that the application has access to the plaintext PRK_out since that may be reserved for code within a Trusted Execution Environment, see impl-cons). 4.2. This section defines EDHOC_Exporter in terms of EDHOC_KDF and PRK_out. A key update function is defined in keyupdate. 4.2.1.", "comments": "I strongly think that all 10 EDHOC-KDF should be in the same table The text about maclengthx need to use the defined term hash_length\nI made one commit for review.", "new_text": "4.1.3. The pseudorandom key PRK_out, derived as shown in fig-edhoc-kdf, is the output session key of a successful EDHOC exchange. Keys for applications are derived using EDHOC_Exporter (see exporter) from PRK_exporter, which in turn is derived from PRK_out as shown in fig-edhoc-kdf. For the purpose of generating application keys, it is sufficient to store PRK_out or PRK_exporter. (Note that the word \"store\" used here does not imply that the application has access to the plaintext PRK_out since that may be reserved for code within a Trusted Execution Environment, see impl-cons). 4.2. This section defines EDHOC_Exporter in terms of EDHOC_KDF and PRK_exporter. A key update function is defined in keyupdate. 4.2.1."} {"id": "q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87", "old_text": "length is a uint defined by the application PRK_exporter is derived from PRK_out: where hash_length denotes the output size in bytes of the EDHOC hash algorithm of the selected cipher suite. The (exporter_label, context) pair used in EDHOC_Exporter must be unique, i.e., an (exporter_label, context) MUST NOT be used for two different purposes. 
However an application can re-derive the same", "comments": "I strongly think that all 10 EDHOC-KDF should be in the same table The text about maclengthx need to use the defined term hash_length\nI made one commit for review.", "new_text": "length is a uint defined by the application The (exporter_label, context) pair used in EDHOC_Exporter must be unique, i.e., an (exporter_label, context) MUST NOT be used for two different purposes. However an application can re-derive the same"} {"id": "q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87", "old_text": "(method equals 1 or 3), then mac_length_2 is the EDHOC MAC length of the selected cipher suite. If the Responder authenticates with a signature key (method equals 0 or 2), then mac_length_2 is equal to the output size of the EDHOC hash algorithm of the selected cipher suite. ID_CRED_R - identifier to facilitate the retrieval of CRED_R, see", "comments": "I strongly think that all 10 EDHOC-KDF should be in the same table The text about maclengthx need to use the defined term hash_length\nI made one commit for review.", "new_text": "(method equals 1 or 3), then mac_length_2 is the EDHOC MAC length of the selected cipher suite. If the Responder authenticates with a signature key (method equals 0 or 2), then mac_length_2 is equal to hash_length. ID_CRED_R - identifier to facilitate the retrieval of CRED_R, see"} {"id": "q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87", "old_text": "(method equals 2 or 3), then mac_length_3 is the EDHOC MAC length of the selected cipher suite. If the Initiator authenticates with a signature key (method equals 0 or 1), then mac_length_3 is equal to the output size of the EDHOC hash algorithm of the selected cipher suite. ID_CRED_I - identifier to facilitate the retrieval of CRED_I, see", "comments": "I strongly think that all 10 EDHOC-KDF should be in the same table The text about maclengthx need to use the defined term hash_length\nI made one commit for review.", "new_text": "(method equals 2 or 3), then mac_length_3 is the EDHOC MAC length of the selected cipher suite. If the Initiator authenticates with a signature key (method equals 0 or 1), then mac_length_3 is equal to hash_length. ID_CRED_I - identifier to facilitate the retrieval of CRED_I, see"} {"id": "q-en-external-psk-design-team-7e1119db1ab6ca52f916ccfec2dc73bc2049ab4cf70e87a6fc279a8f9ce2ceaf", "old_text": "4. Applications MUST use external PSKs that adhere to the following requirements: Each PSK MUST be derived from at least 128 of entropy and MUST be at least 128-bits long unless the TLS handshake is being used with a separate key establishment mechanism such as a Diffie-Hellman exchange. This recommendation protects against passive attacks using exhaustive search of the PSK. Each PSK MUST NOT be shared between with more than two logical nodes. As a result, an agent that acts as both a client and a", "comments": "This builds on .\nSummary: use a high-entropy PSK with DH, otherwise, with a low-entropy PSK, use a PAKE. Reference appendix with security properties from NAME", "new_text": "4. Applications MUST adhere to the following requirements for external PSKs: Each PSK SHOULD be derived from at least 128 bits of entropy, MUST be at least 128 bits long, and SHOULD be combined with a DH exchange for forward secrecy. Low entropy PSKs, i.e., those derived from less than 128 bits of entropy, MUST be combined with a Password Authenticated Key Exchange (PAKE) mechanism. 
Each PSK MUST NOT be shared between with more than two logical nodes. As a result, an agent that acts as both a client and a"} {"id": "q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337", "old_text": "violated, then the security properties of TLS are severely weakened. As discussed in use-cases, there are use cases where it is desirable for multiple clients or multiple servers share a PSK. If this is done naively by having all members share a common key, then TLS only authenticates the entire group, and the security of the overall system is inherently rather brittle. There are a number of obvious", "comments": "FIxed up: some nits on recommendations. I think the wording was a bit off on low entropy PSKs, so I lifted from ekrs text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME", "new_text": "violated, then the security properties of TLS are severely weakened. As discussed in use-cases, there are use cases where it is desirable for multiple clients or multiple servers to share a PSK. If this is done naively by having all members share a common key, then TLS only authenticates the entire group, and the security of the overall system is inherently rather brittle. There are a number of obvious"} {"id": "q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337", "old_text": "appearing in cleartext in a ClientHello. As a result, a passive adversary can link two or more connections together that use the same external PSK on the wire. Applications should take precautions when using external PSKs if these risks. In addition to linkability in the network, external PSKs are intrinsically linkable by PSK receivers. Specifically, servers can", "comments": "FIxed up: some nits on recommendations. I think the wording was a bit off on low entropy PSKs, so I lifted from ekrs text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME", "new_text": "appearing in cleartext in a ClientHello. As a result, a passive adversary can link two or more connections together that use the same external PSK on the wire. Applications should take precautions when using external PSKs to mitigate these risks. In addition to linkability in the network, external PSKs are intrinsically linkable by PSK receivers. Specifically, servers can"} {"id": "q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337", "old_text": "constrained UI. Moreover, PSK production lacks guidance unlike user passwords. Some devices are provisioned PSKs via an out-of-band, cloud-based syncing protocol. Some secrets may be baked into or hardware or software device", "comments": "FIxed up: some nits on recommendations. 
I think the wording was a bit off on low entropy PSKs, so I lifted from ekrs text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME", "new_text": "constrained UI. Moreover, PSK production lacks guidance unlike user passwords. Some devices provision PSKs via an out-of-band, cloud-based syncing protocol. Some secrets may be baked into or hardware or software device"} {"id": "q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337", "old_text": "Each PSK SHOULD be derived from at least 128 bits of entropy, MUST be at least 128 bits long, and SHOULD be combined with a DH exchange for forward secrecy. Low entropy PSKs, i.e., those derived from less than 128 bits of entropy, MUST be combined with a Password Authenticated Key Exchange (PAKE) mechanism. Each PSK MUST NOT be shared between with more than two logical nodes. As a result, an agent that acts as both a client and a", "comments": "FIxed up: some nits on recommendations. I think the wording was a bit off on low entropy PSKs, so I lifted from ekrs text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME", "new_text": "Each PSK SHOULD be derived from at least 128 bits of entropy, MUST be at least 128 bits long, and SHOULD be combined with a DH exchange for forward secrecy. As discussed in sec-properties, low entropy PSKs, i.e., those derived from less than 128 bits of entropy, are subject to attack and SHOULD be avoided. Low entropy keys are only secure against active attack if a Password Authenticated Key Exchange (PAKE) is used with TLS. Each PSK MUST NOT be shared between with more than two logical nodes. As a result, an agent that acts as both a client and a"} {"id": "q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337", "old_text": "6.1. Most major TLS implementations support external PSKs. And all have a common interface that applications may use when supplying them for individual connections. Details about existing stacks at the time of writing are below. OpenSSL and BoringSSL: Applications specify support for external PSKs via distinct ciphersuites. They also then configure callbacks that are invoked for PSK selection during the handshake. These callbacks must provide a PSK identity (as a character string) and key (as a byte string). (If no identity is provided, a default one is assumed.) They are typically invoked with a PSK hint, i.e., the hint provided by the server as per RFC4279. The PSK length is validated to be between [1, 256] bytes upon selection. mbedTLS: Client applications configure PSKs before creating a connection by providing the PSK identity and value inline. Servers must implement callbacks similar to that of OpenSSL. PSK lengths are validate to be between [1, 16] bytes. 
gnuTLS: Applications configure PSK values, either as raw byte strings or hexadecimal strings. The PSK size is not validated. wolfSSL: Applications configure PSKs with callbacks similar to OpenSSL.", "comments": "FIxed up: some nits on recommendations. I think the wording was a bit off on low entropy PSKs, so I lifted from ekrs text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME", "new_text": "6.1. Most major TLS implementations support external PSKs. Stacks supporting external PSKs provide interfaces that applications may use when supplying them for individual connections. Details about existing stacks at the time of writing are below. OpenSSL and BoringSSL: Applications specify support for external PSKs via distinct ciphersuites. They also then configure callbacks that are invoked for PSK selection during the handshake. These callbacks must provide a PSK identity and key. The exact format of the callback depends on the negotiated TLS protocol version with new callback functions added specifically to OpenSSL for TLS 1.3 RFC8446 PSK support. The PSK length is validated to be between [1, 256] bytes. The PSK identity may be up to 128 bytes long. mbedTLS: Client applications configure PSKs before creating a connection by providing the PSK identity and value inline. Servers must implement callbacks similar to that of OpenSSL. Both PSK identity and key lengths may be between [1, 16] bytes long. gnuTLS: Applications configure PSK values, either as raw byte strings or hexadecimal strings. The PSK identity and key size are not validated. wolfSSL: Applications configure PSKs with callbacks similar to OpenSSL."} {"id": "q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337", "old_text": "Deployments should take care that the length of the PSK identity is sufficient to avoid obvious collisions. [[OPEN ISSUE: discuss implication of collisions between external and resumption PSKs.]] 7.", "comments": "FIxed up: some nits on recommendations. I think the wording was a bit off on low entropy PSKs, so I lifted from ekrs text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME", "new_text": "Deployments should take care that the length of the PSK identity is sufficient to avoid obvious collisions. 6.1.2. It is possible, though unlikely, that an external PSK identity may clash with a resumption PSK identity. The TLS stack implementation and sequencing of PSK callbacks influences the application's behaviour when identity collisions occur. When a server receives a PSK identity in a TLS 1.3 ClientHello, some TLS stacks execute the application's registered callback function before checking the stack's internal session resumption cache. 
This means that if a PSK identity collision occurs, the application will be given precedence over how to handle the PSK. 7."} {"id": "q-en-fec-e099b030d69f4011facfa36b3b18b70a5bf6349a9b335847aaebd859bb630211", "old_text": "indicated by congestion control and the receiver, this will lead to less bandwidth available for the primary encoding, even when the redundant data is not being used. This is in contrast to methods like RTX RFC4588 or flexfec I-D.ietf-payload-flexible-fec-scheme retransmissions, which only transmit redundant data when necessary, at the cost of an extra roundtrip. Given this, WebRTC implementations SHOULD consider using RTX or flexfec retransmissions instead of FEC when RTT is low, and SHOULD", "comments": "Ah, I see now. I think what would help is \"in contrast to methods like RTX or flexfec (when retransmissions are used) [I-D.]\"", "new_text": "indicated by congestion control and the receiver, this will lead to less bandwidth available for the primary encoding, even when the redundant data is not being used. This is in contrast to methods like RTX RFC4588 or flexfec's retransmission mode ( I-D.ietf-payload- flexible-fec-scheme, Section 1.1.7), which only transmit redundant data when necessary, at the cost of an extra roundtrip. Given this, WebRTC implementations SHOULD consider using RTX or flexfec retransmissions instead of FEC when RTT is low, and SHOULD"} {"id": "q-en-fec-56e05d85ce9386491a7d09d7f84dec4411775ff42b22d221099ebf96169edbcb", "old_text": "This approach, as described in RFC2198, allows for redundant data to be piggybacked on an existing primary encoding, all in a single packet. This redundant data may be an exact copy of a previous packet, or for codecs that support variable-bitrate encodings, possibly a smaller, lower-quality representation. In certain cases, the redundant data could include multiple prior packets. Since there is only a single set of packet headers, this approach allows for a very efficient representation of primary + redundant", "comments": "It's similar to the redundant encoding, but done within the Opus bitstream, rather than at the RTP level.", "new_text": "This approach, as described in RFC2198, allows for redundant data to be piggybacked on an existing primary encoding, all in a single packet. This redundant data may be an exact copy of a previous payload, or for codecs that support variable-bitrate encodings, possibly a smaller, lower-quality representation. In certain cases, the redundant data could include encodings of multiple prior audio frames. Since there is only a single set of packet headers, this approach allows for a very efficient representation of primary + redundant"} {"id": "q-en-fec-56e05d85ce9386491a7d09d7f84dec4411775ff42b22d221099ebf96169edbcb", "old_text": "Some audio codecs, notably Opus RFC6716 and AMR RFC4867, support their own in-band FEC mechanism, where redundant data is included in the codec payload. For Opus, packets deemed as important are re-encoded at a lower bitrate and added to the subsequent packet, allowing partial recovery of a lost packet. This scheme is fairly efficient; experiments performed indicate that when Opus FEC is used, the overhead imposed is about 20-30%, depending on the amount of protection needed. Note that this mechanism can only carry redundancy information for the immediately preceding packet; as such the decoder cannot fully recover multiple consecutive lost packets, which can be a problem on wireless networks. See RFC6716, Section 2.1.7 for complete details. 
For AMR/AMR-WB, packets can contain copies or lower-quality encodings of multiple prior audio frames. This mechanism is similar to the redundant encoding mechanism described above, but as it adds no additional framing, it can be slightly more efficient. See RFC4867, Section 3.7.1 for details on this mechanism. In-band FEC mechanisms cannot recover any of the RTP header.", "comments": "It's similar to the redundant encoding, but done within the Opus bitstream, rather than at the RTP level.", "new_text": "Some audio codecs, notably Opus RFC6716 and AMR RFC4867, support their own in-band FEC mechanism, where redundant data is included in the codec payload. This is similar to the redundant encoding mechanism described above, but as it adds no additional framing, it can be slightly more efficient. For Opus, audio frames deemed important are re-encoded at a lower bitrate and appended to the next payload, allowing partial recovery of a lost packet. This scheme is fairly efficient; experiments performed indicate that when Opus FEC is used, the overhead imposed is only about 20-30%, depending on the amount of protection needed. Note that this mechanism can only carry redundancy information for the immediately preceding audio frame; as such the decoder cannot fully recover multiple consecutive lost packets, which can be a problem on wireless networks. See RFC6716, Section 2.1.7 for complete details. For AMR/AMR-WB, packets can contain copies or lower-quality encodings of multiple prior audio frames. See RFC4867, Section 3.7.1 for details on this mechanism. In-band FEC mechanisms cannot recover any of the RTP header."} {"id": "q-en-fec-56e05d85ce9386491a7d09d7f84dec4411775ff42b22d221099ebf96169edbcb", "old_text": "Support for a SSRC-multiplexed flexfec stream to protect a given RTP stream SHOULD be indicated by including one of the formats described in I-D.ietf-payload-flexible-fec-scheme, Section 5.1, as an additional supported media type for the associated m= section in the SDP offer RFC3264. As mentioned above, when BUNDLE is used, only a single flexfec repair stream will be created for each BUNDLE group,", "comments": "It's similar to the redundant encoding, but done within the Opus bitstream, rather than at the RTP level.", "new_text": "Support for a SSRC-multiplexed flexfec stream to protect a given RTP stream SHOULD be indicated by including one of the formats described in I-D.ietf-payload-flexible-fec-scheme, Section 5.1.2, as an additional supported media type for the associated m= section in the SDP offer RFC3264. As mentioned above, when BUNDLE is used, only a single flexfec repair stream will be created for each BUNDLE group,"} {"id": "q-en-gnap-core-protocol-d0caac17c305044caf3a7e5f12a55b18ec2b9d119a23455cf87bee2daeeaeb1b", "old_text": "result MAY be used by the client instance in the request- capabilities of the request. OPTIONAL. A list of the AS's interaction methods. The values of this list correspond to the possible fields in the request- interact of the request. OPTIONAL. A list of the AS's supported key proofing mechanisms. The values of this list correspond to possible values of the", "comments": "The discovery fields for the interaction methods were still in a single list, even though those have now been split into start and finish sections.\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! 
:hammer: Explore the source changes: 754d971b07753b2b9580c2c6f0b3c8997c34b260 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYes, makes sense", "new_text": "result MAY be used by the client instance in the request- capabilities of the request. OPTIONAL. A list of the AS's interaction start methods. The values of this list correspond to the possible values for the request-interact-start of the request. OPTIONAL. A list of the AS's interaction finish methods. The values of this list correspond to the possible values for the method element of the request-interact-finish of the request. OPTIONAL. A list of the AS's supported key proofing mechanisms. The values of this list correspond to possible values of the"} {"id": "q-en-gnap-core-protocol-29977017836148c4db89647fad7e1bc89392f01c0f573806f8e9358e5132634e", "old_text": "Mutual TLS certificate verification OAuth Demonstration of Proof-of-Possession key proof header HTTP Signing signature header OAuth PoP key proof authentication header", "comments": "remove OAuth DPoP binding. Described why on IETF mailing list: only works for asymmetric keys, requires key be presented in the header (duplicating information from GNAP messages). It was never meant to be a general purpose signing mechanism, though the FAPI group in OIDF is considering it as an option in current proposed work.\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: c380eb747945a8002c19de9784e86bee713951a0 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nOk that's weird (just removed some lines...), will have a look\nOk that was due to my rust analyser in vscode, fixed it now", "new_text": "Mutual TLS certificate verification HTTP Signing signature header OAuth PoP key proof authentication header"} {"id": "q-en-gnap-core-protocol-29977017836148c4db89647fad7e1bc89392f01c0f573806f8e9358e5132634e", "old_text": "7.3.4. This method is indicated by \"dpop\" in the \"proof\" field. The signer creates a Demonstration of Proof-of-Possession signature header as described in I-D.ietf-oauth-dpop section 2. In addition, this specification defines the following fields to be added to the DPoP payload: Digest of the request body as the value of the Digest header defined in RFC3230. When a request contains a message body, such as a POST or PUT request, this field is REQUIRED. 
In this example, the request body is the following JSON object: ~~~ { \"access_token\": { \"access\": [ \"dolphin-metadata\" ] }, \"interact\": { \"start\": [\"redirect\"], \"finish\": { \"method\": \"redirect\", \"uri\": \"https://client.foo/callback\", \"nonce\": \"VJLO6A4CAYLBXHTR0KRO\" } }, \"client\": { \"proof\": \"dpop\", \"key\": { \"jwk\": { \"kid\": \"gnap-rsa\", \"kty\": \"RSA\", \"e\": \"AQAB\", \"alg\": \"RS256\", \"n\": \"hYOJ- XOKISdMMShn_G4W9m20mT0VWtQBsmBBkI2cmRt4Ai8Bf YdHsFzAtYKOjpBR1RpKpJmVKxIGNy0g6Z3ad2XYsh8KowlyVy8IkZ8NMwSrcUIBZG YXjHpwjzvfGvXH_5KJlnR3_uRUp4Z4Ujk2bCaKegDn11V2vxE41hqaPUnhRZxe0jR ETddzsE3mu1SK8dTCROjwUl14mUNo8iTrTm4n0qDadz8BkPo-uv4BC0bunS0K3bA_ 3UgVp7zBlQFoFnLTO2uWp_muLEWGl67gBq9MO3brKXfGhi3kOzywzwPTuq-cVQDyE N7aL0SxCb3Hc4IdqDaMg8qHUyObpPitDQ\" } } \"display\": { \"name\": \"My Client Display Name\", \"uri\": \"https://client.foo/\" }, } } ~~~ The JOSE header contains the following parameters, including the public key: The JWS Payload contains the following JWT claims, including a hash of the body: This results in the following full HTTP message request: The verifier MUST parse and validate the DPoP proof header as defined in I-D.ietf-oauth-dpop. If the HTTP message request includes a message body, the verifier MUST calculate the digest of the body and compare it to the \"htd\" value. The verifier MUST ensure the key presented in the DPoP proof header is the same as the expected key of the signer. 7.3.5. This method is indicated by \"httpsig\" in the \"proof\" field. The sender creates an HTTP Message Signature as described in I-D.ietf- httpbis-message-signatures.", "comments": "remove OAuth DPoP binding. Described why on IETF mailing list: only works for asymmetric keys, requires key be presented in the header (duplicating information from GNAP messages). It was never meant to be a general purpose signing mechanism, though the FAPI group in OIDF is considering it as an option in current proposed work.\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: c380eb747945a8002c19de9784e86bee713951a0 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nOk that's weird (just removed some lines...), will have a look\nOk that was due to my rust analyser in vscode, fixed it now", "new_text": "7.3.4. This method is indicated by \"httpsig\" in the \"proof\" field. The sender creates an HTTP Message Signature as described in I-D.ietf- httpbis-message-signatures."} {"id": "q-en-gnap-core-protocol-29977017836148c4db89647fad7e1bc89392f01c0f573806f8e9358e5132634e", "old_text": "The verifier MUST validate the signature against the expected key of the signer. 7.3.6. This method is indicated by \"oauthpop\" in the \"proof\" field. The signer creates an HTTP Authorization PoP header as described in I-", "comments": "remove OAuth DPoP binding. Described why on IETF mailing list: only works for asymmetric keys, requires key be presented in the header (duplicating information from GNAP messages). It was never meant to be a general purpose signing mechanism, though the FAPI group in OIDF is considering it as an option in current proposed work.\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! 
Built :hammer: Explore the source changes: c380eb747945a8002c19de9784e86bee713951a0 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nOk that's weird (just removed some lines...), will have a look\nOk that was due to my rust analyser in vscode, fixed it now", "new_text": "The verifier MUST validate the signature against the expected key of the signer. 7.3.5. This method is indicated by \"oauthpop\" in the \"proof\" field. The signer creates an HTTP Authorization PoP header as described in I-"} {"id": "q-en-gnap-core-protocol-2a1e1eec093752e6ee4a178fa3cc732cc485c7435c9001747b0dcea0ebddc59e", "old_text": "If the client instance is capable of directing the end-user to a URL defined by the AS at runtime, the client instance indicates this by sending the \"redirect\" field with the boolean value \"true\". The means by which the client instance will activate this URL is out of scope of this specification, but common methods include an HTTP redirect, launching a browser on the end-user's device, providing a scannable image encoding, and printing out a URL to an interactive console. While this URL is generally hosted at the AS, the client instance can make no assumptions about its contents, composition, or relationship to the AS grant URL. If this interaction mode is supported for this client instance and request, the AS returns a redirect interaction response response-", "comments": "The boolean value \"true\" is no longer used to specify supported interaction start modes.\nDeploy Preview for gnap-core-protocol-editors-draft ready! Built Explore the source changes: ef948b6b0b53f92352ec617e6074174e0a2eaa3f Inspect the deploy log: Browse the preview:", "new_text": "If the client instance is capable of directing the end-user to a URL defined by the AS at runtime, the client instance indicates this by including \"redirect\" in the array under the \"start\" key. The means by which the client instance will activate this URL is out of scope of this specification, but common methods include an HTTP redirect, launching a browser on the end-user's device, providing a scannable image encoding, and printing out a URL to an interactive console. While this URL is generally hosted at the AS, the client instance can make no assumptions about its contents, composition, or relationship to the AS grant URL. If this interaction mode is supported for this client instance and request, the AS returns a redirect interaction response response-"} {"id": "q-en-gnap-core-protocol-2a1e1eec093752e6ee4a178fa3cc732cc485c7435c9001747b0dcea0ebddc59e", "old_text": "If the client instance can open a URL associated with an application on the end-user's device, the client instance indicates this by sending the \"app\" field with boolean value \"true\". The means by which the client instance determines the application to open with this URL are out of scope of this specification.", "comments": "The boolean value \"true\" is no longer used to specify supported interaction start modes.\nDeploy Preview for gnap-core-protocol-editors-draft ready! Built Explore the source changes: ef948b6b0b53f92352ec617e6074174e0a2eaa3f Inspect the deploy log: Browse the preview:", "new_text": "If the client instance can open a URL associated with an application on the end-user's device, the client instance indicates this by including \"app\" in the array under the \"start\" key. 
The means by which the client instance determines the application to open with this URL are out of scope of this specification."} {"id": "q-en-gnap-core-protocol-2a1e1eec093752e6ee4a178fa3cc732cc485c7435c9001747b0dcea0ebddc59e", "old_text": "If the client instance is capable of displaying or otherwise communicating a short, human-entered code to the RO, the client instance indicates this by sending the \"user_code\" field with the boolean value \"true\". This code is to be entered at a static URL that does not change at runtime. While this URL is generally hosted at the AS, the client instance can make no assumptions about its contents, composition, or relationship to the AS grant URL. If this interaction mode is supported for this client instance and", "comments": "The boolean value \"true\" is no longer used to specify supported interaction start modes.\nDeploy Preview for gnap-core-protocol-editors-draft ready! Built Explore the source changes: ef948b6b0b53f92352ec617e6074174e0a2eaa3f Inspect the deploy log: Browse the preview:", "new_text": "If the client instance is capable of displaying or otherwise communicating a short, human-entered code to the RO, the client instance indicates this by including \"user_code\" in the array under the \"start\" key. This code is to be entered at a static URL that does not change at runtime. While this URL is generally hosted at the AS, the client instance can make no assumptions about its contents, composition, or relationship to the AS grant URL. If this interaction mode is supported for this client instance and"} {"id": "q-en-gnap-core-protocol-4bf0277751cbe2a32afeffc201882e0870d31da584a2c29e4eef301384ee9d47", "old_text": "ability given to a subject to perform a given operation on a resource under the control of an RS. person, organization or device. statement asserted by an AS about a subject.", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Built Explore the source changes: 0d7a49a6be4a9740f850d2477b1f2462de781b10 Inspect the deploy log: Browse the preview:\nSection 2.2 is currently called: Then the text continues with: This means that this section is currently restricted to the RO. Since this call is restricted to the RO, the title of this section should be changed into: Requesting RO Information. In addition, the reason(s) for requesting such information is not mentioned and should be explained. As a side-consequence, a user has no on-line means to know which personal data the AS knows. This comment also impacts the content of section 1.4.6 (Requesting User Information). This topic has been first addressed under the issue which is still open. It would be wise to allow such an on-line access so that the user can know which attributes are known by an AS and then choose which attributes to insert into an access token.\nMaybe we'll need a bit of rewording, although I'm not sure the proposed changes help so much. I just think we should put more emphasis (and state early on) what is shortly described at the end of section 4, i.e. that here RO = end-user\nRO = end-user is not the general case. The usefulness of supporting Requesting RO Information is not crystal clear and should be indicated, but there is an interest to support Requesting User Information. 
While I agree with you that \"we'll need a bit of rewording\", simply stating that RO = end-user in section 4 will not solve the issue.\nI notice that states It also shows that there is no RS involve and diagram uses label: User (not RO) To my understanding End-user is \"logged in\" to the client instance. Given role definition: If we don't have RS involved, so we also don't have any Protected Resource here, does it really make sense to speak of Resource Owner?\nYes, that's why we talk about the \"user\" here. Notice that currently, this term is used when we don't know the exact role it will play in the protocol (end user or RO).\nIf that's the case for why does speak about Resource Owner? Again in 2.2 we also don't seem to have Resource Server or Protected Resource involved.\nThe rationale is : in the intro of section 4, gnap allows a large variety of cases. GNAP can handle UMA2 types of scenarii (hence 1.4.6 and a few other places) but in gnap core we mostly detail the case where end-user = RO. Hence what we explain about the subject, and it makes sense to limite the diffusion of confidential information about the RO (the most limiting case, hence 2.2 and a few other places) That's actually a quite subtle difference, but i think it makes good sense.\nGiven above definitions maybe it would make sense to broaden definition of RO to: subject entity that may grant or deny operations on resources or obtaining assertions it has authority upon. Since in the User acting as RO doesn't grant or deny operations on resources (served by RS) but only grants / denies obtaining an assertion.\nI agree that the specific language that NAME references should be tightened up. What's a little awkward right now is that GNAP's language treats access to an RS and access to information contained in assertions as protected resources, but it doesn't do so particularly cleanly due to several major text revisions that have changed things a little out of sync with each other. Both data from an RS (using an access token) and assertions/identifiers passed to the client are targeted at the client itself, so in that way they're the same nature, a protected resource after a fashion and both controlled by an RO of some kind. What's different is that the info passed back directly is usually assumed to be :about: the end-user, because that's what the client is calling about in just about all the use cases that are known. Technically all of the information, either from an RS or subject information passed directly, is controlled by the RO. The end-user can get involved through interaction and act as the RO. The case where the RO doesn't equal the end-user is useful in many situations but leads to unexpected weirdness when you're talking about the subject information coming back.\nFair point. We might start by listing what needs to be changed.\nI tend to think of the AS as having a separate service endpoint for RO requests because they are the ones that can edit AS policy. A service endpoint that is accessible to End User requests would be \"separate\" and could be used by anyone including the RO when they want the protection of least-privilege. This distinction may be unhelpful in the general case of explaining GNAP so I don't mean to overstate it.\nNAME when you say \"service endpoint\" are you thinking of something that's interactive and user-facing (so a webpage) or something that's a callable API?\nNAME I really don't know enough to answer purely technical questions. 
I think mostly in terms of privacy engineering and look to this group for security and devops perspective. Therefore, I focus on the AS as the primary, if not the sole, entity that stores and executes RO policy. The AS takes requests and turns them into access tokens. The requests might be via (RAR) API or a web form. Either way, this request interface is likely unsecured. Some requests present credentials while others might add an authentication flow. That begs the issue of how did the AS get the RO's policies to begin with, long before a request comes in? I would define whoever sets the AS policies as the RO (regardless of any particular RS-RO relationship). A so-called \"policy manager API\" has been under discussion in UMA for about a year because at least one implementer wants to author policies on a mobile app and standardize the way they are uploaded to the AS. Back to Justin's technical question, I'm trying to implement the shortest distance between OAuth 2 and GNAP in a way that I can understand and explain to others. I'm doing this in FastAPI (because I don't know much beyond elementary Python, because it has built-in OpenAPI documentation, and has some OAuth 2 and JWT functionality). I'm deploying (automagically) to DigitalOcean App Platform in order to be clear about my various security and build shortcuts. I've designed the demo as two separate FastAPI server domains for the RS and AS. A \"global\" CouchDB instance is deployed for RO policy storage and any state that needs to be saved as part of the protocol demos. -- URL (the AS) and running at https://ognap-URL -- URL and running at https://ognap-rs-URL -- URL The protected resource is going to be_ just a self-verifying vaccine credential (I invented) -- URL -- Some of this is running at URL I've been stuck trying to answer Justin's exact question for quite a while. I think I need to set up the AS (OGNAP) to deliver a PWA to the RO so they can edit policies locally in PouchDB and sync the polices with the CouchDB the AS can access when a request comes in. That forces me to learn URL or something like it. Aside from my long-term goal to upgrade the HIE of One Trustee demo from UMA to GNAP, I'm also hoping to show the W3C and DIF protocol groups how easy it could be to mitigate significant OAuth 2 human rights concerns using GNAP. Any suggestions, help, moral support would be really welcome.\nback to NAME and NAME : looking again on this. I guess we could define what we mean by operations. Getting an assertion is an operation, in my view. Or give it as an example. What do you think ?\nI think it comes down to whether or not we consider the \"subject information\", which includes both identifiers and assertions, to be \"resources\" or not. I always thought they are, but they are definitely a different style of resource than what comes from the RS so maybe we need better wording around it. We talked about it a bit in the long terminology discussion a year ago, but I don't think we ever really landed on a term that fully captures \"stuff sent directly to the client by the AS and not accessed through the RS\". Identifiers and assertions are the two categories where there's clear demand, but other things could happen through extensions as well.\nDoes this issue also have to align with authentication flows that build on GNAP? 
a subject presents a DID or other identifier to a relying party (RP) the RP dereferences the identifier to an AS they trust a request for assertion is made to the AS the AS issues a liveness challenge to the subject the AS returns an assertion and attributes to the RP (the attributes might be accessed indirectly via an access token returned with the assertion) How is SIOP planning to handle this?\nIn the current draft, section 2.2 : \"The AS can determine the RO's identity and permission for releasing this information through interaction with the RO (Section 4), AS policies, or assertions presented by the client instance (Section 2.4). If this is determined positively, the AS MAY return the RO's information in its response (Section 3.4) as requested.\" Currently, a resource is just a short cut for \"Protected resource\", which really refers to an API served by a RS. So we have a mismatch (or at least a missing case) in the RO definition (as pointed out by NAME and NAME I've tried some sentences to specify subtypes for resources, but it becomes very confusing. The proposal by NAME to define the RO as \"subject entity that may grant or deny operations on resources or obtaining assertions it has authority upon.\" seems slightly incorrect to me, because the assertion is not made by the RO, but by the AS. What the RO grants to the AS is access to its attributes (and then the AS may carry out further due diligence to derive assertions, for instance through AuthN flows as suggested by NAME The decision to disclose or not its own attributes is not restricted to the RO, it's actually something under the control of every subject. So here's what I'd suggest to solve the issue (new part in bold): update the definition of \"Subject: person, organization or device. It decides whether and under which conditions its attributes can be disclosed to other parties.\" question 1 : should we limit this to physical persons? (from a legal perspective, that makes sense, but then wouldn't an org want to restrict what it shares, cf issues around data sovereignty?). I think the principle could be generic. question 2 : I've added under which conditions because we already discuss that in the text (manual consent, automatic policy, etc.), plus it's not only a yes/no, it also concerns what those other parties can do with it, including further sharing those attributes, cf the case mentioned by NAME where the AS sends them to the RP. Feedback welcome !\nsection refers to the Subject exclusively as RO. I'm still uncertain if the subject who in interaction only decides to disclose some attributes in an but not any \"Protected resources\" (an API served by RS) is considered to act as RO. If that's the case what would be the case where the subject decides to disclose some attributes but doesn't act as RO?\nI'm not entirely sure about the use case you're targeting here. Could you be more specific ?\nMaybe I shouldn't have mixed two questions together. Let's focus on one of them first. section always refers to the Subject as RO. In your previous comment you mentioned that: When we would have a situation where a Subject is deciding to disclose its own attributes but they don't act as RO?\nIn theory section 4 allows other situations (at the beginning of the section). But the only case that's detailed is the case where it is indeed the RO.\nThanks, now for the second part of my question. In we find: In this scenario, while no RS is participating, so no Protected Resources are being accessed. 
We still have Shouldn't definition of RO should still be broadened from current Keeping in mind It might still be something in direction of subject entity that may grant or deny operations on resources or obtaining assertions containing attributes it has authority upon. In short, if RO plays role in . There seems to be more to that role than just granting or denying operations on resources (an API served by RS).\nMy view is that the attribute part is covered by the fact that the RO is a subject (obtaining them is covered by the subject definition now). The main rationale for the protocol is still related to access to protected resources, and I believe we would dilute that otherwise.\nIn that case is there any reason for to still include: I think 4 could be removed altogether and 5 could become (after renumbering) This section states in the first sentence that it does not involve RS and access to protected resources (an API served by RS)\nWe still need to make sure that the user is indeed the RO, hence step 4.\nI still consider that the information about the subject is counted among the \"Resources\" being released here. The only real difference is that instead of them being at an RS, they're passed directly to the client instance.\nYes, this didn't change. I'm only saying I don't think it needs to be embedded into the RO definition, since we can assume from the subject definition itself that some mechanism exists to grab the subject attributes data if it consents to. Changing the RO definition or worse the (protected) resource definition makes the entire thing convoluted, at least in every variation I tried.\nNAME Right, and I think I agree with that. The definitions should try to stay tight, if we can. But if the surrounding text, especially the \"subject information\" request stuff isn't clear, then this is going to come around again. I don't have a specific answer but I would be surprised if we can't improve this, since it's come up a few times.\nI am supportive of elf-pavlik when he writes: fimbault agrees with him when he says: When looking at Section 3.4 that is indicated in that section, the following text may be found. If information about the RO is requested and the AS grants the client instance access to that data, the AS returns the approved information in the \"subject\" response field. The case where information is about the end-user is not addressed, i.e. there is no sentence starting with: If information about the end-user is requested (...) Does this mean that an end-user is not allowed to know which personal data is known by the AS about him ? If this would be the case, such a limitation would not comply with generally agreed privacy principles. The text is still not fixed in draft-08. What are the benefits for a client to know information about the RO ? This may be crystal clear for the editor(s), but it is unclear for a lambda reader.\nBased on recent conversations, especially URL it seems to me that the intention is to consider the subject as RO with regards to the information about them. I think this shows up clearly in since it does not involve any Protected Resources but it stills call the user RO. It might be helpful to document scenarios where there is more than one AS involved. Let's say we have Alice using a client to access resources owned by Bob. 
Here we could have two different AS: AS providing user information about Alice to the client - here Alice would act as both End-user and RO and Bob not be involved at all AS providing access_token to the client allowing access for protected resources - here Alice would act as End-user and Bob as RO Does that scenario with two different Authorization Servers sound reasonable?\nNAME Those two both make sense, and it makes sense even for them to be together. You could have those be the same AS as well, if Bob's resource server is protected by the same server Alice is using to log in to the client. But they're different kinds of access, and it would make sense to talk about them in a single example scenario to help call out that difference. There's also a weird case from the CIBA work in the OIDF: Alice uses client software to find out who Bob is. The canonical example given here is a call center: Alice needs to verify that the person calling her is actually Bob, and so Alice starts something with the AS that reaches out to Bob to approve the release of identity information about Bob to Alice, in real time.\nNAME how does Alice know which AS can release identity information about Bob? I would not make an assumption that one global central AS exists and it has information about the whole Earth's population?\nNAME This use case is not assuming a global AS at all (I agree, that's absurd), but it is assuming that Alice and Bob are both working with a known ecosystem. In the case of the call center example, it's the supporting company's AS -- where Alice works and where Bob is a customer. So, Bob needs to have an account there, and so does Alice. Alice's account needs to be authorized to get info about other users (because she's a support tech). Bob's account is a customer account that Alice can fetch information from anyway.\nI see, thank you. It indeed looks like another use case where Bob acts as RO but there is no RS or Protected Resource involved.\nI've encountered the CIBA use case very recently. Here's a summarised description : the enduser gets in touch (out of band) with the support, so that both are available at the same time the support needs to get read access to some info stored on the enduser's mobile, and uses CIBA to get the enduser's approval returned scopes define what can be accessed through another OIDC client (used by the support agent) there are no RS involved Note : this issue is closed, so either we should reopen it or open a new issue\nNAME Agreed, we've drifted off of the topic of the issue as well. I opened up a new issue for discussion of the cross-user CIBA use case specifically:", "new_text": "ability given to a subject to perform a given operation on a resource under the control of an RS. person, organization or device. It decides whether and under which conditions its attributes can be disclosed to other parties. statement asserted by an AS about a subject."} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "as far as the overall protocol is concerned. A single role need not be deployed as a monolithic service. For example, A client instance could have components that are installed on the end user's device as well as a back-end system that it communicates with. If both of these components participate in the delegation protocol, they are both considered part of the client", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! 
Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "as far as the overall protocol is concerned. A single role need not be deployed as a monolithic service. For example, a client instance could have components that are installed on the end user's device as well as a back-end system that it communicates with. If both of these components participate in the delegation protocol, they are both considered part of the client"} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "In some circumstances, the information needed at a given stage is communicated out of band or is preconfigured between the components or entities performing the roles. For example, one entity can fulfil multiple roles, and so explicit communication between the roles is not necessary within the protocol flow. Additionally some components may not be involved in all use cases. For example, a client instance could be calling the AS just to get direct user information and have no need to get an access token to call an RS. 1.5.1.", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "In some circumstances, the information needed at a given stage is communicated out of band or is preconfigured between the components or entities performing the roles. For example, one entity can fulfill multiple roles, and so explicit communication between the roles is not necessary within the protocol flow. Additionally some components may not be involved in all use cases. For example, a client instance could be calling the AS just to get direct user information and have no need to get an access token to call an RS. 1.5.1."} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "The client instance stores the continuation information from (2) for use in (8) and (10). The client instance then interaction- redirect given by the AS in (2). The user's directs their browser to the user code URI. This URI is stable and can be communicated via the client software's documentation, the AS documentation, or the client software itself. Since it is assumed that the RO will interact with the AS through a secondary device, the client instance does not provide a", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "The client instance stores the continuation information from (2) for use in (8) and (10). The client instance then interaction- usercode given by the AS in (2). The users directs their browser to the user code URI. This URI is stable and can be communicated via the client software's documentation, the AS documentation, or the client software itself. Since it is assumed that the RO will interact with the AS through a secondary device, the client instance does not provide a"} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "The client instance request. The client instance does not send any interaction modes to the server. The AS determines that the request is been authorized, the AS grants access to the information in the form of response-token to the client instance. 
Note that response-subject is not generally applicable in this use case, as there is no user involved. The client instance use-access-token to call the RS.", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "The client instance request. The client instance does not send any interaction modes to the server. The AS determines that the request has been authorized, the AS grants access to the resource in the form of response-token to the client instance. Note that response-subject is not generally applicable in this use case, as there is no user involved. The client instance use-access-token to call the RS."} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "the RS again. The RS validates the access token and determines that the access token is expired The RS responds to the client instance with an error. The client instance calls the token management URI returned in (2)", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "the RS again. The RS validates the access token and determines that the access token is expired. The RS responds to the client instance with an error. The client instance calls the token management URI returned in (2)"} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "2. To start a request, the client instance sends RFC8259 document with an object as its root. Each member of the request object represents a different aspect of the client instance's request. Each field is described in detail in a section below. Additional members of this request object can be defined by extensions to this protocol as described in A non-normative example of a grant request is below:", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "2. To start a request, the client instance sends a RFC8259 document with an object as its root. Each member of the request object represents a different aspect of the client instance's request. Each field is described in detail in a section below. Additional members of this request object can be defined by extensions to this protocol as described in request-extending. A non-normative example of a grant request is below:"} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "to determine appropriate key information, the client instance can send this instance identifier as a direct reference value in lieu of the \"client\" object. The instance identifier MAY be assigned to a client instance at runtime through the response-dynamic-handles or MAY be obtained in another fashion, such as a static registration process at the AS. When the AS receives a request with an instance identifier, the AS MUST ensure that the key used to binding-keys is associated with the", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! 
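As a companion to the instance-identifier text above, here is a hedged sketch of the same kind of request sent by reference rather than by value: the client field carries only the instance identifier previously assigned by the AS, and the access entry is a string reference. Both the identifier and the reference string are invented for illustration.

  {
    "access_token": {
      "access": ["backend-api"]
    },
    "client": "client-3891-instance"
  }

The AS resolves the instance identifier to the registered key and checks that the signature on the request was made with that key before processing the request.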
Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "to determine appropriate key information, the client instance can send this instance identifier as a direct reference value in lieu of the \"client\" object. The instance identifier MAY be assigned to a client instance at runtime through a grant response (response- dynamic-handles) or MAY be obtained in another fashion, such as a static registration process at the AS. When the AS receives a request with an instance identifier, the AS MUST ensure that the key used to binding-keys is associated with the"} {"id": "q-en-gnap-core-protocol-c0e9ef10ba3de003166e5ae2deeaa3375169145543a4e2bd80fd332ef7dad002", "old_text": "In response to a client instance's request, the AS responds with a JSON object as the HTTP entity body. Each possible field is detailed in the sections below In this example, the AS is returning an response-interact-redirect, a response-interact-finish, and a response-continue.", "comments": "Deploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 180278a02620d6e193b78d72e2dae6d0aa5c5c4e Inspect the deploy log: Browse the preview:", "new_text": "In response to a client instance's request, the AS responds with a JSON object as the HTTP entity body. Each possible field is detailed in the sections below. In this example, the AS is returning an response-interact-redirect, a response-interact-finish, and a response-continue."} {"id": "q-en-gnap-core-protocol-41b3b6ca5f3c5ae8314580335ee01c81d5d115e4c9b2d7eee04e37bab805eaf6", "old_text": "it can request-interact-usercode. The AS determines that interaction is needed and response with a response-interact-usercode. This could optionally include a URI to direct the user to, but this URI should be static and so could be configured in the client instance's documentation. The AS also includes information the client instance will need to response- continue in (8) and (10). The AS associates this continuation information with an ongoing request that will be referenced in (4), (6), (8), and (10). The client instance stores the continuation information from (2) for use in (8) and (10). The client instance then interaction-", "comments": "Update all diagrams to use AASVG for rendering in HTML.\nTo edit notification comments on pull requests, go to your ._", "new_text": "it can request-interact-usercode. The AS determines that interaction is needed and response with a response-interact-usercode. The AS also includes information the client instance will need to response-continue in (8) and (10). The AS associates this continuation information with an ongoing request that will be referenced in (4), (6), (8), and (10). The client instance stores the continuation information from (2) for use in (8) and (10). The client instance then interaction-"} {"id": "q-en-gnap-core-protocol-41b3b6ca5f3c5ae8314580335ee01c81d5d115e4c9b2d7eee04e37bab805eaf6", "old_text": "any interaction modes to the server, indicating that it does not expect to interact with the RO. The client instance can also signal which RO it requires authorization from, if known, by using the request-user. The AS determines that interaction is needed, but the client instance cannot interact with the RO. 
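The record above describes a grant response carrying a redirect interaction start, a finish nonce, and a continuation section. A hedged, non-normative sketch of such a response body might look like the following; all URIs, the nonce, and the token value are placeholders, and the exact member layout should be checked against the draft text itself.

  {
    "interact": {
      "redirect": "https://as.example.com/interact/4CF492MLVMSW9MKM",
      "finish": "MBDOFXG4Y5CVJCX821LH"
    },
    "continue": {
      "access_token": {
        "value": "80UPRY5NM33OMUKMKSKU"
      },
      "uri": "https://as.example.com/continue",
      "wait": 30
    }
  }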
The AS response with the", "comments": "Update all diagrams to use AASVG for rendering in HTML.\nTo edit notification comments on pull requests, go to your ._", "new_text": "any interaction modes to the server, indicating that it does not expect to interact with the RO. The client instance can also signal which RO it requires authorization from, if known, by using the request-user. It's also possible for the AS to determine which RO needs to be contacted by the nature of what access is being requested. The AS determines that interaction is needed, but the client instance cannot interact with the RO. The AS response with the"} {"id": "q-en-gnap-core-protocol-abab5feb85377aa5d88af10f76909cd96b7331911a1754009d852ccb72ef8139", "old_text": "response-token-single, the proof of that key MUST be used when presenting the access token. A key presented by value MUST be a public key in at least one supported format. If a key is sent in multiple formats, all the key format values MUST be equivalent. Note that while most formats present the full value of the public key, some formats present a value cryptographically derived from the public key. See additional discussion of public keys in security-symmetric. A key presented by value MUST be a public key in at least one supported format. If a key is sent in multiple formats, all the key format values MUST be equivalent. Note that while most formats present the full value of the public key, some formats present a value cryptographically derived from the public key. Additional key formats are defined in the IANA-key-formats. This non-normative example shows a single key presented in multiple formats. This example key is intended to be used with the httpsig- binding proofing mechanism, as indicated by the \"httpsig\" value of the \"proof\" field. 7.1.1.", "comments": "After discussion with NAME , this PR now makes keys single-format-only as previously suggested.\nTo edit notification comments on pull requests, go to your ._\nYaron: why are we sending a JWK as well as a cert? Are we checking that the cert contains the same public key as the cert? Justin: I would be OK with the different formats being mutually exclusive somehow. The AS would need to check that they\u2019re the same and throw an error if not. But this would also let the RC send over whatever it\u2019s got for its own keys every time.\nThe document has been changed to address this issue, copying my text that refers to the latest draft version: In Sec. 7.1 we allow the client to include multiple keys of different formats, provided they are \"equivalent\". This doesn't make sense, because the only reason to send multiple keys is that the sender suspects that the recipient doesn't understand one of the formats. But if that's the case, the recipient is not able to validate the MUST requirement that the two formats be equivalent. This could also lead to interesting key injection attacks. IMO we should only allow one key value/format for each request.", "new_text": "response-token-single, the proof of that key MUST be used when presenting the access token. A key presented by value MUST be a public key and MUST be presented in one and only one supported format, as discussed in security- multiple-key-formats. Note that while most formats present the full value of the public key, some formats present a value cryptographically derived from the public key. See additional discussion of the presentation of public keys in security-symmetric. Additional key formats are defined in the IANA-key-formats. 
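To illustrate the single-format rule discussed above, a key presented by value carries exactly one format member alongside its proofing method. The sketch below, which is not quoted from the draft, shows a key object using only the JWK format (again reusing the RFC 7517 example EC key); presenting the same key a second time in another format within the same request would be an error under this rule.

  {
    "key": {
      "proof": "httpsig",
      "jwk": {
        "kty": "EC",
        "crv": "P-256",
        "kid": "client-key-1",
        "x": "MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",
        "y": "4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM"
      }
    }
  }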
This non-normative example shows a single key presented in two different formats. This example key is intended to be used with the httpsig-binding proofing mechanism, as indicated by the \"httpsig\" value of the \"proof\" field. As a JSON Web Key: As a certificate in PEM format: 7.1.1."} {"id": "q-en-gnap-core-protocol-abab5feb85377aa5d88af10f76909cd96b7331911a1754009d852ccb72ef8139", "old_text": "SSRF is somewhat more difficult to manage at runtime, and systems should generally refuse to fetch a URI if unsure. 13. The privacy considerations in this section are modeled after the list", "comments": "After discussion with NAME , this PR now makes keys single-format-only as previously suggested.\nTo edit notification comments on pull requests, go to your ._\nYaron: why are we sending a JWK as well as a cert? Are we checking that the cert contains the same public key as the cert? Justin: I would be OK with the different formats being mutually exclusive somehow. The AS would need to check that they\u2019re the same and throw an error if not. But this would also let the RC send over whatever it\u2019s got for its own keys every time.\nThe document has been changed to address this issue, copying my text that refers to the latest draft version: In Sec. 7.1 we allow the client to include multiple keys of different formats, provided they are \"equivalent\". This doesn't make sense, because the only reason to send multiple keys is that the sender suspects that the recipient doesn't understand one of the formats. But if that's the case, the recipient is not able to validate the MUST requirement that the two formats be equivalent. This could also lead to interesting key injection attacks. IMO we should only allow one key value/format for each request.", "new_text": "SSRF is somewhat more difficult to manage at runtime, and systems should generally refuse to fetch a URI if unsure. 12.33. Keys presented by value are allowed to be in only a single format, as discussed in key-format. Presenting the same key in multiple formats is not allowed and is considered an error in the request. If multiple keys formats were allowed, receivers of these key definitions need to be able to make sure that it's the same key represented in each field and not simply use one of the key formats without checking for equivalence. If equivalence were not carefully checked, it is possible for an attacker to insert their own key into one of the formats without needing to have control over the other formats. This could potentially lead to a situation where one key is used by part of the system (such as identifying the client instance) and a different key in a different format in the same message is used for other things, like calculating signature validity. To combat this, all keys presented by value have to be in exactly one supported format known by the receiver. Normally, a client instance is going to be configured with its keys in a single format, and it will simply present that format as-is to the AS in its request. A client instance capable of multiple formats can use discovery to determine which formats are supported, if desired. An AS should be generous in supporting many different key formats to allow different types of client software and client instance deployments. 13. 
The privacy considerations in this section are modeled after the list"} {"id": "q-en-gnap-core-protocol-9040bdf0e39eadb770f1e8bfea2510ee83ad2c419d5e630863d3df74f3b40a7a", "old_text": "An example set of protocol messages for this method can be found in example-async. 1.6.5. In this example flow, the AS policy allows the client instance to", "comments": "Adds the field for requesting information for a specific user, separate from identifying the current end-user. Also adds a security considerations section addressing asynchronous authorization methods.\nTo edit notification comments on pull requests, go to your .\nNormally we think about releasing subject information being about the current end-user, but that's not necessarily universal. The use case from CIBA: Alice is a support engineer with a company Bob is a customer of that company Bob calls the company and talks to Alice Alice needs to authenticate, from her system, that Bob is the person tied to the account in question Alice's client messages the AS to request Bob's information (probably including an identifier for Bob in the request) The AS reaches out to Bob to approve Alice's client to get Bob's information Bob approves Alice's request (probably on a pre-enrolled device) The AS sends Bob's information back to Alice's client Note a few key assumptions with this: At no point does Alice's client think that Bob is logging in. Alice and Bob both need accounts of some type at the AS Alice's account (or client) need the rights to request someone else's info Bob's approval mechanism needs to be tied to his account ahead of time (since Bob isn't running the client and the AS needs to reach out to him out of band) Bob's approval needs to be clearly directed to tell Bob that it's Alice (and alice's client) asking for his info Alice might need to log in / interact with the AS as part of this, or it might be tied to the client instance (maybe Alice has logged in for the day and that association is remembered by the AS?)\nThis is an interesting use case. Yet CIBA doesn't necessarily require an AS to work, as long as one is only interested in getting scopes. Is there a generic use case for cross-user subject information + required AS? Or maybe we should explain the added value of the AS in that specific case ?\nWe will add an example in section D that shows how this would work and verify that the language around subject identifiers is sufficiently robust to encapsulate this. This might also add additional security and privacy considerations when used in this mode because multiple people are now involved explicitly.\nI think this could be marked as NEEDS TEXT ?\nI think it mostly just needs to be added as an example, and I can do that.\nFine ;-) (on my side, I always tend to consider CIBA as some degenerate phishing experiment, but maybe that's too extreme)", "new_text": "An example set of protocol messages for this method can be found in example-async. Additional considerations for asynchronous interactions like this are discussed in security-async. 1.6.5. In this example flow, the AS policy allows the client instance to"} {"id": "q-en-gnap-core-protocol-9040bdf0e39eadb770f1e8bfea2510ee83ad2c419d5e630863d3df74f3b40a7a", "old_text": "returned and it might not match what the client instance requested, see the section on subject information for details. 2. To start a request, the client instance sends a RFC8259 document with", "comments": "Adds the field for requesting information for a specific user, separate from identifying the current end-user. 
Also adds a security considerations section addressing asynchronous authorization methods.\nTo edit notification comments on pull requests, go to your .\nNormally we think about releasing subject information being about the current end-user, but that's not necessarily universal. The use case from CIBA: Alice is a support engineer with a company Bob is a customer of that company Bob calls the company and talks to Alice Alice needs to authenticate, from her system, that Bob is the person tied to the account in question Alice's client messages the AS to request Bob's information (probably including an identifier for Bob in the request) The AS reaches out to Bob to approve Alice's client to get Bob's information Bob approves Alice's request (probably on a pre-enrolled device) The AS sends Bob's information back to Alice's client Note a few key assumptions with this: At no point does Alice's client think that Bob is logging in. Alice and Bob both need accounts of some type at the AS Alice's account (or client) need the rights to request someone else's info Bob's approval mechanism needs to be tied to his account ahead of time (since Bob isn't running the client and the AS needs to reach out to him out of band) Bob's approval needs to be clearly directed to tell Bob that it's Alice (and alice's client) asking for his info Alice might need to log in / interact with the AS as part of this, or it might be tied to the client instance (maybe Alice has logged in for the day and that association is remembered by the AS?)\nThis is an interesting use case. Yet CIBA doesn't necessarily require an AS to work, as long as one is only interested in getting scopes. Is there a generic use case for cross-user subject information + required AS? Or maybe we should explain the added value of the AS in that specific case ?\nWe will add an example in section D that shows how this would work and verify that the language around subject identifiers is sufficiently robust to encapsulate this. This might also add additional security and privacy considerations when used in this mode because multiple people are now involved explicitly.\nI think this could be marked as NEEDS TEXT ?\nI think it mostly just needs to be added as an example, and I can do that.\nFine ;-) (on my side, I always tend to consider CIBA as some degenerate phishing experiment, but maybe that's too extreme)", "new_text": "returned and it might not match what the client instance requested, see the section on subject information for details. 1.6.8. In this scenario, the end user and resource owner are two different people. In this scenario, the client instance already knows who the end user is, likely through a separate authentication process. The end user, operating the client instance, needs to get subject information about another person in the system, the RO. The RO is given an opportunity to release this information using an asynchronous interaction method with the AS. This scenario would apply, for instance, when the end user is an agent in a call-center and the resource owner is a customer authorizing the call center agent to access their account on their behalf. Precondition: The end user is authenticated to the client instance, and the client instance has an identifier representing the end user that it can present to the AS. This identifier should be unique to the particular session with the client instance and the AS. The RO communicates a human-readable identifier to the end user, such as an email address or account number. 
This communication happens out of band from the protocol, such as over the phone between parties. Note that the RO is not interacting with the client instance. The end user communicates the identifier to the client instance. The means by which the identifier is communicated to the client instance is out of scope for this specification. The client instance request. The request includes the RO's identifier in the request-subject \"sub_ids\" field, and the end user's identifier in the request-user of the request. The request includes no interaction start methods, since the end user is not expected to be the one interacting with the AS. The request does include the request-interact-callback-push to allow the AS to signal to the client instance when the interaction with the RO has concluded. The AS sees that the identifier for the end user and subject being requested are different. The AS determines that it can reach out to the RO asynchronously for approval. While it is doing so, the AS returns a response-continue with a \"finish\" nonce to allow the client instance to keep polling after interaction with the RO has concluded. The AS contacts the RO and has them authenticate to the system. The means for doing this are outside the scope of this specification, but the identity of the RO is known from the subject identifier sent in (3). The RO is prompted to authorize the end user's request via the client instance. Since the end user was identified in (3) via the user field, the AS can show this information to the RO during the authorization request. The RO completes the authorization with the AS. The AS marks the request as . The RO pushes the interaction-pushback to the client instance. Note that in the case the RO cannot be reached or the RO denies the request, the AS still sends the interaction finish message to the client instance, after which the client instance can negotiate next steps if possible. The client instance validates the interaction finish message and continue-after-interaction. The AS returns the RO's response-subject to the client instance. The client instance can display or otherwise utilize the RO's user information in its session with the end user. Note that since the client instance requested different sets of user information in (3), the client instance does not conflate the end user with the RO. Additional considerations for asynchronous interactions like this are discussed in security-async. 2. To start a request, the client instance sends a RFC8259 document with"} {"id": "q-en-gnap-core-protocol-9040bdf0e39eadb770f1e8bfea2510ee83ad2c419d5e630863d3df74f3b40a7a", "old_text": "12.34. An attacker may aim to gain access to confidential or sensitive resources. The measures for hardening and monitoring resource server systems (beyond protection with access tokens) is out of the scope of", "comments": "Adds the field for requesting information for a specific user, separate from identifying the current end-user. Also adds a security considerations section addressing asynchronous authorization methods.\nTo edit notification comments on pull requests, go to your .\nNormally we think about releasing subject information being about the current end-user, but that's not necessarily universal. 
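The call-center scenario described in this record can be sketched as the following non-normative grant request. The subject section names the RO (the caller) whose information is being requested via the sub_ids field, the user section identifies the end user (the agent) with an opaque session identifier, and the interact section carries only a push finish method with no start modes. The subject identifier structures shown here assume the Subject Identifiers convention (format/email, format/opaque); all concrete values and the client reference are invented for illustration.

  {
    "subject": {
      "sub_ids": [
        { "format": "email", "email": "bob@customer.example" }
      ]
    },
    "user": {
      "sub_ids": [
        { "format": "opaque", "id": "agent-session-83d1" }
      ]
    },
    "interact": {
      "finish": {
        "method": "push",
        "uri": "https://client.example.net/finish/q9fk2",
        "nonce": "VJLO6A4CAYLBXHTR0KRO"
      }
    },
    "client": "call-center-instance-1"
  }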
The use case from CIBA: Alice is a support engineer with a company Bob is a customer of that company Bob calls the company and talks to Alice Alice needs to authenticate, from her system, that Bob is the person tied to the account in question Alice's client messages the AS to request Bob's information (probably including an identifier for Bob in the request) The AS reaches out to Bob to approve Alice's client to get Bob's information Bob approves Alice's request (probably on a pre-enrolled device) The AS sends Bob's information back to Alice's client Note a few key assumptions with this: At no point does Alice's client think that Bob is logging in. Alice and Bob both need accounts of some type at the AS Alice's account (or client) need the rights to request someone else's info Bob's approval mechanism needs to be tied to his account ahead of time (since Bob isn't running the client and the AS needs to reach out to him out of band) Bob's approval needs to be clearly directed to tell Bob that it's Alice (and alice's client) asking for his info Alice might need to log in / interact with the AS as part of this, or it might be tied to the client instance (maybe Alice has logged in for the day and that association is remembered by the AS?)\nThis is an interesting use case. Yet CIBA doesn't necessarily require an AS to work, as long as one is only interested in getting scopes. Is there a generic use case for cross-user subject information + required AS? Or maybe we should explain the added value of the AS in that specific case ?\nWe will add an example in section D that shows how this would work and verify that the language around subject identifiers is sufficiently robust to encapsulate this. This might also add additional security and privacy considerations when used in this mode because multiple people are now involved explicitly.\nI think this could be marked as NEEDS TEXT ?\nI think it mostly just needs to be added as an example, and I can do that.\nFine ;-) (on my side, I always tend to consider CIBA as some degenerate phishing experiment, but maybe that's too extreme)", "new_text": "12.34. GNAP allows the RO to be contacted by the AS asynchronously, outside the regular flow of the protocol. This allows for some advanced use cases, such as cross-user authentication or information release, but such advanced use cases have some distinct issues that implementors need to be fully aware of before using these features. First, in many applications, the return of a subject information to the client instance could indicate to the client instance that the end-user is the party represented by that information, functionally allowing the end-user to authenticate to the client application. While the details of a fully functional authentication protocol are outside the scope of GNAP, it is a common exercise for a client instance to be requesting information about the end user. This is facilitated by the several interaction-start defined in GNAP that allow the end user to begin interaction directly with the AS. However, when the subject of the information is intentionally not the end-user, the client application will need some way to differentiate between requests for authentication of the end user and requests for information about a different user. Confusing these states could lead to an attacker having their account associated with a privileged user. 
Client instances can mitigate this by having distinct code paths for primary end user authentication and requesting subject information about secondary users, such as in a call center. In such use cases, the client software used by the resource owner (the caller) and the end-user (the agent) are generally distinct, allowing the AS to differentiate between the agent's corporate device making the request and the caller's personal device approving the request. Second, RO's interacting asynchronously do not usually have the same context as an end user in an application attempting to perform the task needing authorization. As such, the asynchronous requests for authorization coming to the RO from the AS might have very little to do with what the RO is doing at the time. This situation can consequently lead to authorization fatigue on the part of the RO, where any incoming authorization request is quickly approved and dispatched without the RO making a proper verification of the request. An attacker can exploit this fatigue and get the RO to authorize the attacker's system for access. To mitigate this, AS systems deploying asynchronous authorization should only prompt the RO when the RO is expecting such a request, and significant user experience engineering efforts need to be employed to ensure the RO can clearly make the appropriate security decision. Furthermore, audit capability, and the ability to undo access decisions that may be ongoing, is particularly important in the asynchronous case. An attacker may aim to gain access to confidential or sensitive resources. The measures for hardening and monitoring resource server systems (beyond protection with access tokens) is out of the scope of"} {"id": "q-en-gnap-core-protocol-6e016107e354128ee57cb310998343681c95d2d54bdf1b684b38ee890602595b", "old_text": "4.2. If an interaction response-interact-finish method is associated with the current request, the AS MUST follow the appropriate method at upon completion of interaction in order to signal the client instance to continue, except for some limited error cases discussed below. If a finish method is not available, the AS SHOULD instruct the RO to return to the client instance upon completion. The AS MUST create an interaction reference and associate that", "comments": "Just two minor typos I stumbled across.\nBuilt To edit notification comments on pull requests, go to your .", "new_text": "4.2. If an interaction response-interact-finish method is associated with the current request, the AS MUST follow the appropriate method upon completion of interaction in order to signal the client instance to continue, except for some limited error cases discussed below. If a finish method is not available, the AS SHOULD instruct the RO to return to the client instance upon completion. The AS MUST create an interaction reference and associate that"} {"id": "q-en-gnap-core-protocol-6e016107e354128ee57cb310998343681c95d2d54bdf1b684b38ee890602595b", "old_text": "appropriate cryptographic processes to ensure the integrity of the assertion. For example, when SAML 2 assertions are used, the receiver hast to parse an XML document. There are many well-known security vulnerabilities in XML parsers, and the XML standard itself can be attacked through the use of processing instructions and entity", "comments": "Just two minor typos I stumbled across.\nBuilt To edit notification comments on pull requests, go to your .", "new_text": "appropriate cryptographic processes to ensure the integrity of the assertion. 
For example, when SAML 2 assertions are used, the receiver has to parse an XML document. There are many well-known security vulnerabilities in XML parsers, and the XML standard itself can be attacked through the use of processing instructions and entity"} {"id": "q-en-gnap-core-protocol-0ef488bcbd2be1a504c69d9f4125bcd76fcdd2a5050f56c36a2bb6448474678d", "old_text": "usercode and the AS supports this mode for the client instance's request, the AS responds with a \"user_code\" field. This field is string containing a unique short code that the user can type into a web page. This string MUST be case-insensitive, MUST consist of only easily typeable characters (such as letters or numbers). The time in which this code will be accepted SHOULD be short lived, such as several minutes. It is RECOMMENDED that this code be no more than eight characters in length. The client instance MUST communicate the \"user_code\" value to the end", "comments": "clarify that user codes are unguessable, - update user code examples,\nTo edit notification comments on pull requests, go to your .\nThis is very minor, but there's a contradiction between the usercode provided in the example in section 3.3.3. and the recommendations that are mentioned in the text above it. The text says: And However, the example provided below it goes against these two recommendations, as it contains a special character (a hyphen) rather than just letters and numbers, and it's longer than eight characters in length: I'd suggest either tweaking the wording, or tweaking the example so that the usercode provided in the example is aligned to the recommendations mentioned in the text.\nThe specification does not require user codes to be unguessable. Section 3.3.3 (Display of a Short User Code) states that user codes have to be unique and should be short-lived, but this does not imply that codes should be unguessable. It seems that Section 13.27 (Exhaustion of Random Value Space) does not apply to user codes, but to values that are clearly random values \"such as nonces, tokens, and randomized URIs\". If attackers can guess user codes, the same attack described in URL is possible.\nGood catch! The intent is for these to be unguessable, so we can add that explicitly.", "new_text": "usercode and the AS supports this mode for the client instance's request, the AS responds with a \"user_code\" field. This field is string containing a unique short code that the user can type into a web page. To facilitate usability, this string MUST be case- insensitive, MUST consist of only easily typeable characters (such as letters or numbers). The string MUST be randomly generated so as to be unguessable by an attacker within the time it is accepted. The time in which this code will be accepted SHOULD be short lived, such as several minutes. It is RECOMMENDED that this code be no more than eight characters in length. The client instance MUST communicate the \"user_code\" value to the end"} {"id": "q-en-gnap-core-protocol-0ef488bcbd2be1a504c69d9f4125bcd76fcdd2a5050f56c36a2bb6448474678d", "old_text": "When the end user is directed to enter a short code through the response-interact-usercode mode, the client instance communicates the user code to the end user and directs the end user to enter that code at an associated URI. This mode is designed to be used when the client instance is not able to communicate or facilitate launching an arbitrary URI. The associated URI could be statically configured with the client instance or in the client software's documentation. 
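A hedged sketch of the response described in the user code record above: the interact section carries the short code as a plain string, and the continuation section gives the client instance what it needs to poll after the RO has entered the code. The code value reuses the A1BC3DFF example from the neighboring record; the continuation URI, token value, and wait time are placeholders.

  {
    "interact": {
      "user_code": "A1BC3DFF"
    },
    "continue": {
      "access_token": { "value": "EAY6A4CAYLBXHTR0KRO9" },
      "uri": "https://as.example.com/continue",
      "wait": 60
    }
  }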
As a consequence, these URIs SHOULD be short. The user code URI MUST be reachable from the end user's browser, though the URI is usually be opened on a separate device from the client instance itself. The URI MUST be accessible from an HTTP GET request and MUST be protected by HTTPS or equivalent means. In many cases, the URI indicates a web page hosted at the AS, allowing the AS to authenticate the end user as the RO and", "comments": "clarify that user codes are unguessable, - update user code examples,\nTo edit notification comments on pull requests, go to your .\nThis is very minor, but there's a contradiction between the usercode provided in the example in section 3.3.3. and the recommendations that are mentioned in the text above it. The text says: And However, the example provided below it goes against these two recommendations, as it contains a special character (a hyphen) rather than just letters and numbers, and it's longer than eight characters in length: I'd suggest either tweaking the wording, or tweaking the example so that the usercode provided in the example is aligned to the recommendations mentioned in the text.\nThe specification does not require user codes to be unguessable. Section 3.3.3 (Display of a Short User Code) states that user codes have to be unique and should be short-lived, but this does not imply that codes should be unguessable. It seems that Section 13.27 (Exhaustion of Random Value Space) does not apply to user codes, but to values that are clearly random values \"such as nonces, tokens, and randomized URIs\". If attackers can guess user codes, the same attack described in URL is possible.\nGood catch! The intent is for these to be unguessable, so we can add that explicitly.", "new_text": "When the end user is directed to enter a short code through the response-interact-usercode mode, the client instance communicates the user code to the end user and directs the end user to enter that code at an associated URI. The client instance MAY format the user code in such a way as to facilitate memorability and transfer of the code, so long as this formatting does not alter the value as accepted at the user code URI. For example, a client instance receiving the user code \"A1BC3DFF\" could choose to display this to the user as \"A1BC 3DFF\", breaking up the long string into two shorter strings. In this example, the space in between the two parts would be removed upon its entry into the user code URI. This mode is designed to be used when the client instance is not able to communicate or facilitate launching an arbitrary URI. The associated URI could be statically configured with the client instance or in the client software's documentation. As a consequence, these URIs SHOULD be short. The user code URI MUST be reachable from the end user's browser, though the URI is usually be opened on a separate device from the client instance itself. The URI MUST be accessible from an HTTP GET request and MUST be protected by HTTPS or equivalent means. In many cases, the URI indicates a web page hosted at the AS, allowing the AS to authenticate the end user as the RO and"} {"id": "q-en-gnap-core-protocol-0ef488bcbd2be1a504c69d9f4125bcd76fcdd2a5050f56c36a2bb6448474678d", "old_text": "When the end user is directed to enter a short code through the response-interact-usercodeuri mode, the client instance communicates the user code and associated URI to the end user and directs the end user to enter that code at the URI. 
This mode is used when the client instance is not able to facilitate launching a complex arbitrary URI but can communicate arbitrary values like URIs. As a consequence, these URIs SHOULD be short to allow the URI to be typed by the end user. The client instance MUST NOT modify the URI when communicating it to the end user; in particular the client instance MUST NOT add any parameters to the URI. The user code URI MUST be reachable from the end user's browser, though the URI is usually be opened on a separate device from the client instance itself. The URI MUST be accessible from an HTTP GET request and MUST be protected by HTTPS or equivalent means. In many cases, the URI indicates a web page hosted at the AS, allowing the AS to authenticate the end user as the RO and", "comments": "clarify that user codes are unguessable, - update user code examples,\nTo edit notification comments on pull requests, go to your .\nThis is very minor, but there's a contradiction between the usercode provided in the example in section 3.3.3. and the recommendations that are mentioned in the text above it. The text says: And However, the example provided below it goes against these two recommendations, as it contains a special character (a hyphen) rather than just letters and numbers, and it's longer than eight characters in length: I'd suggest either tweaking the wording, or tweaking the example so that the usercode provided in the example is aligned to the recommendations mentioned in the text.\nThe specification does not require user codes to be unguessable. Section 3.3.3 (Display of a Short User Code) states that user codes have to be unique and should be short-lived, but this does not imply that codes should be unguessable. It seems that Section 13.27 (Exhaustion of Random Value Space) does not apply to user codes, but to values that are clearly random values \"such as nonces, tokens, and randomized URIs\". If attackers can guess user codes, the same attack described in URL is possible.\nGood catch! The intent is for these to be unguessable, so we can add that explicitly.", "new_text": "When the end user is directed to enter a short code through the response-interact-usercodeuri mode, the client instance communicates the user code and associated URI to the end user and directs the end user to enter that code at the URI. The client instance MAY format the user code in such a way as to facilitate memorability and transfer of the code, so long as this formatting does not alter the value as accepted at the user code URI. For example, a client instance receiving the user code \"A1BC3DFF\" could choose to display this to the user as \"A1BC 3DFF\", breaking up the long string into two shorter strings. In this example, the space in between the two parts would be removed upon its entry into the user code URI. This mode is used when the client instance is not able to facilitate launching a complex arbitrary URI but can communicate arbitrary values like URIs. As a consequence, these URIs SHOULD be short to allow the URI to be typed by the end user. The client instance MUST NOT modify the URI when communicating it to the end user; in particular the client instance MUST NOT add any parameters to the URI. The user code URI MUST be reachable from the end user's browser, though the URI is usually be opened on a separate device from the client instance itself. The URI MUST be accessible from an HTTP GET request and MUST be protected by HTTPS or equivalent means. 
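For the user_code_uri mode discussed above, the response carries both the short code and the URI at which it is to be entered. The sketch below assumes an object with code and uri members, which matches the description in this record but is not quoted from the draft; treat the member names and all values as illustrative.

  {
    "interact": {
      "user_code_uri": {
        "code": "A1BC3DFF",
        "uri": "https://as.example.com/device"
      }
    },
    "continue": {
      "access_token": { "value": "4T4QFZCLKLTI25DK82FX" },
      "uri": "https://as.example.com/continue",
      "wait": 60
    }
  }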
In many cases, the URI indicates a web page hosted at the AS, allowing the AS to authenticate the end user as the RO and"} {"id": "q-en-gnap-core-protocol-0ef488bcbd2be1a504c69d9f4125bcd76fcdd2a5050f56c36a2bb6448474678d", "old_text": "13.27. Several parts of the GNAP process make use of unguessable randomized values, such as nonces, tokens, and randomized URIs. Since these values are intended to be unique, a sufficiently powerful attacker could make a large number of requests to trigger generation of randomized values in an attempt to exhaust the random number generation space. While this attack is particularly applicable to the AS, client software could likewise be targeted by an attacker triggering new grant requests against an AS.", "comments": "clarify that user codes are unguessable, - update user code examples,\nTo edit notification comments on pull requests, go to your .\nThis is very minor, but there's a contradiction between the usercode provided in the example in section 3.3.3. and the recommendations that are mentioned in the text above it. The text says: And However, the example provided below it goes against these two recommendations, as it contains a special character (a hyphen) rather than just letters and numbers, and it's longer than eight characters in length: I'd suggest either tweaking the wording, or tweaking the example so that the usercode provided in the example is aligned to the recommendations mentioned in the text.\nThe specification does not require user codes to be unguessable. Section 3.3.3 (Display of a Short User Code) states that user codes have to be unique and should be short-lived, but this does not imply that codes should be unguessable. It seems that Section 13.27 (Exhaustion of Random Value Space) does not apply to user codes, but to values that are clearly random values \"such as nonces, tokens, and randomized URIs\". If attackers can guess user codes, the same attack described in URL is possible.\nGood catch! The intent is for these to be unguessable, so we can add that explicitly.", "new_text": "13.27. Several parts of the GNAP process make use of unguessable randomized values, such as nonces, tokens, user codes, and randomized URIs. Since these values are intended to be unique, a sufficiently powerful attacker could make a large number of requests to trigger generation of randomized values in an attempt to exhaust the random number generation space. While this attack is particularly applicable to the AS, client software could likewise be targeted by an attacker triggering new grant requests against an AS."} {"id": "q-en-gnap-core-protocol-7b168e1af4316fd0c440ab962897674044fc5004440eef5c3d892a4b4a1db956", "old_text": "This specification defines the following interaction start modes: Additional start modes are defined in the IANA-interaction-start- modes.", "comments": "NAME Please take a look at this updated text to see if it addresses your concerns.\nTo edit notification comments on pull requests, go to your .\nThinking about possible extensions to GNAP's interaction modes, I had the impression that the specification could be a bit clearer regarding requirements on such extensions. 
From reading Section 3.3, my understanding is that all implementations of GNAP, including ones using extensions which define additional interaction finish methods, MUST include a nonce in the grant response (given that the AS wants to use the finish method offered by the client instance): URL However, Section 3.3.5 sounds a bit different, only referring to the two interaction finish methods defined by GNAP core: URL Section 4.2 could be a bit clearer as to what is expected of future extensions to GNAP, it currently says: URL with the two following sections describing the finish methods defined by GNAP core. This is, however, somewhat clearer in Section 4.2.3: I.e., Section 4.2.3 implies that AS must (somehow) provide the client instance with an interaction reference and interaction finish nonce. But it may be helpful to make this more explicit throughout the relevant Sections. There seem to be no requirements for future interaction start methods. It may be helpful to document some minimal requirements, e.g., enough information to identify the grant has to be conveyed to AS via RO in the interaction start (such as the redirect URI in GNAP core).\nThanks NAME for taking the time! The proposed text looks good to me (just one possible typo, I'm not a native speaker, so I'm not 100% sure about that).", "new_text": "This specification defines the following interaction start modes: All interaction start method definitions MUST provide enough information to uniquely identify the grant request during the interaction. In the \"redirect\" and \"app\" modes, this is done using a unique URI (including its parameters). In the \"user_code\" and \"user_code_uri\" mode, this is done using the value of the user code. Additional start modes are defined in the IANA-interaction-start- modes."} {"id": "q-en-gnap-core-protocol-7b168e1af4316fd0c440ab962897674044fc5004440eef5c3d892a4b4a1db956", "old_text": "methods: If interaction finishing is supported for this client instance and request, the AS response-interact-finish used by the client instance to validate the callback. Requests to the callback URI MUST be processed as described in interaction-finish, and the AS MUST require presentation of an interaction callback reference as described in continue-after-interaction. 2.5.2.1.", "comments": "NAME Please take a look at this updated text to see if it addresses your concerns.\nTo edit notification comments on pull requests, go to your .\nThinking about possible extensions to GNAP's interaction modes, I had the impression that the specification could be a bit clearer regarding requirements on such extensions. From reading Section 3.3, my understanding is that all implementations of GNAP, including ones using extensions which define additional interaction finish methods, MUST include a nonce in the grant response (given that the AS wants to use the finish method offered by the client instance): URL However, Section 3.3.5 sounds a bit different, only referring to the two interaction finish methods defined by GNAP core: URL Section 4.2 could be a bit clearer as to what is expected of future extensions to GNAP, it currently says: URL with the two following sections describing the finish methods defined by GNAP core. This is, however, somewhat clearer in Section 4.2.3: I.e., Section 4.2.3 implies that AS must (somehow) provide the client instance with an interaction reference and interaction finish nonce. But it may be helpful to make this more explicit throughout the relevant Sections. 
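Tying together the start and finish requirements discussed above, a client instance that can open a browser and also display a short code might send an interact request like the hedged sketch below. The start array lists the modes the client instance can support, and the finish object supplies the callback URI and the client-generated nonce that the AS later binds into the interaction hash. The URI and nonce are placeholders.

  {
    "interact": {
      "start": ["redirect", "user_code"],
      "finish": {
        "method": "redirect",
        "uri": "https://client.example.net/return/123455",
        "nonce": "LKLTI25DK82FX4T4QFZC"
      }
    }
  }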
There seem to be no requirements for future interaction start methods. It may be helpful to document some minimal requirements, e.g., enough information to identify the grant has to be conveyed to AS via RO in the interaction start (such as the redirect URI in GNAP core).\nThanks NAME for taking the time! The proposed text looks good to me (just one possible typo, I'm not a native speaker, so I'm not 100% sure about that).", "new_text": "methods: If interaction finishing is supported for this client instance and request, the AS will response-interact-finish used by the client instance to validate the callback. All interaction finish methods MUST use this nonce to allow the client to verify the connection between the pending interaction request and the callback. GNAP does this through the use of the interaction hash, defined in interaction- hash. All requests to the callback URI MUST be processed as described in interaction-finish. All interaction finish methods MUST require presentation of an interaction reference for continuing this grant request. This means that the the interaction reference MUST be returned by the AS and MUST be presented by the client as described in continue-after- interaction. The means by which the interaction reference is returned to the client instance is specific to the interaction finish method. 2.5.2.1."} {"id": "q-en-gnap-core-protocol-7b168e1af4316fd0c440ab962897674044fc5004440eef5c3d892a4b4a1db956", "old_text": "hash. The client instance will use this value to validate the \"finish\" call. The AS MUST send the hash and interaction reference based on the interaction finish mode as described in the following sections. Note that in many error cases, such as when the RO has denied access, the \"finish\" method is still enacted by the AS. This pattern allows", "comments": "NAME Please take a look at this updated text to see if it addresses your concerns.\nTo edit notification comments on pull requests, go to your .\nThinking about possible extensions to GNAP's interaction modes, I had the impression that the specification could be a bit clearer regarding requirements on such extensions. From reading Section 3.3, my understanding is that all implementations of GNAP, including ones using extensions which define additional interaction finish methods, MUST include a nonce in the grant response (given that the AS wants to use the finish method offered by the client instance): URL However, Section 3.3.5 sounds a bit different, only referring to the two interaction finish methods defined by GNAP core: URL Section 4.2 could be a bit clearer as to what is expected of future extensions to GNAP, it currently says: URL with the two following sections describing the finish methods defined by GNAP core. This is, however, somewhat clearer in Section 4.2.3: I.e., Section 4.2.3 implies that AS must (somehow) provide the client instance with an interaction reference and interaction finish nonce. But it may be helpful to make this more explicit throughout the relevant Sections. There seem to be no requirements for future interaction start methods. It may be helpful to document some minimal requirements, e.g., enough information to identify the grant has to be conveyed to AS via RO in the interaction start (such as the redirect URI in GNAP core).\nThanks NAME for taking the time! The proposed text looks good to me (just one possible typo, I'm not a native speaker, so I'm not 100% sure about that).", "new_text": "hash. 
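After a finish method completes and the client instance has verified the interaction hash, it presents the interaction reference in its continuation request. A minimal, non-normative sketch of that continuation body is below; the interact_ref member name is an assumption made for illustration and the reference value is a placeholder. The request itself is sent to the continuation URI and protected with the continuation access token and the client instance's key.

  {
    "interact_ref": "4IFWWIKYBC2PQ6U56NL1"
  }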
The client instance will use this value to validate the \"finish\" call. All interaction finish methods MUST define a way to convey the hash and interaction reference back to the client instance. When an interaction finish method is used, the client instance MUST present the interaction reference back to the AS as part of its continue- after-interaction. Note that in many error cases, such as when the RO has denied access, the \"finish\" method is still enacted by the AS. This pattern allows"} {"id": "q-en-gnap-resource-servers-1e4ab3b4b215b431e6f1e45e51b9731c985d865a8524e6d114386c46b70e5379", "old_text": "3.1. A GNAP AS offering RS-facing services can publish its features on a well-known discovery document using the URL \".well-known/gnap-as-rs\". This endpoint contains a JSON document RFC8259 consisting of a single JSON object with any combination of the following optional fields: The URL of the endpoint offering introspection. A list of token formats supported by this AS. The URL of the endpoint offering resource registration. The grant endpoint of the GNAP AS. 3.2.", "comments": "Expands the discovery document and aligns it with changes made to the discovery process in core. Addresses\n:heavycheckmark: Deploy Preview for gnap-resource-servers-editors-draft ready! :hammer: Explore the source changes: ac397efc4f491e2003a3f8059eef765b8e3075b3 :mag: Inspect the deploy log: :sunglasses: Browse the preview:", "new_text": "3.1. A GNAP AS offering RS-facing services can publish its features on a well-known discovery document using the URL \".well-known/gnap-as-rs\" appended to the grant request endpoint URL. The discovery response is a JSON document RFC8259 consisting of a single JSON object with the following fields: OPTIONAL. The URL of the endpoint offering introspection. The location MUST be a URL RFC3986 with a scheme component that MUST be https, a host component, and optionally, port, path and query components and no fragment components. A list of token formats supported by this AS. The URL of the endpoint offering resource registration. The location MUST be a URL RFC3986 with a scheme component that MUST be https, a host component, and optionally, port, path and query components and no fragment components. REQUIRED. The location of the AS's grant request endpoint, used by the RS to derive downstream access tokens. The location MUST be a URL RFC3986 with a scheme component that MUST be https, a host component, and optionally, port, path and query components and no fragment components. This URL MUST be the same URL used by client instances in support of GNAP requests. OPTIONAL. A list of the AS's supported key proofing mechanisms. The values of this list correspond to possible values of the \"proof\" field of the key section of the request. 3.2."} {"id": "q-en-gnap-resource-servers-1fff2bfb4fe76fc29023ae87204c6008bed5deb97a5b6793801b27183d28204b", "old_text": "If the RS needs to, it can post a set of resources as described in the Resource Access Rights section of I-D.ietf-gnap-core-protocol to the AS's resource registration endpoint. The RS MUST identify itself with its own key and sign the request. The AS responds with a handle appropriate to represent the resources list that the RS presented. The RS MAY make this handle available as part of a discovery response as described in I-D.ietf-gnap-core-protocol or as documentation to developers. 
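A hedged sketch of the RS-facing discovery document described in the record above. The member names below are assumptions chosen to mirror the field descriptions (grant request endpoint, introspection endpoint, resource registration endpoint, supported token formats, supported key proofing methods) and should be checked against the resource server draft and its registries; the URIs and the format and proof values are placeholders.

  {
    "grant_request_endpoint": "https://as.example.com/tx",
    "introspection_endpoint": "https://as.example.com/introspect",
    "resource_registration_endpoint": "https://as.example.com/resources",
    "token_formats_supported": ["jwt-signed"],
    "key_proofs_supported": ["httpsig", "mtls"]
  }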
[[ See issue #117 [2] ]] 4.", "comments": "Expands the definition of the resource set registration protocol, used by an RS to declare a resource set to an AS at runtime and receive a reference identifier for that resource set to hand to a client.\n:heavycheckmark: Deploy Preview for gnap-resource-servers-editors-draft ready! :hammer: Explore the source changes: f9aac0b2af163c0f9a58ba17e93db7e8956140bf :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nRegistering a Resource Handle: Editor's note: It's not an exact match here because the \"resource_handle\" returned now represents a collection of objects instead of a single one. Perhaps we should let this return a list of strings instead? Or use a different syntax than the resource request? Also, this borrows heavily from UMA 2's \"distributed authorization\" model and, like UMA, might be better suited to an extension than the core protocol", "new_text": "If the RS needs to, it can post a set of resources as described in the Resource Access Rights section of I-D.ietf-gnap-core-protocol to the AS's resource registration endpoint along with information about what the RS will need to validate the request. REQUIRED. The list of access rights associated with the request in the format described in the \"Resource Access Rights\" section of I-D.ietf-gnap-core-protocol. REQUIRED. The identification used to authenticate the resource server making this call, either by value or by reference as described in authentication. OPTIONAL. The token format required to access the identified resource. If the field is omitted, the token format is at the discretion of the AS. If the AS does not support the requested token format, the AS MUST return an error to the RS. OPTIONAL. If present and set to \"true\", the RS expects to make a token introspection request as described in introspection. If absent or set to \"false\", the RS does not anticipate needing to make an introspection request for tokens relating to this resource set. The RS MUST identify itself with its own key and sign the request. The AS responds with a reference appropriate to represent the resources list that the RS presented in its request as well as any additional information the RS might need in future requests. REQUIRED. A single string representing the list of resources registered in the request. The RS MAY make this handle available to a client instance as part of a discovery response as described in I-D.ietf-gnap-core-protocol or as documentation to client software developers. OPTIONAL. An instance identifier that the RS can use to refer to itself in future calls to the AS, in lieu of sending its key by value. OPTIONAL. The introspection endpoint of this AS, used to allow the RS to perform token introspection. 4."} {"id": "q-en-gnap-resource-servers-1fff2bfb4fe76fc29023ae87204c6008bed5deb97a5b6793801b27183d28204b", "old_text": "itself with its own key in the \"client\" field and sign the request just as any client instance would. [[ See issue #116 [3] ]] The AS responds with a token for the downstream RS2 as described in I-D.ietf-gnap-core-protocol. The downstream RS2 could repeat this", "comments": "Expands the definition of the resource set registration protocol, used by an RS to declare a resource set to an AS at runtime and receive a reference identifier for that resource set to hand to a client.\n:heavycheckmark: Deploy Preview for gnap-resource-servers-editors-draft ready! 
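The resource set registration exchange described above could look roughly like the following pair of non-normative messages. The access rights array follows the core draft's access structure; the remaining member names (resource_server, token_format_required, token_introspection_required, resource_reference, instance_id, introspection_endpoint) are assumptions made to match the field descriptions in this record and may differ from the names finally chosen in the draft. All values are placeholders.

Registration request from the RS, signed with the RS's key:

  {
    "access": [
      {
        "type": "photo-api",
        "actions": ["read", "write"],
        "locations": ["https://rs.example.net/photos"]
      }
    ],
    "resource_server": "rs-instance-ref-1",
    "token_format_required": "jwt-signed",
    "token_introspection_required": true
  }

Registration response from the AS:

  {
    "resource_reference": "FWWIKYBQ6U56NL1",
    "instance_id": "rs-instance-ref-1",
    "introspection_endpoint": "https://as.example.com/introspect"
  }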
:hammer: Explore the source changes: f9aac0b2af163c0f9a58ba17e93db7e8956140bf :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nRegistering a Resource Handle: Editor's note: It's not an exact match here because the \"resource_handle\" returned now represents a collection of objects instead of a single one. Perhaps we should let this return a list of strings instead? Or use a different syntax than the resource request? Also, this borrows heavily from UMA 2's \"distributed authorization\" model and, like UMA, might be better suited to an extension than the core protocol", "new_text": "itself with its own key in the \"client\" field and sign the request just as any client instance would. [[ See issue #116 [2] ]] The AS responds with a token for the downstream RS2 as described in I-D.ietf-gnap-core-protocol. The downstream RS2 could repeat this"} {"id": "q-en-gnap-resource-servers-1fff2bfb4fe76fc29023ae87204c6008bed5deb97a5b6793801b27183d28204b", "old_text": "[1] https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/115 [2] https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/117 [3] https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/116 ", "comments": "Expands the definition of the resource set registration protocol, used by an RS to declare a resource set to an AS at runtime and receive a reference identifier for that resource set to hand to a client.\n:heavycheckmark: Deploy Preview for gnap-resource-servers-editors-draft ready! :hammer: Explore the source changes: f9aac0b2af163c0f9a58ba17e93db7e8956140bf :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nRegistering a Resource Handle: Editor's note: It's not an exact match here because the \"resource_handle\" returned now represents a collection of objects instead of a single one. Perhaps we should let this return a list of strings instead? Or use a different syntax than the resource request? Also, this borrows heavily from UMA 2's \"distributed authorization\" model and, like UMA, might be better suited to an extension than the core protocol", "new_text": "[1] https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/115 [2] https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/116 "} {"id": "q-en-gnap-resource-servers-e1de17021bc3e5ddcc89dd3d0e53075b4703818f6af8d8fc7281ffa176e74859", "old_text": "The RS signs the request with its own key and sends the access token as the body of the request. The AS responds with a data structure describing the token's current state and any information the RS would need to validate the token's presentation, such as its intended proofing mechanism and key material. The response MAY include any fields defined in an access token response. 3.4.", "comments": "Updates token introspection definitions into a more full protocol offered by the AS.\n:heavycheckmark: Deploy Preview for gnap-resource-servers-editors-draft ready! :hammer: Explore the source changes: 0b6505e11547ab68388ce27ffc88a8e1c9e5c16b :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nIntrospecting a Token: Editor's note: This isn't super different from the token management URIs, but the RS has no way to get that URI, and it's bound to the RS's keys instead of the RC's or token's keys.\nFrom a privacy point of view, token introspection allows an AS to know exactly when operation(s) are being performed by an end-user on a RS. This provides useful information for ASs that may be tempted to act as \"Big Brother\". 
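For concreteness, a token introspection exchange of the kind discussed in this record might look roughly like the following non-normative sketch. The member names (access_token, proof, resource_server, active, access) are illustrative assumptions, the HTTP message signature headers that would carry the RS's key proof are omitted, and all values are invented:

   POST /introspect HTTP/1.1
   Host: as.example
   Content-Type: application/json

   {
     "access_token": "OS9M2PMHKUR64TB8N6BW7OZB8CDFONP219RP1LT0",
     "proof": "httpsig",
     "resource_server": "7C7C4AZ9KHRS6X63AJAO"
   }

   HTTP/1.1 200 OK
   Content-Type: application/json

   {
     "active": true,
     "access": ["dolphin-metadata"]
   }

If the token were revoked, expired, or unknown to this AS, the response in this sketch would instead carry only "active": false, as the record below describes.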
For that reason, the risks related to token introspection should be advertised in the Privacy Considerations section. Its usage should be deprecated in the general case. If there exists specific cases where there is no such a risk, these cases should be advertised.\nThe section doesn't provide any information on what happens when a client wants to introspect an inactive/invalid/revoked token.\nYes, this is pretty thin right now. It's the RS that introspects the token, and this section will be pulled out into a separate spec to make that more clear (). The response for both positive and negative situations is likely to be based on the OAuth introspection spec, RFC7662 URL\nWe should add: the AS MUST validate that the token is appropriate for the RS that presented it, and return an error otherwise.\nIn OAuth token introspection, the AS doesn't return an error in this stated case -- it simply says the token presented is not active, in order to prevent information leakage to a nosy RS.\nNAME I don't understand your argument about a \"nosy RS\". The text states: As a consequence, if the AS does not recognize the signature, it can return a error stating \"bad signature\". If the signature is correct, the AS can provide all details to the recognized RS, as yaronf mentioned.\ns/error/inactive status/.\nToken Introspection: it is not clear to what depth we are defining the API: is it only the existence of the \"introspect\" endpoint? Or do we define a minimal set of standard attributes that need to be returned? The API would not be useful for interoperability unless we define some of the returned attributes. At the very least: \"active\".\nThe idea would be to define a core attribute set with extension points like RFC7662. This same attribute set could be used for the token model (but maybe not format?) described in", "new_text": "The RS signs the request with its own key and sends the access token as the body of the request. REQUIRED. The access token value presented to the RS by the client instance. RECOMMENDED. The proofing method used by the client instance to bind the token to the RS request. REQUIRED. The identification used to authenticate the resource server making this call, either by value or by reference as described in authentication. OPTIONAL. The minimum access rights required to fulfill the request. This MUST be in the format described in the Resource Access Rights section of I-D.ietf-gnap-core-protocol. The AS MUST validate the access token value and determine if the token is active. An active access token is defined as a token that was issued by the processing AS, has not been revoked, has not expired, and is appropriate for presentation at the identified RS. The AS responds with a data structure describing the token's current state and any information the RS would need to validate the token's presentation, such as its intended proofing mechanism and key material. REQUIRED. If \"true\", the access token presented is active, as defined above. If any of the criteria for an active token are not true, or if the AS is unable to make a determination (such as the token is not found), the value is set to \"false\" and other fields are omitted. If the access token is active, additional fields from the single access token response structure defined in I-D.ietf-gnap-core- protocol are included. In particular, these include the following: REQUIRED. The access rights associated with this access token. 
This MUST be in the format described in the Resource Access Rights section of I-D.ietf-gnap-core-protocol. This array MAY be filtered or otherwise limited for consumption by the identified RS, including being an empty array. REQUIRED if the token is bound. The key bound to the access token, to allow the RS to validate the signature of the request from the client instance. If the access token is a bearer token, this MUST NOT be included. OPTIONAL. The set of flags associated with the access token. The response MAY include any additional fields defined in an access token response and MUST NOT include the access token \"value\" itself. 3.4."} {"id": "q-en-groupcomm-bis-511749c70a1a8d9779cba5ac9ceec452c48d9c617bdb8c21e5e33cb7a320e81f", "old_text": "control application were shown in Section 2.2 of RFC7390. An application group can be named in many ways through different types of identifiers, such as numbers, URIs or other strings. An application group name or identifier, if explicitly encoded in a CoAP request, is typically included in the path component or in the query component of a Group URI. It may also be encoded using the Uri-Host Option RFC7252 in case application group members implement a virtual CoAP server specific to that application group. The application group can then be identified by the value of the Uri-Host Option and each virtual server serves one specific application group. However, encoding the application group in the Uri-Host Option is not the preferred method because in this case the application group cannot be encoded in a Group URI, and also the Uri-Host Option is being used for another purpose than encoding the host part of a URI as intended by RFC7252 - which is potentially confusing. Appendix A of I-D.ietf- core-resource-directory shows an example registration of an application group into a Resource Directory (RD), along with the CoAP group it uses and the resources supported by the application group. In this example an application group identifier is not explicitly encoded in the RD nor in CoAP requests made to the group, but it implicitly follows from the CoAP group used for the request. So there is a one-to-one binding between the CoAP group and the application group. The \"NoSec\" security group is used. A best practice for encoding application group into a Group URI is to use one URI path component to identify the application group and use the following URI paths component(s) to identify the resource within this application group. For example, //res1 or /base//res1/res2 conform to this practice. An application group identifier (like ) should be as short as possible when used in constrained networks. A security group is identified by a stable and invariant string used as group name, which is generally not related with other kinds of", "comments": "more details on encoding application group name, for\nReading the paragraph after \"An application group can be named in many ways\", I was left a bit confused: Why should it be encoded in the Uri-Query? That'd need coordination among all applications running on that host (for it is interfering with the query argument space that's usually left for the application to use). The common partitioning of applications inside a host is left-to-right in the URI, and that puts it in the very last spot. I think I do see what is meant by the discouragement of using the Uri-Host, but doubt that it'd be immediately obvious to general readers. 
How I understand Uri-Host to be impractical here is that while the request is sent to a well-defined and named URI, the response is a representation of the same URI with the individual server's IP replaced into the authority component, and if two application groups on the same CoAP group use the same path, then there may be clashes, not on the request-response layer (where the responses are bound to request for a particular host name) but in the client's cache of the responses. (And once one is at the point where application groups in the same CoAP group have a disjunct path set, one can do away with identification altogether and just address the unique path). I don't quite follow how this is not \"as intended in RFC7252\": It is an identifier for who is being addressed, and as such just part of the URI. (For example, if I define my lights group to be named URL and name the resource , then the host name is exactly the group name, and already needs to be put into the request unless there is known URI aliasing). See also URL on \"send a Uri-Host\" option, with the caveat that I'm a strong proponent of answer A2. Furthermore, the text seems to imply that virtual hosting is something a server would need to go out of its way to implement. It isn't: The Uri-Host option is critical, and a host needs to make the conscious decision to ignore that option; otherwise it's just as much part of finding the addressed resource as all the other Uri-* options are. The best practice is something I can get behind, although I suggest using no identifier on the wire (and just the disjunct set of resources) as a byte saving alternative when the administrative effort of ensuring path uniqueness can be afforded.\nThanks; we do need to improve the current text on these aspects. On Uri-Host and your example \"URL\" where authority component includes the application group: in this group communication example the sending client would normally DNS-resolve \"URL\" to a particular multicast IP address, then send the CoAP request to this multicast IP address destination, without including any Uri-Host Option. This works as long as you have a unique 1:1 association between application-group and CoAP-group. In case multiple application-groups map to a single CoAP-group, this doesn't work anymore : a client could be e.g. configured to always include the \"Uri-Host\" Option to indicate the hostname. Still not the preferred approach I would think. Btw I can't follow answer A2 on the CoAP-Faq, unfortunately: formulation is too complex. In some (common?) constrained use cases, the DNS name would not even be configured in the client and the client would just be configured directly with the IPv6 multicast address of the CoAP group to send it to. Then, the application-group would need to be encoded in some way still, e.g. using the Uri-Path Option. Now what our original text tried to suggest is to \"misuse\" a Uri-Host Option for that. So instead of using an option like: Uri-Path = group123 You would instead use an option: Uri-Host = group123 So this is not a \"real\" URI hostname but rather a low-level hack to let the client's CoAP stack insert by force an additional option just before sending out the CoAP request. The receiver's CoAP stack would just parse it as the ID of the application-group and not really as a URI-host. Because this gets confusing and potentially messy (proven by the present discussion !) we don't recommend it. 
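To make the two encodings being debated here easier to compare, a small non-normative illustration, reusing the group name "group123" from the discussion above, a resource "res1", and the multicast address that appears later in this record:

   Path-based encoding (the recommended practice):
      Group URI:  coap://[ff35:30:2001:db8:f1::8000:1]/group123/res1
      Options:    Uri-Path: "group123", Uri-Path: "res1"

   Uri-Host-based encoding (the hack discussed above):
      Request sent to the same multicast address, with the client
      inserting:  Uri-Host: "group123", Uri-Path: "res1"
      (no application group name appears in the Group URI itself)

In both cases the request reaches every server in the CoAP group; only where the application group name travels on the wire differs.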
So, trying to reiterate if one has configured Group URIs of the form coap://[mcast-ip-address]/resource on the client then by default the Uri-Host Option would get elided. The above mentioned \"hack\" is to put it back to encode application-group. This has the issue that the application-group name isn't present in the Group URI. if one has configured Group URIs of the form URL on the client then normally by default the authority gets resolved to some IP multicast address before sending it out, and thus while sending it the Uri-Host Option is NOT included in the CoAP encoding but elided. So the receiving server doesn't see the application-group ID because it's not encoded in the request data. (We assume here that multiple application-groups use the same CoAP-group.) Maybe we should discuss this further; what's the effort here / who ensures it / unique between what and what?\nInitial text proposed in above PR! We'll merge it in by tomorrow evening, as an improvement (hopefully). What is not included yet is a statement saying that we don't recommend using the non-Group-URI methods of encoding application group name. To do so we would need to clearly state why that is not recommended; still not clear at this moment what the disadvantages of these are.", "new_text": "control application were shown in Section 2.2 of RFC7390. An application group can be named in many ways through different types of identifiers, such as name string, number, URI or other type of string. An application group name may be explicitly encoded in a CoAP Group URI, or it may be not included in the Group URI. This is an implementation-specific decision. If the application group name is explicitly encoded in a CoAP Group URI, it can be encoded within one of the URI path component: this is the most common and RECOMMENDED method to encode the application group name. A best practice for encoding application group into a Group URI is to use one URI path component to identify the application group and use the following URI paths component(s) to identify the resource within this application group. For example, //res1 or /base//res1/res2 conform to this practice. An application group name (like ) should be as short as possible when used in constrained networks. URI query component: using this method, the query may consist of the group name (?) or it may be one parameter of the query (?g= or ?param1=value1&gn=). URI host subcomponent: using this method, the application group becomes equal to the CoAP group. This can only be used if there is a one-to-one mapping between CoAP groups and application groups. URI port subcomponent: using this method, the application group is identified by a number that is encoded in some way in the destination port. There are also methods to encode the application group name within the CoAP request even though it is not encoded within the Group URI. Examples of such methods are: encode in a Uri-Host Option RFC7252 which is added to the CoAP request by the client before sending it out. Each CoAP server that is part of the CoAP group, receiving this request, decodes the Uri-Host Option and treats it as an application group name. (It can also treat the application group name in this Option as a \"virtual CoAP server\" specific to that application group, exactly in the same way that the Uri-Host Option was intended to allow support for multiple virtual servers hosted on the same port. The net effect of both treatments is the same.) 
encode in a new (custom/application-specific) CoAP Option which is added to the CoAP request by the client before sending it out. Each CoAP server that is part of the CoAP group, receiving this request, would by design understand this Option, would decode it, and treat it as an application group name. Finally, it is possible to not encode the application group name at all within the CoAP request. This yields the most compact representation on the wire. In this case, each CoAP server needs to determine the application group based on contextual information, such as client identity and/or target resource. For example, each application group on a server could have a unique set of resources that does not overlap with any resources of other application groups. Appendix A of I-D.ietf-core-resource-directory shows an example registration of an application group into a Resource Directory (RD), along with the CoAP group it uses and the resources supported by the application group. In this example an application group name \"lights\" is encoded in the \"ep\" (endpoint) attribute of the RD registration entry. The CoAP group is ff35:30:2001:db8:f1::8000:1 and the \"NoSec\" security group is used. A security group is identified by a stable and invariant string used as group name, which is generally not related with other kinds of"} {"id": "q-en-https-notif-6f242b4c603b81c241034cfef49da3d843edf88a21e030afbbe00ca8f3401008", "old_text": "6. Encoding notifications for the HTTPS notifications is the same as the encoding notifications as defined in RFC8040 Section 6.4, with the following changes. Instead of saying that for JSON-encoding purposes, the module name for \"notification\" element will be \"ietf- restconf, it will say that for JSON-encoding purposes, the module name for \"notification\" element will be \"ietf-https-notif\". With those changes, the SSE event notification encoded JSON example that would be sent over the HTTPS notif transport would appear as follows: 7.", "comments": "This adds an XML example. The only question is whether we need a prefix for ietf-https-notif in the example.", "new_text": "6. Notifications are encoded as defined in RFC8040 Section 6.4. The examples in that section apply for sending notifications over the \"https-notif\" based transport. An example of YANG-Push in JSON would look something like this: An example of YANG-Push in XML would look something like this: 7."} {"id": "q-en-idempotency-aea0af4c40c617d78a7d14b164069f4d54809b607c49ec3cd16a36a8d0f344c5", "old_text": "If the \"Idempotency-Key\" request header is missing for a documented idempotent operation requiring this header, the resource server SHOULD reply with an HTTP \"400\" status code with body containing a link pointing to relevant documentation. Alternately, using the HTTP header \"Link\", the client can be informed about the error as shown below. If there is an attempt to reuse an idempotency key with a different request payload, the resource server SHOULD reply with a HTTP \"422\" status code with body containing a link pointing to relevant documentation. The status code \"422\" is defined in Section 11.2 of RFC4918. The server can also inform the client by using the HTTP header \"Link\" as shown below. If the request is retried, while the original request is still being processed, the resource server SHOULD reply with an HTTP \"409\" status code with body containing a link or the HTTP header \"Link\" pointing to the relevant documentation. 
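As a non-normative illustration of the kind of error responses described here (the draft's own example figures are not reproduced in this record, so the URIs and problem members below are invented for the sketch), a retry that arrives while the original request is still being processed might be answered either with an RFC 7807 problem document:

   HTTP/1.1 409 Conflict
   Content-Type: application/problem+json

   {
     "type": "https://developer.example.com/idempotency#conflict",
     "title": "A request with this Idempotency-Key is already being processed",
     "status": 409
   }

or, alternately, with a Link header pointing at the relevant documentation:

   HTTP/1.1 409 Conflict
   Link: <https://developer.example.com/idempotency>; rel="describedby"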
Error scenarios above describe the status of failed idempotent requests after the resource server prcocesses them. Clients MUST", "comments": "Issue\nNAME NAME pl review re issue\nLooks good\nthe examples of errors use a pattern of using a link in a header, and then (supposedly) using a human-readable error message. what about using RFC 7807 instead and using examples that use problem reports to indicate failure semantics in addition to the status code?\nI agree. coming up in the next draft.\nNAME should we close this issue if it is resolved?\nNAME closing this issue. Let me know if not addressed with my latest PR.", "new_text": "If the \"Idempotency-Key\" request header is missing for a documented idempotent operation requiring this header, the resource server SHOULD reply with an HTTP \"400\" status code with body containing a link pointing to relevant documentation. Following examples shows an error response describing the problem using RFC7807. Alternately, using the HTTP header \"Link\", the client can be informed about the error as shown below. If there is an attempt to reuse an idempotency key with a different request payload, the resource server SHOULD reply with a HTTP \"422\" status code with body containing a link pointing to relevant documentation. The status code \"422\" is defined in Section 11.2 of RFC4918. The server can also inform the client by using the HTTP header \"Link\" as shown below. If the request is retried, while the original request is still being processed, the resource server SHOULD reply with an HTTP \"409\" status code with body containing problem description. Or, alternately using the HTTP header \"Link\" pointing to the relevant documentation Error scenarios above describe the status of failed idempotent requests after the resource server prcocesses them. Clients MUST"} {"id": "q-en-ietf-rats-wg-architecture-7ea5fdd094304769a5734adfb3b53628251e6dcf9f2c1ce2a4ef383bf04150e3", "old_text": "6. An entity in the RATS architecture includes at least one of the roles defined in this document. As a result, the entity can participate as a constituent of the RATS architecture. Additionally, an entity can aggregate more than one role into itself. These collapsed roles combine the duties of multiple roles. In these cases, interaction between these roles do not necessarily use the Internet Protocol. They can be using a loopback device or other IP-based communication between separate environments, but they do not have to. Alternative channels to convey conceptual messages include function calls, sockets, GPIO interfaces, local busses, or hypervisor calls. This type of conveyance is typically found in Composite Devices. Most importantly, these conveyance methods are out-of-scope of RATS, but they are presumed to exist in order to convey conceptual messages appropriately between roles. For example, an entity that both connects to a wide-area network and to a system bus is taking on both the Attester and Verifier roles.", "comments": "URL Greetings! I got as far as I could with time constraints through the rest of the document. There are a few sections I'll likely go back over again as I was short on time when reviewing them, but do hope this is helpful towards improving the document. This review is from the editors revision on github that I started my review earlier int he week. 
Section 4.2: I find the following grouping a bit confusing: \"Places that Attesting Environments can exist include Trusted Execution Environments (TEE), embedded Secure Elements (eSE), and BIOS firmware.\" This has an attesting environment as a processor, an element, and as compiled code. That doesn't seem right, especially when you refer to the attesting environment as a \"place\". Do you mean where BIOS is executed perhaps? I would expect a TEE and maybe other components where security functions are offloaded, which is in some cases to a processor other than a TEE like a GPU (I'm not making a recommendation to do that here, just stating what is happening in ecosystems that exist). The following text is a little confusing, in particular, the last sentence: \"An execution environment may not, by default, be capable of claims collection for a given Target Environment. Attesting Environments are designed specifically with claims collection in mind.\" If the attesting environment is a place, TEEs and other processors were not necessarily designed with claims in mind. If you can help to explain an attesting environment better, I can help to phrase it more clearly to convey the intended set of points as I think there are several. Section 4.3 Consider changing from: \"By definition, the Attester role takes on the duty to create Evidence.\" To: \"By definition, the Attester role creates Evidence.\" The following sentence is a run on and I think some additional information will help provide clarity. I don't think it's the role that is composed of nest environments. Current: \"The fact that an Attester role is composed of environments that can be nested or staged adds complexity to the architectural layout of how an Attester can be composed and therefore has to conduct the Claims collection in order to create believable attestation Evidence.\" How about: \"An Attester may be one or more nested or staged environments, adding complexity to the architectural structure. The unifying component is the root of trust and the nested, staged, or chained attestations produced. The nested or chained structure should include claims, collected by the attester to aid in the assurance or believability of the attestation Evidence.\" The following sentence should be broken into two: \"This example could be extended further by, say, making the kernel become another Attesting Environment for an application as another Target Environment, resulting in a third set of Claims in the Evidence pertaining to that application.\" Proposed: \"This example could be extended further by making the kernel become another Attesting Environment for an application as another Target Environment. This results in a third set of Claims in the Evidence pertaining to that application.\" How about changing the following sentence from: \"Among these routers, there is only one main router that connects to the Verifier. \" To: \"A multi-chassis router, provides a management point and is the only one that connects to the Verifier. \" For the following sentence, trying to remove the use of a word twice in a row and convey the same intent. 
Old: \"After collecting the Evidence of other Attesters, this inside Verifier verifies them using Endorsements and Appraisal Policies (obtained the same way as any other Verifier), to generate Attestation Results.\" Proposed: \"After collecting the Evidence of other Attesters, this inside Verifier uses Endorsements and Appraisal Policies (obtained the same way as any other Verifier) in the verification process to generate Attestation Results.\" I may try another pass at section 4 to improve readability. Section 5, Consider breaking this into 2 sentences, from: \"This section includes some reference models, but this is not intended to be a restrictive list, and other variations may exist.\" To: \"This section includes some reference models. This is not intended to be a restrictive list, and other variations may exist.\" Section 5.1 Two result words together is not necessary. Change from: \"The second way in which the process may fail is when the resulting Result is examined by the Relying Party, and based upon the Appraisal Policy, the result does not pass the policy. \" To: \"The second way in which the process may fail is when the Result is examined by the Relying Party, and based upon the Appraisal Policy, the result does not pass the policy. \" I suggest making the last paragraph the second paragraph. It's easier to read than the others and provides an introduction to the model before you begin talking about how it might fail. ** I may come back to this section to provide wording suggestions. Section 5.2 I suggest moving the last paragraph to the first in this instance. It also provides a nice high-level overview. The current paragraph 1&2 flow well together, hence this order suggestion. The third paragraph is a really long way of stating that interoperability is required. Entities must be able to create, send, receive, and process data (hence the need for a standard). It's fine, but could be shorter. Section 5.3: OLD: \"One variation of the background-check model is where the Relying Party and the Verifier on the same machine, and so there is no need for a protocol between the two.\" Proposed: \"One variation of the background-check model is where the Relying Party and the Verifier are on the same machine, performing both functions together. In this case, there is no need for a protocol between the two.\" I think the addition is necessary or you could scratch your head wondering why no protocol is needed. If they are separate functions on the same system, you'd need some way for the relying party to access the results of the verifier. The next sentence could benefit from being made into several. OLD: \"It is also worth pointing out that the choice of model is generally up to the Relying Party, and the same device may need to create Evidence for different Relying Parties and different use cases (e.g., a network infrastructure device to gain access to the network, and then a server holding confidential data to get access to that data). As such, both models may simultaneously be in use by the same device.\" If the server holds confidential data, why does it need to create evidence to access the data? I'm not following the second example as written, so my proposed text may not be as intended. Proposed: \"It is also worth pointing out that the choice of model is generally up to the Relying Party. The same device may need to create Evidence for different Relying Parties and/or different use cases. 
For instance, a network infrastructure device may attest evidence to gain access to the network or a server holding confidential data may require attestations of evidence to gain access to the data. As such, both models may simultaneously be in use by the same device.\" Section 6: The first 3 sentences are fine, but if you wanted to reduce it, it could be one sentence. Current: \"An entity in the RATS architecture includes at least one of the roles defined in this document. As a result, the entity can participate as a constituent of the RATS architecture. Additionally, an entity can aggregate more than one role into itself. \" Proposed: \"An entity in the RATS architecture includes at least one [or more] of the roles defined in this document. \" Or more is redundant, so that really isn't necessary to convey the same point. Then the last sentence: I'd change it a little from: \"In essence, an entity that combines more than one role also creates and consumes the corresponding conceptual messages as defined in this document.\" To: \"In essence, an entity that combines more than one role may create and consume the corresponding conceptual messages as defined in this document.\" It might not, it could be the verifier and relying party. Section 5: Consider changing from: \"That is, it might appraise the trustworthiness of an application component, or operating system component or service, under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. \" To: \"That is, it might appraise the trustworthiness of an application component, operating system component, or service under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. \" Section 7: I'd say it's really a stronger level of assurance of security rather than a stronger level of security in the following: \"A stronger level of security comes when information can be vouched for by hardware or by ROM code, especially if such hardware is physically resistant to hardware tampering. \" Then with the following sentence, \"The component that is implicitly trusted is often referred to as a Root of Trust.\" I think it would be worth mentioning that a hardware RoT is immutable and mutable RoTs in software are also possible, offering different levels for an assurance of trust/security. Section 9. Since ROLIE [RFC8322] has been added to SCAP2.0, it would be good to list that as an option since a lot of time and effort went into how to secure the exchange of formatted data for that protocol. Section 11. Privacy This is a good start. Remote attestation also provides a way to profile systems as well as the user behind that system. If attestation results go higher in the stack to include containers and applications, it could reveal even more about a system or user. The scope of access needs to be emphasized, including the administrators access to data (or restrictions). If there is a way to make inferences about attestations from their processing, that should be noted as well. Section 12: This will need to state an explicit requirement for transport encryption. In the introductory paragraph, the text is as follows: \"Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy, needs to support end-to-end integrity protection and replay attack prevention, and often also needs to support additional security protections. 
For example, additional means of authentication, confidentiality, integrity, replay, denial of service and privacy protection are needed in many use cases. \" Proposed: \"Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy must support the security properties of confidentiality, integrity, and availability. A conveyance protocol includes the typical transport security considerations: o end-to-end encryption, o end-to-end integrity protection, o replay attack prevention, o denial of service protection, o authentication, o authorization, o fine grained access controls, and o logging in line with current threat models and zero trust architectures.\" Best regards, Kathleen", "new_text": "6. An entity in the RATS architecture includes at least one of the roles defined in this document. An entity can aggregate more than one role into itself. These collapsed roles combine the duties of multiple roles. In these cases, interaction between these roles do not necessarily use the Internet Protocol. They can be using a loopback device or other IP-based communication between separate environments, but they do not have to. Alternative channels to convey conceptual messages include function calls, sockets, GPIO interfaces, local busses, or hypervisor calls. This type of conveyance is typically found in Composite Devices. Most importantly, these conveyance methods are out-of-scope of RATS, but they are presumed to exist in order to convey conceptual messages appropriately between roles. For example, an entity that both connects to a wide-area network and to a system bus is taking on both the Attester and Verifier roles."} {"id": "q-en-ietf-rats-wg-architecture-7ea5fdd094304769a5734adfb3b53628251e6dcf9f2c1ce2a4ef383bf04150e3", "old_text": "role. The entity, as a system bus Verifier, may choose to fully isolate its role as a wide-area network Attester. In essence, an entity that combines more than one role also creates and consumes the corresponding conceptual messages as defined in this document. 7.", "comments": "URL Greetings! I got as far as I could with time constraints through the rest of the document. There are a few sections I'll likely go back over again as I was short on time when reviewing them, but do hope this is helpful towards improving the document. This review is from the editors revision on github that I started my review earlier int he week. Section 4.2: I find the following grouping a bit confusing: \"Places that Attesting Environments can exist include Trusted Execution Environments (TEE), embedded Secure Elements (eSE), and BIOS firmware.\" This has an attesting environment as a processor, an element, and as compiled code. That doesn't seem right, especially when you refer to the attesting environment as a \"place\". Do you mean where BIOS is executed perhaps? I would expect a TEE and maybe other components where security functions are offloaded, which is in some cases to a processor other than a TEE like a GPU (I'm not making a recommendation to do that here, just stating what is happening in ecosystems that exist). The following text is a little confusing, in particular, the last sentence: \"An execution environment may not, by default, be capable of claims collection for a given Target Environment. 
Attesting Environments are designed specifically with claims collection in mind.\" If the attesting environment is a place, TEEs and other processors were not necessarily designed with claims in mind. If you can help to explain an attesting environment better, I can help to phrase it more clearly to convey the intended set of points as I think there are several. Section 4.3 Consider changing from: \"By definition, the Attester role takes on the duty to create Evidence.\" To: \"By definition, the Attester role creates Evidence.\" The following sentence is a run on and I think some additional information will help provide clarity. I don't think it's the role that is composed of nest environments. Current: \"The fact that an Attester role is composed of environments that can be nested or staged adds complexity to the architectural layout of how an Attester can be composed and therefore has to conduct the Claims collection in order to create believable attestation Evidence.\" How about: \"An Attester may be one or more nested or staged environments, adding complexity to the architectural structure. The unifying component is the root of trust and the nested, staged, or chained attestations produced. The nested or chained structure should include claims, collected by the attester to aid in the assurance or believability of the attestation Evidence.\" The following sentence should be broken into two: \"This example could be extended further by, say, making the kernel become another Attesting Environment for an application as another Target Environment, resulting in a third set of Claims in the Evidence pertaining to that application.\" Proposed: \"This example could be extended further by making the kernel become another Attesting Environment for an application as another Target Environment. This results in a third set of Claims in the Evidence pertaining to that application.\" How about changing the following sentence from: \"Among these routers, there is only one main router that connects to the Verifier. \" To: \"A multi-chassis router, provides a management point and is the only one that connects to the Verifier. \" For the following sentence, trying to remove the use of a word twice in a row and convey the same intent. Old: \"After collecting the Evidence of other Attesters, this inside Verifier verifies them using Endorsements and Appraisal Policies (obtained the same way as any other Verifier), to generate Attestation Results.\" Proposed: \"After collecting the Evidence of other Attesters, this inside Verifier uses Endorsements and Appraisal Policies (obtained the same way as any other Verifier) in the verification process to generate Attestation Results.\" I may try another pass at section 4 to improve readability. Section 5, Consider breaking this into 2 sentences, from: \"This section includes some reference models, but this is not intended to be a restrictive list, and other variations may exist.\" To: \"This section includes some reference models. This is not intended to be a restrictive list, and other variations may exist.\" Section 5.1 Two result words together is not necessary. Change from: \"The second way in which the process may fail is when the resulting Result is examined by the Relying Party, and based upon the Appraisal Policy, the result does not pass the policy. \" To: \"The second way in which the process may fail is when the Result is examined by the Relying Party, and based upon the Appraisal Policy, the result does not pass the policy. 
\" I suggest making the last paragraph the second paragraph. It's easier to read than the others and provides an introduction to the model before you begin talking about how it might fail. ** I may come back to this section to provide wording suggestions. Section 5.2 I suggest moving the last paragraph to the first in this instance. It also provides a nice high-level overview. The current paragraph 1&2 flow well together, hence this order suggestion. The third paragraph is a really long way of stating that interoperability is required. Entities must be able to create, send, receive, and process data (hence the need for a standard). It's fine, but could be shorter. Section 5.3: OLD: \"One variation of the background-check model is where the Relying Party and the Verifier on the same machine, and so there is no need for a protocol between the two.\" Proposed: \"One variation of the background-check model is where the Relying Party and the Verifier are on the same machine, performing both functions together. In this case, there is no need for a protocol between the two.\" I think the addition is necessary or you could scratch your head wondering why no protocol is needed. If they are separate functions on the same system, you'd need some way for the relying party to access the results of the verifier. The next sentence could benefit from being made into several. OLD: \"It is also worth pointing out that the choice of model is generally up to the Relying Party, and the same device may need to create Evidence for different Relying Parties and different use cases (e.g., a network infrastructure device to gain access to the network, and then a server holding confidential data to get access to that data). As such, both models may simultaneously be in use by the same device.\" If the server holds confidential data, why does it need to create evidence to access the data? I'm not following the second example as written, so my proposed text may not be as intended. Proposed: \"It is also worth pointing out that the choice of model is generally up to the Relying Party. The same device may need to create Evidence for different Relying Parties and/or different use cases. For instance, a network infrastructure device may attest evidence to gain access to the network or a server holding confidential data may require attestations of evidence to gain access to the data. As such, both models may simultaneously be in use by the same device.\" Section 6: The first 3 sentences are fine, but if you wanted to reduce it, it could be one sentence. Current: \"An entity in the RATS architecture includes at least one of the roles defined in this document. As a result, the entity can participate as a constituent of the RATS architecture. Additionally, an entity can aggregate more than one role into itself. \" Proposed: \"An entity in the RATS architecture includes at least one [or more] of the roles defined in this document. \" Or more is redundant, so that really isn't necessary to convey the same point. Then the last sentence: I'd change it a little from: \"In essence, an entity that combines more than one role also creates and consumes the corresponding conceptual messages as defined in this document.\" To: \"In essence, an entity that combines more than one role may create and consume the corresponding conceptual messages as defined in this document.\" It might not, it could be the verifier and relying party. 
Section 5: Consider changing from: \"That is, it might appraise the trustworthiness of an application component, or operating system component or service, under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. \" To: \"That is, it might appraise the trustworthiness of an application component, operating system component, or service under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. \" Section 7: I'd say it's really a stronger level of assurance of security rather than a stronger level of security in the following: \"A stronger level of security comes when information can be vouched for by hardware or by ROM code, especially if such hardware is physically resistant to hardware tampering. \" Then with the following sentence, \"The component that is implicitly trusted is often referred to as a Root of Trust.\" I think it would be worth mentioning that a hardware RoT is immutable and mutable RoTs in software are also possible, offering different levels for an assurance of trust/security. Section 9. Since ROLIE [RFC8322] has been added to SCAP2.0, it would be good to list that as an option since a lot of time and effort went into how to secure the exchange of formatted data for that protocol. Section 11. Privacy This is a good start. Remote attestation also provides a way to profile systems as well as the user behind that system. If attestation results go higher in the stack to include containers and applications, it could reveal even more about a system or user. The scope of access needs to be emphasized, including the administrators access to data (or restrictions). If there is a way to make inferences about attestations from their processing, that should be noted as well. Section 12: This will need to state an explicit requirement for transport encryption. In the introductory paragraph, the text is as follows: \"Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy, needs to support end-to-end integrity protection and replay attack prevention, and often also needs to support additional security protections. For example, additional means of authentication, confidentiality, integrity, replay, denial of service and privacy protection are needed in many use cases. \" Proposed: \"Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy must support the security properties of confidentiality, integrity, and availability. A conveyance protocol includes the typical transport security considerations: o end-to-end encryption, o end-to-end integrity protection, o replay attack prevention, o denial of service protection, o authentication, o authorization, o fine grained access controls, and o logging in line with current threat models and zero trust architectures.\" Best regards, Kathleen", "new_text": "role. The entity, as a system bus Verifier, may choose to fully isolate its role as a wide-area network Attester. In essence, an entity that combines more than one role creates and consumes the corresponding conceptual messages as defined in this document. 7."} {"id": "q-en-ietf-rats-wg-architecture-072a1fedee59cd1ae661236a08739bc482ea51071515732266808e8a6b9453d6", "old_text": "12. 
Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy must support end-to-end", "comments": "This is to partially\nIn section 4.2 there is a list of \"places an attesting environment can exist\" that is limited. The text should be softened to say the list is only examples and there is no formal limitation or requirement on the security of an Attester. There should probably be a detailed description of Attester security in the document, but it should be in Security Considerations. This seems like a glaring omission from Security Considerations.\nThe considerations weren't the extent of the issue. There is still text in 4.2 that I think needs changing.", "new_text": "12. 12.1. Implementers need to pay close attention to the isolation and protection of the Attester and the factory processes for provisioning the Attestation Key Material. When either of these are compromised, the remote attestation becomes worthless because the attacker can forge Evidence. Remote attestation applies to use cases with a range of security requirements, so the protections discussed here range from low to high security where low security may be only application or process isolation by the device's operating system and high security involves specialized hardware to defend against physical attacks on a chip. 12.1.1. It is assumed that the Attester is located in an isolated environment of a device like a process, a dedicated chip a TEE or such that collects the Claims, formats them and signs them with an Attestation Key. The Attester must be protected from unauthorized modification to ensure it behaves correctly. There must also be confidentiality so that the signing key is not captured and used elsewhere to forge evidence. In many cases the user or owner of the device must not be able to modify or exfiltrate keys from the Attesting Environment of the Attester. For example the owner or user of a mobile phone or FIDO authenticator is not trusted. The point of remote attestation is for the Relying Party to be able to trust the Attester even though they don't trust the user or owner. Some of the measures for low level security include process or application isolation by a high-level operating system, and perhaps restricting access to root or system privilege. For extremely simple single-use devices that don't use a protected mode operating system, like a Bluetooth speaker, the isolation might only be the plastic housing for the device. At medium level security, a special restricted operating environment like a Trusted Execution Environment (TEE) might be used. In this case, only security-oriented software has access to the Attester and key material. For high level security, specialized hardware will likely be used providing protection against chip decapping attacks, power supply and clock glitching, faulting injection and RF and power side channel attacks. 12.1.2. Attestation key provisioning is the process that occurs in the factory or elsewhere that establishes the signing key material on the device and the verification key material off the device. Sometimes this is referred to as \"personalization\". One way to provision a key is to first generate it external to the device and then copy the key onto the device. In this case, confidentiality of the generator, as well as the path over which the key is provisioned, is necessary. This can be achieved in a number of ways. 
Confidentiality can be achieved entirely with physical provisioning facility security involving no encryption at all. For low-security use cases, this might be simply locking doors and limiting personnel that can enter the facility. For high-security use cases, this might involve a special area of the facility accessible only to select security-trained personnel. Cryptography can also be used to support confidentiality, but keys that are used to then provision attestation keys must somehow have been provisioned securely beforehand (a recursive problem). In many cases both some physical security and some cryptography will be necessary and useful to establish confidentiality. Another way to provision the key material is to generate it on the device and export the verification key. If public key cryptography is being used, then only integrity is necessary. Confidentiality is not necessary. In all cases, the Attestation Key provisioning process must ensure that only attestation key material that is generated by a valid Endorser is established in Attesters and then configured correctly. For many use cases, this will involve physical security at the facility, to prevent unauthorized devices from being manufactured that may be counterfeit or incorrectly configured. 12.2. Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy must support end-to-end"} {"id": "q-en-ietf-rats-wg-architecture-963267e5e29b79e06e3b1abd80595536fb774f3202bb9b999a5e023c54477c98", "old_text": "An Attester creates Evidence that is conveyed to a Verifier. The Verifier uses the Evidence, and any Endorsements from Endorsers, by applying an Evidence Appraisal Policy to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Appraisal Policy for Evidence might be obtained from an Endorser along with the Endorsements, or might be", "comments": "Change \"Evidence Appraisal Policy\" -> \"Appraisal Policy for Evidence\" Fixes editorial nit reported to me by Tolga Acar Signed-off-by: Dave Thaler\nlooks good.", "new_text": "An Attester creates Evidence that is conveyed to a Verifier. The Verifier uses the Evidence, and any Endorsements from Endorsers, by applying an Appraisal Policy for Evidence to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Appraisal Policy for Evidence might be obtained from an Endorser along with the Endorsements, or might be"} {"id": "q-en-ietf-rats-wg-architecture-25975f0f58bab1de235a101d28f953d2f9dd0a643b751283271f01711038e46a", "old_text": "by applying an Appraisal Policy for Evidence to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Appraisal Policy for Evidence might be obtained from an Endorser along with the Endorsements, or might be obtained via some other mechanism such as being configured in the Verifier by an administrator. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. The Appraisal Policy for Attestation Results might, for example, be configured in the Relying Party by an administrator. 4.1.", "comments": "Also, my assumption is that all messages, configs and such go over one of the arrows in the figure. There are no back channels for things that are vital to or a major part of attestation. 
Seems like it would be a poor document if the major parts of attestation are not explicitly carried in the flows depicted.", "new_text": "by applying an Appraisal Policy for Evidence to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Appraisal Policy for Evidence might be obtained from an Endorser along with the Endorsements, and/or might be obtained via some other mechanism such as being configured in the Verifier by the Verifier Owner. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. The Appraisal Policy for Attestation Results is configured in the Relying Party by the Relying Party Owner, and/or is programmed into the Relying Party. 4.1."} {"id": "q-en-ietf-rats-wg-architecture-2c6d16f39d04cf3a45b18b8a3b11484cb97ad901d6ce52cb5759e9e0659bdbc3", "old_text": "12. Special thanks go to David Wooten, Joerg Borchert, Hannes Tschofenig, Laurence Lundblade, Diego Lopez, Jessica Fitzgerald-McKay, Frank Xia, and Nancy Cam-Winget.", "comments": "Merged from draft-thaler-rats-URL Signed-off-by: Dave Thaler\nPR and PR merged text that I had, and continue to have, problems with. A summary of the problems follows: 1) It includes events (CC, RP) that are not used in any checks in the text. I had pointed this out before when the PR was generated, but the PR was merged without addressing this comment. 2) The section about using timestamps with synchronized clocks was removed and replaced with a section that talks about a different mechanism (handles) that is not in any RFC or WG document. I believe that such replacement is inappropriate. If the WG adopts a document that talks about a handle distribution mechanism, then a section can be added. Or if there is a standard from another org that can be referenced then it's fine to add one in that case too. But I think it should not replace the section discussing the well known technique of synchronized clocks (e.g., using secure NTP, or PTP (IEEE 1588-2002) or whatever other standard mechanism). 3) time(HD) is problematic as phrased since all the other events are times as seen by a single entity, whereas time(HD) does not clearly say whose perspective (i.e., whose clock) it is from... is it the distributor's? Text is unclear. 4) It talks about nonces in a timestamp section, which I think causes confusion since there's already a separately section about nonces and this section is labeled to be scoped to timestamps. Addressing might address this naturally, if it uses a different section title. 5) There are some editorial typos, such as capital \"The\" in the middle of a sentence, etc. 6) I have no idea what \"delta(time(HD),time(EG))distribution-interval\" is supposed to mean. Normally I would expect an equality for any condition to be evaluated, but I have no idea what this check is. 7) time(HD) and time(EG) are according to different clocks (if my guess about point 3 above is right) and so the delta is not directly comparable. I'm not sure how point 6 accounts for this. 8) The phrase \"A nonce not predictable to an Attester (recentness & uniqueness) is sent to an Attester\" has \"(recentness & uniqueness)\" which doesn't read well (sounds like bad grammar) to me, and is not explained.\nHopefully PR now addresses this issue.\nPR reverts the text, leaving the work in progress in PR\nfixed by .", "new_text": "12. This document does not require any actions by IANA. 13. 
Special thanks go to David Wooten, Joerg Borchert, Hannes Tschofenig, Laurence Lundblade, Diego Lopez, Jessica Fitzgerald-McKay, Frank Xia, and Nancy Cam-Winget."} {"id": "q-en-ietf-rats-wg-architecture-d0c122e82b3940419986bb91f7c1170a4c54293e3ec50c9b1192e4422f7eda02", "old_text": "Environment, then this sub-entity generates Evidence about its trustworthiness. Therefore each sub-entity can be called an Attester. Among these Attesters, there may be only some, which can be called Lead Attesters, that have the communication functionality with the Verifier. Other Attesters don't have this ability, but they are connected to the Lead Attesters via internal links or network connections, and they are evaluated via the Lead Attester's help. For example, a carrier-grade router is a composite device consisting of a chassis and multiple slots. The trustworthiness of the router depends on all its slots' trustworthiness. Each slot has an Attesting Environment such as a TPM or TEE collecting the claims of its boot process, then it generates Evidence from the claims to prove its trustworthiness. Among these slots, only a main slot can communicate with the Verifier while other slots cannot. But other slots can communicate with the main slot by the links between them inside the router. So the main slot collects the Evidence of other slots, produces the final Evidence of the whole router and conveys the final Evidence to the Verifier. Therefore the router is a Composite Attester, each slot is an Attester and the main slot is the Lead Attester. Another example is a multi-chassis router which is composed of multiple single carrier-grade routers. The multi-chassis router provides higher throughput by interconnecting multiple routers and simpler management by being logically treated as one router. Among these routers, there is only one main router that connects to the Verifier. Other routers are only connected to the main router by the network cables, and therefore they are managed and verified via this main router. So, in this case, the multi-chassis router is the Composite Attester, each router is an Attester and the main router is the Lead Attester. composite depicts the data that flows between the Composite Attester and Verifier for the remote attestation. In the Composite Attester, each Attester generates its own Evidence by its Attesting Environments collecting the claims from its Target Environments. The Lead Attester collects the Evidence of all other Attesters and then generates the Evidence of the whole Composite Attester. Inside the Lead Attester, there may be an optional Verifying Environment. The Verifying Environment can verify the collected Evidence of other Attesters to evaluate their trustworthiness. Therefore, there are two situations when the Lead Attester generates the final Evidence. One situation is that the Lead Attester has no Verifying Environment. In this situation, the Lead Attester just collects the Evidence of other Attesters but doesn't verify them. It may just string all these Evidence into a whole one, or it may reorganize these Evidence with a new structure and sign this final Evidence. Then it conveys the final Evidence to the Verifier and the Verifier evaluates the Composite Attester's, including the Lead Attester's and other Attesters', trustworthiness. The other situation is that the Lead Attester has a Verifying Environment. 
After collecting the Evidence of other Attesters, the Lead Attester verifies these Evidence by using the Endorsements and Appraisal Policies, which are got from the Verifier or some reliable parties, for evaluating these Attesters' trustworthiness. Then the Lead Attester makes the verification results as claims which are the input to the final Evidence of the whole Composite Attester. Then the Lead Attester conveys the final Attestation Evidence to the Verifier on behalf of the Composite Attester. Before receiving the Endorsements and Appraisal Policies for other Attesters, to increase the security, the Lead Attester may first generate Evidence about its trustworthiness and convey this Evidence to the Verifier for evaluating. 5.", "comments": "This attempts to address the remaining feedback on PR on issues that were not fixed before it was merged. Signed-off-by: Dave Thaler\nThe simple word choice / grammar changes seem OK. I still don't understand the difference between a \"claims collector\" and a \"verifier\" (in the case where Attester B, C,... are signing evidence) and a \"target environment\" (in the case where Attester B, C, ... are not signing evidence). It seems this term isn't actually necessary. The case where Composite Attester is relying Evidence from \"sub\" attesters, the composite attester (aka claims collector) is relying Evidence not \"collecting claims\" in the sense that an attester \"collects claims about a target environment\". I think it is confusing to overload this terminology to also mean \"rely Evidence\".", "new_text": "Environment, then this sub-entity generates Evidence about its trustworthiness. Therefore each sub-entity can be called an Attester. Among these Attesters, there may be only some, which can be called Lead Attesters, that have the ability to communicate with the Verifier. Other Attesters don't have this ability, but they are connected to the Lead Attesters via internal links or network connections, and they are evaluated via the Lead Attester's help. For example, a carrier-grade router is a composite device consisting of a chassis and multiple slots. The trustworthiness of the router depends on all its slots' trustworthiness. Each slot has an Attesting Environment such as a TPM or TEE collecting the claims of its boot process, after which it generates Evidence from the claims. Among these slots, only a main slot can communicate with the Verifier while other slots cannot. But other slots can communicate with the main slot by the links between them inside the router. So the main slot collects the Evidence of other slots, produces the final Evidence of the whole router and conveys the final Evidence to the Verifier. Therefore the router is a Composite Attester, each slot is an Attester, and the main slot is the Lead Attester. Another example is a multi-chassis router composed of multiple single carrier-grade routers. The multi-chassis router provides higher throughput by interconnecting multiple routers and simpler management by being logically treated as one router. Among these routers, there is only one main router that connects to the Verifier. Other routers are only connected to the main router by the network cables, and therefore they are managed and verified via this main router. So, in this case, the multi-chassis router is the Composite Attester, each router is an Attester and the main router is the Lead Attester. composite depicts the conceptual data flow for a Composite Attester. 
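[ Editor's illustrative sketch, not part of any referenced draft: a minimal Python sketch of a Lead Attester's collection function that has no internal Verifier, so it simply bundles the Evidence of the sub-Attesters (e.g., router slots) with its own and signs the result as the Evidence of the whole Composite Attester. All structures and names are invented for illustration. ]

    import json

    def collect_composite_evidence(lead_evidence: dict,
                                   sub_attester_evidence: list,
                                   sign) -> dict:
        # Sub-Attester Evidence is conveyed as-is; appraisal is left to the
        # remote Verifier in this variant.
        bundle = {"lead": lead_evidence, "sub_attesters": sub_attester_evidence}
        payload = json.dumps(bundle, sort_keys=True).encode()
        return {"payload": bundle, "signature": sign(payload)}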
In the Composite Attester, each Attester generates its own Evidence by its Attesting Environment(s) collecting the claims from its Target Environment(s). The Lead Attester collects the Evidence of all other Attesters and then generates the Evidence of the whole Composite Attester. The Lead Attester's Claims Collector may or may not include its own Verifier. One situation is that the Claims Collector has no internal Verifier. In this situation, the Claims Collector simply combines the various pieces of Evidence into the final Evidence that is sent off to the remote Verifier, which evaluates the Composite Attester's, including the Lead Attester's and other Attesters', trustworthiness. The other situation is that the Lead Attester's Claims Collector has an internal Verifier. After collecting the Evidence of other Attesters, the Claims Collector verifies them using Endorsements and Appraisal Policies (obtained the same way as any other Verifier), for evaluating these Attesters' trustworthiness. Then the Claims Collector combines the Attestation Results into the final Evidence of the whole Composite Attester which is sent off to the remote Verifier, which might treat the claims obtained from the local Attestation Results as if they were Evidence. 5."} {"id": "q-en-ietf-rats-wg-architecture-0fc07fe197e2f01a18d704efde1e9e8f6962168509e835ee686942e4137d04b8", "old_text": "An Attester creates Evidence that is conveyed to a Verifier. The Verifier uses the Evidence, and any Endorsements from Endorsers, by applying an Appraisal Policy for Evidence to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Appraisal Policy for Evidence might be obtained from an Endorser along with the Endorsements, and/or might be obtained via some other mechanism such as being configured in the Verifier by the Verifier Owner. The Relying Party uses Attestation Results by applying its own appraisal policy to make application-specific decisions such as", "comments": "adding the Reference Values to fix issue 185\n` And the Verifier uses the Reference Values\nI agree that Reference Values should be mentioned here.", "new_text": "An Attester creates Evidence that is conveyed to a Verifier. The Verifier uses the Evidence, any Reference Values from Reference Value Providers, and any Endorsements from Endorsers, by applying an Appraisal Policy for Evidence to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Appraisal Policy for Evidence might be obtained from an Endorser along with the Endorsements, and/or might be obtained via some other mechanism such as being configured in the Verifier by the Verifier Owner. The Relying Party uses Attestation Results by applying its own appraisal policy to make application-specific decisions such as"} {"id": "q-en-ietf-rats-wg-architecture-1fc45397fcdd39a5db09f50319023a72ce5062080292aa61f57c0830568f9b33", "old_text": "It is also worth pointing out that the choice of model is generally up to the Relying Party. The same device may need to create Evidence for different Relying Parties and/or different use cases. For instance, it would provide Evidence to a network infrastructure device to gain access to the network, and to a server holding confidential data to gain access to that data. As such, both models may simultaneously be in use by the same device.
combination shows another example of a combination where Relying Party 1 uses the passport model, whereas Relying Party 2 uses an", "comments": "update the \"Combinations\" section to fix issue .\nHow does this show that both models are required in this example? I suggest changing this sentence in this way. Because I think the combinations have several possibilities and there is no need to detail all possibilities. Any other thoughts?", "new_text": "It is also worth pointing out that the choice of model is generally up to the Relying Party. The same device may need to create Evidence for different Relying Parties and/or different use cases. For instance, it would use one model to provide Evidence to a network infrastructure device to gain access to the network, and the other model to provide Evidence to a server holding confidential data to gain access to that data. As such, both models may simultaneously be in use by the same device. combination shows another example of a combination where Relying Party 1 uses the passport model, whereas Relying Party 2 uses an"} {"id": "q-en-ietf-rats-wg-architecture-cccb07732ef6ef82b91203deb1e3ba8d26ba94549e2d655ef99820e6d0894088", "old_text": "As a system bus-connected entity, a Verifier consumes Evidence from other devices connected to the system bus that implement Attester roles. As a wide-area network connected entity, it may implement an Attester role. The entity, as a system bus Verifier, may choose to fully isolate its role as a wide-area network Attester. In essence, an entity that combines more than one role creates and consumes the corresponding conceptual messages as defined in this", "comments": "Signed-off-by: Dave Thaler\n` Isolate? From what? \u201cIsolation\u201d seems to be used several times in this and later sections without saying who\u2019s isolating what from whom.\nI don't understand that sentence either. Propose removing it. Another sentence: Propose removing \"isolation and\" Other uses of isolation I think are fine, where it means that an Attesting Environment cannot be modified by a Target Environment it measures. Any way of protecting that might be called \"isolating\" the two environments from each other in some way.", "new_text": "As a system bus-connected entity, a Verifier consumes Evidence from other devices connected to the system bus that implement Attester roles. As a wide-area network connected entity, it may implement an Attester role. In essence, an entity that combines more than one role creates and consumes the corresponding conceptual messages as defined in this"} {"id": "q-en-ietf-rats-wg-architecture-cccb07732ef6ef82b91203deb1e3ba8d26ba94549e2d655ef99820e6d0894088", "old_text": "12.1. Implementers need to pay close attention to the isolation and protection of the Attester and the factory processes for provisioning the Attestation key material. If either of these are compromised, the remote attestation becomes worthless because the attacker can forge Evidence. Remote attestation applies to use cases with a range of security requirements, so the protections discussed here range from low to", "comments": "Signed-off-by: Dave Thaler\n` Isolate? From what? \u201cIsolation\u201d seems to be used several times in this and later sections without saying who\u2019s isolating what from whom.\nI don't understand that sentence either. Propose removing it. 
Another sentence: Propose removing \"isolation and\" Other uses of isolation I think are fine, where it means that an Attesting Environment cannot be modified by a Target Environment it measures. Any way of protecting that might be called \"isolating\" the two environments from each other in some way.", "new_text": "12.1. Implementers need to pay close attention to the protection of the Attester and the factory processes for provisioning the Attestation key material. If either of these are compromised, the remote attestation becomes worthless because an attacker can forge Evidence or manipulate the Attesting Environment. For example, a Target Environment should not be able to tamper with the Attesting Environment that measures it, by isolating the two environments from each other in some way. Remote attestation applies to use cases with a range of security requirements, so the protections discussed here range from low to"} {"id": "q-en-ietf-rats-wg-architecture-1dc1b1d34b66c29a012b87e0a018d7137c2244fd0da361ee55c114dd780fcc2b", "old_text": "vouched for via Endorsements because no Evidence is generated about them are referred to as roots of trust. The manufacturer of the Attester arranges for its Attesting Environment to be provisioned with key material. The key material is typically in the form of an asymmetric key pair (e.g., an RSA or ECDSA private key and a manufacturer-signed IDevID certificate) secured in the Attester. The Verifier is provided with an appropriate trust anchor, or provided with a database of public keys (rather than certificates), or even carefully secured lists of symmetric keys. The nature of how the Verifier manages to validate the signatures produced by the Attester is critical to the secure operation of an Attestation system, but is not the subject of standardization within this architecture. A conveyance protocol that provides authentication and integrity protection can be used to convey unprotected Evidence, assuming the", "comments": "` 1) Is this a normative statement? 2) I think the next paragraph says that maybe it\u2019s not an asymmetric key, and anyways, keys are out of scope for RATS Arch\nIn a layered device architecture the previous layer \"provisions\" the next layer. Hence, it isn't possible for the 'manufacturer' entity to do that. I don't know why a section on \"Verifier\" is going into so much detail about the Attester.", "new_text": "vouched for via Endorsements because no Evidence is generated about them are referred to as roots of trust. The manufacturer having arranged for an Attesting Environment to be provisioned with key material with which to sign Evidence, the Verifier is then provided with some way of verifying the signature on the Evidence. This may be in the form of an appropriate trust anchor, or the Verifier may be provided with a database of public keys (rather than certificates) or even carefully secured lists of symmetric keys. The nature of how the Verifier manages to validate the signatures produced by the Attester is critical to the secure operation of an Attestation system, but is not the subject of standardization within this architecture. A conveyance protocol that provides authentication and integrity protection can be used to convey unprotected Evidence, assuming the"} {"id": "q-en-ietf-rats-wg-architecture-d03e0eea170856cd74db46d15f1762751b527022f221ba56cf3630a8ccb66fad", "old_text": "boot, or immutable hardware/ROM. It is also important that the appraisal policy was itself obtained securely. 
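[ Editor's illustrative sketch, not part of any referenced draft: one possible shape of the signature check described above, where a Verifier holds a database of public keys rather than certificates. Key management and identifier formats are out of scope of the architecture and are assumed here purely for illustration. ]

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def verify_evidence(key_db: dict, key_id: str,
                        evidence: bytes, signature: bytes) -> bool:
        raw = key_db.get(key_id)
        if raw is None:
            return False  # no trust anchor known for this Attester
        public_key = ed25519.Ed25519PublicKey.from_public_bytes(raw)
        try:
            public_key.verify(signature, evidence)
            return True
        except InvalidSignature:
            return False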
As such, if appraisal policies for a Relying Party or for a Verifier can be configured via a network protocol, the ability to create Evidence about the integrity of the entity providing the appraisal policy needs to be considered. The security of conveyed information may be applied at different layers, whether by a conveyance protocol, or an information encoding", "comments": "` Does this mean to imply that bogus policy can\u2019t be installed by sneakernet, e.g., on a usb thumb drive??\nI don't think the 'via a network protocol' is necessary. The consideration is simply that policies have security considerations that are out of scope of the architecture.", "new_text": "boot, or immutable hardware/ROM. It is also important that the appraisal policy was itself obtained securely. If an attacker can configure appraisal policies for a Relying Party or for a Verifier, then integrity of the process is compromised. The security of conveyed information may be applied at different layers, whether by a conveyance protocol, or an information encoding"} {"id": "q-en-ietf-rats-wg-architecture-4560e78fd87d0dfa46462bb86529df8f7452ac7f67c9c150d63bb0f9f4ead188", "old_text": "Claims about a root of trust typically are asserted by Endorsers. The device illustrated in layered includes (A) a BIOS stored in read- only memory, (B) an updatable bootloader, and (C) an operating system kernel. Attesting Environment A, the read-only BIOS in this example, has to ensure the integrity of the bootloader (Target Environment B). There", "comments": "` The loader environment only exists for a second or two, and has few resources\u2026 how it is supposed to report evidence to a verifier in that window?\nRefers to figure 3, in section 3.4, on layered attestation.\nIt is rare that early layers generate or convey Evidence. In critical system with appropriate protected capabilities that actually happens. I can see that it is not a common case and would not argue strongly against removing 'B' from that conveyance arc.\nPerhaps a better example for which the loader environment is not ephemeral would be where B is the OS and C is an application/workload process.", "new_text": "Claims about a root of trust typically are asserted by Endorsers. The device illustrated in layered includes (A) a BIOS stored in read- only memory, (B) an operating system kernel, and (C) an application or workload. Attesting Environment A, the read-only BIOS in this example, has to ensure the integrity of the bootloader (Target Environment B). There"} {"id": "q-en-ietf-rats-wg-architecture-4e9a2517b97b09a01f779c7078356b66c54b947cde9d790190bb63a7b4316ab9", "old_text": "in read-only memory, (B) a bootloader, and (C) an operating system kernel. The first Attesting Environment, the read-only BIOS in this example, has to ensure the integrity of the bootloader (the first Target Environment). There are potentially multiple kernels to boot, and the decision is up to the bootloader. Only a bootloader with intact integrity will make an appropriate decision. Therefore, the Claims", "comments": "Changes \"read-only BIOS\" to \"ROM\" to align with the diagram\nI think the text \u201cthe read-only BIOS in this example, has to ensure the integrity of the bootloader\u201d is wrong since the example no longer has a \u201cread-only BIOS\u201d component. It should say \u201cROM\u201d to be consistent. 
If ROM is determined to be ambiguous as a Root of Trust and we decide to changed it to \"RoT\" then this sentence would be updated accordingly.\nNAME do you want to fix this still?\nI'll add it to my list.", "new_text": "in read-only memory, (B) a bootloader, and (C) an operating system kernel. The first Attesting Environment, the ROM in this example, has to ensure the integrity of the bootloader (the first Target Environment). There are potentially multiple kernels to boot, and the decision is up to the bootloader. Only a bootloader with intact integrity will make an appropriate decision. Therefore, the Claims"} {"id": "q-en-ietf-rats-wg-architecture-80c2b366fb4c571071870aec2b6042eb1aaab27f57bc7c07596822f56555a86f", "old_text": "whether in the same conveyance protocol as part of the Evidence or not. 4. RFC4949 has defined a number of terms that are also used in this", "comments": "attempt to provide forward reference as requested on Monday\nThis looks good to me, although I wonder whether the sentence would make more sense at the end of the \"Implementation Considerations\" section (a couple lines earlier in the document). I am ok either way.\nWorks for me. That would put that into section 1.\n\"artifacts are defined by which roles produce them or consume them\" - This sentence structure seems awkward. Is \"artifacts are defined by the roles that produce or consume them\" better?LGTM", "new_text": "whether in the same conveyance protocol as part of the Evidence or not. As explained in overview, artifacts are defined by which roles produce them or consume them. In some protocol instantiations, other roles may cache and forward artifacts as opaque data, to one or more entities implementing the consuming role. 4. RFC4949 has defined a number of terms that are also used in this"} {"id": "q-en-ietf-rats-wg-architecture-eb5f0f144c2fe5207e7508449cf984389701b15595f89dbcc839d9835df9d4e3", "old_text": "obtained via some other mechanism such as being configured in the Verifier by an administrator. For example, for some claims the Verifier might check the values of claims in the Evidence against constraints specified in the Appraisal Policy for Evidence. Such constraints might involve a comparison for equality against reference values, or a check for being in a range bounded by reference values, or membership in a set of reference values, or a check against values in other claims, or any other test. Such reference values might be specified as part of the Appraisal Policy for Evidence itself, or might be obtained from a separate source, such as an Endorsement, and then used by the Appraisal Policy for Evidence. The actual data format and semantics of a known-good value are specific to claims and implementations. There is no general purpose format for them or general means for comparison defined in this architecture document. Similarly, for some claims the Verifier might check the values of claims in the Evidence for membership in a set, or against a range of values, or against known- bad values such as an expiration time. These reference values may be conveyed to the Verifier as part of an Endorsement or as part of Appraisal Policy or both as these are the two input paths to the Verifier. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. 
The Attestation Result Appraisal Policy", "comments": "Previously the text was specific to policy for Evidence, but the same applies to policy for appraising Attestation Results, so generalizing the text. This also removes a redundant sentence where older text and newly merged text from this morning said the same thing. And the diffs look larger than they really are, since some text was just moved around to be in a different order. Signed-off-by: Dave Thaler", "new_text": "obtained via some other mechanism such as being configured in the Verifier by an administrator. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. The Attestation Result Appraisal Policy"} {"id": "q-en-ietf-rats-wg-architecture-eb5f0f144c2fe5207e7508449cf984389701b15595f89dbcc839d9835df9d4e3", "old_text": "4.1. An Attester consists of at least one Attesting Environment and at least one Target Environment. In some implementations, the Attesting and Target Environments might be combined. Other implementations", "comments": "Previously the text was specific to policy for Evidence, but the same applies to policy for appraising Attestation Results, so generalizing the text. This also removes a redundant sentence where older text and newly merged text from this morning said the same thing. And the diffs look larger than they really are, since some text was just moved around to be in a different order. Signed-off-by: Dave Thaler", "new_text": "4.1. The Verifier, when appraising Evidence, or the Relying Party, when appraising Attestation Results, checks the values of some claims against constraints specified in its Appraisal Policy. Such constraints might involve a comparison for equality against a reference value, or a check for being in a range bounded by reference values, or membership in a set of reference values, or a check against values in other claims, or any other test. Such reference values might be specified as part of the Appraisal Policy itself, or might be obtained from a separate source, such as an Endorsement, and then used by the Appraisal Policy. The actual data format and semantics of any reference values are specific to claims and implementations. This architecture document does not define any general purpose format for them or general means for comparison. 4.2. An Attester consists of at least one Attesting Environment and at least one Target Environment. In some implementations, the Attesting and Target Environments might be combined. Other implementations"} {"id": "q-en-ietf-rats-wg-architecture-eb5f0f144c2fe5207e7508449cf984389701b15595f89dbcc839d9835df9d4e3", "old_text": "Environments are designed specifically with claims collection in mind. 4.2. By definition, the Attester role takes on the duty to create Evidence. The fact that an Attester role is composed of several", "comments": "Previously the text was specific to policy for Evidence, but the same applies to policy for appraising Attestation Results, so generalizing the text. This also removes a redundant sentence where older text and newly merged text from this morning said the same thing. And the diffs look larger than they really are, since some text was just moved around to be in a different order. Signed-off-by: Dave Thaler", "new_text": "Environments are designed specifically with claims collection in mind. 4.3. By definition, the Attester role takes on the duty to create Evidence. 
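[ Editor's illustrative sketch, not part of any referenced draft: the claim checks described above (equality with a reference value, range bounds, set membership) could look roughly like the following. The policy encoding is invented; the architecture deliberately defines no general-purpose format for reference values. ]

    def appraise(claims: dict, policy: dict) -> bool:
        for name, rule in policy.items():
            value = claims.get(name)
            if "equals" in rule and value != rule["equals"]:
                return False
            if "one_of" in rule and value not in rule["one_of"]:
                return False
            if "range" in rule:
                lo, hi = rule["range"]
                if value is None or not (lo <= value <= hi):
                    return False
        return True

    # Example: require an exact bootloader digest and a firmware version
    # within an accepted range.
    policy = {"bootloader_digest": {"equals": "abc123"},
              "fw_version": {"range": (7, 9)}}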
The fact that an Attester role is composed of several"} {"id": "q-en-ietf-rats-wg-architecture-eb5f0f144c2fe5207e7508449cf984389701b15595f89dbcc839d9835df9d4e3", "old_text": "Therefore, creating a layered boot sequence and correspondingly enabling Layered Attestation. 4.3. A Composite Device is an entity composed of multiple sub-entities such that its trustworthiness has to be determined by the appraisal", "comments": "Previously the text was specific to policy for Evidence, but the same applies to policy for appraising Attestation Results, so generalizing the text. This also removes a redundant sentence where older text and newly merged text from this morning said the same thing. And the diffs look larger than they really are, since some text was just moved around to be in a different order. Signed-off-by: Dave Thaler", "new_text": "Therefore, creating a layered boot sequence and correspondingly enabling Layered Attestation. 4.4. A Composite Device is an entity composed of multiple sub-entities such that its trustworthiness has to be determined by the appraisal"} {"id": "q-en-ietf-rats-wg-architecture-13bfb7a95016743de858e0368d690afb2de80d2956a2ed4269b1668b42ba9068", "old_text": "Universal Time, or might be defined relative to some other timestamp or timeticks counter. We now walk through a number of hypothetical examples of how a solution might be built. This list is not intended to be complete, but is just representative enough to highlight various timing considerations. 15.1.", "comments": "only the most essential items: added three more timestamp types based on TUDA, trying to generalize it a bit added a few periods to text that seems to express sentences", "new_text": "Universal Time, or might be defined relative to some other timestamp or timeticks counter. Using the table above, a number of hypothetical examples of how a solution might be built are illustrated below. This list is not intended to be complete, but is just representative enough to highlight various timing considerations. 15.1."} {"id": "q-en-ietf-rats-wg-architecture-13bfb7a95016743de858e0368d690afb2de80d2956a2ed4269b1668b42ba9068", "old_text": "15.3. The following example illustrates a hypothetical Background-Check Model solution that uses timestamps and requires roughly synchronized clocks between the Attester, Verifier, and Relying Party. The time considerations in this example are equivalent to those discussed under Example 1 above. 15.4.", "comments": "only the most essential items: added three more timestamp types based on TUDA, trying to generalize it a bit added a few periods to text that seems to express sentences", "new_text": "15.3. The following example illustrates a hypothetical Background-Check Model solution that uses centrally generated identifiers for explicit time-keeping (referred to as \"handle\" in this example). Handles can be qualifying data, such as nonces or signed timestamps. In this example, centrally generated signed timestamps are distributed in periodic intervals as handles, and clocks are synchronized between all entities. If the Attester lacks a source of time based on an absolute timescale, a relative source of time, such as a tick counter, can be used instead. In this example, evidence generation is not triggered at value generation, but at events at which the Attesting Environment becomes aware of changes to the Target Environment.
In comparison with example 1, the time considerations in this example go into more detail with respect to the life-cycle of Claims and Evidence. While the goal is to create up-to-date and recent Evidence as soon as possible, typically there is a latency between value generation and Attester awareness. At time(AA) the Attesting Environment is able to trigger an event (e.g. based on an Event-Condition-Action model) to create attestation Evidence that is as recent as possible. In essence, at time(AA) the Attesting Environment is aware of new values that were generated at time(VG) and corresponding Claim values are collected immediately. Subsequently, Evidence based on relevant \"old\" Claims and the just collected \"new\" Claims is generated at time(EG). In essence, the Claims used to generate the Evidence are generated at various time(VG) before time(AA). In order to create attestation Evidence at time(AA), the Attester requires a fresh (i.e. not expired) centrally generated handle that has been distributed to all involved entities. In general, the duration a handle remains fresh depends on the content-type of the handle. If it is a (relative or absolute) timestamp, clocks synchronized with a shared and trustworthy source of time are required. If another value type is used as a handle, the reception time of the handle time(HD) provides an epoch (relative time of zero) for measuring the duration of validity (similar to a heart-beat timeout). From the point of view of a Verifier, validity of Evidence is only given if the handle used in Evidence satisfies delta(time(HD),time(EG)) < distribution-interval. In this usage scenario, time(VG), time(AA), and time(EG) are tightly coupled. Also, the absolute point in time at which a handle is received by all three entities is assumed to be close to identical. 15.4."} {"id": "q-en-ietf-wg-privacypass-base-drafts-baa18be73c82f98e57e95f710574324c2b6f4c880701ad5368c5d8dbdfd0e7da", "old_text": "attempted against. An empty redemption is returned when the limit has been hit: 6. We present a number of security considerations that prevent malicious", "comments": "cc NAME\nThere should be documentation that addresses the risks of the centralization in the current Privacy Pass architecture with potential solutions. This was raised in the previous WG meeting: URL This documentation should either be incorporated into the architecture document as a section when discussing the shape of the ecosystem, and/or security considerations. There is also the option of declaring a separate document for addressing this.\nA draft has been put forward for this by Mark McFadden: URL, and will be discussed at IETF 110. Will leave this open for now, until this draft (or some form of it) is accepted.", "new_text": "attempted against. An empty redemption is returned when the limit has been hit: 5.4. A consequence of limiting the number of participants (Attesters or Issuers) in Privacy Pass deployments for meaningful privacy is that it forces concentrated centralization amongst those participants. CENTRALIZATION discusses several ways in which this might be mitigated. For example, a multi-stakeholder governance model could be established to determine what participants are fit to operate as participants in a Privacy Pass deployment. This is precisely the model used to control the Web's trust model. Alternatively, Privacy Pass deployments might mitigate this problem through implementation.
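[ Editor's illustrative sketch, not part of any referenced draft: a minimal reading of the handle freshness check discussed in the excerpt above, assuming time(HD), time(EG), and the distribution interval are all expressed in seconds on a shared timescale. ]

    def evidence_is_fresh(time_hd: float, time_eg: float,
                          distribution_interval: float) -> bool:
        # Evidence must be generated after the handle is received and before
        # the next handle is distributed.
        return 0 <= (time_eg - time_hd) < distribution_interval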
For example, rather than centralize the role of attestation in one or few entities, attestation could be a distributed function performed by a quorum of many parties, provided that neither Issuers nor Origins learn which attester implementations were chosen. As a result, clients could have more opportunities to switch between attestation participants. 6. We present a number of security considerations that prevent malicious"} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "that attest to this information. The most basic Privacy Pass protocol provides a set of cross-origin authorization tokens that protect the client's anonymity during interactions with a server. This allows clients to communicate an attestation of a previously authenticated server action without having to reauthenticate manually. The tokens retain anonymity in", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "that attest to this information. The most basic Privacy Pass protocol provides a set of cross-origin authorization tokens that protect the Clients' anonymity during interactions with a server. This allows clients to communicate an attestation of a previously authenticated server action without having to reauthenticate manually. The tokens retain anonymity in"} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "properties include, though are not limited to: Capable of solving a CAPTCHA. Clients that solve CAPTCHA challenges can be attested to have this capability for the purposes of being ruled out as a bot or otherwise automated Client. Client state. Clients can be associated with state and the attester can attest to this state. Examples of state include the number of issuance protocol invocations, the client's geographic region, and whether the client has a valid application-layer account. Trusted device. Some Clients run on trusted hardware that are capable of producing device-level attestation statements. Each of these attestation types have different security properties. For example, attesting to having a valid account is different from attesting to running on trusted hardware. In general, Attesters should accept a limited form of attestation formats.", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "properties include, though are not limited to: Capable of solving a CAPTCHA. Clients that solve CAPTCHA challenges can be attested to have this capability for the purpose of being ruled out as a bot or otherwise automated Client. Client state. 
Clients can be associated with state and the attester can attest to this state. Examples of state include the number of issuance protocol invocations, the Client's geographic region, and whether the client has a valid application-layer account. Trusted device. Some Clients run on trusted hardware that are capable of producing device-level attestation statements. Each of these attestation types has different security properties. For example, attesting to having a valid account is different from attesting to running on trusted hardware. In general, Attesters should accept a limited form of attestation formats."} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "on compliant devices, then the corresponding attestation format should be untrusted until the exploit is patched. Addressing changes in attestation quality is therefore a deployment-specific task. In Split Attester and Issuer deployments, Issuers can choose to remove compromised Attesters from their trusted set until the compromise is patched, without needing to modify Origin allow-lists. 3.2.2.", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "on compliant devices, then the corresponding attestation format should be untrusted until the exploit is patched. Addressing changes in attestation quality is therefore a deployment-specific task. In Split Attester and Issuer deployments (see deploy-split), Issuers can choose to remove compromised Attesters from their trusted set until the compromise is patched, without needing to modify Origin allow- lists. 3.2.2."} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "any given time. The Issuer public key MUST be made available to all Clients in such a way that key rotations and other updates are publicly visible. The key material and protocol configuration that an Issuer uses to produce tokens corresponds to a number of different pieces of information. The issuance protocol in use; and", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "any given time. The Issuer public key MUST be made available to all Clients in such a way that key rotations and other updates are publicly visible. The key material and protocol configuration that an Issuer uses to produce tokens corresponds to two different pieces of information. The issuance protocol in use; and"} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "public metadata is metadata the client can see but cannot check for correctness. 
As an example, the opaque public metadata might be a \"fraud detection signal\", computed on behalf of the Issuer, during token issuance. In normal circumstances, clients cannot determine if this value is correct or otherwise a tracking vector. Private metadata is that which clients cannot observe as part of the token issuance flow. Such instantiations may be built on the Private Metadata Bit construction from Kreuter et al. KLOR20 or the attribute-based VOPRF from Huang et al. HIJK21.", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "public metadata is metadata the client can see but cannot check for correctness. As an example, the opaque public metadata might be a \"fraud detection signal\", computed on behalf of the Issuer, during token issuance. In normal circumstances, Clients cannot determine if this value is correct or otherwise a tracking vector. Private metadata is that which Clients cannot observe as part of the token issuance flow. Such instantiations may be built on the Private Metadata Bit construction from Kreuter et al. KLOR20 or the attribute-based VOPRF from Huang et al. HIJK21."} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "different forms. For example, any Client can only remain private relative to the entire space of other Clients using the protocol. Moreover, by owning tokens for a given set of keys, the Client's anonymity set shrinks to the total number of clients controlling tokens for the same keys. In the following, we consider the possible ways that Issuers can", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "different forms. For example, any Client can only remain private relative to the entire space of other Clients using the protocol. Moreover, by owning tokens for a given set of keys, the Client's anonymity set shrinks to the total number of Clients controlling tokens for the same keys. In the following, we consider the possible ways that Issuers can"} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "Client anonymity. Such techniques are closely linked to the type of key schedule that is used by the Issuer. When an Issuer rotates their key, any Client that invokes the issuance protocol in this key cycle will be part of a group of possible clients owning valid tokens for this key. To mechanize this attack strategy, an Issuer could introduce a key rotation policy that forces Clients into small key cycles, reducing the size of the anonymity set for these Clients.", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. 
:smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "Client anonymity. Such techniques are closely linked to the type of key schedule that is used by the Issuer. When an Issuer rotates their key, any Client that invokes the issuance protocol in this key cycle will be part of a group of possible Clients owning valid tokens for this key. To mechanize this attack strategy, an Issuer could introduce a key rotation policy that forces Clients into small key cycles, reducing the size of the anonymity set for these Clients."} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "partition, the Client limits the number of different Issuers used to a small number to maintain the privacy properties the Client requires. As long as each redemption partition maintains a strong privacy boundary with each other, the verifier will only be able to learn a number of bits of information up to the limits within that \"redemption partition\". To support this strategy, the client keeps track of a \"partition\" which contains the set of Issuers that redemptions have been", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "partition, the Client limits the number of different Issuers used to a small number to maintain the privacy properties the Client requires. As long as each redemption partition maintains a strong privacy boundary with the others, the number of bits of information the verifier can learn is bounded by the number of \"redemption partitions\". To support this strategy, the client keeps track of a \"partition\" which contains the set of Issuers that redemptions have been"} {"id": "q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324", "old_text": "6. We present a number of security considerations that prevent malicious Clients from abusing the protocol. 6.1. When a Client holds tokens for an Issuer, it is possible for any verifier to cause that client to redeem tokens for that Issuer. This can lead to an attack where a malicious verifier can force a Client to spend all of their tokens from a given Issuer. To prevent this from happening, tokens can be scoped to single Origins such that they can only be redeemed within for a single Origin. If tokens are cross-Origin, Clients should use alternate methods to prevent many tokens from being redeemed at once. For example, if the", "comments": "Supercede's .\nSection 6 states: But there's just one subsection. 
:smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing.", "new_text": "6. Beyond the aforementioned security goals for the Issuance protocol (issuance-protocol), it is important for Privacy Pass deployments to mitigate the risk of abuse by malicious Clients. When a Client holds tokens for an Issuer, it is possible for any verifier to cause that client to redeem tokens for that Issuer. This can lead to an attack where a malicious verifier can force a Client to spend all of their tokens from a given Issuer. To prevent this from happening, tokens can be scoped to single Origins such that they can only be redeemed for a single Origin. If tokens are cross-Origin, Clients should use alternate methods to prevent many tokens from being redeemed at once. For example, if the"} {"id": "q-en-ietf-wg-privacypass-base-drafts-505a90afd2bd32e739ec770c5cf87b1ac488b303912f015ff2f215556b5d89ad", "old_text": "directly communicating with an Issuer. Depending on the attestation, Attesters can store state about a Client, such as the number of overall tokens issued thus far. As an example of an issuance protocol, in the original Privacy Pass protocol PPSRV, tokens were only issued to Clients that solved CAPTCHAs. In this context, the Attester attested that some client solved a CAPTCHA and the resulting token produced by the Issuer was proof of this fact.", "comments": "[x] 1. The Reference PPEXT is not cited in the document. seems it could be sited in the into PR [x] 2. I think it would be good to explicitly say that the HTTP authentication protocol is targeted at redemption and the issuer protocol is targeted at issuance. PR [ ] 3. Last paragraph in section 3.2.1 mentions Origin Allow Lists. These allow lists are not specifically mentioned earlier in the document. Can you clarify this. [ ] 4. In section 4.3 and 4.4 is the attester involved in the TokeRequest and TokenResponse messages? Is the attester proxying the communication? It would be good to clarify this in the text. [ ] 5. Section 6 has an incomplete reference. [x] 6. It seems that the Reference to RFC 8446 is not needed. PR [ ] 7. The link - URL does not work perhaps this is the updated one? - URL", "new_text": "directly communicating with an Issuer. Depending on the attestation, Attesters can store state about a Client, such as the number of overall tokens issued thus far. As an example of an issuance protocol, in the original Privacy Pass protocol PPEXT, tokens were only issued to Clients that solved CAPTCHAs. In this context, the Attester attested that some client solved a CAPTCHA and the resulting token produced by the Issuer was proof of this fact."} {"id": "q-en-ietf-wg-privacypass-base-drafts-73341c9cc4c21228aad4e8143494aa598b0833fe4b8265625e3138e65239326c", "old_text": "valid token available, it presents the token to the origin (redemption). 2.1. Origins send a token challenge to clients in an \"WWW-Authenticate\"", "comments": "it's not crystal-clear what the interaction model for this auth scheme is. In many auth schemes, the credential is repeated on each request, to keep the protocol stateless, and to prove the client's identity.
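[ Editor's illustrative sketch, not part of any referenced draft: a client-side view of the Origin-scoping mitigation described above, in which tokens are stored per Origin and at most one token is released per challenge, so a malicious verifier cannot drain tokens intended for other Origins. Data structures are invented for illustration. ]

    from collections import defaultdict

    class TokenStore:
        def __init__(self):
            self._tokens = defaultdict(list)  # origin -> unspent tokens

        def add(self, origin: str, token: bytes) -> None:
            self._tokens[origin].append(token)

        def redeem_one(self, challenge_origin: str):
            # Tokens scoped to one Origin are never presented to another,
            # and only a single token is spent per challenge.
            bucket = self._tokens[challenge_origin]
            return bucket.pop() if bucket else None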
However, that doesn't seem to be desireable here -- not only would it make all of those responses uncacheable, it would (AIUI) 'spend' a lot of private tokens. OTOH, the 'realm' parameter is explicitly supported (albiet without much detail), which implies the opposite. I think the preferred interaction model is that a token is only spent upon an explicit challenge from the server, but it was difficult to discern this from the documentation. Either way, it'd be good to clarify this.\nCorrect, the client shouldn't spend a token multiple times, and shouldn't keep including the credential on future requests. We could drop the realm text, but I think allowing it as a MAY does leave open use cases that might want to segment up different use cases of tokens.\nYeah, unless there's a reason to omitting it, I think keeping the MAY for the realm is fine. I tried to get a sense for how clients are intended to use other auth schemes from RFC 9110, but it wasn't clear to me. NAME do you know of other auth scheme specifications that make the interaction model more clear?", "new_text": "valid token available, it presents the token to the origin (redemption). Unlike many authentication schemes in which a client will present the same credentials across multiple requests, tokens used with the \"PrivateToken\" scheme are single-use credentials, and are not reused. Spending the same token value more than once allows the origin to link multiple transactions to the same client. In deployment scenarios where origins send token challenges to request tokens, origins ought to expect at most one request containing a token from the client in reaction to a particular challenge. 2.1. Origins send a token challenge to clients in an \"WWW-Authenticate\""} {"id": "q-en-ietf-wg-privacypass-base-drafts-afd1154174b1f4662c37f73fbd4e248ed5fba500c65ef20f53eb7daadb01cc66", "old_text": "The Privacy Pass protocol provides a privacy-preserving authorization mechanism. In essence, the protocol allows clients to provide cryptographic tokens that prove nothing other than that they have been created by a given server in the past I-D.ietf-privacypass- architecture. This document describes the issuance protocol for Privacy Pass. It specifies two variants: one that is privately verifiable based on the oblivious pseudorandom function from OPRF, and one that is publicly verifiable based on the blind RSA signature scheme BLINDRSA. This document DOES NOT cover the architectural framework required for running and maintaining the Privacy Pass protocol in the Internet setting. In addition, it DOES NOT cover the choices that are necessary for ensuring that client privacy leaks do not occur. Both of these considerations are covered in I-D.ietf-privacypass- architecture. 2.", "comments": "There are two cases where the term seems to be used with importance. I suspect that the Internet isn't actually that important in the context / scope of this protocol document, especially since the architecture document doesn't mention it.\nYep, good catch. It's not.\nIs there a more polite way to phrase this disclaimer?\nThis follows RFC 2119 (URL) and RFC 8174 linked from terminology where these are reserved phrases with specific meaning in IETF specifications and documents. Though technically these should probably be \"SHOULD NOT\" and \"SHALL NOT\" to be in line with the RFC terms.\nWhat requirements are being defined in the quoted text though? 
Is it simply a statement of fact like \"this document doesn't cover architectural stuff\" or is it \"implementations of the issuance protocol SHOULD account for considerations related to operations and privacy described in the architecture document\"?\nNAME this is just a statement of fact -- we don't need to shout it =)", "new_text": "The Privacy Pass protocol provides a privacy-preserving authorization mechanism. In essence, the protocol allows clients to provide cryptographic tokens that prove nothing other than that they have been created by a given server in the past ARCHITECTURE. This document describes the issuance protocol for Privacy Pass. It specifies two variants: one that is privately verifiable based on the oblivious pseudorandom function from OPRF, and one that is publicly verifiable based on the blind RSA signature scheme BLINDRSA. This document does not cover the Privacy Pass architecture, including choices that are necessary for ensuring that client privacy leaks do not occur. This information is covered in ARCHITECTURE. 2."} {"id": "q-en-ietf-wg-privacypass-base-drafts-afd1154174b1f4662c37f73fbd4e248ed5fba500c65ef20f53eb7daadb01cc66", "old_text": "blind RSA documents also apply in the Privacy Pass use-case. Considerations related to broader privacy and security concerns in a multi-Client and multi-Issuer setting are deferred to the Architecture document I-D.ietf-privacypass-architecture. Beyond these considerations, it is worth highlighting the fact that Client TokenRequest messages contain truncated token key IDs. This", "comments": "There are two cases where the term seems to be used with importance. I suspect that the Internet isn't actually that important in the context / scope of this protocol document, especially since the architecture document doesn't mention it.\nYep, good catch. It's not.\nIs there a more polite way to phrase this disclaimer?\nThis follows RFC 2119 (URL) and RFC 8174 linked from terminology where these are reserved phrases with specific meaning in IETF specifications and documents. Though technically these should probably be \"SHOULD NOT\" and \"SHALL NOT\" to be in line with the RFC terms.\nWhat requirements are being defined in the quoted text though? Is it simply a statement of fact like \"this document doesn't cover architectural stuff\" or is it \"implementations of the issuance protocol SHOULD account for considerations related to operations and privacy described in the architecture document\"?\nNAME this is just a statement of fact -- we don't need to shout it =)", "new_text": "blind RSA documents also apply in the Privacy Pass use-case. Considerations related to broader privacy and security concerns in a multi-Client and multi-Issuer setting are deferred to the Architecture document ARCHITECTURE. Beyond these considerations, it is worth highlighting the fact that Client TokenRequest messages contain truncated token key IDs. This"} {"id": "q-en-ietf-wg-privacypass-base-drafts-ea646b7d6cba8a05fc7b2e836a83fe3f140a7aa75a6d849e1dc35f2ced5f917c", "old_text": "initially issued. At a high level, the Privacy Pass architecture consists of two protocols: issuance and redemption. The issuance protocol runs between an endpoint referred to as a Client and two functions in the Privacy Pass architecture: Attestation and Issuance. These two network functions can be implemented by the same protocol participant, but can also be implemented separately.
The entity that implements Issuance, referred to as the Issuer, is responsible for issuing tokens in response to requests from Clients. The entity that implements Attestation, referred to as the Attester, is responsible for attesting to properties about the Client for which tokens are issued. The Issuer needs to be trusted by the server that later redeems the token. Attestation can be performed by the Issuer or by an Attester that is trusted by the Issuer. Clients might prefer to select different Attesters, separate from the Issuer, to be able to use preferred authentication methods or to improve privacy by not directly communicating with an Issuer. Depending on the attestation, Attesters can store state about a Client, such as the number of overall tokens issued thus far. As an example of an issuance protocol, in the original Privacy Pass protocol PPEXT, tokens were only issued to Clients that solved CAPTCHAs. In this context, the Attester attested that some client solved a CAPTCHA and the resulting token produced by the Issuer was proof of this fact. The redemption protocol runs between Client and Origin (server). It allows Origins to challenge Clients to present one or more tokens for authorization. Depending on the type of token, e.g., whether or not it can be cached, the Client either presents a previously obtained token or invokes the issuance protocol to acquire one for authorization. The issuance and redemption protocols operate in concert as shown in the figure below. This document describes requirements for both issuance and redemption protocols. It also provides recommendations on how the architecture should be deployed to ensure the privacy of clients and the security of all participating entities. The privacypass working group is working on AUTHSCHEME as an instantiation of a redemption protocol and ISSUANCE as an instantiation of the issuance protocol. 2.", "comments": "In this image: ! It isn't very clear where messages originate and who gets to see them. In particular, it isn't clear where the TokenRequest originates (the Client) and whether the Attester gets to see it (\u00af(\u00b0_o)/\u00af).\nThat's a fair point. It should be clear that the TokenRequest comes from the Attester. I wonder if we also somehow need to illustrate that there's a trust relationship between the Attester and Issuer here. Something to at least imply that the Attester is either implicitly trusted to do attestation, or, perhaps in future versions of Privacy Pass, attestation proofs are sent directly to the Issuer.\nMove diagram to section 3 (architecture). Also move much of the second intro paragraph down. Add more text about the actual steps to section 3, such as a numbered list\nIn here, introduce the basic privacy premise for how the redemption and issuance contexts are separated\nThe architecture references [AUTHSCHEME] for key concepts far more than I expected. For instance: Or: Can the architecture define the messages in their own right, then point to [AUTHSCHEME] for example implementations?\nIs the suggestion here to describe -- conceptually -- what Token and TokenChallenges are in the architecture document? If so, that seems quite reasonable.\nYeah. An architecture is most useful if you don't have to dig into the details to understand it and those references are somewhat critical to comprehension.", "new_text": "initially issued. At a high level, the Privacy Pass architecture consists of two protocols: issuance and redemption. 
The redemption protocol AUTHSCHEME runs between Client and Origin (server). It allows Origins to challenge Clients to present one or more tokens for authorization. Depending on the type of token, e.g., whether or not it can be cached, the Client either presents a previously obtained token or invokes an issuance protocol, such as ISSUANCE, to acquire a token to present as authorization. This document describes requirements for both issuance and redemption protocols and how they interact. It also provides recommendations on how the architecture should be deployed to ensure the privacy of clients and the security of all participating entities. 2."} {"id": "q-en-ietf-wg-privacypass-base-drafts-ea646b7d6cba8a05fc7b2e836a83fe3f140a7aa75a6d849e1dc35f2ced5f917c", "old_text": "Attester: An entity that attests to properties of Client for the purposes of token issuance. 3. The Privacy Pass architecture consists of four logical entities - Client, Origin, Issuer, and Attester - that work in concert as shown in introduction for token issuance and redemption. This section describes the purpose of token issuance and redemption and the requirements therein on the relevant participants. 3.1.", "comments": "In this image: ! It isn't very clear where messages originate and who gets to see them. In particular, it isn't clear where the TokenRequest originates (the Client) and whether the Attester gets to see it (\u00af(\u00b0_o)/\u00af).\nThat's a fair point. It should be clear that the TokenRequest comes from the Attester. I wonder if we also somehow need to illustrate that there's a trust relationship between the Attester and Issuer here. Something to at least imply that the Attester is either implicitly trusted to do attestation, or, perhaps in future versions of Privacy Pass, attestation proofs are sent directly to the Issuer.\nMove diagram to section 3 (architecture). Also move much of the second intro paragraph down. Add more text about the actual steps to section 3, such as a numbered list\nIn here, introduce the basic privacy premise for how the redemption and issuance contexts are separated\nThe architecture references [AUTHSCHEME] for key concepts far more than I expected. For instance: Or: Can the architecture define the messages in their own right, then point to [AUTHSCHEME] for example implementations?\nIs the suggestion here to describe -- conceptually -- what Token and TokenChallenges are in the architecture document? If so, that seems quite reasonable.\nYeah. An architecture is most useful if you don't have to dig into the details to understand it and those references are somewhat critical to comprehension.", "new_text": "Attester: An entity that attests to properties of Client for the purposes of token issuance. Redemption context: The interactions and set of information shared between the Client and Origin. Issuance context: The interactions and set of information shared between the Client, Attester, and Issuer. Attestation context: The interactions and set of information shared between the Client and Attester only, for the purposes of attesting the vailidity of the Client. 3. The Privacy Pass architecture consists of four logical entities - Client, Origin, Issuer, and Attester - that work in concert as for token issuance and redemption. This section describes the purpose of token issuance and redemption and the requirements on the relevant participants. The typical interaction flow for Privacy Pass tokens uses the following steps: A Client interacts with an Origin by sending an HTTP request. 
The Origin sends an HTTP response that contains a token challenge that indicates a specific Issuer to use. Note that the request might be made as part of accessing a resource normally, or with the specific intent of triggering a token challenge. If the Client already has a token available that satisfies the token challenge, it can skip to step 6 and redeem its token. Otherwise, it invokes the issuance protocol to request a token from the designated Issuer. The first step in the issuance protocol is attestation. Specifically, the Attester performs attestation checks on the Client. These checks could be proof of solving a CAPTCHA, device trust, hardware attestation, etc (see attester). If attestation succeeds, the client creates a Token Request to send to the designated Issuer (generally via the Attester). The Attester and Issuer might be functions on the same server, depending on the deployment model (see deployment). Depending on the details of Attestation, the Client can send the Token Request to the Attester alongside any attestation information. If attestation fails, the Client receives an error and issuance aborts without a token. The Issuer generates a Token Response based on the Token Request, which is returned to the Client (generally via the Attester). Upon receiving the Token Response, the Client computes a token from the token challenge and Token Response. This token can be validated by anyone with the per-Issuer key, but cannot be linked to the content of the Token Request or Token Response. If the Client has a token, it includes it in a subsequent HTTP request to the Origin, as authorization. This token is sent only once. The Origin validates that the token was generated by the expected Issuer and has not already been redeemed for the corresponding token challenge. If the Client does not have a token, perhaps because issuance failed, the client does not reply to the Origin's challenge with a new request. 3.1."} {"id": "q-en-ietf-wg-privacypass-base-drafts-ea646b7d6cba8a05fc7b2e836a83fe3f140a7aa75a6d849e1dc35f2ced5f917c", "old_text": "identifiers or IP address information, to the Issuer. Tokens produced by an Issuer that admits issuance for any type of attestation cannot be relied on for any specific property. See attester-role for more details. 3.2.1.", "comments": "In this image: ! It isn't very clear where messages originate and who gets to see them. In particular, it isn't clear where the TokenRequest originates (the Client) and whether the Attester gets to see it (\u00af(\u00b0_o)/\u00af).\nThat's a fair point. It should be clear that the TokenRequest comes from the Attester. I wonder if we also somehow need to illustrate that there's a trust relationship between the Attester and Issuer here. Something to at least imply that the Attester is either implicitly trusted to do attestation, or, perhaps in future versions of Privacy Pass, attestation proofs are sent directly to the Issuer.\nMove diagram to section 3 (architecture). Also move much of the second intro paragraph down. Add more text about the actual steps to section 3, such as a numbered list\nIn here, introduce the basic privacy premise for how the redemption and issuance contexts are separated\nThe architecture references [AUTHSCHEME] for key concepts far more than I expected. 
For instance: Or: Can the architecture define the messages in their own right, then point to [AUTHSCHEME] for example implementations?\nIs the suggestion here to describe -- conceptually -- what Token and TokenChallenges are in the architecture document? If so, that seems quite reasonable.\nYeah. An architecture is most useful if you don't have to dig into the details to understand it and those references are somewhat critical to comprehension.", "new_text": "identifiers or IP address information, to the Issuer. Tokens produced by an Issuer that admits issuance for any type of attestation cannot be relied on for any specific property. See attester for more details. 3.2.1."} {"id": "q-en-ietf-wg-privacypass-base-drafts-6ba56cdaafec308d4428c327bdc651e9ed80c8cfd87721362e23b6ddea2929a2", "old_text": "timestamp of the event, Client visible information (including the IP address), and the Origin name. The challenge controls the type of token that the Origin will accept for the given resource. As described in AUTHSCHEME, there are a number of ways in which the token may vary, including: Issuance protocol. The token identifies the type of issuance protocol required for producing the token. Different issuance", "comments": "Is really true? A challenge is just a declaration from the Origin about what the Client should offer. It exerts no control. I would have thought that the arrangement is thus: The Origin determines what sorts of token are acceptable, including whether a token is necessary. This might incorporate any contextual information and so could change between different Clients or over time. A valid token includes information that an Origin can use to validate it against its policies. This includes all the stuff in the list that follows the quoted text (issuance protocol, issuer, redemption context, origin). Importantly, it does not carry information that directly identifies the Client to any entity other than the Origin; this being the primary privacy benefit to Clients. If the Origin determines that a token is necessary it can issue a challenge to the Client. This challenge might imply that a token that meets these constraints would be acceptable, but see the first point (i.e., the Origin can change its mind). There should be enough information in the challenge for the Client to obtain a token under reasonable assumptions (the Client understands the request and meets the requirements for token issuance). Clients can pre-emptively request token issuance. This might be based on a previous indication that this might work. (I see some caching in the trust tokens API.)\nI'm probably missing the confusion that led to this issue. The arrangement you describe is true, and in that arrangement the only tokens that will be accepted for an Origin are those which verify with a valid challenge. That means the challenge determines how the client produces a valid token. For example, the challenge indicates the token type and therefore the issuance protocol required to produce a token. It also indicates a redemption context and therefore whether or not clients can preemptively request and cache tokens to amortize the cost of issuance. I don't think a challenge is just a declaration of what the Client should offer. To me, challenges fundamentally determine the set of possible tokens that clients can produce, and that implies some level of control. If \"control\" is somehow problematic or confusing, perhaps we just phrase this differently. 
That might be \"The challenge determines the set of tokens the Origin will accept for the given resource. This is because tokens cryptographically bind the challenge to a client-chosen nonce.\"\nAs far as I can tell, yes, challenges do constrain what a client can produce in that in order to produce some tokens you need a challenge. However, the text here might be read to imply that it constrains the server in terms of what it might accept in the future. So yeah, using \"control\" is problematic as it too directly implies a link between the signal and the server actions that follow. I would have said that maybe \"A challenge provides the client with the information necessary to obtain tokens that the server might subsequently accept (in this context).\"\nThat works for me! I'll send a PR.", "new_text": "timestamp of the event, Client visible information (including the IP address), and the Origin name. The challenge provides the client with the information necessary to obtain tokens that the server might subsequently accept (in this context). As described in AUTHSCHEME, there are a number of ways in which the token may vary, including: Issuance protocol. The token identifies the type of issuance protocol required for producing the token. Different issuance"} {"id": "q-en-ietf-wg-privacypass-base-drafts-167fd56233a705a3a1d319b7a4737cb84b21d9ecd370f95bb78e5215f9d2adb7", "old_text": "3.2.1. Attestation is an important part of the issuance protocol. Attestation is the process by which an Attester bears witness to, confirms, or authenticates a Client so as to verify a property about the Client that is required for Issuance. Examples of attestation properties include, though are not limited to: Capable of solving a CAPTCHA. Clients that solve CAPTCHA challenges can be attested to have this capability for the purpose of being ruled out as a bot or otherwise automated Client. Client state. Clients can be associated with state and the attester can attest to this state. Examples of state include the number of issuance protocol invocations, the Client's geographic region, and whether the client has a valid application-layer account. Trusted device. Some Clients run on trusted hardware that are capable of producing device-level attestation statements. Each of these attestation types has different security properties. For example, attesting to having a valid account is different from attesting to running on trusted hardware. In general, minimizing the set of attestation formats helps minimize the amount of information leaked through a token. Each attestation format also has an impact on the overall system privacy. Requiring a conjunction of attestation types could decrease", "comments": "After talking with NAME and NAME it seems like we could benefit from more clarity around attestation. In particular, we could improve clarity by providing a mapping between the terminology used in the architecture document and the terminology used in the RATS working group (which already does attestation stuff). Moreover, we could provide more examples of how attestation might actually work in practice. Currently, the diagrams just have this opaque \"Attest\" message that's sent in the issuance protocol, but expanding on that with details and an example or two could help align everyone's mental model.", "new_text": "3.2.1. Attestation is an important part of the issuance protocol. 
In Privacy Pass, attestation is the process by which an Attester bears witness to, confirms, or authenticates a Client so as to verify a property about the Client that is required for Issuance. RFC9334 describes an architecture for attestation procedures. Using that architecture as a conceptual basis, Clients are RATS attesters that produce attestation evidence, and Attesters are RATS verifiers that appraise the validity of attestation evidence. The type of attestation procedure is a deployment-specific option and outside the scope of the issuance protocol. Example attestation procedures are below. Solving a CAPTCHA. Clients that solve CAPTCHA challenges can be attested to have this capability for the purpose of being ruled out as a bot or otherwise automated Client. Presenting evidence of Client device validity. Some Clients run on trusted hardware that are capable of producing device-level attestation evidence. Proving properties about Client state. Clients can be associated with state and the Attester can verify this state. Examples of state include the Client's geographic region and whether the Client has a valid application-layer account. Attesters may support different types of attestation procedures. A type of attestation procedure is also referred to as an attestation format. In general, each attestation format has different security properties. For example, attesting to having a valid account is different from attesting to running on trusted hardware. In general, minimizing the set of attestation formats helps minimize the amount of information leaked through a token. Each attestation format also has an impact on the overall system privacy. Requiring a conjunction of attestation types could decrease"} {"id": "q-en-ietf-wg-privacypass-base-drafts-099f3e231b502c3a28acf97a60f5e76a11912d31d842d9554193b516b1d58db3", "old_text": "If the Client already has a token available that satisfies the token challenge, e.g., because the Client has a cache of previously issued tokens, it can skip to step-redemption and redeem its token. Otherwise, it invokes the issuance protocol to request a token from the designated Issuer. The first step in the issuance protocol is attestation. Specifically, the Attester performs attestation checks on the Client. These checks could be proof of solving a CAPTCHA, device trust, hardware attestation, etc (see attester). If attestation succeeds, the client creates a Token Request to send to the designated Issuer (generally via the Attester). The Attester and Issuer might be functions on the same server, depending on the deployment model (see deployment). Depending on the details of Attestation, the Client can send the Token Request to the Attester alongside any attestation information. If attestation fails, the Client receives an error and issuance aborts without a token. The Issuer generates a Token Response based on the Token Request, which is returned to the Client (generally via the Attester).", "comments": "The issuance overview step 3 says that the Client and Attester interact to do whatever is necessary to convince the Attester to attest. But there doesn't seem to be any pretext for this process to be initiated. Step 4 says that the client requests issuance (via the Attester), which would provide the necessary cause to start the attesting process. Should these steps be swapped?\nEither the attester is an intermediary to the flow (which is what we're doing in practice), or it is a separate flow. 
So the prerequisite is that the client knows it wants to request a token from an issuer, and then it does attestation.\nPerhaps this could be made more clear if we say that the client, upon deciding it wants to invoke the issuance protocol, then begins the deployment-specific attestation process with its trusted attester? I can suggest some text for this.", "new_text": "If the Client already has a token available that satisfies the token challenge, e.g., because the Client has a cache of previously issued tokens, it can skip to step-redemption and redeem its token. If the Client does not have a token available and decides it wants to obtain one (or more) bound to the token challenge, it then invokes the issuance protocol. As a prerequisite to the issuance protocol, the Client runs the deployment specific attestation process that is required for the designated Issuer. Client attestation can be done via proof of solving a CAPTCHA, checking device or hardware attestation validity, etc; see attester for more details. If the attestation process completes successfully, the client creates a Token Request to send to the designated Issuer (generally via the Attester). The Attester and Issuer might be functions on the same server, depending on the deployment model (see deployment). Depending on the attestation process, it is possible for attestation to run alongside the issuance protocol, e.g., where Clients send necessary attestation information to the Attester along with their Token Request. If the attestation process fails, the Client receives an error and issuance aborts without a token. The Issuer generates a Token Response based on the Token Request, which is returned to the Client (generally via the Attester)."} {"id": "q-en-ietf-wg-privacypass-base-drafts-018e45d4707b11774a4aaaec3168015a070b9b50ad99025e2946a6e7ac56cdf5", "old_text": "Issuer-Client unlinkability. This is similar to Origin-Client unlinkability in that a Client in an issuance context is indistinguishable from any other Client that might use the same issuance context. The set of Clients that share the same redemption context is referred to as a redemption anonymity set. Attester-Origin unlinkability. This is similar to Origin-Client and Issuer-Client unlinkability. It means that given two attestation contexts, the Attester cannot determine if both contexts correspond to the same Origin or two different Origins. The set of Clients that share the same attestation context is referred to as an anonymity set. By ensuring that different contexts cannot be linked in this way, only the Client is able to correlate information that might be used", "comments": "I think that this definition is incorrect. The latest sentence is from the definition of Origin-Client unlinkability:\ngood catch. Copy-pasta error.", "new_text": "Issuer-Client unlinkability. This is similar to Origin-Client unlinkability in that a Client in an issuance context is indistinguishable from any other Client that might use the same issuance context. The set of Clients that share the same issuance context is referred to as an issuance anonymity set. Attester-Origin unlinkability. This is similar to Origin-Client and Issuer-Client unlinkability. It means that given two attestation contexts, the Attester cannot determine if both contexts correspond to the same Origin or two different Origins. The set of Clients that share the same attestation context is referred to as an attestation anonymity set. 
By ensuring that different contexts cannot be linked in this way, only the Client is able to correlate information that might be used"} {"id": "q-en-ietf-wg-privacypass-base-drafts-1e171af33799e07c18cb62790841d2b1111934edd3c0be468a2468c9c3ac4711", "old_text": "If the Client already has a token available that satisfies the token challenge, e.g., because the Client has a cache of previously issued tokens, it can skip to step-redemption and redeem its token. If the Client does not have a token available and decides it wants to obtain one (or more) bound to the token challenge, it then", "comments": "NAME good suggestions! I merged them all.\nThis system extensively uses tokens that are only loosely bound to usage contexts. Possession of a token is pretty much all that is necessary. That means that tokens might be stolen or hoarded and then replayed. The architecture draft says nothing about this as a risk, but it probably should.\nAgree, especially for cached tokens. I thought we had mentioned this somewhere, but if not, let's make sure it's included.", "new_text": "If the Client already has a token available that satisfies the token challenge, e.g., because the Client has a cache of previously issued tokens, it can skip to step-redemption and redeem its token; see hoarding for security considerations of cached tokens. If the Client does not have a token available and decides it wants to obtain one (or more) bound to the token challenge, it then"} {"id": "q-en-ietf-wg-privacypass-base-drafts-1e171af33799e07c18cb62790841d2b1111934edd3c0be468a2468c9c3ac4711", "old_text": "Origins. Implementing mitigations discused in deployment and privacy is therefore necessary to ensure that Privacy Pass offers meaningful privacy improvements to end-users. ", "comments": "NAME good suggestions! I merged them all.\nThis system extensively uses tokens that are only loosely bound to usage contexts. Possession of a token is pretty much all that is necessary. That means that tokens might be stolen or hoarded and then replayed. The architecture draft says nothing about this as a risk, but it probably should.\nAgree, especially for cached tokens. I thought we had mentioned this somewhere, but if not, let's make sure it's included.", "new_text": "Origins. Implementing mitigations discussed in deployment and privacy is therefore necessary to ensure that Privacy Pass offers meaningful privacy improvements to end-users. 7.1. Depending on the Origin's token challenge, Clients can request and cache more than one token using an issuance protocol. Cached tokens help improve privacy by separating the time of token issuance from the time of token redemption, and also allow Clients to reduce the overhead of receiving new tokens via the issuance protocol. As a consequence, Origins that send token challenges which are compatible with cached tokens need to take precautions to ensure that tokens are not replayed. This is typically done via keeping track of tokens that are redeemed for the period of time in which cached tokens would be accepted for particular challenges. Moreover, since tokens are not intrinsically bound to Clients, it is possible for malicious Clients to collude and share tokens in a so- called \"hoarding attack.\" As an example of this attack, many distributed Clients could obtain cacheable tokens and then share them with a single Client to redeem in a way that would violate an Origin's attempt to limit tokens to any one particular Client.
Depending on the deployment model, it can be possible to detect these types of attacks by comparing issuance and redemption contexts; for example, this is possible in the Joint Origin and Issuer model. "} {"id": "q-en-ietf-wg-privacypass-base-drafts-30e03405be5111bcdff2347a951894821ce07ad2bb630b3d83e8d1d701a40a8e", "old_text": "3.4.1. Attestation is an important part of the issuance protocol. In Privacy Pass, attestation is the process by which an Attester bears witness to, confirms, or authenticates a Client so as to verify a property about the Client that is required for Issuance. Clients explicitly trust Attesters to perform attestation correctly and in a way that does not violate their privacy. RFC9334 describes an architecture for attestation procedures. Using that architecture as a conceptual basis, Clients are RATS attesters", "comments": "In a successful issuance exchange, the attester provides the issuer with information about the client. The text in Section 3.2.1 points out that the way in which information about the client is combined could reduce anonymity set size in ways that affect privacy. Given that we have a chain of custody, I imagine that the privacy property here is about what the issuer learns about a client. More information means more information that might then propagate from the issue onward to the origin. This means that the attester needs to limit what it passes along to only that which is relevant. The client needs to trust the attester to do this. ... or does it? The text in Section 3.2.1 implies strongly that this anonymity set size is important, but the constraints on the client-issuer interaction, especially consistency rules, very much limits what the issuer can pass along to the origin. So maybe the system depends less on trusting the attester and more on the combination of the cryptographic properties of the token and the consistency system that mitigates against misuse. None of this appears in the privacy considerations. Should it?\nOn this point: \"Clients do not explicitly trust Issuers.\" (Section 3.2.2) This would invalidate much of the discussion of anonymity sets in attestations. This is quite confusing. The client doesn't need to trust the issuer, because the issuer is unable to pass information on to others. However, the client cannot trust the issuer with information, so it has to trust the attester not to pass stuff along AND it has to use an anonymizing proxy to interact with the issuer so that nothing leaks. This is quite contradictory overall.\nThere are issuance protocols wherein the issuer learns information about the origin by design, like the . In this variant, if the attester were to pass information about the client on to the issuer, then the issuer would be able to reconstruct the (client, origin) pair, breaking the unlinkability goals. I agree that in the case where the issuer learns nothing about the origin in the protocol then reliance on the attester is lessened, but we need to make sure we accommodate future issuance protocols here. I'll think about ways in which we can try to make this more clear. This might go into the privacy considerations, as suggested.", "new_text": "3.4.1. In Privacy Pass, attestation is the process by which an Attester bears witness to, confirms, or authenticates a Client so as to verify properties about the Client that are required for Issuance. Issuers trust Attesters to perform attestation correctly. RFC9334 describes an architecture for attestation procedures. 
Using that architecture as a conceptual basis, Clients are RATS attesters"} {"id": "q-en-ietf-wg-privacypass-base-drafts-30e03405be5111bcdff2347a951894821ce07ad2bb630b3d83e8d1d701a40a8e", "old_text": "minimizing the set of supported attestation procedures helps minimize the amount of information leaked through a token. Each attestation procedure also has an impact on the overall system privacy. Requiring a conjunction of attestation types could decrease the overall anonymity set size. For example, the number of Clients that have solved a CAPTCHA in the past day, that have a valid account, and that are running on a trusted device is less than the number of Clients that have solved a CAPTCHA in the past day. Attesters SHOULD not admit attestation types that result in small anonymity sets. Issuers trust Attesters to correctly and reliably perform attestation. However, certain types of attestation can vary in value", "comments": "In a successful issuance exchange, the attester provides the issuer with information about the client. The text in Section 3.2.1 points out that the way in which information about the client is combined could reduce anonymity set size in ways that affect privacy. Given that we have a chain of custody, I imagine that the privacy property here is about what the issuer learns about a client. More information means more information that might then propagate from the issue onward to the origin. This means that the attester needs to limit what it passes along to only that which is relevant. The client needs to trust the attester to do this. ... or does it? The text in Section 3.2.1 implies strongly that this anonymity set size is important, but the constraints on the client-issuer interaction, especially consistency rules, very much limits what the issuer can pass along to the origin. So maybe the system depends less on trusting the attester and more on the combination of the cryptographic properties of the token and the consistency system that mitigates against misuse. None of this appears in the privacy considerations. Should it?\nOn this point: \"Clients do not explicitly trust Issuers.\" (Section 3.2.2) This would invalidate much of the discussion of anonymity sets in attestations. This is quite confusing. The client doesn't need to trust the issuer, because the issuer is unable to pass information on to others. However, the client cannot trust the issuer with information, so it has to trust the attester not to pass stuff along AND it has to use an anonymizing proxy to interact with the issuer so that nothing leaks. This is quite contradictory overall.\nThere are issuance protocols wherein the issuer learns information about the origin by design, like the . In this variant, if the attester were to pass information about the client on to the issuer, then the issuer would be able to reconstruct the (client, origin) pair, breaking the unlinkability goals. I agree that in the case where the issuer learns nothing about the origin in the protocol then reliance on the attester is lessened, but we need to make sure we accommodate future issuance protocols here. I'll think about ways in which we can try to make this more clear. This might go into the privacy considerations, as suggested.", "new_text": "minimizing the set of supported attestation procedures helps minimize the amount of information leaked through a token. The role of the Attester in the issuance protocol and its impact on privacy depends on the type of attestation procedure, issuance protocol, deployment model. 
For instance, requiring a conjunction of attestation procedures could decrease the overall anonymity set size. As an example, the number of Clients that have solved a CAPTCHA in the past day, that have a valid account, and that are running on a trusted device is less than the number of Clients that have solved a CAPTCHA in the past day. Attesters SHOULD NOT be based on attestation procedures that result in small anonymity sets. Depending on the issuance protocol, the Issuer might learn information about the Origin. To ensure Issuer-Client unlinkability, the Issuer should be unable to link that information to a specific Client. For such issuance protocols where the Attester has access to Client-specific information, such as is the case for attestation procedures that involve Client-specific information (such as application-layer account information) or for deployment models where the Attester learns Client-specific information (such as Client IP addresses), Clients trust the Attester to not share any Client- specific information with the Issuer. In deployments where the Attester does not learn Client-specific information, the Client does not need to explicitly trust the Attester in this regard. Issuers trust Attesters to correctly and reliably perform attestation. However, certain types of attestation can vary in value"} {"id": "q-en-ip-handling-5a1ee329b6b9ebb2c793101be24476e308317ac1a037d1050036d3c3d0702e6e", "old_text": "WebRTC IP Address Handling Requirements draft-ietf-rtcweb-ip-handling-10 Abstract", "comments": "8445 8174 Future documents, not future versions Fix normative refs\nSure - happy to add text of this sort if we think it would be valuable. OK. :-) OK.\nLGTM", "new_text": "WebRTC IP Address Handling Requirements draft-ietf-rtcweb-ip-handling-11 Abstract"} {"id": "q-en-ip-handling-5a1ee329b6b9ebb2c793101be24476e308317ac1a037d1050036d3c3d0702e6e", "old_text": "2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in RFC2119. 3. In order to establish a peer-to-peer connection, WebRTC implementations use Interactive Connectivity Establishment (ICE) RFC5245, which attempts to discover multiple IP addresses using techniques such as Session Traversal Utilities for NAT (STUN) RFC5389 and Traversal Using Relays around NAT (TURN) RFC5766, and then checks the connectivity of each local-address-remote-address pair in order", "comments": "8445 8174 Future documents, not future versions Fix normative refs\nSure - happy to add text of this sort if we think it would be valuable. OK. :-) OK.\nLGTM", "new_text": "2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. 3. 
In order to establish a peer-to-peer connection, WebRTC implementations use Interactive Connectivity Establishment (ICE) RFC8445, which attempts to discover multiple IP addresses using techniques such as Session Traversal Utilities for NAT (STUN) RFC5389 and Traversal Using Relays around NAT (TURN) RFC5766, and then checks the connectivity of each local-address-remote-address pair in order"} {"id": "q-en-ip-handling-5a1ee329b6b9ebb2c793101be24476e308317ac1a037d1050036d3c3d0702e6e", "old_text": "choose stricter modes if desired, e.g., if a user indicates they want all WebRTC traffic to follow the default route. Future versions of this document may define additional modes and/or update the recommended default modes. Note that the suggested defaults can still be used even for organizations that want all external WebRTC traffic to traverse a", "comments": "8445 8174 Future documents, not future versions Fix normative refs\nSure - happy to add text of this sort if we think it would be valuable. OK. :-) OK.\nLGTM", "new_text": "choose stricter modes if desired, e.g., if a user indicates they want all WebRTC traffic to follow the default route. Future documents may define additional modes and/or update the recommended default modes. Note that the suggested defaults can still be used even for organizations that want all external WebRTC traffic to traverse a"} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "1.3. When RFC8829 was published, inconsistencies regarding BUNDLE RFC8843 operation were identified with regard to both the specification text as well as implementation behavior. The former concern was addressed via an update to RFC8843. For the latter concern, it was observed that some implementations implemented the \"max-bundle\" bundle policy defined in RFC8829 by assuming that bundling had already been negotiated, rather than marking \"m=\" sections as bundle-only as indicated by the specification. In order to prevent unexpected changes to applications relying on the pre-standard behavior, the decision was made to deprecate \"max-bundle\" and instead introduce an identically defined \"must-bundle\" policy that, when selected, provides the behavior originally specified by RFC8829. 2.", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. 
If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. __ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. >From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. 
In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. spt", "new_text": "1.3. When RFC8829 was published, inconsistencies regarding BUNDLE I- D.ietf-mmusic-rfc8843-bis operation were identified with regard to both the specification text as well as implementation behavior. The former concern was addressed via an update to I-D.ietf-mmusic- rfc8843-bis. For the latter concern, it was observed that some implementations implemented the \"max-bundle\" bundle policy defined in RFC8829 by assuming that bundling had already been negotiated, rather than marking \"m=\" sections as bundle-only as indicated by the specification. In order to prevent unexpected changes to applications relying on the pre-standard behavior, the decision was made to deprecate \"max-bundle\" and instead introduce an identically defined \"must-bundle\" policy that, when selected, provides the behavior originally specified by RFC8829. 2."} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "bundled (either by a successful bundle negotiation or by being marked as bundle-only), then candidates will be gathered and exchanged for that \"m=\" section if and only if its MID item is a BUNDLE-tag, as described in RFC8843. 3.5.2.", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. 
Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. __ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. 
To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. >From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. spt", "new_text": "bundled (either by a successful bundle negotiation or by being marked as bundle-only), then candidates will be gathered and exchanged for that \"m=\" section if and only if its MID item is a BUNDLE-tag, as described in I-D.ietf-mmusic-rfc8843-bis. 
3.5.2."} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "MUST be zero. The application can specify its preferred policy regarding the use of BUNDLE, the multiplexing mechanism defined in RFC8843. Regardless of policy, the application will always try to negotiate bundle onto a single transport and will offer a single bundle group across all \"m=\" sections; use of this single transport is contingent upon the answerer accepting bundle. However, by specifying a policy from the list below, the application can control exactly how aggressively it will try to bundle media streams together, which affects how it will interoperate with a non-bundle-aware endpoint. When negotiating with a non-bundle-aware endpoint, only the streams not marked as bundle- only streams will be established. The set of available policies is as follows:", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. 
__ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. >From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. 
marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. spt", "new_text": "MUST be zero. The application can specify its preferred policy regarding the use of BUNDLE, the multiplexing mechanism defined in I-D.ietf-mmusic- rfc8843-bis. Regardless of policy, the application will always try to negotiate bundle onto a single transport and will offer a single bundle group across all \"m=\" sections; use of this single transport is contingent upon the answerer accepting bundle. However, by specifying a policy from the list below, the application can control exactly how aggressively it will try to bundle media streams together, which affects how it will interoperate with a non-bundle- aware endpoint. When negotiating with a non-bundle-aware endpoint, only the streams not marked as bundle-only streams will be established. The set of available policies is as follows:"} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "RFC8859 groups SDP attributes into different categories. To avoid unnecessary duplication when bundling, attributes of category IDENTICAL or TRANSPORT MUST NOT be repeated in bundled \"m=\" sections, repeating the guidance from RFC8843. This includes \"m=\" sections for which bundling has been negotiated and is still desired, as well as \"m=\" sections marked as bundle-only. The following attributes, which are of a category other than IDENTICAL or TRANSPORT, MUST be included in each \"m=\" section:", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. 
The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. __ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. 
>From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. spt", "new_text": "RFC8859 groups SDP attributes into different categories. To avoid unnecessary duplication when bundling, attributes of category IDENTICAL or TRANSPORT MUST NOT be repeated in bundled \"m=\" sections, repeating the guidance from I-D.ietf-mmusic-rfc8843-bis. This includes \"m=\" sections for which bundling has been negotiated and is still desired, as well as \"m=\" sections marked as bundle-only. The following attributes, which are of a category other than IDENTICAL or TRANSPORT, MUST be included in each \"m=\" section:"} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "possible, and media sections will not be marked as bundle-only. This is by design, but could cause issues in the rare case of sending a subsequent offer as an initial offer to a non-bundle-aware endpoint via Third Party Call Control (3PCC). 
\"a=group:LS\" attributes are generated in the same way as for initial offers, with the additional stipulation that any lip sync groups that", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. __ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. 
It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. >From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. 
spt", "new_text": "possible, and media sections will not be marked as bundle-only. This is by design, but could cause issues in the rare case of sending a subsequent offer as an initial offer to a non-bundle-aware endpoint via Third Party Call Control (3PCC), as discussed in I-D.ietf-mmusic- rfc8843-bis. \"a=group:LS\" attributes are generated in the same way as for initial offers, with the additional stipulation that any lip sync groups that"} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "If the answer contains valid bundle groups, discard any ICE components for the \"m=\" sections that will be bundled onto the primary ICE components in each bundle, and begin muxing these \"m=\" sections accordingly, as described in RFC8843. If the description is of type \"answer\" and there are still remaining candidates in the ICE candidate pool, discard them.", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. 
__ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. >From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. 
marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. spt", "new_text": "If the answer contains valid bundle groups, discard any ICE components for the \"m=\" sections that will be bundled onto the primary ICE components in each bundle, and begin muxing these \"m=\" sections accordingly, as described in I-D.ietf-mmusic-rfc8843-bis. If the description is of type \"answer\" and there are still remaining candidates in the ICE candidate pool, discard them."} {"id": "q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3", "old_text": "6. When bundling, associating incoming RTP/RTCP with the proper \"m=\" section is defined in RFC8843. When not bundling, the proper \"m=\" section is clear from the ICE component over which the RTP/RTCP is received. Once the proper \"m=\" section or sections are known, RTP/RTCP is delivered to the RtpTransceiver(s) associated with the \"m=\"", "comments": "and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Eventhough I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer. The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. 
The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. __ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. >From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.\u201d NEW: \u201cThe Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. 
The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, as an initial BUNDLE offer look different than a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such UA.\u201d Regards, Christer From: Roman Shpount > Sent: tiistai 2. marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3GPP controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month then once -02 is submitted we will progress this towards our AD. spt", "new_text": "6. When bundling, associating incoming RTP/RTCP with the proper \"m=\" section is defined in I-D.ietf-mmusic-rfc8843-bis. When not bundling, the proper \"m=\" section is clear from the ICE component over which the RTP/RTCP is received. Once the proper \"m=\" section or sections are known, RTP/RTCP is delivered to the RtpTransceiver(s) associated with the \"m=\""} {"id": "q-en-jsep-7f32b7ca8ba9fb44a7c221a0a2b1302ad0060906d23bbc8a1d795178c816f8ec", "old_text": "RFC4566 is the base SDP specification and MUST be implemented. RFC5764 MUST be supported for signaling the UDP/TLS/RTP/SAVPF RFC5764 and TCP/TLS/RTP/SAVPF I-D.nandakumar-mmusic-proto-iana- registration RTP profiles. RFC5245 MUST be implemented for signaling the ICE credentials and", "comments": "URL of draft-nandakumar. This is a merge of a previously approved PR\nlgtm\nLGTM\nAs per discussion in mmusic, to make it abundantly clear that DTLS is to be used even when running over ICE-TCP, the profile name should be TCP/DTLS/RTP/SAVPF. We'll leave the UDP/TLS/RTP/SAVPF one alone, since that ship has sailed, and it's less ambiguous.\nNeed Suhas to update URL to reflect the above.\nFor SCTP, URL has the correct UDP/DTLS/SCTP and TCP/DTLS/SCTP things that we want to refer to.\nThe action item here is to update this document to use the correct fields as agreed above as if draft-nandakumar was done, and then when that document is fixed we will be good.", "new_text": "RFC4566 is the base SDP specification and MUST be implemented. 
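A minimal sketch of the BUNDLE demultiplexing idea referenced above: once the MID header extension value has been extracted from an incoming RTP packet (extraction itself is not shown), the packet is routed to the transceiver for that m= section, falling back to a previously learned SSRC binding. The types and class name here are illustrative only, not part of any API.

```typescript
// Illustrative routing of bundled RTP to the RtpTransceiver for its m= section.
interface IncomingRtp { mid?: string; ssrc: number; payload: Uint8Array; }
interface Transceiver { mid: string; deliver(p: IncomingRtp): void; }

class BundleDemuxer {
  private byMid = new Map<string, Transceiver>();
  private bySsrc = new Map<number, Transceiver>();

  register(t: Transceiver) { this.byMid.set(t.mid, t); }

  route(pkt: IncomingRtp) {
    // Prefer the MID header extension; otherwise use a previously learned SSRC.
    const t = pkt.mid !== undefined
      ? this.byMid.get(pkt.mid)
      : this.bySsrc.get(pkt.ssrc);
    if (!t) return;                  // unroutable: drop (or buffer) the packet
    this.bySsrc.set(pkt.ssrc, t);    // remember the SSRC binding for later packets
    t.deliver(pkt);
  }
}
```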
RFC5764 MUST be supported for signaling the UDP/TLS/RTP/SAVPF RFC5764 and TCP/DTLS/RTP/SAVPF I-D.nandakumar-mmusic-proto-iana- registration RTP profiles. RFC5245 MUST be implemented for signaling the ICE credentials and"} {"id": "q-en-jsep-7f32b7ca8ba9fb44a7c221a0a2b1302ad0060906d23bbc8a1d795178c816f8ec", "old_text": "5.1.3. For media m= sections, JSEP endpoints MUST support both the \"UDP/TLS/ RTP/SAVPF\" and \"TCP/TLS/RTP/SAVPF\" profiles and MUST indicate one of these two profiles for each media m= line they produce in an offer. For data m= sections, JSEP endpoints must support both the \"UDP/TLS/ SCTP\" and \"TCP/TLS/SCTP\" profiles and MUST indicate one of these two profiles for each data m= line they produce in an offer. Because ICE can select either TCP or UDP transport depending on network conditions, both advertisements are consistent with ICE eventually", "comments": "URL of draft-nandakumar. This is a merge of a previously approved PR\nlgtm\nLGTM\nAs per discussion in mmusic, to make it abundantly clear that DTLS is to be used even when running over ICE-TCP, the profile name should be TCP/DTLS/RTP/SAVPF. We'll leave the UDP/TLS/RTP/SAVPF one alone, since that ship has sailed, and it's less ambiguous.\nNeed Suhas to update URL to reflect the above.\nFor SCTP, URL has the correct UDP/DTLS/SCTP and TCP/DTLS/SCTP things that we want to refer to.\nThe action item here is to update this document to use the correct fields as agreed above as if draft-nandakumar was done, and then when that document is fixed we will be good.", "new_text": "5.1.3. For media m= sections, JSEP endpoints MUST support both the \"UDP/TLS/ RTP/SAVPF\" and \"TCP/DTLS/RTP/SAVPF\" profiles and MUST indicate one of these two profiles for each media m= line they produce in an offer. For data m= sections, JSEP endpoints must support both the \"UDP/DTLS/ SCTP\" and \"TCP/DTLS/SCTP\" profiles and MUST indicate one of these two profiles for each data m= line they produce in an offer. Because ICE can select either TCP or UDP transport depending on network conditions, both advertisements are consistent with ICE eventually"} {"id": "q-en-jsep-7f32b7ca8ba9fb44a7c221a0a2b1302ad0060906d23bbc8a1d795178c816f8ec", "old_text": "compatible mode and use AVP timing, i.e., \"trr-int=4\". For data m= sections, JSEP endpoints MUST support receiving the \"UDP/ TLS/SCTP\", \"TCP/TLS/SCTP\", or \"DTLS/SCTP\" (for backwards compatibility) profiles. Note that re-offers by JSEP endpoints MUST use the correct profile", "comments": "URL of draft-nandakumar. This is a merge of a previously approved PR\nlgtm\nLGTM\nAs per discussion in mmusic, to make it abundantly clear that DTLS is to be used even when running over ICE-TCP, the profile name should be TCP/DTLS/RTP/SAVPF. We'll leave the UDP/TLS/RTP/SAVPF one alone, since that ship has sailed, and it's less ambiguous.\nNeed Suhas to update URL to reflect the above.\nFor SCTP, URL has the correct UDP/DTLS/SCTP and TCP/DTLS/SCTP things that we want to refer to.\nThe action item here is to update this document to use the correct fields as agreed above as if draft-nandakumar was done, and then when that document is fixed we will be good.", "new_text": "compatible mode and use AVP timing, i.e., \"trr-int=4\". For data m= sections, JSEP endpoints MUST support receiving the \"UDP/ DTLS/SCTP\", \"TCP/DTLS/SCTP\", or \"DTLS/SCTP\" (for backwards compatibility) profiles. 
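The profile rules above reduce to a small mapping from section kind and default-candidate transport to the m= line proto token. A sketch of that mapping follows; the helper name is hypothetical.

```typescript
// Pick the m= line proto token from the section kind and the transport of the
// default candidate, per the profile names listed above.
type Transport = "udp" | "tcp";

function protoForSection(kind: "media" | "data", transport: Transport): string {
  if (kind === "media") {
    return transport === "udp" ? "UDP/TLS/RTP/SAVPF" : "TCP/DTLS/RTP/SAVPF";
  }
  return transport === "udp" ? "UDP/DTLS/SCTP" : "TCP/DTLS/SCTP";
}

console.log(protoForSection("media", "tcp")); // TCP/DTLS/RTP/SAVPF
console.log(protoForSection("data", "udp"));  // UDP/DTLS/SCTP
```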
Note that re-offers by JSEP endpoints MUST use the correct profile"} {"id": "q-en-jsep-7f32b7ca8ba9fb44a7c221a0a2b1302ad0060906d23bbc8a1d795178c816f8ec", "old_text": "To properly indicate use of DTLS, the field MUST be set to \"UDP/TLS/RTP/SAVPF\", as specified in RFC5764, Section 8, if the default candidate uses UDP transport, or \"TCP/TLS/RTP/SAVPF\", as specified inI-D.nandakumar-mmusic-proto-iana-registration if the default candidate uses TCP transport.", "comments": "URL of draft-nandakumar. This is a merge of a previously approved PR\nlgtm\nLGTM\nAs per discussion in mmusic, to make it abundantly clear that DTLS is to be used even when running over ICE-TCP, the profile name should be TCP/DTLS/RTP/SAVPF. We'll leave the UDP/TLS/RTP/SAVPF one alone, since that ship has sailed, and it's less ambiguous.\nNeed Suhas to update URL to reflect the above.\nFor SCTP, URL has the correct UDP/DTLS/SCTP and TCP/DTLS/SCTP things that we want to refer to.\nThe action item here is to update this document to use the correct fields as agreed above as if draft-nandakumar was done, and then when that document is fixed we will be good.", "new_text": "To properly indicate use of DTLS, the field MUST be set to \"UDP/TLS/RTP/SAVPF\", as specified in RFC5764, Section 8, if the default candidate uses UDP transport, or \"TCP/DTLS/RTP/SAVPF\", as specified inI-D.nandakumar-mmusic-proto-iana-registration if the default candidate uses TCP transport."} {"id": "q-en-jsep-7f32b7ca8ba9fb44a7c221a0a2b1302ad0060906d23bbc8a1d795178c816f8ec", "old_text": "Lastly, if a data channel has been created, a m= section MUST be generated for data. The field MUST be set to \"application\" and the field MUST be set to \"UDP/TLS/SCTP\" if the default candidate uses UDP transport, or \"TCP/TLS/SCTP\" if the default candidate uses TCP transport I-D.ietf-mmusic-sctp-sdp. The \"fmt\" value MUST be set to the SCTP port number, as specified in Section 4.1. [TODO: update this to use a=sctp-port, as indicated in", "comments": "URL of draft-nandakumar. This is a merge of a previously approved PR\nlgtm\nLGTM\nAs per discussion in mmusic, to make it abundantly clear that DTLS is to be used even when running over ICE-TCP, the profile name should be TCP/DTLS/RTP/SAVPF. We'll leave the UDP/TLS/RTP/SAVPF one alone, since that ship has sailed, and it's less ambiguous.\nNeed Suhas to update URL to reflect the above.\nFor SCTP, URL has the correct UDP/DTLS/SCTP and TCP/DTLS/SCTP things that we want to refer to.\nThe action item here is to update this document to use the correct fields as agreed above as if draft-nandakumar was done, and then when that document is fixed we will be good.", "new_text": "Lastly, if a data channel has been created, a m= section MUST be generated for data. The field MUST be set to \"application\" and the field MUST be set to \"UDP/DTLS/SCTP\" if the default candidate uses UDP transport, or \"TCP/DTLS/SCTP\" if the default candidate uses TCP transport I-D.ietf-mmusic-sctp-sdp. The \"fmt\" value MUST be set to the SCTP port number, as specified in Section 4.1. [TODO: update this to use a=sctp-port, as indicated in"} {"id": "q-en-jsep-cc8b4a3d2c99ac6c316dca2eb8245f917b3853820f40c4a45bc92a764a7d041b", "old_text": "Specific processing MUST also be applied for the following attribute lines: If present, a single \"a=ice-lite\" line is parsed as specified in RFC5245, Section 15.3, and a value indicating the presence of ice- lite is stored. 
If present, a single \"a=ice-ufrag\" line is parsed as specified in RFC5245, Section 15.4, and the ufrag value is stored.", "comments": "LGTM\nhttp://rtcweb-URL mentions its use at media level; this should be clarified.", "new_text": "Specific processing MUST also be applied for the following attribute lines: If present, a single \"a=ice-ufrag\" line is parsed as specified in RFC5245, Section 15.4, and the ufrag value is stored."} {"id": "q-en-jsep-c8a3805a92d1bd6e28395be69128521310dea8da663f859fc879313e81f5b3c9", "old_text": "BUNDLE, the multiplexing mechanism defined in I-D.ietf-mmusic-sdp- bundle-negotiation. Regardless of policy, the application will always try to negotiate BUNDLE onto a single transport, and will offer a single BUNDLE group across all media sections. However, by specifying a policy from the list below, the application can control how aggressively it will try to BUNDLE media streams together, which affects how it will interoperate with a non-BUNDLE-aware endpoint. When negotiating with a non-BUNDLE-aware endpoint, only the streams not marked as bundle-only streams will be established. The set of available policies is as follows: The first media section of each type (audio, video, or application) will contain transport parameters, which will allow", "comments": "LGTM\nOverall looks good - added some minor suggestions\nlgtm\nF. Section 4.1.1: Regardless of policy, the application will always try to negotiate BUNDLE onto a single transport, and will offer a single BUNDLE group across all media sections. I think this sentence may be a bit misleading without additional sentence(s) to explain that the actual number of established transport streams will depend on the peer's response to the attempt to bundle.\nOK. This could be resolved by adding a final sentence of the form \"Of course, the answerer can choose whether or not to accept BUNDLE\".", "new_text": "BUNDLE, the multiplexing mechanism defined in I-D.ietf-mmusic-sdp- bundle-negotiation. Regardless of policy, the application will always try to negotiate BUNDLE onto a single transport, and will offer a single BUNDLE group across all media section; use of this single transport is contingent upon the answerer accepting BUNDLE. However, by specifying a policy from the list below, the application can control exactly how aggressively it will try to BUNDLE media streams together, which affects how it will interoperate with a non- BUNDLE-aware endpoint. When negotiating with a non-BUNDLE-aware endpoint, only the streams not marked as bundle-only streams will be established. The set of available policies is as follows: The first media section of each type (audio, video, or application) will contain transport parameters, which will allow"} {"id": "q-en-jsep-fc0609ef2534216d66f6906c1a6ac2a2aef7fa67229421e952f4b553a6b00185", "old_text": "present, but an \"AS\" value is specified, generate a \"TIAS\" value using this formula: TIAS = AS * 0.95 - 50 * 40 * 8 The 50 is based on 50 packets per second, the 40 is based on an estimate of total header size, and the 0.95 is to allocate 5% to RTCP. If more accurate control of bandwidth is needed, \"TIAS\" should be used instead of \"AS\". For any \"RR\" or \"RS\" bandwidth values, handle as specified in RFC3556, Section 2.", "comments": "b=TIAS is specified in bits per second, b=AS in kbps. Hence the formula needs to multiply by 1000 to change units. Thanks NAME for catching this in code review.\nEven though I noticed it in the code review I had not noticed it as an issue in the spec. 
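With the unit fix these comments describe (b=AS is in kilobits per second while b=TIAS is in bits per second), the corrected conversion can be written out directly. The helper name below is illustrative.

```typescript
// Corrected TIAS-from-AS conversion: scale AS by 1000 (kbps -> bps), take 95%
// for media, and subtract an estimated 50 packets/s * 40 bytes of headers.
function tiasFromAs(asKbps: number): number {
  return asKbps * 1000 * 0.95 - 50 * 40 * 8;
}

console.log(tiasFromAs(256)); // 227200 bits per second
```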
Good catch!\nnice catch - lgtm with minor comment\nLGTM\nLGTM to me as well. I updated the text in", "new_text": "present, but an \"AS\" value is specified, generate a \"TIAS\" value using this formula: TIAS = AS * 1000 * 0.95 - 50 * 40 * 8 The 50 is based on 50 packets per second, the 40 is based on an estimate of total header size, the 1000 is to change the unit from kbps to bps and the 0.95 is to allocate 5% to RTCP. If more accurate control of bandwidth is needed, \"TIAS\" should be used instead of \"AS\". For any \"RR\" or \"RS\" bandwidth values, handle as specified in RFC3556, Section 2."} {"id": "q-en-jsep-413af353d909fafa04cb3f31b474d5aeaedc76ca15e2ea6314274d30b8db71cf", "old_text": "The next step is to generate m= sections, as specified in RFC4566 Section 5.14. An m= section is generated for each RtpTransceiver that has been added to the PeerConnection via the addTrack, addTransceiver, and setRemoteDescription methods. [[OPEN ISSUE: move discussion of setRemoteDescription to the subsequent-offer section.]] This is done in the order that their associated RtpTransceivers were added to the PeerConnection and excludes RtpTranscievers that are stopped and not associated with an m= section (either due to an m= section being recycled or an RtpTransceiver having been stopped before being associated with an m= section) . Each m= section, provided it is not marked as bundle-only, MUST generate a unique set of ICE credentials and gather its own unique", "comments": "LGTM\nWFM. NAME PTAL\nThe current draft suggests moving the discussion of setRemoteDescription: [[OPEN ISSUE: move discussion of setRemoteDescription to the subsequent-offer section.]]", "new_text": "The next step is to generate m= sections, as specified in RFC4566 Section 5.14. An m= section is generated for each RtpTransceiver that has been added to the PeerConnection. This is done in the order that their associated RtpTransceivers were added to the PeerConnection and excludes RtpTransceivers that are stopped and not associated with an m= section (either due to an m= section being recycled or an RtpTransceiver having been stopped before being associated with an m= section) . Each m= section, provided it is not marked as bundle-only, MUST generate a unique set of ICE credentials and gather its own unique"} {"id": "q-en-jsep-b1449e0d22a48f677c8df366a2db4db4d30886f27d1416a4aff69295542a666f", "old_text": "as defined in RFC5245, Section 4.1.1, unless it has been marked as bundle-only. Or, if the ICE ufrag and password values have changed, trigger the ICE Agent to start an ICE restart and begin gathering new candidates for the media section, as defined in RFC5245, Section 9.1.1.1, unless it has been marked as bundle-only. If the media section proto value indicates use of RTP:", "comments": "NAME removed the dangerous text.\nStil LGTM\nConsider the following sequence of actions: setRemoteDescription(offerwithicerestart); addIceCandidate(candidate); setLocalDescription(answerwithicerestart); As issue describes, when the remote peer has set an offer with an ICE restart, but has not yet set an answer, it's doing ICE processing for both the new and old \"generations\" simultaneously (assuming we go that route). So, is this a candidate for the new or old generation? If \"old\", then it can be applied immediately. Otherwise, it should be queued and only applied after setting the local description, since the local description is what triggers the ICE restart. 
Of course, there's a separate issue, which is that there's no way of telling which \"generation\" the candidate applies to. One proposed way of fixing this is to trickle a ufrag/password along with the candidate. Another way would be to queue the candidates from the \"old generation\" before signaling them, and only signal them after rolling back the local description. If instead of rolling back the local description, an answer is applied, these queued old candidates would be discarded. This way, once a candidate is signaled, it unambiguously applies to only the newest generation. Note that the \"candidate\" could also be an end-of-candidates indication, and the same reasoning applies.\nHonghai already has considered how to associate candidates with ufrag/pwds. However, I think in the absence of such markings, the candidate should be associated with the new generation, since the offer specified ICE restart and the candidate followed the offer.\nI agree. Thus, if an endpoint gathers a candidate for an old generation (and doesn't have a mechanism to associate it with a ufrag/pwd), it should only signal the candidate if and when the local description that triggered the ICE restart gets rolled back. That's what I was trying to explain in the second-to-last paragraph.\nIn ICE, we agreed to have a ufrag parameter on the candidate. Where that parameter goes is still TBD; it could be in a=candidate or on RTCIceCandidate. But this makes it clear what the JSEP implementation should do in this case.\nThe ufrag parameter would need to apply to a=end-of-candidates as well.\nThis is good news. But now we need to go back to the drawing board with trickling end-of-candidates. If the ufrag needs to be trickled, we can't just trickle a \"null\" candidate. And we can't trickle a single special candidate with a ufrag, because different media descriptions could have different ufrags.\n:-/ Perhaps we could generate individual end-of-candidates objects, followed by a null for backcompat. The null would serve as an implicit end-of-candidates for the current ufrag, but new apps would never use the null.\nI think I lean towards having the trickle candidates strongly tied to a ufrag/pwd.\nTo provide an update on this issue: I think we still want to clarify that if there's a pending ICE restart initiated by the remote peer, any remote candidates from the new \"ICE negotiation session\" should be queued and only processed by the local ICE agent once the stable signaling state is reached. I don't think this can go in the trickle ICE spec, because trickle ICE has no concept of a pending ICE restart (it's a consequence of having the ability to roll back an ICE restart offer). Also, last I heard the decision was to trickle the ufrag and mid along with end-of-candidates indications. So the second paragraph of the section will need to be changed to reflect this; it's no longer one indication \"for all media descriptions in the last remote description\".\nI've dealt with the second paragraph in PR . The first seems mostly like a \"can't happen\" to me. As an offerer, you should not be receiving candidates before the answer (that indicates a signaling channel reordering issue). I suppose in principle as an answerer you could receive ice candidates while thinking about the offer, but I think that the answer there is you just discard them.\nSee also", "new_text": "as defined in RFC5245, Section 4.1.1, unless it has been marked as bundle-only. 
Or, if the ICE ufrag and password values have changed, and it has not been marked as bundle-only, trigger the ICE Agent to start an ICE restart, and begin gathering new candidates for the media section as described in RFC5245, Section 9.1.1.1. If this description is an answer, also start checks on that media section as defined in RFC5245, Section 9.3.1.1. If the media section proto value indicates use of RTP:"} {"id": "q-en-jsep-b1449e0d22a48f677c8df366a2db4db4d30886f27d1416a4aff69295542a666f", "old_text": "parameters are out of bounds, or cannot be applied, processing MUST stop and an error MUST be returned. If the description is of type \"offer\", and the ICE ufrag or password changed from the previous remote description, as described in Section 9.1.1.1 of RFC5245, mark that an ICE restart is needed. Configure the ICE components associated with this media section to use the supplied ICE remote ufrag and password for their", "comments": "NAME removed the dangerous text.\nStil LGTM\nConsider the following sequence of actions: setRemoteDescription(offerwithicerestart); addIceCandidate(candidate); setLocalDescription(answerwithicerestart); As issue describes, when the remote peer has set an offer with an ICE restart, but has not yet set an answer, it's doing ICE processing for both the new and old \"generations\" simultaneously (assuming we go that route). So, is this a candidate for the new or old generation? If \"old\", then it can be applied immediately. Otherwise, it should be queued and only applied after setting the local description, since the local description is what triggers the ICE restart. Of course, there's a separate issue, which is that there's no way of telling which \"generation\" the candidate applies to. One proposed way of fixing this is to trickle a ufrag/password along with the candidate. Another way would be to queue the candidates from the \"old generation\" before signaling them, and only signal them after rolling back the local description. If instead of rolling back the local description, an answer is applied, these queued old candidates would be discarded. This way, once a candidate is signaled, it unambiguously applies to only the newest generation. Note that the \"candidate\" could also be an end-of-candidates indication, and the same reasoning applies.\nHonghai already has considered how to associate candidates with ufrag/pwds. However, I think in the absence of such markings, the candidate should be associated with the new generation, since the offer specified ICE restart and the candidate followed the offer.\nI agree. Thus, if an endpoint gathers a candidate for an old generation (and doesn't have a mechanism to associate it with a ufrag/pwd), it should only signal the candidate if and when the local description that triggered the ICE restart gets rolled back. That's what I was trying to explain in the second-to-last paragraph.\nIn ICE, we agreed to have a ufrag parameter on the candidate. Where that parameter goes is still TBD; it could be in a=candidate or on RTCIceCandidate. But this makes it clear what the JSEP implementation should do in this case.\nThe ufrag parameter would need to apply to a=end-of-candidates as well.\nThis is good news. But now we need to go back to the drawing board with trickling end-of-candidates. If the ufrag needs to be trickled, we can't just trickle a \"null\" candidate. 
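A rough sketch of the check described in the draft text above, under the assumption that ice-ufrag and ice-pwd are compared per m= section of the previous and current descriptions: a change in either value signals an ICE restart. Names are hypothetical.

```typescript
// Detect an ICE restart for one m= section by comparing its ICE credentials
// between the previous and the newly supplied description.
interface IceParams { ufrag: string; pwd: string; }

function iceCredentials(sdpSection: string): IceParams {
  const ufrag = /a=ice-ufrag:(\S+)/.exec(sdpSection)?.[1] ?? "";
  const pwd = /a=ice-pwd:(\S+)/.exec(sdpSection)?.[1] ?? "";
  return { ufrag, pwd };
}

function needsIceRestart(previousSection: string, currentSection: string): boolean {
  const prev = iceCredentials(previousSection);
  const curr = iceCredentials(currentSection);
  return prev.ufrag !== curr.ufrag || prev.pwd !== curr.pwd;
}
```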
And we can't trickle a single special candidate with a ufrag, because different media descriptions could have different ufrags.\n:-/ Perhaps we could generate individual end-of-candidates objects, followed by a null for backcompat. The null would serve as an implicit end-of-candidates for the current ufrag, but new apps would never use the null.\nI think I lean towards having the trickle candidates strongly tied to a ufrag/pwd.\nTo provide an update on this issue: I think we still want to clarify that if there's a pending ICE restart initiated by the remote peer, any remote candidates from the new \"ICE negotiation session\" should be queued and only processed by the local ICE agent once the stable signaling state is reached. I don't think this can go in the trickle ICE spec, because trickle ICE has no concept of a pending ICE restart (it's a consequence of having the ability to roll back an ICE restart offer). Also, last I heard the decision was to trickle the ufrag and mid along with end-of-candidates indications. So the second paragraph of the section will need to be changed to reflect this; it's no longer one indication \"for all media descriptions in the last remote description\".\nI've dealt with the second paragraph in PR . The first seems mostly like a \"can't happen\" to me. As an offerer, you should not be receiving candidates before the answer (that indicates a signaling channel reordering issue). I suppose in principle as an answerer you could receive ice candidates while thinking about the offer, but I think that the answer there is you just discard them.\nSee also", "new_text": "parameters are out of bounds, or cannot be applied, processing MUST stop and an error MUST be returned. If the ICE ufrag or password changed from the previous remote description, then an ICE restart is needed, as described in Section 9.1.1.1 of RFC5245. If the description is of type \"offer\", mark that an ICE restart is needed. If the description is of type \"answer\" and the current local description is also an ICE restart, then signal the ICE agent to begin checks as described in Section 9.3.1.1 of RFC5245. An answer MUST change the ufrag and password if and only if ICE is restarting, as described in Section 9.2.1.1 of RFC5245. Configure the ICE components associated with this media section to use the supplied ICE remote ufrag and password for their"} {"id": "q-en-jsep-b0cfefec767adfb583d00a2fbba92a75cdd98e05658d3d17c38cc6e362f25d41", "old_text": "4.1.3. [TODO] 4.1.4.", "comments": "NAME\nNAME PTAL\nLG with nits\npara break and typo comment were missed\nIn the current draft we have addTransceiver [TODO] We need this text before WGLC, so it's being added to the punchlist.", "new_text": "4.1.3. The addTransceiver method adds a new RtpTransceiver to the PeerConnection. If a MediaStreamTrack argument is provided, then the transceiver will be configured with that media type and the track will be attached to the transceiver. Otherwise, the application MUST explicitly specify the type; this mode is useful for creating recvonly transceivers as well as for creating transceivers to which a track can be attached at some later point. At the time of creation, the application can also specify a transceiver direction attribute, a set of MediaStreams which the transceiver is associated with (allowing LS group assignments), and a set of encodings for the media (used for simulcast as described in sec.simulcast). 
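As a rough, non-normative illustration of the three usage modes just described, an application might call addTransceiver along the following lines; the init dictionary member names (direction, streams, sendEncodings) are assumed from the W3C WebRTC 1.0 API rather than taken from this text.

// Illustrative sketch only; assumes the W3C WebRTC 1.0 API surface.
async function addTransceiverExamples(pc: RTCPeerConnection) {
  // 1. Attach an existing track; the transceiver takes its media type from
  //    the track, and the stream association enables grouping.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  pc.addTransceiver(track, { streams: [stream] });

  // 2. Specify only the media type to get a transceiver with no track,
  //    e.g. a receive-only audio transceiver.
  pc.addTransceiver("audio", { direction: "recvonly" });

  // 3. Specify the type plus per-encoding parameters when the media will be
  //    sent as simulcast; a track can be attached to the sender later.
  pc.addTransceiver("video", {
    direction: "sendonly",
    sendEncodings: [{ rid: "low" }, { rid: "high" }],
  });
}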
4.1.4."} {"id": "q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c", "old_text": "For each specified RTP header extension, establish a mapping between the extension ID and URI, as described in section 6 of RFC5285. If any indicated RTP header extension is unknown, this MUST result in an error. If the MID header extension is supported, prepare to demux RTP data intended for this media section based on the MID header extension, as described in I-D.ietf-mmusic-msid, Section 3.2. For each specified payload type, establish a mapping between the payload type ID and the actual media format, as described in RFC3264. If any indicated payload type is unknown, this MUST result in an error. For each specified \"rtx\" media format, establish a mapping between the RTX payload type and its associated primary payload", "comments": "Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don\u2019t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. [TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM", "new_text": "For each specified RTP header extension, establish a mapping between the extension ID and URI, as described in section 6 of RFC5285. If any indicated RTP header extension is not supported, this MUST result in an error. If the MID header extension is supported, prepare to demux RTP data intended for this media section based on the MID header extension, as described in I-D.ietf-mmusic-msid, Section 3.2. For each specified media format, establish a mapping between the payload type and the actual media format, as described in RFC3264, Section 6.1. If any indicated media format is not supported, this MUST result in an error. For each specified \"rtx\" media format, establish a mapping between the RTX payload type and its associated primary payload"} {"id": "q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c", "old_text": "If the media section proto value indicates use of RTP: [TODO: header extensions] If the m= section is being recycled (see sec.subsequent- offers), dissociate the currently associated RtpTransceiver by setting its mid attribute to null.", "comments": "Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don\u2019t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. 
[TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM", "new_text": "If the media section proto value indicates use of RTP: If the m= section is being recycled (see sec.subsequent- offers), dissociate the currently associated RtpTransceiver by setting its mid attribute to null."} {"id": "q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c", "old_text": "section by setting the value of the RtpTransceiver's mid attribute to the MID of the m= section. For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. [TODO - Justin to add more to explain mapping.] If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] For each specified \"rtx\" media format, establish a mapping between the RTX payload type and its associated primary payload type, as described in RFC4588. If any referenced primary payload types are not present, this MUST result in an error. For each specified fmtp parameter that is supported by the local implementation, enable them on the associated payload types. For each specified RTCP feedback mechanism that is supported by the local implementation, enable them on the associated payload types. For any specified \"TIAS\" bandwidth value, set this value as a constraint on the maximum RTP bitrate to be used when sending", "comments": "Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don\u2019t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. [TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM", "new_text": "section by setting the value of the RtpTransceiver's mid attribute to the MID of the m= section. For each specified media format that is also supported by the local implementation, establish a mapping between the specified payload type and the media format, as described in RFC3264, Section 6.1. Specifically, this means that the implementation records the payload type to be used in outgoing RTP packets when sending each specified media format, as well as the relative preference for each format that is indicated in their ordering. If any indicated media format is not supported by the local implementation, it MUST be ignored. For each specified \"rtx\" media format, establish a mapping between the RTX payload type and its associated primary payload type, as described in RFC4588, Section 4. If any referenced primary payload types are not present, this MUST result in an error. For each specified fmtp parameter that is supported by the local implementation, enable them on the associated media formats. 
For each specified RTP header extension that is also supported by the local implementation, establish a mapping between the extension ID and URI, as described in RFC5285, Section 5. Specifically, this means that the implementation records the extension ID to be used in outgoing RTP packets when sending each specified header extension. If any indicated RTP header extension is not supported by the local implementation, it MUST be ignored. For each specified RTCP feedback mechanism that is supported by the local implementation, enable them on the associated media formats. For any specified \"TIAS\" bandwidth value, set this value as a constraint on the maximum RTP bitrate to be used when sending"} {"id": "q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c", "old_text": "[TODO: handling of CN, telephone-event, \"red\"] If the media section if of type audio: For any specified \"ptime\" value, configure the available payload types to use the specified packet size. If the specified size is not supported for a payload type, use the next closest value instead. Finally, if this description is of type \"pranswer\" or \"answer\",", "comments": "Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don\u2019t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. [TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM", "new_text": "[TODO: handling of CN, telephone-event, \"red\"] If the media section is of type audio: For any specified \"ptime\" value, configure the available media formats to use the specified packet size. If the specified size is not supported for a media format, use the next closest value instead. Finally, if this description is of type \"pranswer\" or \"answer\","} {"id": "q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c", "old_text": "If the media section proto value indicates use of RTP: If the media section has RTCP mux enabled, discard any RTCP component, and begin or continue muxing RTCP over the RTP component, as specified in RFC5761, Section 5.1.3. Otherwise,", "comments": "Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don\u2019t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. 
[TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM", "new_text": "If the media section proto value indicates use of RTP: If the media section references any media formats, RTP header extensions, or RTCP feedback mechanisms that were not present in the corresponding media section in the offer, this indicates a negotiation problem and MUST result in an error. If the media section has RTCP mux enabled, discard any RTCP component, and begin or continue muxing RTCP over the RTP component, as specified in RFC5761, Section 5.1.3. Otherwise,"} {"id": "q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c", "old_text": "the RTCP transmission for this media section to use reduced- size RTCP, as specified in RFC5506. If the directional attribute in the answer is of type \"sendrecv\" or \"sendonly\", prepare to start transmitting media using the specified primary SSRC and one of the selected payload types, once the underlying transport layers have been established. If RID values are specified, include the RID header extension in the RTP streams, as indicated in I-D.ietf- mmusic-rid, Section 4). If simulcast is negotiated, send the number of Source RTP Streams as specified in I-D.ietf-mmusic- sdp-simulcast, Section 6.2.2. If the directional attribute is of type \"recvonly\" or \"inactive\", stop transmitting RTP media, although RTCP should still be sent, as described in RFC3264, Section 5.1. If the media section proto value indicates use of SCTP:", "comments": "Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don\u2019t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. [TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. [TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM", "new_text": "the RTCP transmission for this media section to use reduced- size RTCP, as specified in RFC5506. [TODO: enable appropriate rtcp-fb mechanisms] If the directional attribute in the answer is of type \"sendrecv\" or \"sendonly\", prepare to start transmitting media using the most preferred media format from the remote description that is also present in the answer, as described in RFC3264, Sections 6.1 and 7, once the underlying transport layers have been established. [TODO: add discusssion of RED/FEC/RTX/CN] The payload type mapping from the remote description is used to determine payload types for the outgoing RTP streams. Any RTP header extensions that were negotiated should be included in the outgoing RTP streams, using the extension mapping from the remote description; if the RID header extension has been negotiated, and RID values are specified, include the RID header extension in the outgoing RTP streams, as indicated in I-D.ietf-mmusic-rid, Section 4). If simulcast is negotiated, send the number of Source RTP Streams as specified in I-D.ietf-mmusic-sdp-simulcast, Section 6.2.2. 
If the directional attribute is of type \"recvonly\" or \"inactive\", stop transmitting RTP media, although RTCP should still be sent, as described in RFC3264, Section 5.1. If the media section proto value indicates use of SCTP:"} {"id": "q-en-jsep-df9f4f15bb2fc71bf62ec72bec83c4364a152d886d3509c3a7189e97aa231a3b", "old_text": "Each \"a=mid\" line MUST stay the same. Each \"a=ice-ufrag\" and \"a=ice-pwd\" line MUST stay the same. For MediaStreamTracks that are still present, the \"a=msid\", \"a=ssrc\", and \"a=ssrc-group\" lines MUST stay the same.", "comments": "This is OK with me\nS 5.2.2 says: If the initial offer was applied using setLocalDescription, but an answer from the remote side has not yet been applied, meaning the PeerConnection is still in the \"local-offer\" state, an offer is generated by following the steps in the \"stable\" state above, along with these exceptions: o The \"s=\" and \"t=\" lines MUST stay the same. o Each \"a=mid\" line MUST stay the same. o Each \"a=ice-ufrag\" and \"a=ice-pwd\" line MUST stay the same. However, Section 5.2.3.4 says: If the \"IceRestart\" constraint is specified, with a value of \"true\", the offer MUST indicate an ICE restart by generating new ICE ufrag and pwd attributes, as specified in RFC5245, Section 9.1.1.1. If this constraint is specified on an initial offer, it has no effect (since a new ICE ufrag and pwd are already generated).\nYes, if this is an ICE restart, then ufrag/pwd need to change.\nDiscussed in DC: restart requires a change of credentials. Might want to add text that notes the dangers in doing this (i.e., don't do this).", "new_text": "Each \"a=mid\" line MUST stay the same. Each \"a=ice-ufrag\" and \"a=ice-pwd\" line MUST stay the same unless the \"IceRestart\" option (sec.options-handling was specified. Note that it's not clear why you would actually want to do this, since at this point ICE has not yet started and is thus unlikely to need a restart. For MediaStreamTracks that are still present, the \"a=msid\", \"a=ssrc\", and \"a=ssrc-group\" lines MUST stay the same."} {"id": "q-en-jsep-cef7d845e19b3e7b3b4434af8f0790bbd56bcde755a2f1cbf71aa523dab3c350", "old_text": "resources allocated by the caller can be released, now that the exact session configuration is known. These \"resources\" can include things like extra ICE components, TURN candidates, or video decoders. Provisional answers, on the other hand, do no such deallocation results; as a result, multiple dissimilar provisional answers can be received and applied during call setup. In RFC3264, the constraint at the signaling level is that only one offer can be outstanding for a given session, but at the media stack", "comments": "Provisional answers, on the other hand, do no such deallocation results as a result, multiple dissimilar provisional answers can be received and applied during call setup.\nFixe by", "new_text": "resources allocated by the caller can be released, now that the exact session configuration is known. These \"resources\" can include things like extra ICE components, TURN candidates, or video decoders. Provisional answers, on the other hand, do no such deallocation; as a result, multiple dissimilar provisional answers can be received and applied during call setup. 
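As a hedged sketch of how an offerer might consume such answers through the API (the parameter names here are placeholders; the calls themselves are the standard W3C setRemoteDescription surface), each provisional answer and the eventual final answer are simply applied in turn:

// Non-normative sketch: applying zero or more provisional answers and then
// the final answer received over the application's own signaling channel.
async function onRemoteAnswer(pc: RTCPeerConnection, sdp: string, isFinal: boolean) {
  await pc.setRemoteDescription({ type: isFinal ? "answer" : "pranswer", sdp });
  // Resources reserved by the offer are only released once the final
  // answer (type "answer") has been applied.
}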
In RFC3264, the constraint at the signaling level is that only one offer can be outstanding for a given session, but at the media stack"} {"id": "q-en-jsep-83e48c0f351ac09a47037cf825e147c517b873a94d68c657653ffa418df191f8", "old_text": "transceiver direction is intersected with the offered direction, as explained in the sec.generating-an-answer section below. 4.2.4. The direction method returns the last value passed into setDirection.", "comments": "Fixes issue .\nSection 4.2.3: The setDirection method sets the direction of a transceiver, which affects the direction attribute of the associated m= section on future calls to createOffer and createAnswer. [BA] \"sets the direction of a transceiver\" is a bit confusing because setDirection() doesn't immediately change whether an RtpSender can send or an RtpReceiver can receive. Since setDirection does immediately change the value of the transceiver.direction attribute you might say \"sets the transceiver.direction attribute\" instead. Also, an application may want to determine the current direction (e.g. whether an RtpSender can send or an RtpReceiver can receive) as opposed to the pending direction (provided by transceiver.direction). Taylor agreed to create a WebRTC 1.0 PR to add the concept of current direction (e.g. by adding a currentDirection attribute). It would therefore be useful to add some text to Section 4.2.3 explaining the distinction. For example: \"Note that while setDirection sets the transceiver.direction attribute immediately, this pending direction does not immediately affect whether an RtpSender will send or RtpReceiver will receive. After calls to setLocal/setRemoteDescription, the direction currently in effect is represented by transceiver.currentDirection (the attribute Taylor will be defining).\"\nAs for naming: which of these makes the most sense? \"direction\"/\"currentDirection\" \"pendingDirection\"/\"currentDirection\" \"preferredDirection\"/\"currentDirection\" \"preferredDirection\"/\"negotiatedDirection\" I like \"preferredDirection\" (or something of that nature); \"pending\" is a bit of a misnomer here because it's possible the set direction may never end up negotiated.\nNAME My personal preference would be \"direction\"/\"currentDirection\". While \"pendingDirection\" might be more accurate than \"direction\", I'm a bit scared of all the spec changes that might be result from changing the attribute name.\nAgree with NAME\nLGTM (direction + currentDirection).\nNAME can this now be closed?\nI don't think Bernard's concern was addressed, which is that \"sets the direction of a transceiver\" is too ambiguous. Now that we have a \"currentDirection\" to reference, I'll add some clarifying text like he suggested.\nResolved by .", "new_text": "transceiver direction is intersected with the offered direction, as explained in the sec.generating-an-answer section below. Note that while setDirection sets the direction attribute of the transceiver immediately (sec.transceiver-direction), this attribute does not immediately affect whether the transceiver's RtpSender will send or its RtpReceiver will receive. The direction in effect is represented by the currentDirection attribute, which is only updated when an answer is applied. 4.2.4. The direction method returns the last value passed into setDirection."} {"id": "q-en-jsep-b97144590971d312b8628e3cca9e35666493156a8cf3f2ff234459797a85c9d9", "old_text": "If RTCP mux is indicated, prepare to demux RTP and RTCP from the RTP ICE component, as specified in RFC5761, Section 5.1.1. 
If RTCP mux is not indicated, but was indicated in a previous description, this MUST result in an error. For each specified RTP header extension, establish a mapping between the extension ID and URI, as described in section 6 of", "comments": "LGTM\nIf RTCP mux is indicated, prepare to demux RTP and RTCP from the RTP ICE component, as specified in [RFC5761], Section 5.1.1. If RTCP mux is not indicated, but was indicated in a previous description, this MUST result in an error. I note that there can be difference here between an offer and an answer if the RTCP mux indication may have changed or not between these two description.\nAgreed. If we offered rtcp-mux, but it wasn't accepted, the next offer would be without mux. It should only be an error if we accepted mux and already discarded the RTCP ICE component.", "new_text": "If RTCP mux is indicated, prepare to demux RTP and RTCP from the RTP ICE component, as specified in RFC5761, Section 5.1.1. If RTCP mux is not indicated, but was previously negotiated, i.e., the RTCP ICE component no longer exists, this MUST result in an error. For each specified RTP header extension, establish a mapping between the extension ID and URI, as described in section 6 of"} {"id": "q-en-jsep-a801952321e0d2b7aa32af4e1d96102b720d5129e414b77e29c3dfc571422da8", "old_text": "local description is supplied, and the number of transports currently in use does not match the number of transports needed by the local description, the PeerConnection will create transports as needed and begin gathering candidates for them. If setRemoteDescription was previously called with an offer, and setLocalDescription is called with an answer (provisional or final),", "comments": "NAME don't love the way the bit about the pool turned out but I don't have better text. Fixed. Consensus here was this is fine. No, this seems fairly clear. URL Done. Done. Do you mean RTP here? If so, I'm feeling OK about this. Done.\nMerging and will tweak post-merge", "new_text": "local description is supplied, and the number of transports currently in use does not match the number of transports needed by the local description, the PeerConnection will create transports as needed and either withdraw an appropriate number of ICE candidates from the candidate pool or begin gathering candidates if sufficient pre- gathered candidates are not available. If setRemoteDescription was previously called with an offer, and setLocalDescription is called with an answer (provisional or final),"} {"id": "q-en-jsep-a801952321e0d2b7aa32af4e1d96102b720d5129e414b77e29c3dfc571422da8", "old_text": "The 50 is based on 50 packets per second, the 40 is based on an estimate of total header size, the 1000 changes the unit from kbps to bps (as required by TIAS), and the 0.95 is to allocate 5% to RTCP. If more accurate control of bandwidth is needed, \"TIAS\" should be used instead of \"AS\". For any \"RR\" or \"RS\" bandwidth values, handle as specified in RFC3556, Section 2.", "comments": "NAME don't love the way the bit about the pool turned out but I don't have better text. Fixed. Consensus here was this is fine. No, this seems fairly clear. URL Done. Done. Do you mean RTP here? If so, I'm feeling OK about this. Done.\nMerging and will tweak post-merge", "new_text": "The 50 is based on 50 packets per second, the 40 is based on an estimate of total header size, the 1000 changes the unit from kbps to bps (as required by TIAS), and the 0.95 is to allocate 5% to RTCP. 
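One way to read the constants in the preceding sentence is as the following conversion from a b=AS value (in kbps) to a TIAS value (in bps); this is an illustrative interpretation of that sentence, not a formula taken from elsewhere.

// Sketch of the AS (kbps) -> TIAS (bps) conversion suggested by the constants
// above: keep 95% of the bandwidth for RTP, convert kbps to bps, and subtract
// an estimated 50 packets/s * 40 bytes * 8 bits of header overhead.
function asToTias(asKbps: number): number {
  return asKbps * 0.95 * 1000 - 50 * 40 * 8;
}
// For example, asToTias(500) yields 459000.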
\"TIAS\" is used in preference to \"AS\" because it provides more accurate control of bandwidth. For any \"RR\" or \"RS\" bandwidth values, handle as specified in RFC3556, Section 2."} {"id": "q-en-jsep-a801952321e0d2b7aa32af4e1d96102b720d5129e414b77e29c3dfc571422da8", "old_text": "If the media section has been rejected (i.e. port is set to zero in the answer), stop any reception or transmission of media for this section, and discard any associated ICE components, as described in Section 9.2.1.3 of RFC5245. If the remote DTLS fingerprint has been changed or the dtls-id has", "comments": "NAME don't love the way the bit about the pool turned out but I don't have better text. Fixed. Consensus here was this is fine. No, this seems fairly clear. URL Done. Done. Do you mean RTP here? If so, I'm feeling OK about this. Done.\nMerging and will tweak post-merge", "new_text": "If the media section has been rejected (i.e. port is set to zero in the answer), stop any reception or transmission of media for this section, and, if there was no accepted media section bundled with this media section, discard any associated ICE components, as described in Section 9.2.1.3 of RFC5245. If the remote DTLS fingerprint has been changed or the dtls-id has"} {"id": "q-en-jsep-6b0f9285053b6efb3b8c361ec9ee86ae55d14423d6c7a807eb8123cb66f587d9", "old_text": "session configuration is known. These \"resources\" can include things like extra ICE components, TURN candidates, or video decoders. Provisional answers, on the other hand, do no such deallocation; as a result, multiple dissimilar provisional answers can be received and applied during call setup. In RFC3264, the constraint at the signaling level is that only one offer can be outstanding for a given session, but at the media stack", "comments": "Originally raised by Christer Section 3.2 says: \"JSEP also allows for an answer to be treated as provisional by the application. Provisional answers provide a way for an answerer to communicate initial session parameters back to the offerer, in order to allow the session to begin, while allowing a final answer to be specified later. This concept of a final answer is important to the offer/answer model; when such an answer is received, any extra resources allocated by the caller can be released, now that the exact session configuration is known.\" 1) I would like explicit text saying that this is a deviation from RFC 3264, where there is no such thing as a provisional answer. I would like to have a dedicated \u00b3Deviations to RFC3264\u00b2 (or something similar) section. 2) I would like explicit text saying that the final answer may be completely different (in terms of candidates, codecs, accepted m- line etc) from the provisional answer. 3) Related to 2), the final answer may also require NEW resources to be allocated.\nWe discussed this and we don't think text is needed for 1) or 3); the offerer doesn't release resources until the final answer. For 2), what kind of text did you have in mind?\nFew notes to cullen from talking wtih Christer Christer proposals ... adding section \"SDP Offer/Nswer Usage\" document variation we make on SAVPF stuff explain how ANSWE / PR answer are used withen paringin multiple to single offer Points to section for details ... follwo proccedures in 3264\nCullen, I think I improved this. PTAL", "new_text": "session configuration is known. These \"resources\" can include things like extra ICE components, TURN candidates, or video decoders. 
Provisional answers, on the other hand, do no such deallocation; as a result, multiple dissimilar provisional answers, with their own codec choices, transport parameters, etc., can be received and applied during call setup. Note that the final answer itself may be different than any received provisional answers. In RFC3264, the constraint at the signaling level is that only one offer can be outstanding for a given session, but at the media stack"} {"id": "q-en-jsep-92015ad4733dda27b12a96ec9ad1f9a3749ce67d805041457738547577bb8c04", "old_text": "JSEP implementations must comply with the specifications listed below that govern the creation and processing of offers and answers. The first set of specifications is the \"mandatory-to-implement\" set. All implementations must support these behaviors, but may not use all of them if the remote side, which may not be a JSEP endpoint, does not support them. The second set of specifications is the \"mandatory-to-use\" set. The local JSEP endpoint and any remote endpoint must indicate support for these specifications in their session descriptions. 5.1.1. Implementations of JSEP MUST conform to I-D.ietf-rtcweb-rtp-usage. This list of mandatory-to-implement specifications is derived from the requirements outlined in that document and from I-D.ietf-rtcweb- security-arch. RFC4566 is the base SDP specification and MUST be implemented. RFC5764 MUST be supported for signaling the UDP/TLS/RTP/SAVPF RFC5764, TCP/DTLS/RTP/SAVPF RFC7850, \"UDP/DTLS/SCTP\" I-D.ietf- mmusic-sctp-sdp, and \"TCP/DTLS/SCTP\" I-D.ietf-mmusic-sctp-sdp RTP profiles. RFC5245 MUST be implemented for signaling the ICE credentials and candidate lines corresponding to each media stream. The ICE implementation MUST be a Full implementation, not a Lite implementation. RFC5763 MUST be implemented to signal DTLS certificate fingerprints. RFC5888 MUST be implemented for signaling grouping information, and MUST be used to identify m= lines via the a=mid attribute. I-D.ietf-mmusic-msid MUST be supported, in order to signal associations between RTP objects and W3C MediaStreams and MediaStreamTracks in a standard way. The bundle mechanism in I-D.ietf-mmusic-sdp-bundle-negotiation MUST be supported to signal the ability to multiplex RTP streams on a single UDP port, in order to avoid excessive use of port number resources. The SDP attributes of \"sendonly\", \"recvonly\", \"inactive\", and \"sendrecv\" from RFC4566 MUST be implemented to signal information about media direction. RFC5576 MUST be implemented to signal RTP SSRC values and grouping semantics. RFC4585 MUST be implemented to signal RTCP based feedback. RFC5761 MUST be implemented to signal multiplexing of RTP and RTCP. RFC5506 MUST be implemented to signal reduced-size RTCP messages. RFC4588 MUST be implemented to signal RTX payload type associations. RFC3556 MUST be supported for control of RTCP bandwidth limits. The SDES SRTP keying mechanism from RFC4568 MUST NOT be implemented, as discussed in I-D.ietf-rtcweb-security-arch. As required by RFC4566, Section 5.13, JSEP implementations MUST ignore unknown attribute (a=) lines. 5.1.2. All session descriptions handled by JSEP implementations, both local and remote, MUST indicate support for the following specifications. 
If any of these are absent, this omission MUST be treated as an", "comments": "All the xref I removed are used elsewhere in spec other than the one which I added to where it is used.\nFixed as Justin suggested and ready for re review\nNAME - have a look - I think I got all the changes\nI tweaked the discussion of unknown a=; now LGTM", "new_text": "JSEP implementations must comply with the specifications listed below that govern the creation and processing of offers and answers. 5.1.1. All session descriptions handled by JSEP implementations, both local and remote, MUST indicate support for the following specifications. If any of these are absent, this omission MUST be treated as an"} {"id": "q-en-jsep-92015ad4733dda27b12a96ec9ad1f9a3749ce67d805041457738547577bb8c04", "old_text": "DTLS RFC6347 or DTLS-SRTP RFC5763, MUST be used, as appropriate for the media type, as specified in 5.1.3. For media m= sections, JSEP implementations MUST support both the \"UDP/TLS/ RTP/SAVPF\" and \"TCP/DTLS/RTP/SAVPF\" profiles and MUST indicate one of these two profiles for each media m= line they produce in an offer. For data m= sections, implementations MUST support both the \"UDP/DTLS/SCTP\" and \"TCP/DTLS/SCTP\" profiles and MUST indicate one of these two profiles for each data m= line they produce in an offer. Because ICE can select either TCP or UDP transport depending on network conditions, both advertisements are consistent with ICE eventually selecting either either UDP or TCP. Unfortunately, in an attempt at compatibility, some endpoints", "comments": "All the xref I removed are used elsewhere in spec other than the one which I added to where it is used.\nFixed as Justin suggested and ready for re review\nNAME - have a look - I think I got all the changes\nI tweaked the discussion of unknown a=; now LGTM", "new_text": "DTLS RFC6347 or DTLS-SRTP RFC5763, MUST be used, as appropriate for the media type, as specified in The SDES SRTP keying mechanism from RFC4568 MUST NOT be used, as discussed in I-D.ietf-rtcweb-security-arch. 5.1.2. For media m= sections, JSEP implementations MUST support the \"UDP/TLS/RTP/SAVPF\" profile specified in RFC7850, and MUST indicate this profile for each media m= line they produce in an offer. For data m= sections, implementations MUST support the \"UDP/DTLS/SCTP\" profile and MUST indicate this profile for each data m= line they produce in an offer. Because ICE can select either TCP or UDP transport depending on network conditions, this advertisement is consistent with ICE eventually selecting either either UDP or TCP. Unfortunately, in an attempt at compatibility, some endpoints"} {"id": "q-en-jsep-92015ad4733dda27b12a96ec9ad1f9a3749ce67d805041457738547577bb8c04", "old_text": "Any \"a=extmap\" lines are parsed as specified in RFC5285, Section 5, and their values are stored. Once all the session-level lines have been parsed, processing continues with the lines in m= sections.", "comments": "All the xref I removed are used elsewhere in spec other than the one which I added to where it is used.\nFixed as Justin suggested and ready for re review\nNAME - have a look - I think I got all the changes\nI tweaked the discussion of unknown a=; now LGTM", "new_text": "Any \"a=extmap\" lines are parsed as specified in RFC5285, Section 5, and their values are stored. As required by RFC4566, Section 5.13, unknown attribute lines MUST be ignored. 
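As a non-normative illustration, a parser might honor this rule roughly as follows; the set of recognized attribute names below is only a small sample chosen for the example, not a list drawn from this specification.

// Sketch of tolerant session-level attribute parsing: known attributes are
// recorded, unknown "a=" lines are skipped rather than treated as errors.
function parseSessionAttributes(lines: string[]): Map<string, string[]> {
  const known = new Set(["group", "ice-ufrag", "ice-pwd", "fingerprint", "extmap", "identity", "ice-options"]);
  const attrs = new Map<string, string[]>();
  for (const line of lines) {
    if (!line.startsWith("a=")) continue;
    const body = line.slice(2);
    const colon = body.indexOf(":");
    const name = colon === -1 ? body : body.slice(0, colon);
    const value = colon === -1 ? "" : body.slice(colon + 1);
    if (!known.has(name)) continue; // unknown attribute lines are ignored
    const list = attrs.get(name) ?? [];
    list.push(value);
    attrs.set(name, list);
  }
  return attrs;
}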
Once all the session-level lines have been parsed, processing continues with the lines in m= sections."} {"id": "q-en-jsep-92015ad4733dda27b12a96ec9ad1f9a3749ce67d805041457738547577bb8c04", "old_text": "as specified in I-D.ietf-mmusic-sctp-sdp, Section 6, and the value stored. Otherwise, use the specified default. 5.7.3. Assuming parsing completes successfully, the parsed description is", "comments": "All the xref I removed are used elsewhere in spec other than the one which I added to where it is used.\nFixed as Justin suggested and ready for re review\nNAME - have a look - I think I got all the changes\nI tweaked the discussion of unknown a=; now LGTM", "new_text": "as specified in I-D.ietf-mmusic-sctp-sdp, Section 6, and the value stored. Otherwise, use the specified default. As required by RFC4566, Section 5.13, unknown attribute lines MUST be ignored. 5.7.3. Assuming parsing completes successfully, the parsed description is"} {"id": "q-en-jsep-33db5a40ceed032528a8435bffafcf5d787436d25a9783a0ebd5c3b159694393", "old_text": "section is bundled into another m= section, it still MUST NOT contain any ICE credentials. If the m= section is not bundled into another m= section, an \"a=rtcp\" attribute line MUST be added with of the default RTCP candidate, as indicated in RFC5761, Section 5.1.3. If the m= section is not bundled into another m= section, for each candidate that has been gathered during the most recent gathering", "comments": "o If the m= section is not bundled into another m= section, an \"a=rtcp\" attribute line MUST be added with of the default RTCP candidate, as indicated in [RFC5761], Section 5.1.3.", "new_text": "section is bundled into another m= section, it still MUST NOT contain any ICE credentials. If the m= section is not bundled into another m= section, its \"a=rtcp\" attribute line MUST be filled in with the port and address of the default RTCP candidate, as indicated in RFC5761, Section 5.1.3. If no RTCP candidates have yet been gathered, dummy values MUST be used, as described in the initial offer section above. If the m= section is not bundled into another m= section, for each candidate that has been gathered during the most recent gathering"} {"id": "q-en-jsep-4e55b851c8a3fde23430c947313d3b1e2db81a242ff0732ce0c033fc41c6b57b", "old_text": "If the type is not correct for the current state, processing MUST stop and an error MUST be returned. Next, the SessionDescription is parsed into a data structure, as described in the sec.parsing-a-desc section below. If parsing fails for any reason, processing MUST stop and an error MUST be", "comments": "In Section 5.4, JSEP says: \" The SDP returned from createOffer or createAnswer MUST NOT be changed before passing it to setLocalDescription.\" However in Section 5.8, it says: \"First, the parsed parameters are checked to ensure that they are identical to those generated in the last call to createOffer/ createAnswer, and thus have not been altered, as discussed in Section 5.4; otherwise, processing MUST stop and an error MUST be returned.\" [BA] So is any change to URL unacceptable (e.g. must be byte for byte identical)? Or is the text trying to say that the parsed representation must be the same (e.g. \"a=\" lines appearing in a different order shouldn't matter)?\nI believe the expectation was that the description is entirely immutable. Verifying isomorphism in the face of minor modifications seems like something that is not entirely trivial, and TBH I don't quite understand why it would be needed.\nThanks. 
PR URL assumes that URL is immutable.\nI will update 5.8 accordingly.", "new_text": "If the type is not correct for the current state, processing MUST stop and an error MUST be returned. The SessionDescription is then checked to ensure that its contents are identical to those generated in the last call to createOffer/ createAnswer, and thus have not been altered, as discussed in sec.modifying-sdp; otherwise, processing MUST stop and an error MUST be returned. Next, the SessionDescription is parsed into a data structure, as described in the sec.parsing-a-desc section below. If parsing fails for any reason, processing MUST stop and an error MUST be"} {"id": "q-en-jsep-4e55b851c8a3fde23430c947313d3b1e2db81a242ff0732ce0c033fc41c6b57b", "old_text": "a local description. If an error is returned, the session MUST be restored to the state it was in before performing these steps. First, the parsed parameters are checked to ensure that they are identical to those generated in the last call to createOffer/ createAnswer, and thus have not been altered, as discussed in sec.modifying-sdp; otherwise, processing MUST stop and an error MUST be returned. Next, m= sections are processed. For each m= section, the following steps MUST be performed; if any parameters are out of bounds, or cannot be applied, processing MUST stop and an error MUST be", "comments": "In Section 5.4, JSEP says: \" The SDP returned from createOffer or createAnswer MUST NOT be changed before passing it to setLocalDescription.\" However in Section 5.8, it says: \"First, the parsed parameters are checked to ensure that they are identical to those generated in the last call to createOffer/ createAnswer, and thus have not been altered, as discussed in Section 5.4; otherwise, processing MUST stop and an error MUST be returned.\" [BA] So is any change to URL unacceptable (e.g. must be byte for byte identical)? Or is the text trying to say that the parsed representation must be the same (e.g. \"a=\" lines appearing in a different order shouldn't matter)?\nI believe the expectation was that the description is entirely immutable. Verifying isomorphism in the face of minor modifications seems like something that is not entirely trivial, and TBH I don't quite understand why it would be needed.\nThanks. PR URL assumes that URL is immutable.\nI will update 5.8 accordingly.", "new_text": "a local description. If an error is returned, the session MUST be restored to the state it was in before performing these steps. Next, m= sections are processed. For each m= section, the following steps MUST be performed; if any parameters are out of bounds, or cannot be applied, processing MUST stop and an error MUST be"} {"id": "q-en-jsep-f3811b381bfeb4282a1c39175bde0b541400dd18eaee02a0d3720393686bb0aa", "old_text": "An \"a=ice-options\" line with the \"trickle\" option MUST be added, as specified in I-D.ietf-ice-trickle, Section 4. The next step is to generate m= sections, as specified in RFC4566 Section 5.14. An m= section is generated for each RtpTransceiver that has been added to the PeerConnection, excluding any stopped", "comments": "I fixed the grammar and added some text about \"subsequent verification\"\nAlso see URL\nWhat's the issue?\nSomeone thought we should talk about how to parse SDP with more than one identity line and handle that. 
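For what it's worth, a minimal sketch of the pattern that check assumes (plain W3C API calls, nothing JSEP-specific) is just:

// Sketch: the blob returned by createOffer()/createAnswer() is applied as-is.
async function applyLocalOffer(pc: RTCPeerConnection) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer); // no modification between the two calls
}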
I did not look, we may already have whatever we need in the draft but I just opened this so we would at least go check.\nJSEP practice is just to put in placeholders and link to the defining draft, so we should punt the actual semantics in security-arch.\nSee email to rtcweb from Stefan H\u00e5kansson subject \"Identity in the SDP\"\nSayeth Stefan: >[2] URL", "new_text": "An \"a=ice-options\" line with the \"trickle\" option MUST be added, as specified in I-D.ietf-ice-trickle, Section 4. If WebRTC identity is being used, an \"a=identity\" line as described in I-D.ietf-rtcweb-security-arch, Section 5. The next step is to generate m= sections, as specified in RFC4566 Section 5.14. An m= section is generated for each RtpTransceiver that has been added to the PeerConnection, excluding any stopped"} {"id": "q-en-jsep-f3811b381bfeb4282a1c39175bde0b541400dd18eaee02a0d3720393686bb0aa", "old_text": "If present, a single \"a=dtls-id\" line is parsed as specified in I- D.ietf-mmusic-dtls-sdp Section 5, and the dtls-id value is stored. Any \"a=extmap\" lines are parsed as specified in RFC5285, Section 5, and their values are stored.", "comments": "I fixed the grammar and added some text about \"subsequent verification\"\nAlso see URL\nWhat's the issue?\nSomeone thought we should talk about how to parse SDP with more than one identity line and handle that. I did not look, we may already have whatever we need in the draft but I just opened this so we would at least go check.\nJSEP practice is just to put in placeholders and link to the defining draft, so we should punt the actual semantics in security-arch.\nSee email to rtcweb from Stefan H\u00e5kansson subject \"Identity in the SDP\"\nSayeth Stefan: >[2] URL", "new_text": "If present, a single \"a=dtls-id\" line is parsed as specified in I- D.ietf-mmusic-dtls-sdp Section 5, and the dtls-id value is stored. Any \"a=identity\" lines are parsed and the identity values stored for subsequent verification, as specified I-D.ietf-rtcweb- security-arch, Section 5. Any \"a=extmap\" lines are parsed as specified in RFC5285, Section 5, and their values are stored."} {"id": "q-en-jsep-e1a5fe9120465a6cdf890672d1beae0ef7e211ce46ec768ea1df4632c4aefbbb", "old_text": "appropriate). Note that when considering a MediaStreamTrack that is producing rotated video, the unrotated resolution MUST be used. This is required regardless of whether the receiver supports performing receive-side rotation (e.g., through CVO), as it significantly simplifies the matching logic. For the purposes of resolution negotiation, only size limits are considered. Any other values, e.g. picture or sample aspect ratio,", "comments": "Note that the use of CVO in webrtc is described in URL\nNAME feel free to rewrite however and merge. I just wanted to get the xref in.\nWe mention it in the imageattr section. This is required regardless of whether the receiver supports performing receive-side rotation (e.g., through CVO), as it significantly simplifies the matching logic.\nNAME , can you look into this? Not WGLC2 blocking.\nwill do - I seem to recall it is a 3GPP ref but will try and track down.", "new_text": "appropriate). Note that when considering a MediaStreamTrack that is producing rotated video, the unrotated resolution MUST be used. This is required regardless of whether the receiver supports performing receive-side rotation (e.g., through CVO TS26.114), as it significantly simplifies the matching logic. For the purposes of resolution negotiation, only size limits are considered. 
Any other values, e.g. picture or sample aspect ratio,"} {"id": "q-en-jsep-c6e87f83777ad41233596baa95d71a982f04c01bcfcc9b50ac318b2243c35605", "old_text": "mechanism (e.g., WebSockets); upon receipt of that offer, the remote party installs it using the setRemoteDescription() API. When the call is accepted, the callee uses the createAnswer() API to generate an appropriate answer, applies it using setLocalDescription(), and sends the answer back to the initiator over the signaling channel. When the offerer gets that answer, it installs it using setRemoteDescription(), and initial setup is complete. This process can be repeated for additional offer/answer exchanges. Regarding ICE RFC5245, JSEP decouples the ICE state machine from the overall signaling state machine, as the ICE state machine must remain", "comments": "I was reading this draft and noticed a couple of minor wording issues that made the text not as clear as it could be. I don't believe the changes are controversial, or change the intention of the draft.\nThese changes look good - I agree the rewrite of the offer/answer paragraph is a nice improvement.\nLooks fine to me", "new_text": "mechanism (e.g., WebSockets); upon receipt of that offer, the remote party installs it using the setRemoteDescription() API. To complete the offer/answer exchange, the remote party uses the createAnswer() API to generate an appropriate answer, applies it using the setLocalDescription() API, and sends the answer back to the initiator over the signaling channel. When the initiator gets that answer, it installs it using the setRemoteDescription() API, and initial setup is complete. This process can be repeated for additional offer/answer exchanges. Regarding ICE RFC5245, JSEP decouples the ICE state machine from the overall signaling state machine, as the ICE state machine must remain"} {"id": "q-en-jsep-c6e87f83777ad41233596baa95d71a982f04c01bcfcc9b50ac318b2243c35605", "old_text": "The basic operations that the applications can have the media engine do are: Start exchanging media to a given remote peer, but keep all the resources reserved in the offer. Start exchanging media with a given remote peer, and free any", "comments": "I was reading this draft and noticed a couple of minor wording issues that made the text not as clear as it could be. I don't believe the changes are controversial, or change the intention of the draft.\nThese changes look good - I agree the rewrite of the offer/answer paragraph is a nice improvement.\nLooks fine to me", "new_text": "The basic operations that the applications can have the media engine do are: Start exchanging media with a given remote peer, but keep all the resources reserved in the offer. Start exchanging media with a given remote peer, and free any"} {"id": "q-en-jsep-c6e87f83777ad41233596baa95d71a982f04c01bcfcc9b50ac318b2243c35605", "old_text": "4.1.4. Session description objects (RTCSessionDescription) may be of type \"offer\", \"pranswer\", and \"answer\". These types provide information as to how the description parameter should be parsed, and how the media state should be changed. \"offer\" indicates that a description should be parsed as an offer; said description may include many possible media configurations. A", "comments": "I was reading this draft and noticed a couple of minor wording issues that made the text not as clear as it could be. 
I don't believe the changes are controversial, or change the intention of the draft.\nThese changes look good - I agree the rewrite of the offer/answer paragraph is a nice improvement.\nLooks fine to me", "new_text": "4.1.4. Session description objects (RTCSessionDescription) may be of type \"offer\", \"pranswer\", or \"answer\". These types provide information as to how the description parameter should be parsed, and how the media state should be changed. \"offer\" indicates that a description should be parsed as an offer; said description may include many possible media configurations. A"} {"id": "q-en-jsep-c6e87f83777ad41233596baa95d71a982f04c01bcfcc9b50ac318b2243c35605", "old_text": "This API changes the local media state; among other things, it sets up local resources for sending and encoding media. If setRemoteDescription was previously called with an offer, and setLocalDescription is called with an answer (provisional or final), and the media directions are compatible, and media are available to send, this will result in the starting of media transmission.", "comments": "I was reading this draft and noticed a couple of minor wording issues that made the text not as clear as it could be. I don't believe the changes are controversial, or change the intention of the draft.\nThese changes look good - I agree the rewrite of the offer/answer paragraph is a nice improvement.\nLooks fine to me", "new_text": "This API changes the local media state; among other things, it sets up local resources for sending and encoding media. If setLocalDescription was previously called with an offer, and setRemoteDescription is called with an answer (provisional or final), and the media directions are compatible, and media are available to send, this will result in the starting of media transmission."} {"id": "q-en-jsep-a9442582e171a8e22c1b333a86acda080a77637c547a515d5b87922c2c9800bc", "old_text": "\"a=rid\" lines. Each m= section is also checked to ensure prohibited features are not used. If this is a local description, the \"ice-lite\" attribute MUST NOT be specified. If the RTP/RTCP multiplexing policy is \"require\", each m= section MUST contain an \"a=rtcp-mux\" attribute. If an \"m=\" section", "comments": "Page 62 has a bullet ending \"If this is a local description, the 'ice-lite' attribute MUST NOT be specified.\" This sentence is leftover from before SDP munging was prohibited and should be removed.", "new_text": "\"a=rid\" lines. Each m= section is also checked to ensure prohibited features are not used. If the RTP/RTCP multiplexing policy is \"require\", each m= section MUST contain an \"a=rtcp-mux\" attribute. If an \"m=\" section"} {"id": "q-en-jsep-3116c17f33e09dddd25bed6534f7e30da4c0e972417d8f1cab60fccdbb08616b", "old_text": "have to keep pace with the changes to SDP, at least until the time that this new encoding eclipsed SDP in popularity. However, to simplify Javascript processing, and provide for future flexibility, the SDP syntax is encapsulated within a SessionDescription object, which can be constructed from SDP, and be serialized out to SDP. If future specifications agree on a JSON format for session descriptions, we could easily enable this object to generate and consume that JSON. Other methods may be added to SessionDescription in the future to simplify handling of SessionDescriptions from Javascript. In the meantime, Javascript libraries can be used to perform these manipulations. 
Note that most applications should be able to treat the SessionDescriptions produced and consumed by these various API calls as opaque blobs; that is, the application will not need to read or change them.", "comments": "Section 3.3, paragraph 3 is an example of text that should have been removed with SDP munging. Please remove it.\nMunging remote SDP is still allowed, but I agree that we don't need to get into the mechanics of how to do it. Agree this para should be removed, and the last para reworked accordingly.", "new_text": "have to keep pace with the changes to SDP, at least until the time that this new encoding eclipsed SDP in popularity. However, to provide for future flexibility, the SDP syntax is encapsulated within a SessionDescription object, which can be constructed from SDP, and be serialized out to SDP. If future specifications agree on a JSON format for session descriptions, we could easily enable this object to generate and consume that JSON. As detailed below, most applications should be able to treat the SessionDescriptions produced and consumed by these various API calls as opaque blobs; that is, the application will not need to read or change them."} {"id": "q-en-jsep-a02765b5eaa713fa68908a0b8f9f1d4c9fffbc3d30a648ce1a3a738a5b80b034", "old_text": "If the m= section proto value indicates use of RTP: If there is no RtpTransceiver associated with this m= section (which will only happen when applying an offer), find one and associate it with this m= section according to the following steps: Find the RtpTransceiver that corresponds to this m= section, using the mapping between transceivers and m= section", "comments": "Fixes URL\nSee comment in\nAlright, try this version\nSection 5.8 contains the sentence: \"If there is no RtpTransceiver associated with this m= section (which will only happen when applying an offer)...\" -- it is grammatically ambiguous whether this says that the absence of an RtpTranceiver can only happen with an offer, or if the presence of an RtpTranceiver can only happen with an offer. Suggest rephrasing as: \"If applying an offer and there is no RtpTransceiver associated with this m= section, find one...\"\nNAME can you take a look at this?\nI'm not sold on this approach, since it then invites the question, what should one do if this condition happens when applying an answer? My suggestion would be to still call this out explicitly, but try to avoid the ambiguity noted above by changing the parenthetical to its own sentence at the end of the paragraph, e.g. \"Note that this situation will only occur when applying an offer.\"\nI changed it to that.\nLGTM", "new_text": "If the m= section proto value indicates use of RTP: If there is no RtpTransceiver associated with this m= section, find one and associate it with this m= section according to the following steps. Note that this situation will only occur when applying an offer. Find the RtpTransceiver that corresponds to this m= section, using the mapping between transceivers and m= section"} {"id": "q-en-jsep-b5004de790ccaeac832fea4dfbddda76bef8e3fc40f242b645d8282987b95bf0", "old_text": "use the \"a=imageattr\" SDP attribute RFC6236 to indicate what video frame sizes it is capable of receiving. A receiver may have hard limits on what its video decoder can process, or it may have some maximum set by policy. Note that certain codecs support transmission of samples with aspect ratios other than 1.0 (i.e., non-square pixels). 
JSEP", "comments": "Section 3.6.2 would ideally include a disclaimer, like we have for many other features, indicating that -- despite normative statements in this section -- the remote side might not honor imageattr because of legacy behavior, and that implementations MUST be prepared to handle such situations.\nSure. This should probably go into 3.6 or 3.6.1, since those govern sent imageattr attribs.\nLGTM", "new_text": "use the \"a=imageattr\" SDP attribute RFC6236 to indicate what video frame sizes it is capable of receiving. A receiver may have hard limits on what its video decoder can process, or it may have some maximum set by policy. By specifying these limits in an \"a=imageattr\" attribute, JSEP endpoints can attempt to ensure that the remote sender transmits video at an acceptable resolution. However, when communicating with a non-JSEP endpoint that does not understand this attribute, any signaled limits may be exceeded, and the JSEP implementation MUST handle this gracefully, e.g., by discarding the video. Note that certain codecs support transmission of samples with aspect ratios other than 1.0 (i.e., non-square pixels). JSEP"} {"id": "q-en-jsep-df2da1ba1bb7828f1b1ba75f385f753cc8961cbdcc7d0c052017e6cb5166f50d", "old_text": "hardware decoder capababilities, local policy) to determine the absolute minimum and maximum sizes it can receive. If there are no known local limits, the \"a=imageattr\" attribute SHOULD be omitted. Otherwise, an \"a=imageattr\" attribute is created with \"recv\" direction, and the resulting resolution space formed from the aforementioned intersection is used to specify its minimum and maximum x= and y= values. If the intersection is the null set, i.e., the degenerate case of no permitted resolutions, this MUST be represented by x=0 and y=0 values. The rules here express a single set of preferences, and therefore, the \"a=imageattr\" q= value is not important. It SHOULD be set to", "comments": "This seems like a normative change. I'm not saying I'm against this, but what's the rationale?\nAdam's argument in , which I found to be convincing.", "new_text": "hardware decoder capababilities, local policy) to determine the absolute minimum and maximum sizes it can receive. If there are no known local limits, the \"a=imageattr\" attribute SHOULD be omitted. If these local limits preclude receiving any video, i.e., the degenerate case of no permitted resolutions, the \"a=imageattr\" attribute MUST be omitted, and the m= section MUST be marked as sendonly/inactive, as appropriate. Otherwise, an \"a=imageattr\" attribute is created with \"recv\" direction, and the resulting resolution space formed from the aforementioned intersection is used to specify its minimum and maximum x= and y= values. The rules here express a single set of preferences, and therefore, the \"a=imageattr\" q= value is not important. It SHOULD be set to"} {"id": "q-en-jsep-df2da1ba1bb7828f1b1ba75f385f753cc8961cbdcc7d0c052017e6cb5166f50d", "old_text": "by allowing receipt of arbitrarily small resolutions, perhaps via fallback to a software decoder. In the special case of a maximum resolution of [0, 0], as described above, the sender MUST NOT transmit the encoding. 3.7. JSEP supports simulcast transmission of a MediaStreamTrack, where", "comments": "This seems like a normative change. 
I'm not saying I'm against this, but what's the rationale?\nAdam's argument in , which I found to be convincing.", "new_text": "by allowing receipt of arbitrarily small resolutions, perhaps via fallback to a software decoder. 3.7. JSEP supports simulcast transmission of a MediaStreamTrack, where"} {"id": "q-en-jsep-f8816161fd1cd72d933a837aae924494ce9d61f5c9874670e3bb6c100f658f78", "old_text": "guarantee that applications do so. The JSEP implementation MUST be prepared for the JS to pass in bogus data instead. Conversely, the application programmer MUST recognize that the JS does not have complete control of endpoint behavior. One case that bears particular mention is that editing ICE candidates out of the SDP or suppressing trickled candidates does not have the expected", "comments": "In the last paragraph of section 8 at \"programmer MUST recognize\". That is not an appropriate use of 2119. I suggest \"needs to be aware\" instead of \"MUST recognize\".\nThis seems right", "new_text": "guarantee that applications do so. The JSEP implementation MUST be prepared for the JS to pass in bogus data instead. Conversely, the application programmer needs to be aware that the JS does not have complete control of endpoint behavior. One case that bears particular mention is that editing ICE candidates out of the SDP or suppressing trickled candidates does not have the expected"} {"id": "q-en-jsep-1d8d024ad56abfd57bb8858febcdf2250cada11198a0bfeae3b8ddeb47a6f5d7", "old_text": "A second approach that was considered but not chosen was to decouple the management of the media control objects from session descriptions, instead offering APIs that would control each component directly. This was rejected based on a feeling that requiring exposure of this level of complexity to the application programmer would not be beneficial; it would result in an API where even a simple example would require a significant amount of code to", "comments": "Section 1.2 second paragraph - \"This was rejected based on a feeling\". Can you write something that speaks to consensus here? \"Based on a feeling\" isn't quite right. Maybe \"using the argument\"?\nSure.\nLGTM", "new_text": "A second approach that was considered but not chosen was to decouple the management of the media control objects from session descriptions, instead offering APIs that would control each component directly. This was rejected based on the argument that requiring exposure of this level of complexity to the application programmer would not be beneficial; it would result in an API where even a simple example would require a significant amount of code to"} {"id": "q-en-jsep-a9b412252a071a64cbf3ab3226846dddc66719cc9ef43de77bf6971d6f6ca845", "old_text": "of codecs sent to a remote party indicates what the local side is willing to receive, which, when intersected with the set of codecs the remote side supports, specifies what the remote side should send. However, not all parameters follow this rule; for example, the fingerprints RFC8122 sent to a remote party are calculated based on the local certificate(s) offered; the remote party MUST either accept these parameters or reject them altogether, with no option to choose different values. In addition, various RFCs put different conditions on the format of offers versus answers. For example, an offer may propose an", "comments": "Second paragraph of 3.2, last sentence: at \"these parameters or reject them\" - \"these\" and \"them\" have ambiguous antecedents. 
It's not clear if you are trying to point to the parameters that don't follow the earlier stated rule, or all parameters. The sentence is complex. Splitting it up and finding a way to avoid the need for the semicolon will likely make it easier to say what you mean.\nI agree that this is messy. Editorial.\nLGTMLG with comment", "new_text": "of codecs sent to a remote party indicates what the local side is willing to receive, which, when intersected with the set of codecs the remote side supports, specifies what the remote side should send. However, not all parameters follow this rule; some parameters are declarative and the remote side MUST either accept them or reject them altogether. An example of such a parameter is the DTLS fingerprints RFC8122, which are calculated based on the local certificate(s) offered, and are not subject to negotiation. In addition, various RFCs put different conditions on the format of offers versus answers. For example, an offer may propose an"} {"id": "q-en-jsep-4a1f3b51124a6e40a0fc7724ebe05a6839812372aba33da781c5e9b694f1eb62", "old_text": "security considerations for this document. While formally the JSEP interface is an API, it is better to think of it is an Internet protocol, with the JS being untrustworthy from the perspective of the endpoint. Thus, the threat model of RFC3552 applies. In particular, JS can call the API in any order and with any inputs, including malicious ones. This is particularly relevant when we consider the SDP which is passed to setLocalDescription(). While correct API usage requires that the application pass in SDP which was derived from createOffer() or createAnswer(), there is no guarantee that applications do so. The JSEP implementation MUST be prepared for the JS to pass in bogus data instead. Conversely, the application programmer needs to be aware that the JS does not have complete control of endpoint behavior. One case that bears particular mention is that editing ICE candidates out of the SDP or suppressing trickled candidates does not have the expected behavior: implementations will still perform checks from those candidates even if they are not sent to the other side. Thus, for instance, it is not possible to prevent the remote peer from learning your public IP address by removing server reflexive candidates. Applications which wish to conceal their public IP address should instead configure the ICE agent to use only relay candidates. 9.", "comments": "From \"Opsdir last call review of draft-ietf-rtcweb-jsep-21\" A minor observation, not even a nit, is that the text in the Security Considerations \"While formally the JSEP interface is an API, it is better to think of it is an Internet protocol, with the JS being untrustworthy from the perspective of the endpoint. \" could be more clear.\nI have no idea what to do with this. Obviously I thought it was clear when wrote it, and still do.\nThis is the only place in the document where we use \"JS\". I think it would be good to spell out \"JavaScript\", and potentially use \"application JavaScript\" in the first usage. Finally, I think we should use \"implementation\" rather than \"endpoint\" in this para.\nLGTM", "new_text": "security considerations for this document. While formally the JSEP interface is an API, it is better to think of it is an Internet protocol, with the application JavaScript being untrustworthy from the perspective of the JSEP implementation. Thus, the threat model of RFC3552 applies. 
In particular, JavaScript can call the API in any order and with any inputs, including malicious ones. This is particularly relevant when we consider the SDP which is passed to setLocalDescription(). While correct API usage requires that the application pass in SDP which was derived from createOffer() or createAnswer(), there is no guarantee that applications do so. The JSEP implementation MUST be prepared for the JavaScript to pass in bogus data instead. Conversely, the application programmer needs to be aware that the JavaScript does not have complete control of endpoint behavior. One case that bears particular mention is that editing ICE candidates out of the SDP or suppressing trickled candidates does not have the expected behavior: implementations will still perform checks from those candidates even if they are not sent to the other side. Thus, for instance, it is not possible to prevent the remote peer from learning your public IP address by removing server reflexive candidates. Applications which wish to conceal their public IP address should instead configure the ICE agent to use only relay candidates. 9."} {"id": "q-en-jsep-aec5e1277a7bbf9a1406d8ef094add136c93b286849aefae99997a4d1cbc1dc9", "old_text": "header extension defined in I-D.ietf-mmusic-sdp-bundle- negotiation, Section 11. An \"a=msid\" line, as specified in I-D.ietf-mmusic-msid, Section 2. An \"a=sendrecv\" line, as specified in RFC3264, Section 5.1.", "comments": "lgtm\nI think the dummy candidates should be v4 not v6 as many things just don't do v6 but other than that looks reasonable. You don't really need to specify the address at all the the lines when you don't know the port can just look like a=rtcp:9 With no IP address or family\nNAME The IP6 vs IP4 is described in URL, so this isn't a JSEP thing. Given that this is part of the trickle spec, and only trickle endpoints will get these candidates, I don't see any problem. As to whether a=rtcp can just have the port, consider the case where RTP candidates have been gathered, but no RTCP candidates have yet been gathered. In that case, we certainly need to have the full a=rtcp:9 IP6 ::, so I think it makes sense to include it in all cases where RTCP candidates have not yet been gathered.\nI don't care much about this so fine if not change is made. I agree it is a trickle-ice draft issue. However, when I read that, it seems that it would be fine with \"a=rtcp:9\" with no address. It seems the reason we used 9 was to provide something that stuff that did not implement trickle ICE would not barf on. I have no tested but an address of \"IP6 ::\" seems like something that has high odds of stuff barfing on. Anyways, no big deal with me one way or the other. On Sep 12, 2014, at 7:49 PM, Justin Uberti EMAIL wrote:\nOnly trickle endpoints will even receive an offer with these dummy addresses and ports, so compatibility with old non-trickle stuff is not an issue. LMK if you have any further concerns, otherwise I will merge later today.\nI concur that we should merge this. , Justin Uberti EMAIL wrote:\nOn Sep 18, 2014, at 9:34 AM, Justin Uberti EMAIL wrote: ah - my bad - makes sense Merge it !", "new_text": "header extension defined in I-D.ietf-mmusic-sdp-bundle- negotiation, Section 11. An \"a=rtcp\" line, as specified in RFC3605, Section 2.1, containing the dummy value \"9 IN IP6 ::\", because no candidates have yet been gathered. An \"a=msid\" line, as specified in I-D.ietf-mmusic-msid, Section 2. 
An \"a=sendrecv\" line, as specified in RFC3264, Section 5.1."} {"id": "q-en-jsep-aec5e1277a7bbf9a1406d8ef094add136c93b286849aefae99997a4d1cbc1dc9", "old_text": "Each \"m=\" and c=\" line MUST be filled in with the port and address of the default candidate for the m= section, as described in RFC5245, Section 4.3. If no candidates have yet been gathered, the dummy values MUST be used, as described above. [TODO: update profile UDP/TCP per default candidate] Each \"a=mid\" line MUST stay the same.", "comments": "lgtm\nI think the dummy candidates should be v4 not v6 as many things just don't do v6 but other than that looks reasonable. You don't really need to specify the address at all the the lines when you don't know the port can just look like a=rtcp:9 With no IP address or family\nNAME The IP6 vs IP4 is described in URL, so this isn't a JSEP thing. Given that this is part of the trickle spec, and only trickle endpoints will get these candidates, I don't see any problem. As to whether a=rtcp can just have the port, consider the case where RTP candidates have been gathered, but no RTCP candidates have yet been gathered. In that case, we certainly need to have the full a=rtcp:9 IP6 ::, so I think it makes sense to include it in all cases where RTCP candidates have not yet been gathered.\nI don't care much about this so fine if not change is made. I agree it is a trickle-ice draft issue. However, when I read that, it seems that it would be fine with \"a=rtcp:9\" with no address. It seems the reason we used 9 was to provide something that stuff that did not implement trickle ICE would not barf on. I have no tested but an address of \"IP6 ::\" seems like something that has high odds of stuff barfing on. Anyways, no big deal with me one way or the other. On Sep 12, 2014, at 7:49 PM, Justin Uberti EMAIL wrote:\nOnly trickle endpoints will even receive an offer with these dummy addresses and ports, so compatibility with old non-trickle stuff is not an issue. LMK if you have any further concerns, otherwise I will merge later today.\nI concur that we should merge this. , Justin Uberti EMAIL wrote:\nOn Sep 18, 2014, at 9:34 AM, Justin Uberti EMAIL wrote: ah - my bad - makes sense Merge it !", "new_text": "Each \"m=\" and c=\" line MUST be filled in with the port and address of the default candidate for the m= section, as described in RFC5245, Section 4.3. Each \"a=rtcp\" attribute line MUST also be filled in with the port and address of the appropriate default candidate, either the default RTP or RTCP candidate, depending on whether RTCP multiplexing is currently active or not. Note that if RTCP multiplexing is being offered, but not yet active, the default RTCP candidate MUST be used, as indicated in RFC5761, section 5.1.3. In each case, if no candidates of the desired type have yet been gathered, dummy values MUST be used, as described above. [TODO: update profile UDP/TCP per default candidate] Each \"a=mid\" line MUST stay the same."} {"id": "q-en-jsep-aec5e1277a7bbf9a1406d8ef094add136c93b286849aefae99997a4d1cbc1dc9", "old_text": "Section 9.1. The \"mid\" value MUST match that specified in the offer. If a local MediaStreamTrack has been associated, an \"a=msid\" line, as specified in I-D.ietf-mmusic-msid, Section 2.", "comments": "lgtm\nI think the dummy candidates should be v4 not v6 as many things just don't do v6 but other than that looks reasonable. 
You don't really need to specify the address at all the the lines when you don't know the port can just look like a=rtcp:9 With no IP address or family\nNAME The IP6 vs IP4 is described in URL, so this isn't a JSEP thing. Given that this is part of the trickle spec, and only trickle endpoints will get these candidates, I don't see any problem. As to whether a=rtcp can just have the port, consider the case where RTP candidates have been gathered, but no RTCP candidates have yet been gathered. In that case, we certainly need to have the full a=rtcp:9 IP6 ::, so I think it makes sense to include it in all cases where RTCP candidates have not yet been gathered.\nI don't care much about this so fine if not change is made. I agree it is a trickle-ice draft issue. However, when I read that, it seems that it would be fine with \"a=rtcp:9\" with no address. It seems the reason we used 9 was to provide something that stuff that did not implement trickle ICE would not barf on. I have no tested but an address of \"IP6 ::\" seems like something that has high odds of stuff barfing on. Anyways, no big deal with me one way or the other. On Sep 12, 2014, at 7:49 PM, Justin Uberti EMAIL wrote:\nOnly trickle endpoints will even receive an offer with these dummy addresses and ports, so compatibility with old non-trickle stuff is not an issue. LMK if you have any further concerns, otherwise I will merge later today.\nI concur that we should merge this. , Justin Uberti EMAIL wrote:\nOn Sep 18, 2014, at 9:34 AM, Justin Uberti EMAIL wrote: ah - my bad - makes sense Merge it !", "new_text": "Section 9.1. The \"mid\" value MUST match that specified in the offer. An \"a=rtcp\" line, as specified in RFC3605, Section 2.1, containing the dummy value \"9 IN IP6 ::\", because no candidates have yet been gathered. If a local MediaStreamTrack has been associated, an \"a=msid\" line, as specified in I-D.ietf-mmusic-msid, Section 2."} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "JSEP's handling of session descriptions is simple and straightforward. Whenever an offer/answer exchange is needed, the initiating side creates an offer by calling a createOffer() API. The application then uses that offer to set up its local config via the setLocalDescription() API. The offer is finally sent off to the remote side over its preferred signaling mechanism (e.g., WebSockets); upon receipt of that offer, the remote party installs it using the setRemoteDescription() API. To complete the offer/answer exchange, the remote party uses the createAnswer() API to generate an appropriate answer, applies it using the setLocalDescription() API, and sends the answer back to the initiator over the signaling channel. When the initiator gets that answer, it installs it using the setRemoteDescription() API, and initial setup is complete. This process can be repeated for additional offer/answer exchanges.", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. 
- removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "JSEP's handling of session descriptions is simple and straightforward. Whenever an offer/answer exchange is needed, the initiating side creates an offer by calling a createOffer API. The application then uses that offer to set up its local config via the setLocalDescription API. The offer is finally sent off to the remote side over its preferred signaling mechanism (e.g., WebSockets); upon receipt of that offer, the remote party installs it using the setRemoteDescription API. To complete the offer/answer exchange, the remote party uses the createAnswer API to generate an appropriate answer, applies it using the setLocalDescription API, and sends the answer back to the initiator over the signaling channel. When the initiator gets that answer, it installs it using the setRemoteDescription API, and initial setup is complete. This process can be repeated for additional offer/answer exchanges."} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "using SIP for signaling, if one offer is sent and is then canceled using a SIP CANCEL, another offer can be generated even though no answer was received for the first offer. To support this, the JSEP media layer can provide an offer via the createOffer() method whenever the JavaScript application needs one for the signaling. The answerer can send back zero or more provisional answers and then finally end the offer/answer exchange by sending a final answer. The state machine for this is as follows: Aside from these state transitions, there is no other difference between the handling of provisional (\"pranswer\") and final (\"answer\")", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. 
- removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "using SIP for signaling, if one offer is sent and is then canceled using a SIP CANCEL, another offer can be generated even though no answer was received for the first offer. To support this, the JSEP media layer can provide an offer via the createOffer method whenever the JavaScript application needs one for the signaling. The answerer can send back zero or more provisional answers and then finally end the offer/answer exchange by sending a final answer. The state machine for this is as follows: Aside from these state transitions, there is no other difference between the handling of provisional (\"pranswer\") and final (\"answer\")"} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "in RFC5888. addTrack attempts to minimize the number of transceivers as follows: if the PeerConnection is in the \"have-remote-offer\" state, the track will be attached to the first compatible transceiver that was created by the most recent call to setRemoteDescription() and does not have a local track. Otherwise, a new transceiver will be created, as described in sec.addTransceiver. 4.1.3.", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. - removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. 
I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "in RFC5888. addTrack attempts to minimize the number of transceivers as follows: if the PeerConnection is in the \"have-remote-offer\" state, the track will be attached to the first compatible transceiver that was created by the most recent call to setRemoteDescription and does not have a local track. Otherwise, a new transceiver will be created, as described in sec.addTransceiver. 4.1.3."} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "\"offer\" indicates that a description should be parsed as an offer; said description may include many possible media configurations. A description used as an \"offer\" may be applied any time the PeerConnection is in a stable state or applied as an update to a previously supplied but unanswered \"offer\". \"pranswer\" indicates that a description should be parsed as an", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. - removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "\"offer\" indicates that a description should be parsed as an offer; said description may include many possible media configurations. 
A description used as an \"offer\" may be applied any time the PeerConnection is in a \"stable\" state or applied as an update to a previously supplied but unanswered \"offer\". \"pranswer\" indicates that a description should be parsed as an"} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "voicemail). \"rollback\" is a special session description type implying that the state machine should be rolled back to the previous stable state, as described in sec.rollback. The contents MUST be empty. 4.1.8.1.", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. - removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "voicemail). \"rollback\" is a special session description type implying that the state machine should be rolled back to the previous \"stable\" state, as described in sec.rollback. The contents MUST be empty. 4.1.8.1."} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "after setRemoteDescription, decides it does not want to accept the new parameters and sends a reject message back to the offerer. Now, the offerer, and possibly the answerer as well, needs to return to a stable state and the previous local/remote description. To support this, we introduce the concept of \"rollback\", which discards any proposed changes to the session, returning the state machine to the stable state. A rollback is performed by supplying a session description of type \"rollback\" with empty contents to either setLocalDescription or setRemoteDescription.", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. 
- removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "after setRemoteDescription, decides it does not want to accept the new parameters and sends a reject message back to the offerer. Now, the offerer, and possibly the answerer as well, needs to return to a \"stable\" state and the previous local/remote description. To support this, we introduce the concept of \"rollback\", which discards any proposed changes to the session, returning the state machine to the \"stable\" state. A rollback is performed by supplying a session description of type \"rollback\" with empty contents to either setLocalDescription or setRemoteDescription."} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "5.2.3.1. If the \"IceRestart\" option is specified, with a value of \"true\", the offer MUST indicate an ICE restart by generating new ICE ufrag and pwd attributes, as specified in RFC8839. If this option is specified on an initial offer, it has no effect (since a new ICE ufrag and pwd", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. - removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) 
Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "5.2.3.1. If the IceRestart option is specified, with a value of \"true\", the offer MUST indicate an ICE restart by generating new ICE ufrag and pwd attributes, as specified in RFC8839. If this option is specified on an initial offer, it has no effect (since a new ICE ufrag and pwd"} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "A rollback may be performed if the PeerConnection is in any state except for \"stable\". This means that both offers and provisional answers can be rolled back. Rollback can only be used to cancel proposed changes; there is no support for rolling back from a stable state to a previous stable state. If a rollback is attempted in the \"stable\" state, processing MUST stop and an error MUST be returned. Note that this implies that once the answerer has performed setLocalDescription with its answer, this cannot be rolled back. The effect of rollback MUST be the same regardless of whether setLocalDescription or setRemoteDescription is called.", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. - removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "A rollback may be performed if the PeerConnection is in any state except for \"stable\". This means that both offers and provisional answers can be rolled back. Rollback can only be used to cancel proposed changes; there is no support for rolling back from a \"stable\" state to a previous \"stable\" state. If a rollback is attempted in the \"stable\" state, processing MUST stop and an error MUST be returned. 
Note that this implies that once the answerer has performed setLocalDescription with its answer, this cannot be rolled back. The effect of rollback MUST be the same regardless of whether setLocalDescription or setRemoteDescription is called."} {"id": "q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1", "old_text": "the threat model of RFC3552 applies. In particular, JavaScript can call the API in any order and with any inputs, including malicious ones. This is particularly relevant when we consider the SDP that is passed to setLocalDescription(). While correct API usage requires that the application pass in SDP that was derived from createOffer() or createAnswer(), there is no guarantee that applications do so. The JSEP implementation MUST be prepared for the JavaScript to pass in bogus data instead. Conversely, the application programmer needs to be aware that the JavaScript does not have complete control of endpoint behavior. One", "comments": "Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment. - removed quotes from IceRestart option, removed comment.\n\n\"IceRestart\" option / IceRestart option Justin: LGTM\ntls-id value / \"a=tls-id\" value Justin: The source RFC seems to prefer \"tls-id\" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's author's choice when formatting SDP attributes. Although RFC 8842 uses the format '\"tls-id\" attribute', this document more consistently uses \"a=attribute-name\" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use \"a=tls-id\"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the \"stable\" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous \"stable\" state'?) Justin: The quoted form is preferred, including the case of \"previous stable state\".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting).", "new_text": "the threat model of RFC3552 applies. In particular, JavaScript can call the API in any order and with any inputs, including malicious ones. This is particularly relevant when we consider the SDP that is passed to setLocalDescription. While correct API usage requires that the application pass in SDP that was derived from createOffer or createAnswer, there is no guarantee that applications do so. The JSEP implementation MUST be prepared for the JavaScript to pass in bogus data instead. Conversely, the application programmer needs to be aware that the JavaScript does not have complete control of endpoint behavior. One"} {"id": "q-en-link-template-9f8f3ab9b436c3cab62fdcb9037f8dc8523a2fe1b3a2cfe188e5691aa8f09cd6", "old_text": "BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. This document uses the Augmented BNF defined in HTTP to specify valid protocol elements. 
Additionally, it uses the modified \"parameter\" rule from RFC5987 and the \"URI-Template\" rule from URI-TEMPLATE. 2. The Link-Template header field provides a means for serialising one or more links into HTTP message metadata. It is semantically equivalent to the Link header field defined in WEB-LINKING, except that it uses URI Templates URI-TEMPLATE to convey the structure of links. For example:", "comments": "Emerging best practice is to use for new headers, so that parsing is well-defined. Arguments for sticking with a -like syntax: Consistence with (IMO a very poor justification) Reuse of parsers / other software (it's not well-defined, tho) ???\nWe could also talk about creating a SF version of (maybe ?).\nMatching SF and headers would be appealing.\nIn the meantime (or maybe before ;) -- URL", "new_text": "BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. This specification uses the following terms from STRUCTURED-FIELDS: List, String, Parameter. 2. The Link-Template header field is a Structured Field STRUCTURED- FIELDS that serializes one or more links into HTTP message metadata. It is semantically equivalent to the Link header field defined in WEB-LINKING, except that it uses URI Templates URI-TEMPLATE to convey the structure of links. Its value is a List of Strings. Each String is a URI Template, and Parameters on it carry associated metadata. For example:"} {"id": "q-en-link-template-9f8f3ab9b436c3cab62fdcb9037f8dc8523a2fe1b3a2cfe188e5691aa8f09cd6", "old_text": "Parameters on a templated-link have identical semantics to those of a Link header field. This includes (but is not limited to) the use of the \"rel\" parameter to convey the relation type, the \"anchor\" parameter to modify the context IRI, and so on. Likewise, the requirements for parameters on templated-links are the same as those for a Link header field; in particular, the \"rel\" parameter MUST NOT appear more than once, and if it does, the templated-link MUST be ignored by parsers. This specification defines additional semantics for the \"var-base\" parameter on templated-links; see below.", "comments": "Emerging best practice is to use for new headers, so that parsing is well-defined. Arguments for sticking with a -like syntax: Consistence with (IMO a very poor justification) Reuse of parsers / other software (it's not well-defined, tho) ???\nWe could also talk about creating a SF version of (maybe ?).\nMatching SF and headers would be appealing.\nIn the meantime (or maybe before ;) -- URL", "new_text": "Parameters on a templated-link have identical semantics to those of a Link header field. This includes (but is not limited to) the use of the \"rel\" parameter to convey the relation type, the \"anchor\" parameter to modify the context IRI, and so on. Parameter values MUST be Strings. Likewise, the requirements for parameters on templated-links are the same as those for a Link header field. This specification defines additional semantics for the \"var-base\" parameter on templated-links; see below."} {"id": "q-en-load-balancers-763e6684ca06c5282afdf0df47ce4e96cae479620332f9b8ff9ddf67b2d4e6a7", "old_text": "Load balancers SHOULD drop short header packets with unroutable DCIDs. The routing of long headers with unroutable DCIDs depends on the server ID allocation strategy, described in sid-allocation. However, the load balancer MUST NOT drop these packets, with one exception. 
Load balancers MAY drop packets with long headers and unroutable DCIDs if and only if it knows that the encoded QUIC version does not", "comments": "and .\nIn light of , NAME concerns about the framework, and NAME declining interest in implementing, I think it's time to simply eliminate this from the document.\nThanks for all the hard work, but sorry to be the cause of it.\nFixed by\nHow does a dynamic framework survive an HA handover? It would seem to lose all the SID allocations and break all connections.", "new_text": "Load balancers SHOULD drop short header packets with unroutable DCIDs. When forwarding a packet with a long header and unroutable DCID, load balancers MUST use a fallback algorithm as specified in fallback- algorithm. Load balancers MAY drop packets with long headers and unroutable DCIDs if and only if it knows that the encoded QUIC version does not"} {"id": "q-en-load-balancers-763e6684ca06c5282afdf0df47ce4e96cae479620332f9b8ff9ddf67b2d4e6a7", "old_text": "4.3. For any given configuration, the configuration agent must specify if server IDs will be statically or dynamically allocated. Load Balancer configurations with statically allocated server IDs explicitly include a mapping of server IDs to forwarding addresses. The corresponding server configurations contain one or more unique server IDs. A dynamically allocated configuration does not have a pre-defined assignment, reducing configuration complexity. However, it places limits on the maximum server ID length and requires more state at the load balancer. In certain edge cases, it can force parts of the system to fail over to 5-tuple routing for a short time. In either case, the configuration agent chooses a server ID length for each configuration that MUST be at least one octet. For Static Allocation, the maximum length depends on the algorithm. For dynamic allocation, the maximum length is 7 octets. A QUIC-LB configuration MAY significantly over-provision the server ID space (i.e., provide far more codepoints than there are servers) to increase the probability that a randomly generated Destination Connection ID is unroutable. Conceptually, each configuration has its own set of server ID allocations, though two static configurations with identical server ID lengths MAY use a common allocation between them.", "comments": "and .\nIn light of , NAME concerns about the framework, and NAME declining interest in implementing, I think it's time to simply eliminate this from the document.\nThanks for all the hard work, but sorry to be the cause of it.\nFixed by\nHow does a dynamic framework survive an HA handover? It would seem to lose all the SID allocations and break all connections.", "new_text": "4.3. Load Balancer configurations include a mapping of server IDs to forwarding addresses. The corresponding server configurations contain one or more unique server IDs. The configuration agent chooses a server ID length for each configuration that MUST be at least one octet. A QUIC-LB configuration MAY significantly over-provision the server ID space (i.e., provide far more codepoints than there are servers) to increase the probability that a randomly generated Destination Connection ID is unroutable. The configuration agent SHOULD provide a means for servers to express the number of server IDs it can usefully employ, because a single routing address actually corresponds to multiple server entities (see lb-chains). 
Conceptually, each configuration has its own set of server ID allocations, though two static configurations with identical server ID lengths MAY use a common allocation between them."} {"id": "q-en-load-balancers-763e6684ca06c5282afdf0df47ce4e96cae479620332f9b8ff9ddf67b2d4e6a7", "old_text": "A server encodes one of its assigned server IDs in any CID it generates using the relevant configuration. 4.3.1. In the static allocation method, the configuration agent assigns at least one server ID to each server. When forwarding a packet with a long header and unroutable DCID, load balancers MUST forward packets with long headers and unroutable DCIDs using an fallback algorithm as specified in fallback-algorithm. 4.3.2. In the dynamic allocation method, the load balancer assigns server IDs dynamically so that configuration does not require fixed server ID assignment. This reduces linkability and simplifies configuration. However, it also limits the length of the server ID and requires the load balancer to lie on the path of outbound packets. As the server mapping is no longer part of the configuration, standby load balancers need an out-of-band mechanism to synchronize server ID allocations in the event of failures of the primary device. To summarize, the load balancer forwards incoming Initial packets arbitrarily and both load balancer and server are sometimes able to infer a potential server ID allocation from the CID in the packet. The server can signal acceptance of that allocation by using it immediately, in which case both entities add it to their permanent table. Usually, however, the server will reject the allocation by not using it, in which case it is not added to the permanent assignment list. 4.3.2.1. The configuration agent does not assign server IDs, but does configure a server ID length. The server ID MUST be at least one and no more than seven octets. See sid-limits for other considerations if also using the Plaintext CID algorithm. 4.3.2.2. The load balancer maintains a mapping of assigned server IDs to routing information for servers, initialized as empty. This mapping is independent for each operating configuration. Note that when the load balancer's tables for a configuration are empty, all incoming DCIDs corresponding to that configuration are unroutable by definition. The load balancer processes a long header packet as follows: If the config rotation bits do not match a known configuration, the load balancer routes the packet using a fallback algorithm (see fallback-algorithm). It does not extract a server ID. If there is a matching configuration, but the CID is not long enough to apply the algorithm, the load balancer pads the connection ID with zeros to the required length. Otherwise, the load balancer extracts the server ID in accordance with the configured algorithm and parameters. If the load balancer extracted a server ID already in its mapping, it routes the packet accordingly. If the server ID is not in the mapping, it routes the packet according to a fallback algorithm and awaits the first long header the server sends in response. If the load balancer extracted an unassigned server ID and observes that the first long header packet the server sends has a Source Connection ID that encodes the same server ID, it adds that server ID to the mapping. Otherwise, it takes no action. 4.3.2.3. Each server maintains a list of server IDs assigned to it, initialized empty. 
Upon receipt of a packet with a client-generated DCID, the server MUST follow these steps in order: If the config rotation bits do not correspond to a known configuration, do not attempt to extract a server ID. If the DCID is not long enough to decode using the configured algorithm, pad it with zeros to the required length and extract a server ID. If the DCID is long enough to decode, extract the server ID. If the server ID is not already in its list, the server MUST decide whether or not to immediately use it to encode a CID on the new connection. If it chooses to use it, it adds the server ID to its list. If it does not, it MUST NOT use the server ID in future CIDs. The server SHOULD NOT use more than one CID, unless it is close to exhausting the nonces for an existing assignment. Note also that the load balancer may observe a single entity claiming multiple server IDs because that entity actually represents multiple servers devices or processors. The server MUST generate a new connection ID if the client-generated CID is of insufficient length for the configuration. The server then processes the packet normally. When a server needs a new connection ID, it uses one of the server IDs in its list to populate the server ID field of that CID. It MAY vary this selection to reduce linkability within a connection. After loading a new configuration, a server may not have any available SIDs. This is because an incoming packet may not contain the config rotation bits necessary to extract a server ID in accordance with the algorithm above. When required to generate a CID under these conditions, the server MUST generate CIDs using the 5-tuple routing codepoint (see config-failover. Note that these connections will not be robust to client address changes while they use this connection ID. For this reason, a server SHOULD retire these connection IDs and replace them with routable ones once it receives a client-generated CID that allows it to acquire a server ID. As, statistically, one in every four such CIDs can provide a server ID, this is typically a short interval. 4.4. All connection IDs use the following format:", "comments": "and .\nIn light of , NAME concerns about the framework, and NAME declining interest in implementing, I think it's time to simply eliminate this from the document.\nThanks for all the hard work, but sorry to be the cause of it.\nFixed by\nHow does a dynamic framework survive an HA handover? It would seem to lose all the SID allocations and break all connections.", "new_text": "A server encodes one of its assigned server IDs in any CID it generates using the relevant configuration. 4.4. All connection IDs use the following format:"} {"id": "q-en-load-balancers-763e6684ca06c5282afdf0df47ce4e96cae479620332f9b8ff9ddf67b2d4e6a7", "old_text": "QUIC-LB requires common configuration to synchronize understanding of encodings and guarantee explicit consent of the server. The load balancer and server MUST agree on a routing algorithm, server ID allocation method, and the relevant parameters for that algorithm. All algorithm configurations can have a server ID length, nonce length, and key. However, for Plaintext CID, there is no key. If server IDs are statically allocated, the load balancer MUST receive the full table of mappings, and each server must receive its assigned SID(s), from the configuration agent. 
Note that server IDs are opaque bytes, not integers, so there is no notion of network order or host order.", "comments": "and .\nIn light of , NAME concerns about the framework, and NAME declining interest in implementing, I think it's time to simply eliminate this from the document.\nThanks for all the hard work, but sorry to be the cause of it.\nFixed by\nHow does a dynamic framework survive an HA handover? It would seem to lose all the SID allocations and break all connections.", "new_text": "QUIC-LB requires common configuration to synchronize understanding of encodings and guarantee explicit consent of the server. The load balancer and server MUST agree on a routing algorithm and the relevant parameters for that algorithm. All algorithm configurations can have a server ID length, nonce length, and key. However, for Plaintext CID, there is no key. The load balancer MUST receive the full table of mappings, and each server must receive its assigned SID(s), from the configuration agent. Note that server IDs are opaque bytes, not integers, so there is no notion of network order or host order."} {"id": "q-en-load-balancers-763e6684ca06c5282afdf0df47ce4e96cae479620332f9b8ff9ddf67b2d4e6a7", "old_text": "fewer than 2^40 tokens are generated with a single key, the risk of collisions is lower than 0.001%. 11.8. When using Dynamic SID allocation, the load balancer's SID table can be as large as 2^56 entries, which is prohibitively large. To constrain the size of this table, servers are encouraged to accept as few SIDs as possible, so that the remainder do not enter the load balancer's table. 12. There are no IANA requirements.", "comments": "and .\nIn light of , NAME concerns about the framework, and NAME declining interest in implementing, I think it's time to simply eliminate this from the document.\nThanks for all the hard work, but sorry to be the cause of it.\nFixed by\nHow does a dynamic framework survive an HA handover? It would seem to lose all the SID allocations and break all connections.", "new_text": "fewer than 2^40 tokens are generated with a single key, the risk of collisions is lower than 0.001%. 12. There are no IANA requirements."} {"id": "q-en-load-balancers-e77175e4da15472416df839aecb681c1c39a1196bb80b50d917da3f81423410d", "old_text": "The Stream Cipher CID algorithm provides cryptographic protection at the cost of additional per-packet processing at the load balancer to decrypt every incoming connection ID. The CID format is depicted below. 5.2.1.", "comments": "The CID format figure has been removed since draft-ietf-quic-load-balancers-08", "new_text": "The Stream Cipher CID algorithm provides cryptographic protection at the cost of additional per-packet processing at the load balancer to decrypt every incoming connection ID. 5.2.1."} {"id": "q-en-load-balancers-6ac76dfcfb731b51305ec8b7ddc25d2cddcf91cd91583b15e1e6d843f50ec3e2", "old_text": "octet of the Connection ID are reserved to express the length of the following connection ID, not including the first octet. A server not using this functionality SHOULD make the six bits appear to be random. 2.4.", "comments": "URL is probably not your intent, but that is what you get with this phrasing. I think that the point is that the values are not predictable to entities other than the one generating it. 
This appears in several places in the document.\nWell URL uses \"indistinguishable from random\" :-) But OK, I'll seek a new formulation.", "new_text": "octet of the Connection ID are reserved to express the length of the following connection ID, not including the first octet. A server not using this functionality SHOULD choose the six bits so as to have no observable relationship to previous connection IDs issued for that connection. 2.4."} {"id": "q-en-load-balancers-6ac76dfcfb731b51305ec8b7ddc25d2cddcf91cd91583b15e1e6d843f50ec3e2", "old_text": "respective fields. If there is no key in the configuration, the server MUST fill the Nonce field with bytes that appear to be random. If there is a key, the server fills the nonce field with a nonce of its choosing. See cid-entropy for details. The server MAY append additional bytes to the connection ID, up to the limit specified in that version of QUIC, for its own use. These", "comments": "URL is probably not your intent, but that is what you get with this phrasing. I think that the point is that the values are not predictable to entities other than the one generating it. This appears in several places in the document.\nWell URL uses \"indistinguishable from random\" :-) But OK, I'll seek a new formulation.", "new_text": "respective fields. If there is no key in the configuration, the server MUST fill the Nonce field with bytes that have no observable relationship to the field in previously issued connection IDs. If there is a key, the server fills the nonce field with a nonce of its choosing. See cid- entropy for details. The server MAY append additional bytes to the connection ID, up to the limit specified in that version of QUIC, for its own use. These"} {"id": "q-en-load-balancers-6ac76dfcfb731b51305ec8b7ddc25d2cddcf91cd91583b15e1e6d843f50ec3e2", "old_text": "two connection IDs to the same connection, client, or server. In particular, all servers using a configuration MUST consistently add the same length to each connection ID, to preserve the linkability objectives of QUIC-LB. Any additional bytes SHOULD appear random unless individual servers are not distinguishable (e.g. any server using that configuration appends identical bytes to every connection ID). If there is no key in the configuration, the Connection ID is complete. Otherwise, there are further steps, as described in the", "comments": "URL is probably not your intent, but that is what you get with this phrasing. I think that the point is that the values are not predictable to entities other than the one generating it. This appears in several places in the document.\nWell URL uses \"indistinguishable from random\" :-) But OK, I'll seek a new formulation.", "new_text": "two connection IDs to the same connection, client, or server. In particular, all servers using a configuration MUST consistently add the same length to each connection ID, to preserve the linkability objectives of QUIC-LB. Any additional bytes SHOULD NOT provide any observable correlation to previous connection IDs for that connection (e.g., the bytes can be chosen at random). If there is no key in the configuration, the Connection ID is complete. Otherwise, there are further steps, as described in the"} {"id": "q-en-load-balancers-6ac76dfcfb731b51305ec8b7ddc25d2cddcf91cd91583b15e1e6d843f50ec3e2", "old_text": "Whether or not it implements the counter method, the server MUST NOT reuse a nonce until it switches to a configuration with new keys. 
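The nonce-reuse rule above is straightforward to satisfy with a per-configuration counter. The sketch below is one illustrative take on such a counter; the class name and the way exhaustion is signalled are assumptions of this sketch.

```python
class NonceAllocator:
    """Counter-style nonce source: no value repeats under a given key."""

    def __init__(self, nonce_len: int):
        self.nonce_len = nonce_len
        self.next_value = 0
        self.limit = 2 ** (8 * nonce_len)      # total nonce space for this length

    def allocate(self) -> bytes:
        if self.next_value >= self.limit:
            raise RuntimeError("nonce space exhausted; switch to a new configuration")
        nonce = self.next_value.to_bytes(self.nonce_len, "big")
        self.next_value += 1
        return nonce
```

A counter like this is only appropriate when the nonce is subsequently encrypted; when no key is configured, the surrounding text instead requires nonces with no observable relationship to earlier ones, so they would be drawn from a secure random source.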
If the nonce is sent in plaintext, servers MUST generate nonces so that they appear to be random. Observable correlations between plaintext nonces would provide trivial linkability between individual connections, rather than just to a common server. For any algorithm, configuration agents SHOULD implement an out-of- band method to discover when servers are in danger of exhausting", "comments": "URL is probably not your intent, but that is what you get with this phrasing. I think that the point is that the values are not predictable to entities other than the one generating it. This appears in several places in the document.\nWell URL uses \"indistinguishable from random\" :-) But OK, I'll seek a new formulation.", "new_text": "Whether or not it implements the counter method, the server MUST NOT reuse a nonce until it switches to a configuration with new keys. Servers are forbidden from generating linkable plaintext nonces, because observable correlations between plaintext nonces would provide trivial linkability between individual connections, rather than just to a common server. For any algorithm, configuration agents SHOULD implement an out-of- band method to discover when servers are in danger of exhausting"} {"id": "q-en-load-balancers-9e143e249bb7e18b3bb8ae457dde427d4f5824f38ab0c9fcc05a8f936839e04e", "old_text": "shifting necessary in the event that there are an odd number of octets. The expand_left() function outputs 16 octets, with its first argument in the most significant bits, its second argument in the least significant byte, its third argument in the second least significant byte, and zeros in all other positions. Thus, expand_right() is similar, except that the second argument is in the most significant byte, the third is in the second most significant byte, and the first argument is in the least significant bits. Therefore, Similarly, truncate_left() and truncate_right() take the most significant and least significant bits, respectively, from a ciphertext. For example, to take 28 bits of a ciphertext: The example at the end of this section helps to clarify the steps described below. The server concatenates the server ID and nonce to create plaintext_CID. The server splits plaintext_CID into components left_0 and right_0 of equal length, splitting an odd octet in half if necessary. For example, 0x7040b81b55ccf3 would split into a left_0 of 0x7040b81 and right_0 of 0xb55ccf3. Encrypt the result of expand_left(left_0, index)) to obtain a ciphertext, where 'index' is one octet: the two most significant bits of which are 0b00, and the six least significant bits are the length of the resulting connection ID in bytes, cid_len. XOR the least significant bits of the ciphertext with right_0 to form right_1. Thus steps 3 and 4 can be expressed as \"right_1 = right_0 ^ truncate_right( AES_ECB(key, expand_left(left_0, cid_len, 1)), len(right_0)) \" Repeat steps 3 and 4, but use them to compute left_1 by expanding and encrypting right_1 with the most significant octet as the concatenation of 0b01 and cid_len, and XOR the results with left_0. \"left_1 = left_0 ^ truncate_left( AES_ECB(key, expand_right(right_1, cid_len, 2)), len(left_0)) \" Repeat steps 3 and 4, but use them to compute right_2 by expanding and encrypting left_1 with the least significant octet as the concatenation of 0b10 and cid_len, and XOR the results with right_1. 
\"right_2 = right_1 ^ truncate_right( AES_ECB(key, expand_left(left_1, cid_len, 3), len(right_1)) \" Repeat steps 3 and 4, but use them to compute left_2 by expanding and encrypting right_2 with the most significant octet as the concatenation of 0b11 ands cid_len, and XOR the results with left_1. \"left_2 = left_1 ^ truncate_left( AES_ECB(key, expand_right(right_2, cid_len, 4), len(left_1)) \" The server concatenates left_2 with right_2 to form the ciphertext CID, which it appends to the first octet. The following example executes the steps for the provided inputs. Note that the plaintext is of odd octet length, so the middle octet", "comments": "Attempt to , . Examples and test vectors not yet updated. PTAL at the description. Is the algorithm accurately described Are things clearer or not\nNAME Thanks for all the formatting suggestions. It's cleaned up a bit, although I wasn't able to get aasvg to work. PTAL\nCorrection. I got aasvg to work but it did not look good at all, probably due to user incompetence. If you would like to submit a PR after this that is prettier, I would welcome it.\nStefan Kolbl points out this problem with the 4-pass method: Indeed, there are ways to avoid ugly and error-prone bit shifting while avoiding this property. In particular, we can always use expand-left, but just padding odd-bytes with zeros to preserve byte boundaries and avoid bit-shifting.\nFixed by\nSeveral problems here: A lot of the description relies on examples rather than normative language. The split of a CID that has an odd length produces HEX sequences with an odd number of characters, which in many circumstances will be interpreted as being (left-)padded with 4 zero bits. If that happened, then bad things would occur. You really need to retain the number of bits. Step 3 passes two arguments to , when it has three. The encryption doesn't bind to the configuration that is in use; instead the high bits of the first octet are set to 0. The formatting of the operations wraps awkwardly. The operators that this uses (^ and primarily) aren't defined. The third input to seems to increment without explanation. Given the examples, I could no doubt work this all out satisfactorily, but that shouldn't be necessary.\nFixed by\nA lot.\nYes, the new text seems clear, and correct.", "new_text": "shifting necessary in the event that there are an odd number of octets. When configured with both a key, and a nonce length and server ID length that sum to any number other than 16, the server MUST follow the algorith below to encrypt the connection ID. 4.3.2.1. The 4-pass algorithm is a four-round Feistel Network with the round function being AES-ECB. Most modern applications of Feistel Networks have more than four rounds. The implications of this choice, which is meant to limit the per-packet compute overhead at load balancers, are discussed in distinguishing-attacks. The server concatenates the server ID and nonce into a single field, which is then split into equal halves. In successive passes, one of these halves is expanded into a 16B plaintext, encrypted with AES- ECB, and the result XORed with the other half. The diagram below shows the conceptual processing of a plaintext server ID and nonce into a connection ID. 'FO' stands for 'First Octet'. 4.3.2.2. Two functions are useful to define: The expand(length, pass, input_bytes) function concatenates three arguments and outputs 16 zero-padded octets. 
The first argument 'length' is an 8-bit integer that reports the sum of the configured nonce length and server id length in octets, and forms the most significant octet of the output. The 'length' argument MUST NOT exceed 28. The second argument is an 8-bit integer that is the 'pass' of the algorithm, and forms the second-most significant octet of the output. The third argument is a variable-length stream of octets, which is copied into the third-most significant octet of the output and beyond. The length of this octet stream is half the 'length', rounded up. All remaining octets of the output are zero. For example, Similarly, truncate(input, n) returns the first n octets of 'input'. Let 'half_len' be equal to 'plaintext_len' / 2, rounded up. 4.3.2.3. The example at the end of this section helps to clarify the steps described below. The server concatenates the server ID and nonce to create plaintext_CID. The length of the result in octets is plaintext_len. The server splits plaintext_CID into components left_0 and right_0 of equal length half_len. If plaintext_len is odd, right_0 clears its first four bits, and left_0 clears its last four bits. For example, 0x7040b81b55ccf3 would split into a left_0 of 0x7040b810 and right_0 of 0x0b55ccf3. Encrypt the result of expand(plaintext_len, 1, left_0) using an AES-ECB-128 cipher to obtain a ciphertext. XOR the first half_len octets of the ciphertext with right_0 to form right_1. Steps 3 and 4 can be summarized as If the plaintext_len is odd, clear the first four bits of right_1. Repeat steps 3 and 4, but use them to compute left_1 by expanding and encrypting right_1 with pass = 2, and XOR the results with left_0. If the plaintext_len is odd, clear the last four bits of left_1. Repeat steps 3 and 4, but use them to compute right_2 by expanding and encrypting left_1 with pass = 3, and XOR the results with right_1. If the plaintext_len is odd, clear the first four bits of right_2. Repeat steps 3 and 4, but use them to compute left_2 by expanding and encrypting right_2 with pass = 4, and XOR the results with left_1. If the plaintext_len is odd, clear the last four bits of left_2. The server concatenates left_2 with right_2 to form the ciphertext CID, which it appends to the first octet. If plaintext_len is odd, the four least significant bits of left_2 and four most significant bits of right_2, which are all zero, are stripped off before concatenation to make the resulting ciphertext the same length as the original plaintext. 4.3.2.4. The following example executes the steps for the provided inputs. Note that the plaintext is of odd octet length, so the middle octet"} {"id": "q-en-load-balancers-9e143e249bb7e18b3bb8ae457dde427d4f5824f38ab0c9fcc05a8f936839e04e", "old_text": "(i.e., the nonce is at least as large as the server ID). If the server ID is longer, a fourth pass is necessary: \"right_0 = right_1 ^ truncate_right( AES_ECB(key, expand_left(left_0, cid_len, 1), len(right_1)) \" and the load balancer has to concatenate left_0 and right_0 to obtain the complete server ID.", "comments": "Attempt to , . Examples and test vectors not yet updated. PTAL at the description. Is the algorithm accurately described Are things clearer or not\nNAME Thanks for all the formatting suggestions. It's cleaned up a bit, although I wasn't able to get aasvg to work. PTAL\nCorrection. I got aasvg to work but it did not look good at all, probably due to user incompetence. 
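To make the four passes above concrete, here is a rough Python sketch of the encryption side for the case where the server ID and nonce lengths do not sum to 16, using the 'cryptography' package for AES-ECB. The expand and truncate helpers follow the definitions above; the remaining function names, the choice of that package, and the omission of the first octet (which is prepended separately) are assumptions of this sketch rather than normative details.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ecb(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def expand(length: int, pass_no: int, input_bytes: bytes) -> bytes:
    out = bytes([length, pass_no]) + input_bytes
    return out + b"\x00" * (16 - len(out))       # zero-pad to one AES block

def truncate(data: bytes, n: int) -> bytes:
    return data[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def four_pass_encrypt(key: bytes, server_id: bytes, nonce: bytes) -> bytes:
    plaintext = server_id + nonce
    plen = len(plaintext)
    half = (plen + 1) // 2
    odd = plen % 2 == 1
    left = bytearray(plaintext[:half])
    right = bytearray(plaintext[plen - half:])
    if odd:                                      # the middle octet is split in half
        left[-1] &= 0xF0
        right[0] &= 0x0F
    for pass_no in (1, 2, 3, 4):
        if pass_no % 2 == 1:                     # odd passes encrypt the left half
            right = bytearray(xor(right, truncate(
                aes_ecb(key, expand(plen, pass_no, bytes(left))), half)))
            if odd:
                right[0] &= 0x0F
        else:                                    # even passes encrypt the right half
            left = bytearray(xor(left, truncate(
                aes_ecb(key, expand(plen, pass_no, bytes(right))), half)))
            if odd:
                left[-1] &= 0xF0
    if odd:                                      # re-merge the split middle octet
        return bytes(left[:-1]) + bytes([left[-1] | right[0]]) + bytes(right[1:])
    return bytes(left) + bytes(right)
```

The returned ciphertext replaces the server ID and nonce octets of the connection ID; the first octet is built and prepended as described elsewhere in the text.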
If you would like to submit a PR after this that is prettier, I would welcome it.\nStefan Kolbl points out this problem with the 4-pass method: Indeed, there are ways to avoid ugly and error-prone bit shifting while avoiding this property. In particular, we can always use expand-left, but just padding odd-bytes with zeros to preserve byte boundaries and avoid bit-shifting.\nFixed by\nSeveral problems here: A lot of the description relies on examples rather than normative language. The split of a CID that has an odd length produces HEX sequences with an odd number of characters, which in many circumstances will be interpreted as being (left-)padded with 4 zero bits. If that happened, then bad things would occur. You really need to retain the number of bits. Step 3 passes two arguments to , when it has three. The encryption doesn't bind to the configuration that is in use; instead the high bits of the first octet are set to 0. The formatting of the operations wraps awkwardly. The operators that this uses (^ and primarily) aren't defined. The third input to seems to increment without explanation. Given the examples, I could no doubt work this all out satisfactorily, but that shouldn't be necessary.\nFixed by\nA lot.\nYes, the new text seems clear, and correct.", "new_text": "(i.e., the nonce is at least as large as the server ID). If the server ID is longer, a fourth pass is necessary: and the load balancer has to concatenate left_0 and right_0 to obtain the complete server ID."} {"id": "q-en-load-balancers-e4d10fdad87525bd36e53b4b9a73419a0b31df4a909afacf9f2281ddc1ed34c0", "old_text": "connection if servers make reasonable selections when generating new IDs for that connection. 11. There are no IANA requirements.", "comments": "Added two sections to Security Considerations (Stateless Reset Oracle and Local Configurations Only). and .\nCan you speak a little about your reasoning for the recent changes? If this attack is limited to extracting the identity of server instances for other co-hosted entities, that's probably OK, but I'm having trouble connecting the SHOULD here with the preceding text.\nSure. If an attacker has the same QUIC-LB config as the victim, then it can extract the server mapping, which defeats the whole point of the spec. Obviously, an LB must have the QUIC-LB config for servers it routes to. If I'm an attacker within that group of servers I may already see the packet headers that give away the server mapping, so there's little added linkability here (though there is a little). The moment I share configuration among multiple server pools, I'm expanding the number of entities with access to the config, for no operational benefit except to make the administrator's life easier. In the absurd limit, all of AWS has the same QUIC-LB config, and essentially everyone can extract the server mapping from any CID that goes to AWS.\nAdd a security consideration to avoid the following scenario: MyCloudProvider has a single QUIC-LB config for all its load balancers. It rotates keys periodically, etc, but everyone gets the same config. Obviously, all the attacker has to do is open an account with MyCloudProvider and it is able to recover all the server IDs. Configs ought to be restricted to load balancers serving a finite set of servers. It is possible another MyCloudProvider customer is in the pool behind that load balancer, but that's already a privileged position as already described in the draft. 
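For completeness, the straightforward inverse of the encryption sketch above simply runs the same passes in reverse order. The text describes a load-balancer optimization that can stop after three passes when the nonce is at least as long as the server ID; the unoptimized version below, which reuses the aes_ecb, expand, truncate and xor helpers from the earlier sketch, also recovers the server ID and is shown only as an illustration.

```python
def four_pass_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    plen = len(ciphertext)
    half = (plen + 1) // 2
    odd = plen % 2 == 1
    left = bytearray(ciphertext[:half])
    right = bytearray(ciphertext[plen - half:])
    if odd:
        left[-1] &= 0xF0
        right[0] &= 0x0F
    for pass_no in (4, 3, 2, 1):                 # undo the passes in reverse order
        if pass_no % 2 == 0:                     # even passes modified the left half
            left = bytearray(xor(left, truncate(
                aes_ecb(key, expand(plen, pass_no, bytes(right))), half)))
            if odd:
                left[-1] &= 0xF0
        else:                                    # odd passes modified the right half
            right = bytearray(xor(right, truncate(
                aes_ecb(key, expand(plen, pass_no, bytes(left))), half)))
            if odd:
                right[0] &= 0x0F
    if odd:
        return bytes(left[:-1]) + bytes([left[-1] | right[0]]) + bytes(right[1:])
    return bytes(left) + bytes(right)

# The server ID is then the first sid_len octets of the recovered plaintext:
# server_id = four_pass_decrypt(key, cid[1:1 + sid_len + nonce_len])[:sid_len]
```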
Obviously, this will require some wordsmithing, as the statement above isn't very precise.\nAs server clusters increase in size, the need to reallocate server identifiers becomes more acute. In one model, the configuration ID is used to indicate a stable routing configuration. Server identifiers for a given configuration ID are routed to the same server, no matter how many other instances are added or removed. In order to allow for changes in the cluster, the configuration ID is used so that old servers can be removed from consideration and new ones added. If these changes happen frequently enough, the number of bits allocated to identifying a configuration might be insufficient. Why not make the length of the identifier flexible? That might mean that you need to make the length of the length similarly configurable.\nIt was not the intent of these bits to support long-lived configurations, instead supporting key rotation, upgrades, and the like. I would much rather people overprovisioned the server ID space than using this tool, TBH. However, the only cost is limiting the theoretical size of CIDs. At the moment, we can support up to 64B, future-proofing the encodings against future versions of QUIC. I'm open to another bit for this, but how would a configurable number of CR bits work with multiple configurations? How does a config that needs 5 bits and one that needs 2 coexist, especially if the latter needs length self-encoding?\nAs we talk through the implications of mutually mistrustful servers in , I think the case for adding another bit is compelling. I'm going to remove the needs-discussion label and come up with a PR that takes another bit.\nI'm returning this to needs-discussion, as NAME points out there is a privacy tradeoff here. If the config codespace is large, it's straightforward to have each mistrustful server have its own totally unique config. On the other hand, keeping this long-lived config difference leaks the type of flow. Assuming it's routed based on SNI, it leaks the SNI of each CID, and in that sense also increases linkability.\nAlright, I've reflected on this a bit more. In general, different server entities will have different external IP addresses and/or ports, so the load balancer can distinguish the correct QUIC-LB config without resorting to the CR bits. As IP and port are visible to everyone, there is no privacy leakage. The problem occurs when mutually mistrustful servers share the same IP/port and are switched on something else. That \"something\" may be something present only in the client hello, with the classical load balancer simply using the 4tuple after that. The only thing I am aware of in practice is the SNI. If there are others, please say so in this thread. If the SNI is encrypted, the unprivileged LB envisioned in QUIC-LB does not have access to it. So we can assume this use case only applies to unencrypted SNI. Option 1: mistrustful servers share the same config. a third party will be able to extract your server mapping, but in practice it will be hard for an attacker to obtain this position on purpose. If this is the best outcome, we can stick with 2 config rotation bits. Option 2: Issue them different config rotation codepoints. So an observer can see the SNI and associate it with certain CR bits; if the client later migrates, it will still be able to associate that connection with that SNI. If this is better than Option 1, we should probably add a third config rotation bit. 
Other (minor) costs of having 3 config rotation bits: length self-encoding can only support 31 byte connection IDs, instead of 63 (obviously this is only relevant for hypothetical future versions of QUIC) each config can have its own set of server IDs. So there is considerably more config state at the LB. As SIDs can be up to 18B, it's 18B x (# of servers) x (# of config rotation codepoints). This memory footprint will roughly double by going from 3 to 7 codepoints. I've talked myself into Option 1 as being a mildly better situation, thus sticking with 2 CR bits.\nUpdate: the ECHO design might allow the LB privileged access to the SNI, so it might be encrypted. However, an attacker could connect to the domains at that IP and obtain the config rotation bits. So option 2 actually circumvents ESNI entirely!\nClosing -- I have received no pushback on doing nothing (leaving it at 2 bits), so I'm going to do nothing.\nThe draft doesn't address the impact of each method of connection ID generation on how servers can use stateless resets. Most of this is likely bound up in decisions stemming from . If you can guess a valid but unused connection ID, then you might be able to induce a stateless reset that could be used to kill an open connection. As the draft only includes methods that include an explicit server identifier, it is possible that as long as valid values cannot be guessed, the effect is minimal and each server instance can have its own configured stateless reset key (or a shared key from which a per-server key is derived using a KDF).\nI don't understand the attack here. A given CID will deterministically map to a specific server instance. So there is no way for another server to receive a packet with that CID and generate a stateless reset. What am I missing? There might be something here with the differing treatment of long-header vs. short-header packets, (and the option for servers to send resets on long headers), but I'll have to think about it more.\nAs to the last point, nope: even a long header with a DCID that conforms to the server's expectations (i.e. maps to a real server) will get delivered to that server, so I don't think that's an attack.\nNAME Should we talk about this issue more, or are you satisfied enough that I can close it?\nI don't see any mention of stateless reset in the draft at all. That's probably something worth addressing, even if it is to say what you have already.", "new_text": "connection if servers make reasonable selections when generating new IDs for that connection. 10.3. A simple deployment of QUIC-LB in a cloud provider might use the same global QUIC-LB configuration across all its load balancers that route to customer servers. An attacker could then simply become a customer, obtain the configuration, and then extract server IDs of other customers' connections at will. To avoid this, the configuration agent SHOULD issue QUIC-LB configurations to mutually distrustful servers that have different keys (for the block cipher or stream cipher algorithms) or routing masks and divisors (for the obfuscated algorithm). The load balancers can distinguish these configurations by external IP address, or by assigning different values to the config rotation bits (config-rotation). Note that either of these techniques exposes information to outside observers, as traffic destined for each server set can be easily distinguished. These techniques are not necessary for the plaintext algorithm, as it does not attempt to conceal the server ID. 10.4. 
Section 21.9 of QUIC-TRANSPORT discusses the Stateless Reset Oracle attack. For a server deployment to be vulnerable, an attacking client must be able to cause two packets with the same Destination CID to arrive at two different servers that share the same cryptographic context for Stateless Reset tokens. As QUIC-LB requires deterministic routing of DCIDs over the life of a connection, it is a sufficient means of avoiding an Oracle without additional measures. 11. There are no IANA requirements."} {"id": "q-en-load-balancers-a330bfa28acfba3a60a33df1a3e85e1631e7f3a09a8d0941c4efcb06408684c5", "old_text": "The extracted server mapping might not correspond to an active server. A field that should be all zeroes after decryption may not be so. Load balancers MUST forward packets with long headers with non- compliant DCIDs to an active server using an algorithm of its own choosing. It need not coordinate this algorithm with the servers.", "comments": "The encrypted CID format includes a zero-pad field that is used to detect whether the decryption succeeded or not. I suggest merging this field with the server ID field, and test whether the decryption succeed by checking whether the server ID is valid or not. This assumes that the server ID field is sparsely populated. For example, if there are just 256 servers, in theory a 1-octed field would be sufficient; instead, we could use a 4 or 5 octet server ID field that would be sparsely populated, allowing for error detection. This would allow for unified validity detection across all supported methods: clear text: verify that the server ID is valid; obfuscated: the divider need to have the same size as the full length server ID; the modulo is the server ID; validity can be verified there. stream: decrypt and verify that the server-id is valid encrypt: decrypt and verify that the server-id is valid It would also allows for simplification of the configuration for the encrypted method, by specifying just one field instead of two.\nYes, I agree this is simpler with no cost at all. Care to do a PR?", "new_text": "The extracted server mapping might not correspond to an active server. Load balancers MUST forward packets with long headers with non- compliant DCIDs to an active server using an algorithm of its own choosing. It need not coordinate this algorithm with the servers."} {"id": "q-en-load-balancers-a330bfa28acfba3a60a33df1a3e85e1631e7f3a09a8d0941c4efcb06408684c5", "old_text": "The server ID will start in the second octet of the decrypted connection ID and occupy continuous octets beyond that. The configuration agent selects a zero-padding length. This SHOULD be at least four octets to allow detection of non-compliant DCIDs. The server ID and zero- padding length MUST sum to no more than 16 octets. They SHOULD sum to no more than 12 octets, to provide servers adequate space to encode their own opaque data. The configuration agent also selects an 16-octet AES-ECB key to use for connection ID decryption.", "comments": "The encrypted CID format includes a zero-pad field that is used to detect whether the decryption succeeded or not. I suggest merging this field with the server ID field, and test whether the decryption succeed by checking whether the server ID is valid or not. This assumes that the server ID field is sparsely populated. For example, if there are just 256 servers, in theory a 1-octed field would be sufficient; instead, we could use a 4 or 5 octet server ID field that would be sparsely populated, allowing for error detection. 
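The idea discussed above, replacing the zero-pad check with a sparsely populated server ID space, amounts to a table lookup after decryption. The following sketch assumes an AES-ECB-128 decryption helper built on the 'cryptography' package and a SID table held by the load balancer; all names are illustrative.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from typing import Callable, Dict

def aes_ecb_decrypt(key: bytes, block: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return dec.update(block) + dec.finalize()

def route_block_cipher_cid(cid: bytes, key: bytes, sid_len: int,
                           sid_table: Dict[bytes, str],
                           fallback: Callable[[bytes], str]) -> str:
    if len(cid) < 17:
        return fallback(cid)                     # too short to hold the encrypted block
    plaintext = aes_ecb_decrypt(key, cid[1:17])  # octets 2..17 of the connection ID
    sid = plaintext[:sid_len]
    server = sid_table.get(sid)
    if server is None:
        return fallback(cid)                     # unknown SID: treat the DCID as non-compliant
    return server
```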
This would allow for unified validity detection across all supported methods: clear text: verify that the server ID is valid; obfuscated: the divider need to have the same size as the full length server ID; the modulo is the server ID; validity can be verified there. stream: decrypt and verify that the server-id is valid encrypt: decrypt and verify that the server-id is valid It would also allows for simplification of the configuration for the encrypted method, by specifying just one field instead of two.\nYes, I agree this is simpler with no cost at all. Care to do a PR?", "new_text": "The server ID will start in the second octet of the decrypted connection ID and occupy continuous octets beyond that. They server ID length MUST be no more than 16 octets and SHOULD sum to no more than 12 octets, to provide servers adequate space to encode their own opaque data. The configuration agent also selects an 16-octet AES-ECB key to use for connection ID decryption."} {"id": "q-en-load-balancers-a330bfa28acfba3a60a33df1a3e85e1631e7f3a09a8d0941c4efcb06408684c5", "old_text": "octet to obtain the config rotation bits. It then decrypts the subsequent 16 octets using AES-ECB decryption and the chosen key. The decrypted plaintext contains the server id, zero padding, and opaque server data in that order. The load balancer uses the server ID octets for routing. 4.4.3. When generating a routable connection ID, the server MUST choose a connection ID length between 17 and 20 octets. The server writes its provided server ID into the server ID octets, zeroes into the zero- padding octets, and arbitrary bits into the remaining bits. These arbitrary bits MAY encode additional information. Bits in the first, eighteenth, nineteenth, and twentieth octets SHOULD appear essentially random to observers. The first octet is reserved as described in first-octet. The server then encrypts the second through seventeenth octets using the 128-bit AES-ECB cipher.", "comments": "The encrypted CID format includes a zero-pad field that is used to detect whether the decryption succeeded or not. I suggest merging this field with the server ID field, and test whether the decryption succeed by checking whether the server ID is valid or not. This assumes that the server ID field is sparsely populated. For example, if there are just 256 servers, in theory a 1-octed field would be sufficient; instead, we could use a 4 or 5 octet server ID field that would be sparsely populated, allowing for error detection. This would allow for unified validity detection across all supported methods: clear text: verify that the server ID is valid; obfuscated: the divider need to have the same size as the full length server ID; the modulo is the server ID; validity can be verified there. stream: decrypt and verify that the server-id is valid encrypt: decrypt and verify that the server-id is valid It would also allows for simplification of the configuration for the encrypted method, by specifying just one field instead of two.\nYes, I agree this is simpler with no cost at all. Care to do a PR?", "new_text": "octet to obtain the config rotation bits. It then decrypts the subsequent 16 octets using AES-ECB decryption and the chosen key. The decrypted plaintext contains the server id and opaque server data in that order. The load balancer uses the server ID octets for routing. 4.4.3. When generating a routable connection ID, the server MUST choose a connection ID length between 17 and 20 octets. 
The server writes its provided server ID into the server ID octets and arbitrary bits into the remaining bits. These arbitrary bits MAY encode additional information. Bits in the eighteenth, nineteenth, and twentieth octets SHOULD appear essentially random to observers. The first octet is reserved as described in first-octet. The server then encrypts the second through seventeenth octets using the 128-bit AES-ECB cipher."} {"id": "q-en-load-balancers-3856fc14cfb29c43cc2d4d05d680a4825b203c68ca92514522694a70ca5c9e56", "old_text": "state Retry Service also needs the token key, and to be aware if a NAT sits between it and the servers. The following pseudocode describes the data items necessary to store a full QUIC-LB configuration at the server. It is meant to describe the conceptual range and not specify the presentation of such configuration in an internet packet. The comments signify the range of acceptable values where applicable. 9.", "comments": "As the config gets more complicated, the C-ish pseudocode is getting more unwieldy. YANG is the standard for configuration models, so I should bite the bullet and just figure out YANG.", "new_text": "state Retry Service also needs the token key, and to be aware if a NAT sits between it and the servers. yang-model provides a YANG Model of the full QUIC-LB configuration. 9."} {"id": "q-en-lxnm-f3fe42fb9a2f96b60dae2e4fdce8a91d748bf39b1e7fb78a12c51a9d9db4ba39", "old_text": "The bandwidth is per L2VPN service. 'svc-inbound-bandwidth' indicates the inbound bandwidth of the connection (i.e., download bandwidth from the service provider to the site). 'svc-outbound-bandwidth' indicates the outbound bandwidth of the connection (i.e., upload bandwidth from the site to the service provider).", "comments": "RFC8299 and RFC8466 use \"inbound-bandwidth\" and \"outbound-bandwidth\" from the point of view of the customer site. L3NM also uses this terminology from the point of view of the customer's site. L2NM flips the terminology, for example \"inbound\" is from PE's perspective. To avoid confusion, it would be better in both L3NM and L2NM to use explicit terminology so that the direction is obvious, e.g. \"PE-to-CE-bandwidth\" \"CE-to-PE-bandwidth\" In the description field, it would be good to say which field in RFC8299/RFC8466 this corresponds to.\npractically it is ingress-Policer and egress-Shaper, because the BW is controlled only from the PE side. in addition there is policer and shaper per CoS/Queue which should be addressed.\nI guess you meant \"input-bandwidth\" and \"output-bandwidth\". We do have the following for the L3NM: 'inbound-bandwidth': Indicates, in bits per second (bps), the inbound bandwidth of the connection (i.e., download bandwidth from the service provider to the site). 'outbound-bandwidth': Indicates, in bps, the outbound bandwidth of the connection (i.e., upload bandwidth from the site to the service provider). Please note that we do already have the following in the common I-D: \"Indicates support for the inbound bandwidth in a VPN. That is, support for specifying the download bandwidth from the service provider network to the VPN site. 
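On the server side, the 4.4.3 generation procedure described above can be sketched as follows. The aes_ecb helper is the same AES-ECB-128 encryption helper used in the earlier sketches, and drawing the opaque and trailer bits from os.urandom is an assumption of this illustration.

```python
import os

def generate_block_cipher_cid(first_octet: int, key: bytes, server_id: bytes,
                              cid_len: int = 17) -> bytes:
    """Build a 17-20 octet connection ID: first octet, 16 encrypted octets,
    then optional random-looking trailer octets."""
    if not 17 <= cid_len <= 20:
        raise ValueError("connection ID length must be between 17 and 20 octets")
    plaintext = server_id + os.urandom(16 - len(server_id))  # SID plus opaque bits
    encrypted = aes_ecb(key, plaintext)                       # octets 2..17
    trailer = os.urandom(cid_len - 17)                        # octets 18..20, if any
    return bytes([first_octet]) + encrypted + trailer
```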
Note that the L3SM uses 'input' to identify the same feature. That terminology should be deprecated in favor of the one defined in this module.\"; Which was echoed in the L3NM, e.g., \"Note that the L3SM uses 'input- -bandwidth' to refer to the same concept.\";\nHi Med In URL it says as follows, maybe it is a typo? container svc-outbound-bandwidth { if-feature \"vpn-common:outbound-bw\"; description \"From the PE perspective, the service outbound bandwidth of the connection.\"; But I would urge you to change the terminology to \"PE-to-CE-bandwidth\" /\"CE-to-PE-bandwidth\" to make it super-explicit, the current terminology has been causing endless confusion to implementers (I realise it's inherited from the service models, but changing the terminology in LXNM would cure the problem well)\nThis is a typo. It needs to be fixed. Will move this one to the list.\nJulian, please check the PR at 7e5286a\nMed, this is great, thanks! Can it be changed similarly in L3NM as well? Julian", "new_text": "The bandwidth is per L2VPN service. 'svc-pe-to-ce-bandwidth' indicates the inbound bandwidth of the connection (i.e., download bandwidth from the service provider to the site). 'svc-ce-to-pe-bandwidth' indicates the outbound bandwidth of the connection (i.e., upload bandwidth from the site to the service provider)."} {"id": "q-en-mdns-ice-candidates-2ab683cb2b2307086c0cd345931f109087041909a0672f70f6b794319a2ca26a", "old_text": "away. When mDNS fails, ICE will attempt to fall back to either NAT hairpin, if supported, or TURN relay, if not. As noted in IPHandling, this may result in increased media latency and reduced connectivity/ increased cost (depending on whether the application chooses to use TURN). One potential mitigation, as discussed in {#privacy}, is to not conceal candidates created from RFC4941 IPv6 addresses. This permits", "comments": "Let's also consider data use cases here: Fallback to TURN has a severe effect on throughput in many cases.\nSure, could add a brief note on this.\nThe text here isn't normative, so I don't think we need to have a lengthy discussion of this - a few words should suffice.\nFixed by\nLGTM", "new_text": "away. When mDNS fails, ICE will attempt to fall back to either NAT hairpin, if supported, or TURN relay if not. This may result in reduced connectivity, reduced throughput and increased latency, as well as increased cost in case of TURN relay. One potential mitigation, as discussed in {#privacy}, is to not conceal candidates created from RFC4941 IPv6 addresses. This permits"} {"id": "q-en-mdns-ice-candidates-ecbc3ba38172fe94a307679f543542f5da75d0f537cec4e6c8f57252a9c0cdb3", "old_text": "\".local\" name may happen through Unicast DNS as noted in RFC6762, Section 3. An ICE agent that supports mDNS candidates MUST support the situation where the hostname resolution results in more than one IP address. In this case, the ICE agent MUST take exactly one of the resolved IP addresses and ignore the others. The ICE agent SHOULD use the first IPv6 address resolved, if one exists, or the first IPv4 address, if not. 
An ICE agent MAY add additional restrictions regarding the ICE candidates it will resolve using mDNS, as this mechanism allows", "comments": "PR for I considered referring to RFC6724, but that raises more questions than it answers it seems.\nFrom Tom Pusateri: When more than one IPv4 or more than one IPv6 address is present, it seems like it would be better to first prefer an address that is on a shared network instead of always taking the first one (which doesn\u2019t mean anything in DNS). If you want others to use the same address you should prefer the lowest one or the highest one or something sortable instead of the first one which could be different depending on hashing in a cache.\nDiscussed and agreement was to find a simple alternate algorithm.\nClosed by PR", "new_text": "\".local\" name may happen through Unicast DNS as noted in RFC6762, Section 3. An ICE agent SHOULD ignore candidates where the hostname resolution returns more than one IP address. An ICE agent MAY add additional restrictions regarding the ICE candidates it will resolve using mDNS, as this mechanism allows"} {"id": "q-en-mdns-ice-candidates-3dd2bad7b4c99f6ee54bb9e2ad630e5e7733eec5d0d131b44369ae7c3b186ab1", "old_text": "not end with \".local\" or if the value contains more than one \".\", then process the candidate as defined in RFC8445. Otherwise, resolve the candidate using mDNS. The ICE agent SHOULD set the unicast-response bit of the corresponding mDNS query message; this minimizes multicast traffic, as the response is", "comments": "Fixes URL\nSee URL for details\nI added a step where we ignore the candidate in that case. The term 'ignore' should probable be refined as it is used both here and in the context of URL We might want to update ignore in URL context.\nLGTM", "new_text": "not end with \".local\" or if the value contains more than one \".\", then process the candidate as defined in RFC8445. If the ICE candidate policy is \"relay\", as defined in JSEP, ignore the candidate. Otherwise, resolve the candidate using mDNS. The ICE agent SHOULD set the unicast-response bit of the corresponding mDNS query message; this minimizes multicast traffic, as the response is"} {"id": "q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d", "old_text": "section 5.1.1, the candidate is processed as follows: Check whether the ICE agent has a usable registered mDNS hostname resolving to the ICE host candidate's IP address. If one exists, skip ahead to Step 6. Generate a unique mDNS hostname. The unique name MUST consist of a version 4 UUID as defined in RFC4122, followed by \".local\".", "comments": "PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?", "new_text": "section 5.1.1, the candidate is processed as follows: Check whether the ICE agent has a usable registered mDNS hostname resolving to the ICE candidate's IP address. If one exists, skip ahead to Step 6. Generate a unique mDNS hostname. The unique name MUST consist of a version 4 UUID as defined in RFC4122, followed by \".local\"."} {"id": "q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d", "old_text": "Store the mDNS hostname and its related IP address in the ICE agent for future reuse. Replace the IP address of the ICE host candidate with its mDNS hostname, and expose the candidate as usual. 
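A compact way to read the remote-candidate rules above is as a single dispatch function. In the sketch below, resolve_mdns and process_rfc8445 stand in for the agent's mDNS resolver and its normal RFC 8445 candidate handling; those names, the string-valued policy argument, and the candidate attribute names are assumptions of the sketch.

```python
from typing import Callable, List, Optional

def process_remote_candidate(candidate, ice_policy: str,
                             resolve_mdns: Callable[[str], List[str]],
                             process_rfc8445: Callable[[object], object]) -> Optional[object]:
    addr = candidate.connection_address
    if not addr.endswith(".local") or addr.count(".") > 1:
        return process_rfc8445(candidate)        # not an mDNS name: normal ICE processing
    if ice_policy == "relay":
        return None                              # relay-only policy: ignore the candidate
    addresses = resolve_mdns(addr)               # query with the unicast-response bit set
    if len(addresses) != 1:
        return None                              # unresolved or ambiguous: ignore it
    candidate.connection_address = addresses[0]  # substitute the resolved IP address
    return process_rfc8445(candidate)
```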
An ICE agent can implement this procedure in any way so long as it", "comments": "PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?", "new_text": "Store the mDNS hostname and its related IP address in the ICE agent for future reuse. Replace the IP address of the ICE candidate with its mDNS hostname, and expose the candidate as usual. An ICE agent can implement this procedure in any way so long as it"} {"id": "q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d", "old_text": "2.2. For any remote host ICE candidate received by the ICE agent, the following procedure is used: If the connection-address field value of the ICE candidate does not end with \".local\" or if the value contains more than one \".\",", "comments": "PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?", "new_text": "2.2. For any remote ICE candidate received by the ICE agent, the following procedure is used: If the connection-address field value of the ICE candidate does not end with \".local\" or if the value contains more than one \".\","} {"id": "q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d", "old_text": "Otherwise, resolve the candidate using mDNS. If it resolves to an IP address, replace the value of the ICE host candidate by the resolved IP address and continue processing of the candidate. Otherwise, ignore the candidate.", "comments": "PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?", "new_text": "Otherwise, resolve the candidate using mDNS. If it resolves to an IP address, replace the mDNS hostname of the ICE candidate with the resolved IP address and continue processing of the candidate. Otherwise, ignore the candidate."} {"id": "q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d", "old_text": "A peer-reflexive remote candidate could be learned and constructed from the source transport address of the STUN Binding request as an ICE connectivity check. The peer-reflexive candidate could share the same address as a remote host ICE candidate that will be signaled or has been signaled, received and is in the process of name resolution. In addition to the elimination procedure of redundant candidates defined in Section 5.1.3 of RFC8445, which could remove constructed peer-reflexive remote candidates, the address of any existing peer- reflexive remote candidate should not be exposed to Web applications by ICE agents that implement this proposal, as detailed in Section #guidelines. 3.", "comments": "PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?", "new_text": "A peer-reflexive remote candidate could be learned and constructed from the source transport address of the STUN Binding request as an ICE connectivity check. The peer-reflexive candidate could share the same address as a remote mDNS ICE candidate whose name is in the process of being resolved. In addition to the elimination procedure of redundant candidates defined in Section 5.1.3 of RFC8445, which could remove constructed peer-reflexive remote candidates, the address of any existing peer-reflexive remote candidate should not be exposed to Web applications by ICE agents that implement this proposal, as detailed in Section #guidelines. 
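The gathering side described above reduces to: reuse a cached name for the address if one exists, otherwise mint a fresh UUID-based name, register it over mDNS, and expose the name in place of the IP. The registration callback and cache in this sketch are assumptions; only the name format (a version 4 UUID followed by ".local") comes from the text above.

```python
import uuid
from typing import Callable, Dict

def conceal_host_candidate(candidate, name_cache: Dict[str, str],
                           register_mdns: Callable[[str, str], None]):
    ip = candidate.connection_address
    name = name_cache.get(ip)
    if name is None:
        name = f"{uuid.uuid4()}.local"           # unique, unguessable mDNS hostname
        register_mdns(name, ip)                  # advertise the name on the local network
        name_cache[ip] = name                    # keep it for future reuse
    candidate.connection_address = name          # expose the name, not the IP address
    return candidate
```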
3."} {"id": "q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d", "old_text": "When there is no user consent, the following filtering should be done to prevent private IP address leakage: host ICE candidates with an IP address are not exposed as ICE candidate events. Server reflexive ICE candidate raddr field is set to 0.0.0.0 and rport to 0. SDP does not expose any a=candidate line corresponding to a host ICE candidate which contains an IP address. Statistics related to ICE candidates MUST NOT contain the resolved IP address of a remote mDNS candidate or the IP address of a peer-", "comments": "PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?", "new_text": "When there is no user consent, the following filtering should be done to prevent private IP address leakage: ICE candidates with an IP address are not exposed as ICE candidate events. Server reflexive ICE candidate raddr field is set to 0.0.0.0 and rport to 0. SDP does not expose any a=candidate line corresponding to an ICE candidate which contains an IP address. Statistics related to ICE candidates MUST NOT contain the resolved IP address of a remote mDNS candidate or the IP address of a peer-"} {"id": "q-en-mdns-ice-candidates-dfaa9b9533013da05fc248771848acfca981f98e98ffcc5f89f2f00327dd22e4", "old_text": "1. As detailed in IPHandling, exposing client private IP addresses by default maximizes the probability of successfully creating direct peer-to-peer connection between two clients, but creates a significant surface for user fingerprinting. IPHandling recognizes this issue, but also admits that there is no current solution to this problem; implementations that choose to use Mode 3 to address the privacy concerns often suffer from failing or suboptimal connections", "comments": "Also wrap long lines for #processing section.\nComments addressed, PTAL.\nWill wait for NAME approval before merging.\nLGTM, updated PR for line wrapping.\nThe changes to ICE candidate gathering are targeted to web browsers; the changes to ICE candidate processing affect all WebRTC endpoints.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? 
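One plausible reading of the no-consent filtering rules listed above is sketched below; the candidate attribute names and the decision to key the host-candidate rule on whether the address is an mDNS name are assumptions of this illustration.

```python
from typing import Optional

def filter_candidate_for_app(candidate) -> Optional[object]:
    """Return the candidate as it may be surfaced to the application, or None to suppress it."""
    if candidate.type == "host" and not candidate.connection_address.endswith(".local"):
        return None                              # host candidates exposing raw IPs are hidden
    if candidate.type == "srflx":
        candidate.raddr = "0.0.0.0"              # blank the related address and port
        candidate.rport = 0
    return candidate
```

Generated SDP and statistics would be filtered in the same spirit, never revealing the resolved address of a remote mDNS candidate or of a peer-reflexive candidate.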
Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nLGTM", "new_text": "1. As detailed in IPHandling, exposing client private IP addresses by default to web applications maximizes the probability of successfully creating direct peer-to-peer connections between clients, but creates a significant surface for user fingerprinting. IPHandling recognizes this issue, but also admits that there is no current solution to this problem; implementations that choose to use Mode 3 to address the privacy concerns often suffer from failing or suboptimal connections"} {"id": "q-en-mdns-ice-candidates-dfaa9b9533013da05fc248771848acfca981f98e98ffcc5f89f2f00327dd22e4", "old_text": "not be supported. This document proposes an overall solution to this problem by registering ephemeral mDNS RFC6762 names for each local private IP address, and then providing those names, rather than the IP addresses, to the web application when it gathers ICE candidates. WebRTC implementations resolve these names to IP addresses and perform ICE processing as usual, but the actual IP addresses are not exposed to the web application. 2.", "comments": "Also wrap long lines for #processing section.\nComments addressed, PTAL.\nWill wait for NAME approval before merging.\nLGTM, updated PR for line wrapping.\nThe changes to ICE candidate gathering are targeted to web browsers; the changes to ICE candidate processing affect all WebRTC endpoints.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? 
If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nLGTM", "new_text": "not be supported. This document proposes an overall solution to this problem by providing a mechanism for WebRTC implementations to register ephemeral mDNS RFC6762 names for local private IP addresses, and then provide those names, rather than the IP addresses, in their ICE candidates. While this technique is intended to benefit WebRTC implementations in web browsers, by preventing collection of private IP addresses by arbitrary web pages, it can also be used by any endpoint that wants to avoid disclosing information about its local network to remote peers on other networks. WebRTC and WebRTC-compatible endpoints Overview that receive ICE candidates with mDNS names will resolve these names to IP addresses and perform ICE processing as usual. In the case where the endpoint is a web application, the WebRTC implementation will manage this resolution internally and will not disclose the actual IP addresses to the application. 2."} {"id": "q-en-mdns-ice-candidates-dfaa9b9533013da05fc248771848acfca981f98e98ffcc5f89f2f00327dd22e4", "old_text": "3.1. For any host candidate gathered by an ICE agent as part of RFC8445, Section 5.1.1, the candidate is handled as follows:", "comments": "Also wrap long lines for #processing section.\nComments addressed, PTAL.\nWill wait for NAME approval before merging.\nLGTM, updated PR for line wrapping.\nThe changes to ICE candidate gathering are targeted to web browsers; the changes to ICE candidate processing affect all WebRTC endpoints.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? 
If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nLGTM", "new_text": "3.1. This section outlines how mDNS should be used by ICE agents to conceal local IP addresses. Naturally, if the ICE agent does not want to conceal its IPs, e.g., if it has a priori knowledge that its addresses are in fact public, this processing is unnecessary. For any host candidate gathered by an ICE agent as part of RFC8445, Section 5.1.1, the candidate is handled as follows:"} {"id": "q-en-mdns-ice-candidates-dfaa9b9533013da05fc248771848acfca981f98e98ffcc5f89f2f00327dd22e4", "old_text": "3.2. For any remote ICE candidate received by the ICE agent, the following procedure is used:", "comments": "Also wrap long lines for #processing section.\nComments addressed, PTAL.\nWill wait for NAME approval before merging.\nLGTM, updated PR for line wrapping.\nThe changes to ICE candidate gathering are targeted to web browsers; the changes to ICE candidate processing affect all WebRTC endpoints.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. 
(Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nLGTM", "new_text": "3.2. This section outlines how received ICE candidates with mDNS names are processed by ICE agents, and is relevant to all endpoints. For any remote ICE candidate received by the ICE agent, the following procedure is used:"} {"id": "q-en-mdns-ice-candidates-dfaa9b9533013da05fc248771848acfca981f98e98ffcc5f89f2f00327dd22e4", "old_text": "An ICE agent that supports mDNS candidates MUST support the situation where the hostname resolution results in more than one IP address. In this case, the ICE agent MUST take exactly one of the resolved IP addresses and ignore the others. The ICE agent SHOULD, if available, use the first IPv6 address resolved, otherwise the first IPv4 address. 4.", "comments": "Also wrap long lines for #processing section.\nComments addressed, PTAL.\nWill wait for NAME approval before merging.\nLGTM, updated PR for line wrapping.\nThe changes to ICE candidate gathering are targeted to web browsers; the changes to ICE candidate processing affect all WebRTC endpoints.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nLGTM", "new_text": "An ICE agent that supports mDNS candidates MUST support the situation where the hostname resolution results in more than one IP address. In this case, the ICE agent MUST take exactly one of the resolved IP addresses and ignore the others. 
The ICE agent SHOULD use the first IPv6 address resolved, if one exists, or the first IPv4 address, if not. 4."} {"id": "q-en-mdns-ice-candidates-b2c68e1ab35a4e50236c0c398cb1e07f1e960766db1442ccb06c555f33a01ae6", "old_text": "want to conceal its IPs, e.g., if it has a priori knowledge that its addresses are in fact public, this processing is unnecessary. For any host candidate gathered by an ICE agent as part of RFC8445, Section 5.1.1, the candidate is handled as follows: Check whether the ICE agent has a usable registered mDNS hostname resolving to the ICE candidate's IP address. If one exists, skip", "comments": "I think this is good to go, will plan to land at EOD.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nThis section is largely normative, so it might make more sense to put some parts from it immediately after Principle, namely the various adjustments to the technique that need to be considered (TURN, IPv6, stats) Things where we merely provide discussion (e.g., session monitoring) rather than normative guidance could remain in the privacy section.\nDue to their different sensitivity and lifetime.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. 
(Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nThe rest LGTM", "new_text": "want to conceal its IPs, e.g., if it has a priori knowledge that its addresses are in fact public, this processing is unnecessary. This section outlines how mDNS should be used by ICE agents to conceal local IP addresses. For each host candidate gathered by an ICE agent as part of the gathering process described in RFC8445, Section 5.1.1, the candidate is handled as described below. Check whether this IP address satisfies the ICE agent's policy regarding whether an address is safe to expose. If so, expose the candidate and abort this process. Check whether the ICE agent has a usable registered mDNS hostname resolving to the ICE candidate's IP address. If one exists, skip"} {"id": "q-en-mdns-ice-candidates-b2c68e1ab35a4e50236c0c398cb1e07f1e960766db1442ccb06c555f33a01ae6", "old_text": "Replace the IP address of the ICE candidate with its mDNS hostname and provide the candidate to the web application. An ICE agent can implement this procedure in any way so long as it produces equivalent results to this procedure. An implementation may for instance pre-register mDNS hostnames by executing steps 3 to 5 and prepopulate an ICE agent accordingly. By doing so, only step 6 of the above procedure will be executed at the time of gathering candidates. An implementation may also detect that mDNS is not supported by the available network interfaces. The ICE agent may skip steps 2 and 3 and directly decide to not expose the host candidate. This procedure ensures that an mDNS name is used to replace only one", "comments": "I think this is good to go, will plan to land at EOD.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nThis section is largely normative, so it might make more sense to put some parts from it immediately after Principle, namely the various adjustments to the technique that need to be considered (TURN, IPv6, stats) Things where we merely provide discussion (e.g., session monitoring) rather than normative guidance could remain in the privacy section.\nDue to their different sensitivity and lifetime.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. 
also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nThe rest LGTM", "new_text": "Replace the IP address of the ICE candidate with its mDNS hostname and provide the candidate to the web application. ICE agents can implement this procedure in any way as long as it produces equivalent results. An implementation may for instance pre- register mDNS hostnames by executing steps 3 to 6 and prepopulate an ICE agent accordingly. By doing so, only step 7 of the above procedure will be executed at the time of gathering candidates. ICE agents may also decide that certain local IP addresses are safe to expose. This may be because the ICE agent has a priori knowledge that the address is in fact public, or because the agent has made a policy decision to not conceal certain types of IP addresses (e.g., those with built-in privacy protections) as a calculated choice to improve connectivity. This topic is discussed further in {#privacy} below. An implementation may also detect that mDNS is not supported by the available network interfaces. The ICE agent may skip steps 3 and 4 and directly decide to not expose the host candidate. This procedure ensures that an mDNS name is used to replace only one"} {"id": "q-en-mdns-ice-candidates-b2c68e1ab35a4e50236c0c398cb1e07f1e960766db1442ccb06c555f33a01ae6", "old_text": "increased cost (depending on whether the application chooses to use TURN). The exact impact of this technique is being researched experimentally and will be provided before publication of this document. 4.2.", "comments": "I think this is good to go, will plan to land at EOD.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. 
conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nThis section is largely normative, so it might make more sense to put some parts from it immediately after Principle, namely the various adjustments to the technique that need to be considered (TURN, IPv6, stats) Things where we merely provide discussion (e.g., session monitoring) rather than normative guidance could remain in the privacy section.\nDue to their different sensitivity and lifetime.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nThe rest LGTM", "new_text": "increased cost (depending on whether the application chooses to use TURN). One potential mitigation, as discussed in {#privacy}, is to not conceal candidates created from RFC4941 IPv6 addresses. This permits connectivity even in large internal networks or where mDNS is disabled. The exact impact of the mDNS technique is being researched experimentally and will be provided before publication of this document. 4.2."} {"id": "q-en-mdns-ice-candidates-b2c68e1ab35a4e50236c0c398cb1e07f1e960766db1442ccb06c555f33a01ae6", "old_text": "6.4. As noted in IPHandling, privacy may be breached if a web application running in two browsing contexts can determine whether it is running on the same device. While the approach in this document prevents the", "comments": "I think this is good to go, will plan to land at EOD.\nDoes it obfuscate, conceal, or obscure local IP addresses? 
My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nThis section is largely normative, so it might make more sense to put some parts from it immediately after Principle, namely the various adjustments to the technique that need to be considered (TURN, IPv6, stats) Things where we merely provide discussion (e.g., session monitoring) rather than normative guidance could remain in the privacy section.\nDue to their different sensitivity and lifetime.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nThe rest LGTM", "new_text": "6.4. Naturally, an address that is already exposed to the Internet does not need to be protected by mDNS, as it can be trivially observed by the web server or remote endpoint. However, determining this ahead of time is not straightforward; while the fact that an IPv4 address is private can sometimes be inferred by its value, e.g., whether it is an RFC1918 address, the reverse is not necessarily true. IPv6 addresses present their own complications, e.g., private IPv6 addresses as a result of NAT64 RFC6146. Instead, the determination of whether an address is public can be reliably made as part of the ICE gathering process, namely, if the query to the STUN RFC5389 server returns the same value as the local address. 
This can be done for both IPv4 and IPv6 local addresses, provided that the application has configured both IPv4 and IPv6 STUN servers. If this situation occurs, i.e., STUN returns the same IP address value for an address that has already been communicated as an mDNS candidate during the current ICE gathering phase, the ICE agent MUST NOT eliminate the candidate as redundant and MUST send the IP address as a server-reflexive candidate. This allows the ICE agent to send mDNS candidates immediately (i.e., without waiting for STUN), even if the associated addresses may not be private. Once an address has been identified as public, the ICE agent MAY cache this information and omit mDNS protection for that address in future ICE gathering phases. 6.5. As noted in IPHandling, private IPv4 addresses are especially problematic because of their unbounded lifetime. However, the RFC4941 IPv6 addresses recommended for WebRTC have inherent privacy protections, namely a short lifetime and the lack of any stateful information. Accordingly, implementations MAY choose to not conceal RFC4941 addresses with mDNS names as a tradeoff for improved peer-to- peer connectivity. 6.6. As noted in IPHandling, privacy may be breached if a web application running in two browsing contexts can determine whether it is running on the same device. While the approach in this document prevents the"} {"id": "q-en-mdns-ice-candidates-b2c68e1ab35a4e50236c0c398cb1e07f1e960766db1442ccb06c555f33a01ae6", "old_text": "a context that has a different origin than the top-level browsing context), or a private browsing context. 6.5. Even when local IP addresses are not exposed, the number of mDNS hostname candidates can still provide a fingerprinting dimension.", "comments": "I think this is good to go, will plan to land at EOD.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nThis section is largely normative, so it might make more sense to put some parts from it immediately after Principle, namely the various adjustments to the technique that need to be considered (TURN, IPv6, stats) Things where we merely provide discussion (e.g., session monitoring) rather than normative guidance could remain in the privacy section.\nDue to their different sensitivity and lifetime.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. 
also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates?) The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs it has learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nThe rest LGTM", "new_text": "a context that has a different origin than the top-level browsing context), or a private browsing context. 6.7. Even when local IP addresses are not exposed, the number of mDNS hostname candidates can still provide a fingerprinting dimension."} {"id": "q-en-mdns-ice-candidates-3a35f318c5bb432005ae664b8584f88bac335d24d3e1e052e97c963c64f47a26", "old_text": "included in this candidate collection. However, disclosure of these addresses has privacy implications. This document describes a way to share local IP addresses with other clients while preserving client privacy. This is achieved by obfuscating IP addresses with dynamically generated Multicast DNS (mDNS) names. 1.", "comments": "On line 421: \"that this specification is designed to hide\". Could edit that as well.\nMy take is that we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.", "new_text": "included in this candidate collection. However, disclosure of these addresses has privacy implications. This document describes a way to share local IP addresses with other clients while preserving client privacy. This is achieved by concealing IP addresses with dynamically generated Multicast DNS (mDNS) names. 1."} {"id": "q-en-mls-architecture-9040eab0bcfc54943e4b5278919b7b68cf5188ec4500c74e052bb93a3c105ce1", "old_text": "addition or removal of group members without informing all other members. Membership of an MLS group is the managed at the level of individual clients. In most cases, a client corresponds to a specific device used by a user. If a user has multiple devices, the user will be represented in a group by multiple clients. If an application wishes", "comments": "Thanks both, I'll have a look today.\nLooks good ! Thanks Richard!\nThe architecture document mentions PSKs in several places, but does not discuss the security of them. Nor does it discuss the security of External Proposals and External Commits. 
Notably, the third paragraph of 5.1 Membership Changes says \"\u2026the set of devices controlled by the user can only be altered by an authorized member of the group.\" The set of devices could be altered via an External Commit by a non-member. At the end of 5.2 Parallel Groups there is a vague mention of \"using the PSK mechanism to link healing properties among parallel groups.\", but no discussion of the security. Likewise Section 5.5 Recovery After State Loss talks about PSKs as the solution to a problem but the risks are not discussed. External Proposals and External Commits should be discussed in section 5.4 Access Control.\nNAME I fixed the specific problems you note in", "new_text": "addition or removal of group members without informing all other members. Membership of an MLS group is managed at the level of individual clients. In most cases, a client corresponds to a specific device used by a user. If a user has multiple devices, the user will be represented in a group by multiple clients. If an application wishes"} {"id": "q-en-mls-architecture-9040eab0bcfc54943e4b5278919b7b68cf5188ec4500c74e052bb93a3c105ce1", "old_text": "among parallel groups. For example, suppose a common member M of two groups A and B has performed a key update in group A but not in group B. The key update provides PCS with regard to M in group A. If a PSK exported from group A and injected into group B, then some of these PCS properties carry over to group B, since the PSK and secrets derived from it are only known to the new, updated version of M, not to the old, possibly compromised version of M.", "comments": "Thanks both, I'll have a look today.\nLooks good ! Thanks Richard!\nThe architecture document mentions PSKs in several places, but does not discuss the security of them. Nor does it discuss the security of External Proposals and External Commits. Notably, the third paragraph of 5.1 Membership Changes says \"\u2026the set of devices controlled by the user can only be altered by an authorized member of the group.\" The set of devices could be altered via an External Commit by a non-member. At the end of 5.2 Parallel Groups there is a vague mention of \"using the PSK mechanism to link healing properties among parallel groups.\", but no discussion of the security. Likewise Section 5.5 Recovery After State Loss talks about PSKs as the solution to a problem but the risks are not discussed. External Proposals and External Commits should be discussed in section 5.4 Access Control.\nNAME I fixed the specific problems you note in", "new_text": "among parallel groups. For example, suppose a common member M of two groups A and B has performed a key update in group A but not in group B. The key update provides PCS with regard to M in group A. If a PSK is exported from group A and injected into group B, then some of these PCS properties carry over to group B, since the PSK and secrets derived from it are only known to the new, updated version of M, not to the old, possibly compromised version of M."} {"id": "q-en-mls-architecture-9040eab0bcfc54943e4b5278919b7b68cf5188ec4500c74e052bb93a3c105ce1", "old_text": "its prior membership with a PSK. There are a few practical challenges to this approach. For example, the application will need to ensure that the new members have the required PSK, including any new members that have joined the group since the epoch in which the PSK was issued.", "comments": "Thanks both, I'll have a look today.\nLooks good ! 
Thanks Richard!\nThe architecture document mentions PSKs in several places, but does not discuss the security of them. Nor does it discuss the security of External Proposals and External Commits. Notably, the third paragraph of 5.1 Membership Changes says \"\u2026the set of devices controlled by the user can only be altered by an authorized member of the group.\" The set of devices could be altered via an External Commit by a non-member. At the end of 5.2 Parallel Groups there is a vague mention of \"using the PSK mechanism to link healing properties among parallel groups.\", but no discussion of the security. Likewise Section 5.5 Recovery After State Loss talks about PSKs as the solution to a problem but the risks are not discussed. External Proposals and External Commits should be discussed in section 5.4 Access Control.\nNAME I fixed the specific problems you note in", "new_text": "its prior membership with a PSK. There are a few practical challenges to this approach. For example, the application will need to ensure that all members have the required PSK, including any new members that have joined the group since the epoch in which the PSK was issued."} {"id": "q-en-mls-architecture-5e6e2508993091f8fb45e62cf48d176c50f0b3a0b36109006a50940d58dd2215", "old_text": "agreement properties of MLS will confirm that all members of the group agree on the content of these extensions. 5.8. Application messages carried by MLS are opaque to the protocol; they can contain arbitrary data. Each application which uses MLS needs to define the format of its \"application_data\" and any mechanism necessary to negotiate the format of that content over the lifetime of an MLS group. In many applications this means managing format migrations for groups with multiple members who may each be offline at unpredictable times. *RECOMMENDATION:* Use the default content mechanism defined in I- D.mahy-mls-content-neg, unless the specific application defines another mechanism which more appropriately addresses the same requirements for that application of MLS. The MLS framing for application messages also provides a field where clients can send information that is authenticated but not encrypted. Such information can be used by servers that handle the message, but group members are assured that it has not been tampered with. 5.9. The protocol aims to be compatible with federated environments. While this document does not specify all necessary mechanisms", "comments": "Reverts mlswg/mls-architecture This has not been merged for a reason, namely I don\u2019t want the dependency on an unpublished draft.", "new_text": "agreement properties of MLS will confirm that all members of the group agree on the content of these extensions. Application messages carried by MLS are opaque; they can contain arbitrary data. The MLS framing for application messages also provides a field where clients can send information that is authenticated but not encrypted. Such information can be used by servers that handle the message, but group members are assured that it has not been tampered with. 5.8. The protocol aims to be compatible with federated environments. While this document does not specify all necessary mechanisms"} {"id": "q-en-mls-architecture-5e6e2508993091f8fb45e62cf48d176c50f0b3a0b36109006a50940d58dd2215", "old_text": "authentication mechanisms, ciphersuites, and infrastructure functionalities. 5.10. It is important that multiple versions of MLS be able to coexist in the future. 
Thus, MLS offers a version negotiation mechanism; this", "comments": "Reverts mlswg/mls-architecture This has not been merged for a reason, namely I don\u2019t want the dependency on an unpublished draft.", "new_text": "authentication mechanisms, ciphersuites, and infrastructure functionalities. 5.9. It is important that multiple versions of MLS be able to coexist in the future. Thus, MLS offers a version negotiation mechanism; this"} {"id": "q-en-mls-architecture-2e1c0acd34463213ee73ec340d997e766844f6683e8fe34f602d971cc602398b", "old_text": "relatively short period of time, clients have an indication that the credential might have been created without the user's knowledge. Due to the asynchronous nature of MLS, however, there may be transient inconsistencies in a user's device set, so correlating users' clients across groups is more of a detection mechanism than a prevention mechanism.", "comments": "Particularly in multi-device scenarios (where a user has more than one device), we should recommend that clients correlate the set of devices of each user across groups. This would make it much harder for a malicious AS to issue fake device credentials for a particular user because clients would expect the credential to appear in all groups of which the user is a member. If a device credential does not appear in all groups after a while, clients have an indication that the credential might have been created without the user's knowledge. This should however only be a recommendation since it might not be a desirable property in all scenarios. In practical terms, users might also need some time to add their new device to all of the groups, therefore a grace period might be needed. Taking the latter into account, correlating users' devices across groups is more of a detection mechanism than a prevention mechanism.\n\"In many uses of MLS, there are multiple MLS clients which represent a single user (for example a human user with a mobile and desktop version of an application). Often the same set of clients is represented in exactly the same list groups. In applications where this is the case, clients can compare the set of devices of each user across groups. This would make it much harder for a malicious AS to issue fake device credentials for a particular user because clients would expect the credential to appear in all groups of which the user is a member. If a device credential does not appear in all groups after some relatively short period of time, clients have an indication that the credential might have been created without the user's knowledge. As this may take some time, correlating users' devices across groups is more of a detection mechanism than a prevention mechanism.\"\nI lightly edited NAME prose and turned it into PR .\nMade a quick pass. I think we should stick with \"clients\" rather than \"devices\".", "new_text": "relatively short period of time, clients have an indication that the credential might have been created without the user's knowledge. Due to the asynchronous nature of MLS, however, there may be transient inconsistencies in a user's client set, so correlating users' clients across groups is more of a detection mechanism than a prevention mechanism."} {"id": "q-en-mls-architecture-7096b529fc30fd4f537d44827ee1f0a2d7aa02a2ee477520d2eb5105e5ea460f", "old_text": "Various academic works have analyzed MLS and the different security guarantees it aims to provide. 
The security of large parts of the protocol has been analyzed by [BBN19] (draft 7), [ACDT21] (draft 11) and [AJM20] (draft 12). Individual components of various drafts of the MLS protocol have been analyzed in isolation and with differing adversarial models, for example, [BBR18], [ACDT19], [ACCKKMPPWY19], [AJM20] and [ACJM20] analyze the ratcheting tree as the sub-protocol of MLS that facilitates key agreement, while [BCK21] analyzes the key derivation paths in the ratchet tree and key schedule. Finally, [CHK21] analyzes the authentication and cross-group healing guarantees provided by MLS. 8. ACCKKMPPWY19: https://eprint.iacr.org/2019/1489 ACDT19: https://eprint.iacr.org/2019/1189 ACDT21: https://eprint.iacr.org/2021/1083 ACJM20: https://eprint.iacr.org/2020/752 AHKM21: https://eprint.iacr.org/2021/1456 AJM20: https://eprint.iacr.org/2020/1327 BBN19: https://hal.laas.fr/INRIA/hal-02425229 BBR18: https://hal.inria.fr/hal-02425247 BCK21: https://eprint.iacr.org/2021/137 CHK21: https://www.usenix.org/system/files/sec21-cremers.pdf 9. This document makes no requests of IANA.", "comments": "The references at the end of the Security Considerations weren't in kramdown format. Fixed.", "new_text": "Various academic works have analyzed MLS and the different security guarantees it aims to provide. The security of large parts of the protocol has been analyzed by BBN19 (draft 7), ACDT21 (draft 11) and AJM20 (draft 12). Individual components of various drafts of the MLS protocol have been analyzed in isolation and with differing adversarial models, for example, BBR18, ACDT19, ACCKKMPPWY19, AJM20, ACJM20, and AHKM21 analyze the ratcheting tree as the sub-protocol of MLS that facilitates key agreement, while BCK21 analyzes the key derivation paths in the ratchet tree and key schedule. Finally, CHK21 analyzes the authentication and cross-group healing guarantees provided by MLS. 8. This document makes no requests of IANA."} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "KT log under a user's identity. The verification function would correspond to verifying a key's inclusion in the log for a claimed identity, together with the KT log's mechanisms for a user to monitor and control which keys are associated to their identity. By the nature of its roles in MLS authentication, the AS is invested with a large amount of trust and the compromise of one of its", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "KT log under a user's identity. The verification function would correspond to verifying a key's inclusion in the log for a claimed identity, together with the KT log's mechanisms for a user to monitor and control which keys are associated with their identity. By the nature of its roles in MLS authentication, the AS is invested with a large amount of trust and the compromise of one of its"} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "is persistently offline may still be holding old keying material and thus be a threat to both FS and PCS if it is later compromised. MLS partially defend against this problem by active member including freshness, however not much can be done on the inactive side especially in the case where the client has not processed messages. *RECOMMENDATION:* Mandate a key updates from clients that are not otherwise sending messages and evict clients which are idle for too long.", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "is persistently offline may still be holding old keying material and thus be a threat to both FS and PCS if it is later compromised. MLS partially defends against this problem by active members including freshness, however not much can be done on the inactive side especially in the case where the client has not processed messages. *RECOMMENDATION:* Mandate key updates from clients that are not otherwise sending messages and evict clients which are idle for too long."} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "the secrets themselves are protected by HPKE encryption. Note that under that compromise scenario, authentication is not affected in neither of these cases. As every member of the group can compute the AEAD keys for all the chains (they have access to the Group Secrets) in order to send and receive messages, the authentication provided by the AEAD encryption layer of the common", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "the secrets themselves are protected by HPKE encryption. Note that under that compromise scenario, authentication is not affected in either of these cases. As every member of the group can compute the AEAD keys for all the chains (they have access to the Group Secrets) in order to send and receive messages, the authentication provided by the AEAD encryption layer of the common"} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "secrecy. If the adversary is active, the adversary can follow the protocol and perform updates on behalf of the compromised party with no ability to an honest group to recover message secrecy. However, MLS provides PCS against active adaptive attackers through its Remove group operation. This means that, as long as other members of the group are honest, the protocol will guarantee message secrecy for all messages exchanged in the epochs after the compromised party has been removed.", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "secrecy. If the adversary is active, the adversary can follow the protocol and perform updates on behalf of the compromised party with no ability for an honest group to recover message secrecy. However, MLS provides PCS against active adaptive attackers through its Remove group operation. This means that, as long as other members of the group are honest, the protocol will guarantee message secrecy for all messages exchanged in the epochs after the compromised party has been removed."} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "epochs until an honest update from the compromised client happens. Note that under this compromise scenario, the attacker can perform all operations which are available to an legitimate client even without access to the actual value of the signature key. Without access to the group secrets, the adversary will not have the", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "epochs until an honest update from the compromised client happens. Note that under this compromise scenario, the attacker can perform all operations which are available to a legitimate client even without access to the actual value of the signature key. Without access to the group secrets, the adversary will not have the"} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "compromised parties refresh their credentials securely. Beware that in both oracle and private key access, an active adaptive attacker, can follow the protocol and request to update its own credential. This in turn induces a signature key rotation which could provide the attacker with part or the full value of the private key depending on the architecture of the service provider.", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "compromised parties refresh their credentials securely. Beware that in both oracle and private key access, an active adaptive attacker can follow the protocol and request to update its own credential. This in turn induces a signature key rotation which could provide the attacker with part or the full value of the private key depending on the architecture of the service provider."} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "and changed with each message received by a client. However, the signature private keys are mostly used by clients to send a message. They also are providing the strong authentication guarantees to other clients, hence we consider that their protection by additional security mechanism should be a priority. Overall there is no way to detect or prevent these compromise, as discussed in the previous sections, performing separation of the application secret states can help recovery after compromise, this is the case for signature keys but similar concern exists for the", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "and changed with each message received by a client. However, the signature private keys are mostly used by clients to send a message. They also provide strong authentication guarantees to other clients, hence we consider that their protection by additional security mechanisms should be a priority. Overall there is no way to detect or prevent these compromises, as discussed in the previous sections, performing separation of the application secret states can help recovery after compromise, this is the case for signature keys but similar concern exists for the"} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "notification provider have to be trusted to avoid making correlation on which devices are recipients of the same message. For secure messaging systems, push notification are often sent real- time as it is not acceptable to create artificial delays for message retrieval.", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "notification provider have to be trusted to avoid making correlation on which devices are recipients of the same message. For secure messaging systems, push notifications are often sent real- time as it is not acceptable to create artificial delays for message retrieval."} {"id": "q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974", "old_text": "The attacker can publish or distribute credentials Infrastructures that provide cryptographic material or credentials in place of the MLS client (which is under the control of the user) have often the ability to use the associated secrets to perform operations on behalf of the user, which is unacceptable in many situations. Other mechanisms can be used to prevent this issue, such as the service blessing cryptographic material used by an MLS client. *RECOMMENDATION:* Make clients submit signature public keys to the AS, this is usually better than the AS generating public key pairs", "comments": "For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. 
s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits.These changes look good to me.", "new_text": "The attacker can publish or distribute credentials Infrastructures that provide cryptographic material or credentials in place of the MLS client (which is under the control of the user) often have the ability to use the associated secrets to perform operations on behalf of the user, which is unacceptable in many situations. Other mechanisms can be used to prevent this issue, such as the service blessing cryptographic material used by an MLS client. *RECOMMENDATION:* Make clients submit signature public keys to the AS, this is usually better than the AS generating public key pairs"} {"id": "q-en-mls-architecture-b5d875644a523bb596539f44bc8712ec216edc069d32559fbe43dc3c50dfdbe4", "old_text": "inside the group. When members perform changes directly, this is clearly the case. External joins are authorized indirectly, in the sense that a member publishing a GroupInfo object authorizes anyone to join who has access to the GroupInfo object. External joins do not allow for more granular authorization checks to be done before the new member is added to the group, so if an application wishes to both allow external joins and enforce such checks, then either all the members of the group must all have the ability to check and reject invalid External joins autonomously, or the application needs to do such checks when a member joins and remove them if those checks fail. Application setup may also determine other criteria for membership validity. For example, per-device signature keys can be signed by an", "comments": "I think the spirit of this change is good, but the last two sentences are confusing talking about the client trying to join and an existing member trying to prevent them from seeing anything confidential after joining. My initial reaction to this idea was always that post-hoc removing someone who should never have joined is a terrible architectural decision. Post-hoc removal either allows an undesirable member a window to see messages sent by some members, or it requires consistent policy across members. If there is consistent policy across members the members should simply treat the Commit from the undesirable client as invalid. 
Rather than further condone post-hoc removal (which doesn't require any normative change to the MLS protocol), why not just delete mentions of post-hoc removal?\nCompromise - Deleted the sentence that talks about the security implications of post-hoc removal, left the mention of it simply being possible.\nFollowing on from discussion in Even once a GroupInfo has been published to a prospective new joiner, there are still a few opportunities to apply authorization policy to the join: The DS can block the external Commit from reaching the group members (requires intelligent DS) The clients in the group can all refuse to apply the external Commit (requires consistent policy) Any client can remove the member after allowing them to join (might expose some group comms to the joiner)\nComment by NAME This seems rather risky? What could such a member do in the short time between getting added and getting kicked? Would they see message history? Group membership? How many devices a user has? When members were last online? Why support such a group type? Wouldn't it be better to not support this and have an automated \"special member\" being a bot that handles member addition?\nUntil someone else sends a message, a new joiner doesn't learn anything non-public. If you think about it, how would they? They're outside the group before they send an external join, and the external join just establishes a new shared secret with the group for future communications. More specifically: Message history - MLS's forward secrecy guarantees mean that new joiners don't get message history. Group membership - This is public anyway via the GroupInfo that an external joiner needs to join How many devices - To the degree this is exposed, it would be in the group membership Last online - MLS doesn't have any notion of this As for an \"add me bot\", it's a useful idea, but not applicable to a bunch of cases. The bot has to be a member of the group and has to be always-online. So now you have the group's secrets on a server somewhere, instead of just in the communicating endpoints, already an additional point of compromise. And it can't be the DS's server, since the whole point is to protect against the DS.\nThis statement in the draft is just plain wrong. In our implementation, we actually do validation of External joins at the application layer. I addressed this in (dating from WGLC) which says the following: \"External joins do not allow for more granular authorization checks to be done before the new member is added to the group, so if an application wishes to both allow external joins and enforce such checks, then either all the members of the group must all have the ability to check and reject invalid External joins autonomously, or the application needs to do such checks when a member joins and remove them if those checks fail.\"", "new_text": "inside the group. When members perform changes directly, this is clearly the case. External joins are authorized indirectly, in the sense that a member publishing a GroupInfo object authorizes anyone to join who has access to the GroupInfo object. Both types of joins are done via a Commit message, which could be blocked by the DS or rejected by clients if the join is not authorized. The former approach requires that Commits be visible to the DS; the latter approach requires that clients all share a consistent policy. In the unfortunate event that an unauthorized member is able to join, MLS enables any member to remove them. 
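The options listed above (DS-side blocking, client-side rejection, post-hoc removal) can also be layered. A toy sketch of the client-side variant follows, in which every member applies the same check before merging an external Commit; MLS leaves this policy to the application, and the names `external_commit`, `joiner_credential`, and `is_authorized` below are hypothetical, not anything defined by the draft.

```python
# Toy sketch of client-side authorization for an external join.  MLS leaves
# this policy to the application; external_commit, joiner_credential and
# is_authorized are hypothetical names, and every member must apply the same
# rule for the group to stay consistent.
def on_external_commit(external_commit, is_authorized):
    joiner = external_commit.joiner_credential   # hypothetical accessor
    if not is_authorized(joiner):
        return "reject"      # all members refuse to apply the Commit
    return "apply"           # merge the Commit and advance the epoch
```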
Application setup may also determine other criteria for membership validity. For example, per-device signature keys can be signed by an"} {"id": "q-en-mls-architecture-a0b5f6adde1bd2637e21d7d557d0275d81dbc32666cf553201dbfc473a68c311", "old_text": "5.5. Within an MLS group, every member is authenticated to other member, by means of credentials issued and verified by the Authentication Service. MLS does not prescribe what actions, if any, an application should take in the event that a group member presents an invalid credential. For example, an application may require such a member to", "comments": "NAME noted in URL that an important function of the AS is to ensure that clients come to consistent evaluations of credentials. We should make that recommendation in this document, including the fact that consistency is needed when validity can change over time.", "new_text": "5.5. Within an MLS group, every member is authenticated to other member by means of credentials issued and verified by the Authentication Service. MLS does not prescribe what actions, if any, an application should take in the event that a group member presents an invalid credential. For example, an application may require such a member to"} {"id": "q-en-mls-architecture-a0b5f6adde1bd2637e21d7d557d0275d81dbc32666cf553201dbfc473a68c311", "old_text": "In some authentication systems, it is possible for a previously-valid credential to become invalid over time. For example, in a system based on X.509 certificates, credentials can expire or be revoked. Fortunately, the MLS update mechansisms allow a client to replace an old credential with a new one. This is best done before the old credential becomes invalid. *RECOMMENDATION:* Proactively rotate credentials, especially if a credential is about to become invalid.", "comments": "NAME noted in URL that an important function of the AS is to ensure that clients come to consistent evaluations of credentials. We should make that recommendation in this document, including the fact that consistency is needed when validity can change over time.", "new_text": "In some authentication systems, it is possible for a previously-valid credential to become invalid over time. For example, in a system based on X.509 certificates, credentials can expire or be revoked. The MLS update mechanisms allow a client to replace an old credential with a new one. This is best done before the old credential becomes invalid. *RECOMMENDATION:* Proactively rotate credentials, especially if a credential is about to become invalid."} {"id": "q-en-mls-architecture-5c50c339f78831f3032db82b4c5e601c863b535cbe351eb4cec0736fed78c5c8", "old_text": "2. MLS provides a way for _clients_ to form _groups_ within which they can communicate securely. For example, a set of users might use clients on their phones or laptops to join a group and communicate with each other. A group may be as small as two clients (e.g., for simple person to person messaging) or as large as tens of thousands. A client that is part of a group is a _member_ of that group. In order to communicate securely, users initially interact with services at their disposal to establish the necessary values and", "comments": "This replaced PR which has broken merge conflicts.\nI've reverted this. It was prematurely merged. I'll open a new PR once I have had time to examine this.\nThis was discussed in the interim today and is the same as the content that was in . 
Thanks, -rohan Rohan Mahy l Vice President Engineering, Architecture Chat: NAME on Wire Wire - Secure team messaging. Zeta Project Germany GmbH l Rosenthaler Stra\u00dfe 40, 10178 Berlin, Germany Gesch\u00e4ftsf\u00fchrer/Managing Director: Alan Duric HRB 149847 beim Handelsregister Charlottenburg, Berlin VAT-ID DE288748675\nThat's fine, but I still want to go over it before it merges.", "new_text": "2. 2.1. MLS provides a way for _clients_ to form _groups_ within which they can communicate securely. For example, a set of users might use clients on their phones or laptops to join a group and communicate with each other. A group may be as small as two clients (e.g., for simple person to person messaging) or as large as tens of thousands. A client that is part of a group is a _member_ of that group. As groups change membership and group or member properties, they advance from one _epoch_ to another and the cryptographic state of the group evolves. The group is represented using a _ratchet tree_, which represents the members as the leaves of a tree. It is used to efficiently encrypt to subsets of the members. Each member has a _LeafNode_ object in the tree holding the client's identity, credentials, and capabilities. Various messages are used in the evolution from epoch to epoch. A _Proposal_ message proposes a change to be made in the next epoch, such as adding or removing a member. A _Commit_ message initiates a new epoch by instructing members of the group to implement a collection of proposals. Proposals and Commits are collectively called _Handshake messages_. A _KeyPackage_ provides keys that can be used to add the client to a group, including its LeafNode, and _Signature Key_. A _Welcome_ message provides a new member to the group with the information to initialize their state for the epoch in which they were added. Of course most (but not all) applications use MLS to send encrypted group messages. An _application message_ is an MLS message with an arbitrary application payload. Finally, a _PublicMessage_ contains an integrity-protected MLS Handshake message, while a _PrivateMessage_ contains a confidential, integrity-protected Handshake or Application message. For a more detailed explanation of these terms, please consult the MLS protocol specification. 2.2. In order to communicate securely, users initially interact with services at their disposal to establish the necessary values and"} {"id": "q-en-mls-architecture-5c50c339f78831f3032db82b4c5e601c863b535cbe351eb4cec0736fed78c5c8", "old_text": "restricted to certain users, but we assume that those restrictions are enforced by the application layer. 2.1. While informally, a group can be considered to be a set of users possibly using multiple endpoint devices to interact with the Service", "comments": "This replaced PR which has broken merge conflicts.\nI've reverted this. It was prematurely merged. I'll open a new PR once I have had time to examine this.\nThis was discussed in the interim today and is the same as the content that was in . Thanks, -rohan Rohan Mahy l Vice President Engineering, Architecture Chat: NAME on Wire Wire - Secure team messaging. 
Zeta Project Germany GmbH l Rosenthaler Stra\u00dfe 40, 10178 Berlin, Germany Gesch\u00e4ftsf\u00fchrer/Managing Director: Alan Duric HRB 149847 beim Handelsregister Charlottenburg, Berlin VAT-ID DE288748675\nThat's fine, but I still want to go over it before it merges.", "new_text": "restricted to certain users, but we assume that those restrictions are enforced by the application layer. 2.3. While informally, a group can be considered to be a set of users possibly using multiple endpoint devices to interact with the Service"} {"id": "q-en-mls-protocol-10b6d0e2cf000008a968b90ea49d9c894f1791635080604730d8291d340ddd57", "old_text": "RFC EDITOR PLEASE DELETE THIS SECTION. draft-03 Added ciphersuites and signature schemes (*)", "comments": "This does not handle the removal of DH uses instead of KEM which will be done as part of Issue / PR ...\nI would be happier if this PR were moving the draft to use draft-barnes-cfrg-hpke, which is one of the changes I had in mind for -04. Do you think that's premature?\nThis text is wrong:\nThis could also be addressed by using draft-barnes-cfrg-hpke.\nThis has been fixed by\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today", "new_text": "RFC EDITOR PLEASE DELETE THIS SECTION. draft-04 ECIES is now renamed in favor of HPKE (*) draft-03 Added ciphersuites and signature schemes (*)"} {"id": "q-en-mls-protocol-10b6d0e2cf000008a968b90ea49d9c894f1791635080604730d8291d340ddd57", "old_text": "ciphertext in the list is the encryption to the corresponding node in the resolution. The ECIESCiphertext values encoding the encrypted secret values are computed as follows: Generate an ephemeral DH key pair (x, x*G) in the DH group specified by the ciphersuite in use Compute the shared secret Z with the node's other child Derive a key and nonce as described below Encrypt the node's secret value using the AEAD algorithm specified by the ciphersuite in use, with the following inputs: Key: The key derived from Z Nonce: The nonce derived from Z Additional Authenticated Data: The empty octet string Plaintext: The secret value, without any further formatting Encode the ECIESCiphertext with the following values: ephemeral_key: The ephemeral public key x*G ciphertext: The AEAD output Decryption is performed in the corresponding way, using the private key of the resolution node and the ephemeral public key transmitted", "comments": "This does not handle the removal of DH uses instead of KEM which will be done as part of Issue / PR ...\nI would be happier if this PR were moving the draft to use draft-barnes-cfrg-hpke, which is one of the changes I had in mind for -04. Do you think that's premature?\nThis text is wrong:\nThis could also be addressed by using draft-barnes-cfrg-hpke.\nThis has been fixed by\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today", "new_text": "ciphertext in the list is the encryption to the corresponding node in the resolution. The HPKECiphertext values are computed according to the Encrypt function defined in HPKE. Decryption is performed in the corresponding way, using the private key of the resolution node and the ephemeral public key transmitted"} {"id": "q-en-mls-protocol-10b6d0e2cf000008a968b90ea49d9c894f1791635080604730d8291d340ddd57", "old_text": "The Welcome message contains the information that the new member needs to initialize a GroupState object that can be updated to the current state using the Add message. 
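The ECIES-style per-node encryption quoted above (ephemeral DH key pair, shared secret Z, key and nonce derived from Z, AEAD with empty AAD) is what draft-04 replaces with HPKE's Encrypt function. A minimal sketch of that older construction is below; it assumes X25519, HKDF-SHA256, AES-128-GCM and the pyca/cryptography package, and the key/nonce derivation label is a placeholder, not what any draft version specifies. Decryption mirrors this using the resolution node's private key and the transmitted ephemeral public key.

```python
# Rough sketch of the ECIES-style per-node encryption described above, which
# draft-04 replaces with HPKE's Encrypt().  Assumes X25519 / HKDF-SHA256 /
# AES-128-GCM via the pyca/cryptography package; the derivation label is a
# simplification, not the encoding the draft actually specifies.
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_to_node(node_public_bytes: bytes, secret_value: bytes):
    """Encrypt a node secret to one node in the resolution."""
    ephemeral = X25519PrivateKey.generate()           # (x, x*G)
    recipient = X25519PublicKey.from_public_bytes(node_public_bytes)
    z = ephemeral.exchange(recipient)                 # shared secret Z
    okm = HKDF(algorithm=hashes.SHA256(), length=16 + 12,
               salt=None, info=b"mls ecies sketch").derive(z)
    key, nonce = okm[:16], okm[16:]
    ciphertext = AESGCM(key).encrypt(nonce, secret_value, b"")  # empty AAD
    ephemeral_pub = ephemeral.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return ephemeral_pub, ciphertext                  # ECIESCiphertext fields
```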
This information is encrypted for the new member using ECIES. The recipient key pair for the ECIES encryption is the one included in the indicated UserInitKey, corresponding to the indicated ciphersuite.", "comments": "This does not handle the removal of DH uses instead of KEM which will be done as part of Issue / PR ...\nI would be happier if this PR were moving the draft to use draft-barnes-cfrg-hpke, which is one of the changes I had in mind for -04. Do you think that's premature?\nThis text is wrong:\nThis could also be addressed by using draft-barnes-cfrg-hpke.\nThis has been fixed by\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today", "new_text": "The Welcome message contains the information that the new member needs to initialize a GroupState object that can be updated to the current state using the Add message. This information is encrypted for the new member using HPKE. The recipient key pair for the HPKE encryption is the one included in the indicated UserInitKey, corresponding to the indicated ciphersuite."} {"id": "q-en-mls-protocol-10b6d0e2cf000008a968b90ea49d9c894f1791635080604730d8291d340ddd57", "old_text": "simply copy all fields except the \"leaf_secret\" from its GroupState object. [[ OPEN ISSUE: The Welcome message needs to be sent encrypted for the new member. This should be done using the public key in the UserInitKey, either with ECIES or X3DH. ]] [[ OPEN ISSUE: The Welcome message needs to be synchronized in the same way as the Add. That is, the Welcome should be sent only if the Add succeeds, and is not in conflict with another, simultaneous Add.", "comments": "This does not handle the removal of DH uses instead of KEM which will be done as part of Issue / PR ...\nI would be happier if this PR were moving the draft to use draft-barnes-cfrg-hpke, which is one of the changes I had in mind for -04. Do you think that's premature?\nThis text is wrong:\nThis could also be addressed by using draft-barnes-cfrg-hpke.\nThis has been fixed by\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today", "new_text": "simply copy all fields except the \"leaf_secret\" from its GroupState object. [[ OPEN ISSUE: The Welcome message needs to be synchronized in the same way as the Add. That is, the Welcome should be sent only if the Add succeeds, and is not in conflict with another, simultaneous Add."} {"id": "q-en-mls-protocol-217912dc5a0c639c0a0d9f9af2b3e7a3b79d4407958a01695a62b78334c95c22", "old_text": "each Participant creates an initial Participant Application Secret to be used for its own sending chain: Note that [sender] represent the uint32 value encoding the index of the participant in the ratchet tree. Updating the Application secret and deriving the associated AEAD key and nonce can be summarized as the following Application key schedule", "comments": "Fix for\nNAME This might not be completely ideal, because of the group state but it currently does not change the key schedule, I am merging this because we need it for correct interop but feel free to get back to discuss a better way to do this in details.\nThere are at least two bugs in the message protection section that make it unimplementable: It calls for , but the third argument to as defined is a , not an octet string. It refers to a function , which is defined in TLS, but not here. 
ISTM the simplest way to fix these problems would to define and use it (1) as the basis for , (2) for deriving sender root secrets, and (3) for deriving keys and nonces.\nFixed by", "new_text": "each Participant creates an initial Participant Application Secret to be used for its own sending chain: Note that [sender] represents the index of the member in the roster. Updating the Application secret and deriving the associated AEAD key and nonce can be summarized as the following Application key schedule"} {"id": "q-en-mls-protocol-2e5f6fe53f5428976b1e93d145ea70384ff66fd6db8831958174f68263cbd959", "old_text": "A Key Derivation Function (KDF) A Derive-Public-Key function that produces a public key from a private key A ratchet tree is a left-balanced binary tree, in which each node contains up to three values:", "comments": "Initial attempt to solve . This defines the KDF as discussed in and relocates the DH computation obligation and verification as part of the Derive-Key-Pair function definition.\nFixing my own review comments here, since NAME appears to be offline and we need to publish -04.\nThe PR from NAME introducing a KDF was merged before I had a chance to review it. There are a number of problems with it: Instead of calling for an abstract KDF, it should define one, as we have with the use of HKDF elsewhere We can't just say , since KDFs produce octet strings, not private keys. We need , not just . This doesn't match what I recall discussing on the list. I think what you want is actually as follows: Or in prose:\nThat was exactly the question I was asking on the list, i.e., if we need an explicit \"derive-keypair\" function. The idea with deriving the private key directly was to avoid a redundant derive-key operation. I'm totally fine re-doing the PR to represent either approach.\nMerging that PR too quickly is on me. For the parent derivation, we can just use what the formal model for TreeKEM does, which is roughly the Richard points out. The part I don't like about this, is to define in the MLS specification as it depends on the HPKE internal algorithm. The TreeKEM formal specification, it is done the following way (because we know that the HPKE interface we use can take random bytes as a secret key): In the MLS specification, I suggest we avoid defining anything new but point to the HPKE spec instead. As it provides the KEM scheme, it should also provide ways to derive the from an octet string and the function that transform an to a . I suggest we define, in the HPKE document, something like that we already need anyway and eventually that maps the random bytes to an . For X25519 we don't even need the second function but generically we probably need it. That would lead to define the following in the MLS spec: NAME ?\nI don't think we can delegate this to HPKE. HPKE doesn't have a need for anything like Derive-Key-Pair. We do, so it's on us to define it. The point NAME raises about avoiding unnecessary operations is relevant here: If we're forking the hashes like this, then we can remove the hashing part from Derive-Key-Pair, so that it's really just a definition of how you convert from an octet string to a private key (and thus its public key).\nI disagree, it is not for MLS to define the and functions for all asymmetric schemes possibly used in HPKE. In MLS we should have to do only what TreeKEM does, aka, 2 KDFs, one outputing the parent's secret, one outputing the KEM encryption key, and that's it. 
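The review thread above asks for an HKDF-Expand-Label defined in the MLS document itself and used both for the per-sender root secrets and for the AEAD key/nonce derivations. A minimal sketch of a TLS-1.3-style HKDF-Expand-Label and of one step of a sender's application chain is below; the label strings, length encodings, and output sizes are illustrative placeholders only, and the draft's key schedule is authoritative.

```python
# Sketch of an HKDF-Expand-Label in the style of TLS 1.3 and of one step of a
# per-sender application chain.  Labels, length encodings and output sizes are
# illustrative placeholders; the draft's key schedule is authoritative.
import hmac, hashlib, struct

HASH_LEN = 32  # SHA-256

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hkdf_expand_label(secret: bytes, label: bytes, context: bytes,
                      length: int) -> bytes:
    # HkdfLabel := length(2) || opaque label<...> || opaque context<...>
    hkdf_label = (struct.pack(">H", length)
                  + bytes([len(label)]) + label
                  + bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length)

def sender_chain_step(app_secret: bytes, sender: int):
    """Derive this generation's AEAD key/nonce and ratchet the sender's secret."""
    ctx = struct.pack(">I", sender)     # index of the member in the roster
    key = hkdf_expand_label(app_secret, b"key", ctx, 16)
    nonce = hkdf_expand_label(app_secret, b"nonce", ctx, 12)
    next_secret = hkdf_expand_label(app_secret, b"app secret", ctx, HASH_LEN)
    return key, nonce, next_secret
```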
I am willing to accept defining here but this is already odd.\nIf it's not our job, whose is it? It's not HPKE's job, because they don't have a need to convert byte strings to private keys. Some document needs to define this conversion in order for MLS to be implementable.\nI agree we are in trouble here :) But I still wish that if we can avoid it, we should.\nI agree that the KEM schemes themselves should define these things, like Curve25519 does. But it's like two lines of text. I'm not too bothered with it.\nI think I'm a bit lost here. How do we go forward? Regarding the use of an abstract KDF: I was mostly just adapting the existing text, which used an abstract hash function if I recall correctly. I didn't touch the texts that got more concrete because they seemed to be a little outdated anyway. That shouldn't be a problem though.\nNAME Ok, looking at this now, I am not sure what we decided in the end. I would be ok to define as you said but as it is KEM specific so there is no good solution for P256 or for whatever PQ KEM that would have a specific way of generating keys from pseudo random bytes...\nNAME any suggestion here\nFixed by", "new_text": "A Key Derivation Function (KDF) A Derive-Key-Pair function that produces an asymmetric keypair from a node secret A ratchet tree is a left-balanced binary tree, in which each node contains up to three values:"} {"id": "q-en-mls-protocol-2e5f6fe53f5428976b1e93d145ea70384ff66fd6db8831958174f68263cbd959", "old_text": "An asymmetric public key The private key for a node is derived from its secret value using the KDF. The public key is then derived from the private key using the Derive-Public-Key function. The contents of a parent node are computed from one of its children as follows: The contents of the parent are based on the latest-updated child. For example, if participants with leaf secrets A, B, C, and D join a group in that order, then the resulting tree will have the following structure: If the first participant subsequently changes its leaf secret to be X, then the tree will have the following structure. 5.3.", "comments": "Initial attempt to solve . This defines the KDF as discussed in and relocates the DH computation obligation and verification as part of the Derive-Key-Pair function definition.\nFixing my own review comments here, since NAME appears to be offline and we need to publish -04.\nThe PR from NAME introducing a KDF was merged before I had a chance to review it. There are a number of problems with it: Instead of calling for an abstract KDF, it should define one, as we have with the use of HKDF elsewhere We can't just say , since KDFs produce octet strings, not private keys. We need , not just . This doesn't match what I recall discussing on the list. I think what you want is actually as follows: Or in prose:\nThat was exactly the question I was asking on the list, i.e., if we need an explicit \"derive-keypair\" function. The idea with deriving the private key directly was to avoid a redundant derive-key operation. I'm totally fine re-doing the PR to represent either approach.\nMerging that PR too quickly is on me. For the parent derivation, we can just use what the formal model for TreeKEM does, which is roughly the Richard points out. The part I don't like about this, is to define in the MLS specification as it depends on the HPKE internal algorithm. 
The TreeKEM formal specification, it is done the following way (because we know that the HPKE interface we use can take random bytes as a secret key): In the MLS specification, I suggest we avoid defining anything new but point to the HPKE spec instead. As it provides the KEM scheme, it should also provide ways to derive the from an octet string and the function that transform an to a . I suggest we define, in the HPKE document, something like that we already need anyway and eventually that maps the random bytes to an . For X25519 we don't even need the second function but generically we probably need it. That would lead to define the following in the MLS spec: NAME ?\nI don't think we can delegate this to HPKE. HPKE doesn't have a need for anything like Derive-Key-Pair. We do, so it's on us to define it. The point NAME raises about avoiding unnecessary operations is relevant here: If we're forking the hashes like this, then we can remove the hashing part from Derive-Key-Pair, so that it's really just a definition of how you convert from an octet string to a private key (and thus its public key).\nI disagree, it is not for MLS to define the and functions for all asymmetric schemes possibly used in HPKE. In MLS we should have to do only what TreeKEM does, aka, 2 KDFs, one outputing the parent's secret, one outputing the KEM encryption key, and that's it. I am willing to accept defining here but this is already odd.\nIf it's not our job, whose is it? It's not HPKE's job, because they don't have a need to convert byte strings to private keys. Some document needs to define this conversion in order for MLS to be implementable.\nI agree we are in trouble here :) But I still wish that if we can avoid it, we should.\nI agree that the KEM schemes themselves should define these things, like Curve25519 does. But it's like two lines of text. I'm not too bothered with it.\nI think I'm a bit lost here. How do we go forward? Regarding the use of an abstract KDF: I was mostly just adapting the existing text, which used an abstract hash function if I recall correctly. I didn't touch the texts that got more concrete because they seemed to be a little outdated anyway. That shouldn't be a problem though.\nNAME Ok, looking at this now, I am not sure what we decided in the end. I would be ok to define as you said but as it is KEM specific so there is no good solution for P256 or for whatever PQ KEM that would have a specific way of generating keys from pseudo random bytes...\nNAME any suggestion here\nFixed by", "new_text": "An asymmetric public key The contents of the parent are based on the latest-updated child. Nodes in a tree are always updated along the \"direct path\" from a leaf to the root. The generator of the update chooses a random secret value \"path_secret[0]\", and generates a sequence of \"path secrets\", one for each node from the leaf to the root. That is, path_secret[0] is used for the leaf, path_secret[1] for its parent, and so on. At each step, the path secret is used to derive a new secret value for the corresponding node, from which the node's key pair is derived. 
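The chain of derivations just described can be sketched as follows; the concrete KDF and the labels ("path", "node") are placeholders for whatever the ciphersuite's KDF and the spec's labels end up being, and only the chaining structure is the point. The four-participant example that follows in the draft text walks through this same chain.

```python
# Sketch of the path_secret / node_secret chain described above.  The KDF and
# the label strings ("path", "node") are placeholders for the ciphersuite's
# actual derivations; only the chaining structure matters here.
import hmac, hashlib, os

def derive(secret: bytes, label: bytes) -> bytes:
    return hmac.new(secret, label, hashlib.sha256).digest()

def update_direct_path(depth: int):
    """Generate path secrets and node secrets from the leaf up to the root."""
    path_secret = [os.urandom(32)]       # path_secret[0], chosen by the sender
    node_secret = []
    for i in range(depth):
        node_secret.append(derive(path_secret[i], b"node"))  # ns[i] -> node key pair
        if i + 1 < depth:
            path_secret.append(derive(path_secret[i], b"path"))
    return path_secret, node_secret
```

Each ns[i] is then handed to whatever Derive-Key-Pair function the group's ciphersuite defines.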
For example, suppose there is a group with four participants: If the first participant subsequently generates an update based on a secret X, then the sender would generate the following sequence of path secrets and node secrets: After the update, the tree will have the following structure, where \"ns[i]\" represents the node_secret values generated as described above: 5.3."} {"id": "q-en-mls-protocol-2e5f6fe53f5428976b1e93d145ea70384ff66fd6db8831958174f68263cbd959", "old_text": "curves, they SHOULD perform the additional checks specified in Section 7 of Encryption keys are derived from shared secrets by taking the first 16 bytes of H(Z), where Z is the shared secret and H is SHA-256. 5.5.2. This ciphersuite uses the following primitives:", "comments": "Initial attempt to solve . This defines the KDF as discussed in and relocates the DH computation obligation and verification as part of the Derive-Key-Pair function definition.\nFixing my own review comments here, since NAME appears to be offline and we need to publish -04.\nThe PR from NAME introducing a KDF was merged before I had a chance to review it. There are a number of problems with it: Instead of calling for an abstract KDF, it should define one, as we have with the use of HKDF elsewhere We can't just say , since KDFs produce octet strings, not private keys. We need , not just . This doesn't match what I recall discussing on the list. I think what you want is actually as follows: Or in prose:\nThat was exactly the question I was asking on the list, i.e., if we need an explicit \"derive-keypair\" function. The idea with deriving the private key directly was to avoid a redundant derive-key operation. I'm totally fine re-doing the PR to represent either approach.\nMerging that PR too quickly is on me. For the parent derivation, we can just use what the formal model for TreeKEM does, which is roughly the Richard points out. The part I don't like about this, is to define in the MLS specification as it depends on the HPKE internal algorithm. The TreeKEM formal specification, it is done the following way (because we know that the HPKE interface we use can take random bytes as a secret key): In the MLS specification, I suggest we avoid defining anything new but point to the HPKE spec instead. As it provides the KEM scheme, it should also provide ways to derive the from an octet string and the function that transform an to a . I suggest we define, in the HPKE document, something like that we already need anyway and eventually that maps the random bytes to an . For X25519 we don't even need the second function but generically we probably need it. That would lead to define the following in the MLS spec: NAME ?\nI don't think we can delegate this to HPKE. HPKE doesn't have a need for anything like Derive-Key-Pair. We do, so it's on us to define it. The point NAME raises about avoiding unnecessary operations is relevant here: If we're forking the hashes like this, then we can remove the hashing part from Derive-Key-Pair, so that it's really just a definition of how you convert from an octet string to a private key (and thus its public key).\nI disagree, it is not for MLS to define the and functions for all asymmetric schemes possibly used in HPKE. In MLS we should have to do only what TreeKEM does, aka, 2 KDFs, one outputing the parent's secret, one outputing the KEM encryption key, and that's it. I am willing to accept defining here but this is already odd.\nIf it's not our job, whose is it? 
It's not HPKE's job, because they don't have a need to convert byte strings to private keys. Some document needs to define this conversion in order for MLS to be implementable.\nI agree we are in trouble here :) But I still wish that if we can avoid it, we should.\nI agree that the KEM schemes themselves should define these things, like Curve25519 does. But it's like two lines of text. I'm not too bothered with it.\nI think I'm a bit lost here. How do we go forward? Regarding the use of an abstract KDF: I was mostly just adapting the existing text, which used an abstract hash function if I recall correctly. I didn't touch the texts that got more concrete because they seemed to be a little outdated anyway. That shouldn't be a problem though.\nNAME Ok, looking at this now, I am not sure what we decided in the end. I would be ok to define as you said but as it is KEM specific so there is no good solution for P256 or for whatever PQ KEM that would have a specific way of generating keys from pseudo random bytes...\nNAME any suggestion here\nFixed by", "new_text": "curves, they SHOULD perform the additional checks specified in Section 7 of 5.5.2. This ciphersuite uses the following primitives:"} {"id": "q-en-mls-protocol-2e5f6fe53f5428976b1e93d145ea70384ff66fd6db8831958174f68263cbd959", "old_text": "elliptic curve equation. For these curves, implementers do not need to verify membership in the correct subgroup. Encryption keys are derived from shared secrets by taking the first 16 bytes of H(Z), where Z is the shared secret and H is SHA-256. 5.6. A member of a group authenticates the identities of other", "comments": "Initial attempt to solve . This defines the KDF as discussed in and relocates the DH computation obligation and verification as part of the Derive-Key-Pair function definition.\nFixing my own review comments here, since NAME appears to be offline and we need to publish -04.\nThe PR from NAME introducing a KDF was merged before I had a chance to review it. There are a number of problems with it: Instead of calling for an abstract KDF, it should define one, as we have with the use of HKDF elsewhere We can't just say , since KDFs produce octet strings, not private keys. We need , not just . This doesn't match what I recall discussing on the list. I think what you want is actually as follows: Or in prose:\nThat was exactly the question I was asking on the list, i.e., if we need an explicit \"derive-keypair\" function. The idea with deriving the private key directly was to avoid a redundant derive-key operation. I'm totally fine re-doing the PR to represent either approach.\nMerging that PR too quickly is on me. For the parent derivation, we can just use what the formal model for TreeKEM does, which is roughly the Richard points out. The part I don't like about this, is to define in the MLS specification as it depends on the HPKE internal algorithm. The TreeKEM formal specification, it is done the following way (because we know that the HPKE interface we use can take random bytes as a secret key): In the MLS specification, I suggest we avoid defining anything new but point to the HPKE spec instead. As it provides the KEM scheme, it should also provide ways to derive the from an octet string and the function that transform an to a . I suggest we define, in the HPKE document, something like that we already need anyway and eventually that maps the random bytes to an . For X25519 we don't even need the second function but generically we probably need it. 
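For the X25519 case being discussed, where a pseudorandom octet string can serve directly as the private key, a Derive-Key-Pair sketch is nearly trivial. The sketch below assumes the pyca/cryptography package and a 32-byte node secret; nothing similar works unchanged for P-256 or a PQ KEM, which need a scheme-specific mapping from octet strings to private keys.

```python
# Sketch of Derive-Key-Pair for the X25519 case discussed above, where the
# node secret (an octet string) can be used directly as the private key.
# Assumes the pyca/cryptography package; P-256 and PQ KEMs need a
# scheme-specific mapping instead.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives import serialization

def derive_key_pair(node_secret: bytes):
    """Map a 32-byte node secret to a private key object and raw public key."""
    private_key = X25519PrivateKey.from_private_bytes(node_secret[:32])
    public_bytes = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return private_key, public_bytes
```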
That would lead to define the following in the MLS spec: NAME ?\nI don't think we can delegate this to HPKE. HPKE doesn't have a need for anything like Derive-Key-Pair. We do, so it's on us to define it. The point NAME raises about avoiding unnecessary operations is relevant here: If we're forking the hashes like this, then we can remove the hashing part from Derive-Key-Pair, so that it's really just a definition of how you convert from an octet string to a private key (and thus its public key).\nI disagree, it is not for MLS to define the and functions for all asymmetric schemes possibly used in HPKE. In MLS we should have to do only what TreeKEM does, aka, 2 KDFs, one outputing the parent's secret, one outputing the KEM encryption key, and that's it. I am willing to accept defining here but this is already odd.\nIf it's not our job, whose is it? It's not HPKE's job, because they don't have a need to convert byte strings to private keys. Some document needs to define this conversion in order for MLS to be implementable.\nI agree we are in trouble here :) But I still wish that if we can avoid it, we should.\nI agree that the KEM schemes themselves should define these things, like Curve25519 does. But it's like two lines of text. I'm not too bothered with it.\nI think I'm a bit lost here. How do we go forward? Regarding the use of an abstract KDF: I was mostly just adapting the existing text, which used an abstract hash function if I recall correctly. I didn't touch the texts that got more concrete because they seemed to be a little outdated anyway. That shouldn't be a problem though.\nNAME Ok, looking at this now, I am not sure what we decided in the end. I would be ok to define as you said but as it is KEM specific so there is no good solution for P256 or for whatever PQ KEM that would have a specific way of generating keys from pseudo random bytes...\nNAME any suggestion here\nFixed by", "new_text": "elliptic curve equation. For these curves, implementers do not need to verify membership in the correct subgroup. 5.6. A member of a group authenticates the identities of other"} {"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "Rename the GroupState structure to GroupContext (*) draft-05 Common framing for handshake and application messages (*)", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nHere's the Weekly Digest for : - - Last week 5 issues were created. Of these, 1 issues have been closed and 4 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :heart: , by - - Last week, 9 pull requests were created, updated or merged. Last week, 3 pull requests were opened. :greenheart: , by :greenheart: , by :greenheart: , by Last week, 1 pull request was updated. :yellowheart: , by Last week, 5 pull requests were merged. :purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by - - Last week there were 8 commits. :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by - - Last week there were 2 contributors. :bustinsilhouette: :bustinsilhouette: - - Last week there was 1 stargazer. :star: You are the star! :star2: - - Last week there were no releases. 
These keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "Rename the GroupState structure to GroupContext (*) Rename UserInitKey to ClientInitKey draft-05 Common framing for handshake and application messages (*)"}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "A short-lived HPKE key pair used to introduce a new client to a group. Initialization keys are published for each client (UserInitKey). A secret that represent a member's contribution to the group secret (so called because the members' leaf keys are the leaves in", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "A short-lived HPKE key pair used to introduce a new client to a group. Initialization keys are published for each client (ClientInitKey). A secret that represent a member's contribution to the group secret (so called because the members' leaf keys are the leaves in"}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "Removing a member. Before the initialization of a group, clients publish UserInitKey objects to a directory provided to the Messaging Service. When a client A wants to establish a group with B and C, it first downloads UserInitKeys for B and C. It then initializes a group state containing only itself and uses the UserInitKeys to compute Welcome and Add messages to add B and C, in a sequence chosen by A. The Welcome messages are sent directly to the new members (there is no need to send them to the group). The Add messages are broadcasted", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "Removing a member. Before the initialization of a group, clients publish ClientInitKey objects to a directory provided to the Messaging Service. When a client A wants to establish a group with B and C, it first downloads ClientInitKeys for B and C. It then initializes a group state containing only itself and uses the ClientInitKeys to compute Welcome and Add messages to add B and C, in a sequence chosen by A. The Welcome messages are sent directly to the new members (there is no need to send them to the group). The Add messages are broadcasted"}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "update its state to reflect their addition. Subsequent additions of group members proceed in the same way. Any member of the group can download an UserInitKey for a new client and broadcast an Add message that the current group can use to update their state and the new client can use to initialize its state. To enforce forward secrecy and post-compromise security of messages,", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "update its state to reflect their addition. Subsequent additions of group members proceed in the same way. Any member of the group can download an ClientInitKey for a new client and broadcast an Add message that the current group can use to update their state and the new client can use to initialize its state. To enforce forward secrecy and post-compromise security of messages,"}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "In order to facilitate asynchronous addition of clients to a group, it is possible to pre-publish initialization keys that provide some public information about a user. UserInitKey messages provide information about a client that any existing member can use to add this client to the group asynchronously. A UserInitKey object specifies what ciphersuites a client supports, as well as providing public keys that the client can use for key derivation and signing. The client's identity key is intended to be stable throughout the lifetime of the group; there is no mechanism to change it. Init keys are intended to be used a very limited number of times, potentially once. (see init-key-reuse). UserInitKeys also contain an identifier chosen by the client, which the client MUST assure uniquely identifies a given UserInitKey object among the set of UserInitKeys created by this client. The init_keys array MUST have the same length as the cipher_suites array, and each entry in the init_keys array MUST be a public key for", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "In order to facilitate asynchronous addition of clients to a group, it is possible to pre-publish initialization keys that provide some public information about a user. ClientInitKey messages provide information about a client that any existing member can use to add this client to the group asynchronously. A ClientInitKey object specifies what ciphersuites a client supports, as well as providing public keys that the client can use for key derivation and signing. The client's identity key is intended to be stable throughout the lifetime of the group; there is no mechanism to change it. Init keys are intended to be used a very limited number of times, potentially once. (see init-key-reuse). ClientInitKeys also contain an identifier chosen by the client, which the client MUST assure uniquely identifies a given ClientInitKey object among the set of ClientInitKeys created by this client. The init_keys array MUST have the same length as the cipher_suites array, and each entry in the init_keys array MUST be a public key for"}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "and used in the HPKE construction for TreeKEM. The whole structure is signed using the client's identity key. A UserInitKey object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field. 8.", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "and used in the HPKE construction for TreeKEM. The whole structure is signed using the client's identity key. A ClientInitKey object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field. 8."}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "needs to initialize a GroupContext object that can be updated to the current state using the Add message. This information is encrypted for the new member using HPKE. The recipient key pair for the HPKE encryption is the one included in the indicated UserInitKey, corresponding to the indicated ciphersuite. In the description of the tree as a list of nodes, the \"credential\"", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "needs to initialize a GroupContext object that can be updated to the current state using the Add message. This information is encrypted for the new member using HPKE. The recipient key pair for the HPKE encryption is the one included in the indicated ClientInitKey, corresponding to the indicated ciphersuite. In the description of the tree as a list of nodes, the \"credential\""}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "The \"welcome_info_hash\" field contains a hash of the WelcomeInfo object sent in a Welcome message to the new member. A group member generates this message by requesting a UserInitKey from the directory for the user to be added, and encoding it into an Add message.", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "The \"welcome_info_hash\" field contains a hash of the WelcomeInfo object sent in a Welcome message to the new member. A group member generates this message by requesting a ClientInitKey from the directory for the user to be added, and encoding it into an Add message."}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "If the \"index\" value is equal to the size of the group, increment the size of the group, and extend the tree accordingly Verify the signature on the included UserInitKey; if the signature verification fails, abort Generate a WelcomeInfo object describing the state prior to the add, and verify that its hash is the same as the value of the", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "If the \"index\" value is equal to the size of the group, increment the size of the group, and extend the tree accordingly Verify the signature on the included ClientInitKey; if the signature verification fails, abort Generate a WelcomeInfo object describing the state prior to the add, and verify that its hash is the same as the value of the"}
{"id": "q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69", "old_text": "direct path of the new node Set the leaf node in the tree at position \"index\" to a new node containing the public key from the UserInitKey in the Add corresponding to the ciphersuite in use, as well as the credential under which the UserInitKey was signed The \"update_secret\" resulting from this change is an all-zero octet string of length Hash.length.", "comments": "Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and\nThese keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon.", "new_text": "direct path of the new node Set the leaf node in the tree at position \"index\" to a new node containing the public key from the ClientInitKey in the Add corresponding to the ciphersuite in use, as well as the credential under which the ClientInitKey was signed The \"update_secret\" resulting from this change is an all-zero octet string of length Hash.length."}
{"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "RFC EDITOR PLEASE DELETE THIS SECTION. draft-06 Reorder blanking and update in the Remove operation (*)", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "RFC EDITOR PLEASE DELETE THIS SECTION. draft-07 Initial version of the Tree based Application Key Schedule (*) Initial definition of the Init message for group creation (*) Fix issue with the transcript used for newcomers (*) Clarifications on message framing and HPKE contexts (*) draft-06 Reorder blanking and update in the Remove operation (*)"}
{"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "For application messages, a chain of keys is derived for each sender in a similar fashion. This allows forward secrecy at the level of application messages within and out of an epoch. A step in this chain (the second subscript) is called a \"generation\". As before the value [sender] represents the index of the member that will use this key to send, encoded as a uint32. [[ OPEN ISSUE: The HKDF context field is left empty for now. A proper security study is needed to make sure that we do not need more information in the context to achieve the security goals.]] [[ OPEN ISSUE: At the moment there is no contributivity of Application secrets chained from the initial one to the next generation of Epoch secret. While this seems safe because cryptographic operations using the application secrets can't affect the group init_secret, it remains to be proven correct. 
]] The following rules apply to the usage of the secrets, keys, and nonces derived above: Senders MUST only use a given secret once and monotonically increment the generation of their secret. This is important to provide Forward Secrecy at the level of Application messages. An attacker getting hold of a member specific Application Secret at generation [N+1] will not be able to derive the member's Application Secret [N] nor the associated AEAD key and nonce. Receivers MUST delete an Application Secret once it has been used to derive the corresponding AEAD key and nonce as well as the next Application Secret. Receivers MAY keep the AEAD key and nonce around for some reasonable period. Receivers MUST delete AEAD keys and nonces once they have been used to successfully decrypt a message. 7.", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "For application messages, a chain of keys is derived for each sender in a similar fashion. This allows forward secrecy at the level of application messages within and out of an epoch. A step in this chain (the second subscript) is called a \"generation\". The details of application key derivation are described in the astree section below. 7."} {"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "tree of Application Secrets as well as one symmetric ratchet per group member. Each client maintains their own local copy of (parts of) the Application Key Schedule for each epoch during which they are a group member. They derive new keys, nonces and secrets as needed while deleting old ones as soon as they have been used. Application messages MUST be protected with the Authenticated- Encryption with Associated-Data (AEAD) encryption scheme associated with the MLS ciphersuite. Note that \"Authenticated\" in this context does not mean messages are known to be sent by a specific client but only from a legitimate member of the group. To authenticate a message from a particular member, signatures are required. Handshake messages MUST use asymmetric signatures to strongly authenticate the sender of a message. 11.1. The Application schedule begins with the Application secrets which are arranged in an \"Application Secret Tree\" or AS Tree for short; a left balanced binary tree with the same set of nodes and edges as the epoch's ratchet tree. Each leaf in the AS Tree is associated with the same group member as the corresponding leaf in the ratchet tree. Nodes are also assigned an index according to their position in the array representation of the tree (described in tree-math). If N is a node index in the AS Tree then left(N) and right(N) denote the children of N (if they exist). Each node in the tree is assigned a secret. The root's secret is simply the application_secret of that epoch. (See key-schedule for", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "tree of Application Secrets as well as one symmetric ratchet per group member. Each client maintains their own local copy of the Application Key Schedule for each epoch during which they are a group member. They derive new keys, nonces and secrets as needed while deleting old ones as soon as they have been used. 
Application messages MUST be protected with the Authenticated- Encryption with Associated-Data (AEAD) encryption scheme associated with the MLS ciphersuite using the common framing mechanism. Note that \"Authenticated\" in this context does not mean messages are known to be sent by a specific client but only from a legitimate member of the group. To authenticate a message from a particular member, signatures are required. Handshake messages MUST use asymmetric signatures to strongly authenticate the sender of a message. 11.1. The application key schedule begins with the application secrets which are arranged in an \"Application Secret Tree\" or AS Tree for short; a left balanced binary tree with the same set of nodes and edges as the epoch's ratchet tree. Each leaf in the AS Tree is associated with the same group member as the corresponding leaf in the ratchet tree. Nodes are also assigned an index according to their position in the array representation of the tree (described in tree-math). If N is a node index in the AS Tree then left(N) and right(N) denote the children of N (if they exist). Each node in the tree is assigned a secret. The root's secret is simply the application_secret of that epoch. (See key-schedule for"} {"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "The secret of any other node in the tree is derived from its parent's secret using a call to Derive-App-Secret. If N is a node index in the ASTree then the secrets of the children of N are defined to be: Note that fixing concrete values for GroupState_[n] and application_secret completely defines all secrets in the AS Tree. 11.2.", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "The secret of any other node in the tree is derived from its parent's secret using a call to Derive-App-Secret. If N is a node index in the AS Tree then the secrets of the children of N are defined to be: Note that fixing concrete values for GroupContext_[n] and application_secret completely defines all secrets in the AS Tree. 11.2."} {"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "11.3. It is important to delete all security sensitive values as soon as they, or another value derived from them, is used for encryption or decryption.", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "11.3. It is important to delete all security sensitive values S as soon as they, or another value derived from them, is used for encryption or decryption."} {"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "encrypt or (successfully) decrypt a message or if a key, nonce or secret derived from S has been consumed. (This goes both for values derived via Derive-Secret and HKDF-Expand- Label). Here, S may be the init_secret, update_secret, epoch_secret, application_secret as well as any secret in the AS Tree or one of the", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. 
Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "encrypt or (successfully) decrypt a message or if a key, nonce or secret derived from S has been consumed. (This goes both for values derived via Derive-Secret and HKDF-Expand- Label.) Here, S may be the init_secret, update_secret, epoch_secret, application_secret as well as any secret in the AS Tree or one of the"}
{"id": "q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7", "old_text": "11.4. During each epoch senders MUST NOT encrypt more messages than permitted by the security bounds of the AEAD scheme used. Note that each change to the Group through a Handshake message will also set a new application_secret. Hence this change MUST be applied", "comments": "Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule.", "new_text": "11.4. During each epoch senders MUST NOT encrypt more data than permitted by the security bounds of the AEAD scheme used. Note that each change to the Group through a Handshake message will also set a new application_secret. Hence this change MUST be applied"}
{"id": "q-en-mls-protocol-f667bab8618acdfd804c2c91f52c1693bb723e96cd5eadc8afe1b81cdb1243e1", "old_text": "In this document, we describe a protocol based on tree structures that enable asynchronous group keying with forward secrecy and post- compromise security. Based on earlier work on \"asynchronous ratcheting trees\" art, the mechanism presented here use a asynchronous key-encapsulation mechanism for tree structures. This mechanism allows the members of the group to derive and update shared keys with costs that scale as the log of the group size.", "comments": "", "new_text": "In this document, we describe a protocol based on tree structures that enable asynchronous group keying with forward secrecy and post- compromise security. Based on earlier work on \"asynchronous ratcheting trees\" art, the protocol presented here uses an asynchronous key-encapsulation mechanism for tree structures. This mechanism allows the members of the group to derive and update shared keys with costs that scale as the log of the group size."}
{"id": "q-en-mls-protocol-f667bab8618acdfd804c2c91f52c1693bb723e96cd5eadc8afe1b81cdb1243e1", "old_text": "group. Initialization keys are published for each client (ClientInitKey). A secret that represent a member's contribution to the group secret (so called because the members' leaf keys are the leaves in the group's ratchet tree).", "comments": "", "new_text": "group. Initialization keys are published for each client (ClientInitKey). A secret that represents a member's contribution to the group secret (so called because the members' leaf keys are the leaves in the group's ratchet tree)."}
{"id": "q-en-mls-protocol-f667bab8618acdfd804c2c91f52c1693bb723e96cd5eadc8afe1b81cdb1243e1", "old_text": "The recipient of an update processes it with the following steps: Compute the updated path secrets * Identify a node in the direct path for which the local member is in the subtree of the non- updated child * Identify a node in the resolution of the copath node for which this node has a private key * Decrypt the path secret for the parent of the copath node using the private key from the resolution node * Derive path secrets for ancestors of that node using the algorithm described above * The recipient SHOULD verify that the received public keys agree with the public keys derived from the new node_secret values Merge the updated path secrets into the tree * Replace the public keys for nodes on the direct path with the received public keys * For nodes where an updated path secret was computed in step 1, compute the corresponding node secret and node key pair and replace the values stored at the node with the computed values. For example, in order to communicate the example update described in the previous section, the sender would transmit the following values: In this table, the value pk(X) represents the public key corresponding derived from the node secret X. The value E(K, S) represents the public-key encryption of the path secret S to the public key K. 6.", "comments": "", "new_text": "The recipient of an update processes it with the following steps: Compute the updated path secrets. Identify a node in the direct path for which the local member is in the subtree of the non-updated child. Identify a node in the resolution of the copath node for which this node has a private key. Decrypt the path secret for the parent of the copath node using the private key from the resolution node. Derive path secrets for ancestors of that node using the algorithm described above. The recipient SHOULD verify that the received public keys agree with the public keys derived from the new node_secret values. Merge the updated path secrets into the tree. Replace the public keys for nodes on the direct path with the received public keys. For nodes where an updated path secret was computed in step 1, compute the corresponding node secret and node key pair and replace the values stored at the node with the computed values. For example, in order to communicate the example update described in the previous section, the sender would transmit the following values: In this table, the value pk(X) represents the public key derived from the node secret X. The value E(K, S) represents the public-key encryption of the path secret S to the public key K. 6."}
{"id": "q-en-mls-protocol-f667bab8618acdfd804c2c91f52c1693bb723e96cd5eadc8afe1b81cdb1243e1", "old_text": "6.1. This ciphersuite uses the following primitives: Hash function: SHA-256", "comments": "", "new_text": "6.1. 6.1.1. This ciphersuite uses the following primitives: Hash function: SHA-256"}
{"id": "q-en-mls-protocol-f667bab8618acdfd804c2c91f52c1693bb723e96cd5eadc8afe1b81cdb1243e1", "old_text": "curves, they SHOULD perform the additional checks specified in Section 7 of 6.1.1. This ciphersuite uses the following primitives:", "comments": "", "new_text": "curves, they SHOULD perform the additional checks specified in Section 7 of 6.1.2. This ciphersuite uses the following primitives:"}
{"id": "q-en-mls-protocol-f667bab8618acdfd804c2c91f52c1693bb723e96cd5eadc8afe1b81cdb1243e1", "old_text": "The creator of the group constructs an Init message as follows: Fetch a UserInitKey for each member (including the creator) Identify a protocol version and cipher suite that is supported by all proposed members. Construct a ratchet tree with its leaves populated with the public keys and credentials from the UserInitKeys of the members, and all other nodes blank. Generate a fresh leaf key pair for the first leaf", "comments": "", "new_text": "The creator of the group constructs an Init message as follows: Fetch a ClientInitKey for each member (including the creator) Identify a protocol version and cipher suite that is supported by all proposed members. Construct a ratchet tree with its leaves populated with the public keys and credentials from the ClientInitKeys of the members, and all other nodes blank. Generate a fresh leaf key pair for the first leaf"}
{"id": "q-en-mls-protocol-bfbcce22213678ebe8db2f928b3178bf5affab4dff43708782ed70f48b817cce", "old_text": "Decompose group operations into Proposals and Commits (*) draft-07 Initial version of the Tree based Application Key Schedule (*)", "comments": "Depends on\nLong-term inactive users undermine the FS and PCS properties of the protocol. Obviously, users can remove each other if they notice that a participant is inactive. We should consider whether we want to allow the server to do such a removal.\nDiscussion at interim 2019-01: Could do this as \"server-instructed\" vs. \"server done\" i.e., server instructs a client to do a remove But this causes some ambiguity w.r.t. the rest of the group The only difference between Remove and a server-initiated variant would be signature Other use cases: User deletes account User is no longer authorized to be in group Application would need to set policy about whether / when server-initiated actions would be allowed\nI'm assigning this to draft-04 under the theory that the signature changes that will come about as a result of will make it straightforward to have an additional key for the server that can be used to sign Adds / Removes. If that doesn't turn out to be the case, this might get deferred.\nAfter discussion with NAME and NAME There will be a need to signal that a non-member key is being used, e.g., with some reserved values Do the participants in the group need to agree the set of allowed non-member signers? If some members accept a signer, others don't, then you can get partition -04 will focus on Remove, not Add, and punt on the agreement question; we assume the application maintains consistency of the view of authorized signers.\nWe should push some of this to the application layer in order to not introduce a new handshake message with problematic authenticity (agreement on the list of non-members who can sign handshake messages). The server could publish an \"intent to remove\" that will be honored by the first client to come online. The actual Remove HS message will be issued by a member of the group. It can additionally be attached to the server intent to remove, so that clients can convey more contextual information to users. Example: Server issues the intent to remove Alice from the group. Bob comes online first after that and send a regular Remove HS message to remove Alice and links it to the sever intent. Other members of the group can now display \"Alice was removed\" instead of \"Bob removed Alice\" to the user. In this example Bob is the first member to come online, but it could really be any other member. This has the advantage that the protocol remains unaffected as such, while the desired behavior is still achieved.\nCurrently, in order for a new member to join a group, an existing member needs to add them to the group. Certain use cases work more naturally if a new joiner can initiate the add process. I think that broadly speaking we want a process here where someone outside the group can (1) request to be added and (2) send to the group, but cannot receive from the group until added. Some things to consider here: Anything that is revealed to a joiner that joins this way is effectively public. Should they be able to see, e.g., the roster? How should these interactions relate to the key schedule? Assuming there's some message, should it be included into the transcript at Add time? What happens if there are multiple parallel requests?\nAt the 2019-01 interim, we discussed a \"reversed Add\" flow, where: Group publishes an InitKey The new joiner sends a UIK to the group and establishes a shared secret with the group This initiates a period where the new joiner can send, but nobody in the group can Before anyone in the group can send, they have to send a Welcome for the new joiner ... which initiates an epoch where they are fully joined\nIn certain use cases, such as enterprise messaging there may/could/might be a problem if \"Any member of the group can download an ClientInitKey for a new client and broadcast an Add message\". An enterprise may wish to enforce access restrictions for certain information such as ClientInitKey. Just thinking about security related issues that may be most easily noticed by corp IT types. On the other hand this may be best dealt with outside the protocol scope.\nNAME I think this is fixed by correct? A user can propose to Add herself.\nPartly addressed by . To complete the story, we need a \"send-to-group-from-outside\" mechanism. Leaving this open until we have that.\nI don't think a \"send-to-group-from-outside\" mechanism will be possible, given that we try quite hard to prevent any key material from inside the group from ever being published.\nThe signing key of external actors will be contained in the leaf in the case it is a CIK.", "new_text": "Decompose group operations into Proposals and Commits (*) Enable Add and Remove proposals from outside the group (*) draft-07 Initial version of the Tree based Application Key Schedule (*)"}
{"id": "q-en-mls-protocol-bfbcce22213678ebe8db2f928b3178bf5affab4dff43708782ed70f48b817cce", "old_text": "Blank the intermediate nodes along the path from the removed leaf to the root 10.2. A Commit message initiates a new epoch for the group, based on a", "comments": "Depends on\nLong-term inactive users undermine the FS and PCS properties of the protocol. Obviously, users can remove each other if they notice that a participant is inactive. We should consider whether we want to allow the server to do such a removal.\nDiscussion at interim 2019-01: Could do this as \"server-instructed\" vs. \"server done\" i.e., server instructs a client to do a remove But this causes some ambiguity w.r.t. the rest of the group The only difference between Remove and a server-initiated variant would be signature Other use cases: User deletes account User is no longer authorized to be in group Application would need to set policy about whether / when server-initiated actions would be allowed\nI'm assigning this to draft-04 under the theory that the signature changes that will come about as a result of will make it straightforward to have an additional key for the server that can be used to sign Adds / Removes. If that doesn't turn out to be the case, this might get deferred.\nAfter discussion with NAME and NAME There will be a need to signal that a non-member key is being used, e.g., with some reserved values Do the participants in the group need to agree the set of allowed non-member signers? If some members accept a signer, others don't, then you can get partition -04 will focus on Remove, not Add, and punt on the agreement question; we assume the application maintains consistency of the view of authorized signers.\nWe should push some of this to the application layer in order to not introduce a new handshake message with problematic authenticity (agreement on the list of non-members who can sign handshake messages). The server could publish an \"intent to remove\" that will be honored by the first client to come online. The actual Remove HS message will be issued by a member of the group. It can additionally be attached to the server intent to remove, so that clients can convey more contextual information to users. Example: Server issues the intent to remove Alice from the group. Bob comes online first after that and send a regular Remove HS message to remove Alice and links it to the sever intent. Other members of the group can now display \"Alice was removed\" instead of \"Bob removed Alice\" to the user. In this example Bob is the first member to come online, but it could really be any other member. 
This has the advantage that the protocol remains unaffected as such, while the desired behavior is still achieved.\nCurrently, in order for a new member to join a group, an existing member needs to add them to the group. Certain use cases work more naturally if a new joiner can initiate the add process. I think that broadly speaking we want a process here where someone outside the group can (1) request to be added and (2) send to the group, but cannot receive from the group until added. Some things to consider here: Anything that is revealed to a joiner that joins this way is effectively public. Should they be able to see, e.g., the roster? How should these interactions relate to the key schedule? Assuming there's some message, should it be included into the transcript at Add time? What happens if there are multiple parallel requests?\nAt the 2019-01 interim, we discussed a \"reversed Add\" flow, where: Group publishes an InitKey The new joiner sends a UIK to the group and establishes a shared secret with the group This initiates a period where the new joiner can send, but nobody in the group can Before anyone in the group can send, they have to send a Welcome for the new joiner ... which initiates an epoch where they are fully joined\nIn certain use cases, such as enterprise messaging there may/could/might be a problem if \"Any member of the group can download an ClientInitKey for a new client and broadcast an Add message\". An enterprise may wish to enforce access restrictions for certain information such as ClientInitKey. Just thinking about security related issues that may be most easily noticed by corp IT types. On the other hand this may be best dealt with outside the protocol scope.\nNAME I think this is fixed by correct? A user can propose to Add herself.\nPartly addressed by . To complete the story, we need a \"send-to-group-from-outside\" mechanism. Leaving this open until we have that.\nI don't think a \"send-to-group-from-outside\" mechanism will be possible, given that we try quite hard to prevent any key material from inside the group from ever being published.\nThe signing key of external actors will be contained in the leaf in the case it is a CIK.", "new_text": "Blank the intermediate nodes along the path from the removed leaf to the root 10.1.4. Add and Remove proposals can be constructed and sent to the group by a party that is outside the group. For example, a Delivery Service might propose to remove a member of a group has been inactive for a long time, or propose adding a newly-hired staff member to a group representing a real-world team. Proposals originating outside the group are identified by having a \"sender\" value in the range 0xFFFFFF00 - 0xFFFFFFFF. The specific value 0xFFFFFFFF is reserved for clients proposing that they themselves be added. Proposals with types other than Add MUST NOT be sent with this sender index. In such cases, the MLSPlaintext MUST be signed with the private key corresponding to the ClientInitKey in the Add message. Recipients MUST verify that the MLSPlaintext carrying the Proposal message is validly signed with this key. The remaining values 0xFFFFFF00 - 0xFFFFFFFE are reserved for signer that are pre-provisioned to the clients within a group. If proposals with these sender IDs are to be accepted within a group, the members of the group MUST be provisioned by the application with a mapping between sender indices in this range and authorized signing keys. 
To ensure consistent handling of external proposals, the application MUST ensure that the members of a group have the same mapping and apply the same policies to external proposals. An external proposal MUST be sent as an MLSPlaintext object, since the sender will not have the keys necessary to construct an MLSCiphertext object. [[ TODO: Should recognized external signers be added to some object that the group explicitly agrees on, e.g., as an extension to the GroupContext? ]] 10.2. A Commit message initiates a new epoch for the group, based on a"}
{"id": "q-en-mls-protocol-45cd311b9f960d27661b8a43352e8bd68e3a1b95e57229259ccbba42aa06f2ad", "old_text": "A hash function A Diffie-Hellman finite-field group or elliptic curve An AEAD encryption algorithm", "comments": "This PR begins filling in the IANA Considerations section, starting with a Ciphersuites registry. This addresses the concerns in as discussed at the interim (2019-10), namely by reserving for vendor use chunk of the code points space large enough to be selected at random without huge risk of collision (2^12 values). Depends on\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today", "new_text": "A hash function A Diffie-Hellman finite-field group or elliptic curve group An AEAD encryption algorithm"}
{"id": "q-en-mls-protocol-45cd311b9f960d27661b8a43352e8bd68e3a1b95e57229259ccbba42aa06f2ad", "old_text": "14. TODO: Registries for protocol parameters, e.g., ciphersuites ", "comments": "This PR begins filling in the IANA Considerations section, starting with a Ciphersuites registry. This addresses the concerns in as discussed at the interim (2019-10), namely by reserving for vendor use chunk of the code points space large enough to be selected at random without huge risk of collision (2^12 values). Depends on\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today", "new_text": "14. This document requests the creation of the following new IANA registries: MLS Ciphersuites All of these registries should be under a heading of \"Message Layer Security\", and administered under a Specification Required policy RFC8126. 14.1. The \"MLS Ciphersuites\" registry lists identifiers for suites of cryptographic algorithms defined for use with MLS. These are two- byte values, so the maximum possible value is 0xFFFF = 65535. Values in the range 0xF000 - 0xFFFF are reserved for vendor-internal usage. Template: Value: The two-byte identifier for the ciphersuite Name: The name of the ciphersuite Reference: Where this algorithm is defined The initial contents for this registry are as follows: [[ Note to RFC Editor: Please replace \"XXXX\" above with the number assigned to this RFC. ]] "}
{"id": "q-en-mls-protocol-87ac8082b0a886bbc4f70370ab515dc55db0cffb45d5ce667b2e33caac74be82", "old_text": "where \"node_public_key\" is the public key of the node that the path secret is being encrypted for, group_context is the current GroupContext object for the group, and the functions \"SetupBaseI\" and \"Seal\" are defined according to I-D.irtf-cfrg-hpke. 
Decryption is performed in the corresponding way, using the private", "comments": "In the latest HPKE draft (draft-irtf-cfrg-hpke-04), \"Initiator (I)\" was renamed to \"Sender (S)\".", "new_text": "where \"node_public_key\" is the public key of the node that the path secret is being encrypted for, group_context is the current GroupContext object for the group, and the functions \"SetupBaseS\" and \"Seal\" are defined according to I-D.irtf-cfrg-hpke. Decryption is performed in the corresponding way, using the private"} {"id": "q-en-mls-protocol-f74fc1496661537e069e77d29cc66333c72852e72ad39f28032ab6d4909a1815", "old_text": "left and right children, respectively. When computing the hash of a leaf node, the hash of a \"LeafNodeHashInput\" object is used: 7.6. Each member of the group maintains a GroupContext object that", "comments": "NAME - I seem to recall you had some opinions on node hashing?\nRight now the LeafNodeHashInput struct uses the index of the node among the leaves, while the ParentNodeHashInput struct uses the index of the node among all nodes. This means that, for example, the index value will be the same for the second leaf and its parent ( in both cases). We should probably just use the node index in both cases.\nURL is a better solution", "new_text": "left and right children, respectively. When computing the hash of a leaf node, the hash of a \"LeafNodeHashInput\" object is used: Note that the \"node_index\" field contains the index of the leaf among the nodes in the tree, not its index among the leaves; \"node_index = 2 * leaf_index\". 7.6. Each member of the group maintains a GroupContext object that"} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "An AEAD encryption algorithm A signature algorithm The HPKE parameters are used to instantiate HPKE I-D.irtf-cfrg-hpke", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. 
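The node-hashing fix recorded above pins down that LeafNodeHashInput carries the index of the leaf among all nodes of the tree, with node_index = 2 * leaf_index, so the second leaf no longer shares an index with its parent. A minimal sketch of that mapping under the array view of the tree (leaves at even positions, parents at odd ones); only the quoted relation is taken from the record, the helper names are mine:

```python
# Illustrative sketch (not normative): node_index = 2 * leaf_index, as
# stated in the record above.

def leaf_to_node_index(leaf_index: int) -> int:
    """Index of a leaf among all nodes in the tree."""
    return 2 * leaf_index

def node_to_leaf_index(node_index: int) -> int:
    """Inverse mapping; only defined for even (leaf) node indices."""
    if node_index % 2 != 0:
        raise ValueError("odd node indices are parent nodes, not leaves")
    return node_index // 2

if __name__ == "__main__":
    # The second leaf (leaf_index = 1) sits at node_index = 2, distinct
    # from its parent's index once both hashes use node indices.
    assert leaf_to_node_index(1) == 2
    assert node_to_leaf_index(2) == 1
```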
Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "An AEAD encryption algorithm A hash algorithm A signature algorithm The HPKE parameters are used to instantiate HPKE I-D.irtf-cfrg-hpke"} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "Application messages The sender information used to look up the key for the content encryption is encrypted under AEAD using a random nonce and the \"sender_data_key\" which is derived from the \"sender_data_secret\" as follows: For handshake and application messages, a sequence of keys is derived via a \"sender ratchet\". Each sender has their own sender ratchet,", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "Application messages The sender information used to look up the key for content encryption is encrypted with an AEAD where the key and nonce are derived from both \"sender_data_secret\" and a sample of the encrypted message content. For handshake and application messages, a sequence of keys is derived via a \"sender ratchet\". Each sender has their own sender ratchet,"} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "each key/nonce pair MUST NOT be used to encrypt more than one message. Keys, nonces and secrets of ratchets are derived using DeriveAppSecret. The context in a given call consists of the index of the sender's leaf in the ratchet tree and the current position in the ratchet. 
In particular, the index of the sender's leaf in the", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "each key/nonce pair MUST NOT be used to encrypt more than one message. Keys, nonces, and the secrets in ratchets are derived using DeriveAppSecret. The context in a given call consists of the index of the sender's leaf in the ratchet tree and the current position in the ratchet. In particular, the index of the sender's leaf in the"} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "Set group_id, epoch, content_type and authenticated_data fields from the MLSPlaintext object directly Randomly generate the sender_data_nonce field Identify the key and key generation depending on the content type Encrypt an MLSSenderData object for the encrypted_sender_data field from MLSPlaintext and the key generation Generate and sign an MLSPlaintextTBS object from the MLSPlaintext object Encrypt an MLSCiphertextContent for the ciphertext field using the key identified, the signature, and MLSPlaintext object Decryption is done by decrypting the metadata, then the message, and then verifying the content signature. The following sections describe the encryption and signing processes in detail. 8.1. The \"sender data\" used to look up the key for the content encryption is encrypted under AEAD using the MLSCiphertext sender_data_nonce and the sender_data_key from the keyschedule. It is encoded as an object of the following form: MLSSenderData.sender is assumed to be a \"member\" sender type. When constructing an MLSSenderData from a Sender object, the sender MUST verify Sender.sender_type is \"member\" and use Sender.sender for MLSSenderData.sender. The \"reuse_guard\" field contains a fresh random value used to avoid nonce reuse in the case of state loss or corruption, as described in content-signing-and-encryption. 
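The ratchet description above says each key/nonce pair is derived via DeriveAppSecret with a context made of the sender's leaf index and the current position (generation) in the ratchet, after which the ratchet secret advances so old keys can be deleted. The sketch below is one way to realize that shape; the label strings, lengths, and encodings are invented for the example and are not the draft's normative DeriveAppSecret definition.

```python
import hashlib
import hmac

# Rough sketch, assuming an HKDF-Expand-style PRF and made-up labels.

def hkdf_expand(secret: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def derive_app_secret(secret: bytes, label: str, node: int, generation: int, length: int) -> bytes:
    # Context = sender's leaf (node) index plus position in the ratchet.
    info = b"mls-example " + label.encode() + node.to_bytes(4, "big") + generation.to_bytes(4, "big")
    return hkdf_expand(secret, info, length)

def ratchet_step(secret: bytes, node: int, generation: int):
    key = derive_app_secret(secret, "app-key", node, generation, 16)
    nonce = derive_app_secret(secret, "app-nonce", node, generation, 12)
    next_secret = derive_app_secret(secret, "app-secret", node, generation, 32)
    return key, nonce, next_secret   # old secret can now be discarded
```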
The Additional Authenticated Data (AAD) for the SenderData ciphertext computation is its prefix in the MLSCiphertext, namely: When parsing a SenderData struct as part of message decryption, the recipient MUST verify that the sender field represents an occupied leaf in the ratchet tree. In particular, the sender index value MUST be less than the number of leaves in the tree. 8.2. The signature field in an MLSPlaintext object is computed using the signing private key corresponding to the credential at the leaf in the tree indicated by the sender field. The signature covers the plaintext metadata and message content, which is all of MLSPlaintext", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "Set group_id, epoch, content_type and authenticated_data fields from the MLSPlaintext object directly Identify the key and key generation depending on the content type Encrypt an MLSCiphertextContent for the ciphertext field using the key identified and MLSPlaintext object Encrypt the sender data using a key and nonce derived from the \"sender_data_secret\" for the epoch and a sample of the encrypted MLSCiphertextContent. Decryption is done by decrypting the sender data, then the message, and then verifying the content signature. The following sections describe the encryption and signing processes in detail. 8.1. The \"signature\" field in an MLSPlaintext object is computed using the signing private key corresponding to the credential at the leaf in the tree indicated by the sender field. The signature covers the plaintext metadata and message content, which is all of MLSPlaintext"} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "GroupContext for the current epoch, so that signatures are specific to a given group and epoch. The ciphertext field of the MLSCiphertext object is produced by supplying the inputs described below to the AEAD function specified by the ciphersuite in use. The plaintext input contains content and signature of the MLSPlaintext, plus optional padding. 
These values", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "GroupContext for the current epoch, so that signatures are specific to a given group and epoch. 8.2. The \"ciphertext\" field of the MLSCiphertext object is produced by supplying the inputs described below to the AEAD function specified by the ciphersuite in use. The plaintext input contains content and signature of the MLSPlaintext, plus optional padding. These values"} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "contains an object of the following form, with the values used to identify the key and nonce: The ciphertext field of the MLSCiphertext object is produced by supplying these inputs to the AEAD function specified by the ciphersuite in use. 9.", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. 
The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "contains an object of the following form, with the values used to identify the key and nonce: 8.3. The \"sender data\" used to look up the key for the content encryption is encrypted with the ciphersuite's AEAD with a key and nonce derived from both the \"sender_data_secret\" and a sample of the encrypted content. Before being encrypted, the sender data is encoded as an object of the following form: MLSSenderData.sender is assumed to be a \"member\" sender type. When constructing an MLSSenderData from a Sender object, the sender MUST verify Sender.sender_type is \"member\" and use Sender.sender for MLSSenderData.sender. The \"reuse_guard\" field contains a fresh random value used to avoid nonce reuse in the case of state loss or corruption, as described in content-encryption. The key and nonce provided to the AEAD are computed as the KDF of the first \"KDF.Nh\" bytes of the ciphertext generated in the previous section. If the length of the ciphertext is less than \"KDF.Nh\", the whole ciphertext is used without padding. In pseudocode, the key and nonce are derived as: ``` ciphertext_sample = ciphertext[0..KDF.Nh-1] sender_data_key = ExpandWithLabel(sender_data_secret, \"key\", ciphertext_sample, AEAD.Nk) sender_data_nonce = ExpandWithLabel(sender_data_secret, \"nonce\", ciphertext_sample, AEAD.Nn) ``` The Additional Authenticated Data (AAD) for the SenderData ciphertext is all the fields of MLSCiphertext excluding \"encrypted_sender_data\": When parsing a SenderData struct as part of message decryption, the recipient MUST verify that the sender field represents an occupied leaf in the ratchet tree. In particular, the sender index value MUST be less than the number of leaves in the tree. 9."} {"id": "q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877", "old_text": "The mandatory-to-implement ciphersuite for MLS 1.0 is \"MLS10_128_HPKE25519_AES128GCM_SHA256_Ed25519\" which uses Curve25519, HKDF over SHA2-256 and AES-128-GCM for HPKE, and AES- 128-GCM with Ed25519 for symmetric encryption and signatures. Values with the first byte 255 (decimal) are reserved for Private Use.", "comments": "Just to recap the options as I understand them: a) Today: encryptedsenderdata = AEAD(senderdatakey, senderdatanonce, (groupID, epoch), senderdata) encryptedcontent = AEAD(appkey[i][j], appkey[i][j], (groupID, epoch, encryptedsenderdata), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encryptedcontent = AEAD(appkey[i][j], appnonce[i][j], (groupID, epoch), content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encryptedcontent = Enc(appkey[i][j], appnonce[i][j], content) encryptedsenderdata = AEAD(senderdatakey, sample(encryptedcontent), (groupID, epoch, encryptedcontent), senderdata)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. 
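The new_text quoted just above gives the masking-style derivation in pseudocode: sample the first KDF.Nh bytes of the content ciphertext (or the whole ciphertext if shorter) and expand sender_data_secret into the key and nonce that protect the sender data. A direct, non-normative Python rendering follows; the ExpandWithLabel framing and label prefix are simplified stand-ins, since the exact label encoding belongs to the key schedule.

```python
import hashlib
import hmac

# Sketch of: ciphertext_sample = ciphertext[0..KDF.Nh-1]
#            sender_data_key   = ExpandWithLabel(sender_data_secret, "key",   sample, AEAD.Nk)
#            sender_data_nonce = ExpandWithLabel(sender_data_secret, "nonce", sample, AEAD.Nn)

NH = hashlib.sha256().digest_size   # KDF.Nh for a SHA-256 based suite
AEAD_NK, AEAD_NN = 16, 12           # e.g. AES-128-GCM key and nonce sizes

def expand_with_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    # Simplified label framing; the real encoding is defined by the key schedule.
    info = length.to_bytes(2, "big") + b"mls10 " + label.encode() + context
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([i]), hashlib.sha256).digest()
        out += block
        i += 1
    return out[:length]

def sender_data_key_nonce(sender_data_secret: bytes, ciphertext: bytes):
    sample = ciphertext[:NH]   # whole ciphertext, unpadded, if it is shorter
    key = expand_with_label(sender_data_secret, "key", sample, AEAD_NK)
    nonce = expand_with_label(sender_data_secret, "nonce", sample, AEAD_NN)
    return key, nonce
```

Deriving both values from a ciphertext sample is what lets the explicit sender_data_nonce field be dropped, which is the source of the overhead savings discussed in this thread.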
Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypt not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce. Given this new analysis, is it time to bring back masking?\nFixed by", "new_text": "The mandatory-to-implement ciphersuite for MLS 1.0 is \"MLS10_128_HPKE25519_AES128GCM_SHA256_Ed25519\" which uses Curve25519 for key exchange, AES-128-GCM for HPKE, HKDF over SHA2-256, AES for metadata masking, and Ed25519 for signatures. Values with the first byte 255 (decimal) are reserved for Private Use."} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "Provider (SP) as described in I-D.ietf-mls-architecture. In particular, we assume the SP provides the following services: A long-term identity key provider which allows clients to authenticate protocol messages in a group. A broadcast channel, for each group, which will relay a message to", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. 
Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "Provider (SP) as described in I-D.ietf-mls-architecture. In particular, we assume the SP provides the following services: A long-term signature key provider which allows clients to authenticate protocol messages in a group. A broadcast channel, for each group, which will relay a message to"} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "A member of a group authenticates the identities of other participants by means of credentials issued by some authentication system, like a PKI. Each type of credential MUST express the following data: The public key of a signature key pair The identity of the holder of the private key The signature scheme that the holder will use to sign MLS messages Credentials MAY also include information that allows a relying party to verify the identity / signing key binding. A BasicCredential is a raw, unauthenticated assertion of an identity/ key binding. The format of the key in the \"public_key\" field is defined by the relevant ciphersuite: the group ciphersuite for a", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. 
I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "A member of a group authenticates the identities of other participants by means of credentials issued by some authentication system, like a PKI. Each type of credential MUST express the following data in the context of the group it is used with: The public key of a signature key pair matching the SignatureScheme specified by the CipherSuite of the group The identity of the holder of the private keys Credentials MAY also include information that allows a relying party to verify the identity / signing key binding. 
Additionally, Credentials SHOULD specify the signature scheme corresponding to each contained public key. A BasicCredential is a raw, unauthenticated assertion of an identity/ key binding. The format of the key in the \"public_key\" field is defined by the relevant ciphersuite: the group ciphersuite for a"} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "A KeyPackage object specifies a ciphersuite that the client supports, as well as providing a public key that others can use for key agreement. The client's identity key can be updated throughout the lifetime of the group by sending a new KeyPackage with a new identity; the new identity MUST be validated by the authentication service. When used as InitKeys, KeyPackages are intended to be used only once and SHOULD NOT be reused except in case of last resort. (See", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. 
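Per the credential text above, a credential must convey the holder's identity and a signature public key matching the group's ciphersuite, and a BasicCredential is simply a raw, unauthenticated assertion of that identity/key binding. A hedged sketch of that shape; the field layout and the integer signature-scheme code points are illustrative, not the draft's wire format.

```python
from dataclasses import dataclass

# Non-normative sketch of the data a credential conveys in a group.

@dataclass(frozen=True)
class BasicCredential:
    identity: bytes          # application-defined identifier of the holder
    signature_scheme: int    # e.g. a TLS-style SignatureScheme code point
    public_key: bytes        # format defined by the group's ciphersuite

def usable_in_group(cred: BasicCredential, group_signature_scheme: int) -> bool:
    """A credential is usable in a group only if its key matches the
    signature scheme selected by the group's ciphersuite."""
    return cred.signature_scheme == group_signature_scheme
```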
Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "A KeyPackage object specifies a ciphersuite that the client supports, as well as providing a public key that others can use for key agreement. The client's signature key can be updated throughout the lifetime of the group by sending a new KeyPackage with a new signature key; the new signature key MUST be validated by the authentication service. When used as InitKeys, KeyPackages are intended to be used only once and SHOULD NOT be reused except in case of last resort. (See"} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "The value for hpke_init_key MUST be a public key for the asymmetric encryption scheme defined by cipher_suite. The whole structure is signed using the client's identity key. A KeyPackage object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field.", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. 
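The KeyPackage text above notes that prepublished KeyPackages used as InitKeys are intended to be used only once and should not be reused except as a last resort. A toy sketch of a directory enforcing that guidance; the class and its API are invented for the example.

```python
# Sketch (not from the document): a toy pool of prepublished KeyPackages
# that hands each one out once, falling back to a designated last-resort
# package only when the pool is exhausted.

class KeyPackageDirectory:
    def __init__(self, one_time: list[bytes], last_resort: bytes):
        self._pool = list(one_time)
        self._last_resort = last_resort

    def claim(self) -> bytes:
        if self._pool:
            return self._pool.pop()   # consumed, never handed out again
        return self._last_resort      # reuse only as a last resort

if __name__ == "__main__":
    d = KeyPackageDirectory([b"kp1", b"kp2"], last_resort=b"kp-last")
    assert {d.claim(), d.claim()} == {b"kp1", b"kp2"}
    assert d.claim() == b"kp-last"
```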
Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "The value for hpke_init_key MUST be a public key for the asymmetric encryption scheme defined by cipher_suite. The whole structure is signed using the client's signature key. A KeyPackage object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field."} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "creator, because they are derived from an authenticated key exchange protocol. Subsequent leaf keys are known only by their owner. 
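The KeyPackage signing rule quoted above says the signature input comprises every field except the signature itself, and that a KeyPackage with an invalid signature must be treated as malformed. The sketch below illustrates that rule with pluggable sign/verify callables; the length-prefixed serialization is a stand-in, not the TLS presentation-language encoding the draft actually uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class KeyPackage:
    cipher_suite: int
    hpke_init_key: bytes
    credential: bytes
    signature: bytes = b""

def to_be_signed(kp: KeyPackage) -> bytes:
    # Stand-in serialization: every field except the signature.
    def opaque(b: bytes) -> bytes:
        return len(b).to_bytes(2, "big") + b
    return kp.cipher_suite.to_bytes(2, "big") + opaque(kp.hpke_init_key) + opaque(kp.credential)

def sign_key_package(kp: KeyPackage, sign: Callable[[bytes], bytes]) -> None:
    kp.signature = sign(to_be_signed(kp))

def verify_key_package(kp: KeyPackage, verify: Callable[[bytes, bytes], bool]) -> bool:
    # A KeyPackage with an invalid signature is considered malformed.
    return bool(kp.signature) and verify(to_be_signed(kp), kp.signature)
```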
Note that the long-term identity keys used by the protocol MUST be distributed by an \"honest\" authentication service for clients to authenticate their legitimate peers.", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. 
Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "creator, because they are derived from an authenticated key exchange protocol. Subsequent leaf keys are known only by their owner. Note that the long-term signature keys used by the protocol MUST be distributed by an \"honest\" authentication service for clients to authenticate their legitimate peers."} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "The second form considers authentication with respect to the sender, meaning the group members can verify that a message originated from a particular member of the group. This property is provided by digital signatures on the messages under identity keys. 14.3.", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. 
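The interim note above records a TODO to re-add a SignatureScheme enum re-using the TLS signature scheme code points. As a hedged illustration, the values below are the TLS 1.3 SignatureScheme code points from RFC 8446; whether and how MLS re-uses them is exactly the open question in this thread, so this is only a sketch of the idea.

```python
from enum import IntEnum

# Illustrative re-use of TLS 1.3 SignatureScheme code points (RFC 8446).

class SignatureScheme(IntEnum):
    ecdsa_secp256r1_sha256 = 0x0403
    ecdsa_secp384r1_sha384 = 0x0503
    ecdsa_secp521r1_sha512 = 0x0603
    ed25519 = 0x0807
    ed448 = 0x0808
```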
Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "The second form considers authentication with respect to the sender, meaning the group members can verify that a message originated from a particular member of the group. This property is provided by digital signatures on the messages under signature keys. 14.3."} {"id": "q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f", "old_text": "reveal previous message or root keys. Post-compromise security is provided by Commit operations, in which a new root key is generated from the latest ratcheting tree. If the adversary cannot derive the updated root key after an Commit operation, it cannot compute any derived secrets. In the case where the client could have been compromised (device", "comments": "The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The Key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note, that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. 
However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use to two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well. Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the credential might be different per credential type e.g., in BasicCredential, it's provided directly, X509Credential, have to parse the leaf certificate\nLGTM with a couple nits.", "new_text": "reveal previous message or root keys. Post-compromise security is provided by Commit operations, in which a new root key is generated from the latest ratcheting tree. If the adversary cannot derive the updated root key after a Commit operation, it cannot compute any derived secrets. In the case where the client could have been compromised (device"} {"id": "q-en-mls-protocol-dd376704c9e3620aed62113f8646ad9ef5354795e87eb590d6227e417ec733cf", "old_text": "A signed object describing a client's identity and capabilities, and including a hybrid public-key encryption (HPKE I-D.irtf-cfrg- hpke ) public key that can be used to encrypt to that client. A key package that is prepublished by a client, which other clients can use to introduce the client to a new group.", "comments": "cc NAME\nNAME is the build expected to fail?\nNo, but the problem seem unrelated to your change, so we are all good.\nLooks like it was a transient failure. Re-running resulted in success.\nNAME points out that the HPKE version is not pinned. This is probably needed for interop. Version -07 should come sometime this week, so that seems like a perfectly fine version to use. 
I can send a PR when that version is cut.\nFixed by", "new_text": "A signed object describing a client's identity and capabilities, and including a hybrid public-key encryption (HPKE I-D.irtf-cfrg- hpke) public key that can be used to encrypt to that client. A key package that is prepublished by a client, which other clients can use to introduce the client to a new group."} {"id": "q-en-mls-protocol-dd376704c9e3620aed62113f8646ad9ef5354795e87eb590d6227e417ec733cf", "old_text": "A signature algorithm The HPKE parameters are used to instantiate HPKE I-D.irtf-cfrg-hpke for the purpose of public-key encryption. The \"DeriveKeyPair\" function associated to the KEM for the ciphersuite maps octet strings to HPKE key pairs. Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying protocol", "comments": "cc NAME\nNAME is the build expected to fail?\nNo, but the problem seem unrelated to your change, so we are all good.\nLooks like it was a transient failure. Re-running resulted in success.\nNAME points out that the HPKE version is not pinned. This is probably needed for interop. Version -07 should come sometime this week, so that seems like a perfectly fine version to use. I can send a PR when that version is cut.\nFixed by", "new_text": "A signature algorithm MLS uses draft-07 of HPKE I-D.irtf-cfrg-hpke for public-key encryption. The \"DeriveKeyPair\" function associated to the KEM for the ciphersuite maps octet strings to HPKE key pairs. Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying protocol"} {"id": "q-en-mls-protocol-ffe0afb144f9088be7cae530b9d9f367cee4cbb6dc1f47a9dd92e9f10bec7d86", "old_text": "Name: The name of the ciphersuite Recommended: Whether support for this extension is recommended by the IETF MLS WG. Valid values are \"Y\" and \"N\". The \"Recommended\" column is assigned a value of \"N\" unless explicitly requested, and adding a value with a \"Recommended\" value of \"Y\" requires Standards Action RFC8126. IESG Approval is REQUIRED for a Y->N transition. Reference: The document where this ciphersuite is defined", "comments": "I noticed a bunch of typos in the \"IANA Considerations\" section: the \"ciphersuites\" and \"credentials\" were probably copy-pasted from the \"extension\" one, and some parts were not changed correctly. This PR fixes these typos.\nDiscussed 20210526 Interim: editorial will be taken up by editors.\nThanks!", "new_text": "Name: The name of the ciphersuite Recommended: Whether support for this ciphersuite is recommended by the IETF MLS WG. Valid values are \"Y\" and \"N\". The \"Recommended\" column is assigned a value of \"N\" unless explicitly requested, and adding a value with a \"Recommended\" value of \"Y\" requires Standards Action RFC8126. IESG Approval is REQUIRED for a Y->N transition. Reference: The document where this ciphersuite is defined"} {"id": "q-en-mls-protocol-ffe0afb144f9088be7cae530b9d9f367cee4cbb6dc1f47a9dd92e9f10bec7d86", "old_text": "16.3. This registry lists identifiers for types of credentials that can be used for authentication in the MLS protocol. The extension type field is two bytes wide, so valid extension type values are in the range 0x0000 to 0xffff. Template:", "comments": "I noticed a bunch of typos in the \"IANA Considerations\" section: the \"ciphersuites\" and \"credentials\" were probably copy-pasted from the \"extension\" one, and some parts were not changed correctly. 
This PR fixes these typos.\nDiscussed 20210526 Interim: editorial will be taken up by editors.\nThanks!", "new_text": "16.3. This registry lists identifiers for types of credentials that can be used for authentication in the MLS protocol. The credential type field is two bytes wide, so valid credential type values are in the range 0x0000 to 0xffff. Template:"} {"id": "q-en-mls-protocol-ffe0afb144f9088be7cae530b9d9f367cee4cbb6dc1f47a9dd92e9f10bec7d86", "old_text": "Name: The name of the credential type Recommended: Whether support for this extension is recommended by the IETF MLS WG. Valid values are \"Y\" and \"N\". The \"Recommended\" column is assigned a value of \"N\" unless explicitly requested, and adding a value with a \"Recommended\" value of \"Y\" requires Standards Action RFC8126. IESG Approval is REQUIRED for a Y->N transition. Reference: The document where this extension is defined Initial contents:", "comments": "I noticed a bunch of typos in the \"IANA Considerations\" section: the \"ciphersuites\" and \"credentials\" were probably copy-pasted from the \"extension\" one, and some parts were not changed correctly. This PR fixes these typos.\nDiscussed 20210526 Interim: editorial will be taken up by editors.\nThanks!", "new_text": "Name: The name of the credential type Recommended: Whether support for this credential is recommended by the IETF MLS WG. Valid values are \"Y\" and \"N\". The \"Recommended\" column is assigned a value of \"N\" unless explicitly requested, and adding a value with a \"Recommended\" value of \"Y\" requires Standards Action RFC8126. IESG Approval is REQUIRED for a Y->N transition. Reference: The document where this credential is defined Initial contents:"} {"id": "q-en-mls-protocol-cd897806c0441e1939faf6d5dd758b672099c3efa9d887d938e1ef835ddfd1e1", "old_text": "in the \"PreSharedKeyID\" object. Specifically, \"psk_secret\" is computed as follows: The \"index\" field in \"PSKLabel\" corresponds to the index of the PSK in the \"psk\" array, while the \"count\" field contains the total number of PSKs. 8.3.", "comments": "In , NAME noted that the concatenation approach used to combine multiple PSKs is not well validated. At the 2021-10-04 interim, we discussed moving from this approach to a \"cascaded HKDF\" approach, with the idea of doing the same thing here as the main key schedule. This PR implements that suggestion. One nice implication of the algorithm here is that when there are no PSKs, it produces the zero vector as a degenerate case. So you could implement it pretty cleanly:\nThe spec currently doesn't distinguish much between an external and a member commit w.r.t. how proposals by reference are handled. In theory, the server could keep track of proposals and send the external committer all proposals that were sent up until the time they want to commit externally. However, there are a few difficulties in committing proposals by reference in an external commit. Membership tags: The external committer doesn't have the ability to verify membership tags on proposals. This is probably not a show-stopper, as proposals still need to be signed by a group member, but if nothing else, this should be noted in the spec. Leaf positioning: The spec was changed such that the committer is added into the tree without an Add proposal being present. If there are other Add (and Remove) proposals present, it is unclear in what order they should be processed. Should the committer be added when the path is processed? Or should it still be processed as if it were an Add? 
There are probably other things that can go wrong when committing referenced proposals externally, but those are the first that come to mind. The simplest solution would be to simply ignore proposals by reference, although this would break the current principle that a commit must always include all valid proposals that were sent during the epoch. The consequence would be that Parties that sent proposals will have to re-send them if the epoch is ended by an external commit. It should be noted, that a Preconfigured sender will have no way to detect if their proposal was committed or not.\nI'm not sure your worries here are very real: Membership tags: I don't think the membership tag issue matters, because the existing members will reject the proposal (and thus the commit) if the membership tag is invalid; at worst the joiner gets themselves wedged into a group by themselves. And as you say, they're signed, and they keys can be tied back to the tree hash in the GroupInfo. Leaf positioning: We need to clear this up regardless of proposals-by-reference. It seems like basically having a synthetic Add at the beginning or end of the proposal list would work fine. Preconfigured proposer: This is a problem only when Commits are encrypted, so it's not a problem in the external commit case (since an external Commit cannot be encrypted). Nonetheless, the point that an external joiner can't fully verify a Proposal seems to argue pretty strongly for forbidding proposals by reference. I don't think this even violates the semantic of \"include all valid proposals received in the epoch\", since (1) that requirement is receiver oriented (it's not enough for the proposals to have been sent) and (2) the external joiner can't tell if the proposals or valid. Overall, I would lean slightly toward removing proposals by reference. And if we do that, it might even make sense to just define an ExternalCommit message that packages together the few things that are still allowed.\nI'm not sure I understand how the receiver-orientation of the \"all valid proposals must be included\" rule matters to its application to this case. If we remove the requirement for external commits to include proposals by reference and don't soften that rule to exclude external commits, receivers will consider all external commits invalid in the presence of valid pending proposals by reference. In any case, I'm also in favor of keeping it simple and restrict the External Commit to a few, well defined components that do not include any proposals by reference.\nDiscussion on working call: \"No proposals by reference\" kind of follows from \"all valid proposals\", since the joiner can't necessarily determine that proposals are valid Maybe allow attempts? Might not get feedback to retry, though. Whole point of external commit is to allow async. Agreement to prohibit proposals by reference in external Commit\nI believe there's a small mismatch between the text and the Key Schedule figure. The text says that if there are no PSK proposals, the should be a zero-length string, while the Key Schedule figure seems to indicate that it's a string of zeros of length .\nFixed in\nOne brief comment, otherwise looks good!", "new_text": "in the \"PreSharedKeyID\" object. Specifically, \"psk_secret\" is computed as follows: Here \"0\" represents the all-zero vector of length KDF.Nh. The \"index\" field in \"PSKLabel\" corresponds to the index of the PSK in the \"psk\" array, while the \"count\" field contains the total number of PSKs. 
In other words, the PSKs are chained together with KDF.Extract invocations, as follows: In particular, if there are no PreSharedKey proposals in a given Commit, then the resulting \"psk_secret\" is \"psk_secret_[0]\", the all- zero vector. 8.3."} {"id": "q-en-mls-protocol-cd897806c0441e1939faf6d5dd758b672099c3efa9d887d938e1ef835ddfd1e1", "old_text": "In this case, the new ratchet tree is the same as the provisional ratchet tree. If one or more PreSharedKey proposals are part of the commit, derive the \"psk_secret\" as specified in pre-shared-keys, where the order of PSKs in the derivation corresponds to the order of PreSharedKey proposals in the \"proposals\" vector. Otherwise, set \"psk_secret\" to a zero-length octet string. Construct an MLSPlaintext object containing the Commit object. Sign the MLSPlaintext using the old GroupContext as context.", "comments": "In , NAME noted that the concatenation approach used to combine multiple PSKs is not well validated. At the 2021-10-04 interim, we discussed moving from this approach to a \"cascaded HKDF\" approach, with the idea of doing the same thing here as the main key schedule. This PR implements that suggestion. One nice implication of the algorithm here is that when there are no PSKs, it produces the zero vector as a degenerate case. So you could implement it pretty cleanly:\nThe spec currently doesn't distinguish much between an external and a member commit w.r.t. how proposals by reference are handled. In theory, the server could keep track of proposals and send the external committer all proposals that were sent up until the time they want to commit externally. However, there are a few difficulties in committing proposals by reference in an external commit. Membership tags: The external committer doesn't have the ability to verify membership tags on proposals. This is probably not a show-stopper, as proposals still need to be signed by a group member, but if nothing else, this should be noted in the spec. Leaf positioning: The spec was changed such that the committer is added into the tree without an Add proposal being present. If there are other Add (and Remove) proposals present, it is unclear in what order they should be processed. Should the committer be added when the path is processed? Or should it still be processed as if it were an Add? There are probably other things that can go wrong when committing referenced proposals externally, but those are the first that come to mind. The simplest solution would be to simply ignore proposals by reference, although this would break the current principle that a commit must always include all valid proposals that were sent during the epoch. The consequence would be that Parties that sent proposals will have to re-send them if the epoch is ended by an external commit. It should be noted, that a Preconfigured sender will have no way to detect if their proposal was committed or not.\nI'm not sure your worries here are very real: Membership tags: I don't think the membership tag issue matters, because the existing members will reject the proposal (and thus the commit) if the membership tag is invalid; at worst the joiner gets themselves wedged into a group by themselves. And as you say, they're signed, and they keys can be tied back to the tree hash in the GroupInfo. Leaf positioning: We need to clear this up regardless of proposals-by-reference. It seems like basically having a synthetic Add at the beginning or end of the proposal list would work fine. 
Preconfigured proposer: This is a problem only when Commits are encrypted, so it's not a problem in the external commit case (since an external Commit cannot be encrypted). Nonetheless, the point that an external joiner can't fully verify a Proposal seems to argue pretty strongly for forbidding proposals by reference. I don't think this even violates the semantic of \"include all valid proposals received in the epoch\", since (1) that requirement is receiver oriented (it's not enough for the proposals to have been sent) and (2) the external joiner can't tell if the proposals or valid. Overall, I would lean slightly toward removing proposals by reference. And if we do that, it might even make sense to just define an ExternalCommit message that packages together the few things that are still allowed.\nI'm not sure I understand how the receiver-orientation of the \"all valid proposals must be included\" rule matters to its application to this case. If we remove the requirement for external commits to include proposals by reference and don't soften that rule to exclude external commits, receivers will consider all external commits invalid in the presence of valid pending proposals by reference. In any case, I'm also in favor of keeping it simple and restrict the External Commit to a few, well defined components that do not include any proposals by reference.\nDiscussion on working call: \"No proposals by reference\" kind of follows from \"all valid proposals\", since the joiner can't necessarily determine that proposals are valid Maybe allow attempts? Might not get feedback to retry, though. Whole point of external commit is to allow async. Agreement to prohibit proposals by reference in external Commit\nI believe there's a small mismatch between the text and the Key Schedule figure. The text says that if there are no PSK proposals, the should be a zero-length string, while the Key Schedule figure seems to indicate that it's a string of zeros of length .\nFixed in\nOne brief comment, otherwise looks good!", "new_text": "In this case, the new ratchet tree is the same as the provisional ratchet tree. Derive the \"psk_secret\" as specified in pre-shared-keys, where the order of PSKs in the derivation corresponds to the order of PreSharedKey proposals in the \"proposals\" vector. Construct an MLSPlaintext object containing the Commit object. Sign the MLSPlaintext using the old GroupContext as context."} {"id": "q-en-mls-protocol-cd897806c0441e1939faf6d5dd758b672099c3efa9d887d938e1ef835ddfd1e1", "old_text": "Update the confirmed and interim transcript hashes using the new Commit, and generate the new GroupContext. If the \"proposals\" vector contains any PreSharedKey proposals, derive the \"psk_secret\" as specified in pre-shared-keys, where the order of PSKs in the derivation corresponds to the order of PreSharedKey proposals in the \"proposals\" vector. Otherwise, set \"psk_secret\" to 0. Use the \"init_secret\" from the previous epoch, the \"commit_secret\" and the \"psk_secret\" as defined in the previous steps, and the new", "comments": "In , NAME noted that the concatenation approach used to combine multiple PSKs is not well validated. At the 2021-10-04 interim, we discussed moving from this approach to a \"cascaded HKDF\" approach, with the idea of doing the same thing here as the main key schedule. This PR implements that suggestion. One nice implication of the algorithm here is that when there are no PSKs, it produces the zero vector as a degenerate case. 
So you could implement it pretty cleanly:\nThe spec currently doesn't distinguish much between an external and a member commit w.r.t. how proposals by reference are handled. In theory, the server could keep track of proposals and send the external committer all proposals that were sent up until the time they want to commit externally. However, there are a few difficulties in committing proposals by reference in an external commit. Membership tags: The external committer doesn't have the ability to verify membership tags on proposals. This is probably not a show-stopper, as proposals still need to be signed by a group member, but if nothing else, this should be noted in the spec. Leaf positioning: The spec was changed such that the committer is added into the tree without an Add proposal being present. If there are other Add (and Remove) proposals present, it is unclear in what order they should be processed. Should the committer be added when the path is processed? Or should it still be processed as if it were an Add? There are probably other things that can go wrong when committing referenced proposals externally, but those are the first that come to mind. The simplest solution would be to simply ignore proposals by reference, although this would break the current principle that a commit must always include all valid proposals that were sent during the epoch. The consequence would be that Parties that sent proposals will have to re-send them if the epoch is ended by an external commit. It should be noted, that a Preconfigured sender will have no way to detect if their proposal was committed or not.\nI'm not sure your worries here are very real: Membership tags: I don't think the membership tag issue matters, because the existing members will reject the proposal (and thus the commit) if the membership tag is invalid; at worst the joiner gets themselves wedged into a group by themselves. And as you say, they're signed, and they keys can be tied back to the tree hash in the GroupInfo. Leaf positioning: We need to clear this up regardless of proposals-by-reference. It seems like basically having a synthetic Add at the beginning or end of the proposal list would work fine. Preconfigured proposer: This is a problem only when Commits are encrypted, so it's not a problem in the external commit case (since an external Commit cannot be encrypted). Nonetheless, the point that an external joiner can't fully verify a Proposal seems to argue pretty strongly for forbidding proposals by reference. I don't think this even violates the semantic of \"include all valid proposals received in the epoch\", since (1) that requirement is receiver oriented (it's not enough for the proposals to have been sent) and (2) the external joiner can't tell if the proposals or valid. Overall, I would lean slightly toward removing proposals by reference. And if we do that, it might even make sense to just define an ExternalCommit message that packages together the few things that are still allowed.\nI'm not sure I understand how the receiver-orientation of the \"all valid proposals must be included\" rule matters to its application to this case. If we remove the requirement for external commits to include proposals by reference and don't soften that rule to exclude external commits, receivers will consider all external commits invalid in the presence of valid pending proposals by reference. 
In any case, I'm also in favor of keeping it simple and restrict the External Commit to a few, well defined components that do not include any proposals by reference.\nDiscussion on working call: \"No proposals by reference\" kind of follows from \"all valid proposals\", since the joiner can't necessarily determine that proposals are valid Maybe allow attempts? Might not get feedback to retry, though. Whole point of external commit is to allow async. Agreement to prohibit proposals by reference in external Commit\nI believe there's a small mismatch between the text and the Key Schedule figure. The text says that if there are no PSK proposals, the should be a zero-length string, while the Key Schedule figure seems to indicate that it's a string of zeros of length .\nFixed in\nOne brief comment, otherwise looks good!", "new_text": "Update the confirmed and interim transcript hashes using the new Commit, and generate the new GroupContext. Derive the \"psk_secret\" as specified in pre-shared-keys, where the order of PSKs in the derivation corresponds to the order of PreSharedKey proposals in the \"proposals\" vector. Use the \"init_secret\" from the previous epoch, the \"commit_secret\" and the \"psk_secret\" as defined in the previous steps, and the new"} {"id": "q-en-mls-protocol-a686d3a057864533c952e88d84cef505c37960f81355af4a5e05f3b9b896de1d", "old_text": "Each leaf is given an _index_ (or _leaf index_), starting at \"0\" from the left to \"n-1\" at the right. There are multiple ways that an implementation might represent a ratchet tree in memory. For example, left-balanced binary trees can be represented as an array of nodes, with node relationships computed", "comments": "The definition of blank node needs to move ahead into ## Ratchet Tree Terminology as it is used in the section immediately following (## Views of a Ratchet Tree).", "new_text": "Each leaf is given an _index_ (or _leaf index_), starting at \"0\" from the left to \"n-1\" at the right. Finally, a node in the tree may also be _blank_, indicating that no value is present at that node (i.e. no keying material). This is often the case when a leaf was recently removed from the tree. There are multiple ways that an implementation might represent a ratchet tree in memory. For example, left-balanced binary trees can be represented as an array of nodes, with node relationships computed"} {"id": "q-en-mls-protocol-a686d3a057864533c952e88d84cef505c37960f81355af4a5e05f3b9b896de1d", "old_text": "The conditions under which each of these values must or must not be present are laid out in views. A node in the tree may also be _blank_, indicating that no value is present at that node. The _resolution_ of a node is an ordered list of non-blank nodes that collectively cover all non-blank descendants of the node. The resolution of a non-blank node with no unmerged leaves is just the node itself. More generally, the resolution of a node is effectively a depth-first, left-first enumeration of the nearest non-blank nodes below the node: The resolution of a non-blank node comprises the node itself, followed by its list of unmerged leaves, if any", "comments": "The definition of blank node needs to move ahead into ## Ratchet Tree Terminology as it is used in the section immediately following (## Views of a Ratchet Tree).", "new_text": "The conditions under which each of these values must or must not be present are laid out in views. 
The _resolution_ of a node is an ordered list of non-blank nodes that collectively cover all non-blank descendants of the node. The resolution of a non-blank node with no unmerged leaves is just the node itself. More generally, the resolution of a node is effectively a depth-first, left-first enumeration of the nearest non-blank nodes below the node: The resolution of a non-blank node comprises the node itself, followed by its list of unmerged leaves, if any"} {"id": "q-en-mls-protocol-546ade3accadd97ada66250ca6d0160993d246d8a90338c99711a91dc95c3486", "old_text": "\"lazy\" version of this operation, where only the leaf changes and intermediate nodes are blanked out. The \"path\" field of a Commit message MUST be populated if the Commit covers at least one Update or Remove proposal. The \"path\" field MUST also be populated if the Commit covers no proposals at all (i.e., if the proposals vector is empty). The \"path\" field MAY be omitted if the Commit covers only Add proposals. In pseudocode, the logic for validating a Commit is as follows: To summarize, a Commit can have three different configurations, with different uses:", "comments": "This PR introduces a notion of a proposal type being \"path safe\", in the sense that it is safe for the path to be omitted. (Better phrasing welcome!) The field is required default, and allowed to be omitted only if all the proposals in the Commit are path-safe.\nThanks for taking care of this fix! One suggestion: Instead of having a global list of which types are path safe and which ones aren't, we could make it a property of the proposal. Maybe something like this: Regarding the name, the only thing I can think of is that we could invert the property and call it : In fact, I think I'd be slightly in favor of the latter proposal.\nDiscussion on virtual interim: OK to keep using type TODO(NAME \"path safe\" -> \"path required\" Otherwise clear to merge\nThe authoritative definition (in Section ) of when to include a is not the same as the summary below. For example, if an implementation adds a custom proposal type, according to the summary, a path is required, whereas according to the definition above it, no path is required. Ultimately, it depends on the nature of the proposal and the desired security guarantees if a path is required. However, I think the safer option is to require a path by default and maybe leave it to the definition of the respective custom proposal to explicitly specify if a path is not required.", "new_text": "\"lazy\" version of this operation, where only the leaf changes and intermediate nodes are blanked out. By default, the \"path\" field of a Commit MUST be populated. The \"path\" field MAY be omitted if (a) it covers at least one proposal and (b) none proposals covered by the Commit are of \"path required\" types. A proposal type requires a path if it cannot change the group membership in a way that requires the forward secrecy and post- compromise security guarantees that an UpdatePath provides. The only proposal types defined in this document that do not require a path are: \"add\" \"psk\" \"app_ack\" \"reinit\" New proposal types MUST state whether they require a path. If any instance of a proposal type requires a path, then the proposal type requires a path. This attribute of a proposal type is reflected in the \"Path Required\" field of the proposal type registry defined in mls-proposal-types. Update and Remove proposals are the clearest examples of proposals that require a path. 
An UpdatePath is required to evict the removed member or the old appearance of the updated member. In pseudocode, the logic for validating the \"path\" field of a Commit is as follows: To summarize, a Commit can have three different configurations, with different uses:"} {"id": "q-en-mls-protocol-546ade3accadd97ada66250ca6d0160993d246d8a90338c99711a91dc95c3486", "old_text": "committer's contribution to the group and provides PCS with regard to the committer. A \"partial\" Commit that references Add, PreSharedKey, or ReInit proposals but where the path is empty. Such a commit doesn't provide PCS with regard to the committer. A \"full\" Commit that references proposals of any type, which provides FS with regard to any removed members and PCS for the", "comments": "This PR introduces a notion of a proposal type being \"path safe\", in the sense that it is safe for the path to be omitted. (Better phrasing welcome!) The field is required default, and allowed to be omitted only if all the proposals in the Commit are path-safe.\nThanks for taking care of this fix! One suggestion: Instead of having a global list of which types are path safe and which ones aren't, we could make it a property of the proposal. Maybe something like this: Regarding the name, the only thing I can think of is that we could invert the property and call it : In fact, I think I'd be slightly in favor of the latter proposal.\nDiscussion on virtual interim: OK to keep using type TODO(NAME \"path safe\" -> \"path required\" Otherwise clear to merge\nThe authoritative definition (in Section ) of when to include a is not the same as the summary below. For example, if an implementation adds a custom proposal type, according to the summary, a path is required, whereas according to the definition above it, no path is required. Ultimately, it depends on the nature of the proposal and the desired security guarantees if a path is required. However, I think the safer option is to require a path by default and maybe leave it to the definition of the respective custom proposal to explicitly specify if a path is not required.", "new_text": "committer's contribution to the group and provides PCS with regard to the committer. A \"partial\" Commit that references proposals that do not require a path, and where the path is empty. Such a commit doesn't provide PCS with regard to the committer. A \"full\" Commit that references proposals of any type, which provides FS with regard to any removed members and PCS for the"} {"id": "q-en-mls-protocol-f71b4c05c2af9af0d9c8047e4d983c82f6148ab6b26c14e1b1b84fdf5e46fa98", "old_text": "12.3. A new group can be formed from a subset of an existing group's members, using the same parameters as the old group. The creator of the group indicates this situation by including a PreSharedKey of type \"resumption\" with usage \"branch\" in the Welcome message that creates the branched subgroup. A client receiving a Welcome including a PreSharedKey of type \"resumption\" with usage \"branch\" MUST verify that the new group", "comments": "Based on , it should be made explicit that sub-group branching requires a fresh key package for each member. I think there is still a conflict of wording here that I would like feedback on because of this line (which is what triggered me thinking there was a bug)\nThanks for the PR and thanks for digging into this. 
I think what the sentence is meant to say is that the receiver MUST check that the members of the new group are a subset of the members of the old group at that specific epoch. Since we're using new LeafNodes in the new group and we don't have a fixed notion of identity (at least not within MLS), we have to refer to the application/the AS to ensure that the identity expressed by the credential in each LeafNode in the subgroup has a corresponding identity (i.e. one that represents the same party) expressed by a credential in a LeafNode in the original group. Maybe something like: This leaves the possibility of a collision, where the identifiers in a given leaf in the new group would be valid updates to more than one member in the old group. This should probably be forbidden in general, although outside of this case it's not really a problem, because updates are always specific to one leaf. Alternatively, we could somehow include LeafNodeRefs that specifically indicate which member represents which other member. Or just leave it up to the application to sort this all out.\nCurious what you think about saying the the credential has to be equal? We can determine equality of the credential itself just by doing tls serialization.\nSure, requiring the credential to be exactly the same would be a simple way to figure out if it's the same party in both groups. However, I can imagine that some applications will want distinct credentials for each group. If we mandate that credentials are the same, this would force the application to use the same credential in every key package/LeafNode. Since subgroups can potentially be branched at any time and in any group, all key packages that a party publishes then have to have the same credential as all of the groups that party is in, so another party branching a subgroup can get hold of a KeyPackage with the same credential as the group the subgroup is branched off of. Personally, I'd prefer asking the application to check that the identities of the new group are a subset of the identities of the original group.\nI can see that being an issue, although I am somewhat worried about us not being opinionated in same way about how to determine equality here, mainly because of interop type things. We punt on how to validate the credentials as well in general so I guess this is in line with prior decisions\nHey NAME and NAME check the change I just posted. NAME and I put that together, it's a bit tough to get a generic definition of equivalency.\nThere is currently a logical flaw in the subgroup branching flow. In order to branch, you need to compose a message . A Welcome message includes which is defined as follows: The problem with this is that the field of type cannot be made for existing members of the group because the leaf nodes are now types. is now only used to add new members to the group, so we can't reuse this for sub group initialization.\nI'm not sure I understand correctly. Are you suggesting when branching a group off of another group, one re-uses the leaves of the old group? If so, that's not how it's meant to work. The new group created in the process of subgroup branching should be independent from the \"parent\" group safe for the PSK, which all member have to provide upon creation of the group. In fact one should never use the same key material in the leaves of multiple groups except when the supply of key packages that a party provides via the DS is exhausted (see Section 17.4).\nThat was part of my question. 
When there were key packages in the leaves, you theoretically could fork the group from a given point in time and reuse the current tree, but I wasn't sure if that was a proper solution or not. Sounds like it isn't, but either way, I think there should be clarification in the document about this. Let me write up a quick PR on it.\nNAME opened URL Would like your feedback there. We have conflicting wording here as well. What threw me off in the first place was the line indicating that LeafNodes should be the same in the sub-group, and if we are fetching new key packages they will not be the same.\nNAME Konrad, could you safely derive new keys for the members of the subgroup instead? If so, that would be an enormous increase in efficiency and (especially in the federated case) make group creation timing consistent enough to do dynamically.\nNAME Nice idea! But it's tricky, since we're dealing with public key pairs that the creator of the new group doesn't know the private keys for. If we were only using HPKE schemes that support updateable public key encryption, the creator of the new group could indeed update the key in the existing group and send the \"delta\" encrypted to each corresponding member (which would still represent an overhead, but we might be able to optimize this somehow). This is similar to the RTreeKEM mechanism that NAME et al. proposed at some point. To go all the way here, we'd have to do the same for the public key material in the credentials as well, which is not something we had considered before. It would be a new primitive, something like an updateable signature scheme? Probably something to consider for HPKE/MLS 2.0. We could also allow the re-use of LeadNodes for this particular case. It would be the simpler solution and I don't think it would be catastrophic, since we have to assume that LeafNode (or KeyPackage) key material is re-used in the \"last resort\" scenario anyway, but this would definitely be something we should discuss on the mailing list, as it would break a pretty basic assumption we've made until now.\nJust catching up on this. NAME nice catch here. It seems like we have two options here: Require new KeyPackages (as in ) Invent some way to make a new group based on LeafNodes alone I probably agree that we should do (1) in the short term, but (2) also has some appeal -- it doesn't feel right to require new KeyPackages for folks you're already in touch with -- despite it probably being more spec text. So here's a sketch of (2) in the interest of completeness. Suppose we defined a message of the following form: ... which would pretty closely parallel to Commit. You would send it in MLSMessage and require a in MLSMessageAuth. Where Commit instructs the recipient to create a new epoch from the current one within the same group, Branch would instruct the recipient to create a new epoch that is the first epoch of a new group. In both cases, you would form an epoch / GroupContext as specified, then use the to ratchet into the new epoch. I'm pretty sure that would work and have the expected security properties, though obviously formal analysis would be appreciated. The main awkardness that occurs to me is that any members joining after the branch would have to provide new key packages. And obviously, it would require a fair bit more spec text. All that said, though, the exercise of working all that out has increased my impression that we should do (1) for now. 
If we want something like (2) later, it can be done pretty cleanly in an extension.\nFixed by\nWell put!", "new_text": "12.3. A new group can be formed from a subset of an existing group's members, using the same parameters as the old group. A member can create a sub-group by performing the following steps: Determine a subset of existing members that should be a part of the sub-group. Create a new tree for the sub-group by fetching a new KeyPackage for each existing member that should be included in the sub-group. Create a Welcome message that includes a PreSharedKey of type \"resumption\" with usage \"branch\". A client receiving a Welcome including a PreSharedKey of type \"resumption\" with usage \"branch\" MUST verify that the new group"} {"id": "q-en-mls-protocol-f71b4c05c2af9af0d9c8047e4d983c82f6148ab6b26c14e1b1b84fdf5e46fa98", "old_text": "The \"version\" and \"ciphersuite\" values in the Welcome MUST be the same as those used by the old group. Each LeafNode in the new group's tree MUST be a leaf in the old group's tree at the epoch indicated in the PreSharedKey. In addition, to avoid key re-use, the \"psk_nonce\" included in the \"PreSharedKeyID\" object MUST be a randomly sampled nonce of length", "comments": "Based on , it should be made explicit that sub-group branching requires a fresh key package for each member. I think there is still a conflict of wording here that I would like feedback on because of this line (which is what triggered me thinking there was a bug)\nThanks for the PR and thanks for digging into this. I think what the sentence is meant to say is that the receiver MUST check that the members of the new group are a subset of the members of the old group at that specific epoch. Since we're using new LeafNodes in the new group and we don't have a fixed notion of identity (at least not within MLS), we have to refer to the application/the AS to ensure that the identity expressed by the credential in each LeafNode in the subgroup has a corresponding identity (i.e. one that represents the same party) expressed by a credential in a LeafNode in the original group. Maybe something like: This leaves the possibility of a collision, where the identifiers in a given leaf in the new group would be valid updates to more than one member in the old group. This should probably be forbidden in general, although outside of this case it's not really a problem, because updates are always specific to one leaf. Alternatively, we could somehow include LeafNodeRefs that specifically indicate which member represents which other member. Or just leave it up to the application to sort this all out.\nCurious what you think about saying the the credential has to be equal? We can determine equality of the credential itself just by doing tls serialization.\nSure, requiring the credential to be exactly the same would be a simple way to figure out if it's the same party in both groups. However, I can imagine that some applications will want distinct credentials for each group. If we mandate that credentials are the same, this would force the application to use the same credential in every key package/LeafNode. Since subgroups can potentially be branched at any time and in any group, all key packages that a party publishes then have to have the same credential as all of the groups that party is in, so another party branching a subgroup can get hold of a KeyPackage with the same credential as the group the subgroup is branched off of. 
Personally, I'd prefer asking the application to check that the identities of the new group are a subset of the identities of the original group.\nI can see that being an issue, although I am somewhat worried about us not being opinionated in same way about how to determine equality here, mainly because of interop type things. We punt on how to validate the credentials as well in general so I guess this is in line with prior decisions\nHey NAME and NAME check the change I just posted. NAME and I put that together, it's a bit tough to get a generic definition of equivalency.\nThere is currently a logical flaw in the subgroup branching flow. In order to branch, you need to compose a message . A Welcome message includes which is defined as follows: The problem with this is that the field of type cannot be made for existing members of the group because the leaf nodes are now types. is now only used to add new members to the group, so we can't reuse this for sub group initialization.\nI'm not sure I understand correctly. Are you suggesting when branching a group off of another group, one re-uses the leaves of the old group? If so, that's not how it's meant to work. The new group created in the process of subgroup branching should be independent from the \"parent\" group safe for the PSK, which all member have to provide upon creation of the group. In fact one should never use the same key material in the leaves of multiple groups except when the supply of key packages that a party provides via the DS is exhausted (see Section 17.4).\nThat was part of my question. When there were key packages in the leaves, you theoretically could fork the group from a given point in time and reuse the current tree, but I wasn't sure if that was a proper solution or not. Sounds like it isn't, but either way, I think there should be clarification in the document about this. Let me write up a quick PR on it.\nNAME opened URL Would like your feedback there. We have conflicting wording here as well. What threw me off in the first place was the line indicating that LeafNodes should be the same in the sub-group, and if we are fetching new key packages they will not be the same.\nNAME Konrad, could you safely derive new keys for the members of the subgroup instead? If so, that would be an enormous increase in efficiency and (especially in the federated case) make group creation timing consistent enough to do dynamically.\nNAME Nice idea! But it's tricky, since we're dealing with public key pairs that the creator of the new group doesn't know the private keys for. If we were only using HPKE schemes that support updateable public key encryption, the creator of the new group could indeed update the key in the existing group and send the \"delta\" encrypted to each corresponding member (which would still represent an overhead, but we might be able to optimize this somehow). This is similar to the RTreeKEM mechanism that NAME et al. proposed at some point. To go all the way here, we'd have to do the same for the public key material in the credentials as well, which is not something we had considered before. It would be a new primitive, something like an updateable signature scheme? Probably something to consider for HPKE/MLS 2.0. We could also allow the re-use of LeadNodes for this particular case. 
It would be the simpler solution and I don't think it would be catastrophic, since we have to assume that LeafNode (or KeyPackage) key material is re-used in the \"last resort\" scenario anyway, but this would definitely be something we should discuss on the mailing list, as it would break a pretty basic assumption we've made until now.\nJust catching up on this. NAME nice catch here. It seems like we have two options here: Require new KeyPackages (as in ) Invent some way to make a new group based on LeafNodes alone I probably agree that we should do (1) in the short term, but (2) also has some appeal -- it doesn't feel right to require new KeyPackages for folks you're already in touch with -- despite it probably being more spec text. So here's a sketch of (2) in the interest of completeness. Suppose we defined a message of the following form: ... which would pretty closely parallel to Commit. You would send it in MLSMessage and require a in MLSMessageAuth. Where Commit instructs the recipient to create a new epoch from the current one within the same group, Branch would instruct the recipient to create a new epoch that is the first epoch of a new group. In both cases, you would form an epoch / GroupContext as specified, then use the to ratchet into the new epoch. I'm pretty sure that would work and have the expected security properties, though obviously formal analysis would be appreciated. The main awkardness that occurs to me is that any members joining after the branch would have to provide new key packages. And obviously, it would require a fair bit more spec text. All that said, though, the exercise of working all that out has increased my impression that we should do (1) for now. If we want something like (2) later, it can be done pretty cleanly in an extension.\nFixed by\nWell put!", "new_text": "The \"version\" and \"ciphersuite\" values in the Welcome MUST be the same as those used by the old group. Each LeafNode in a new subgroup MUST match some LeafNode in the original group. In this context, a pair of LeafNodes is said to \"match\" if the identifiers presented by their respective credentials are considered equivalent by the application. In addition, to avoid key re-use, the \"psk_nonce\" included in the \"PreSharedKeyID\" object MUST be a randomly sampled nonce of length"} {"id": "q-en-mls-protocol-61fc75008638f820406d30a4656c4e5dbf157365d9ae23082337b34d1f2fe734", "old_text": "The following structure is used to fully describe the data transmitted in plaintexts or ciphertexts. 7.1. MLSMessageContent is authenticated using the MLSMessageAuth", "comments": "I drew this to help explain the relevant patch in MLSpp to someone, and thought it might be helpful to other folks.\nExcellent! This is a very nice addition to explain the design introduced .", "new_text": "The following structure is used to fully describe the data transmitted in plaintexts or ciphertexts. The following figure illustrates how the various structures described in this section relate to each other, and the high-level operations used to produce and consume them: 7.1. MLSMessageContent is authenticated using the MLSMessageAuth"} {"id": "q-en-mls-protocol-5f4ad8bbb0092fbd5e6ee3566f7a71413f6bf9a7aa26c499e67bd265ef19e3f9", "old_text": "cryptographic state of the group, this section defines a scheme for generating a hash value (called the \"tree hash\") that represents the contents of the group's ratchet tree and the members' leaf nodes. 
The tree hash of a tree is the tree hash of its root node, which we define recursively, starting with the leaves. The tree hash of a leaf node is the hash of leaf's \"LeafNodeHashInput\" object which might include a \"LeafNode\" object depending on whether or not it is blank. Now the tree hash of any non-leaf node is recursively defined to be the hash of its \"ParentNodeHashInput\". This includes an optional \"ParentNode\" object depending on whether the node is blank or not. The \"left_hash\" and \"right_hash\" fields hold the tree hashes of the node's left and right children, respectively. 8.8.", "comments": "I'm not sure if this is actually necessary, but there usually seem to be issues in Merkle trees when the leaves are not explicitly distinct from the parents.\nI would find it nicer to describe things with a like in other parts of the protocol: It is equivalent to your modification. Stating things that way ensures there is no ambiguity: TreeHashInput is parseable, which is I think the property you want to ensure with this PR.\nUse enum instead and then ready for merge.\nInterim 2022-05-26: Nicer with an NAME to update to , then merge\nWhen computing the tree hash, should there be an indicator bit for whether the value being hashed represents a leaf or parent node?\nThis doesn't seem like a terrible idea. You could either add a field to LeafNodeHashInput and ParentNodeHashInput, or define an enum and do the same, or make a . For example:", "new_text": "cryptographic state of the group, this section defines a scheme for generating a hash value (called the \"tree hash\") that represents the contents of the group's ratchet tree and the members' leaf nodes. The tree hash of an individual node is the hash of the node's \"TreeHashInput\" object, which may contain either a \"LeafNodeHashInput\" or a \"ParentNodeHashInput\" depending on the type of node. \"LeafNodeHashInput\" objects contain the \"leaf_index\" and the \"LeafNode\" (if any). \"ParentNodeHashInput\" objects contain the \"ParentNode\" (if any) and the tree hash of the node's left and right children. The tree hash of an entire tree corresponds to the tree hash of the root node, which is computed recursively by starting at the leaf nodes and building up. 8.8."} {"id": "q-en-mls-protocol-9cad84b73b6aa96af483ead7c833b1d87338f8b67426804a65659a179e12e0b4", "old_text": "input to the key schedule, the Commit and GroupSecrets objects MUST indicate the same set of PSKs, in the same order. On receiving a Commit with a \"PreSharedKey\" proposal or a GroupSecrets object with the \"psks\" field set, the receiving Client includes them in the key schedule in the order listed in the Commit,", "comments": "Also covers the PSK part of\nInterim 2022-05-26: Ready to go once merge conflicts are resolved.\nThe collection of structs could be less chatty. Suggest PreSharedKeys is only used once, can be inlined in GroupSecrets Use GroupContext for the internals of GroupInfo\nNote: was fixed in In , we did: So only the following remain: Use GroupContext for the internals of GroupInfo\ncurrently described at time of use. In the \"Pre-Shared Keys\" section, we should just specify that it must always be a fresh value of length The ReInit provisions require , but those for branching do not. Presumably these should be the same? Currently, the processing rules for PreSharedKey proposal says \u201cMUST be external\u201d. This seems overly restrictive; for example, it rules out the scenario in Figure 8. 
Suggest \u201cMUST NOT be branch/reinit\u201d instead.", "new_text": "input to the key schedule, the Commit and GroupSecrets objects MUST indicate the same set of PSKs, in the same order. Each time a client injects a PSK into a group, the \"psk_nonce\" of its PreSharedKeyID MUST be set to a fresh random value of length \"KDF.Nh\", where \"KDF\" is the KDF for the ciphersuite of the group into which the PSK is being injected. This ensures that even when a PSK is used multiple times, the value used as an input into the key schedule is different each time. On receiving a Commit with a \"PreSharedKey\" proposal or a GroupSecrets object with the \"psks\" field set, the receiving Client includes them in the key schedule in the order listed in the Commit,"} {"id": "q-en-mls-protocol-9cad84b73b6aa96af483ead7c833b1d87338f8b67426804a65659a179e12e0b4", "old_text": "Welcome message MUST be the same as the corresponding fields in the ReInit proposal. The \"epoch\" in the Welcome message MUST be 1 The Welcome MUST specify a PreSharedKey of type \"resumption\" with usage \"reinit\". The \"group_id\" must match the old group, and the \"epoch\" must indicate the epoch after the Commit covering the ReInit. The \"psk_nonce\" included in the \"PreSharedKeyID\" of the resumption PSK MUST be a randomly sampled nonce of length \"KDF.Nh\", for the KDF defined by the new group's ciphersuite. Note that these three steps may be done by the same group member or different members. For example, if a group member sends a commit with an inline ReInit proposal (steps 1 and 2), but then goes", "comments": "Also covers the PSK part of\nInterim 2022-05-26: Ready to go once merge conflicts are resolved.\nThe collection of structs could be less chatty. Suggest PreSharedKeys is only used once, can be inlined in GroupSecrets Use GroupContext for the internals of GroupInfo\nNote: was fixed in In , we did: So only the following remain: Use GroupContext for the internals of GroupInfo\ncurrently described at time of use. In the \"Pre-Shared Keys\" section, we should just specify that it must always be a fresh value of length The ReInit provisions require , but those for branching do not. Presumably these should be the same? Currently, the processing rules for PreSharedKey proposal says \u201cMUST be external\u201d. This seems overly restrictive; for example, it rules out the scenario in Figure 8. Suggest \u201cMUST NOT be branch/reinit\u201d instead.", "new_text": "Welcome message MUST be the same as the corresponding fields in the ReInit proposal. The Welcome MUST specify a PreSharedKey of type \"resumption\" with usage \"reinit\". The \"group_id\" must match the old group, and the \"epoch\" must indicate the epoch after the Commit covering the ReInit. Note that these three steps may be done by the same group member or different members. For example, if a group member sends a commit with an inline ReInit proposal (steps 1 and 2), but then goes"} {"id": "q-en-mls-protocol-9cad84b73b6aa96af483ead7c833b1d87338f8b67426804a65659a179e12e0b4", "old_text": "The \"version\" and \"ciphersuite\" values in the Welcome MUST be the same as those used by the old group. Each LeafNode in a new subgroup MUST match some LeafNode in the original group. In this context, a pair of LeafNodes is said to \"match\" if the identifiers presented by their respective credentials are considered equivalent by the application. 
In addition, to avoid key re-use, the \"psk_nonce\" included in the \"PreSharedKeyID\" object MUST be a randomly sampled nonce of length \"KDF.Nh\". Resumption PSKs with usage \"branch\" MUST NOT be used in other contexts. A PreSharedKey proposal with type \"resumption\" and usage \"branch\" MUST be considered invalid.", "comments": "Also covers the PSK part of\nInterim 2022-05-26: Ready to go once merge conflicts are resolved.\nThe collection of structs could be less chatty. Suggest PreSharedKeys is only used once, can be inlined in GroupSecrets Use GroupContext for the internals of GroupInfo\nNote: was fixed in In , we did: So only the following remain: Use GroupContext for the internals of GroupInfo\ncurrently described at time of use. In the \"Pre-Shared Keys\" section, we should just specify that it must always be a fresh value of length The ReInit provisions require , but those for branching do not. Presumably these should be the same? Currently, the processing rules for PreSharedKey proposal says \u201cMUST be external\u201d. This seems overly restrictive; for example, it rules out the scenario in Figure 8. Suggest \u201cMUST NOT be branch/reinit\u201d instead.", "new_text": "The \"version\" and \"ciphersuite\" values in the Welcome MUST be the same as those used by the old group. The \"epoch\" in the Welcome message MUST be 1. Each LeafNode in a new subgroup MUST match some LeafNode in the original group. In this context, a pair of LeafNodes is said to \"match\" if the identifiers presented by their respective credentials are considered equivalent by the application. Resumption PSKs with usage \"branch\" MUST NOT be used in other contexts. A PreSharedKey proposal with type \"resumption\" and usage \"branch\" MUST be considered invalid."} {"id": "q-en-mls-protocol-9cad84b73b6aa96af483ead7c833b1d87338f8b67426804a65659a179e12e0b4", "old_text": "A PreSharedKey proposal is invalid if any of the following is true: The \"psktype\" in the PreSharedKeyID struct is not set to \"external\". The \"psk_nonce\" is not of length \"KDF.Nh\".", "comments": "Also covers the PSK part of\nInterim 2022-05-26: Ready to go once merge conflicts are resolved.\nThe collection of structs could be less chatty. Suggest PreSharedKeys is only used once, can be inlined in GroupSecrets Use GroupContext for the internals of GroupInfo\nNote: was fixed in In , we did: So only the following remain: Use GroupContext for the internals of GroupInfo\ncurrently described at time of use. In the \"Pre-Shared Keys\" section, we should just specify that it must always be a fresh value of length The ReInit provisions require , but those for branching do not. Presumably these should be the same? Currently, the processing rules for PreSharedKey proposal says \u201cMUST be external\u201d. This seems overly restrictive; for example, it rules out the scenario in Figure 8. Suggest \u201cMUST NOT be branch/reinit\u201d instead.", "new_text": "A PreSharedKey proposal is invalid if any of the following is true: The \"psktype\" in the PreSharedKeyID struct is not set to \"resumption\" and the \"usage\" is \"reinit\" or \"branch\". The \"psk_nonce\" is not of length \"KDF.Nh\"."} {"id": "q-en-mls-protocol-f400e351345f956b13b6bb3163ec050f9d6bc13afe0ddf5404d0a8ef3a7d2783", "old_text": "A GroupContextExtensions proposal is used to update the list of extensions in the GroupContext for the group. 
\"struct { Extension extensions; } GroupContextExtensions; \" A GroupContextExtensions proposal is invalid if it includes a \"required_capabilities\" extension and some members of the group do not support some of the required capabilities (including those added", "comments": "The enum was missing the limit. The wasn't using a TLS environment.\nNAME shouldn't ResumptionPSKUsage include (255)? enum { reserved(0), application(1), reinit(2), branch(3), (255) } ResumptionPSKUsage;\nFixed in\nSeems reasonable to me.", "new_text": "A GroupContextExtensions proposal is used to update the list of extensions in the GroupContext for the group. A GroupContextExtensions proposal is invalid if it includes a \"required_capabilities\" extension and some members of the group do not support some of the required capabilities (including those added"} {"id": "q-en-mls-protocol-14f3bca7d6d6ae8e0c2cb3e6e10f95ef74734570cc3eee186f7ed2f8867ddb74", "old_text": "5.3.2. MLS implementations will presumably provide applications with a way to request protocol operations with regard to other clients (e.g., removing clients). Such functions will need to refer to the other", "comments": "Interim 2022-12-08: Distinguish \"time-invalid\" from \"invalid\" There is a slightly more general issue: When catching up, there might be messages sent by members whose credentials were valid when the message was sent, but have since expired Recommend accepting messages from members with time-invalid credentials when catching up\nI think a note on consistency would work a little better in the architecture document, especially because it applies in the general case as well as the time-varying case. I filed URL\nLatest change uses \"expired\" to refer generically to all time based reasons for a transition. (So, excluding , as NAME suggests.)\nWe ran into this case, which does not seem to have a defined solution within the RFC and I'm wondering if this ambiguity should be resolved at the RFC level, or be left up to the application. Existing group with Alice, and Bob. Bob had a valid certificate upon being added to the group by Alice. Bob's certificate expires . Alice does a commit to add Charlie to the group. Bob accepts the valid commit to add Charlie. Charlie follows 8.3 and validates the tree. In this process if certificate expiration is enforced, then Charlie would drop the Welcome, which would not be optimal. Solution 1: Leave this behavior up to the application, consider it part of the AS. Solution 2: Add an explicit mention that credentials that expire should be considered valid after expiration once they are a member of the group. Solution 3: A committer MUST validate that all existing credentials in the tree are still considered valid at the time a commit is made. If an existing credential is no longer considered valid, then the resulting commit MUST contain a Remove proposal for the leaf containing that credential.\nOf course it would be better to have the same credential validation algorithm, but at the protocol level, there is no reason to constrain all clients to that, applications can enforce a consistent policy across clients. So, in my mind this is not symmetric between senders and receivers and solutions 1 and 3 are not incompatible, however 2 is not acceptable IMO. 
I would suggest the following: Alice MUST verify that she considers all credentials valid before adding Charlie and act if it is not the case (Solution 3) Charlie MUST check all the credentials upon joining and act according to its authentication policy* Charlie MUST NOT send any application message before she considers all credentials valid\nAre you suggesting that it should not be a requirement that each member of the group has the same validation algorithm? That would create forks in any case where individuals disagree on a particular credential's validity. We can of course punt to the application, but I feel like that should at least be cautioned against?\nI don't quite see how the validation algorithms are different. They are the same, just executed at different points in time. Or am I missing something here? In any case it would be unfortunate if Charlie couldn't send any messages until someone committed to their proposal. So I would propose to change point 2 of the algorithm suggested by NAME to the following: Charlie MUST check all the credentials upon joining. If any group members have expired credentials, Charlie MUST issue a commit to remove the affected members.\nI am saying that it is not a requirement. Most applications will want to use the same algorithm, but the protocol doesn't need to make that mandatory; it is just a convenience for applications. The protocol is designed so that each client can make its own choice. It would not create forks: for example, in the case where Alice decides to remove Bob, Bob is removed for everyone. I would be - ok - with the change proposed by Konrad, but we need to be aware that this is a restriction on what clients can decide with respect to authentication (especially in a context where there is no AS).\nI think it's also important to remember that there may be a delay between when a user is sent a Welcome and when they process it. So when a user processes a Welcome, the first epoch they see after joining may be quite old and have expired credentials for that reason, and this epoch may not even be the most recent epoch anymore. So I would lean towards solution 2 actually, of \"members that are joining a group just need to ignore expired credentials.\" And if the application wants to have a policy of not sending new messages to a group if there are expired credentials present, then that's fine but not something we need to require in the spec. Edit: Or maybe we do want to specify that, but I'd be clear that it's a check done before sending new messages, not part of processing the Welcome.\nAlthough I started the dialog here around certificates and expiration, I think maybe it is useful to ignore the idea of \"expired\" credentials for now, because I agree with the comment by NAME that we should try to avoid being opinionated in that regard. I think what I am trying to get across is that, regardless of what credential system you use, the protocol can only operate without forks if the validation of that credential is consistent across all members, while also taking into account the asynchronous nature of the protocol. If each member of the group has the same state for epoch N, they should all independently determine that new credentials presented as part of a commit are valid or invalid regardless of when they process that commit. I'm viewing a credential being invalid as a trigger for rejecting a commit, which maybe in itself is a bias. Not sure if the above belongs in the protocol, the AS definition, or nowhere at all :-).
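The "Solution 3" approach discussed above, where a committer sweeps the tree and proposes removing members whose credentials it no longer considers valid, can be pictured with a small sketch. Every name here is hypothetical (leaf, credential, not_after, the proposal tuple); the only point being illustrated is that the sweep runs against a single reference time so that different members evaluating the same epoch reach the same verdict.

```python
from datetime import datetime, timezone

# Hypothetical sketch only: scan the leaves of a ratchet tree and collect
# Remove proposals for leaves whose credential has expired relative to one
# shared reference time. Leaf/credential attributes are assumed, not a real API.
def removes_for_expired(leaves, reference_time=None):
    reference_time = reference_time or datetime.now(timezone.utc)
    removes = []
    for index, leaf in enumerate(leaves):
        if leaf is None:  # blank leaf in the tree
            continue
        if leaf.credential.not_after < reference_time:
            removes.append(("remove", index))  # stand-in for a Remove proposal
    return removes
```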
If everyone thinks this is too restrictive of a statement I'm fine with that. Maybe a better solution to this issue is that there could be a separately defined standard way of dealing with certificates within MLS that can be more opinionated than the base protocol and introduce concepts discussed above like MUST remove of expired certs on commit, use an external packet timestamp for consistent expiration checking on the receive side, etc. I think a lot of applications would benefit from that and NAME NAME and I have been doing a lot of experimentation in that area.\nAfter a couple long chats with NAME and NAME here's where I stand on this. A guiding thought I have is this: IMO creds aren't just for getting in to a group. They should also be necessary for continued participation. After all MLS is designed for sessions that can last years. That means, I'm uneasy with an MLS design that allows me to join a group (epoch even) where someone has an invalid (e.g. expired) cred. Worse yet, I don't want MLS to let me send application messages to users with invalid creds at the time of my sending. With that in mind, how about this approach which tries to leave as much freedom to an app/AS while still giving guidance to an app/AS that wants to ensure valid creds are needed for ongoing participation in a group and, optionally, that forks are being avoided as much as possible. MLS clients MUST validate the creds at all leaves in a ratchet tree (via the AS) in the following cases: a. Sending a commit (validation done on new epoch's tree after applying commit). b. Receiving a commit (validation done on new epoch's tree after applying commit). c. When joining a group. d. When client wants to send an application message. Based on this MLS now gives 3 guarantees. Call an application/AS \"consistent\" if it ensures that for any given epoch and cred in its ratchet tree, the result of any client validating the cred is the same (i.e. regardless of the time the validation is performed and which rule of a. b. or c. triggered it). (Edit: Just to further clarify \"consistency\" doesnt mean validating the same cred but when taken from different ratchet trees should have the same result. It only concerns validations pertaining to the same epoch but performed by different clients at different times.) 1) Regardless of the app/AS's consistency and any insider attacks, clients only ever join epochs in which all members have valid creds. 2) Regardless of the app/AS's consistency and any insider attacks, clients will never send application messages to receiver with invalid creds (at the time of sending). 3) If the app/AS is consistent, all parties follow the protocol and all packets are delivered by the DS then no forks will occur. My justification for this approach is that it strikes the following balance: On the one hand, we now give guidance to app/AS devs for how they can enforce (1-3) if thats what they want. On the other hand, we don't actually tie an app/AS's hands at all (e.g. compared to mandating almost no checks ever). That's because MLS says nothing about how cred validation should be implemented. Hard-coding always returning true is an MLS spec compliant AS. So if an app/AS doesn't want the constraints imposed by the above rules it is still free to hard-code those validations as returning true; i.e. it can skip the checks if it really needs to. 
In summary: the idea is basically to tell app/AS devs when/which are the right checks to perform to get good security & availability but also to leave it up to them to make those checks meaningful. Finally, regarding \"consistency\": in my mind creds are validated relative to a context (e.g. a time-stamp for checking expiration, a root CA cert, a blockchain or even particular block for checking wallets and/or balances, etc.) One way to get consistency is for the app/AS to ensure that all clients use the same context when checking the creds in a ratchet tree regardless of when they perform their checks.\nIn my mind, applications are only ever in one of two states: A.) I'm reading backlogged messages, or B.) I've read all the messages and I'm waiting for new ones. When you're in state B, you can send messages and also check time-related things. When you're in state A, you have no idea when the messages you're reading were truly sent. We don't currently provide any way to communicate a \"context\" against which time-related things should be validated in state B. So I don't see how we can implement b or c from this list?\nImplementing b. and c. in some spec compliant way is actually trivial because the spec doesnt require \"consistency\" (in the sense that I defined that term in my previous post.) The only thing the spec says about consistency is that if you want guarantee (3.) then you need consistent cred validation. So e.g. always returning true when validating in cases b. and c. while doing something more interesting for cases (a.) would be fine; i.e. spec compliant. You just wouldn't necessarily get guarantee (3.) is all. But I figure what you meant is how to implement (a.) b. and c. to be consistent, right? And that'd be a good question that any app trying to avoid forks would have to think about. (Notice, the RFC already creates this concern for such apps with out my proposed changes because to avoid forks now you already need commit sender and receivers to agree on their validation.) In our opinion, we landed on leaving it up to the app/AS how it guarantees consistent validations because we think there are a bunch of different but reasonable ways this could be done. Yet, which of those is best (even among just the couple we could come with on the fly) depends on things like the app's security assumptions, communication models and access to third-party services, not to mention the details of how the AS works. To start off with, it wasn't even totally clear to us that the \"context\"-based syntax really applies to all reasonable AS designs. (Thats how we ended up on the notion of \"consistency\" as that is the more abstract and minimal guarantee we really needed to avoid forks. Context is just one way we thought of to get consistency.) But say the AS does conform to \"context\"-based syntax. Maybe the app/AS ensure consistency by only ensuring that the (possibly different!) contexts being used by clients still produce the same result. Further, even if we are more opinionated and assume an app/AS will ensure an identical context is used in all validation cases this still leaves room for many variations. Who chooses the context used for a given epoch? Maybe the commitor? Maybe the AS? Maybe the DS (which we already have to trust it to avoid forks)? Maybe a group-context extension? Also, how is the context distributed? Out-of band? By the AS? By the DS along with commit/welcome packets? Along with group contexts? Each made sense to us in the right setting... 
One thing we toyed with for a while was the requirement that the committer authenticates the particular context C they used when creating a commit packet (e.g. by using C as associated data or a PSK or something). But not only is that already pretty opinionated about how consistency should be enforced, it also wasn't clear to us what the actual point is. TBH I was for it at first out of instinct (and ignoring the being-opinionated part for the moment), but NAME did a good job of convincing me that I couldn't actually justify it beyond \"it feels right\". What kind of otherwise-not-possible attack is this trying to defend against? In the end, the point is there were enough plausible but diverging options for how to get consistency that we didn't feel comfortable enforcing anything that constrained an app/AS's options. But it's not due to lack of ways to solve for consistency. We just didn't want to be opinionated in a way we couldn't STRONGLY justify.\nThanks NAME I'm not sure I agree with checking every time I send an application message, as TLS sets a precedent for not checking in that case. I do agree though that I view each epoch as a session renegotiation of sorts, which means that you reevaluate whether credentials you previously thought were valid are still valid according to whatever rules are in place. I believe this conversation winds up in a place where it's a very small statement in the main doc that explains how to avoid forks, and then an entirely separate document for each credential type (starting with certificates) that aims to set guidelines on how one could achieve various goals like not allowing expired certs etc. I'm all for helping write that document as Wickr will be using the x509 system in production quite a bit.\nSolution 4: Bob has to do an UpdatePath before his Credential expires with a LeafNode with a valid Credential (certificate). If he doesn't, the other members should Remove him from the group. Same goes if Bob's Credential is revoked.\nNAME I think the core of the discussion at this point is more around the fact that the base protocol does not have any concept of \"expiration\" of credentials, so their behavior is undefined. Either we put an opinionated statement around it in the core protocol, or just punt to applications / another RFC. Thoughts on that?\nHi NAME, My opinion is that whatever credentials you have in an MLS group should be valid (including not expired, not revoked, not too early, etc.). Therefore it would logically follow: It is the responsibility of each member to make sure their LeafNode credentials are valid and consistent with the group policy at all times. Any member of the group that detects another member is out of policy or invalid can remove the offending member. To the extent possible, a DS that detects that a member is out of compliance with group policy or no longer has valid credentials can try to remove the member (ex: via an external proposal). The definition of \"valid\" could be done per-credential type or per profile.\nSide note: under the general rubric of \"validity changes over time\", revocation is also an issue! Overall, my thinking aligns with what NAME says here. Saying that the credentials presented by group members should be valid at all times is a concise, clear standard. It is arguably implied by the current requirement to validate credentials on joining (if you also assume that the group should be joinable at any time). The more difficult question is how it is implemented in practical reality.
I don't think we can really write hard requirements around this. The only remediation is Remove, and we don't have other situations where we say a client MUST send a specific proposal. In part because this might conflict with application-level policies about who is allowed to send Proposals/Commits. And it seems like there are valid situations where availability might trump authenticity (cf. browsers' click-through on expired certs). Overall, this seems like more of an operational question than a core protocol question here. In terms of how to reflect all this in documents: The details of how to manage credential freshness seem better suited for an implementation guidance document. (I believe NAME is gathering ideas for one?) In the MLS protocol spec, we could add some high-level guidance, something like the following: I think that would fit well either in the Credentials section (between \"Credential Validation\" and \"Uniquely Idenitfying Clients\") or in the security considerations.\nWe could use the sender_generation field of the MLS message to signal what was the global app generation of the sender. Functionally, this allows receivers to know if they received all previous application messages from the sender. In the current design, if S sent M0, M1, M2 you can know if you missed M1 when receiving M2, but you cannot know if you missed M2. There are many ways of doing this: global sender counter, counter between two operations of the sender...\nWe seemed in agreement at the interim because it prevents the application message suppression attacks\nLooks great thanks NAME I like the separate treatment of expired vs. revoked. Looks good to me now!", "new_text": "5.3.2. In some credential schemes, a valid credential can \"expire\", or become invalid after a certain point in time. For example, each X.509 certificate has a \"notAfter\" field, expressing a time after which the certificate is not valid. Expired credentials can cause operational problems in light of the validation requirements of credential-validation. Applications can apply some operational practices and adaptations to Authentication Service policies to moderate these impacts. In general, to avoid operational problems such as new joiners rejecting expired credentials in a group, applications that use such credentials should ensure to the extent practical that all of the credentials in use in a group are valid at all times. If a member finds that its credential has expired (or will soon), it should issue an Update or Commit that replaces it with a valid credential. For this reason, members SHOULD accept Update proposals and Commits issued by members with expired credentials, if the credential in the Update or Commit is valid. Similarly, when a client is processing messages sent some time in the past (e.g., syncing up with a group after being offline), the client SHOULD accept signatures from members with expired credentials, since the credential may have been valid at the time the message was sent. If a member finds that another member's credential has expired, they may issue a Remove that removes that member. For example, an application could require a member preparing to issue a Commit to check the tree for expired credentials and include Remove proposals for those members in its Commit. In situations where the group tree is known to the DS, the DS could also monitor the tree for expired credentials and issue external Remove proposals. Some credential schemes also allow credentials to be revoked. 
Revocation is similar to expiry, in that a previously valid credential becomes invalid. As such, most of the considerations above also apply to revoked credentials. However, applications may want to treat revoked credentials differently, e.g., removing members with revoked credentials while allowing members with expired credentials time to update. 5.3.3. MLS implementations will presumably provide applications with a way to request protocol operations with regard to other clients (e.g., removing clients). Such functions will need to refer to the other"} {"id": "q-en-mls-protocol-1571b6331f7a5bad4d6c49d6e8d1008e3851ace38cbd2fa4ad4deff6e752ceee", "old_text": "A signing key pair used to authenticate the sender of a message. A PublicMessage or PrivateMessage carrying an MLS Proposal or Commit object, as opposed to application data.", "comments": "URL This definition of a handshake message is being defined in terms of a Proposal or Commit object, neither of which are defined in this session. Furthermore, PublicMessage or PrivateMessage aren\u2019t defined either as this point in the text. To improve clarity, consider defining these key terms. The use of \"respectively\" in the text is suggesting that PrivateMessages don\u2019t have integrity protection which is not accurate. (see the Cryptographic Dependencies section of the HPKE specification for more information). Is this Section 4 of RFC9180? If so, why not say that? What would be the circumstance where the application_id extension would be treated with equal standing as something from the AS? Why can\u2019t this be a \"MUST\"? Unsaid is what to do if the proposal list is not valid. Please clarify. Is this normative behavior required, or could it be left up to application? The mixing of updates/removes on the same node and the SHOULD guidance already means that this \"recovery from an invalid node\" may not be consistent across clients/DS-es. Same observation on recovering from ReInit with any other proposals. Is there any additional retry mechanism? Let\u2019s say the sender of the proposal doesn\u2019t see it reflect in the epoch after it sent proposal. And then not in the next epoch. Does it try resending indefinitely? This setup seems a little simplistic. If the application data is encrypted to begin with, how does the adversary know the question being posed by Alice?", "new_text": "A signing key pair used to authenticate the sender of a message. A message that proposes a change to the group, e.g., adding or removing a member. A message that implements the changes to the group proposed in a set of Proposals. An MLS protocol message that is signed by its sender and authenticated as coming from a member of the group in a particular epoch, but not encrypted. An MLS protocol message that is both signed by its sender, authenticated as coming from a member of the group in a particular epoch, and encrypted so that it is confidential to the members of the group in that epoch. A PublicMessage or PrivateMessage carrying an MLS Proposal or Commit object, as opposed to application data."} {"id": "q-en-mls-protocol-1571b6331f7a5bad4d6c49d6e8d1008e3851ace38cbd2fa4ad4deff6e752ceee", "old_text": "with an algorithm such as HMAC or an AEAD algorithm. The PublicMessage and PrivateMessage formats are defined in message- framing; they represent integrity-protected and confidentiality- protected messages, respectively. Security notions such as forward secrecy and post-compromise security are defined in security- considerations. 
2.1.", "comments": "URL This definition of a handshake message is being defined in terms of a Proposal or Commit object, neither of which are defined in this session. Furthermore, PublicMessage or PrivateMessage aren\u2019t defined either as this point in the text. To improve clarity, consider defining these key terms. The use of \"respectively\" in the text is suggesting that PrivateMessages don\u2019t have integrity protection which is not accurate. (see the Cryptographic Dependencies section of the HPKE specification for more information). Is this Section 4 of RFC9180? If so, why not say that? What would be the circumstance where the application_id extension would be treated with equal standing as something from the AS? Why can\u2019t this be a \"MUST\"? Unsaid is what to do if the proposal list is not valid. Please clarify. Is this normative behavior required, or could it be left up to application? The mixing of updates/removes on the same node and the SHOULD guidance already means that this \"recovery from an invalid node\" may not be consistent across clients/DS-es. Same observation on recovering from ReInit with any other proposals. Is there any additional retry mechanism? Let\u2019s say the sender of the proposal doesn\u2019t see it reflect in the epoch after it sent proposal. And then not in the next epoch. Does it try resending indefinitely? This setup seems a little simplistic. If the application data is encrypted to begin with, how does the adversary know the question being posed by Alice?", "new_text": "with an algorithm such as HMAC or an AEAD algorithm. The PublicMessage and PrivateMessage formats are defined in message- framing. Security notions such as forward secrecy and post- compromise security are defined in security-considerations. 2.1."} {"id": "q-en-mls-protocol-1571b6331f7a5bad4d6c49d6e8d1008e3851ace38cbd2fa4ad4deff6e752ceee", "old_text": "5.1.1. HPKE public keys are opaque values in a format defined by the underlying protocol (see the Cryptographic Dependencies section of the HPKE specification for more information). Signature public keys are likewise represented as opaque values in a format defined by the ciphersuite's signature scheme.", "comments": "URL This definition of a handshake message is being defined in terms of a Proposal or Commit object, neither of which are defined in this session. Furthermore, PublicMessage or PrivateMessage aren\u2019t defined either as this point in the text. To improve clarity, consider defining these key terms. The use of \"respectively\" in the text is suggesting that PrivateMessages don\u2019t have integrity protection which is not accurate. (see the Cryptographic Dependencies section of the HPKE specification for more information). Is this Section 4 of RFC9180? If so, why not say that? What would be the circumstance where the application_id extension would be treated with equal standing as something from the AS? Why can\u2019t this be a \"MUST\"? Unsaid is what to do if the proposal list is not valid. Please clarify. Is this normative behavior required, or could it be left up to application? The mixing of updates/removes on the same node and the SHOULD guidance already means that this \"recovery from an invalid node\" may not be consistent across clients/DS-es. Same observation on recovering from ReInit with any other proposals. Is there any additional retry mechanism? Let\u2019s say the sender of the proposal doesn\u2019t see it reflect in the epoch after it sent proposal. And then not in the next epoch. 
Does it try resending indefinitely? This setup seems a little simplistic. If the application data is encrypted to begin with, how does the adversary know the question being posed by Alice?", "new_text": "5.1.1. HPKE public keys are opaque values in a format defined by the underlying protocol (see Section 4 of RFC9180 for more information). Signature public keys are likewise represented as opaque values in a format defined by the ciphersuite's signature scheme."} {"id": "q-en-mls-protocol-1571b6331f7a5bad4d6c49d6e8d1008e3851ace38cbd2fa4ad4deff6e752ceee", "old_text": "the \"extensions\" field of a LeafNode object with the \"application_id\" extension. However, applications SHOULD NOT rely on the data in an \"application_id\" extension as if it were authenticated by the Authentication Service, and SHOULD gracefully handle cases where the identifier presented is not unique.", "comments": "URL This definition of a handshake message is being defined in terms of a Proposal or Commit object, neither of which are defined in this session. Furthermore, PublicMessage or PrivateMessage aren\u2019t defined either as this point in the text. To improve clarity, consider defining these key terms. The use of \"respectively\" in the text is suggesting that PrivateMessages don\u2019t have integrity protection which is not accurate. (see the Cryptographic Dependencies section of the HPKE specification for more information). Is this Section 4 of RFC9180? If so, why not say that? What would be the circumstance where the application_id extension would be treated with equal standing as something from the AS? Why can\u2019t this be a \"MUST\"? Unsaid is what to do if the proposal list is not valid. Please clarify. Is this normative behavior required, or could it be left up to application? The mixing of updates/removes on the same node and the SHOULD guidance already means that this \"recovery from an invalid node\" may not be consistent across clients/DS-es. Same observation on recovering from ReInit with any other proposals. Is there any additional retry mechanism? Let\u2019s say the sender of the proposal doesn\u2019t see it reflect in the epoch after it sent proposal. And then not in the next epoch. Does it try resending indefinitely? This setup seems a little simplistic. If the application data is encrypted to begin with, how does the adversary know the question being posed by Alice?", "new_text": "the \"extensions\" field of a LeafNode object with the \"application_id\" extension. However, applications MUST NOT rely on the data in an \"application_id\" extension as if it were authenticated by the Authentication Service, and SHOULD gracefully handle cases where the identifier presented is not unique."} {"id": "q-en-mls-protocol-1571b6331f7a5bad4d6c49d6e8d1008e3851ace38cbd2fa4ad4deff6e752ceee", "old_text": "12.2. A group member creating a commit and a group member processing a commit MUST verify that the list of committed proposals is valid using one of the following procedures, depending on whether the commit is external or not. For a regular, i.e. not external, commit the list is invalid if any of the following occurs:", "comments": "URL This definition of a handshake message is being defined in terms of a Proposal or Commit object, neither of which are defined in this session. Furthermore, PublicMessage or PrivateMessage aren\u2019t defined either as this point in the text. To improve clarity, consider defining these key terms. 
The use of \"respectively\" in the text is suggesting that PrivateMessages don\u2019t have integrity protection which is not accurate. (see the Cryptographic Dependencies section of the HPKE specification for more information). Is this Section 4 of RFC9180? If so, why not say that? What would be the circumstance where the application_id extension would be treated with equal standing as something from the AS? Why can\u2019t this be a \"MUST\"? Unsaid is what to do if the proposal list is not valid. Please clarify. Is this normative behavior required, or could it be left up to application? The mixing of updates/removes on the same node and the SHOULD guidance already means that this \"recovery from an invalid node\" may not be consistent across clients/DS-es. Same observation on recovering from ReInit with any other proposals. Is there any additional retry mechanism? Let\u2019s say the sender of the proposal doesn\u2019t see it reflect in the epoch after it sent proposal. And then not in the next epoch. Does it try resending indefinitely? This setup seems a little simplistic. If the application data is encrypted to begin with, how does the adversary know the question being posed by Alice?", "new_text": "12.2. A group member creating a commit and a group member processing a Commit MUST verify that the list of committed proposals is valid using one of the following procedures, depending on whether the commit is external or not. If the list of proposals is invalid, then the Commit message MUST be rejected as invalid. For a regular, i.e. not external, commit the list is invalid if any of the following occurs:"} {"id": "q-en-multipath-b9e1a7ff797912c99ffa325a23dbb53d329cd6af282b1fb9cd5a96ccb2d56e05", "old_text": "For example, assuming the IV value is \"6b26114b9cba2b63a9e8dd4f\", the connection ID sequence number is \"3\", and the packet number is \"aead\", the nonce will be set to \"6b2611489cba2b63a9a873e2\". 6.3.2.", "comments": "The draft states that Shouldn't be the result 6b2611489cba2b63a9e873e2 instead of 6b2611489cba2b63a9a873e2?\nYes, I submitted PR for this issue.", "new_text": "For example, assuming the IV value is \"6b26114b9cba2b63a9e8dd4f\", the connection ID sequence number is \"3\", and the packet number is \"aead\", the nonce will be set to \"6b2611489cba2b63a9e873e2\". 6.3.2."} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "Path Identifier (Path ID): An identifier that is used to identify a path in a QUIC connection at an endpoint. Path Identifier is used in multipath control frames (etc. PATH_ABANDON frame) to identify a path. The Path ID is defined as the sequence number of the destination Connection ID used for sending packets on that particular path. Packet Number Space Identifier (PNS ID): An identifier that is used to distinguish packet number spaces for different paths. It is used in 1-RTT packets and ACK_MP frames. Each node maintains a list of \"Received Packets\" for each of the CID that it provided to the peer, which is used for acknowledging packets received with that CID. The difference between Path Identifier and Packet Number Space Identifier, is that the Path Identifier is used in multipath control frames to identify a path, and the Packet Number Space Identifier is used in 1-RTT packets and ACK_MP frames to distinguish packet number spaces for different paths. Both identifiers have the same value, which is the sequence number of the connection ID, when a new path is established. 
The initial path that is used during the handshake (and multipath negotiation) has the path ID 0 and therefore all 0-RTT packets are also tracked and processed with the path ID 0. For 1-RTT packets, the path ID is the sequence number of the Destination Connection ID present in the packet header, as defined in Section 5.1.1 of QUIC- TRANSPORT. 2.", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "Path Identifier (Path ID): An identifier that is used to identify a path in a QUIC connection at an endpoint. Path Identifier is used in multipath control frames (e.g., PATH_ABANDON frame) to identify a path. The initial path that is used during the handshake (and multipath negotiation) has the path ID 0 and therefore all 0-RTT packets are also tracked and processed with the path ID 0. For 1-RTT packets, the path ID is the sequence number of the Destination Connection ID present in the packet header, as defined in Section 5.1.1 of QUIC-TRANSPORT that is used for sending packets on that particular path. 2."} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "Associated Destination Connection ID: The Connection ID used to send packets over the path. If multiple packet number spaces are used over the connection, hosts MUST also track the following information. Path Packet Number Space: The endpoint maintains a separate packet number for sending and receiving packets over this path. Packet number considerations described in QUIC-TRANSPORT apply within the given path. In the \"Active\" state, hosts MUST also track the following information.", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "Associated Destination Connection ID: The Connection ID used to send packets over the path. In Active state, hosts MUST also track the following information: Path ID: The endpoint maintains a separate packet number space for sending and receiving packets over this path which is identified by the path ID. Packet number considerations as described in Section 12.3 of QUIC-TRANSPORT apply within the given path. In the \"Active\" state, hosts MUST also track the following information."} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "The ACK_MP frame, as specified in ack-mp-frame, is used to acknowledge 1-RTT packets. Compared to the QUIC version 1 ACK frame, the ACK_MP frames additionally contains a Packet Number Space Identifier (PN Space ID). The PN Space ID used to distinguish packet number spaces for different paths and is simply derived from the sequence number of Destination Connection ID. Therefore, the packet number space for 1-RTT packets can be identified based on the Destination Connection ID in each packet. Acknowledgements of Initial and Handshake packets MUST be carried using ACK frames, as specified in QUIC-TRANSPORT. 
The ACK frames, as", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "The ACK_MP frame, as specified in ack-mp-frame, is used to acknowledge 1-RTT packets. Compared to the QUIC version 1 ACK frame, the ACK_MP frame additionally contains a Path ID. The Path ID is used to distinguish packet number spaces for different paths and is simply derived from the sequence number of Destination Connection ID. Therefore, the Path ID for 1-RTT packets can be identified based on the Destination Connection ID in each packet. Acknowledgements of Initial and Handshake packets MUST be carried using ACK frames, as specified in QUIC-TRANSPORT. The ACK frames, as"} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "Section 5.3 of QUIC-TLS specifies AEAD usage, and in particular the use of a nonce, N, formed by combining the packet protection IV with the packet number. If multiple packet number spaces are used, the packet number alone would not guarantee the uniqueness of the nonce. In order to guarantee the uniqueness of the nonce, the nonce N is calculated by combining the packet protection IV with the packet number and with the path identifier. The path ID for 1-RTT packets is the sequence number of the Connection ID as specified in QUIC-TRANSPORT. Section 19 of QUIC- TRANSPORT encodes the Connection ID Sequence Number as a variable- length integer, allowing values up to 2^62-1; in this specification,", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "Section 5.3 of QUIC-TLS specifies AEAD usage, and in particular the use of a nonce, N, formed by combining the packet protection IV with the packet number. When multiple packet number spaces are used, the packet number alone would not guarantee the uniqueness of the nonce. In order to guarantee the uniqueness of the nonce, the nonce N is calculated by combining the packet protection IV with the packet number and with the path identifier. The Path ID for 1-RTT packets is the sequence number of the Connection ID as specified in QUIC-TRANSPORT. Section 19 of QUIC- TRANSPORT encodes the Connection ID Sequence Number as a variable- length integer, allowing values up to 2^62-1; in this specification,"} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "nonce. For example, assuming the IV value is \"6b26114b9cba2b63a9e8dd4f\", the connection ID sequence number is \"3\", and the packet number is \"aead\", the nonce will be set to \"6b2611489cba2b63a9e873e2\". 5.3.", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. 
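The corrected nonce value in the multipath record above can be checked mechanically. The sketch below assumes one particular reading of the construction, with the Connection ID Sequence Number XORed into the first four bytes of the packet protection IV and the packet number into the last eight; under that assumption it reproduces the corrected example value exactly. It is an illustration for checking the arithmetic, not normative pseudocode.

```python
# Reproduce the example: IV 6b26114b9cba2b63a9e8dd4f, CID sequence number 3,
# packet number 0xaead -> nonce 6b2611489cba2b63a9e873e2 (assumed layout:
# 4-byte big-endian path ID || 8-byte big-endian packet number, XORed with IV).
def mp_nonce(iv: bytes, path_id: int, packet_number: int) -> bytes:
    pad = path_id.to_bytes(4, "big") + packet_number.to_bytes(8, "big")
    return bytes(a ^ b for a, b in zip(iv, pad))

iv = bytes.fromhex("6b26114b9cba2b63a9e8dd4f")
assert mp_nonce(iv, 3, 0xAEAD).hex() == "6b2611489cba2b63a9e873e2"
```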
LGTM.Sounds good.", "new_text": "nonce. For example, assuming the IV value is \"6b26114b9cba2b63a9e8dd4f\", the Connection ID Sequence Number is \"3\", and the packet number is \"aead\", the nonce will be set to \"6b2611489cba2b63a9e873e2\". 5.3."} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "If the client has used all the allocated CID, it is supposed to retire those that are not used anymore, and the server is supposed to provide replacements, as specified in QUIC-TRANSPORT. Usually, it is desired to provide one more connection ID as currently in use, to allow for new paths or migration. 6.2.", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "If the client has used all the allocated CID, it is supposed to retire those that are not used anymore, and the server is supposed to provide replacements, as specified in QUIC-TRANSPORT. Usually, it is desired to provide one more Connection ID as currently in use, to allow for new paths or migration. 6.2."} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "PATH_ABANDON frame is considered lost, the peer SHOULD repeat it. PATH_ABANDON frames MAY be sent on any path, not only the path identified by the Packet Number Space Identifier. 8.2.", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "PATH_ABANDON frame is considered lost, the peer SHOULD repeat it. PATH_ABANDON frames MAY be sent on any path, not only the path identified by the Path Identifier. 8.2."} {"id": "q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4", "old_text": "The ACK_MP frame (types TBD-00 and TBD-01; experiments use 0xbaba00..0xbaba01) is an extension of the ACK frame defined by QUIC- TRANSPORT. It is used to acknowledge packets that were sent on different paths when using multiple packet number spaces. If the frame type is TBD-01, ACK_MP frames also contain the sum of QUIC packets with associated ECN marks received on the connection up to this point. ACK_MP frame is formatted as shown in fig-ack-mp-format. Compared to the ACK frame specified in QUIC-TRANSPORT, the following field is added. The identifier of the path packet number space of the 1-RTT packets which are acknowledged by the ACK_MP frame. If an endpoint receives an ACK_MP frame with a packet number space ID which was never issued by endpoints (i.e., with a sequence number larger than the largest one advertised), it MUST treat this as a connection error of type MP_PROTOCOL_VIOLATION and close the connection. If an endpoint receives an ACK_MP frame with a packet number space ID which is no more active (e.g., retired by a RETIRE_CONNECTION_ID frame or belonging to closed paths), it MUST ignore the ACK_MP frame without causing a connection error. 9.", "comments": "From PR from NAME\nLet's do another editorial pass and evtl. 
some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide with name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM.Sounds good.", "new_text": "The ACK_MP frame (types TBD-00 and TBD-01; experiments use 0xbaba00..0xbaba01) is an extension of the ACK frame defined by QUIC- TRANSPORT. It is used to acknowledge packets that were sent on different paths using multiple packet number spaces. If the frame type is TBD-01, ACK_MP frames also contain the sum of QUIC packets with associated ECN marks received on the connection up to this point. ACK_MP frame is formatted as shown in fig-ack-mp-format. Compared to the ACK frame specified in QUIC-TRANSPORT, the following field is added. The identifier of the path to identify the packet number space of the 1-RTT packets which are acknowledged by the ACK_MP frame. If an endpoint receives an ACK_MP frame with a Path ID which was never issued by endpoints (i.e., with a sequence number larger than the largest one advertised), it MUST treat this as a connection error of type MP_PROTOCOL_VIOLATION and close the connection. If an endpoint receives an ACK_MP frame with a Path ID which is no more active (e.g., retired by a RETIRE_CONNECTION_ID frame or belonging to closed paths), it MUST ignore the ACK_MP frame without causing a connection error. 9."} {"id": "q-en-multipath-769ebbecde33b9d2595f2f03c88d8d61273b849946789d0efcfc7b9cba0659c2", "old_text": "The transmission of QUIC packets on a regular QUIC connection is regulated by the arrival of data from the application and the congestion control scheme. QUIC packets can only be sent when the congestion window of at least one path is open. Multipath QUIC implementations also need to include a packet scheduler that decides, among the paths whose congestion window is open, the path over which the next QUIC packet will be sent. Many factors can influence the definition of these algorithms and their precise definition is outside the scope of this document. Various packet schedulers have been proposed and implemented, notably for Multipath TCP. A companion draft I-D.bonaventure-iccrg-schedulers provides several general-purpose packet schedulers depending on the application goals. Note that the receiver could use a different scheduling strategy to send ACK(_MP) frames. The recommended default behaviour consists in sending ACK(_MP) frames on the path they acknowledge packets. Other scheduling strategies, such as sending ACK(_MP) frames on the lowest latency path, might be considered, but they could impact the sender with side effects on, e.g., the RTT estimation or the congestion control scheme. When adopting such asymetrical acknowledgment scheduling, the receiver should at least ensure that the sender negotiated one-way delay calculation mechanism (e.g., QUIC- Timestamp). 7.5.", "comments": "Let's give the facts to the implementers, without providing too specific guidance neither. and may also address .\nJust to not forget this, but this is mostly editorial. Probably present first the ACKMP frame, and then PATHABANDON and PATHSTATUS Drop reference to I-D.liu-multipath-quic as the PATHSTATUS frame is now integrated in the draft. 
Based on and , and If we agree on , we could also drop the text in PATh_ABANDON stating that on any path, not only the path on which the referenced Destination Connection ID is used.\nWe probably need to add some text in the scheduling considerations where control frames (PATHSTATUS, PATHABANDON, but also MAX_DATA,...) should be sent on a good working path, ideally with low latency.\nI would say the same applies to ACK_MP\nFor ACK we recommend (SHOULD) to send it on the same path because of RTT calculations and I believe the scheduling text discusses at already.\nThinking about this some more. I believe we should say NOTHING. The point of the standard is to specify expected behavior when a node receives protocol messages from its peer. The basic statement is that most frames, including control frames, can be sent on any path -- the only exception being PATH CHALLENGE frames and PATH_RESPONSE frames. So my preference would be: 1) say nothing, because the purpose of standards is to specify requirements for Interop, not provide guidance to implementors. 2) if we do say something, just say that these frames can be sent on any valid path, based on preferences of the application and the implementation.\nAs for ACKMP frames, the text says \"ACKMP frame (defined in ) SHOULD be sent on the same path as identified by the Path Identifier. However, an ACKMP frame can be returned via a different path, based on different strategies of sending ACKMP frames.\" That's an OK compromise, leaving open the possibility for implementations to use any strategy they want.\nAlternatively, spell out the considerations to make when picking a path? Does it matter, if so, why? This would be useful to know when new frames come along in the future and need to think for themselves how multipath might affect them.\nSure, but at this stage we have preciously little experience. For example, I have a user request to specify affinity between a stream and a path. I can certainly provide them an API to do that, but this is basically research. And then, affinity applies to stream frames, but whether it should be applied to control frames is debatable. I am concerned that the guidance that we provide will be so generic as not to be useful, while still requiring long debates and delaying the publication of the draft. I think guidance is best provided in research papers, and possibly in text books.\nI agree we shoukd avoid the potential time sink. I wasn't suggesting guidance but rather a fairly succinct statement like \"The path that frames are sent on is a local choice, which could effect the behavior or performance of the QUIC transport or delivery of application data. There is no guidance provided by this document on strategies for path selection for frames beyond the control frames for the multipath extension itself.\" And perhaps gathering consensus on that now avoids people asking the question in later review rounds.\nNAME I agree. Your proposal is basically equivalent to my suggestion that \"if we do say something, just say that these frames can be sent on any valid path, based on preferences of the application and the implementation.\" Your text is good.\nSorry, I did read your comment and immediately paged it out. Credit to you for proposing that option earlier\nLooks good, but the description of congestion is imprecise. Either drop it or fix it.", "new_text": "The transmission of QUIC packets on a regular QUIC connection is regulated by the arrival of data from the application and the congestion control scheme. 
QUIC packets that increase the number of bytes in flight can only be sent when the congestion window allows it. Multipath QUIC implementations also need to include a packet scheduler that decides, among the paths whose congestion window is open, the path over which the next QUIC packet will be sent. Most frames, including control frames (PATH_CHALLENGE and PATH_RESPONSE being the notable exceptions), can be sent and received on any active path. The scheduling is a local decision, based on the preferences of the application and the implementation. Note that this implies that an endpoint may send and receive ACK_MP frames on a path different from the one that carried the acknowledged packets. A reasonable default consists in sending ACK_MP frames on the path they acknowledge packets, but the receiver must not assume its peer will do so. 7.5."} {"id": "q-en-multipath-7c4347223a835a0bceb7c907b154f8a7c914ba9d48ee27243d580ca8c823b5dc", "old_text": "7.2.1. Packet protection for QUIC version 1 is specified is Section 5 of QUIC-TLS. The general principles of packet protection are not changed for QUIC Multipath. No changes are needed for setting packet protection keys, initial secrets, header protection, use of 0-RTT", "comments": "Each line is limited to 72 characters.\nI verified that this PR produces the draft that we expect. I did one small change: remove spaces at the end of lines.\nThere are many isntances in the markdown source where a whole paragraph is entered in a single line of text. This makes the reviewing harder, and messes up the \"what changed\" view when reviewing PRs.\nJust to be on the same page, how should we format the text? I see two possible ways. Hard-constraint the number of characters per line to a given limit (e.g., 80) Make one sentence = one line I have a slight preference for option 2, but I can of course live with the other or a mix of both.\nI would prefer something like 72 characters, because the last 8 positions are used for numbering the cards in your deck.", "new_text": "7.2.1. Packet protection for QUIC version 1 is specified in Section 5 of QUIC-TLS. The general principles of packet protection are not changed for QUIC Multipath. No changes are needed for setting packet protection keys, initial secrets, header protection, use of 0-RTT"} {"id": "q-en-multipath-f9ff98016559e5fe7a5d2ad008a38a93694077a245a8c0346bc1585a10a819c7", "old_text": "idle timeout\". When only one path is available, servers MUST follow the specifications in QUIC-TRANSPORT. When more than one path is available, servers shall monitor the arrival of non-probing packets on the available paths. Servers SHOULD stop sending traffic on paths through where no non-probing packet was received in the last 3 path RTTs, but MAY ignore that rule if it would disqualify all available paths. To avoid idle timeout of a path, endpoints can send ack-eliciting packets such as packets containing PING frames Section 19.2 of QUIC-TRANSPORT on that path to keep it alive. Sending periodic PING frames also helps prevent middlebox timeout, as discussed in Section 10.1.2 of QUIC-TRANSPORT. Server MAY release the resource associated with paths for which no non-probing packet was received for a sufficiently long path-idle", "comments": "My understanding is that if I am a receiver that wants to proactively cause a sender to stop sending, I should use the explicit way of sending a path abandon frame. The idle timeout handles the case when a receiver is unaware of a path change that leads to communication failure. 
If that is the purpose, as long as the receiver can receive packets in the sender-to-receiver direction and acknowledge on another path, it looks to me that we should still allow the sender to use that path. But I feel if we do that, we are diverging from the bidirectional nature of a path described in the draft, which worths more discussion.\nI agree with NAME that if you want to close a path, you should use the abandon frame. This sentence covers the case where a path was idle for a while and therefore should be closed silently as it might not work anymore anyway. However, if you have received packet on the path or have an indication that packets you've sent on the path were received by the other end, the path is clearly not idle and working. I also agree with NAME that that means you only know for sure that the path is still working in one direction and that might need further discussion.\nThis was discussed in issue and PR .\nRegarding the maxidletimeout, RFC9000 says \"To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO).\" As in multi-path QUIC, the PTO is per-path, when maxidletimeout is used to close a connection, it makes sense to set maxidletimeout = 3max(PTO1, PTO2). But when it comes to close a path, I am not sure if the timeout for path1 should also depend on the PTO of path 2?\nI think there are two idle timeouts: the \"connection idle timeout\" and the \"path idle timeout\". For the \"connection idle timeout\", I agree that it should be at least 3PTO_max. Triggering this timeout closes the connection. For the \"path idle timeout\", this might be a way to have a stronger guarantee to stop using paths (closing them) after some inactivity than the text proposed in . Still, if we define such a mechanism, we need to know whether all paths should have the same path timeout value or not (in the latter case, a transport parameter might not be the best way).\nI would avoid having a different timer for a path than for a connection. That would make the protocol more complex without a lot of benefits. If a host wants to use a shorter timeout per path than the connection timeout, it can simply define it and use a PATH_ABANDON frame to indicate that the path is abandoned. However, we need to be clearer in 3.2.3 to indicate that a path can be considered valid if either packets are received on that path or packets sent on that path are acknowledged (possibly on another path)\nPresented at IETF-113 but no time for discussion.\nI have submitted PR to resolve this issue according to the above discussions.\nPR was merged but forgot to close issue...\nIn 3.2.3, we have the following paragraph: When more than one path is available, servers shall monitor the arrival of non-probing packets on the available paths. Servers SHOULD stop sending traffic on paths through where no non-probing packet was received in the last 3 path RTTs, but MAY ignore that rule if it would disqualify all available paths. To avoid idle timeout of a path, endpoints can send ack-eliciting packets such as packets containing PING frames Section 19.2 of [QUIC-TRANSPORT] on that path to keep it alive. Sending periodic PING frames also helps prevent middlebox timeout, as discussed in Section 10.1.2 of [QUIC-TRANSPORT]. This paragraph implicitly indicates that we consider a path to be active if we send and receive packets on this path. This definition works well for MPTCP where each data packet triggers an ACK on the same path. 
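The "either/both" clarification discussed in this thread amounts to treating a path as live if activity is seen in either direction, and only giving up on it when both directions have been silent for the idle period. A minimal sketch under that "stop only if both (a) and (b) hold" reading, with purely illustrative field names:

```python
# Hedged sketch of the per-path rule as clarified above: a sender keeps using
# a path as long as either a non-probing packet was received on it, or a
# non-probing packet sent on it was acknowledged, within max_idle_timeout.
def should_stop_sending(now_ms, last_rx_ms, last_ack_ms, max_idle_timeout_ms):
    idle_rx = (now_ms - last_rx_ms) >= max_idle_timeout_ms    # condition (a)
    idle_ack = (now_ms - last_ack_ms) >= max_idle_timeout_ms  # condition (b)
    return idle_rx and idle_ack
```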
In MPQUIC, a server could have two paths that it uses to send data. However, the client might decide to always send the ACKs on only one path (e.g. the lowest delay one). In this case, the two paths are clearly active, but the server could consider the higher delay path to be inactive since it does not receive packets over this path. It could send PING frames to trigger data packets, but then it would need to send one PING every 3 RTT, which seams excessive. I would suggest to rewrite the text as follows (I removed Server, because I think the recommendation should be generic): When more than one path is available, hosts shall monitor both the arrival of non-probing packets and the acknowledgements for the packets sent over each path. Hosts SHOULD stop sending traffic on paths whether either: (a) no non-probing packet was received in the last 3 path RTT or (b) no non-probing packet sent over this path was acknowledged during the last 3 path RTT, but MAY ignore that rule if it would disqualify all available paths. To avoid idle timeout of a path, endpoints can send ack-eliciting packets such as packets containing PING frames Section 19.2 of [QUIC-TRANSPORT] on that path to keep it alive. Sending periodic PING frames also helps prevent middlebox timeout, as discussed in Section 10.1.2 of [QUIC-TRANSPORT].\nThe word \"either\" means that STOP sending becomes true If either (a) is true or (b) is true. If I understand correctly, the actual meaning is to use the word \"both\" here, instead. So stop sending if both (a) no non-probing packets received and (b) no packets was acknowledged.\nYes, I think this is a good clarification and yes \"either\" might be confusing here.\nI commented on the PR. I don't like the idea that receivers would have to stop acknowledging packets in order to signal preference to not use a path. We need to think that a little more.\nSo we have a choice: either say that nodes should regularly send non-probing packets over the path that they want to keep using, or if we don't want that, accept the idea that acks on other paths show continuity, and require \"abandon path\" to terminate a path that is otherwise working, acknowledges packets, etc..\nYes, I think the second option is what we are aiming for with the new text. Stopping ack'ing packets to stop someone sending on a path is wrong and is something you should never do. If you receive a packet successfully you should ack it (also because that would otherwise confuse congestion control). But we have the abandon frame exactly for that case. However, it also seems wrong to me to require sending acks on the same path taen the data is received. So if you decide to always only send ack on one path and all payload data flows into one direction only over multiple paths, the current text would require you to send potentially unnecessary ping packets over each non-ack'ing paths or you would end up closing those paths which also is wrong as everything seems to work fine. So the question is really is, do we want to require pings in this scenario (payload traffic over multiple paths but acks only on one path) in order to ensure that all paths are usable bidirectionally? I think that's unnecessary overhead and believe Olivier's proposed addition is the correct thing to do.\nI can look at what I implemented in picoquic: If there is no traffic in either direction for the idle period, the path is automatically abandoned. If traffic is sent and acked, or traffic is received, the path is not considered idle. 
This is pretty much what Olivier proposed. If there is a \"timer loss\", and if there is an alternate, the path is put in a low priority list. The node will send ping and try get an ack. If no ack is received after N trials, the path is abandoned. If \"abandon path\" frame is received, the path is abandoned.\nI marked this issue as editorial as we have an editorial PR for it. However, I open a new issue for the per path idle timeout discussion (). Do we need another issue to clarify path closure or sending of keep-alive traffic? However this might be covered in other parts of the draft already...?\nThe modified text looks good.Looks good to meThe text looks good to me.", "new_text": "idle timeout\". When only one path is available, servers MUST follow the specifications in QUIC-TRANSPORT. When more than one path is available, hosts shall monitor the arrival of non-probing packets and the acknowledgements for the packets sent over each path. Hosts SHOULD stop sending traffic on a path if for at least max_idle_timeout milliseconds (a) no non-probing packet was received or (b) no non-probing packet sent over this path was acknowledged, but MAY ignore that rule if it would disqualify all available paths. To avoid idle timeout of a path, endpoints can send ack-eliciting packets such as packets containing PING frames (Section 19.2 of QUIC-TRANSPORT) on that path to keep it alive. Sending periodic PING frames also helps prevent middlebox timeout, as discussed in Section 10.1.2 of QUIC-TRANSPORT. Server MAY release the resource associated with paths for which no non-probing packet was received for a sufficiently long path-idle"} {"id": "q-en-oblivious-http-f321a53063316f6029843a4456216be4c46c680fff0aea36c9f518af61cdac6f", "old_text": "request of the oblivious request resource that includes the same content. When it receives a response, it sends a response to the client that includes the content of the response from the oblivious request resource. When generating a request, the proxy MUST follow the forwarding rules in Section 7.6 of HTTP. A proxy can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an encapsulated response. A proxy MUST NOT add information about the client identity when forwarding requests. This includes the Via field, the Forwarded field FORWARDED, and any similar information. A client does not depend on the proxy using an authenticated and encrypted connection to the oblivious request resource, only that information about the client not be attached to forwarded requests. 7.2.1. As there are privacy benefits from having a large rate of requests", "comments": "Let's be clearer about what it means to proxy here. Also, since it is the same text, talk about TLS assumptions more directly. This will interact poorly with . Sorry about that NAME\nFrom Vienna chat. A proxy is not a generic HTTP intermediary, and therefore is not subject to the same rules.\nFrom NAME Generic proxies would forward unknown HTTP headers between clients and targets. We should be very clear that oblivious proxies are not generic, and should not be doing things like forwarding unknown headers arbitrarily between client and target, or target and client.\nrequests. This includes the Via field, the Forwarded field {{?FORWARDED=RFC7239}}, and any similar information. 
A client does not depend on the proxy using an authenticated and encrypted connection to the oblivious request resource, only that information about the client not be attached to forwarded requests. Obviously, you need encryption on one or the other side of the proxy, right? Probably this document should require it on the inbound side, but I don't see the text that requires it. Did I just miss it?\nYeah, it's only necessary on the inbound side, per the analysis, though the document assumes it's on both sides.", "new_text": "request of the oblivious request resource that includes the same content. When it receives a response, it sends a response to the client that includes the content of the response from the oblivious request resource. When forwarding a request, the proxy MUST follow the forwarding rules in Section 7.6 of HTTP. A generic HTTP intermediary implementation is suitable for the purposes of serving an oblivious proxy resource, but additional care is needed to ensure that client privacy is maintained. Firstly, a generic implementation will forward unknown fields. For oblivious HTTP, a proxy SHOULD NOT forward unknown fields. Though clients are not expected to include fields that might contain identifying information, removing unknown fields removes this privacy risk. Secondly, generic implementations are often configured to augment requests with information about the client, such as the Via field or the Forwarded field FORWARDED. A proxy MUST NOT add information about the client identity when forwarding requests. A proxy can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an encapsulated response. 7.2.1. As there are privacy benefits from having a large rate of requests"} {"id": "q-en-oblivious-http-f321a53063316f6029843a4456216be4c46c680fff0aea36c9f518af61cdac6f", "old_text": "7.2.2. As the time at which encapsulated request or response messages are sent can reveal information to a network observer. Though messages exchanged between the oblivious proxy resource and the oblivious", "comments": "Let's be clearer about what it means to proxy here. Also, since it is the same text, talk about TLS assumptions more directly. This will interact poorly with . Sorry about that NAME\nFrom Vienna chat. A proxy is not a generic HTTP intermediary, and therefore is not subject to the same rules.\nFrom NAME Generic proxies would forward unknown HTTP headers between clients and targets. We should be very clear that oblivious proxies are not generic, and should not be doing things like forwarding unknown headers arbitrarily between client and target, or target and client.\nrequests. This includes the Via field, the Forwarded field {{?FORWARDED=RFC7239}}, and any similar information. A client does not depend on the proxy using an authenticated and encrypted connection to the oblivious request resource, only that information about the client not be attached to forwarded requests. Obviously, you need encryption on one or the other side of the proxy, right? Probably this document should require it on the inbound side, but I don't see the text that requires it. Did I just miss it?\nYeah, it's only necessary on the inbound side, per the analysis, though the document assumes it's on both sides.", "new_text": "7.2.2. This document assumes that all communication between different entities is protected by HTTPS. 
This protects information about which resources are the subject of request and prevents a network observer from being able to trivially correlate messages on either side of a proxy. As the time at which encapsulated request or response messages are sent can reveal information to a network observer. Though messages exchanged between the oblivious proxy resource and the oblivious"} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "7.3. A server that operates both Oblivious Gateway and Target Resources is responsible for removing request encryption, generating a response to the Encapsulated Request, and encrypting the response.", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. 
If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "7.3. The Oblivious Gateway Resource can be operated by a different entity than the Target Resource. However, this means that the client needs to trust the Oblivious Gateway Resource not to modify requests or responses. This analysis concerns itself with a deployment scenario where a single server provides both the Oblivious Gateway Resource and Target Resource. A server that operates both Oblivious Gateway and Target Resources is responsible for removing request encryption, generating a response to the Encapsulated Request, and encrypting the response."} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "the unprotected requests and responses, plus protections for traffic analysis; see ta. An Oblivious Gateway Resource needs to have a plan for replacing keys. This might include regular replacement of keys, which can be assigned new key identifiers. If an Oblivious Gateway Resource", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). 
If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "the unprotected requests and responses, plus protections for traffic analysis; see ta. 
Nonsecure requests - such as those with the "http" scheme as opposed to the "https" scheme - SHOULD NOT be used if the Oblivious Gateway and Target Resources are operated by different entities, as that would expose both requests and responses to modification or inspection by a network attacker. 7.4. An Oblivious Gateway Resource needs to have a plan for replacing keys. This might include regular replacement of keys, which can be assigned new key identifiers. If an Oblivious Gateway Resource"} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "within the HPKE context created using the \"message/bhttp request\" label; see repurposing-the-encapsulation-format. A server is responsible for either rejecting replayed requests or ensuring that the effect of replays does not adversely affect clients or resources; see replay. 7.4. Encrypted requests can be copied and replayed by the Oblivious Relay resource. The threat model for Oblivious HTTP allows the possibility", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. I don't think further text is needed here.\n"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response." I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed.
In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "within the HPKE context created using the \"message/bhttp request\" label; see repurposing-the-encapsulation-format. 7.5. A server is responsible for either rejecting replayed requests or ensuring that the effect of replays does not adversely affect clients or resources. Encrypted requests can be copied and replayed by the Oblivious Relay resource. The threat model for Oblivious HTTP allows the possibility"} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "field ensures that responses have unique AEAD keys and nonces even when requests are replayed. 7.4.1. Clients SHOULD include a \"Date\" header field in Encapsulated Requests, unless the Oblivious Gateway Resource does not use \"Date\"", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). 
If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "field ensures that responses have unique AEAD keys and nonces even when requests are replayed. 7.5.1. 
Clients SHOULD include a \"Date\" header field in Encapsulated Requests, unless the Oblivious Gateway Resource does not use \"Date\""} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "for the \"Date\" field to determine the time window over which the server will accept responses. 7.4.2. An Oblivious Gateway Resource can reject requests that contain a \"Date\" value that is outside of its active window with a 400 series", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. 
Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "for the \"Date\" field to determine the time window over which the server will accept responses. 7.5.2. An Oblivious Gateway Resource can reject requests that contain a \"Date\" value that is outside of its active window with a 400 series"} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "Oblivious Gateway Resource might be correlated using that information. 7.5. This document does not provide forward secrecy for either requests or responses during the lifetime of the key configuration. A measure of forward secrecy can be provided by generating a new key configuration then deleting the old keys after a suitable period. 7.6. This design does not provide post-compromise security for responses.", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. 
I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "Oblivious Gateway Resource might be correlated using that information. 7.6. This document does not provide forward secrecy for either requests or responses during the lifetime of the key configuration. A measure of forward secrecy can be provided by generating a new key configuration then deleting the old keys after a suitable period. 7.7. This design does not provide post-compromise security for responses."} {"id": "q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e", "old_text": "The total number of affected messages affected by server key compromise can be limited by regular rotation of server keys. 7.7. Including a \"Date\" field in requests reveals some information about the client clock. This might be used to fingerprint clients UWT or", "comments": "NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. 
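The forward secrecy text in this record depends on regularly replacing keys. The following sketch shows one way a gateway might rotate key configurations; the rotation period, retirement delay, key-identifier scheme, and the generate_keypair helper are assumptions made purely for illustration.

import time

ROTATION_PERIOD = 7 * 24 * 3600   # generate a new configuration weekly (arbitrary choice)
RETIREMENT_DELAY = 24 * 3600      # keep old keys long enough to decrypt in-flight requests

class GatewayKeys:
    def __init__(self):
        self.keys = {}        # key_id -> (private_key, created_at)
        self.active_id = None

    def rotate(self, generate_keypair):
        # Assign the new configuration a fresh key identifier.
        new_id = 0 if self.active_id is None else (self.active_id + 1) % 256
        self.keys[new_id] = (generate_keypair(), time.time())
        self.active_id = new_id
        # Deleting old keys after a suitable period is what bounds the number of
        # messages exposed by a later key compromise.
        cutoff = time.time() - (ROTATION_PERIOD + RETIREMENT_DELAY)
        for key_id, (_, created) in list(self.keys.items()):
            if key_id != self.active_id and created < cutoff:
                del self.keys[key_id]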
As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay. (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). If the burden is on the application developer to ensure that request/response pairs traversing the gateway are to be resilient to side-channel attacks of those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that. I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPs scenarios, it is more difficult to perform side channel attacks as a flow observer, especially if multiplexing of requests and responses are performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? 
I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it.", "new_text": "The total number of affected messages affected by server key compromise can be limited by regular rotation of server keys. 7.8. Including a \"Date\" field in requests reveals some information about the client clock. This might be used to fingerprint clients UWT or"} {"id": "q-en-oblivious-http-cec08dc75dbdfa9940e8aaaac18bab72ce7a9dee4b76b1b8e217001d251ec31d", "old_text": "7. One goal of this design is that independent client requests are only linkable by the chosen key configuration. The Oblivious Relay and Gateway resources can link requests using the same key configuration by matching KeyConfig.key_id, or, if the Target Resource is willing to use trial decryption, a limited set of key configurations that share an identifier. An Oblivious Relay Resource can link requests using the public key corresponding to KeyConfig.key_id or via the URI of the Oblivious Gateway Resource. We refer to this set of information as the gateway's configuration. Whether or not targets can link requests depends on how gateway configuration information is produced and discovered by clients. Specifically, gateways can maliciously construct configurations to track individual clients. Ideally, all clients share a consistent view of the gateway configuration. The number of different valid configurations can exponentially partition the anonymity set of clients. For example, if there are two valid configurations, that would yield two anonymity sets consisting of clients that used one configuration and clients that used the other configuration. If the size of each set is large enough, this may not be a significant loss of privacy in practice. Ensuring that many clients share a configuration is necessary to provide privacy for clients. A specific method for a client to acquire configurations is not included in this specification. Applications using this design MUST provide accommodations to mitigate tracking using gateway configurations. CONSISTENCY provides options for ensuring that configurations are consistent between clients. 8.", "comments": "I was just going to add something about correlating requests based on their content, but then it seemed like the text was duplicated some. Here is my attempt at making this better. I also expanded the notion of \"configuration\" to include the identity of the relay. That made it a little tricky - we really do want diversity of relays - but it makes more sense to talk about minimum anonymity set size rather than to imply that there can be a single configuration, because that is unrealistic.\nHi, The specification talks about State. Could I suggest that it explicitly mention 'or other user identifiable payloads.' Because things like JSON documents (e.g. a JWT) and SAML assertions are not 'cookies' but can carry information that might be user identifiable. Thoughts?\nI think we've given short shrift to the way in which the content of requests can be used to link them. Perhaps some new text for Section 8...\nNAME that text looks good to me Wanna toss up a PR?", "new_text": "7. 
One goal of this design is that independent client requests are only linkable by their content. However, the choice of client configuration might be used to correlate requests. A client configuration includes the Oblivious Relay Resource URI, the Oblivious Gateway key configuration (KeyConfig), and Oblivious Gateway Resource URI. A configuration is active if clients can successfully use it for interacting with a target. Oblivious Relay and Gateway Resources can identify when requests use the same configuration by matching "KeyConfig.key_id" or the Oblivious Gateway Resource URI. The Oblivious Gateway Resource might use the source address of requests to correlate requests that use an Oblivious Relay Resource run by the same operator. If the Oblivious Gateway Resource is willing to use trial decryption, requests can be further separated into smaller groupings based on the keys that are used. Each active client configuration partitions the client anonymity set. In practice, it is infeasible to reduce the number of active configurations to one. Enabling diversity in choice of Oblivious Relay Resource naturally increases the number of active configurations. A small number of configurations might need to be active to allow for key rotation and server maintenance. Client privacy depends on having each configuration used by many other clients. It is critical to prevent the use of unique client configurations, which might be used to track individual clients, but it is also important to avoid creating small groupings of clients that might weaken privacy protections. A specific method for a client to acquire configurations is not included in this specification. Applications using this design MUST provide accommodations to mitigate tracking using client configurations. CONSISTENCY provides options for ensuring that client configurations are consistent between clients. The content of requests or responses, if used in forming new requests, can be used to correlate requests. This includes obvious methods of linking requests, like cookies COOKIES, but it also includes any information in either message that might affect how subsequent requests are formulated. For example, FIELDING describes how interactions that are individually stateless can be used to build a stateful system when a client acts on the content of a response. 8."} {"id": "q-en-oblivious-http-358fa79ffbe6b77e6946689b1e69edcca3ab443bc0c769bf132dd3f69d0b4071", "old_text": "6. In this design, a client wishes to make a request of a server that is authoritative for the Target Resource. The client wishes to make this request without linking that request with either: The identity at the network and transport layer of the client (that is, the client IP address and TCP or UDP port number the", "comments": "Most of the text was correct in that we talk about making a request of a resource. Where the text refers to making requests of a server, those usages seem to be correct.\nSection 5 details HTTP usage. It seems it's technically correct but the section closes with a sequence of paragraphs that jump back and forth between statements about how an Oblivious Gateway Resource treats requests or responses. Perhaps these could be consolidated to present requests first consistently. I.e. swap the final two paragraphs around\nmnot recently made a PR (now merged) that replaced "requests of a server" with "requests to an origin server" and that feels both grammatically and technically more correct.
However, there are still few cases peppered through the doc that use \"requests of a server\". I'd kindly ask the editors to revisit these and consider if they could be made more consistent with the above", "new_text": "6. In this design, a client wishes to make a request of a server that is authoritative for a Target Resource. The client wishes to make this request without linking that request with either: The identity at the network and transport layer of the client (that is, the client IP address and TCP or UDP port number the"} {"id": "q-en-oblivious-http-e3d60f055877e613b56bf7d665a750f22fe5c106be158460aab858c900055590", "old_text": "analysis could be used to match messages that are forwarded by the relay. A relay could, as part of its function, add delays in order to increase the anonymity set into which each message is attributed. This could latency to the overall time clients take to receive a response, which might not be what some clients want. A relay that forwards large volumes of exchanges can provide better privacy by providing larger sets of messages that need to be matched.", "comments": "I presume the idea here is to add jitter to the system (different delays for different requests) rather than compound delays on a single request? Even if not, this paragraph could probably benefit from some language fixups. There's a missing word and \"might not be what some clients want\" could benefit from a rephrasing to make it stronger.\nclearer, thanks", "new_text": "analysis could be used to match messages that are forwarded by the relay. A relay could, as part of its function, delay requests before forwarding them. Delays might increase the anonymity set into which each request is attributed. Any delay also increases the time that a client waits for a response, so delays SHOULD only be added with the consent - or at least awareness - of clients. A relay that forwards large volumes of exchanges can provide better privacy by providing larger sets of messages that need to be matched."} {"id": "q-en-oblivious-http-24e01fc4958f80b8f50d33be4201b89f64bfbd3f19367dd53090291e4383ce55", "old_text": "response to the POST request made to that resource. Errors detected by the after successfully removing encapsulation and errors detected by the MUST be sent in an . 6.", "comments": "This starts by moving the good text from to the \"Errors\" section, where I realized that it duplicated some of that language. It was more complete, so I merged the two. I then added some language about what happens to errors prior to decapsulation. This doesn't define an error signaling format. Though that might be useful, it is also a lot more work and I'm leery of scope creep in this document. My view: if you get an error, you have screwed up somewhere. You probably want to have a human look at what is going on. The privacy consequences of automated responses are difficult to work out properly. Hence this proposed change.\nThis is an improvement but still insufficient with respect to configuration mismatch signaling.\nLet's start with this one and I'll look at addressing the open issue in a follow-up.\nThanks, I think this is clear enough now.", "new_text": "response to the POST request made to that resource. Errors detected by the after successfully removing encapsulation and errors detected by the MUST be sent in an . This might be because the request is malformed or the does not produce a response. 
In either case the can generate a response with an appropriate error status code (such as 400 (Bad Request) or 504 (Gateway Timeout); see HTTP and HTTP, respectively). This response is encapsulated in the same way as a successful response. Errors in the encapsulation of requests mean that responses cannot be encapsulated. This includes cases where the is incorrect or outdated. The can generate and send a response with an error status to the . This response MAY be forwarded to the or treated by the as a failure. If a receives a response that is not an , this could indicate that the client configuration used to construct the request is incorrect or out of date. 6."} {"id": "q-en-oblivious-http-03a0f6ef74f128b17909d6d98d7056691b85d4c5f6a64c0db386e5650d44b298", "old_text": "4.6. The encrypted payload of an OHTTP request and response is a binary HTTP message BINARY. The and agree on this encrypted payload type by specifying the media type \"message/bhttp\" in the HPKE info string and HPKE export context string for request and response encryption, respectively. Future specifications may repurpose the encapsulation mechanism described in this document. This requires that the specification", "comments": "From NAME Interesting that this is the first time OHTTP appear in the text, excluding pseudocode and mediatype - please expand (or define somewhere else, for example in terminology).", "new_text": "4.6. The encrypted payload of an Oblivious HTTP request and response is a binary HTTP message BINARY. The and agree on this encrypted payload type by specifying the media type \"message/bhttp\" in the HPKE info string and HPKE export context string for request and response encryption, respectively. Future specifications may repurpose the encapsulation mechanism described in this document. This requires that the specification"} {"id": "q-en-oblivious-http-44d1df1c595c5dce7cb4a1c81be048127e4e9a14c3c23f5f2d68abfd7b908c3b", "old_text": "This draft includes pseudocode that uses the functions and conventions defined in HPKE. This draft uses the variable-length integer encoding from Section 16 of QUIC. Encoding and decoding variable-length integers to a sequence of bytes are described using the functions \"vencode()\" and \"vdecode()\". The function \"len()\" takes the length of a sequence of bytes. Formats are described using notation from Section 1.3 of QUIC.", "comments": "Switches to an 8-bit key identifier here as well. This format now doesn't need varints. Yay. Implemented this in my ohttp library and it's fine.\nSplitting this from . It seems likely that a single KEM key will support multiple KDF or AEAD options (NSS will soon support 3 of the former, and 2 of the latter, maybe 3 of each). Having a configuration that includes a single set of (KEM, key, and key identifier) with a list of KDF+AEAD pairs, akin to ECH, would make it easier to support that sort of deployment. Without that, key identifiers need to carry information about the KDF and AEAD. To support this, the format needs to be adjusted to carry the KDF and AEAD identifier. In doing so, we probably want to encode the choice of KDF and AEAD in the key schedule or AAD accordingly.\nShould the key ID length be explicitly authenticated, e.g., by including it in the AAD, or should the AEAD implicitly authenticate things? Unclear!\nShould we even bother with an AAD at all? The best reason is that the keyID determines which key the server uses, which is information that factors into decisions. 
(A related question might be why doesn't HPKE take variables as input so that things like key identifiers can be integrated into the key schedule.)\nMy take: yes. I need to think through the implications of no additional authentication at all, and in the mean time, I'd be more comfortable with simply authenticating everything possible. I think it was discussed, but it makes the interface a mess. And since applications can accomplish the same thing by shoving what they want authenticated into the AAD, it was voted down.\nYeah, you and I both know that putting stuff in the AAD isn't a real substitute for having it in the key schedule. I completely understand how this would mess with the purity of the design, but TLS managed by wrapping its labels better (which also addresses a known problem with HKDF not being injective).\nFixed by virtue of making the key ID fixed length in .", "new_text": "This draft includes pseudocode that uses the functions and conventions defined in HPKE. Encoding an integer to a sequence of bytes in network byte order is described using the function \"encode(n, v)\", where \"n\" is the number of bytes and \"v\" is the integer value. The function \"len()\" returns the length of a sequence of bytes. Formats are described using notation from Section 1.3 of QUIC."} {"id": "q-en-oblivious-http-44d1df1c595c5dce7cb4a1c81be048127e4e9a14c3c23f5f2d68abfd7b908c3b", "old_text": "5.1. Clients encapsulate a request \"request\" with an HPKE public key \"pkR\", whose Key Identifier is \"keyID\" as follows: Compute an HPKE context using \"pkR\", yielding \"context\" and encapsulation key \"enc\". Encrypt (seal) \"request\" with \"keyID\" as associated data using \"context\", yielding ciphertext \"ct\". Concatenate the length of \"keyID\" as a variable-length integer, \"keyID\", \"enc\", and \"ct\", yielding an Encapsulated Request \"enc_request\". Note that \"enc\" is of fixed-length, so there is no ambiguity in parsing \"enc\" and \"ct\". In pseudocode, this procedure is as follows: Servers decrypt an Encapsulated Request by reversing this process. Given an Encapsulated Request \"enc_request\", a server: Parses \"enc_request\" into \"keyID\", \"enc\", and \"ct\" (indicated using the function \"parse()\" in pseudocode). The server is then able to find the HPKE private key, \"skR\", corresponding to \"keyID\". If no such key exists, the server returns an error; see errors. Compute an HPKE context using \"skR\" and the encapsulated key \"enc\", yielding \"context\". Construct additional associated data, \"aad\", as the Key Identifier \"keyID\" from \"enc_request\". Decrypt \"ct\" using \"aad\" as associated data, yielding \"request\" or an error on failure. If decryption fails, the server returns an", "comments": "Switches to an 8-bit key identifier here as well. This format now doesn't need varints. Yay. Implemented this in my ohttp library and it's fine.\nSplitting this from . It seems likely that a single KEM key will support multiple KDF or AEAD options (NSS will soon support 3 of the former, and 2 of the latter, maybe 3 of each). Having a configuration that includes a single set of (KEM, key, and key identifier) with a list of KDF+AEAD pairs, akin to ECH, would make it easier to support that sort of deployment. Without that, key identifiers need to carry information about the KDF and AEAD. To support this, the format needs to be adjusted to carry the KDF and AEAD identifier. 
In doing so, we probably want to encode the choice of KDF and AEAD in the key schedule or AAD accordingly.\nShould the key ID length be explicitly authenticated, e.g., by including it in the AAD, or should the AEAD implicitly authenticate things? Unclear!\nShould we even bother with an AAD at all? The best reason is that the keyID determines which key the server uses, which is information that factors into decisions. (A related question might be why doesn't HPKE take variables as input so that things like key identifiers can be integrated into the key schedule.)\nMy take: yes. I need to think through the implications of no additional authentication at all, and in the mean time, I'd be more comfortable with simply authenticating everything possible. I think it was discussed, but it makes the interface a mess. And since applications can accomplish the same thing by shoving what they want authenticated into the AAD, it was voted down.\nYeah, you and I both know that putting stuff in the AAD isn't a real substitute for having it in the key schedule. I completely understand how this would mess with the purity of the design, but TLS managed by wrapping its labels better (which also addresses a known problem with HKDF not being injective).\nFixed by virtue of making the key ID fixed length in .", "new_text": "5.1. Clients encapsulate a request \"request\" using values from a key configuration: the key identifier from the configuration, \"keyID\", the public key from the configuration, \"pkR\", and a selected combination of KDF, identified by \"kdfID\", and AEAD, identified by \"aeadID\". The client then constructs an encapsulated request, \"enc_request\", as follows: Compute an HPKE context using \"pkR\", yielding \"context\" and encapsulation key \"enc\". Construct associated data, \"aad\", by concatenating the values of \"keyID\", \"kdfID\", and \"aeadID\", as 8-, 16- and 16-bit integers respectively, each in network byte order. Encrypt (seal) \"request\" with \"aad\" as associated data using \"context\", yielding ciphertext \"ct\". Concatenate the three values of \"aad\", \"enc\", and \"ct\", yielding an Encapsulated Request \"enc_request\". Note that \"enc\" is of fixed-length, so there is no ambiguity in parsing this structure. In pseudocode, this procedure is as follows: Servers decrypt an Encapsulated Request by reversing this process. Given an Encapsulated Request \"enc_request\", a server: Parses \"enc_request\" into \"keyID\", \"kdfID\", \"aeadID\", \"enc\", and \"ct\" (indicated using the function \"parse()\" in pseudocode). The server is then able to find the HPKE private key, \"skR\", corresponding to \"keyID\". a. If \"keyID\" does not identify a key, the server returns an error. b. If \"kdfID\" and \"aeadID\" identify a combination of KDF and AEAD that the server is unwilling to use with \"skR\", the server returns an error. Compute an HPKE context using \"skR\" and the encapsulated key \"enc\", yielding \"context\". Construct additional associated data, \"aad\", from \"keyID\", \"kdfID\", and \"aeadID\" or as the first five bytes of \"enc_request\". Decrypt \"ct\" using \"aad\" as associated data, yielding \"request\" or an error on failure. If decryption fails, the server returns an"} {"id": "q-en-oblivious-http-c96efb698c231a86e2fa28d8d5e69e4a1c9ab59088426c948334ac644fc9483d", "old_text": "Padding is a capability provided by binary HTTP messages; see BINARY. 
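To make the revised encapsulation layout above concrete, here is a minimal Python sketch of the server-side parsing step only (the HPKE decryption itself is omitted); the enc_len parameter and the example of 32 bytes for DHKEM(X25519) are assumptions for illustration, not requirements of the text:

    def parse_enc_request(enc_request: bytes, enc_len: int):
        # Layout per the revised text: keyID (1 byte) || kdfID (2) || aeadID (2)
        # || enc (fixed length for the KEM, e.g. 32 for DHKEM(X25519)) || ct.
        key_id = enc_request[0]
        kdf_id = int.from_bytes(enc_request[1:3], "big")
        aead_id = int.from_bytes(enc_request[3:5], "big")
        enc = enc_request[5:5 + enc_len]
        ct = enc_request[5 + enc_len:]
        aad = enc_request[:5]  # the same five bytes double as the AEAD associated data
        return key_id, kdf_id, aead_id, enc, ct, aad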
If the encapsulation method described in this document is used to protect a different message type (see repurposing), that message format might need to include padding support. 6.3.", "comments": "This came out of , where we realized that the relay padding is useful, but needs to employ different methods than client or gateway. The relay can't use binary HTTP padding; it has to insert stuff at the HTTP layer. Include citations for padding in those documents. Note that these citations are slightly misleading. Most relays won't have access at the level necessary to add that padding. More likely, the relay will need to do as NAME originally suggested, namely adding fields of an appropriate size. But we don't have good citations for that, so I'm cheaping out. (Note that RFC 9112, Section 7.1.1 talks about using chunk extensions for padding, but I'm going to pretend that doesn't exist; it's even less usable than the HTTP/2 and HTTP/3 options.)\nTake it or leave it!", "new_text": "Padding is a capability provided by binary HTTP messages; see BINARY. If the encapsulation method described in this document is used to protect a different message type (see repurposing), that message format might need to include padding support. can also use padding for the same reason, but need to operate at the HTTP layer since they cannot manipulate binary HTTP messages; for example, see HTTP2 or HTTP3). 6.3."} {"id": "q-en-oblivious-http-210fb5ce5a85357bf074c2a110e6a7387044272e67e4365e8726ab5239c8f098", "old_text": "6.5.1. SHOULD include a \"Date\" header field in Encapsulated Requests, unless the does not use \"Date\" for anti-replay purposes. Though HTTP requests often do not include a \"Date\" header field, the value of this field might be used by a server to limit the amount of", "comments": "Comment by NAME In How does a client know this? Preconfiguration ? Are we talking ms? sec? minutes? Any advise to implementers ?\nIt can't be less than 2s, because the granularity of the field doesn't permit detecting anything finer than 1s. I've done what I can.\nThis still does not seem to address: Maybe just cut off the \u201cunless\u201d part if there is no signal for this? Paul\nNAME also included URL which I think should address your concern:\nThanks. That commit does address the issue, mostly. I guess it moves it do an unspecified provisioning layer :P Paul", "new_text": "6.5.1. SHOULD include a \"Date\" header field in Encapsulated Requests, unless the has prior knowledge that indicates that the does not use \"Date\" for anti-replay purposes. Though HTTP requests often do not include a \"Date\" header field, the value of this field might be used by a server to limit the amount of"} {"id": "q-en-oblivious-http-210fb5ce5a85357bf074c2a110e6a7387044272e67e4365e8726ab5239c8f098", "old_text": "might need to allow for the time it takes requests to arrive from the , with a time window that is large enough to allow for differences in clocks. Insufficient tolerance of time differences could result in valid requests being unnecessarily rejected. MUST NOT treat the time window as secret information. An attacker can actively probe with different values for the \"Date\" field to", "comments": "Comment by NAME In How does a client know this? Preconfiguration ? Are we talking ms? sec? minutes? Any advise to implementers ?\nIt can't be less than 2s, because the granularity of the field doesn't permit detecting anything finer than 1s. 
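The relay-side padding idea raised in the comment above (adding fields of an appropriate size, since a relay cannot touch the binary HTTP payload) might look roughly like the following Python sketch; the field name "padding" and the 256-byte bucket are made up for illustration:

    def pad_forwarded_headers(headers: dict, body_len: int, bucket: int = 256) -> dict:
        # Round the forwarded message up to the next multiple of `bucket`
        # by adding a dummy header field of the missing length.
        current = body_len + sum(len(k) + len(v) + 4 for k, v in headers.items())
        target = -(-current // bucket) * bucket  # ceiling to the bucket boundary
        filler = target - current
        if filler > 0:
            headers = dict(headers, padding="0" * filler)
        return headers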
I've done what I can.\nThis still does not seem to address: Maybe just cut off the \u201cunless\u201d part if there is no signal for this? Paul\nNAME also included URL which I think should address your concern:\nThanks. That commit does address the issue, mostly. I guess it moves it do an unspecified provisioning layer :P Paul", "new_text": "might need to allow for the time it takes requests to arrive from the , with a time window that is large enough to allow for differences in clocks. Insufficient tolerance of time differences could result in valid requests being unnecessarily rejected. Beyond allowing for multiple round trip times - to account for retransmission - network delays are unlikely to be significant in determining the size of the window, unless all potential are known to have excellent time- keeping. A specific window size might need to be determined experimentally. MUST NOT treat the time window as secret information. An attacker can actively probe with different values for the \"Date\" field to"} {"id": "q-en-oblivious-http-e1fb9c20565c511656960f3c8e3771ee39080f3c0125b26394a22fffa7ecdd52", "old_text": "corresponds to the key identifier, but the encapsulated request cannot be successfully decrypted using the key. 9. Please update the \"Media Types\" registry at", "comments": "This isn't anything that special, but this is a case where a bunch of our established defenses don't work.\nThe new changes LGTM. Feel free to merge assuming you're done with edits!\nThis means that servers need anti-replay protections, especially if they are performing non-idempotent operations.", "new_text": "corresponds to the key identifier, but the encapsulated request cannot be successfully decrypted using the key. 8.4. Encapsulated requests can be copied and replayed by the oblivious proxy resource. The design of oblivious HTTP does not assume that the oblivious proxy resource will not replay requests. In addition, if a client sends an encapsulated request in TLS early data (see Section 8 of TLS and RFC8470), a network-based adversary might be able to cause the request to be replayed. In both cases, the effect of a replay attack and the mitigations that might be employed are similar to TLS early data. A client or oblivious proxy resource MUST NOT automatically attempt to retry a failed request unless it receives a positive signal indicating that the request was not processed or forwarded. The HTTP/2 REFUSED_STREAM error code (Section 8.1.4 of RFC7540), the HTTP/3 H3_REQUEST_REJECTED error code (Section 8.1 of QUIC-HTTP), or a GOAWAY frame (in either protocol version) are all sufficient signals that no processing occurred. Connection failures or interruptions are not sufficient signals that no processing occurred. The anti-replay mechanisms described in Section 8 of TLS are generally applicable to oblivious HTTP requests. Servers can use the encapsulated keying material as a unique key for identifying potential replays. The mechanism used in TLS for managing differences in client and server clocks cannot be used as it depends on being able to observe previous interactions. Oblivious HTTP explicitly prevents such linkability. Applications can still include an explicit indication of time to limit the span of time over which a server might need to track accepted requests. Clock information could be used for client identification, so reduction in precision or obfuscation might be necessary. 
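For illustration only, a gateway-side freshness check on the request's Date field might look like the Python sketch below; the 30-second window is an arbitrary example (the text only notes that anything below about 2 seconds is meaningless given the field's one-second granularity, and that a practical window may need to be found experimentally):

    import email.utils
    import time

    def date_is_fresh(date_header: str, window_seconds: float = 30.0) -> bool:
        # Reject requests whose Date is too far from the gateway's clock;
        # the window absorbs clock skew and transit/retransmission delay.
        then = email.utils.parsedate_to_datetime(date_header).timestamp()
        return abs(time.time() - then) <= window_seconds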
The considerations in RFC8470 as they relate to managing the risk of replay also apply, though there is no option to delay the processing of a request. Limiting requests to those with safe methods might not be satisfactory for some applications, particularly those that involve the submission of data to a server. The use of idempotent methods might be of some use in managing replay risk, though it is important to recognize that different idempotent requests can be combined to be not idempotent. Idempotent actions with a narrow scope based on the value of a protected nonce could enable data submission with limited replay exposure. A nonce might be added as an explicit part of a request, or, if the oblivious request and target resources are co-located, the encapsulated keying material can be used to produce a nonce. The server-chosen \"response_nonce\" field ensures that responses have unique AEAD keys and nonces even when requests are replayed. 9. Please update the \"Media Types\" registry at"} {"id": "q-en-oblivious-http-419d5af59d28c65e493666e11314734107f1274b5e598c64aeee4715c34ba66a", "old_text": "resource; see key-configuration. Clients MUST NOT include identifying information in the request that is encapsulated. Clients cannot carry connection-level state between requests as they only establish direct connections to the proxy responsible for the", "comments": "This was vague enough that an example might help.\nYeah, this seems good. Do we want to explicitly mention cookies?", "new_text": "resource; see key-configuration. Clients MUST NOT include identifying information in the request that is encapsulated. Identifying information includes cookies COOKIES, authentication credentials or tokens, and any information that might reveal client-specific information such as account credentials. Clients cannot carry connection-level state between requests as they only establish direct connections to the proxy responsible for the"} {"id": "q-en-oblivious-http-b95e8f194a071498c04e1cf9adbb983866e2c06a17f12aa58beaad93053b2542", "old_text": "HPKE algorithms that are used with that key. The identity of an oblivious proxy resource that will forward encapsulated requests and responses to the oblivious request resource. This information allows the client to make a request of an oblivious target resource without that resource having only a limited ability", "comments": "This makes the choice of a 1:1 mapping clear: avoid being used as an open proxy through explicit configuration of your ACLs. cc NAME\nThe proxy has a fixed mapping. It's resource maps to just one request resource. That's a deliberate choice that we should probably be much more up front about.\nYes, I think this makes sense, but it took me a while to realize it when implementing. Having a section on mapping state on the proxy would be useful.\nVittorio says: This relates in particular to the potential for generic services with open destination (not a capability in the proposal) to be used for nefarious purposes (spamming, bot net command and control, hiding illegal activity, etc...).", "new_text": "HPKE algorithms that are used with that key. The identity of an oblivious proxy resource that will forward encapsulated requests and responses to a single oblivious request resource. See proxy-state for more information about the mapping between oblivious proxy and oblivious request resources. 
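The fixed, one-to-one mapping described above could be expressed as a simple allowlist on the proxy; the paths and gateway URLs below are illustrative only:

    # Each oblivious proxy resource forwards to exactly one pre-configured
    # oblivious request resource; anything else is refused, so the proxy
    # cannot be used as a generic open proxy.
    PROXY_TO_REQUEST_RESOURCE = {
        "/proxy/dns": "https://gateway.example/ohttp/dns",
        "/proxy/telemetry": "https://gateway.example/ohttp/telemetry",
    }

    def forward_target(proxy_path: str) -> str:
        try:
            return PROXY_TO_REQUEST_RESOURCE[proxy_path]
        except KeyError:
            raise PermissionError("no oblivious request resource configured for this path")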
This information allows the client to make a request of an oblivious target resource without that resource having only a limited ability"} {"id": "q-en-oblivious-http-b95e8f194a071498c04e1cf9adbb983866e2c06a17f12aa58beaad93053b2542", "old_text": "10. Using Oblivious HTTP adds both cryptographic and latency to requests relative to a simple HTTP request-response exchange. Deploying proxy services that are on path between clients and servers avoids adding", "comments": "This makes the choice of a 1:1 mapping clear: avoid being used as an open proxy through explicit configuration of your ACLs. cc NAME\nThe proxy has a fixed mapping. It's resource maps to just one request resource. That's a deliberate choice that we should probably be much more up front about.\nYes, I think this makes sense, but it took me a while to realize it when implementing. Having a section on mapping state on the proxy would be useful.\nVittorio says: This relates in particular to the potential for generic services with open destination (not a capability in the proposal) to be used for nefarious purposes (spamming, bot net command and control, hiding illegal activity, etc...).", "new_text": "10. This section discusses various operational and deployment considerations. 10.1. Using Oblivious HTTP adds both cryptographic and latency to requests relative to a simple HTTP request-response exchange. Deploying proxy services that are on path between clients and servers avoids adding"} {"id": "q-en-oblivious-http-b95e8f194a071498c04e1cf9adbb983866e2c06a17f12aa58beaad93053b2542", "old_text": "similar system ODoH found that deploying proxies close to servers was most effective in minimizing additional latency. Oblivious HTTP might be incompatible with network interception regimes, such as those that rely on configuring clients with trust anchors and intercepting TLS connections. While TLS might be", "comments": "This makes the choice of a 1:1 mapping clear: avoid being used as an open proxy through explicit configuration of your ACLs. cc NAME\nThe proxy has a fixed mapping. It's resource maps to just one request resource. That's a deliberate choice that we should probably be much more up front about.\nYes, I think this makes sense, but it took me a while to realize it when implementing. Having a section on mapping state on the proxy would be useful.\nVittorio says: This relates in particular to the potential for generic services with open destination (not a capability in the proposal) to be used for nefarious purposes (spamming, bot net command and control, hiding illegal activity, etc...).", "new_text": "similar system ODoH found that deploying proxies close to servers was most effective in minimizing additional latency. 10.2. This protocol assumes a fixed, one-to-one mapping between the Oblivious Proxy Resource and the Oblivious Request Resource. This means that any encapsulated request sent to the Oblivious Proxy Resource will always be forwarded to the Oblivious Request Resource. This constraint was imposed to simplify proxy configuration and mitigate against the Oblivious Proxy Resource being used as a generic proxy for unknown Oblivious Request Resources. The proxy will only forward for Oblivious Request Resources that it has explicitly configured and allowed. It is possible for a server to be configured with multiple Oblivious Proxy Resources, each for a different Oblivious Request Resource as needed. 10.3. 
Oblivious HTTP might be incompatible with network interception regimes, such as those that rely on configuring clients with trust anchors and intercepting TLS connections. While TLS might be"} {"id": "q-en-ops-drafts-bb3733e5d0d2b385746a1a91b690ab397fb62f8e534ddb7ec68c01833dc75e1f", "old_text": "2.1. QUIC packets may have either a long header, or a short header. The first bit of the QUIC header indicates which type of header is present. The long header exposes more information. It is used during connection establishment, including version negotiation, retry, and", "comments": "(which didn't require any non-editorial changes; the review is done now)\nAlso call out invariants (esp. in header structure and handshake) explicitly: v2 may/will break functionality outside the invariants.\nDocument is currently in in with final invariants; do one last pass to make sure everything documented in Section 17 as expose information appears here (which is basically the same check).", "new_text": "2.1. QUIC packets may have either a long header, or a short header. The first bit of the QUIC header us the Header Form bit, and indicates which type of header is present. The long header exposes more information. It is used during connection establishment, including version negotiation, retry, and"} {"id": "q-en-ops-drafts-bb3733e5d0d2b385746a1a91b690ab397fb62f8e534ddb7ec68c01833dc75e1f", "old_text": "The following information is exposed in QUIC packet headers: demux bit: the second most significant bit of the first octet every QUIC packet of the current version is set to 1, for demultiplexing with other UDP-encapsulated protocols. latency spin bit: the third most significant bit of first octet in the short packet header. The spin bit is set by endpoints such", "comments": "(which didn't require any non-editorial changes; the review is done now)\nAlso call out invariants (esp. in header structure and handshake) explicitly: v2 may/will break functionality outside the invariants.\nDocument is currently in in with final invariants; do one last pass to make sure everything documented in Section 17 as expose information appears here (which is basically the same check).", "new_text": "The following information is exposed in QUIC packet headers: \"fixed bit\": the second most significant bit of the first octet most QUIC packets of the current version is currently set to 1, for demultiplexing with other UDP-encapsulated protocols. latency spin bit: the third most significant bit of first octet in the short packet header. The spin bit is set by endpoints such"} {"id": "q-en-ops-drafts-bb3733e5d0d2b385746a1a91b690ab397fb62f8e534ddb7ec68c01833dc75e1f", "old_text": "end-to-end RTT. See spin-usage for further details. header type: the long header has a 2 bit packet type field following the Header Form bit. Header types correspond to stages of the handshake; see Section 17.2 of QUIC-TRANSPORT. version number: the version number present in the long header, and identifies the version used for that packet. Note that during", "comments": "(which didn't require any non-editorial changes; the review is done now)\nAlso call out invariants (esp. in header structure and handshake) explicitly: v2 may/will break functionality outside the invariants.\nDocument is currently in in with final invariants; do one last pass to make sure everything documented in Section 17 as expose information appears here (which is basically the same check).", "new_text": "end-to-end RTT. See spin-usage for further details. 
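As a concrete illustration of the header bits listed above, a Python sketch of reading the first byte of a QUIC packet, assuming the QUIC version 1 layout:

    def first_byte_info(b: int) -> dict:
        # Bit 7: Header Form (1 = long header), bit 6: fixed bit,
        # and, for short headers only, bit 5: latency spin bit.
        info = {"long_header": bool(b & 0x80), "fixed_bit": bool(b & 0x40)}
        if info["long_header"]:
            # 2-bit packet type: 0 = Initial, 1 = 0-RTT, 2 = Handshake, 3 = Retry.
            info["packet_type"] = (b & 0x30) >> 4
        else:
            info["spin_bit"] = bool(b & 0x20)
        return info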
header type: the long header has a 2 bit packet type field following the Header Form and fixed bits. Header types correspond to stages of the handshake; see Section 17.2 of QUIC-TRANSPORT for details. version number: the version number present in the long header, and identifies the version used for that packet. Note that during"} {"id": "q-en-ops-drafts-bb3733e5d0d2b385746a1a91b690ab397fb62f8e534ddb7ec68c01833dc75e1f", "old_text": "subsequent connection attempt. The length of the token is explicit in both cases. Retry and Version Negotiation packets are not encrypted or obfuscated in any way. For other kinds of packets, other information in the packet headers is cryptographically obfuscated: packet number: Most packets (with the exception of Version Negotiation and Retry packets) have an associated packet number; however, this packet number is encrypted, and therefore not of use to on-path observers. The offset of the packet number is encoded in the header for packets with long headers, while it is implicit (depending on Destination Connection ID length) in short header packets. The length of the packet number is cryptographically obfuscated.", "comments": "(which didn't require any non-editorial changes; the review is done now)\nAlso call out invariants (esp. in header structure and handshake) explicitly: v2 may/will break functionality outside the invariants.\nDocument is currently in in with final invariants; do one last pass to make sure everything documented in Section 17 as expose information appears here (which is basically the same check).", "new_text": "subsequent connection attempt. The length of the token is explicit in both cases. Retry (Section 17.2.5 of QUIC-TRANSPORT) and Version Negotiation (Section 17.2.1 of QUIC-TRANSPORT) packets are not encrypted or obfuscated in any way. For other kinds of packets, other information in the packet headers is cryptographically obfuscated: packet number: All packets except Version Negotiation and Retry packets have an associated packet number; however, this packet number is encrypted, and therefore not of use to on-path observers. The offset of the packet number is encoded in the header for packets with long headers, while it is implicit (depending on Destination Connection ID length) in short header packets. The length of the packet number is cryptographically obfuscated."} {"id": "q-en-ops-drafts-919b0ae5b322ec5d078983148d001e80cd48b0367abb1d26c0e4146816b519b4", "old_text": "QUIC's stream multiplexing feature allows applications to run multiple streams over a single connection, without head-of-line blocking between streams, associated at a point in time with a single five-tuple. Stream data is carried within Frames, where one (UDP) packet on the wire can carry one of multiple stream frames. Stream can be independently open and closed, gracefully or by error. If a critical stream for the application is closed, the application can generate respective error messages on the application layer to inform the other end or the higher layer and eventually indicate QUIC to reset the connection. QUIC, however, does not need to know which streams are critical, and does not provide an interface to exceptional handling of any stream. There are special streams in QUIC that are used for control on the QUIC connection, however, these streams are not exposed to the application. Mapping of application data to streams is application-specific and described for HTTP/s in QUIC-HTTP. 
In general data that can be processed independently, and therefore would suffer from head of line blocking if forced to be received in order, should be transmitted over different streams. If the application requires certain data to be received in order, the same stream should be used for that data. If there is a logical grouping of data chunks or messages, streams can be reused, or a new stream can be opened for each chunk/message. If one message is mapped to a single stream, resetting the stream to", "comments": "This document should mention unidirectional streams Some words about the meaning of an abort signal vs. a clean close would be useful, both ingress and egress. Maximum stream bytes = 2^62-1. Saying \"there are special streams for QUIC\" is inaccurate (it's just one stream now, conceptually), and I would not bother to describe the CRYPTO frames this way.\nAlso: stream aborts are always unidirectional. what can/should applications assume about stream IDs? are they accessible, and does the app have any control over them?", "new_text": "QUIC's stream multiplexing feature allows applications to run multiple streams over a single connection, without head-of-line blocking between streams, associated at a point in time with a single five-tuple. Stream data is carried within Frames, where one QUIC packet on the wire can carry one or multiple stream frames. Streams can be unidirectional or bidirectional, and a stream may be initiated either by client or server. Only the initiator of a unidirectional stream can send data on it. Due to offset encoding limitations, a stream can carry a maximum of 2^62-1 bytes in each direction. In the presently unlikely event that this limit is reached by an application, the stream can simply be closed and replaced with a new one. Streams can be independently opened and closed, gracefully or by error. An application can gracefully close the egress direction of a stream by instructing QUIC to send a FIN bit in a STREAM frame. It cannot gracefully close the ingress direction without a peer- generated FIN, much like in TCP. However, an endpoint can abruptly close either the ingress or egress direction; these actions are fully independent of each other. If a stream that is critical for an application is closed, the application can generate respective error messages on the application layer to inform the other end and/or the higher layer, and eventually indicate QUIC to reset the connection. QUIC, however, does not need to know which streams are critical, and does not provide an interface for exceptional handling of any stream. Mapping of application data to streams is application-specific and described for HTTP/3 in QUIC-HTTP. In general, data that can be processed independently, and therefore would suffer from head of line blocking if forced to be received in order, should be transmitted over separate streams. If the application requires certain data to be received in order, that data should be sent on the same stream. If there is a logical grouping of data chunks or messages, streams can be reused, or a new stream can be opened for each chunk/message. If one message is mapped to a single stream, resetting the stream to"} {"id": "q-en-ops-drafts-919b0ae5b322ec5d078983148d001e80cd48b0367abb1d26c0e4146816b519b4", "old_text": "currently open and currently used streams to the application to make the mapping of data to streams dependent on this information. Further, streams have a maximum number of bytes that can be sent on one stream. 
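A rough sketch of the "one message per stream" mapping discussed above, written against a hypothetical QUIC API (open_uni_stream(), write(), finish(), reset() and close() are placeholders, not calls from any real library):

    def send_message(conn, message: bytes, critical: bool = False) -> None:
        stream = conn.open_uni_stream()       # independent data goes on its own stream
        try:
            stream.write(message)
            stream.finish()                   # graceful close: FIN on the final STREAM frame
        except IOError:
            if critical:
                conn.close(error_code=0x01)   # application treats loss of this stream as fatal
            else:
                stream.reset(error_code=0x01) # abrupt abort affects only this stream/direction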
This number is high enough (2^64) that this will usually not be reached with current applications. Applications that send chunks of data over a very long period of time (such as days, months, or years), should rather utilize the 0-RTT session resumption ability provided by QUIC, than trying to maintain one connection open. 4.1.", "comments": "This document should mention unidirectional streams Some words about the meaning of an abort signal vs. a clean close would be useful, both ingress and egress. Maximum stream bytes = 2^62-1. Saying \"there are special streams for QUIC\" is inaccurate (it's just one stream now, conceptually), and I would not bother to describe the CRYPTO frames this way.\nAlso: stream aborts are always unidirectional. what can/should applications assume about stream IDs? are they accessible, and does the app have any control over them?", "new_text": "currently open and currently used streams to the application to make the mapping of data to streams dependent on this information. While a QUIC implementation must necessarily provide a way for an application to send data on separate streams, it does not necessarily expose stream identifiers to the application (see e.g. QUIC-HTTP section 6) either at the sender or receiver end, so applications should not assume access to these identifiers. 4.1."} {"id": "q-en-ops-drafts-f398ae7b13a301b291c1c9b33d5022d2d54ba2c364b5c066afec10b634d19410", "old_text": "frames. The Client Hello datagram exposes version number, source and destination connection IDs, and information in the TLS Client Hello message, including any TLS Server Name Indication (SNI) present, in the clear. The QUIC PADDING frame shown here may be present to ensure the Client Hello datagram has a minimum size of 1200 octets, to mitigate the possibility of handshake amplification. Note that the location of PADDING is implementation-dependent, and PADDING frames may not appear in the Initial packet in a coalesced packet. The Server Hello datagram exposes version number, source and destination connection IDs, and information in the TLS Server Hello message. The Initial Completion datagram does not expose any additional information; however, recognizing it can be used to determine that a", "comments": "However, we also use the terms Server/Client Hello datagram to cover all QUIC Packets of the first two datagrams sent. This term is not used in the transport draft (anymore) and I find it confusing given the TLS terminology. Should we maybe use a different term?", "new_text": "frames. The Client Hello datagram exposes version number, source and destination connection IDs in the clear. Information in the TLS Client Hello frame, including any TLS Server Name Indication (SNI) present, is obfuscated using the Initial secret. The QUIC PADDING frame shown here may be present to ensure the Client Hello datagram has a minimum size of 1200 octets, to mitigate the possibility of handshake amplification. Note that the location of PADDING is implementation-dependent, and PADDING frames may not appear in the Initial packet in a coalesced packet. The Server Hello datagram also exposes version number, source and destination connection IDs and information in the TLS Server Hello message which is obfuscated using the Initial secret. 
The Initial Completion datagram does not expose any additional information; however, recognizing it can be used to determine that a"} {"id": "q-en-ops-drafts-f8431bf39da861fd9b5419b1f2a93631d7384c52ad51faaf2d8514ce3f3c0b53", "old_text": "The best way to obscure an encoding is to appear random to observers, which is most rigorously achieved with encryption. Even when encrypted, a scheme could embed the unencrypted length of the Connection ID in the Connection ID itself, instead of remembering it, e.g. by using the first few bits to indicate a certain size of a well-known set of possible sizes with multiple values that indicate the same size but are selected randomly. QUIC_LB further specified possible algorithms to generate Connection IDs at load balancers.", "comments": "addresses\nIn \"Connection ID and Rebinding\": \"Even when encrypted, a scheme could embed the unencrypted length of the Connection ID in the Connection ID itself, instead of remembering it, e.g. by using the first few bits to indicate a certain size of a well-known set of possible sizes with multiple values that indicate the same size but are selected randomly.\" I almost went and filed an issue against quic-lb to do this instead of just encoding a plaintext length like it does currently. But is this a good recommended practice? ISTM that long headers would provide a bit of a Rosetta Stone for this mapping, making it not particularly effective for the complexity.\nThe idea was not to conceal the length but to avoid likability by identifying linked connection based on the existence of a certain length field. This might look slightly more random but I guess you also have to change your mapping more time to time. So not sure. I'm fine to remove that half sentence and maybe add a sentence about likability instead...?\nYes, I would prefer deleting the quoted text, as I think it's a solution that adds complexity for little apparent benefit.\nOkay, will do that change after your PR has landed.", "new_text": "The best way to obscure an encoding is to appear random to observers, which is most rigorously achieved with encryption. Even when encrypted, a scheme could embed the unencrypted length of the Connection ID in the Connection ID itself, instead of remembering it. QUIC_LB further specified possible algorithms to generate Connection IDs at load balancers."} {"id": "q-en-ops-drafts-750bb2b477faffd7190e56ba9d5b4b398e8681416f4ccc249e0c28f2fc7e529a", "old_text": "3.2. Connection establishment requires cleartext packets and is using a TLS handshake on stream 0. Therefore it is detectable using heuristics similar to those used to detect TLS over TCP. 0-RTT connection may additional also send data packets, right after the Client Initial with the TLS client hello. These data may be reordered in the network, therefore it may be possible that 0-RTT Protected data packets are seen before the Client Initial packet. 3.3. The QUIC Connection ID (see rebinding) is designed to allow an on- path device such as a load-balancer to associate two flows as identified by five-tuple when the address and port of one of the", "comments": "Re SNI and ALPN: we can certainly point at the work in progress in TLS that will encrypt this information, but IMO the document should describe the current state of the world. 
I've just added something pointing to the SNI encryption document; while it seems that the tunneling technique described there could pretty clearly be applied to ALPN fronting as well, is there a reference for encrypting ALPN we can use here that doesn't leave details as an exercise for the reader?\nI would actually prefer to describe the wire image of TLS and associated protocols in a separate document and just point to it because that's actually not only applicable for quic\nI don't have a reference for encrypting ALPN. We don't have one. Maybe I should write one... On the plus side, it's something that benefits from the same techniques as SNI (and less likely to cause deployment issues, though that is only because we haven't ossified around it as much). As for the general point about now vs. later. One of the things we've tried to do is be very clear about what the invariants are, or what promises we are making. It's not the same thing as true ossification defense, but if we make it clear what we are committing to, maybe we won't have people relying on stuff we might want to change.", "new_text": "3.2. Connection establishment uses Initial, Handshake, and Retry packets containing a TLS handshake on Stream 0. Connection establishment can therefore be detected using heuristics similar to those used to detect TLS over TCP. A client using 0-RTT connection may also send data packets in 0-RTT Protected packets directly after the Initial packet containing the TLS Client Hello. Since these packets may be reordered in the network, note that 0-RTT Protected data packets may be seen before the Initial packet. Note that only clients send Initial packets, so the sides of a connection can be distinguished by QUIC packet type in the handshake. 3.3. The cleartext TLS handshake may contain Server Name Indication (SNI) RFC6066, by which the client reveals the name of the server it intends to connect to, in order to allow the server to present a certificate based on that name. It may also contain information from Application-Layer Protocol Negotiation (ALPN) RFC7301, by which the client exposes the names of application-layer protocols it supports; an observer can deduce that one of those protocols will be used if the connection continues. Work is currently underway in the TLS working group to encrypt the SNI in TLS 1.3 TLS-ENCRYPT-SNI, reducing the information available in the SNI to the name of a fronting service, which can generally be identified by the IP address of the server anyway. If used with QUIC, this would make SNI-based application identification impossible through passive measurement. 3.4. The QUIC Connection ID (see rebinding) is designed to allow an on- path device such as a load-balancer to associate two flows as identified by five-tuple when the address and port of one of the"} {"id": "q-en-ops-drafts-750bb2b477faffd7190e56ba9d5b4b398e8681416f4ccc249e0c28f2fc7e529a", "old_text": "should be treated as opaque; see sec-loadbalancing for caveats regarding connection ID selection at servers. 3.4. The QUIC does not expose the end of a connection; the only indication to on-path devices that a flow has ended is that packets are no", "comments": "Re SNI and ALPN: we can certainly point at the work in progress in TLS that will encrypt this information, but IMO the document should describe the current state of the world. 
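For the on-path observations discussed above, the version-independent part of a long header can be read as in the following Python sketch (layout as in the final QUIC invariants, RFC 8999: first byte, 4-byte version, then length-prefixed destination and source connection IDs); error handling is omitted:

    import struct

    def parse_long_header_invariants(pkt: bytes):
        # Fields visible to an on-path device such as a load balancer.
        version = struct.unpack("!I", pkt[1:5])[0]
        dcid_len = pkt[5]
        dcid = pkt[6:6 + dcid_len]
        off = 6 + dcid_len
        scid_len = pkt[off]
        scid = pkt[off + 1:off + 1 + scid_len]
        return version, dcid, scid   # version 0x00000000 marks Version Negotiation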
I've just added something pointing to the SNI encryption document; while it seems that the tunneling technique described there could pretty clearly be applied to ALPN fronting as well, is there a reference for encrypting ALPN we can use here that doesn't leave details as an exercise for the reader?\nI would actually prefer to describe the wire image of TLS and associated protocols in a separate document and just point to it because that's actually not only applicable for quic\nI don't have a reference for encrypting ALPN. We don't have one. Maybe I should write one... On the plus side, it's something that benefits from the same techniques as SNI (and less likely to cause deployment issues, though that is only because we haven't ossified around it as much). As for the general point about now vs. later. One of the things we've tried to do is be very clear about what the invariants are, or what promises we are making. It's not the same thing as true ossification defense, but if we make it clear what we are committing to, maybe we won't have people relying on stuff we might want to change.", "new_text": "should be treated as opaque; see sec-loadbalancing for caveats regarding connection ID selection at servers. 3.5. The QUIC does not expose the end of a connection; the only indication to on-path devices that a flow has ended is that packets are no"} {"id": "q-en-ops-drafts-750bb2b477faffd7190e56ba9d5b4b398e8681416f4ccc249e0c28f2fc7e529a", "old_text": "Changes to this behavior are currently under discussion: see https://github.com/quicwg/base-drafts/issues/602. 3.5. Round-trip time of QUIC flows can be inferred by observation once per flow, during the handshake, as in passive TCP measurement; this", "comments": "Re SNI and ALPN: we can certainly point at the work in progress in TLS that will encrypt this information, but IMO the document should describe the current state of the world. I've just added something pointing to the SNI encryption document; while it seems that the tunneling technique described there could pretty clearly be applied to ALPN fronting as well, is there a reference for encrypting ALPN we can use here that doesn't leave details as an exercise for the reader?\nI would actually prefer to describe the wire image of TLS and associated protocols in a separate document and just point to it because that's actually not only applicable for quic\nI don't have a reference for encrypting ALPN. We don't have one. Maybe I should write one... On the plus side, it's something that benefits from the same techniques as SNI (and less likely to cause deployment issues, though that is only because we haven't ossified around it as much). As for the general point about now vs. later. One of the things we've tried to do is be very clear about what the invariants are, or what promises we are making. It's not the same thing as true ossification defense, but if we make it clear what we are committing to, maybe we won't have people relying on stuff we might want to change.", "new_text": "Changes to this behavior are currently under discussion: see https://github.com/quicwg/base-drafts/issues/602. 3.6. Round-trip time of QUIC flows can be inferred by observation once per flow, during the handshake, as in passive TCP measurement; this"} {"id": "q-en-ops-drafts-750bb2b477faffd7190e56ba9d5b4b398e8681416f4ccc249e0c28f2fc7e529a", "old_text": "Changes to this behavior are currently under discussion: see https://github.com/quicwg/base-drafts/issues/631. 3.6. 
All QUIC packets carry packet numbers in cleartext, and while the protocol allows packet numbers to be skipped, skipping is not", "comments": "Re SNI and ALPN: we can certainly point at the work in progress in TLS that will encrypt this information, but IMO the document should describe the current state of the world. I've just added something pointing to the SNI encryption document; while it seems that the tunneling technique described there could pretty clearly be applied to ALPN fronting as well, is there a reference for encrypting ALPN we can use here that doesn't leave details as an exercise for the reader?\nI would actually prefer to describe the wire image of TLS and associated protocols in a separate document and just point to it because that's actually not only applicable for quic\nI don't have a reference for encrypting ALPN. We don't have one. Maybe I should write one... On the plus side, it's something that benefits from the same techniques as SNI (and less likely to cause deployment issues, though that is only because we haven't ossified around it as much). As for the general point about now vs. later. One of the things we've tried to do is be very clear about what the invariants are, or what promises we are making. It's not the same thing as true ossification defense, but if we make it clear what we are committing to, maybe we won't have people relying on stuff we might want to change.", "new_text": "Changes to this behavior are currently under discussion: see https://github.com/quicwg/base-drafts/issues/631. 3.7. All QUIC packets carry packet numbers in cleartext, and while the protocol allows packet numbers to be skipped, skipping is not"} {"id": "q-en-ops-drafts-750bb2b477faffd7190e56ba9d5b4b398e8681416f4ccc249e0c28f2fc7e529a", "old_text": "observation point and the receiver (\"downstream loss\") cannot be reliably estimated. 3.7. QUIC explicitly exposes which side of a connection is a client and which side is a server during the handshake. In addition, the", "comments": "Re SNI and ALPN: we can certainly point at the work in progress in TLS that will encrypt this information, but IMO the document should describe the current state of the world. I've just added something pointing to the SNI encryption document; while it seems that the tunneling technique described there could pretty clearly be applied to ALPN fronting as well, is there a reference for encrypting ALPN we can use here that doesn't leave details as an exercise for the reader?\nI would actually prefer to describe the wire image of TLS and associated protocols in a separate document and just point to it because that's actually not only applicable for quic\nI don't have a reference for encrypting ALPN. We don't have one. Maybe I should write one... On the plus side, it's something that benefits from the same techniques as SNI (and less likely to cause deployment issues, though that is only because we haven't ossified around it as much). As for the general point about now vs. later. One of the things we've tried to do is be very clear about what the invariants are, or what promises we are making. It's not the same thing as true ossification defense, but if we make it clear what we are committing to, maybe we won't have people relying on stuff we might want to change.", "new_text": "observation point and the receiver (\"downstream loss\") cannot be reliably estimated. 3.8. QUIC explicitly exposes which side of a connection is a client and which side is a server during the handshake. 
In addition, the"} {"id": "q-en-ops-drafts-9a1c8d1533468e0cd3a5017c25725a5cef562f06caed30937876a7c4e4eb21a6", "old_text": "and a description of how common network management practices will be impacted by QUIC. Since QUIC's wire image WIRE-IMAGE is integrity protected and not modifiable on path, in-network operations are not possible without terminating the QUIC connection, for instance using a back-to-back proxy. Proxy operations are not in scope for this document. A proxy can either explicit identify itself as providing a proxy service, or may share the TLS credentials to authenticate as the server and (in some cases) client acting as a front-facing instance for the endpoint itself. Network management is not a one-size-fits-all endeavour: practices considered necessary or even mandatory within enterprise networks", "comments": "The point of is a little odd, because it seems to imply that ALL operations require access to the information that is integrity- or confidentiality-protected. That's clearly not the case, I think, as it is merely pointing out that if you want to modify data, then you have to be an endpoint. I think that this might be better phrased as: Note: This drops the last sentence of the original paragraph. If proxying is out of scope, then it doesn't help to talk about it so much. The point about cooperation should do.\nCreated a PR based on your proposed text but added some of the information about proxies to make it more clear what kind of proxies we are talking about.", "new_text": "and a description of how common network management practices will be impacted by QUIC. Since QUIC's wire image WIRE-IMAGE is integrity-protected, in-network operations that depend on modification of data are not possible without the cooperation of an endpoint. Network operation practices that alter data are only possible if performed as a QUIC endpoint; this might be possible with the introduction of a proxy which authenticates as an endpoint. Proxy operations are not in scope for this document. Network management is not a one-size-fits-all endeavour: practices considered necessary or even mandatory within enterprise networks"} {"id": "q-en-ops-drafts-f6d0a2a2b02a2e0190f17035edfb88373a9935d6bd4423f926498485da135cf5", "old_text": "fall back to some other transport protocol. In the case of HTTP, this fallback is TLS 1.3 over TCP. An application that implements fallback needs to consider the security consequences. A fallback to TCP and TLS 1.3 exposes control information to modification and manipulation in the network. Further", "comments": "This is a proposal by Gorry to address issue\nThis works for me.\nShould TAPS be cited as an informational ref to an API that is being discussed by the IETF, and which could apply to QUIC - at least to define the primitives needed in an API, and has quite some insight into protocol racing/fallback as described in applicability?\nI'd be happy to do so ;-) however we avoided in the end to talk to much about specific of the interfaces or needed nobs. So not sure it fits or where to fit it.... where would you want to say something?\nI think the WG should consider writing a short section on this. It's important enough that we have a WG developing relevant specs for how to build APIs and the need for this type of fallback/racing.\nI'm not sure the QUIC WG has enough experience to say anything on the matter. The fallback discussion is about how (Application Mapping + QUIC) could fallback to (alternative application mapping + TCP). 
For HTTP/3 this isn't a straight fallback - if QUIC fails then a different HTTP version over TCP is required. This has consequences for applications that use HTTP/3 as a substrate (MASQUE, WebTransport) which rely on features only it provides like unreliable DATAGRAM. I think the Applicability spec might need to say more here. What I'm trying to highlight is that there isn't a straight shootout between TCP and QUIC. If TAPS frames things like that then I don't know that it helps Applicability.\ntaps allows you to use the same interface for multiple transports and also provides you a way to state requirements (hard or soft) to restricts the selection of possible transports. In that sense for some application TCP and QUIC might be not make a real difference as long as all features requested are support, e.g reliability and in-order delivery. Still not sure what/if we want to say something in applicability statement...?\nNAME we were trying to not focus on http in this document. But I do agree that the fact that there is a different version needed for http to fallback could still be interesting. How about adding as sentence like the add the end of the first paragraph in section 2: OLD In the case of HTTP, this fallback is TLS 1.3 over TCP. NEW In the case of HTTP, this fallback is TLS 1.3 over TCP. Note that for HTTP a fallback to TCP also requires the use of a different version of HTTP, as HTTP/3 has been optimised for use with QUIC, e.g., by removing multiplexing from the HTTP layer and used the provided mechanism within QUIC. We could even add something more like: \"Other applications that are optimised for QUIC might need similar adaption in order to be able to function on TCP.\" However, not sure that adds much... What do you think?\nI'm still not sure that helps. And I'm trying not to derail Gorry's OP. The main grounding I have in my understanding is HTTP. But I appreciate the Applicability document is general purpose, so apologies. Many HTTP applications operate at the semantic level. HTTP semantics are version-independent and get profiled into versions that have a wire-syntax, which depends on different properties of a transport. This could be an optimised use of QUIC features, or it could be an outright requirement for those features. There isn't an interoperable notion of \"I want to do HTTP/1.1 over a reliable in-order transport, TAPS selected QUIC for me, great I'll use that\" nor \"I want to do HTTP/3 over a reliable, multiplexed, in-order transport, TAPS selected SCTP for me\". The transport fallback behaviour for HTTP semantics is quite straightforward. Pick a different HTTP version. This could lend itself to a race like scenario. However, there are applications that use HTTP as a substrate and things like MASQUE and WebTransport place requirements on the use of features only available in HTTP/3 - I doubt anyone will try to backport features of QUIC to TCP in order to satisfy the requirements of layers above.\nI was thinking of something like: /The IETF TAPS specifications [URL] describe a system in which multiple protocols can be provided below a common API. and describe some of the implications of fall back, which specifically precludes fallback to insecure protocols or to weaker versions of secure protocols. / Perhaps inserted after this: /These applications must operate, perhaps with impaired functionality, in the absence of features provided by QUIC not present in the fallback protocol./ ... 
and before mentioning fallback to TLS over TCP.\nNAME you are right that QUIC provides features that TCP doesn't provide and if you use those features you cannot just straight-away replace one with the other. However, a simple one-to-one replacement not implicated when talking about falling back. Rather than trying to back-port features to TCP, what you need to do is implement that feature in your application instead (as H2 does for multiplexing). But this is also true if you try to use TCP and fall back to UDP but need reliability; then you have to implement it in the application layer. I was trying to be brief in my proposed text and maybe the word \"optimise\" is not the right one. We can of course say more. Do you want to give it a try and propose some text if you think it would be helpful to say more?\nThanks. I'll have a think on some text. The important thing is that H2 wasn't designed as a fallback for H3, it is its own thing. There are few features that an application desires that it couldn't also achieve by just using HTTP/1.1 and multiple connections for multiplexing.\nMay I ask what these features are? E.g., the ability to send unreliable datagrams should not matter in the best effort Internet (reliable transfer, when not needed, is just slower). In RFC 8923, we made an effort to identify exactly what MUST be exposed to an application because there's no such fall back. This covers TCP, MPTCP, UDP, UDP-Lite, SCTP and LEDBAT; alas, it excludes QUIC, since this work pre-dates its finalization (by far, despite the date: it spent a long time in the RFC Editor queue). So now I'm curious what QUIC adds to this list.\nmainly multiplexing I would say but that probably covered by SCTP\nThe main feature that QUIC provides to the HTTP substrate is that it is a secure transport that can work on the Internet and is easily integrated with WebPKI. These are meta-features in a way. Having parity of the deployment and security models allows more straightforward rationalisation when designing applications and e.g. Web APIs on top of this substrate.\nThanks! About muxing: is a certain prioritization behavior guaranteed (not just a wish) when muxing with QUIC? SCTP could do that but we don't currently give such guarantees in the TAPS interface. About security: right... that's what I expected. TAPS isn't going to fall back to a less secure version of what you wish for... so I guess \"mostly encrypted header\" isn't a part of any API but as you say it's a meta-feature - it's implicit in the choice of QUIC, and when that's desired, falling back wouldn't happen, a TAPS system should always only give you QUIC. I think what's missing is a QUIC API that makes these wishes explicit...\nPersonally I'd say, for an HTTP application, that QUIC packet header protection is less of a concern. More that QUIC provides transport security on-par with TCP+TLS. Often an application can use the same TLS and crypto libraries for both. Fallback from QUIC to TCP can be a downgrade. But an application that does this can (and should) be aware of the security properties of the transport it is using. With HTTP, imagine a client that connects to URL and receives . This advertises the availability of QUIC and HTTP/3. If that transport connection fails for some reason, then there is not a viable fallback to TCP+TLS. 
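The fallback pattern being discussed might be sketched as follows; connect_quic() and connect_tcp_tls() are hypothetical helpers and the timeout is arbitrary. Real clients often race the attempts rather than trying them strictly in sequence, and (as noted) never fall back to an insecure protocol:

    import asyncio

    async def connect_with_fallback(host: str, port: int = 443):
        # Prefer QUIC (e.g. HTTP/3); on failure or timeout, fall back to a
        # different application mapping over TLS 1.3 on TCP (e.g. HTTP/2).
        try:
            return await asyncio.wait_for(connect_quic(host, port), timeout=0.25)
        except (asyncio.TimeoutError, OSError):
            return await connect_tcp_tls(host, port)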
This is an unlikely scenario but the specs allow for it to happen.\nI would also have thought that encrypting data is more important - but surely having fall-back \"under the hood\" and just being aware of what you got (this can be queried in TAPS) is nicer than having to take care of it in the application. Fall-backs can get complex: e.g., when multi-homing, considering IPv6 ... a TAPS system can have a rich set to choose from and implement it all efficiently while shielding applications from this trouble - the application is still in control of things and can express limitations, as well as query what it got in case it did give the system some freedom. This HTTP case of preventing a fall-back... well, it is what it is :) No fall-back in this case, clearly.\nI remain to be convinced that TAPS API will address the way that applications (I am familiar with) intend to use and interact with QUIC.\nI think we're drifting towards a discussion of should a mapping document be written for TAPS to QUIC... which is a different topic. My observation was only that TAPS architecture is something the IETF is working upon, and it has documents that describe how to make an API that with functions and racing/fallback in a very different way to sockets. The guidance about what is an acceptable \"alternative\" connection appear common, at least enough to say there is a similarity and cite it.\nThe suggestion for a short informational reference seems good to me.\nThere is already PR\nWe merged PR . Is there anything we want to add or can this issue be closed?\nThe merged PR addresses all of my concerns. I'll let NAME be the judge whether his original issue has been addressed.\nThat looked good to me.\nThanks all!\no/ lgtm", "new_text": "fall back to some other transport protocol. In the case of HTTP, this fallback is TLS 1.3 over TCP. The IETF TAPS specifications I-D.ietf-taps-arch describe a system with a common API for multiple protocols and some of the implications of fallback between these different protocols, specifically precluding fallback to insecure protocols or to weaker versions of secure protocols. An application that implements fallback needs to consider the security consequences. A fallback to TCP and TLS 1.3 exposes control information to modification and manipulation in the network. Further"} {"id": "q-en-ops-drafts-43b521c52ea7bc1b2403aed7dc6d3abd2dffbaa1c86010ee1af056cf517a181b", "old_text": "The CONNECTION_CLOSE frame provides an optional reason field, that can be used to append human-readable information to an error code. Note that QUIC RESET_STREAM and STOP_SENDING frames also include an error code, but no reason string. Application error codes are expected to be defined from a single space that applies to all three frame types. Alternatively, a QUIC connection can be silently closed by each endpoint separately after an idle timeout. If enabled as indicated", "comments": "Thanks to Mike for noticing this.\nThis is true, but there is nothing inherent to the design that prevents RESETSTREAM (and STOPSENDING) from using a different code space from CONNECTION_CLOSE. Though we have the policy in HTTP/3 that any stream error can become a connection error, the same isn't necessarily true of other protocols. Maybe we cab just avoid mentioning this possibility..,\nI don't think I have a strong opinion on removing or keeping this sentence. Would it be any better if we say: \"Application error codes are recommended to be defined from a single space that applies to all three frame types.\" ? 
Or is there any other guidance we can or want to give to application designer about the use of error codes?", "new_text": "The CONNECTION_CLOSE frame provides an optional reason field, that can be used to append human-readable information to an error code. RESET_STREAM and STOP_SENDING frames also include an error code, but no reason string. Alternatively, a QUIC connection can be silently closed by each endpoint separately after an idle timeout. If enabled as indicated"} {"id": "q-en-ops-drafts-1850e9eb0ef232a0d9872d4655e606af7e57319502b9d6d53de285f0d670c106", "old_text": "and identifies the version used for that packet. During Version Negotiation (see version and Section 17.2.1 of QUIC-TRANSPORT), the version number field has a special value (0x00000000) that identifies the packet as a Version Negotiation packet. Upon time of publishing of this document, QUIC versions that start with 0xff implement IETF drafts. QUIC version 1 uses version 0x00000001. Operators should expect to observe packets with other version numbers as a result of various Internet experiments and future standards. source and destination connection ID: short and long packet headers carry a destination connection ID, a variable-length field", "comments": "proposed by NAME\nPlan is to update the PR to point to the IANA registry instead.\nAdded a ref to the iana registration sec in the transport draft. Should be ready to go now!", "new_text": "and identifies the version used for that packet. During Version Negotiation (see version and Section 17.2.1 of QUIC-TRANSPORT), the version number field has a special value (0x00000000) that identifies the packet as a Version Negotiation packet. QUIC version 1 uses version 0x00000001. Operators should expect to observe packets with other version numbers as a result of various Internet experiments, future standards, and greasing. All deployed versions are maintained in an IANA registry (see Section 22.2 of QUIC-TRANSPORT). source and destination connection ID: short and long packet headers carry a destination connection ID, a variable-length field"} {"id": "q-en-ops-drafts-4bf3739be0a0e8d9a8edc0d94a67ebdd366c49744a5e3d7df506b80eccb7e123", "old_text": "4.4. Flow control provides a means of managing access to the limited buffers endpoints have for incoming data. This mechanism limits the amount of data that can be in buffers in endpoints or in transit on the network. However, there are several ways in which limits can", "comments": "This mainly wiggles the existing text around and tries to fully qualify transport vs application to help explain the ways an implementation might do things, which is independent of how an application protocol might be designed (for example H3 doesn't require incremental processing but most people come to a realizatopm that it is the best way to do things).\nThe new text is good and clear, but it doesn't really explain that the when credit is released at the receiver, could we say a little more: it takes 1/2 RTT for cerdit to reach the sender, that causes new data to be released, for a large RTT, this means that credit has to be sufficient to avoid blocking or to be sent promptly, otherwise transfers will stall for periods of an RTT. (I know TCP does this stuff for the app and grows the window, but QUIC I suspect needs something to happen to release the credit). I know people closer to the Apps can write this sentence better than me; from my perspective I just see the effects when an app blocks or release credit in big chunks... 
over a large path RTT and the network performance suffers.\nThanks Gorry. I think I agree. How about I tweak \"Timely updates can improve performance\" paragraph to say \"Timing of updates can affect performance\". I can add a mention that the extension of credits on the data recever side is enacted by emitting MAXDATA/MAXSTREAM_DATA frames, which take at least a 1/2 RTT to reach the data sender side. I think the useful additional applicability statement to be made based on the above is that the data consumer read rate may be disjoint from the delivery rate and responsiveness of signalling channel back. For a highly asymmetric link, the download rate can be application-limited well below the theoretical bandwidth simply because an application is spending too much share of the time reading packets and not giving enough time to emit QUIC packets. Does that sound ok, or am I missing your ask?\nThis is definitely heading in the direction I was hoping, it helps avoid people developing on a LAN and then discovering the effects on other paths later. Please add something.\nI've reworked the paragraph with my attempt to address NAME comment. I've introduced the terms uplink and downlink, which I'm not totally happy with. If someone has a better suggestion I'm all ears.\nIssue: Sect 4.3. Flow Control - we should have a subsection also encouraging timely credits This ID discusses flow control deadlock, whichI think is important. I suggest we should also note that timely release of flow control credit is also important (QUIC transport has some text to point to). This need is evident when the path has a large RTT, and there can be a large BDP, where the whole of operation of QUIC can be dominated by the way in which the application manages the flow credit. This might not be obvious, because the equivalent rwnd update in TCP is handled by the transport rather than the application.\nThe point is the audience for this draft is applications that use a quic library. I would assume that the actual credit handling is performed in the transport/in quic directly, usually without an interface for the application to directly impact it. Do you propose the application should have more direct impact on flow credit handling? I guess we can add a pointer to the right section in the transport draft but not sure we should/can say more here...?\nI agree this seems mostly like a QUIC transport/library issue and I wouldn't expect the application to have much control here. I can imagine applications setting a max value?\nI generally agree with Mirja and Ian. However, on a closer inspection of this section I think it would benefit from a crisper definition of what it means to read data vs. process data. Its a reasonable application mapping implementation design to read data out of the transport in order to release flow control but still buffer it (or just dropping it on the floor) in application space before processing that as application data.\nNAME can you propose text?\nI'm content that flow control is something the libraries need to get correct - the TCP people spent energy to get this automated, although QUIC's multi-streaming makes the trickier. Lucas text suggestion would improve things by making it clearer how the API is intended to be used.\nNAME ?\nI will add some text (just need to find the time).", "new_text": "4.4. QUIC flow control provides a means of managing access to the limited buffers endpoints have for incoming data. 
This mechanism limits the amount of data that can be in buffers in endpoints or in transit on the network. However, there are several ways in which limits can"} {"id": "q-en-ops-drafts-4bf3739be0a0e8d9a8edc0d94a67ebdd366c49744a5e3d7df506b80eccb7e123", "old_text": "Understanding what causes deadlocking might help implementations avoid deadlocks. Large messages can produce deadlocking if the recipient does not process the message incrementally. If the message is larger than the flow control credit available and the recipient does not release additional flow control credit until the entire message is received and delivered, a deadlock can occur. This is possible even where stream flow control limits are not reached because connection flow control limits can be consumed by other streams. A common flow control implementation technique is for a receiver to extend credit to the sender as a the data consumer reads data. In this setting, a length-prefixed message format makes it easier for the data consumer to leave data unread in the receiver's buffers and thereby withhold flow control credit. If flow control limits prevent the remainder of a message from being sent, a deadlock will result. A length prefix might also enable the detection of this sort of deadlock. Where protocols have messages that might be processed as a single unit, reserving flow control credit for the entire message atomically makes this style of deadlock less likely. A data consumer can read all data as it becomes available to cause the receiver to extend flow control credit to the sender and reduce the chances of a deadlock. However, releasing flow control credit might mean that the data consumer might need other means for holding a peer accountable for the state it keeps for partially processed messages. Deadlocking can also occur if data on different streams is interdependent. Suppose that data on one stream arrives before the", "comments": "This mainly wiggles the existing text around and tries to fully qualify transport vs application to help explain the ways an implementation might do things, which is independent of how an application protocol might be designed (for example H3 doesn't require incremental processing but most people come to a realizatopm that it is the best way to do things).\nThe new text is good and clear, but it doesn't really explain that the when credit is released at the receiver, could we say a little more: it takes 1/2 RTT for cerdit to reach the sender, that causes new data to be released, for a large RTT, this means that credit has to be sufficient to avoid blocking or to be sent promptly, otherwise transfers will stall for periods of an RTT. (I know TCP does this stuff for the app and grows the window, but QUIC I suspect needs something to happen to release the credit). I know people closer to the Apps can write this sentence better than me; from my perspective I just see the effects when an app blocks or release credit in big chunks... over a large path RTT and the network performance suffers.\nThanks Gorry. I think I agree. How about I tweak \"Timely updates can improve performance\" paragraph to say \"Timing of updates can affect performance\". I can add a mention that the extension of credits on the data recever side is enacted by emitting MAXDATA/MAXSTREAM_DATA frames, which take at least a 1/2 RTT to reach the data sender side. 
I think the useful additional applicability statement to be made based on the above is that the data consumer read rate may be disjoint from the delivery rate and responsiveness of signalling channel back. For a highly asymmetric link, the download rate can be application-limited well below the theoretical bandwidth simply because an application is spending too much share of the time reading packets and not giving enough time to emit QUIC packets. Does that sound ok, or am I missing your ask?\nThis is definitely heading in the direction I was hoping, it helps avoid people developing on a LAN and then discovering the effects on other paths later. Please add something.\nI've reworked the paragraph with my attempt to address NAME comment. I've introduced the terms uplink and downlink, which I'm not totally happy with. If someone has a better suggestion I'm all ears.\nIssue: Sect 4.3. Flow Control - we should have a subsection also encouraging timely credits This ID discusses flow control deadlock, whichI think is important. I suggest we should also note that timely release of flow control credit is also important (QUIC transport has some text to point to). This need is evident when the path has a large RTT, and there can be a large BDP, where the whole of operation of QUIC can be dominated by the way in which the application manages the flow credit. This might not be obvious, because the equivalent rwnd update in TCP is handled by the transport rather than the application.\nThe point is the audience for this draft is applications that use a quic library. I would assume that the actual credit handling is performed in the transport/in quic directly, usually without an interface for the application to directly impact it. Do you propose the application should have more direct impact on flow credit handling? I guess we can add a pointer to the right section in the transport draft but not sure we should/can say more here...?\nI agree this seems mostly like a QUIC transport/library issue and I wouldn't expect the application to have much control here. I can imagine applications setting a max value?\nI generally agree with Mirja and Ian. However, on a closer inspection of this section I think it would benefit from a crisper definition of what it means to read data vs. process data. Its a reasonable application mapping implementation design to read data out of the transport in order to release flow control but still buffer it (or just dropping it on the floor) in application space before processing that as application data.\nNAME can you propose text?\nI'm content that flow control is something the libraries need to get correct - the TCP people spent energy to get this automated, although QUIC's multi-streaming makes the trickier. Lucas text suggestion would improve things by making it clearer how the API is intended to be used.\nNAME ?\nI will add some text (just need to find the time).", "new_text": "Understanding what causes deadlocking might help implementations avoid deadlocks. The size and rate of transport flow control credit updates can affect performance. Applications that use QUIC often have a data consumer that reads data from transport buffers. Some implementations might have independent transport-layer and application-layer receive buffers. Consuming data does not always imply it is immediately processed. However, a common flow control implementation technique is to extend credit to the sender, by emitting MAX_DATA and/or MAX_STREAM_DATA frames, as data is consumed. 
Delivery of these frames is affected by the latency of the back channel from the receiver to the data sender. If credit is not extended in a timely manner, the sending application can be blocked, effectively throttling the sender. Large application messages can produce deadlocking if the recipient does not read data from the transport incrementally. If the message is larger than the flow control credit available and the recipient does not release additional flow control credit until the entire message is received and delivered, a deadlock can occur. This is possible even where stream flow control limits are not reached because connection flow control limits can be consumed by other streams. A length-prefixed message format makes it easier for a data consumer to leave data unread in the transport buffer and thereby withhold flow control credit. If flow control limits prevent the remainder of a message from being sent, a deadlock will result. A length prefix might also enable the detection of this sort of deadlock. Where application protocols have messages that might be processed as a single unit, reserving flow control credit for the entire message atomically makes this style of deadlock less likely. A data consumer can eagerly read all data as it becomes available, in order to make the receiver extend flow control credit and reduce the chances of a deadlock. However, such a data consumer might need other means for holding a peer accountable for the additional state it keeps for partially processed messages. Deadlocking can also occur if data on different streams is interdependent. Suppose that data on one stream arrives before the"} {"id": "q-en-ops-drafts-589a0957ccb1732fea34bd1d8f57b83c0d536ebda480ffc01ac08e34daa19aea", "old_text": "A single stream provides ordering. If the application requires certain data to be received in order, that data should be sent on the same stream. Multiple streams provide concurrency. Data that can be processed independently, and therefore would suffer from head of line", "comments": "Based on the discussion in webtrans at IETF-110, I guess we could be more explicit that QUIC streams do not guarantee to be received in any particular order.", "new_text": "A single stream provides ordering. If the application requires certain data to be received in order, that data should be sent on the same stream. There is no guarantee of transmission, reception, or delivery order across streams. Multiple streams provide concurrency. Data that can be processed independently, and therefore would suffer from head of line"} {"id": "q-en-ops-drafts-3a81ccd37a0c0c435da595eb5aae493f73e55ebcd1def89d284e60cde2e196d8", "old_text": "use of DiffServ Code Points (DSCPs) RFC2475 as well as Equal-Cost Multi-Path (ECMP) routing, is applied on a per flow-basis (and not per-packet) and as such that all packets belonging to the same QUIC connection get uniform treatment. Using ECMP to distribute packets from a single flow across multiple network paths or any other non- uniform treatment of packets belong to the same connection could result in variations in order, delivery rate, and drop rate. As feedback about loss or delay of each packet is used as input to the congestion controller, these variations could adversely affect performance. Depending on the loss recovery mechanism implemented, QUIC may be more tolerant of packet re-ordering than traditional TCP traffic (see packetnumber). 
However, it cannot be known by the network which exact recovery mechanism is used and therefore reordering tolerance should be considered as unknown. 4.9.", "comments": "It could be worth noting two general further things about DSCPs : (a) \"When multiplexing multiple flows over a QUIC connection, the DSCP value selected should be the one associated with the highest priority requested for all multiplexed flows.\" (b) \"If a packet enters a network segment that does not support the DSCP value, this could result in the connection not receiving the network treatment it expects. The DSCP value in this packet could also be remarked as the packet travels along the network path.\"\nA sample PR: \"More aspects of setting DSCPs\"", "new_text": "use of DiffServ Code Points (DSCPs) RFC2475 as well as Equal-Cost Multi-Path (ECMP) routing, is applied on a per flow-basis (and not per-packet) and as such that all packets belonging to the same QUIC connection get uniform treatment. Using ECMP to distribute packets from a single flow across multiple network paths or any other non-uniform treatment of packets belong to the same connection could result in variations in order, delivery rate, and drop rate. As feedback about loss or delay of each packet is used as input to the congestion controller, these variations could adversely affect performance. Depending on the loss recovery mechanism implemented, QUIC may be more tolerant of packet re- ordering than traditional TCP traffic (see packetnumber). However, the recovery mechanism used by a flow cannot be known by the network and therefore reordering tolerance should be considered as unknown. 4.9."} {"id": "q-en-ops-drafts-2e1f9de0bdd748a893daeea94d005a9b12e2bb1f1e569e3e9cb7249466df6c08", "old_text": "12. QUIC assumes that all packets of a QUIC connection, or at least with the same 5-tuple {dest addr, source addr, protocol, dest port, source port}, will receive similar network treatment since feedback about loss or delay of each packet is used as input to the congestion controller. Therefore, it is not recommended to use different DiffServ Code Points (DSCPs) RFC2475 for packets belonging to the same connection. If differential network treatment, e.g. by the use of different DSCPs, is desired, multiple QUIC connections to the same server may be used. However, in general it is recommended to minimize the number of QUIC connections to the same server, to avoid increased overheads and, more importantly, competing congestion control. 13.", "comments": "Separates for manageability and this PR for applicability\nI actually find this sentence weak: \"Therefore it is not recommended to use different DiffServ Code Points (DSCPs) {{?RFC2475}} for ... Considering that QUIC v1 as defined has a single congestion controller and recovery handler and don't treat different DSCP values as separate transport flows I would think that this should be stronger formulated. However, I could see that one could provide a bit more context to this DSCP and single QUIC connections vs multiple QUIC streams discussion by referencing Section 5.1 of URL and be explicit about the point that QUIC v1 is similar to SCTP in regards to the issues raised.\nI think the new text do work.", "new_text": "12. QUIC, as defined in RFC9000, has a single congestion controller and recovery handler. 
This design assumes that all packets of a QUIC connection, or at least with the same 5-tuple {dest addr, source addr, protocol, dest port, source port} that have the same DiffServ Code Point (DSCP) RFC2475, will receive similar network treatment since feedback about loss or delay of each packet is used as input to the congestion controller. Therefore, packets belonging to the same connection should use a single DSCP. Section 5.1 of RFC7657 provides a discussion of DiffServ interactions with datagram transport protocols RFC7657 (in this respect the interactions with QUIC resemble those of SCTP). When multiplexing multiple flows over a single QUIC connection, the selected DSCP value should be the one associated with the highest priority requested for all multiplexed flows. If differential network treatment is desired, e.g., by the use of different DSCPs, multiple QUIC connections to the same server may be used. However, in general it is recommended to minimize the number of QUIC connections to the same server, to avoid increased overhead and, more importantly, competing congestion control. As in other uses of DiffServ, when a packet enters a network segment that does not support the DSCP value, this could result in the connection not receiving the network treatment it expects. The DSCP value in this packet could also be remarked as the packet travels along the network path, changing the requested treatment. 13."}
{"id": "q-en-ops-drafts-a48cfe9367530312e7cf0aee9273d7ef1b10e038fd53dfe6e924e0f208042500", "old_text": "7. QUIC version 1 without extensions uses an acknowledgment strategy adopted from TCP. That is, every other packet is acknowledged. However, generating and processing QUIC acknowledgments can consume significant resources, both in terms of processing costs and link utilization, especially on constraint networks. Some applications might be able to improve overall performance by using alternative strategies that reduce the rate of acknowledgments. 8.", "comments": "Suggesting a rewording to clarify various issues and correct some mistakes.\nPreviously scope of this text was agreed, but what appeared seems to me to have various NiTs: \u2022 Saying \"every other packet is acknowledged\" is a recommendation in RFC9000. I suggest citing also. \u2022 The text seems to conflate endpoint processing optimisation, and network path performance optimisation: /constraint/ wasn't the correct word, but even /constrained/ was a value judgment on design tradeoffs for any mobile/RF link - and many bandwidth-limited cabled link, which could be avoided by simply saying what is intended and saying either /which can impact performance of some types of network/ or /which can impact performance across some types of network/ ... depending on your viewpoint as an operator or application. \u2022 I think we can do better than URL, suggesting removing some and leaving might. This might be resolved in a PR .", "new_text": "7. QUIC version 1 without extensions uses an acknowledgment strategy adopted from TCP (Section 13.2 of RFC9000). That is, it recommends every other packet is acknowledged. However, generating and processing QUIC acknowledgments consumes resources at a sender and receiver. Acknowledgments also incur forwarding costs and contribute to link utilization, which can impact performance over some types of network. Applications might be able to improve overall performance by using alternative strategies that reduce the rate of acknowledgments.
8."} {"id": "q-en-ops-drafts-051284e58e22ca53e878bd1799f2bd37611c9a00ffc5ccd641e6a1eb79570da1", "old_text": "Mapping of application data to streams is application-specific and described for HTTP/s in QUIC-HTTP. In general data that can be processed independently, and therefore would suffer from head of line blocking, if forced to be received in order, should be transmitted over different streams. If there is a logical grouping of those data chunks or messages, stream can be reused, or a new stream can be opened for each chunk/message. If one message is mapped on a single stream, resetting the stream if the message is not needed anymore can be used to emulate partialreliability on a message basis. If a QUIC receiver has maximum allowed concurrent streams open and the sender on the other end indicates that more streams are needed, it doesn't automatically lead to an increase of the maximum number of streams by the receiver. Therefore it can be valuable to expose maximum number of allowed, currently open and currently used streams to the application to make the mapping of data to streams dependent on this information. Further, streams have a maximum number of bytes that can be sent on one stream. This number is high enough (2^64) that this will usually", "comments": "One sentences added to address remaining bits for\nNAME I didn't replace \"five-tuple\" with \"connection\" in this sentence in this PR: \"QUIC\u2019s stream multiplexing feature allows applications to run multiple streams over a single connection, without head-of-line blocking between streams, associated at a point in time with a single five-tuple. \" because it already has connection in the sentence; so we could only remove the last half sentence, but I actually think there is a value in leaving this in because some application may actually care about the 5-tuple.\nWFM. We should probably do an editorial pass to make sure we're using terms like \"5-tuple\" and \"connection\" and \"path\" properly everywhere anyway.\nSee also section on Streams in transport draft: \"Stream offsets allow for the octets on a stream to be placed in order. An endpoint MUST be capable of delivering data received on a stream in order. Implementations MAY choose to offer the ability to deliver data out of order. There is no means of ensuring ordering between octets on different streams.\"\nSee PR\nlgtm modulo editorial. circleci doesn't like this but can fix after it lands.", "new_text": "Mapping of application data to streams is application-specific and described for HTTP/s in QUIC-HTTP. In general data that can be processed independently, and therefore would suffer from head of line blocking if forced to be received in order, should be transmitted over different streams. If the application requires certain data to be received in order, the same stream should be used for that data. If there is a logical grouping of data chunks or messages, streams can be reused, or a new stream can be opened for each chunk/message. If one message is mapped to a single stream, resetting the stream to expire an unacknowledged message can be used to emulate partial reliability on a message basis. If a QUIC receiver has maximum allowed concurrent streams open and the sender on the other end indicates that more streams are needed, it doesn't automatically lead to an increase of the maximum number of streams by the receiver. 
Therefore it can be valuable to expose maximum number of allowed, currently open and currently used streams to the application to make the mapping of data to streams dependent on this information. Further, streams have a maximum number of bytes that can be sent on one stream. This number is high enough (2^64) that this will usually"}
{"id": "q-en-oscore-groupcomm-74adb429149a690b38ad9cc75f83eedad8ef4cbf042fad4433d745d4ab4a7446", "old_text": "Context associated to endpoint X. The Recipient Key is used as salt in the HKDF, when deriving the Pairwise Recipient Key. The Shared Secret is computed as a static-static Diffie-Hellman shared secret NIST-800-56A, where the endpoint uses its private key and the public key of the other endpoint X. The Shared Secret is used as Input Keying Material (IKM) in the HKDF. info and L are as defined in Section 3.2.1 of RFC8613. If EdDSA asymmetric keys are used, the Edward coordinates are mapped to Montgomery coordinates using the maps defined in Sections 4.1 and 4.2 of RFC7748, before using the X25519 and X448 functions defined in Section 5 of RFC7748. After establishing a partially or completely new Security Context (see ssec-sec-context-persistence and sec-group-key-management), the", "comments": "NAME Good point about being more explicit. I actually think is valuable to reference an existing encoding of public keys providing also key type and curve, like CREDX in URL Also, there is a discussion about format for public keys in COSE, which if it is specified probably will be referenced by EDHOC instead of CREDX. I would be good if there is a common format used in all these instances. For the time being I propose that we reference CRED_X with empty text string subject name, and keep track of the COSE activity.\nWe should consider replace the NIST reference with SECG [1] to harmonize with EDHOC [2]. [1] URL [2] URL (edit) ... but the commits in [3] mainly refers to the NIST spec. so I propose we keep it as is for the time being. [2] URL\nIt looks good to me; we can give more details on one point. For the two HKDF invocations in Section 2.3.1, we can define more explicitly how the public keys are expressed when concatenated to form IKM-Sender and IKM-Recipient. Given a public key, that can simply be: For OKP signature keys, the compressed 'x' coordinate, i.e. what would be expressed in the key parameter -2 of a corresponding COSE Key. For EC2 signature keys, the 'x' coordinate followed by the 'y' coordinate in this order, i.e. what would be expressed in the key parameters -2 and -3 of a corresponding COSE Key. This should be sufficient, i.e. it shouldn't be needed to have a more structured encoding of public keys providing also key type and curve, e.g. like done for CRED_X in URL", "new_text": "Context associated to endpoint X. The Recipient Key is used as salt in the HKDF, when deriving the Pairwise Recipient Key. IKM-Sender - the Input Keying Material (IKM) used in HKDF for the derivation of the Pairwise Sender Key - is the concatenation of the endpoint's own signature public key, endpoint X's signature public key from the Recipient Context, and the Shared Secret IKM-Recipient - the Input Keying Material (IKM) used in HKDF for the derivation of the Pairwise Recipient Key - is the concatenation of endpoint X's signature public key from the Recipient Context, the endpoint's own signature public key, and the Shared Secret.
The Shared Secret is computed as a cofactor Diffie-Hellman shared secret, see Section 5.7.1.2 of NIST-800-56A, where the endpoint uses its private key and the public key of the other endpoint X. Note the requirement of validation of public keys in ssec-crypto- considerations. For X25519 and X448, the procedure is described in Section 5 of RFC7748 using public keys mapped to Montgomery coordinates, see montgomery. info and L are as defined in Section 3.2.1 of RFC8613. If EdDSA asymmetric keys are used, the Edward coordinates are mapped to Montgomery coordinates using the maps defined in Sections 4.1 and 4.2 of RFC7748, before using the X25519 and X448 functions defined in Section 5 of RFC7748. For further details, see montgomery. After establishing a partially or completely new Security Context (see ssec-sec-context-persistence and sec-group-key-management), the"} {"id": "q-en-oscore-groupcomm-74adb429149a690b38ad9cc75f83eedad8ef4cbf042fad4433d745d4ab4a7446", "old_text": "2.3.2. When using any of its Pairwise Sender Keys, a sender endpoint including the 'Partial IV' parameter in the protected message MUST use the current fresh value of the Sender Sequence Number from its", "comments": "NAME Good point about being more explicit. I actually think is valuable to reference an existing encoding of public keys providing also key type and curve, like CREDX in URL Also, there is a discussion about format for public keys in COSE, which if it is specified probably will be referenced by EDHOC instead of CREDX. I would be good if there is a common format used in all these instances. For the time being I propose that we reference CRED_X with empty text string subject name, and keep track of the COSE activity.\nWe should consider replace the NIST reference with SECG [1] to harmonize with EDHOC [2]. [1] URL [2] URL (edit) ... but the commits in [3] mainly refers to the NIST spec. so I propose we keep it as is for the time being. [2] URL\nIt looks good to me; we can give more details on one point. For the two HKDF invocations in Section 2.3.1, we can define more explicitly how the public keys are expressed when concatenated to form IKM-Sender and IKM-Recipient. Given a public key, that can simply be: For OKP signature keys, the compressed 'x' coordinate, i.e. what would be expressed in the key parameter -2 of a corresponding COSE Key. For EC2 signature keys, the 'x' coordinate followed by the 'y' coordinate in this order, i.e. what would be expressed in the key parameters -2 and -3 of a corresponding COSE Key. This should be sufficient, i.e. it shouldn't be needed to have a more structured encoding of public keys providing also key type and curve, e.g. like done for CRED_X in URL", "new_text": "2.3.2. 2.3.2.1. The y-coordinate of the other endpoint's Ed25519 public key is decoded as specified in Section 5.1.3 of RFC8032. The Curve25519 u-coordinate is recovered as u = (1 + y) / (1 - y) (mod p) following the map in Section 4.1 of RFC7748. Note that the mapping is not defined for y = 1, and that y = -1 maps to u = 0 which corresponds to the neutral group element and thus will result in a degenerate shared secret. Therefore implementations MUST abort if the y-coordinate of the other endpoint's Ed25519 public key is 1 or -1 (mod p). The private signing key byte strings (= the lower 32 bytes used for generating the public key, see step 1 of Section 5.1.5 of RFC8032) are decoded the same way for signing in Ed25519 and scalar multiplication in X25519. 
Hence, to compute the shared secret the endpoint applies the X25519 function to the Ed25519 private signing key byte string and the encoded u-coordinate byte string as specified in Section 5 of RFC7748. 2.3.2.2. The y-coordinate of the other endpoint's Ed448 public key is decoded as specified in Section 5.2.3. of RFC8032. The Curve448 u-coordinate is recovered as u = y^2 * (d * y^2 - 1) / (y^2 - 1) (mod p) following the map from \"edwards448\" in Section 4.2 of RFC7748, and also using the relation x^2 = (y^2 - 1)/(d * y^2 - 1) from the curve equation. Note that the mapping is not defined for y = 1 or -1. Therefore implementations MUST abort if the y-coordinate of the peer endpoint's Ed448 public key is 1 or -1 (mod p). The private signing key byte strings (= the lower 57 bytes used for generating the public key, see step 1 of Section 5.2.5 of RFC8032) are decoded the same way for signing in Ed448 and scalar multiplication in X448. Hence, to compute the shared secret the endpoint applies the X448 function to the Ed448 private signing key byte string and the encoded u-coordinate byte string as specified in Section 5 of RFC7748. 2.3.3. When using any of its Pairwise Sender Keys, a sender endpoint including the 'Partial IV' parameter in the protected message MUST use the current fresh value of the Sender Sequence Number from its"} {"id": "q-en-oscore-groupcomm-74adb429149a690b38ad9cc75f83eedad8ef4cbf042fad4433d745d4ab4a7446", "old_text": "sequences of one-to-one exchanges with servers in the group, by sending requests over unicast. 2.3.3. If the pairwise mode is supported, the Security Context additionally includes Secret Derivation Algorithm, Secret Derivation Parameters", "comments": "NAME Good point about being more explicit. I actually think is valuable to reference an existing encoding of public keys providing also key type and curve, like CREDX in URL Also, there is a discussion about format for public keys in COSE, which if it is specified probably will be referenced by EDHOC instead of CREDX. I would be good if there is a common format used in all these instances. For the time being I propose that we reference CRED_X with empty text string subject name, and keep track of the COSE activity.\nWe should consider replace the NIST reference with SECG [1] to harmonize with EDHOC [2]. [1] URL [2] URL (edit) ... but the commits in [3] mainly refers to the NIST spec. so I propose we keep it as is for the time being. [2] URL\nIt looks good to me; we can give more details on one point. For the two HKDF invocations in Section 2.3.1, we can define more explicitly how the public keys are expressed when concatenated to form IKM-Sender and IKM-Recipient. Given a public key, that can simply be: For OKP signature keys, the compressed 'x' coordinate, i.e. what would be expressed in the key parameter -2 of a corresponding COSE Key. For EC2 signature keys, the 'x' coordinate followed by the 'y' coordinate in this order, i.e. what would be expressed in the key parameters -2 and -3 of a corresponding COSE Key. This should be sufficient, i.e. it shouldn't be needed to have a more structured encoding of public keys providing also key type and curve, e.g. like done for CRED_X in URL", "new_text": "sequences of one-to-one exchanges with servers in the group, by sending requests over unicast. 2.3.4. 
If the pairwise mode is supported, the Security Context additionally includes Secret Derivation Algorithm, Secret Derivation Parameters"} {"id": "q-en-oscore-groupcomm-74adb429149a690b38ad9cc75f83eedad8ef4cbf042fad4433d745d4ab4a7446", "old_text": "hold for Group OSCORE, about building the AEAD nonce and the secrecy of the Security Context parameters. The EdDSA signature algorithm and the elliptic curve Ed25519 RFC8032 are mandatory to implement. For endpoints that support the pairwise mode, the ECDH-SS + HKDF-256 algorithm specified in Section 6.3.1 of I-D.ietf-cose-rfc8152bis-algs and the X25519 curve RFC7748 are also mandatory to implement. Constrained IoT devices may alternatively represent Montgomery curves and (twisted) Edwards curves RFC7748 in the short-Weierstrass form", "comments": "NAME Good point about being more explicit. I actually think is valuable to reference an existing encoding of public keys providing also key type and curve, like CREDX in URL Also, there is a discussion about format for public keys in COSE, which if it is specified probably will be referenced by EDHOC instead of CREDX. I would be good if there is a common format used in all these instances. For the time being I propose that we reference CRED_X with empty text string subject name, and keep track of the COSE activity.\nWe should consider replace the NIST reference with SECG [1] to harmonize with EDHOC [2]. [1] URL [2] URL (edit) ... but the commits in [3] mainly refers to the NIST spec. so I propose we keep it as is for the time being. [2] URL\nIt looks good to me; we can give more details on one point. For the two HKDF invocations in Section 2.3.1, we can define more explicitly how the public keys are expressed when concatenated to form IKM-Sender and IKM-Recipient. Given a public key, that can simply be: For OKP signature keys, the compressed 'x' coordinate, i.e. what would be expressed in the key parameter -2 of a corresponding COSE Key. For EC2 signature keys, the 'x' coordinate followed by the 'y' coordinate in this order, i.e. what would be expressed in the key parameters -2 and -3 of a corresponding COSE Key. This should be sufficient, i.e. it shouldn't be needed to have a more structured encoding of public keys providing also key type and curve, e.g. like done for CRED_X in URL", "new_text": "hold for Group OSCORE, about building the AEAD nonce and the secrecy of the Security Context parameters. The EdDSA signature algorithm Ed25519 RFC8032 is mandatory to implement. For endpoints that support the pairwise mode, the ECDH-SS + HKDF-256 algorithm specified in Section 6.3.1 of I-D.ietf-cose- rfc8152bis-algs and the X25519 algorithm RFC7748 are also mandatory to implement. Constrained IoT devices may alternatively represent Montgomery curves and (twisted) Edwards curves RFC7748 in the short-Weierstrass form"} {"id": "q-en-oscore-groupcomm-74adb429149a690b38ad9cc75f83eedad8ef4cbf042fad4433d745d4ab4a7446", "old_text": "The derivation of pairwise keys defined in key-derivation-pairwise is compatible with ECDSA and EdDSA asymmetric keys, but is not compatible with RSA asymmetric keys. The security of using the same key pair for Diffie-Hellman and for signing is demonstrated in Degabriele. 10.14.", "comments": "NAME Good point about being more explicit. 
I actually think is valuable to reference an existing encoding of public keys providing also key type and curve, like CREDX in URL Also, there is a discussion about format for public keys in COSE, which if it is specified probably will be referenced by EDHOC instead of CREDX. I would be good if there is a common format used in all these instances. For the time being I propose that we reference CRED_X with empty text string subject name, and keep track of the COSE activity.\nWe should consider replace the NIST reference with SECG [1] to harmonize with EDHOC [2]. [1] URL [2] URL (edit) ... but the commits in [3] mainly refers to the NIST spec. so I propose we keep it as is for the time being. [2] URL\nIt looks good to me; we can give more details on one point. For the two HKDF invocations in Section 2.3.1, we can define more explicitly how the public keys are expressed when concatenated to form IKM-Sender and IKM-Recipient. Given a public key, that can simply be: For OKP signature keys, the compressed 'x' coordinate, i.e. what would be expressed in the key parameter -2 of a corresponding COSE Key. For EC2 signature keys, the 'x' coordinate followed by the 'y' coordinate in this order, i.e. what would be expressed in the key parameters -2 and -3 of a corresponding COSE Key. This should be sufficient, i.e. it shouldn't be needed to have a more structured encoding of public keys providing also key type and curve, e.g. like done for CRED_X in URL", "new_text": "The derivation of pairwise keys defined in key-derivation-pairwise is compatible with ECDSA and EdDSA asymmetric keys, but is not compatible with RSA asymmetric keys. For the public key translation from Ed25519 (Ed448) to X25519 (X448) specified in key-derivation-pairwise, variable time methods can be used since the translation operates on public information. Any byte string of appropriate length is accepted as a public key for X25519 (X448) in RFC7748, it is therefore not necessary for security to validate the translated public key (assuming the translation was successful). The security of using the same key pair for Diffie-Hellman and for signing (by considering the ECDH procedure in sec-derivation-pairwise as a Key Encapsulation Mechanism (KEM)) is demonstrated in Degabriele and Thormarker. Applications using ECDH (except X25519 and X448) based KEM in sec- derivation-pairwise are assumed to verify that a peer endpoint's public key is on the expected curve and that the shared secret is not the point at infinity. The KEM in Degabriele checks that the shared secret is different from the point at infinity, as does the procedure in Section 5.7.1.2 of NIST-800-56A which is referenced in sec- derivation-pairwise. Extending Theorem 2 of Degabriele, Thormarker shows that the same key pair can be used with X25519 and Ed25519 (X448 and Ed448) for the KEM specified in sec-derivation-pairwise. By symmetry in the KEM used in this document, both endpoints can consider themselves to have the recipient role in the KEM -- as discussed in Section 7 of Thormarker - and rely on the mentioned proofs for the security of their key pairs. Theorem 3 in Degabriele shows that the same key pair can be used for an ECDH based KEM and ECDSA. The KEM uses a different KDF than in sec-derivation-pairwise, but the proof only depends on that the KDF has certain required properties, which are the typical assumptions about HKDF, e.g., that output keys are pseudorandom. 
In order to comply with the assumptions of Theorem 3, received public keys MUST be successfully validated, see Section 5.6.2.3.4 of NIST-800-56A. The validation MAY be performed by a trusted Group Manager. For Degabriele to apply as it is written, public keys need to be in the expected subgroup. For this we rely on cofactor DH, Section 5.7.1.2 of NIST-800-56A which is referenced in sec-derivation-pairwise. HashEdDSA variants of Ed25519 and Ed448 are not used by COSE, see Section 2.2 of I-D.ietf-cose-rfc8152bis-algs, and are not covered by the analysis in Thormarker, and hence MUST NOT be used with the public keys used with pairwise keys as specified in this document. 10.14."} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "Observe RFC7641 is an optional feature. An implementation MAY support RFC7252 and the OSCORE option without supporting RFC7641, in which case the Observe related processing specified in this section, sequence-numbers and processing can be omitted. The Observe option as used here targets the requirements on forwarding of I-D.hartke-core-e2e-security-reqs (Section 2.2.1). This section specifies Observe processing associated to the Partial IV (observe-partial-iv) and Observe processing in the presence of RFC7641-compliant intermediaries (observe-option-processing). In contrast to e.g. block-wise, the Inner and Outer Observe option are not processed independently. Outer Observe is required to support Observe operations in intermediaries, but the additional use of Inner Observe is needed to protect Observe registrations end-to- end (see observe-option-processing). observe-without-intermed specifies a simplified Observe processing which is applicable in the absence of intermediaries. Note that OSCORE is compliant with the requirement that a client must not register more than once for the same target resource (see Section 3.1 of RFC7641) since the target resource for Observe registration is identified by all options in the request that are part of the Cache-Key, including OSCORE. 4.1.3.4.1. To support proxy operations, the CoAP client using Observe with OSCORE MUST set Outer Observe. If Observe was only sent encrypted end-to-end, an OSCORE-unaware proxy would not expect several responses to a non-Observe request and notifications would not reach the client. Moreover, intermediaries are allowed to cancel observations and inform the server; forbidding this may result in processing and transmission of notifications on the server side which do not reach the client. In case of registrations or re-registrations, the CoAP client using Observe with OSCORE MUST set both Inner and Outer Observe to the same value (0). This allows the server to verify that the observation was requested by the client, thereby avoiding unnecessary processing and transmission of notifications, since such notifications would not benefit the client. An intermediary that supports Observe MUST copy the OSCORE option in the next hop request unchanged. Although intermediaries are allowed to re-send notifications to other clients, when using OSCORE this does not happen, since requests from different clients will have different cache keys. The Outer Observe option in the CoAP request may be legitimately removed by a proxy or ignored by a server. In these cases, the server processes the request as a non-Observe request and produce a non-Observe response. 
If the OSCORE client receives a response to an Observe request without an Outer Observe value, then it verifies the response as a non-Observe response, as specified in ver-res. If the OSCORE client receives a response to a non-Observe request with an Outer Observe value, it stops processing the message, as specified in ver-res. In order to support Observe processing in OSCORE-unaware intermediaries, for messages with the Observe option the Outer Code SHALL be set to 0.05 (FETCH) for requests and to 2.05 (Content) for responses. 4.1.3.4.2. If the server accepts an Observe registration, a Partial IV MUST be", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "Observe RFC7641 is an optional feature. An implementation MAY support RFC7252 and the OSCORE option without supporting RFC7641, in which case the Observe related processing can be omitted. OSCORE supports a reduced set of RFC7641 operations performed in intermediary nodes as specified in this section. The use of Observe targets the requirements on forwarding of Section 2.2.1 of I- D.hartke-core-e2e-security-reqs, i.e. that observations go through any intermediate node, as illustrated in Figure 8 of RFC7641). Inner Observe is used by default to protect the value of the Observe option between the endpoints. Outer Observe is additionally used to support a selected set of intermediate node operations that are useful while maintaining end-to-end security. Intermediary nodes MUST copy the OSCORE option in the next hop request unchanged. In order to support Observe processing in OSCORE-unaware intermediaries, for messages with the Observe option the Outer Code MUST be set to 0.05 (FETCH) for requests and to 2.05 (Content) for responses. 4.1.3.4.1. The client MUST set both Inner and Outer Observe to the same value in the request. In order to support the case of an intermediary node ignoring a registration request (Observe 0) and instead processing a non-Observe request (Section 2 of RFC7641), the server MUST only consider the received message a registration request if both Inner and Outer Observe are set to 0. Clients can re-register observations to ensure that the observation is still active and establish freshness again (RFC7641 Section 3.3.1). When an OSCORE protected observation is refreshed, not only the ETags, but also the Partial IV (and thus the payload and OSCORE option) change. The server uses the Partial IV of the new request as the 'request_piv' of new responses. Since intermediaries are not assumed to have a security context with the server, cancellation or re-registration of an observation initiated by an intermediary node is not supported. Cancellation of observation by an intermediary using the Reset message as response to a notification can still be applied. An intermediary node may forward a re-registration message, but if a proxy re-sends an old registration message from a client this will trigger the replay protection mechanism in the server which, depending on action, may result in a termination of the observation in proxy or client. An OSCORE aware intermediary SHALL NOT initiate re-registrations of observations. A server MAY respond to a replayed registration request with a cached notification. The server SHALL NOT respond to a replayed registration request with a message encrypted using the Partial IV of the request. 
Note that OSCORE is compliant with the requirement that a client must not register more than once for the same target resource (see Section 3.1 of RFC7641) since the target resource for Observe registration is identified by all options in the request that are part of the Cache-Key, including OSCORE. 4.1.3.4.2. If the server accepts an Observe registration, a Partial IV MUST be"} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "the order of notifications, the client SHALL maintain a Notification Number for each Observation it registers. The Notification Number is a non-negative integer containing the largest Partial IV of the received notifications for the associated Observe registration (see replay-protection). The Notification Number is initialized to the Partial IV of the first successfully received notification response to the registration request. In contrast to RFC7641, the received Partial IV MUST always be compared with the Notification Number, which thus MUST NOT be forgotten after 128 seconds. Further details of replay protection of notifications are specified in replay- protection. Clients can re-register observations to ensure that the observation is still active and establish freshness again (RFC7641 Section 3.3.1). When an OSCORE protected observation is refreshed, not only the ETags, but also the partial IV (and thus the payload and OSCORE option) change. The server uses the new request's Partial IV as the 'request_piv' of new responses. 4.1.3.5.", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "the order of notifications, the client SHALL maintain a Notification Number for each Observation it registers. The Notification Number is a non-negative integer containing the largest Partial IV of the received notifications for the associated Observe registration. Further details of replay protection of notifications are specified in replay-notifications. The Inner Observe in a response MUST either have the value of Observe in the original CoAP message or be empty. The former is used to allow the server to set the Observe values to be received by the client. In the latter case the overhead of the Observe value is saved and instead the least significant bytes of the Partial IV is used as Observe value, see replay-notifications. The Outer Observe in the response may be needed for intermediary nodes to support multiple responses to one request. The client MAY ignore the Outer Observe value. If the client receives a response to an Observe request without an Observe value, then it verifies the response as a non-Observe response, as specified in ver-res. If the client receives a response to a non-Observe request with an Observe value, then it stops processing the message but keeps the observation alive, as specified in ver-res. 4.1.3.5."} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "The sending endpoint SHALL write the Code of the original CoAP message into the plaintext of the COSE object (see plaintext). After that, the sending endpoint writes an Outer Code to the OSCORE message. With one exeception (see observe-option-processing) the Outer Code SHALL by default be set to 0.02 (POST) for requests and to 2.04 (Changed) for responses. 
The receiving endpoint SHALL discard the Outer Code in the OSCORE message and write the Code of the COSE object plaintext (plaintext) into the decrypted CoAP message. The other currently defined CoAP Header fields are Unprotected (Class U). The sending endpoint SHALL write all other header fields of the", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "The sending endpoint SHALL write the Code of the original CoAP message into the plaintext of the COSE object (see plaintext). After that, the sending endpoint writes an Outer Code to the OSCORE message. With one exeception (see observe) the Outer Code SHALL by default be set to 0.02 (POST) for requests and to 2.04 (Changed) for responses. The receiving endpoint SHALL discard the Outer Code in the OSCORE message and write the Code of the COSE object plaintext (plaintext) into the decrypted CoAP message. The other currently defined CoAP Header fields are Unprotected (Class U). The sending endpoint SHALL write all other header fields of the"} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "7.4.1. Additionally to the previous section, the following applies when Observe is supported. 7.4.1.1. A client receiving a notification SHALL compare the Partial IV of a received notification with the Notification Number associated to that Observe registration. The ordering of notifications after OSCORE processing MUST be aligned with the Partial IV. The client MAY do so by copying the least significant bytes of the Partial IV into the Observe option, before passing it to CoAP processing. The client MAY ignore an Outer Observe option value. If the verification of the response succeeds, and the received Partial IV was greater than the Notification Number, then the client SHALL update the corresponding Notification Number with the received Partial IV. The client MUST stop processing notifications with a Partial IV which has been previously received. An application MAY require the client to discard notifications which have Partial IV less than the Notification Number. If messages are processed concurrently, the Partial IV needs to be validated a second time after decryption and before updating the replay protection data. The operation of validating the Partial IV and updating the replay protection data MUST be atomic. 7.4.1.2. In order to allow intermediaries to re-register their interest in a resource (see 3.3.1 of RFC7641), a server receiving an Observe registration with Token, kid and Partial IV identical to a previously received registration, and which decrypts without error SHALL not treat it as a replay and SHALL respond with a notification. The notification may be a cached copy of the latest sent notification (with the same Token, kid and Partial IV) or it may be a newly generated notification with a fresh Partial IV. 7.5.", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "7.4.1. The following applies additionally when Observe is supported. The Notification Number is initialized to the Partial IV of the first successfully received notification response to the registration request. A client receiving a notification SHALL compare the Partial IV of a verified notification with the Notification Number associated to that Observe registration. 
In contrast to RFC7641, the received Partial IV MUST always be compared with the Notification Number, which thus MUST NOT be forgotten after 128 seconds. If the verification of the response succeeds, and the received Partial IV was greater than the Notification Number, then the client SHALL update the corresponding Notification Number with the received Partial IV. The operation of validating the Partial IV and updating the replay protection data MUST be atomic. If the Inner Observe option is empty, then the client SHALL copy the least significant bytes of the Partial IV into the Observe option, before passing it to CoAP processing. If the Partial IV is less than or equal to the Notification Number, then the client SHALL stop processing the response but not cancel the observation. 7.5."} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "7.5.3. To prevent accepting replay of previously received notification responses, the client may perform the following procedure after boot: The client rejects notifications bound to the earlier registration, removes all Notification Numbers and re-registers using Observe. 8.", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "7.5.3. To prevent accepting replay of previously received notifications, the client may perform the following procedure after boot: The client forgets about earlier registrations, removes all Notification Numbers and re-registers using Observe. 8."} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "If Observe is implemented: Replace step 1 in ver-req with: A. Discard Code and all options marked in fig-option-protection with 'x' in column E, except for Observe, present in the received message. For example, an If-Match Outer option is discarded, Uri-Host Outer option is not discarded, Observe is not discarded. Replace step 3 in ver-req with: B. If Observe is present in the received message, and has value 0, check if the Token, kid and Partial IV are identical to a previously received Observe registration. In this case, replay verification is postponed until step C. Otherwise verify the 'Partial IV' parameter using the Replay Window, as described in replay-protection. Insert the following steps between step 6 and 7 of ver-req: C. If Observe was present in the received message (in step 1): If the value of Observe in the Outer message is 0: If Observe is present and has value 0 in the decrypted options, discard the Outer Observe. If the Token, kid and Partial IV are identical to a previously received Observe registration, respond with a notification as described in observe-replay- processing; Otherwise, discard both the Outer and Inner (if present) Observe options and verify the 'Partial IV' parameter using the Replay Window, as described in replay-protection. If the value of Observe in the Outer message is not 0, discard the decrypted Observe option if present. 8.3.", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "If Observe is implemented: Insert the following step before step 1 in ver-req: A. Check if the Outer Observe option is present and has value zero. Insert the following step between step 6 and 7 of ver-req: B. 
If Inner Observe is present and has value zero, and Outer option is either not present or does not have value 0, then remove the Observe option. 8.3."} {"id": "q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2", "old_text": "If Observe is implemented: Replace step 1 of ver-res with: A. Discard Code and all options marked in fig-option-protection with 'x' in column E, except for Observe, present in the received message. For example, ETag Outer option is discarded, as well as Max-Age Outer option, Observe is not discarded. Insert the following steps between step 2 and 3 of ver-res: B. If the Observe option is present in the response, but the request was not an Observe registration, then go to 9. C. If an Observe option is included or the Notification number for the observation has already been initiated, but the Partial IV is not present in the response, then go to 9. D. For Observe notifications, verify the received 'Partial IV' parameter against the corresponding Notification Number as described in replay-protection. Replace step 6 of ver-res with: E. If the response is a notification, initiate or update the corresponding Notification Number, as described in sequence-numbers. Otherwise, delete the attribute-value pair (Token, {Security Context, PIV}). Replace step 7 of ver-res with: F. Add decrypted Code, options and payload to the decrypted request, except for decrypted Observe if present. The OSCORE option is removed. An error condition occurring while processing a response in an observation does not cancel the observation. A client MUST NOT react to failure in step 5 by re-registering the observation immediately. 9.", "comments": "Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported", "new_text": "If Observe is implemented: Replace step 6 of ver-res with: A. If Inner Observe is present then: If the request was not an Observe registration, then go to 9. If the Partial IV was not present in the response, then go to 9. If the request was an Observe registration and the Partial IV was present in the response, then verify the received 'Partial IV' parameter against the corresponding Notification Number, and follow the processing described in replay-notifications. Otherwise, delete the attribute-value pair (Token, {Security Context, PIV}). Replace step 9 of ver-res with: B. In case any of the previous erroneous conditions apply: the client SHALL stop processing the response. An error condition occurring while processing a response in an observation does not cancel the observation. A client MUST NOT react to failure by re- registering the observation immediately. Note that the attribute-value pair (Token, {Security Context, PIV}) MUST be deleted whenever the Observation is cancelled or \"forgotten\". 9."} {"id": "q-en-oscore-3dfa536703e97ec8fe6c2fe9bfeef92e160532686cc694247d93d6475efa2969", "old_text": "receiving endpoint discards the message, if complying to the policy) may be obtained as part of normal resource discovery. 8. Privacy threats executed through intermediate nodes are considerably", "comments": "I don't know why your commit got included.\nI think I do: I modified the commit in order to add the description (to close issues). 
Now there are 2 identical commits, one is the one above ( 30475e3) that does not close issues, the other is the one in the master branch (7d4cbc2)\nThanks for your contribution Jim", "new_text": "receiving endpoint discards the message, if complying to the policy) may be obtained as part of normal resource discovery. Applications need to use a padding scheme if the content of a message can be determined solely from the length of the payload. As an example, the strings \"YES\" and \"NO\" even if encrypted can be distinguished from each other as there is no padding supplied by the current set of encryption algorithms. Some information can be determined even from looking at boundary conditions. An example of this would be returning an integer between 0 and 100 where lengths of 1, 2 and 3 will provide information about where in the range things are. Three different methods to deal with this are: 1) ensure that all messages are the same length. For example using 0 and 1 instead of 'yes' and 'no'. 2) Use a character which is not part of the responses to pad to a fixed length. For example, pad with a space to three characters. 3) Use the PKCS #7 style padding scheme where m bytes are appended each having the value of m. For example, appending a 0 to \"YES\" and two 1's to \"NO\". This style of padding means that all values need to be padded. 8. Privacy threats executed through intermediate nodes are considerably"} {"id": "q-en-perc-wg-42b3cc92d43a023727a4277cb7d69ce8f8dab1c6889acc0b219f64b2ebc96d83", "old_text": "Distributor, this framework utilizes a DTLS-SRTP RFC5764 association between an endpoint and the Key Distributor. To establish this association, an endpoint will send DTLS-SRTP messages to the Media Distributor which will then forward them to the Media Distributor as defined in I-D.jones-perc-dtls-tunnel. The Key Encryption Key (KEK) (i.e., EKT Key) is also conveyed by the Key Distributor over the DTLS association to endpoints via procedures defined in PERC EKT [I-", "comments": "Replaced Media with Key distributor in on important place. Replaced the DTLS tunnel with TLS according to the latest tunnel draft. A couple of more wording fixes.", "new_text": "Distributor, this framework utilizes a DTLS-SRTP RFC5764 association between an endpoint and the Key Distributor. To establish this association, an endpoint will send DTLS-SRTP messages to the Media Distributor which will then forward them to the Key Distributor as defined in I-D.jones-perc-dtls-tunnel. The Key Encryption Key (KEK) (i.e., EKT Key) is also conveyed by the Key Distributor over the DTLS association to endpoints via procedures defined in PERC EKT [I-"} {"id": "q-en-perc-wg-42b3cc92d43a023727a4277cb7d69ce8f8dab1c6889acc0b219f64b2ebc96d83", "old_text": "Media Distributors use DTLS-SRTP RFC5764 directly with a peer Media Distributor to establish HBH keys for transmitting RTP and RTCP packets that peer Media Distributor. The Key Distributor does not facilitate establishing HBH keys for use between Media Distributors. 4.5.1. The procedures defined in DTLS Tunnel for PERC I-D.jones-perc-dtls- tunnel establish one or more DTLS tunnels between the Media Distributor and Key Distributor, making it is possible for the Media Distributor to facilitate the establishment of a secure DTLS association between each endpoint and the Key Distributor as shown", "comments": "Replaced Media with Key distributor in on important place. Replaced the DTLS tunnel with TLS according to the latest tunnel draft. 
A couple of more wording fixes.", "new_text": "Media Distributors use DTLS-SRTP RFC5764 directly with a peer Media Distributor to establish HBH keys for transmitting RTP and RTCP packets to that peer Media Distributor. The Key Distributor does not facilitate establishing HBH keys for use between Media Distributors. 4.5.1. The procedures defined in DTLS Tunnel for PERC I-D.jones-perc-dtls- tunnel establish one or more TLS tunnels between the Media Distributor and Key Distributor, making it possible for the Media Distributor to facilitate the establishment of a secure DTLS association between each endpoint and the Key Distributor as shown"} {"id": "q-en-perc-wg-42b3cc92d43a023727a4277cb7d69ce8f8dab1c6889acc0b219f64b2ebc96d83", "old_text": "the DTLS signaling, but will instead forward DTLS packets received from an endpoint on to the Key Distributor (and vice versa) via a tunnel established between Media Distributor and the Key Distributor. This tunnel used to encapsulate the DTLS-SRTP signaling between the Key Distributor and endpoints will also be used to convey HBH key information from the Key Distributor to the Media Distributor, so no additional protocol or interface is required.", "comments": "Replaced Media with Key distributor in one important place. Replaced the DTLS tunnel with TLS according to the latest tunnel draft. A couple of more wording fixes.", "new_text": "the DTLS signaling, but will instead forward DTLS packets received from an endpoint on to the Key Distributor (and vice versa) via a tunnel established between Media Distributor and the Key Distributor. This tunnel, used to encapsulate the DTLS-SRTP signaling between the Key Distributor and endpoints, will also be used to convey HBH key information from the Key Distributor to the Media Distributor, so no additional protocol or interface is required."} {"id": "q-en-perc-wg-42b3cc92d43a023727a4277cb7d69ce8f8dab1c6889acc0b219f64b2ebc96d83", "old_text": "fingerprint of the DTLS-SRTP certificate used for the call. This certificate is unique for a given call and a conference. This allows the Key Distributor to ensure that only authorized users participate in the conference. Similarly the Key Distributor can create a WeBRTC Identity assertion bound the fingerprint of the unique certificate used by the Key Distributor for this conference so that the endpoint can validate it is talking to the correct Key Distributor.", "comments": "Replaced Media with Key distributor in one important place. Replaced the DTLS tunnel with TLS according to the latest tunnel draft. A couple of more wording fixes.", "new_text": "fingerprint of the DTLS-SRTP certificate used for the call. This certificate is unique for a given call and a conference. This allows the Key Distributor to ensure that only authorized users participate in the conference. Similarly the Key Distributor can create a WebRTC Identity assertion to bind the fingerprint of the unique certificate used by the Key Distributor for this conference so that the endpoint can validate it is talking to the correct Key Distributor."} {"id": "q-en-perc-wg-142327eae773b6ea0c5d424a18696b25150f99d9c4832e8406de30313981f7c8", "old_text": "This document defines an extension to DTLS-SRTP called SRTP EKT Key Transport which enables secure transport of EKT keying material from one DTLS-SRTP peer to another. This allows those peers to process EKT keying material in SRTP (or SRTCP) and retrieve the embedded SRTP keying material. 
This combination of protocols is valuable because it combines the advantages of DTLS, which has strong authentication of the endpoint and flexibility, along with allowing secure multiparty RTP with loose coordination and efficient communication of per-source keys. 5.1.", "comments": "This is much more compatible with extensibility patterns in TLS stacks, and in a way that is very forward-compatible with the forthcoming DTLS 1.3, while still being implementable in pre-1.3 stacks. DTLS 1.3 has a notion of post-handshake handshake messages, like and , which are acknoweldged with an message. The framework described here simply sends as yet another post-handshake message. Depends on\nr? NAME\nI have a bunch of comments but really I could live with this like you have it now.", "new_text": "This document defines an extension to DTLS-SRTP called SRTP EKT Key Transport which enables secure transport of EKT keying material from the DTLS-SRTP peer in the server role to the client. This allows those peers to process EKT keying material in SRTP (or SRTCP) and retrieve the embedded SRTP keying material. This combination of protocols is valuable because it combines the advantages of DTLS, which has strong authentication of the endpoint and flexibility, along with allowing secure multiparty RTP with loose coordination and efficient communication of per-source keys. 5.1."} {"id": "q-en-perc-wg-142327eae773b6ea0c5d424a18696b25150f99d9c4832e8406de30313981f7c8", "old_text": "5.2. This document defines a new TLS negotiated extension called \"srtp_ekt_key_transport\" and a new TLS content type called EKTMessage. Using the syntax described in DTLS RFC6347, the following structures are used: If a DTLS client includes srtp_ekt_key_transport in its ClientHello, then a DTLS server that supports this extensions will includes srtp_ekt_key_transport in its ServerHello message. If a DTLS client includes srtp_ekt_key_transport in its ClientHello, but does not receive srtp_ekt_key_transport in the ServerHello, the DTLS client MUST NOT send DTLS EKTMessage messages. Also, the srtp_ekt_key_transport in the ServerHello MUST select one and only one EKTCipherType from the list provided by the client in the srtp_ekt_key_transport in the ClientHello. When a DTLS client sends the srtp_ekt_key_transport in its ClientHello message, it MUST include the SupportedEKTCiphers as the extension_data for the extension, listing the EKTCipherTypes the client is willing to use in preference order, with the most preferred version first. When the server responds in the srtp_ekt_key_transport in its ServerHello message, it MUST include a SupportedEKTCiphers list that selects a single EKTCipherType to use (selected from the list provided by the client) or it returns an empty list to indicate there is no matching EKTCipherType in the clients list that the server is also willing to use. The value to be used in the EKTCipherType for future extensions that define new ciphers is the value from the \"EKT Ciphers Type\" IANA registry defined in iana-ciphers. The figure above defines the contents for a new TLS content type called EKTMessage which is registered in iana-tls-content. The EKTMessage above is used as the opaque fragment in the TLSPlaintext structure defined in Section 6.2.1 of RFC5246 and the srtp_ekt_message as the content type. The srtp_ekt_message content type is defined and registered in iana-tls-ext. When the Server wishes to provide a new EKT Key, it can send EKTMessage containing an EKTKey with the new key information. 
The client MUST respond with an EKTMessage of type ekt_key_act, if the EKTKey was successfully processed and stored or respond with the the ekt_key_error EKTMessage otherwise. The diagram below shows a message flow of DTLS client and DTLS server using the DTLS-SRTP Key Transport extension. Note that when used in PERC I-D.ietf-perc-private-media-framework, the Server is actually split between the Media Distrbutor and Key Distributor. The messages in the above figure that are \"SRTP packets\" will not got to the Key Distributor but the other packets will be relayed by the Media Distributor to the Key Distributor. 5.3.", "comments": "This is much more compatible with extensibility patterns in TLS stacks, and in a way that is very forward-compatible with the forthcoming DTLS 1.3, while still being implementable in pre-1.3 stacks. DTLS 1.3 has a notion of post-handshake handshake messages, like and , which are acknowledged with an message. The framework described here simply sends as yet another post-handshake message. Depends on\nr? NAME\nI have a bunch of comments but really I could live with this like you have it now.", "new_text": "5.2. This document defines a new TLS negotiated extension and a new TLS handshake message type . The extension negotiates the cipher to be used in encrypting and decrypting EKTCiphertext values, and the handshake message carries the corresponding key. The diagram below shows a message flow of DTLS 1.3 client and server using EKT configured using the DTLS extensions described in this section. (The initial cookie exchange and other normal DTLS messages are omitted.) In the context of a multi-party SRTP session in which each endpoint performs a DTLS handshake as a client with a central DTLS server, the extensions defined in this section allow the DTLS server to set a common EKT key among all participants. Each endpoint can then use EKT tags encrypted with that common key to inform other endpoints of the keys it is using to protect SRTP packets. This avoids the need for many individual DTLS handshakes among the endpoints, at the cost of preventing endpoints from directly authenticating one another. 5.2.1. To indicate its support for EKT, a DTLS-SRTP client includes in its ClientHello an extension of type listing the EKT ciphers the client supports in preference order, with the most preferred version first. If the server agrees to use EKT, then it includes a extension in its ServerHello containing a cipher selected from among those advertised by the client. The field of this extension contains an \"EKTCipher\" value, encoded using the syntax defined in RFC5246: 5.2.2. Once a client and server have concluded a handshake that negotiated an EKT cipher, the server MUST provide to the client a key to be used when encrypting and decrypting EKTCiphertext values. EKT keys are sent in encrypted handshake records, using handshake type . The body of the handshake message contains an structure: [[ NOTE: RFC Editor, please replace \"TBD\" above with the code point assigned by IANA ]] The contents of the fields in this message are as follows: If the server did not provide a extension in its ServerHello, then EKTKey messages MUST NOT be sent by either the client or the server. When an EKTKey is received and processed successfully, the recipient MUST respond with an Ack handshake message as described in Section 7 of I-D.ietf-tls-dtls13. The EKTKey message and Ack must be retransmitted following the rules in Section 4.2.4 of RFC6347. 
Note: To be clear, EKT can be used with versions of DTLS prior to 1.3. The only difference is that pre-1.3 TLS stacks will not have built-in support for generating and processing Ack messages. If an EKTKey message is received that cannot be processed, then the recipient MUST respond with an appropriate DTLS alert. 5.3."} {"id": "q-en-perc-wg-142327eae773b6ea0c5d424a18696b25150f99d9c4832e8406de30313981f7c8", "old_text": "5.4. The DTLS ekt_key is sent using the retransmissions specified in Section 4.2.4. of DTLS RFC6347. 6.", "comments": "This is much more compatible with extensibility patterns in TLS stacks, and in a way that is very forward-compatible with the forthcoming DTLS 1.3, while still being implementable in pre-1.3 stacks. DTLS 1.3 has a notion of post-handshake handshake messages, like and , which are acknowledged with an message. The framework described here simply sends as yet another post-handshake message. Depends on\nr? NAME\nI have a bunch of comments but really I could live with this like you have it now.", "new_text": "5.4. The DTLS message is sent using the retransmissions specified in Section 4.2.4. of DTLS RFC6347. Retransmission finishes when an Ack message or an alert is received. 6."} {"id": "q-en-perc-wg-142327eae773b6ea0c5d424a18696b25150f99d9c4832e8406de30313981f7c8", "old_text": "7.3. IANA is requested to add srtp_ekt_key_transport as a new extension name to the \"ExtensionType Values\" table of the \"Transport Layer Security (TLS) Extensions\" registry with a reference to this specification and allocate a value of TBD to for this. Note to RFC Editor: TBD will be allocated by IANA. Considerations for this type of extension are described in Section 5 of RFC4366 and requires \"IETF Consensus\". 7.4. IANA is requested to add srtp_ekt_message as an new descriptions name to the \"TLS ContentType Registry\" table of the \"Transport Layer Security (TLS) Extensions\" registry with a reference to this specification, a DTLS-OK value of \"Y\", and allocate a value of TBD to for this content type. Note to RFC Editor: TBD will be allocated by IANA. This registry was defined in Section 12 of RFC5246 and requires \"Standards Action\".", "comments": "This is much more compatible with extensibility patterns in TLS stacks, and in a way that is very forward-compatible with the forthcoming DTLS 1.3, while still being implementable in pre-1.3 stacks. DTLS 1.3 has a notion of post-handshake handshake messages, like and , which are acknowledged with an message. The framework described here simply sends as yet another post-handshake message. Depends on\nr? NAME\nI have a bunch of comments but really I could live with this like you have it now.", "new_text": "7.3. IANA is requested to add as a new extension name to the \"ExtensionType Values\" table of the \"Transport Layer Security (TLS) Extensions\" registry with a reference to this specification and allocate a value of TBD to for this. [[ Note to RFC Editor: TBD will be allocated by IANA. ]] Considerations for this type of extension are described in Section 5 of RFC4366 and requires \"IETF Consensus\". 7.4. IANA is requested to add as a new entry in the \"TLS HandshakeType Registry\" table of the \"Transport Layer Security (TLS) Parameters\" registry with a reference to this specification, a DTLS-OK value of \"Y\", and allocate a value of TBD to for this content type. [[ Note to RFC Editor: TBD will be allocated by IANA. 
]] This registry was defined in Section 12 of RFC5246 and requires \"Standards Action\"."} {"id": "q-en-perc-wg-339d4e9fc518c8ef82d8c947883adfccf3a4c61ff8a52d5a88c9a06913b67556", "old_text": "a ciphertext value C with a length of N bytes. The decryption function returns a plaintext value P that is at least M bytes long, or returns an indication that the decryption operation failed because the ciphertext was invalid (i.e. it was not generated by the encryption of plaintext with the key K). These functions have the property that D(K, E(K, P)) = ( P concatenated with optional padding) for all values of K and P. Each cipher also has a limit T on the number of times that it can be used with any fixed key value. The EKTKey MUST NOT be used for encryption more that T times. Note that if the same EKT packet is retransmitted 3 times, that only counts as 1 encryption. Security requirements for EKT ciphers are discussed in sec.", "comments": "We're talking about the key wrap function here. Of course there's a requirement to remove padding, but it's explicitly clear in the AES Key Wrap RFCs. Seems like this text might be wrong now.\nThinking about this a bit, I think I will redo to just make it so decrypt has to remove the pad - I think this is way hold historic text that makes no sense given we use key wrap\nSection 2.3 The decryption function returns a plaintext value P that is at least M bytes long, or returns an indication that the decryption operation failed because the ciphertext was invalid (i.e. it was not generated by the encryption of plaintext with the key K). These functions have the property that D(K, E(K, P)) = ( P concatenated with optional padding) for all values of K and P. Each cipher also has a limit T on the number of times that it can be used with any fixed key value. The EKTKey MUST NOT be used more that T times.\nI think the goal was to avoid having a length indicator in the encrypted data as the stuff outside this knows the size (and thus how much pad to remove). Thoughts on what we should say here in the draft ? NAME NAME\nshould we clarify, how the code knows the length of the padding on the decryption side ?\nI'm unsure what the question is. I reviewed that section of the current draft and it sounds clear. Perhaps the confusion is over the padding? The AES Key Wrap logic defines how to insert and remove padding. You can see how that's implemented here: URL There is a \"with padding\" function that inserts required padding per the RFC. So, what's the question?\nNAME .. I think the question is on clarifying when the padding gets added and removed. Is it out of enc/dec or inside. We might just need to clarify tht\nIt's defined in the RFC. I don't think we should be trying to explain it again here.", "new_text": "a ciphertext value C with a length of N bytes. The decryption function returns a plaintext value P that is M bytes long, or returns an indication that the decryption operation failed because the ciphertext was invalid (i.e. it was not generated by the encryption of plaintext with the key K). These functions have the property that D(K, E(K, P)) = P for all values of K and P. Each cipher also has a limit T on the number of times that it can be used with any fixed key value. The EKTKey MUST NOT be used for encryption more that T times. Note that if the same EKT packet is retransmitted 3 times, that only counts as 1 encryption. 
Security requirements for EKT ciphers are discussed in sec."} {"id": "q-en-perc-wg-339d4e9fc518c8ef82d8c947883adfccf3a4c61ff8a52d5a88c9a06913b67556", "old_text": "other aspects of EKT processing. EKT ciphers are free to use this field in any way, but they SHOULD NOT use other EKT or SRTP fields as an input. The values of the parameters L, and T MUST be defined by each EKTCipher. 2.5.", "comments": "We're talking about the key wrap function here. Of course there's a requirement to remove padding, but it's explicitly clear in the AES Key Wrap RFCs. Seems like this text might be wrong now.\nThinking about this a bit, I think I will redo to just make it so decrypt has to remove the pad - I think this is way hold historic text that makes no sense given we use key wrap\nSection 2.3 The decryption function returns a plaintext value P that is at least M bytes long, or returns an indication that the decryption operation failed because the ciphertext was invalid (i.e. it was not generated by the encryption of plaintext with the key K). These functions have the property that D(K, E(K, P)) = ( P concatenated with optional padding) for all values of K and P. Each cipher also has a limit T on the number of times that it can be used with any fixed key value. The EKTKey MUST NOT be used more that T times.\nI think the goal was to avoid having a length indicator in the encrypted data as the stuff outside this knows the size (and thus how much pad to remove). Thoughts on what we should say here in the draft ? NAME NAME\nshould we clarify, how the code knows the length of the padding on the decryption side ?\nI'm unsure what the question is. I reviewed that section of the current draft and it sounds clear. Perhaps the confusion is over the padding? The AES Key Wrap logic defines how to insert and remove padding. You can see how that's implemented here: URL There is a \"with padding\" function that inserts required padding per the RFC. So, what's the question?\nNAME .. I think the question is on clarifying when the padding gets added and removed. Is it out of enc/dec or inside. We might just need to clarify tht\nIt's defined in the RFC. I don't think we should be trying to explain it again here.", "new_text": "other aspects of EKT processing. EKT ciphers are free to use this field in any way, but they SHOULD NOT use other EKT or SRTP fields as an input. The values of the parameters L, and T MUST be defined by each EKTCipher. The cipher MUST provide integrity protection. 2.5."} {"id": "q-en-qlog-f924ea51ea533ca4850162bd58f64df1869422c54732c3f34d302dce483a8f93", "old_text": "2.2. For several types of events, it is sometimes impossible to tie them to a specific conceptual QUIC connection (e.g., a packet_dropped event triggered because the packet has an unknown connection_id in the header). Since qlog events in a trace are typically associated with a single connection, it is unclear how to log these events. Ideally, implementers SHOULD create a separate, individual \"endpoint- level\" trace file (or group_id value), not associated with a specific connection (for example a \"server.qlog\" or group_id = \"client\"), and log all events that do not belong to a single connection to this grouping trace. However, this is not always practical, depending on the implementation. 
Because the semantics of most of these events are well-defined in the protocols and because they are difficult to mis-interpret as belonging to a connection, implementers MAY choose to log events not belonging to a particular connection in any other trace, even those strongly associated with a single connection. Note that this can make it difficult to match logs from different vantage points with each other. For example, from the client side, it is easy to log connections with version negotiation or retry in the same trace, while on the server they would most likely be logged in separate traces. Servers can take extra efforts (and keep additional state) to keep these events combined in a single trace however (for example by also matching connections on their four-tuple instead of just the connection ID). 3.", "comments": "NAME asked how to best approach logging from a server's perspective, given that things like version negotiation and stateless retry are not inherently tied to a single connection. We should add some informative guidance on how to best approach this to the spec, depending on how much state you're willing to keep around Some options: low state: keep a separate qlog file for the entire server. This logs vneg, retry, etc.. Then, when a connection is truly accepted, start a new .qlog for the individual connection, containing all events thereafter. The URL can then also contain an event signalling the acceptance of a new connection for later cross-linking between the files. low state: keep a single huge qlog file for everything, using the \"groupid\" field to allow later de-multiplexing into separate connections (I believe quant does this atm) stateful: if you already track vneg/retry and link them up with the final connection, you can output them in the per-connection qlog file as well Maybe also shortly talk about some of the trade-offs in each option. Also talk about how to approach server-level events like serverlistening and packet_dropped in separate scenarios.\nI lean on the side of minimally and clearly identifying the challenges and maybe signpost some features and leave it up to implementers to figure out how to address this. They can always push back later in the specification lifecycle is something is deemed missing.", "new_text": "2.2. A single qlog event trace is typically associated with a single QUIC connection. However, for several types of events (for example, a transport-packetdropped event with trigger value of \"connection_unknown\"), it can be impossible to tie them to a specific QUIC connection, especially on the server. There are various ways to handle these events, each making certain tradeoffs between file size overhead, flexibility, ease of use, or ease of implementation. Some options include: Log them in a separate endpoint-wide trace (or use a special group_id value) not associated with a single connection. Log them in the most recently used trace. Use additional heuristics for connection identification (for example use the four-tuple in addition to the Connection ID). Buffer events until they can be assigned to a connection (for example for version negotiation and retry events). 3."} {"id": "q-en-quic-v2-554fc2d8b29759fe6a69cacb5f81ee1faee7c422352d0c0496e5c39682f9571c", "old_text": "QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in QUIC, QUIC-TLS, and RFC9002, with the following changes: The version field of long headers is 0x709a50c4. 
The salt used to derive Initial keys in Section 5.2 of QUIC-TLS changes to: The labels used in QUIC-TLS to derive packet protection keys (Section Section 5.1 of QUIC-TLS), header protection keys (Section Section 5.4 of QUIC-TLS), Retry Integrity Tag keys (Section Section 5.8 of QUIC-TLS), and key updates (Section Section 6.1 of QUIC-TLS) change from \"quic key\" to \"quicv2 key\", from \"quic iv\" to \"quicv2 iv\", from \"quic hp\" to \"quicv2 hp\", and from \"quic ku\" to \"quicv2 ku\", to meet the guidance for new versions in Section Section 9.6 of QUIC-TLS of that document. The key and nonce used for the Retry Integrity Tag (Section 5.8 of QUIC-TLS) change to: 4.", "comments": "Also, add structure to the \"changes from v1\" section.\nIt would be a simple matter to switch around the packet type codes in long headers, e.g. 0x0 = Retry 0x1 = Handshake 0x2 = 0-RTT 0x3 = Initial so no one ossifies on 0x0 = Initial.\nI'm ambivalent on this. It's fairly trivial to have the long header parsed in version-specific fashion, so it's easy to implement, but I'm not sure how much value it has. It also makes reading packets slightly harder.", "new_text": "QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in QUIC, QUIC-TLS, and RFC9002, with the following changes. 3.1. The version field of long headers is 0x709a50c4. 3.2. Initial packets use a packet type field of 0b01. 0-RTT packets use a packet type field of 0b10. Handshake packets use a packet type field of 0b11. Retry packets use a packet type field of 0b00. 3.3. 3.3.1. The salt used to derive Initial keys in Section 5.2 of QUIC-TLS changes to: 3.3.2. The labels used in QUIC-TLS to derive packet protection keys (Section Section 5.1 of QUIC-TLS), header protection keys (Section Section 5.4 of QUIC-TLS), Retry Integrity Tag keys (Section Section 5.8 of QUIC-TLS), and key updates (Section Section 6.1 of QUIC-TLS) change from \"quic key\" to \"quicv2 key\", from \"quic iv\" to \"quicv2 iv\", from \"quic hp\" to \"quicv2 hp\", and from \"quic ku\" to \"quicv2 ku\", to meet the guidance for new versions in Section Section 9.6 of QUIC-TLS of that document. 3.3.3. The key and nonce used for the Retry Integrity Tag (Section 5.8 of QUIC-TLS) change to: 4."} {"id": "q-en-quic-v2-2ff77b0e937e85671e28075fb789fa90744875794f0f5cbcb14e84fca0895fe0", "old_text": "Finally QUIC-VN provides two mechanisms for endpoints to negotiate the QUIC version to use. The \"incompatible\" version negotiation method can support switching from any initial QUIC version to any other version with full generality, at the cost of an additional round-trip at the start of the connection. \"Compatible\" version negotiation eliminates the round-trip penalty but levies some restrictions on how much the two versions can differ semantically. QUIC version 2 is meant to mitigate ossification concerns and exercise the version negotiation mechanisms. The only change is a", "comments": "The correct term from the VN document would be \"original version\" but just removing the word initial avoids the confusion\nFewer words means fewer nits", "new_text": "Finally QUIC-VN provides two mechanisms for endpoints to negotiate the QUIC version to use. The \"incompatible\" version negotiation method can support switching from any QUIC version to any other version with full generality, at the cost of an additional round-trip at the start of the connection. 
\"Compatible\" version negotiation eliminates the round-trip penalty but levies some restrictions on how much the two versions can differ semantically. QUIC version 2 is meant to mitigate ossification concerns and exercise the version negotiation mechanisms. The only change is a"} {"id": "q-en-quicwg-base-drafts-bdaa428f75602673a7c8d269c27d55d24c300c14c375786e654248ecfa77c421", "old_text": "data, unless priorities specified by the application indicate otherwise (see stream-prioritization). Upon detecting losses, a sender MUST take appropriate congestion control action. The details of loss detection and congestion control are described in QUIC-RECOVERY.", "comments": "In we agreed on this. Considering that the decision affects handling of multiple frames, I think it's worth clarifying the general rule in the draft. For discussions regarding specific frames, please refer: MAXSTREAMS, MAXDATA, MAXSTREAMDATA - RESETSTREAM, STOPSENDING - ACK -\nIf you want to be explicit, perhaps also mention that stream ranges can be retransmitted with different boundaries or partially, if this isn't already covered by some existing text.\nNAME While that's possible, I think we should better handle retransmission rules of specific frames in separate PRs. The reason I suggest this having this catch-all sentence is because to me it seems that the concept is sometimes being lost.\nI agree that I'd like to discourage this a bit more than this PR does.\nNAME NAME Your comments make sense to me. I'll update the PR.\nMoved to 13.2, expanding the text to encourage reassembling frames containing up-to-date information. PTAL.\nThis is implicitly assumed, and, as NAME points out in , not doing so can cause undesired outcomes. There should be normative text in the transport draft requiring this behavior.\nSo the situation is that if one so far have ACKed: 1,2,3,6,7. Then if 4,5 shows up at the receiver and it is going to ACK those, it must at least send an ACK that contains 4,5 and 7, thus likely acking 4-7? It should not be allowed to send an ACK saying 4, 5 in the above situation as that doesn't indicate the receiver total progress, i.e. all the way to 7? I think requiring Largest Acknowledge to always indicate highest solves some issues and likely simplifies a lot of implementations. But it may result in larger ACK frames as one will have include the full current state between the largest Acknowledged and so far into the past PNs that you need to go. In severe re-ordering cases that could be a lot. Will this cause cause the ACK to be to large to include? Especially maybe in connections with a large delay bandwidth product going full steam encountering a serious hick up in the network. For example if an ACK is required to ack packet 4000 + 1E9 and you have 500 gaps that has arrived the last RTT/4 and really should be included. Resulting in an ACK frame that is at least 1000 bytes large, possibly bigger.\nThere are a few things this precludes, and we we should understand those before doing this. 1) A very limited implementation(ie: hardware) can't always send only one ACK block without giving up on reordered packets. I don't have such an implementation, but at some point I remember discussing this idea with people who seemed interested in it. 2) One cannot do multipath with QUIC v1 and it means the multipath design will have to use a PN space per path, as described in Christian's draft: URL I think that's likely the right choice for other reasons, but I'm hesitant to make that decision now. 
3) An RTT sample can only be taken based on the largest acked. Therefore, if there is a large amount of reordering(ie: 1RTT), you don't get any RTT samples for an RTT. If we care about this, we can probably solve it a different way in the recovery draft, but that likely means using ack delay sometimes and not others. In regards to Martin's comment, I thought we already had text that said ACKs with lower packet numbers than the largest ACK packet number should be ignored, since they're old?\nNAME regarding your question. \"In regards to Martin's comment, I thought we already had text that said ACKs with lower packet numbers than the largest ACK packet number should be ignored, since they're old?\" At least when it comes to ECN, the editors have been working hard to reformulate which has resulted in that there is nothing that applies this from the PN increasing for the ACKs. So to solve the ECN part, I think (re-)introducing that aspect into the text is the simpliest fix for ECN. But there clearly are other aspects here.\nNAME The problem doesn't go away if you ignore acks that are lower than largest_acked. The problem is that the receiver sent the acks in the wrong order because packets were received in the wrong order, but ECN counts are going up. But you raise good points. This is a corner case. I think that with the current spec, under these conditions, ECN gets turned off since validation fails. I think that's a reasonable outcome, and I'm fine with it. NAME\nECN Validation as I currently understand it will not fail if the sender to receiver reordering happens as long as the ACKs from the receiver to sender are not reordered. So to my understanding what is missing is a requirement for the sender to no process verfication for out of order ACKs. It will not fail if the largest acknowledge is not increasing as the comparison is done on number of newly ACKed packets versus counts change. The current sentence in Section 13.3.2 (-17): \"Counts MUST NOT be verified if the ACK frame does not increase the largest received packet number at the endpoint.\" Which I interpret to mean that what PN the ACK reference (sender-receiver direction), rather than the PN of the receiver to sender direction. Can NAME or NAME fix that?\nI think is enough to address the original problem, so I'd like to close this issue. I think and are enough. NAME sound ok?\nYes, the ECN related issue is resolved. What I don't know if it is resolved fully is the implementation requirements is that a sender MUST be capable of handling an ACK frame with Largest Acknowledged that is smaller than highest received? From my perspective there are good and valid arguments why this may be required, however I think consensus needs to be established on that.\nNAME I am not sure if I'm following the context, but my understanding is that a sender MUST be capable of that, because we allow an endpoint to simply retransmit the payload of a packet that is being deemed lost; see .\nJust a stray thought: is this requirement really strictly needed? It would complicate trasnmitters that split work into multiple processing units which might not be fully synced on the latest ACK'ed.\nNAME what NAME says. Also, a sender needs to be able to simply deal with reordering on the ack path. Did you mean something else?\nMy concern was primarily about the clearness of this behavior in the specification text. In transport I find the following which indicates this but are not explicit. 
: To limit receiver state or the size of ACK frames, a receiver MAY limit the number of ACK Ranges it sends. : Largest Acknowledged: A variable-length integer representing the largest packet number the peer is acknowledging; this is usually the largest packet number that the peer has received prior to generating the ACK frame. I think there are strong hints, with the above \"usually the largest\" indicating that it is not always. I would have made this explicit in 19.3, but then I prefer being more explicit than not. I think it is possible interop issue point. Anywhere else this is more explicit? (I have read github master version) NAME I think there are strong points for allowing an ACK frame to acknowledge only a part of the range between highest received and confirmed to be have been ACKed. The two main reasons I see. high rate flows suffering an packet loss burst so that the number of ACK ranges becomes to large to fit a single QUIC packet. Low complexity receivers ACKing a single range of received packets and doing this for reordered packets.\nNAME As suggested by NAME in , might be the correct place. I am advocating for describing the design principle the PR, but we might (or might not) be interested in expanding how ACK could look like.\nSo results in certain implications, that an ACK could arrive that addresses older state, thus resulting in acknowledging some PNs that was previously unacknowledged. As 13.2 is about retransmissions, I think there are other desing principle aspects of ACKs that probably should be covered, but then earlier in 13.1.\nWe're in the process of making our implementation accept smaller largest_acked values, so I'm wonder if we're any closer to consensus(one way or the other) on this issue?\nI think this is mostly addressed and what's left is OBE. There's text in the draft now that says: \"Processing counts out of order can result in verification failure. An endpoint SHOULD NOT perform this verification if the ACK frame is received in a packet with packet number lower than a previously received ACK frame. Verifying based on ACK frames that arrive out of order can result in disabling ECN unnecessarily.\" This would be adequate, but there's other text (thanks to ) that says old ACKs can be resent. I think we simply add a note about that in the para above, which should be adequate.\nDiscussed in London: NAME to prepare a PR for those remaining bits\nNAME any progress here?\nI think the conclusion here is close with no further action, but it'd be good for NAME to confirm.\nI've added , which I believe addresses what's left of this issue.\nNote that we agreed that proposal-ready would be used when there were approvals for PRs (or at least apparent agreement, as determined by editors).\nNAME : Yeah, my bad.\nThis gets fixed by , so marking this issue as well.\nFixed by .", "new_text": "data, unless priorities specified by the application indicate otherwise (see stream-prioritization). Even though a sender is encouraged to assemble frames containing up- to-date information every time it sends a packet, it is not forbidden to retransmit frames of lost packets as they are. A receiver MUST accept packets containing an outdated frame, such as a MAX_DATA frame carrying a smaller maximum data than one found in an older packet. Upon detecting losses, a sender MUST take appropriate congestion control action. 
The details of loss detection and congestion control are described in QUIC-RECOVERY."} {"id": "q-en-quicwg-base-drafts-3c2a751085dc998b9d2ce5be3fcc249352fb0ba7543d2688837f910f89bdda0b", "old_text": "QUIC Stream ID for a client-initiated bidirectional stream encoded as a variable-length integer. A client MUST treat receipt of a GOAWAY frame containing a Stream ID of any other type as a connection error of type HTTP_MALFORMED_FRAME. Clients do not need to send GOAWAY to initiate a graceful shutdown; they simply stop making new requests. A server MUST treat receipt of", "comments": "(Note that the issue identifies two issues, but one was already fixed in .)\nIn , NAME correctly observed that I messed up , but he closed his PR and I'm not able to reopen it without more git wizardry than I'm interested in practicing today. Easier just to fix it in my own branch.\nClients don't send GOAWAY and when a client sends one, the server treats it as HTTPUNEXPECTEDFRAME. By contrast, servers don't send MAXPUSHID but the client treats receiving one as HTTPMALFORMEDFRAME. By contrast, if GOAWAY is sent on !control it's HTTPUNEXPECTEDFRAME, but if MAXPUSHID is sent on !control, it's HTTPWRONGSTREAM. I think the answers here are HTTPUNEXPECTEDFRAME and HTTPWRONGSTREAM respectively, but we should be consistent.\nI think you're correct. There are some situations where newer errors would be more appropriate.", "new_text": "QUIC Stream ID for a client-initiated bidirectional stream encoded as a variable-length integer. A client MUST treat receipt of a GOAWAY frame containing a Stream ID of any other type as a connection error of type HTTP_WRONG_STREAM. Clients do not need to send GOAWAY to initiate a graceful shutdown; they simply stop making new requests. A server MUST treat receipt of"} {"id": "q-en-quicwg-base-drafts-3cbc90d334344cb0c2a13bd676f2eb8cd8237bbecec02927f2244670029a8ae0", "old_text": "definitions) is closed. A connection enters the draining state when the idle timeout expires. Each endpoint advertises its own idle timeout to its peer. The idle timeout starts from the last packet received. In order to ensure that initiating new activity postpones an idle timeout, an endpoint restarts this timer when sending a packet. An endpoint does not postpone the idle timeout if another packet has been sent containing frames other than ACK or PADDING, and that other packet has not been acknowledged or declared lost. Packets that contain only ACK or PADDING frames are not acknowledged until an endpoint has other frames to send, so they could prevent the timeout from being refreshed. The value for an idle timeout can be asymmetric. The value advertised by an endpoint is only used to determine whether the", "comments": "I find this pretty hard to read, but assuming I am reading it corectly, it basically says that if you read any packet or write any packet that has a frame other than ACK or padding, then you reset the timer. The text about \"other packet has not been acknowledged or lost\" doesn't make sense unless you have some very complicated timer scheme, because the packet won't be acknowledge or lost until after it is sent, at which point I have already reset the timer.\nYeah, this is opaque. I think that the logic is something like: when receiving a packet, restart the idle timer when sending an ACK-eliciting packet. if you haven't sent other ACK-eliciting packets since the idle timer was started, restart the idle timer That is, if you send 10 packets, only the first of those restarts the timer. 
This clearly needs to be reworded.\nI'd suggest one change to the second bullet, Martin. \"when sending an ACK-eliciting packet. if you haven't sent other ACK-eliciting packets since the idle timer was restarted due to a received packet, restart the idle timer\" And yes, that text is not clear.\nI like the look of the wording change in ac59e11. Is there something blocking it from being applied?\nNothing. I just neglected to open the pull request. That happens sometimes. Lots of changes in flight...\nI think this looks good, but NAME has more familiarity with this code than I do, so I'm adding him as well.", "new_text": "definitions) is closed. A connection enters the draining state when the idle timeout expires. Each endpoint advertises its own idle timeout to its peer. An endpoint restarts any timer it maintains when a packet from its peer is received and processed successfully. The timer is also restarted when sending a packet containing frames other than ACK or PADDING (an ACK-eliciting packet, see QUIC-RECOVERY), but only if no other ACK- eliciting packets have been sent since last receiving a packet. Restarting when sending packets ensures that connections do not prematurely time out when initiating new activity. The value for an idle timeout can be asymmetric. The value advertised by an endpoint is only used to determine whether the"} {"id": "q-en-quicwg-base-drafts-ef4a3e84aa7558333f920e9b0923eee8eb6091f8e6af7a59ccf0089b3560825f", "old_text": "result is smaller than the min_rtt, the RTT should be used, but the ack delay field should be ignored. A sender calculates both smoothed RTT and RTT variance similar to those specified in RFC6298, see on-ack-received. A sender takes an RTT sample when an ACK frame is received that acknowledges a larger packet number than before (see on-ack-", "comments": "In the recovery draft, RTT is briefly spelled out in section 5 but is used in several earlier sections. SRTT and PTO are not explained at all, it seems. TLP is spelled out. RTO has a section of its own but it is referenced before that. The recovery doc cannot be a tutorial on transmission theory, but it would be helpful to list the key acronyms and possibly reference an RFC with more details. There are already sections summarising key variables in pseudocode. There are references at the end, but the reader would have to go through them all.\nYes, thanks for the note. I'm planning to do a cleanup of that doc soon.\nI fixed defined RTT earlier in .", "new_text": "result is smaller than the min_rtt, the RTT should be used, but the ack delay field should be ignored. A sender calculates both smoothed RTT (SRTT) and RTT variance (RTTVAR) similar to those specified in RFC6298, see on-ack-received. A sender takes an RTT sample when an ACK frame is received that acknowledges a larger packet number than before (see on-ack-"} {"id": "q-en-quicwg-base-drafts-c12ed84ed5610d7777f48452de5ef78a61b253dc3fe1d4c8fff5056dc7ee35d4", "old_text": "available and unused connection IDs. While each endpoint independently chooses how many connection IDs to issue, endpoints SHOULD provide and maintain at least eight connection IDs. The endpoint SHOULD do this by always supplying a new connection ID when a connection ID is retired by its peer or when the endpoint receives a packet with a previously unused connection ID. Endpoints that initiate migration and require non-zero-length connection IDs SHOULD provide their peers with new connection IDs before migration, or risk the peer closing the connection. 
5.1.2.", "comments": "While we allow the consumer to start using new CIDs anytime, there are consequences of consuming them excessively, as discussed in starting from this comment. The text clarifies that excessive consumption is not what the issuer needs to support.\nNAME For HTTP3, it is natural to assume that the connections would not survive for very long time. In such case, capping the number of total connection IDs is a practical countermeasure against excessive issuance of CIDs. Implementations need not to be as complicated as capping the frequency.\nI don't want to put this in the draft. It tells endpoints not to change connection IDs based on some unspecified (scary!) constraints that may or may not be present in a peer. I realize that there might be a limited supply in some implementations, but artificial constraints on the rate of use is not the way I would address the problem. Refusing or slowing issuance of new connection IDs if someone retires too many is a better approach. In other words, implementations should deal with this on their own.\nNAME If that's the concern, do you think clarifying that the issue can \"refuse or slow issuance of new connection IDs\" in section 5.1.1 would be fine? The problem I have with current text is that section 5.1.1 only says that the issuer SHOULD issue new CIDs as the consumer retires them. I think we need to clarify that doing that without limitation could be dangerous.\nNAME Moved the clarification to the issuer side. PTAL.\nOne low-level observation: CID's need to be looked up in a hash table or similar. For a busy server multiplexing many connections, it want this table in fast cache, also to reject invalid CID's fast. You may be able to handle 5000 connections efficiently, but if you are required to handle 8x5000 this can degrade performance, especially on rejection. You can counter this by cryptographically mapping multiple CID's a single internal CID but that is not the simplest approach. Therefore, choosing a minimum number of CID's should be done with care. Would 2 or 3 not be sufficient for most use cases?\nNAME > Unfortunately, this is a design change, and a new one. just to be clear, are you referring to my comment on a limit of eight, or something earlier?\nNAME I think we should have a separate discussion around a sensible default. Part of the idea here was to make sure that we were very unlikely to run out, even around lost/retransmitted packets on different paths. Choosing a very low number brings with it other potential issues which probably should hold up this change?\nNAME yes, I realised this relates to maxconnectionids.\nI'm going with editorial then.\nWhile I can see why you'd limit the frequency of issuing new connection IDs, I don't think you should limit the number of total connection IDs per connection.Unfortunately, this is a design change, and a new one. From an abundance of caution, we'll need an issue for this, and chair assessment of consensus.I'm not convinced this is a design change -- there was previously a SHOULD, and this adds additional text (including a MAY) around when you might choose to violate the SHOULD. However, no mandatory behavior is added, changed, or removed.A few nits, but this looks good to me. That said, this issue will need consensus.", "new_text": "available and unused connection IDs. While each endpoint independently chooses how many connection IDs to issue, endpoints SHOULD provide and maintain at least eight connection IDs. 
The endpoint SHOULD do this by supplying a new connection ID when a connection ID is retired by its peer or when the endpoint receives a packet with a previously unused connection ID. However, it MAY limit the frequency or the total number of connection IDs issued for each connection to avoid the risk of running out of connection IDs (see reset-token). An endpoint that initiates migration and requires non-zero-length connection IDs SHOULD ensure that the pool of connection IDs available to its peer allows the peer to use a new connection ID on migration, as the peer will close the connection if the pool is exhausted. 5.1.2."} {"id": "q-en-quicwg-base-drafts-990675fc91f1c8c8b44006481ddc561b3d57b3e7364fb22bf377c058d5b3c723", "old_text": "Data is protected using a number of encryption levels: Plaintext Early Data (0-RTT) Keys", "comments": "I suspect these are uncontroversial.\nAll three Mart[ie]ns now agree, no further objections allowed.\nLGTM, with the one change altered.", "new_text": "Data is protected using a number of encryption levels: Initial Keys Early Data (0-RTT) Keys"} {"id": "q-en-quicwg-base-drafts-990675fc91f1c8c8b44006481ddc561b3d57b3e7364fb22bf377c058d5b3c723", "old_text": "using the KDF provided by TLS. In TLS 1.3, the HKDF-Expand-Label function described in Section 7.1 of TLS13 is used, using the hash function from the negotiated cipher suite. Other versions of TLS MUST provide a similar function in order to be used QUIC. The current encryption level secret and the label \"quic key\" are input to the KDF to produce the AEAD key; the label \"quic iv\" is used", "comments": "I suspect these are uncontroversial.\nAll three Mart[ie]ns now agree, no further objections allowed.\nLGTM, with the one change altered.", "new_text": "using the KDF provided by TLS. In TLS 1.3, the HKDF-Expand-Label function described in Section 7.1 of TLS13 is used, using the hash function from the negotiated cipher suite. Other versions of TLS MUST provide a similar function in order to be used with QUIC. The current encryption level secret and the label \"quic key\" are input to the KDF to produce the AEAD key; the label \"quic iv\" is used"} {"id": "q-en-quicwg-base-drafts-990675fc91f1c8c8b44006481ddc561b3d57b3e7364fb22bf377c058d5b3c723", "old_text": "in hexadecimal notation. Future versions of QUIC SHOULD generate a new salt value, thus ensuring that the keys are different for each version of QUIC. This prevents a middlebox that only recognizes one version of QUIC from seeing or modifying the contents of handshake packets from future versions. The HKDF-Expand-Label function defined in TLS 1.3 MUST be used for Initial packets even where the TLS versions offered do not include", "comments": "I suspect these are uncontroversial.\nAll three Mart[ie]ns now agree, no further objections allowed.\nLGTM, with the one change altered.", "new_text": "in hexadecimal notation. Future versions of QUIC SHOULD generate a new salt value, thus ensuring that the keys are different for each version of QUIC. This prevents a middlebox that only recognizes one version of QUIC from seeing or modifying the contents of packets from future versions. 
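The issuer-side bookkeeping described in this record (replace a connection ID when the peer retires one or first uses one, while optionally capping how many are issued per connection) could look roughly like the sketch below. This is a hypothetical illustration: the names CidIssuer, MAX_ACTIVE, and MAX_TOTAL, and the callback used to mint new IDs, are not from the drafts.

```python
MAX_ACTIVE = 8        # connection IDs the peer may hold at once
MAX_TOTAL = 1024      # lifetime cap so a connection cannot exhaust the CID space

class CidIssuer:
    def __init__(self, mint_cid):
        self.mint_cid = mint_cid    # callback that returns a fresh connection ID
        self.active = set()
        self.total_issued = 0

    def replenish(self):
        # Top the pool back up, but never beyond the per-connection cap.
        while len(self.active) < MAX_ACTIVE and self.total_issued < MAX_TOTAL:
            cid = self.mint_cid()
            self.active.add(cid)
            self.total_issued += 1
            # A real implementation would also queue a NEW_CONNECTION_ID frame here.

    def on_retire(self, cid):
        self.active.discard(cid)
        self.replenish()
```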
The HKDF-Expand-Label function defined in TLS 1.3 MUST be used for Initial packets even where the TLS versions offered do not include"} {"id": "q-en-quicwg-base-drafts-990675fc91f1c8c8b44006481ddc561b3d57b3e7364fb22bf377c058d5b3c723", "old_text": "The \"extension_data\" field of the quic_transport_parameters extension contains a value that is defined by the version of QUIC that is in use. The quic_transport_parameters extension carries a TransportParameters when the version of QUIC defined in QUIC- TRANSPORT is used. The quic_transport_parameters extension is carried in the ClientHello", "comments": "I suspect these are uncontroversial.\nAll three Mart[ie]ns now agree, no further objections allowed.\nLGTM, with the one change altered.", "new_text": "The \"extension_data\" field of the quic_transport_parameters extension contains a value that is defined by the version of QUIC that is in use. The quic_transport_parameters extension carries a TransportParameters struct when the version of QUIC defined in QUIC- TRANSPORT is used. The quic_transport_parameters extension is carried in the ClientHello"} {"id": "q-en-quicwg-base-drafts-f06d55dc2a101bd8b844852868c145f3b92c4fe98ba23e09080bf59e69051c71", "old_text": "Servers SHOULD NOT increase the QUIC MAX_STREAM_ID limit after sending a GOAWAY frame. Once GOAWAY is sent, the server MUST reject requests sent on streams with an identifier greater than or equal to the indicated last Stream ID. Clients MUST NOT send new requests on the connection after receiving GOAWAY, although requests might already be in transit. A new connection can be established for new requests. If the client has sent requests on streams with a Stream ID greater than or equal to that indicated in the GOAWAY frame, those requests are considered rejected (request-cancellation). Clients SHOULD cancel any requests on streams above this ID. Servers MAY also reject requests on streams below the indicated ID if these requests were not processed. Requests on Stream IDs less than the Stream ID in the GOAWAY frame might have been processed; their status cannot be known until they are completed successfully, reset individually, or the connection terminates. Servers SHOULD send a GOAWAY frame when the closing of a connection is known in advance, even if the advance notice is small, so that the", "comments": "Another attempt to group related concepts in this section which hopefully (This is technically design, because previously-required server behavior is now only recommended.)\nWhy do I have to explicitly cancel after GOAWAY? Given that GOAWAY already tells the client this, what's the point?\nGOAWAY tells the client that those requests aren't going to be serviced, but the streams will continue to exist. If the cleanup period is significant, having the streams open is a bad idea. However, attaching SHOULD to this is unnecessary, I think that we might instead just note that clients are free to do as they please with streams that won't be answered.\nDiscussed in Tokyo; this doesn't need to be required server behavior, but recommended (or RECOMMENDED) as a method to clean up state. Should have a reminder cross-reference to request rejection.\nLgtm now", "new_text": "Servers SHOULD NOT increase the QUIC MAX_STREAM_ID limit after sending a GOAWAY frame. Clients MUST NOT send new requests on the connection after receiving GOAWAY; a new connection MAY be established to send additional requests. Some requests might already be in transit. 
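The Initial key derivation discussed in this record (HKDF-Extract with the version-specific salt, then HKDF-Expand-Label with the "quic key" and "quic iv" labels) can be sketched as below. This is an illustrative sketch assuming SHA-256, the AEAD_AES_128_GCM key and IV sizes, and the "client in" label for the client's Initial secret; the salt value is deliberately left as a parameter and the helper names are invented for the example.

```python
import hmac, hashlib, struct

HASH = hashlib.sha256  # hash used for the Initial key schedule in this sketch

def hkdf_extract(salt, ikm):
    return hmac.new(salt, ikm, HASH).digest()

def hkdf_expand(prk, info, length):
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
        out += block
        counter += 1
    return out[:length]

def hkdf_expand_label(secret, label, context, length):
    # HkdfLabel structure from TLS 1.3: uint16 length, "tls13 " + label, context.
    full_label = b"tls13 " + label
    info = (struct.pack("!H", length) + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)

def client_initial_keys(initial_salt, client_dcid):
    # initial_salt is the version-specific salt mentioned above (value omitted);
    # client_dcid is the Destination Connection ID from the client's first Initial.
    initial_secret = hkdf_extract(initial_salt, client_dcid)
    client_secret = hkdf_expand_label(initial_secret, b"client in", b"", 32)
    key = hkdf_expand_label(client_secret, b"quic key", b"", 16)  # AEAD_AES_128_GCM key
    iv = hkdf_expand_label(client_secret, b"quic iv", b"", 12)
    return key, iv
```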
If the client has already sent requests on streams with a Stream ID greater than or equal to that indicated in the GOAWAY frame, those requests will not be processed and MAY be retried by the client on a different connection. The client MAY cancel these requests. It is RECOMMENDED that the server explicitly reject such requests (see request-cancellation) in order to clean up transport state for the affected streams. Requests on Stream IDs less than the Stream ID in the GOAWAY frame might have been processed; their status cannot be known until a response is received, the stream is reset individually, or the connection terminates. Servers MAY reject individual requests on streams below the indicated ID if these requests were not processed. Servers SHOULD send a GOAWAY frame when the closing of a connection is known in advance, even if the advance notice is small, so that the"} {"id": "q-en-quicwg-base-drafts-d439329a9e20b3db9571ba4cf2d6a0761a8b10189dd18c5ed2bb2d6008df6856", "old_text": "A client MUST NOT send a DUPLICATE_PUSH frame. A server MUST treat the receipt of a DUPLICATE_PUSH frame as a connection error of type HTTP_MALFORMED_FRAME. The DUPLICATE_PUSH frame carries a single variable-length integer that identifies the Push ID of a resource that the server has", "comments": "I believe UNEXPECTED_FRAME is the preferred type.\nI actually prefer the wording in duplicate push section and wonder if we should port that to PUSH_PROMISE too", "new_text": "A client MUST NOT send a DUPLICATE_PUSH frame. A server MUST treat the receipt of a DUPLICATE_PUSH frame as a connection error of type HTTP_UNEXPECTED_FRAME. The DUPLICATE_PUSH frame carries a single variable-length integer that identifies the Push ID of a resource that the server has"} {"id": "q-en-quicwg-base-drafts-397ac08677d02b86c7918ebbcaede8a301d486ab30e0c8ce169bf9581f55880a", "old_text": "can be mounted using spoofed source addresses. In determining this limit, servers only count the size of successfully processed packets. Clients MUST ensure that UDP datagrams containing Initial packets are sized to at least 1200 bytes, adding padding to packets in the datagram as necessary. Once a client has received an acknowledgment for a Handshake packet it MAY send smaller datagrams. Sending padded datagrams ensures that the server is not overly constrained by the amplification restriction. Packet loss, in particular loss of a Handshake packet from the server, can cause a situation in which the server cannot send when", "comments": "Builds on .\nInitial packets need to be in 1200 byte datagrams for DoS amplification reasons. We have an exception for the case where an ACK for a Handshake packet has been received from the peer. In that case, we remove the requirement to pad. That condition is extraordinarily obscure, especially if we are suggesting that endpoints not send any more Initial packets once they have Handshake keys. Removing this text is the right answer.\n+1 to the proposed resolution.", "new_text": "can be mounted using spoofed source addresses. In determining this limit, servers only count the size of successfully processed packets. Clients MUST ensure that UDP datagrams containing only Initial packets are sized to at least 1200 bytes, adding padding to packets in the datagram as necessary. Sending padded datagrams ensures that the server is not overly constrained by the amplification restriction. 
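The 1200-byte floor for client datagrams containing only Initial packets, discussed just above, amounts to a small piece of size accounting done before packet protection. The sketch below is illustrative only; in a real sender the padding is carried as PADDING frames inside a packet, and the function name and constant are invented here.

```python
MIN_INITIAL_DATAGRAM_SIZE = 1200  # floor for client datagrams carrying only Initial packets

def initial_padding_needed(datagram_len, contains_only_initial):
    # How many PADDING bytes to add (inside a packet, before protection) so the
    # resulting UDP datagram reaches the 1200-byte floor; zero otherwise.
    if not contains_only_initial:
        return 0
    return max(0, MIN_INITIAL_DATAGRAM_SIZE - datagram_len)
```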
Packet loss, in particular loss of a Handshake packet from the server, can cause a situation in which the server cannot send when"} {"id": "q-en-quicwg-base-drafts-16d278bf23957690cca64332ff4a627e7296d0f24fea631fb36dcef87136f91d", "old_text": "though 1-RTT keys are available to a server after receiving the first handshake messages from a client, the server cannot consider the client to be authenticated until it receives and validates the client's Finished message. The requirement for the server to wait for the client Finished message creates a dependency on that message being delivered. A", "comments": "I realise this may be a bit out of context for this PR, but I can't find any clear definition of when the handshake is complete other than this being magically spawned from the inner workings of TLS.\nPrevious discussions have established the CFIN is HoL blocking, partially because even though it's not strictly necessary, if client auth is in use, it's critical to receive the client's entire Handshake flight before the server processes 1RTT packets. I don't think the current TLS text is sufficiently clear(and normative) about that. Section 4.1.1 has: \"Important: Until the handshake is reported as complete, the connection and key exchange are not properly authenticated at the server. Even though 1-RTT keys are available to a server after receiving the first handshake messages from a client, the server cannot consider the client to be authenticated until it receives and validates the client's Finished message. The requirement for the server to wait for the client Finished message creates a dependency on that message being delivered. A client can avoid the potential for head-of-line blocking that this implies by sending a copy of the CRYPTO frame that carries the Finished message in multiple packets. This enables immediate server processing for those packets.\" URL\nI agree, perhaps this section needs to say something along the lines of:\nHow is this not a duplicate of ?\nNevermind, I can fix this easily.", "new_text": "though 1-RTT keys are available to a server after receiving the first handshake messages from a client, the server cannot consider the client to be authenticated until it receives and validates the client's Finished message. A server MUST NOT process 1-RTT packets until the handshake is complete. A server MAY buffer or discard 1-RTT packets that it cannot read. The requirement for the server to wait for the client Finished message creates a dependency on that message being delivered. A"} {"id": "q-en-quicwg-base-drafts-56b23823a110c3649c7861cdefb0a278ac86570e0e9dbe531b3ee5b7dd625170", "old_text": "contain frames. QUIC payloads MUST contain at least one frame, and MAY contain multiple frames and multiple frame types. Frames MUST fit within a single QUIC packet and MUST NOT span a QUIC packet boundary. Each frame begins with a Frame Type, indicating its type, followed by additional type-dependent fields: The frame types defined in this specification are listed in frame- types. The Frame Type in ACK, STREAM, MAX_STREAMS, STREAMS_BLOCKED,", "comments": "This was always intended (and also written in the description of FRAMEENCODINGERROR), but didn't show up in the text somehow.\nI agree with NAME about the MUSTs that you shuffled around are statements of fact, not interoperability requirements, but the addition is fine.", "new_text": "contain frames. QUIC payloads MUST contain at least one frame, and MAY contain multiple frames and multiple frame types. 
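The server-side rule quoted in this record, that 1-RTT packets are not processed until the handshake is complete and may be buffered or discarded in the meantime, is commonly implemented as a small queue, roughly as sketched below. The class and method names are illustrative, not from the drafts.

```python
class ServerConnection:
    def __init__(self):
        self.handshake_complete = False
        self.pending_1rtt = []

    def on_1rtt_packet(self, packet):
        if not self.handshake_complete:
            # The server MAY also simply discard packets it is not ready to process.
            self.pending_1rtt.append(packet)
            return
        self.process(packet)

    def on_handshake_complete(self):
        # Client Finished has been received and validated; drain the buffer.
        self.handshake_complete = True
        for packet in self.pending_1rtt:
            self.process(packet)
        self.pending_1rtt.clear()

    def process(self, packet):
        ...
```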
Frames MUST fit within a single QUIC packet and MUST NOT span a QUIC packet boundary. Each frame begins with a Frame Type, indicating its type, followed by additional type-dependent fields: The frame types defined in this specification are listed in frame- types. The Frame Type in ACK, STREAM, MAX_STREAMS, STREAMS_BLOCKED,"} {"id": "q-en-quicwg-base-drafts-56b23823a110c3649c7861cdefb0a278ac86570e0e9dbe531b3ee5b7dd625170", "old_text": "the frame. These frames are explained in more detail in frame- formats. All QUIC frames are idempotent in this version of QUIC. That is, a valid frame does not cause undesirable side effects or errors when received more than once.", "comments": "This was always intended (and also written in the description of FRAMEENCODINGERROR), but didn't show up in the text somehow.\nI agree with NAME about the MUSTs that you shuffled around are statements of fact, not interoperability requirements, but the addition is fine.", "new_text": "the frame. These frames are explained in more detail in frame- formats. An endpoint MUST treat the receipt of a frame of unknown type as a connection error of type FRAME_ENCODING_ERROR. All QUIC frames are idempotent in this version of QUIC. That is, a valid frame does not cause undesirable side effects or errors when received more than once."} {"id": "q-en-quicwg-base-drafts-f70e37a2d4a4d6a234352b8e39093092271f6428862061d0a445880fcb2a7264", "old_text": "connections. Servers MAY discard any Initial packet that does not carry the expected token. Unlike the token that is created for a Retry packet, there might be some time between when the token is created and when the token is subsequently used. Thus, a token SHOULD include an expiration time. The server MAY include either an explicit expiration time or an issued timestamp and dynamically calculate the expiration time. It is also unlikely that the client port number is the same on two different connections; validating the port is therefore unlikely to be successful. A token SHOULD be constructed for the server to easily distinguish it from tokens that are sent in Retry packets as they are carried in the same field. If the client has a token received in a NEW_TOKEN frame on a previous connection to what it believes to be the same server, it SHOULD include that value in the Token field of its Initial packet.", "comments": "\u2026are: require unlinkability by stating: \"SHOULD NOT expose linkability\" suggest use of encryption when embedding expiration time in the token\nThanks NAME !\nMy understanding is that tokens provided by NEW_TOKEN frames should not leak information that allows observers correlate the newly established connection with the previous connection. Such information include the time when the token was issued, RTT, server name (when SNI is not used or Encrypted SNI is used), or even the size of the token. However, the draft seems to be vague about the requirement. I think it might be worth clarifying the principle in the draft. \"a token SHOULD include an expiration time\" (section 8.1.2) - I think we should change this to \"a token SHOULD be associated an expiration time\", considering the fact that we are talking about \"stateful\" design (stateless design is mentioned briefly in one of the following paragraphs). In a stateful design, an opaque identifier is the only thing that a server should be allowed to send. 
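Frame parsing as described in this record starts with a variable-length integer Frame Type. For reference, QUIC's variable-length integer encoding uses the two most significant bits of the first byte to signal a 1-, 2-, 4-, or 8-byte encoding; a minimal decoder is sketched below (illustrative, not draft text).

```python
def decode_varint(buf, offset=0):
    # Top two bits of the first byte give the total length: 1, 2, 4, or 8 bytes.
    first = buf[offset]
    length = 1 << (first >> 6)
    value = first & 0x3F
    for i in range(1, length):
        value = (value << 8) | buf[offset + i]
    return value, offset + length

# A frame parser would repeatedly call decode_varint for the Frame Type and then
# the type-dependent fields, treating an unknown type as FRAME_ENCODING_ERROR.
```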
\"In a stateless design, a server can use encrypted and authenticated tokens to pass information to clients\" - I think we might prefer using \"SHOULD\" rather than \"can\".\nI think the current text is fine: \"include\" translates to \"put into the token and encrypt\" in my mind. Reading that a token is \"associated\" with an expiration time would give me pause.\nWould you mind clarifying where the \"and encrypt\" comes from? My point is that that is the way it should be, however the text does not seem to clarify that.\nI don't recall how I arrived at it -- perhaps some common sense. Now that I have implemented it, it seems only natural. Perhaps we should simply add \"and encrypt\" to the text instead of changing to \"associated with,\" which is less clear?\nI'm happy to know that we share the common sense :+1: As stated in the opening comment, my \"editorial\" preference goes to stating the principle that any information in an NEW_TOKEN token (other than the opaque lookup key used in a stateful design) SHOULD be encrypted. If that is to be met, I think adding \"and encrypt\" would be fine.\nI agree with NAME -- I think stating this explicitly as a SHOULD is sensible.\n(For the chairs, and triage purposes). This suggests a normative requirement that I don't think that we need, or that is already implied by existing text. There would be new normative text. I believe that is still editorial on the basis that the requirement is already implied, but will defer to your judgment.\nFWIW, what I am shooting for is a change like URL\nNAME and others, is kazuhoNAME kind of the change you were envisioning?\nI was asked to move here. I miss the point, that the server SHOULD NOT construct the same token multiple times because this leads to additional scenarios where tracking by a network observer becomes feasible. The referenced PR does not fix this problem.\nNAME I think we are in agreement that unlinkability between tokens is required. IMO the phrase \"SHOULD NOT expose linkability\" in the proposed text captures the concept, though I might agree that \"i.e., ...\" needs improvement.\nNAME NAME change would be adequate, and I agree with NAME that this is effectively editorial. Do the chairs want to mark this issue as ?\nNAME Yes, I agree with you on the phrase \"Should NOT expose likability\". To address my concern, I suggest to include \"This includes that the server constructs each time a different token when issuing tokens to clients from the same source address.\" after line 1560 of #kazuho/base-draftsNAME\nMarking this as editorial\nLooks good, modulo the minor suggestions.", "new_text": "connections. Servers MAY discard any Initial packet that does not carry the expected token. A token SHOULD be constructed for the server to easily distinguish it from tokens that are sent in Retry packets as they are carried in the same field. The token MUST NOT include information that would allow it to be linked by an on-path observer to the connection on which it was issued. For example, it cannot include the connection ID or addressing information unless the values are encrypted. Unlike the token that is created for a Retry packet, there might be some time between when the token is created and when the token is subsequently used. Thus, a token SHOULD have an expiration time, which could be either an explicit expiration time or an issued timestamp that can be used to dynamically calculate the expiration time. A server can store the expiration time or include it in an encrypted form in the token. 
It is unlikely that the client port number is the same on two different connections; validating the port is therefore unlikely to be successful. If the client has a token received in a NEW_TOKEN frame on a previous connection to what it believes to be the same server, it SHOULD include that value in the Token field of its Initial packet."} {"id": "q-en-quicwg-base-drafts-e08e62d67e74c26323a54cc3e01b8b16fd441addb39a362f1d35dd5da6a0ac80", "old_text": "Before a TLS ciphersuite can be used with QUIC, a header protection algorithm MUST be specified for the AEAD used with that ciphersuite. This document defines algorithms for AEAD_AES_128_GCM, AEAD_AES_128_CCM, AEAD_AES_256_GCM, AEAD_AES_256_CCM (all AES AEADs are defined in AEAD), and AEAD_CHACHA20_POLY1305 CHACHA. Prior to TLS selecting a ciphersuite, AES header protection is used (hp-aes), matching the AEAD_AES_128_GCM packet protection. 5.4.2.", "comments": "draft-ietf-quic-tls currently mentions AEADAES256CCM, however there is no way to use that AEAD in TLS 1.3 - TLSAES256CCMSHA256 does not appear in the . If I understand this correctly, the presence of AEADAES256CCM in the document was an editorial oversight.\nThanks. This was extremely confusing: you meant CCM, but wrote GCM in tons of places.", "new_text": "Before a TLS ciphersuite can be used with QUIC, a header protection algorithm MUST be specified for the AEAD used with that ciphersuite. This document defines algorithms for AEAD_AES_128_GCM, AEAD_AES_128_CCM, AEAD_AES_256_GCM (all AES AEADs are defined in AEAD), and AEAD_CHACHA20_POLY1305 CHACHA. Prior to TLS selecting a ciphersuite, AES header protection is used (hp-aes), matching the AEAD_AES_128_GCM packet protection. 5.4.2."} {"id": "q-en-quicwg-base-drafts-e08e62d67e74c26323a54cc3e01b8b16fd441addb39a362f1d35dd5da6a0ac80", "old_text": "5.4.3. This section defines the packet protection algorithm for AEAD_AES_128_GCM, AEAD_AES_128_CCM, AEAD_AES_256_GCM, and AEAD_AES_256_CCM. AEAD_AES_128_GCM and AEAD_AES_128_CCM use 128-bit AES AES in electronic code-book (ECB) mode. AEAD_AES_256_GCM, and AEAD_AES_256_CCM use 256-bit AES in ECB mode. This algorithm samples 16 bytes from the packet ciphertext. This value is used as the input to AES-ECB. In pseudocode:", "comments": "draft-ietf-quic-tls currently mentions AEADAES256CCM, however there is no way to use that AEAD in TLS 1.3 - TLSAES256CCMSHA256 does not appear in the . If I understand this correctly, the presence of AEADAES256CCM in the document was an editorial oversight.\nThanks. This was extremely confusing: you meant CCM, but wrote GCM in tons of places.", "new_text": "5.4.3. This section defines the packet protection algorithm for AEAD_AES_128_GCM, AEAD_AES_128_CCM, and AEAD_AES_256_GCM. AEAD_AES_128_GCM and AEAD_AES_128_CCM use 128-bit AES AES in electronic code-book (ECB) mode. AEAD_AES_256_GCM uses 256-bit AES in ECB mode. This algorithm samples 16 bytes from the packet ciphertext. This value is used as the input to AES-ECB. In pseudocode:"} {"id": "q-en-quicwg-base-drafts-9debc51186134a4fc233fa1228ac82fb69049a267539045a43d3e43f3e3f769b", "old_text": "8.1. The QUIC version negotiation mechanism is used to negotiate the version of QUIC that is used prior to the completion of the handshake. However, this packet is not authenticated, enabling an active attacker to force a version downgrade. To ensure that a QUIC version downgrade is not forced by an attacker, version information is copied into the TLS handshake, which provides integrity protection for the QUIC negotiation. 
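One way to satisfy the token requirements discussed in this record, namely an expiration time carried in encrypted form, nothing an on-path observer can link back to the issuing connection, and a leading type byte that distinguishes NEW_TOKEN tokens from Retry tokens, is sketched below. The layout, field sizes, and use of the Python 'cryptography' package are assumptions made for the example, not part of the drafts.

```python
import os, time, struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_token(key, client_ip):
    nonce = os.urandom(12)
    issued = struct.pack("!Q", int(time.time()))
    # The client address is bound via AAD, so it is checked at validation time
    # without ever appearing in the token itself.
    sealed = AESGCM(key).encrypt(nonce, issued, client_ip)
    # Type byte 0x01 marks this (hypothetically) as a NEW_TOKEN-issued token.
    return b"\x01" + nonce + sealed

def validate_token(key, token, client_ip, max_age):
    if len(token) < 13 or token[0] != 0x01:
        return False
    nonce, sealed = token[1:13], token[13:]
    try:
        issued, = struct.unpack("!Q", AESGCM(key).decrypt(nonce, sealed, client_ip))
    except Exception:
        return False
    return time.time() - issued <= max_age
```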
This does not prevent version downgrade prior to the completion of the handshake, though it means that a downgrade causes a handshake failure. QUIC requires that the cryptographic handshake provide authenticated protocol negotiation. TLS uses Application Layer Protocol Negotiation (ALPN) RFC7301 to select an application protocol. Unless", "comments": "I must have missed this when making the PR to remove the version negotiation text. Thanks NAME !\nI wasn\u2019t aware that there was an issue. Sorry for that!\nNo worries, thanks for doing this!\nWe can probably remove big chunks of section \"8.1. Protocol and Version Negotiation\" since we don't do version negotiation anymore in v1.\nFixed with . Thanks NAME", "new_text": "8.1. QUIC requires that the cryptographic handshake provide authenticated protocol negotiation. TLS uses Application Layer Protocol Negotiation (ALPN) RFC7301 to select an application protocol. Unless"} {"id": "q-en-quicwg-base-drafts-d7d2eaed0c9d16c2bb9d9e1dc74a6ad9615afc4f918a359514fb0db53bf5106d", "old_text": "have been lost or discarded by the server. A client MAY attempt to resend data in 0-RTT packets after it sends a new Initial packet. A client MUST NOT reset the packet number it uses for 0-RTT packets. The keys used to protect 0-RTT packets will not change as a result of responding to a Retry packet unless the client also regenerates the cryptographic handshake message. Sending packets with the same packet number in that case is likely to compromise the packet protection for all 0-RTT packets because the same key and nonce could be used to protect different content.", "comments": "As of , the client MUST use the same handshake message.\nNAME That's quite complicated, and I'm not sure if we've chosen the right design here. I'd like to keep this PR editorial though. I opened to discuss this.\nIn URL, NAME points out that after a Retry, all 0-RTT packets sent in the first flight are still valid, although they will probably have been dropped by the server (unless there's significant reordering). That means that a client SHOULD retransmit all 0-RTT packets in the second flight. We now have two different scenarios: In the case of a HelloRetryRequest, the client is prohibited from sending 0-RTT, and has to cancel retransmissions for all outstanding 0-RTT packets. In the case of Retry, the 0-RTT packets from the first flight are declared lost (without any effect on the cwnd though) and retransmitted. Considering that Retry adds a roundtrip anyway (and thereby limits the usefulness of 0-RTT), and that we only expect it to be used by a server that's under attack (which makes it likely to refuse 0-RTT later in the handshake), does it really make sense to send 0-RTT in that case?\nNAME I am not sure if I agree with the observation. The transport draft states that a server MAY send Retry packets in response to Initial and 0-RTT packets. A server can either discard or buffer 0-RTT packets that it receives (). Notice that the server is allowed to \"buffer\" 0-RTT packets. I am not sure if I like this ambiguity (because I think most if not all of us will be dropping 0-RTT packets that carry the original DCID when sending a Retry), but IMO the draft is clear.\nNAME thanks for pointing that out. My point is that we should change the draft (i.e. this is a design issue, not an editorial issue). 
The reason a server (or a LB) performs a Retry is that it either has reasons to suspect that the client isn't reachable at the 2-tuple it sent the packet from, or that it is under load, and either uses the Retry to redirect traffic to a different backend server, or just to buy itself some time Under both circumstances, a server wouldn't want to allocate additional state for buffering 0-RTT packets. I'm therefore suggesting to change the draft such that: A server that performs a Retry MUST discard 0-RTT packets. A client MUST NOT use 0-RTT after a Retry was performed. Does that make sense to you?\nYeah, I can envision a server that considers buffering some number of packets to be cheaper than doing a TLS handshake. For such servers, delaying the TLS handshake until client proves it's existence but buffering the 0-RTT packets in the meantime would make sense. That said, I'd be fine with this, assuming that the discarded 0-RTT packets are the ones that carry the original DCID. I would prefer not having this requirement. Once the client is known to exist, a server needs to communicate with the client. Use of 0-RTT helps reduce the server load, because when used, average lifetime of connections becomes 1-RTT less. That leads us to having less concurrent connections.\nWhen our implementation does Retry, it's because we've hit a memory pressure limit and we want to protect ourselves from attack. If a client supplies us with a valid token, we will accept 0-RTT still, but we won't buffer it from the previous Initial (without a token). So, if the client gets a Retry and still wants to do 0-RTT, they need to retransmit it, with the new CID, in a new packet (with a new packet number). I don't see why that design should be prevented.\nOk, I see that there's value in doing 0-RTT after a Retry, and furthermore, there seem to be good reasons (prevention of tampering by middle boxes) to disallow 0-RTT after a HelloRetryRequest in TLS 1.3, as NAME pointed out OOB. Since we can assume that most (all?) server implementation will drop 0-RTT packets when sending a Retry, the only reasonable thing to do for a client is to treat all 0-RTT packets as lost and retransmit them. Considering that, it would make sense to require servers to drop all Retry packets from the first flight, as NAME suggests.\nNAME Thank you for suggesting a path forward. I think my weak preference goes to something slightly different. I think we are in agreement that most if not all the servers would not buffer the 0-RTT packets carrying the original DCIDs when they send Retrys. It makes sense to acknowledge the fact in the specification and suggest clients to retransmit 0-RTT data it has already sent. What I am not sure is if there is a reason to forbid servers from buffering such 0-RTT packets. From client's point of view, sending 0-RTT data once per connection is an optimization. Retransmitting 0-RTT data when the server sends a Retry is further optimization. If we think that way, servers that buffer 0-RTT packets when sending a Retry has increased chance of utilizing 0-RTT data. Compared to that, there is no reason to mandate servers to drop 0-RTT packets. To summarize, I think all we need to clarify is that servers responding with Retries are likely to discard 0-RTT packets that carry the original DCIDs and therefore that retransmitting 0-RTT data makes sense. 
But nothing more.\nWhat if client takes action on Retry, and the old 0-RTT state lingers - wouldn't that make servers more vulnerable to DDoS attacks from packet sniffers?\nThis also bears on . If the DCID doesn't change, then the server won't be able to differentiate the first flight of 0-RTT packets, which you're proposing to drop, from the second flight of 0-RTT packets which it accepts. The current text says that the server MAY buffer and process them later; you're saying that servers probably won't. I think that's fine, but some might -- hence, I favor NAME suggestion to retain the permission to buffer but the caution to the client that the packets were quite possibly lost/dropped. In terms of actual spec change, I think that means this text in : ...becomes a SHOULD instead of a MAY.\nIt seems like you have sorted this out. I have no problem with using \"SHOULD\" over \"MAY\" here. Note that the reason we have this distinction is logical separation of the mechanism of Retry and 0-RTT acceptance. Retry strictly precedes the connection attempt. No point in blocking 0-RTT if you have to Retry. HelloRetryRequest is different in that certain characteristics it might cause to reset affect whether the 0-RTT works. Retry is required to use the same ClientHello, so there is no need to have the two interact; you can treat a Retry almost as a loss event for the first Initial and everything works fine.\nSo, the effect of not changing the handshake message is that all previously-sent 0-RTT data is still valid, but has potentially/probably been lost rather than queued. Clients obviously MUST NOT reuse the packet number for different data, but in their not-resetting, they also can't change the data that they've already sent in those packets. They'll need to retransmit it, because otherwise there's a potential DoS attack by delaying/replaying a 0-RTT packet by one RTT against a server that does Retries.This PR looks good to me as an editorial improvement. Regarding NAME point, the transport draft explicitly states that a server sending a Retry can buffer 0-RTT packets that carried the original DCID, and process them after receiving a response to the Retry. To put it differently, I think we are clear on the semantics and that there are ways to implement this without having security concerns. That said, I understand that NAME has opened a design issue () to discuss if that is the design we want to have.LGTM", "new_text": "have been lost or discarded by the server. A client MAY attempt to resend data in 0-RTT packets after it sends a new Initial packet. A client MUST NOT reset the packet number it uses for 0-RTT packets, since the keys used to protect 0-RTT packets will not change as a result of responding to a Retry packet. Sending packets with the same packet number in that case is likely to compromise the packet protection for all 0-RTT packets because the same key and nonce could be used to protect different content."} {"id": "q-en-quicwg-base-drafts-d7d2eaed0c9d16c2bb9d9e1dc74a6ad9615afc4f918a359514fb0db53bf5106d", "old_text": "from a packet number of 0. Thus, 0-RTT packets could need to use a longer packet number encoding. A client SHOULD instead generate a fresh cryptographic handshake message and start packet numbers from 0. This ensures that new 0-RTT packets will not use the same keys, avoiding any risk of key and nonce reuse; this also prevents 0-RTT packets from previous handshake attempts from being accepted as part of the connection. 
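As a rough illustration of the rule earlier in this record, that after a Retry the client keeps the same 0-RTT keys, must not reset its 0-RTT packet numbers, and will usually need to retransmit 0-RTT data that the server has probably dropped, client-side state might look like the sketch below. The names and structure are invented for the example.

```python
class ZeroRttSender:
    def __init__(self):
        self.next_pn = 0      # monotonic; never reset after a Retry (same keys are reused)
        self.sent = []        # (packet_number, frames) awaiting acknowledgement

    def send(self, frames, emit):
        emit(self.next_pn, frames)
        self.sent.append((self.next_pn, frames))
        self.next_pn += 1

    def on_retry(self, emit):
        # The server has most likely discarded the earlier 0-RTT packets, so the
        # data is re-sent, but under fresh, still-increasing packet numbers.
        pending, self.sent = self.sent, []
        for _old_pn, frames in pending:
            self.send(frames, emit)
```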
A client MUST NOT send 0-RTT packets once it starts processing 1-RTT packets from the server. This means that 0-RTT packets cannot contain any response to frames from 1-RTT packets. For instance, a", "comments": "As of , the client MUST use the same handshake message.\nNAME That's quite complicated, and I'm not sure if we've chosen the right design here. I'd like to keep this PR editorial though. I opened to discuss this.\nIn URL, NAME points out that after a Retry, all 0-RTT packets sent in the first flight are still valid, although they will probably have been dropped by the server (unless there's significant reordering). That means that a client SHOULD retransmit all 0-RTT packets in the second flight. We now have two different scenarios: In the case of a HelloRetryRequest, the client is prohibited from sending 0-RTT, and has to cancel retransmissions for all outstanding 0-RTT packets. In the case of Retry, the 0-RTT packets from the first flight are declared lost (without any effect on the cwnd though) and retransmitted. Considering that Retry adds a roundtrip anyway (and thereby limits the usefulness of 0-RTT), and that we only expect it to be used by a server that's under attack (which makes it likely to refuse 0-RTT later in the handshake), does it really make sense to send 0-RTT in that case?\nNAME I am not sure if I agree with the observation. The transport draft states that a server MAY send Retry packets in response to Initial and 0-RTT packets. A server can either discard or buffer 0-RTT packets that it receives (). Notice that the server is allowed to \"buffer\" 0-RTT packets. I am not sure if I like this ambiguity (because I think most if not all of us will be dropping 0-RTT packets that carry the original DCID when sending a Retry), but IMO the draft is clear.\nNAME thanks for pointing that out. My point is that we should change the draft (i.e. this is a design issue, not an editorial issue). The reason a server (or a LB) performs a Retry is that it either has reasons to suspect that the client isn't reachable at the 2-tuple it sent the packet from, or that it is under load, and either uses the Retry to redirect traffic to a different backend server, or just to buy itself some time Under both circumstances, a server wouldn't want to allocate additional state for buffering 0-RTT packets. I'm therefore suggesting to change the draft such that: A server that performs a Retry MUST discard 0-RTT packets. A client MUST NOT use 0-RTT after a Retry was performed. Does that make sense to you?\nYeah, I can envision a server that considers buffering some number of packets to be cheaper than doing a TLS handshake. For such servers, delaying the TLS handshake until client proves it's existence but buffering the 0-RTT packets in the meantime would make sense. That said, I'd be fine with this, assuming that the discarded 0-RTT packets are the ones that carry the original DCID. I would prefer not having this requirement. Once the client is known to exist, a server needs to communicate with the client. Use of 0-RTT helps reduce the server load, because when used, average lifetime of connections becomes 1-RTT less. That leads us to having less concurrent connections.\nWhen our implementation does Retry, it's because we've hit a memory pressure limit and we want to protect ourselves from attack. If a client supplies us with a valid token, we will accept 0-RTT still, but we won't buffer it from the previous Initial (without a token). 
So, if the client gets a Retry and still wants to do 0-RTT, they need to retransmit it, with the new CID, in a new packet (with a new packet number). I don't see why that design should be prevented.\nOk, I see that there's value in doing 0-RTT after a Retry, and furthermore, there seem to be good reasons (prevention of tampering by middle boxes) to disallow 0-RTT after a HelloRetryRequest in TLS 1.3, as NAME pointed out OOB. Since we can assume that most (all?) server implementation will drop 0-RTT packets when sending a Retry, the only reasonable thing to do for a client is to treat all 0-RTT packets as lost and retransmit them. Considering that, it would make sense to require servers to drop all Retry packets from the first flight, as NAME suggests.\nNAME Thank you for suggesting a path forward. I think my weak preference goes to something slightly different. I think we are in agreement that most if not all the servers would not buffer the 0-RTT packets carrying the original DCIDs when they send Retrys. It makes sense to acknowledge the fact in the specification and suggest clients to retransmit 0-RTT data it has already sent. What I am not sure is if there is a reason to forbid servers from buffering such 0-RTT packets. From client's point of view, sending 0-RTT data once per connection is an optimization. Retransmitting 0-RTT data when the server sends a Retry is further optimization. If we think that way, servers that buffer 0-RTT packets when sending a Retry has increased chance of utilizing 0-RTT data. Compared to that, there is no reason to mandate servers to drop 0-RTT packets. To summarize, I think all we need to clarify is that servers responding with Retries are likely to discard 0-RTT packets that carry the original DCIDs and therefore that retransmitting 0-RTT data makes sense. But nothing more.\nWhat if client takes action on Retry, and the old 0-RTT state lingers - wouldn't that make servers more vulnerable to DDoS attacks from packet sniffers?\nThis also bears on . If the DCID doesn't change, then the server won't be able to differentiate the first flight of 0-RTT packets, which you're proposing to drop, from the second flight of 0-RTT packets which it accepts. The current text says that the server MAY buffer and process them later; you're saying that servers probably won't. I think that's fine, but some might -- hence, I favor NAME suggestion to retain the permission to buffer but the caution to the client that the packets were quite possibly lost/dropped. In terms of actual spec change, I think that means this text in : ...becomes a SHOULD instead of a MAY.\nIt seems like you have sorted this out. I have no problem with using \"SHOULD\" over \"MAY\" here. Note that the reason we have this distinction is logical separation of the mechanism of Retry and 0-RTT acceptance. Retry strictly precedes the connection attempt. No point in blocking 0-RTT if you have to Retry. HelloRetryRequest is different in that certain characteristics it might cause to reset affect whether the 0-RTT works. Retry is required to use the same ClientHello, so there is no need to have the two interact; you can treat a Retry almost as a loss event for the first Initial and everything works fine.\nSo, the effect of not changing the handshake message is that all previously-sent 0-RTT data is still valid, but has potentially/probably been lost rather than queued. 
Clients obviously MUST NOT reuse the packet number for different data, but in their not-resetting, they also can't change the data that they've already sent in those packets. They'll need to retransmit it, because otherwise there's a potential DoS attack by delaying/replaying a 0-RTT packet by one RTT against a server that does Retries.This PR looks good to me as an editorial improvement. Regarding NAME point, the transport draft explicitly states that a server sending a Retry can buffer 0-RTT packets that carried the original DCID, and process them after receiving a response to the Retry. To put it differently, I think we are clear on the semantics and that there are ways to implement this without having security concerns. That said, I understand that NAME has opened a design issue () to discuss if that is the design we want to have.LGTM", "new_text": "from a packet number of 0. Thus, 0-RTT packets could need to use a longer packet number encoding. A client MUST NOT send 0-RTT packets once it starts processing 1-RTT packets from the server. This means that 0-RTT packets cannot contain any response to frames from 1-RTT packets. For instance, a"} {"id": "q-en-quicwg-base-drafts-eb5cdc664e016390a11c855a5fcbc2bb97b952e06536574f187093b39f5c89d0", "old_text": "compression-induced head-of-line blocking. See that document for additional details. An HTTP/3 implementation MAY impose a limit on the maximum size of the message header it will accept on an individual HTTP message. A server that receives a larger header field list than it is willing to", "comments": "This\nThe transport draft talks about QUIC being resilient to NAT rebindings. However, if a QUIC server is behind an L3 load balancer which simply routes based on 5-tuple, then connections to this server will (likely) not survive rebinding. I couldn't find any language which addressed whether such a deployment was \"OK\" or not. I think that since this load balance does not support NAT binding resilience, it is implicitly \"bad\" according to the draft, but others might disagree. In any case, I think there should be text to address this. Note, if the server advertised a preferredaddress which routed around the load balancer, and if clients were required to use this address, then that could obviously work. But preferredaddress support is a SHOULD, not a MUST.\nWe did discuss this issue. Use of empty connection IDs and demultiplexing based on source addresses. The conclusion there was that this was OK: people could do that, as long as it was clear that what they were getting was not different than TCP. That is, we made it clear that if you don't use the identifiers that you control (your addresses, the connection ID), then you accept that you can't handle migration at all and connections will drop (well, unless you do something like trial decryption, which has some obvious scaling issues). The outcome of that discussion was captured in . Assuming that Ryan finds this answer satisfactory (and I haven't missed anything), then I suggest we just close this as a duplicate of .\nI don't think this captures Ryan's concern: when the server is using a non-zero CID and also is behind 4-tuple routing. I think he's looking for a section of text to coherently describe what you have to do here: send either disablemigration or preferredaddress; or forward packets between servers; or either don't use a common stateless_reset key, or put client address/port in the reset token. 
This is sort of ops-drafty but there real requirements on servers to behave in a secure way. I think someone with full command of the transport draft would figure this out, but it is not clearly stated anywhere.\nI agree with Martin Duke and the language he suggests makes sense to me.\nThis does feel a bit ops-drafty to me, but I'd be happy to review a PR.\nNAME notes that: We need to normatively reference 7540 for this capability.\nSee also discussion thread starting at URL Hi, RFC7540 allows crumbling cookies for compression efficiency with HPACK. However, neither quic-http nor quic-qpack drafts mention cookies or crumbling. Is it considered safe for a QPACK implementation to crumble cookies before compression and concatenate them after decompression? If so, should the draft include clarification on this? Thanks, Bence", "new_text": "compression-induced head-of-line blocking. See that document for additional details. To allow for better compression efficiency, the cookie header field RFC6265 MAY be split into separate header fields, each with one or more cookie-pairs, before compression. If a decompressed header list contains multiple cookie header fields, these MUST be concatenated before being passed into a non-HTTP/2, non-HTTP/3 context, as described in HTTP2, Section 8.1.2.5. An HTTP/3 implementation MAY impose a limit on the maximum size of the message header it will accept on an individual HTTP message. A server that receives a larger header field list than it is willing to"} {"id": "q-en-quicwg-base-drafts-42c7f21da9890ba7f12eaf77a24b11bc85981c7ba959231b296a890147967a78", "old_text": "The most appropriate error code (error-codes) SHOULD be included in the frame that signals the error. Where this specification identifies error conditions, it also identifies the error code that is used. A stateless reset (stateless-reset) is not suitable for any error that can be signaled with a CONNECTION_CLOSE or RESET_STREAM frame.", "comments": "This just writes down what was proposed in .\nLooks fine, but what about some guidance on stream vs connection errors? And also the principle of nearly always choosing a hard protocol violation error, or a more granular error with same effect?\nStream vs. connection is something that application protocols (like HTTP) need to worry about. The transport only concerns itself with connection errors. The idea that you pick the most helpful error is not something that we need to codify (in my view).\nIsn't that worth mentioning? Or do we already have such text?\nThis is already discussed in Section 11, which makes me wonder if it would make more sense for text on this PR to be placed there too.\nThis I do not disagree with. I was merely suggesting that we explain why, as a general rule, a hard error is preferred as a design principle, not the nature of the error message which I do think should have a large degree of freedom.\nMaybe there is a followup needed for the HTTP draft. If the principles are generally the same, we can copy the text and then add some more to cover the idea that we make most errors fatal to the connection rather than the stream.\nI like Lucas suggestion of adding this text to the existing and already fairly complete text in section 11.\nI moved the second paragraph up to the error handling section, but the text about how codes are selected really belongs with the codes themselves.\nWe don't really have any real principles that we agree on for deciding what error codes we are describing. 
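The cookie handling added in this record (splitting the cookie header into individual cookie-pairs for better compression, then concatenating multiple cookie fields before the message leaves an HTTP/2- or HTTP/3-aware context) is mechanically simple. An illustrative sketch, with invented function names:

```python
def crumble_cookie(value):
    # Split a cookie header value into separate cookie-pairs for compression.
    return [pair.strip() for pair in value.split(";") if pair.strip()]

def reassemble_cookie(fields):
    # Concatenate multiple cookie header fields with "; " before passing the
    # message to a non-HTTP/2, non-HTTP/3 context.
    return "; ".join(fields)

# Example:
#   crumble_cookie("a=1; b=2")        -> ["a=1", "b=2"]
#   reassemble_cookie(["a=1", "b=2"]) -> "a=1; b=2"
```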
Proposal: if the error carries distinct semantics (like stream rejection in HTTP), then it gets a new error code if the error is frequent or particularly significant, then it gets a new error code otherwise, target a more generic error code that identifies the broad area of the protocol finally, if there is no more specific applicable error code, use PROTOCOL_VIOLATION\nis example of an issue discussed where it would have been useful to have principles.", "new_text": "The most appropriate error code (error-codes) SHOULD be included in the frame that signals the error. Where this specification identifies error conditions, it also identifies the error code that is used; though these are worded as requirements, different implementation strategies might lead to different errors being reported. In particular, an endpoint MAY use any applicable error code when it detects an error condition; a generic error code (such as PROTOCOL_VIOLATION or INTERNAL_ERROR) can always be used in place of specific error codes. A stateless reset (stateless-reset) is not suitable for any error that can be signaled with a CONNECTION_CLOSE or RESET_STREAM frame."} {"id": "q-en-quicwg-base-drafts-42c7f21da9890ba7f12eaf77a24b11bc85981c7ba959231b296a890147967a78", "old_text": "See iana-error-codes for details of registering new error codes. 20.1. Application protocol error codes are 62-bit unsigned integers, but", "comments": "This just writes down what was proposed in .\nLooks fine, but what about some guidance on stream vs connection errors? And also the principle of nearly always choosing a hard protocol violation error, or a more granular error with same effect?\nStream vs. connection is something that application protocols (like HTTP) need to worry about. The transport only concerns itself with connection errors. The idea that you pick the most helpful error is not something that we need to codify (in my view).\nIsn't that worth mentioning? Or do we already have such text?\nThis is already discussed in Section 11, which makes me wonder if it would make more sense for text on this PR to be placed there too.\nThis I do not disagree with. I was merely suggesting that we explain why, as a general rule, a hard error is preferred as a design principle, not the nature of the error message which I do think should have a large degree of freedom.\nMaybe there is a followup needed for the HTTP draft. If the principles are generally the same, we can copy the text and then add some more to cover the idea that we make most errors fatal to the connection rather than the stream.\nI like Lucas suggestion of adding this text to the existing and already fairly complete text in section 11.\nI moved the second paragraph up to the error handling section, but the text about how codes are selected really belongs with the codes themselves.\nWe don't really have any real principles that we agree on for deciding what error codes we are describing. Proposal: if the error carries distinct semantics (like stream rejection in HTTP), then it gets a new error code if the error is frequent or particularly significant, then it gets a new error code otherwise, target a more generic error code that identifies the broad area of the protocol finally, if there is no more specific applicable error code, use PROTOCOL_VIOLATION\nis example of an issue discussed where it would have been useful to have principles.", "new_text": "See iana-error-codes for details of registering new error codes. In defining these error codes, several principles are applied. 
Error conditions that might require specific action on the part of a recipient are given unique codes. Errors that represent common conditions are given specific codes. Absent either of these conditions, error codes are used to identify a general function of the stack, like flow control or transport parameter handling. Finally, generic errors are provided for conditions where implementations are unable or unwilling to use more specific codes. 20.1. Application protocol error codes are 62-bit unsigned integers, but"} {"id": "q-en-quicwg-base-drafts-a61b94d229cf00a768023aee966e5f390de6754d99c538eee4fb914931bbfb10", "old_text": "process performed at the beginning of the connection SHOULD be applied for all paths used by the connection. In case multiple connections share the same network path, as determined by having the same source and destination IP address and UDP ports, endpoints should try to co-ordinate across all connections to ensure a clear signal to any on-path measurement points. When the spin bit is disabled, endpoints MAY set the spin bit to any value, and MUST ignore any incoming value. It is RECOMMENDED that endpoints set the spin bit to a random value either chosen", "comments": "As discussed, this is a) hard to do in all cases, b) impossible to do in many, and c) not that useful.\ngood\nWhen multiple connections share the same network path or multiple connection IDs are used for the same path, using the spin bit is complicated by the difficulty in correlating forward and reverse flows. There is currently text that uses RFC 6919 language in the draft recommending coordination of the spin to avoid this problem. However, as , this coordination seems difficult and therefore unlikely. It also creates a strong signal that indicates that all the connections on the path are terminated at the same endpoint on both ends. Maybe we don't want to be generating that signal. Also, because this coordination can't be guaranteed, the on-path observer is forced to build code for correlating/segregating measurements by connection ID in the case where there is no coordination. Maybe it's best just to remove this recommendation entirely.\n+1 to removing the recommendation, or changing the statement to just a caution (e.g., if you are to coalesce connections that spin, then you'd better be aware of ...). FWIW I am not sure if I agree with this observation. IIUC, the connections sharing a 5-tuple will be terminated by the same endpoint on both ends; it is impossible for a middlebox to coalesce multiple QUIC connections. It is impossible because CIDs can change mid-connection, and a middlebox cannot determine to where the packets containing the new CID must be sent.\nThere definitely needs to be a degree of coordination between the middlebox and the endpoints, but that doesn't mean that you can't use something like the quic-lb stuff on either end (or both ends).\nYes, I think it is better to remove that recommendation, but perhaps replace it with a note to observers.", "new_text": "process performed at the beginning of the connection SHOULD be applied for all paths used by the connection. When the spin bit is disabled, endpoints MAY set the spin bit to any value, and MUST ignore any incoming value. 
It is RECOMMENDED that endpoints set the spin bit to a random value either chosen"} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "Indicates that x is A bits long Indicates that x uses the prefixed integer encoding defined in Section 5.1 of RFC7541, beginning with an A-bit prefix. Indicates that x is variable-length and extends to the end of the region.", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "Indicates that x is A bits long Indicates that x uses the prefixed integer encoding defined in prefixed-integers, beginning with an A-bit prefix. Indicates that x is variable-length and extends to the end of the region."} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "An encoder informs the decoder of a change to the dynamic table capacity using an instruction which begins with the '001' three-bit pattern. The new dynamic table capacity is represented as an integer with a 5-bit prefix (see Section 5.1 of RFC7541). The new capacity MUST be lower than or equal to the limit described in maximum-dynamic-table-capacity. In HTTP/3, this limit is the", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "An encoder informs the decoder of a change to the dynamic table capacity using an instruction which begins with the '001' three-bit pattern. This is followed by the new dynamic table capacity represented as an integer with a 5-bit prefix (see prefixed- integers). The new capacity MUST be lower than or equal to the limit described in maximum-dynamic-table-capacity. In HTTP/3, this limit is the"} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "or the dynamic table using an instruction that starts with the '1' one-bit pattern. The second (\"S\") bit indicates whether the reference is to the static or dynamic table. The 6-bit prefix integer (see Section 5.1 of RFC7541) that follows is used to locate the table entry for the header name. When S=1, the number represents the static table index; when S=0, the number is the relative index of the entry in the dynamic table. 
The header name reference is followed by the header field value represented as a string literal (see Section 5.2 of RFC7541). 4.3.3. An encoder adds an entry to the dynamic table where both the header field name and the header field value are represented as string literals (see primitives) using an instruction that starts with the '01' two-bit pattern. The name is represented as a 6-bit prefix string literal, while the value is represented as an 8-bit prefix string literal. 4.3.4. An encoder duplicates an existing entry in the dynamic table using an instruction that starts with the '000' three-bit pattern. The relative index of the existing entry is represented as an integer with a 5-bit prefix. The existing entry is re-inserted into the dynamic table without resending either the name or the value. This is useful to mitigate", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "or the dynamic table using an instruction that starts with the '1' one-bit pattern. The second (\"S\") bit indicates whether the reference is to the static or dynamic table. The 6-bit prefix integer (see prefixed-integers) that follows is used to locate the table entry for the header name. When S=1, the number represents the static table index; when S=0, the number is the relative index of the entry in the dynamic table. The header name reference is followed by the header field value represented as a string literal (see string-literals). 4.3.3. An encoder adds an entry to the dynamic table where both the header field name and the header field value are represented as string literals using an instruction that starts with the '01' two-bit pattern. This is followed by the name represented as a 6-bit prefix string literal, and the value represented as an 8-bit prefix string literal (see string-literals). 4.3.4. An encoder duplicates an existing entry in the dynamic table using an instruction that begins with the '000' three-bit pattern. This is followed by the relative index of the existing entry represented as an integer with a 5-bit prefix (see prefixed-integers. The existing entry is re-inserted into the dynamic table without resending either the name or the value. This is useful to mitigate"} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "After processing a header block whose declared Required Insert Count is not zero, the decoder emits a Header Acknowledgement instruction. The instruction begins with the '1' one-bit pattern and includes the header block's associated stream ID, encoded as a 7-bit prefix integer. It is used by the peer's encoder to know when it is safe to evict an entry (blocked-insertion), and possibly update the Known Received Count (known-received-count). 
If an encoder receives a Header Acknowledgement instruction referring to a stream on which every header block with a non-zero Required", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "After processing a header block whose declared Required Insert Count is not zero, the decoder emits a Header Acknowledgement instruction. The instruction begins with the '1' one-bit pattern which is followed by the header block's associated stream ID encoded as a 7-bit prefix integer (see prefixed-integers). This instruction is used as described in known-received-count and in state-synchronization. If an encoder receives a Header Acknowledgement instruction referring to a stream on which every header block with a non-zero Required"} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "When a stream is reset or reading is abandoned, the decoder emits a Stream Cancellation instruction. The instruction begins with the '01' two-bit pattern. The instruction includes the stream ID of the affected stream encoded as a 6-bit prefix integer. See state- synchronization. 4.4.3. The Insert Count Increment instruction begins with the '00' two-bit pattern. The instruction specifies the total number of dynamic table inserts and duplications since the last Insert Count Increment or Header Acknowledgement that increased the Known Received Count for the dynamic table (see known-received-count). The Increment field is encoded as a 6-bit prefix integer. The encoder uses this value to determine which table entries might cause a stream to become blocked, as described in state-synchronization. An encoder that receives an Increment field equal to zero or one that increases the Known Received Count beyond what the encoder has sent", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "When a stream is reset or reading is abandoned, the decoder emits a Stream Cancellation instruction. The instruction begins with the '01' two-bit pattern, which is followed by the stream ID of the affected stream encoded as a 6-bit prefix integer. This instruction is used as described in state-synchronization. 4.4.3. 
The Insert Count Increment instruction begins with the '00' two-bit pattern, followed by the Increment encoded as a 6-bit prefix integer. The value of the Increment is the total number of dynamic table insertions and duplications processed by the decoder since the last time it sent a Header Acknowledgement instruction that increased the Known Received Count (see known-received-count) or an Insert Count Increment instruction. The encoder uses this value to update the Known Received Count, as described in state-synchronization. An encoder that receives an Increment field equal to zero or one that increases the Known Received Count beyond what the encoder has sent"} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "4.5. Header blocks contain compressed representations of header lists and are carried in frames on streams defined by the enclosing protocol. These representations reference the static table, or dynamic table in a particular state without modifying it. 4.5.1.", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "4.5. A header block consists of a prefix and a possibly empty sequence of representations defined in this section. Each representation corresponds to a single header field. These representations reference the static table or the dynamic table in a particular state, but do not modify that state. Header blocks are carried in frames on streams defined by the enclosing protocol. 4.5.1."} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "absolute index less than the Base, this representation starts with the '1' 1-bit pattern, followed by the \"S\" bit indicating whether the reference is into the static or dynamic table. The 6-bit prefix integer (see Section 5.1 of RFC7541) that follows is used to locate the table entry for the header name. When S=1, the number represents the static table index; when S=0, the number is the relative index of the entry in the dynamic table.", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. 
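[Editor's illustrative note] To make the Insert Count Increment arithmetic in this record concrete, here is a small decoder-side bookkeeping sketch. The class and method names are invented for illustration and do not come from any draft or implementation.

    class DecoderInsertCounter:
        # Decoder-side bookkeeping for the Insert Count Increment instruction.
        def __init__(self):
            self.total_inserts = 0     # insertions + duplications processed so far
            self.reported_inserts = 0  # portion the peer encoder already knows about

        def on_insert_or_duplicate(self):
            self.total_inserts += 1

        def next_increment(self):
            # Value to carry in the next Insert Count Increment instruction:
            # inserts processed since the Known Received Count last advanced.
            return self.total_inserts - self.reported_inserts

        def on_increment_sent(self, increment):
            self.reported_inserts += increment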
Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "absolute index less than the Base, this representation starts with the '1' 1-bit pattern, followed by the \"S\" bit indicating whether the reference is into the static or dynamic table. The 6-bit prefix integer (see prefixed-integers) that follows is used to locate the table entry for the header field. When S=1, the number represents the static table index; when S=0, the number is the relative index of the entry in the dynamic table."} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "than or equal to the Base, the representation starts with the '0001' 4-bit pattern, followed by the post-base index (see post-base) of the matching header field, represented as an integer with a 4-bit prefix (see Section 5.1 of RFC7541). 4.5.4.", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "than or equal to the Base, the representation starts with the '0001' 4-bit pattern, followed by the post-base index (see post-base) of the matching header field, represented as an integer with a 4-bit prefix (see prefixed-integers). 4.5.4."} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "absolute index less than the Base, this representation starts with the '01' two-bit pattern. Only the header field name stored in the static or dynamic table is used. Any header field value MUST be ignored. The following bit, 'N', indicates whether an intermediary is permitted to add this header to the dynamic header table on subsequent hops. When the 'N' bit is set, the encoded header MUST", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "absolute index less than the Base, this representation starts with the '01' two-bit pattern. The following bit, 'N', indicates whether an intermediary is permitted to add this header to the dynamic header table on subsequent hops. 
When the 'N' bit is set, the encoded header MUST"} {"id": "q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae", "old_text": "header field with the 'N' bit set, it MUST use a literal representation to forward this header field. This bit is intended for protecting header field values that are not to be put at risk by compressing them (see Section 7.1 of RFC7541 for more details). The fourth (\"S\") bit indicates whether the reference is to the static or dynamic table. The 4-bit prefix integer (see Section 5.1 of RFC7541) that follows is used to locate the table entry for the header name. When S=1, the number represents the static table index; when S=0, the number is the relative index of the entry in the dynamic table. 4.5.5. If the name entry is in the dynamic table with an absolute index greater than or equal to the Base, the representation starts with the '0000' four-bit pattern. The fifth bit is the 'N' bit as described in literal-name-reference. Finally, the header field name is represented using the post-base index of that entry (see post-base) encoded as an integer with a 3-bit prefix. 4.5.6. An addition to the header table where both the header field name and the header field value are represented as string literals (see primitives) starts with the '001' three-bit pattern. The fourth bit, 'N', indicates whether an intermediary is permitted to add this header to the dynamic header table on subsequent hops. When the 'N' bit is set, the encoded header MUST always be encoded with a literal representation. In particular, when a peer sends a header field that it received represented as a literal header field with the 'N' bit set, it MUST use a literal representation to forward this header field. This bit is intended for protecting header field values that are not to be put at risk by compressing them (see Section 7.1 of RFC7541 for more details). The name is represented as a 4-bit prefix string literal, while the value is represented as an 8-bit prefix string literal. 5.", "comments": "Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further.", "new_text": "header field with the 'N' bit set, it MUST use a literal representation to forward this header field. This bit is intended for protecting header field values that are not to be put at risk by compressing them (see security-considerations for more details). The fourth (\"S\") bit indicates whether the reference is to the static or dynamic table. The 4-bit prefix integer (see prefixed-integers) that follows is used to locate the table entry for the header name. When S=1, the number represents the static table index; when S=0, the number is the relative index of the entry in the dynamic table. Only the header field name is taken from the dynamic table entry; the header field value is encoded as an 8-bit prefix string literal (see string-literals). 4.5.5. 
A literal header field with post-base name reference represents a header field where the name matches the header field name of a dynamic table entry with an absolute index greater than or equal to the Base. This representation starts with the '0000' four-bit pattern. The fifth bit is the 'N' bit as described in literal-name-reference. This is followed by a post-base index of the dynamic table entry (see post-base) encoded as an integer with a 3-bit prefix (see prefixed- integers). Only the header field name is taken from the dynamic table entry; the header field value is encoded as an 8-bit prefix string literal (see string-literals). 4.5.6. The literal header field without name reference representation encodes a header field name and header field value as string literals. This representation begins with the '001' three-bit pattern. The fourth bit is the 'N' bit as described in literal-name-reference. The name follows, represented as a 4-bit prefix string literal, then the value, represented as an 8-bit prefix string literal (see string- literals). 5."} {"id": "q-en-quicwg-base-drafts-6ae837c0cee281f1b985db913bc5da95c058a7626027f330e055a30e4d193118", "old_text": "RETIRE_CONNECTION_ID frames and retransmitted if the packet containing them is lost. PING and PADDING frames contain no information, so lost PING or PADDING frames do not require repair.", "comments": "Is that not implied by the text here?\nKind of, but this text talks about the sender's perspective. Furthermore, checking for duplicates is inherently racy if you're using tokens for new connections while the old connection is still alive.\nYou know, I think that I put some proposed fixes for this in c016639e39c with rather than this branch. Moved it over here. NAME you approved , so I hope that moving that commit over here works for you.\nSec 13.2 of QUIC Transport explains what to do for the loss of a packet for every stream type except NEW_TOKEN. IMO it's nice but not critical to repair this loss, but I care less about how we resolve it than that we write it up.\nSomewhat related question: Does this section fit better in the recovery draft?\nI don't think so. This talks about the repair semantics of these frames, not the process of detecting loss. Recovery doesn't care about the distinction between PING and STREAM when it comes to repair.\nLGTM, but do we need some text saying that when receiving a NEW_TOKEN frame, you MUST make sure that it's not a duplicate? Otherwise, a client might just save the token twice, and effectively reuse the token.", "new_text": "RETIRE_CONNECTION_ID frames and retransmitted if the packet containing them is lost. NEW_TOKEN frames are retransmitted if the packet containing them is lost. No special support is made for detecting reordered and duplicated NEW_TOKEN frames other than a direct comparison of the frame contents. PING and PADDING frames contain no information, so lost PING or PADDING frames do not require repair."} {"id": "q-en-quicwg-base-drafts-6ae837c0cee281f1b985db913bc5da95c058a7626027f330e055a30e4d193118", "old_text": "An opaque blob that the client may use with a future Initial packet. 19.8. STREAM frames implicitly create a stream and carry stream data. The", "comments": "Is that not implied by the text here?\nKind of, but this text talks about the sender's perspective. 
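[Editor's illustrative note] As a purely illustrative sketch of the client-side duplicate handling for NEW_TOKEN tokens discussed here, assuming a real client would additionally bind tokens to a server identity and expire them; all names below are invented.

    class TokenStore:
        # Client-side store of NEW_TOKEN tokens for one server (illustrative).
        def __init__(self):
            self.tokens = []

        def on_new_token(self, token):
            # Discard duplicates by direct comparison of the frame contents,
            # for example when a lost NEW_TOKEN frame was retransmitted.
            if token not in self.tokens:
                self.tokens.append(token)

        def take_token(self):
            # Use each stored token at most once for a future Initial packet.
            return self.tokens.pop() if self.tokens else None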
Furthermore, checking for duplicates is inherently racy if you're using tokens for new connections while the old connection is still alive.\nYou know, I think that I put some proposed fixes for this in c016639e39c with rather than this branch. Moved it over here. NAME you approved , so I hope that moving that commit over here works for you.\nSec 13.2 of QUIC Transport explains what to do for the loss of a packet for every stream type except NEW_TOKEN. IMO it's nice but not critical to repair this loss, but I care less about how we resolve it than that we write it up.\nSomewhat related question: Does this section fit better in the recovery draft?\nI don't think so. This talks about the repair semantics of these frames, not the process of detecting loss. Recovery doesn't care about the distinction between PING and STREAM when it comes to repair.\nLGTM, but do we need some text saying that when receiving a NEW_TOKEN frame, you MUST make sure that it's not a duplicate? Otherwise, a client might just save the token twice, and effectively reuse the token.", "new_text": "An opaque blob that the client may use with a future Initial packet. An endpoint might receive multiple NEW_TOKEN frames that contain the same token value. Endpoints are responsible for discarding duplicate values, which might be used to link connection attempts; see validate-future. 19.8. STREAM frames implicitly create a stream and carry stream data. The"} {"id": "q-en-quicwg-base-drafts-1606a556476e356806d2b90b1d1ff257d8ee11e7c17aff9790cbcbc47a265195", "old_text": "10. The security considerations of HTTP/3 should be comparable to those of HTTP/2 with TLS. Note that where HTTP/2 employs PADDING frames and Padding fields in other frames to make a connection more resistant to traffic analysis, HTTP/3 can either rely on transport- layer padding or employ the reserved frame and stream types discussed in frame-grease and stream-grease. When HTTP Alternative Services is used for discovery for HTTP/3 endpoints, the security considerations of ALTSVC also apply. Several protocol elements contain nested length elements, typically in the form of frames with an explicit length containing variable- length integers. This could pose a security risk to an incautious implementer. An implementation MUST ensure that the length of a frame exactly matches the length of the fields it contains. The use of 0-RTT with HTTP/3 creates an exposure to replay attack. The anti-replay mitigations in HTTP-REPLAY MUST be applied when using HTTP/3 with 0-RTT. Certain HTTP implementations use the client address for logging or access-control purposes. Since a QUIC client's address might change during a connection (and future versions might support simultaneous", "comments": "The initial request was simply to add sub-headings. I still feel like Security Considerations is a bit of a hodge-podge, and would welcome concrete suggestions or PRs for tightening it up.\nCan you add a sub-heading(##) for each of these considerations, like in the transport draft? Originally posted by NAME in URL", "new_text": "10. The security considerations of HTTP/3 should be comparable to those of HTTP/2 with TLS; the considerations from Section 10 of HTTP2 apply in addition to those listed here. When HTTP Alternative Services is used for discovery for HTTP/3 endpoints, the security considerations of ALTSVC also apply. 10.1. 
Where HTTP/2 employs PADDING frames and Padding fields in other frames to make a connection more resistant to traffic analysis, HTTP/3 can either rely on transport-layer padding or employ the reserved frame and stream types discussed in frame-grease and stream- grease. These methods of padding produce different results in terms of the granularity of padding, the effect of packet loss and recovery, and how an implementation might control padding. 10.2. Several protocol elements contain nested length elements, typically in the form of frames with an explicit length containing variable- length integers. This could pose a security risk to an incautious implementer. An implementation MUST ensure that the length of a frame exactly matches the length of the fields it contains. 10.3. The use of 0-RTT with HTTP/3 creates an exposure to replay attack. The anti-replay mitigations in HTTP-REPLAY MUST be applied when using HTTP/3 with 0-RTT. 10.4. Certain HTTP implementations use the client address for logging or access-control purposes. Since a QUIC client's address might change during a connection (and future versions might support simultaneous"} {"id": "q-en-quicwg-base-drafts-170078558c54dcfaaf330a5e4cb533048f58a051bfeae4013b29a95022d8ea06", "old_text": "peer. For example, a client might be willing to consume a very large response header, while servers are more cautious about request size. Parameters MUST NOT occur more than once in the SETTINGS frame. A receiver MAY treat the presence of the same parameter more than once as a connection error of type HTTP_SETTINGS_ERROR. The payload of a SETTINGS frame consists of zero or more parameters. Each parameter consists of a setting identifier and a value, both", "comments": "I assume the intention was to forbid the same setting identifier occurring multiple times in a SETTINGS frame, regardless of the values. However, since a parameter is defined as an identifier-value pair, the current text does not forbid duplicate identifiers if the values are different.\nAn alternative below. But this is good either way.Looks fine to me, modulo the line length.", "new_text": "peer. For example, a client might be willing to consume a very large response header, while servers are more cautious about request size. The same setting identifier MUST NOT occur more than once in the SETTINGS frame. A receiver MAY treat the presence of duplicate setting identifiers as a connection error of type HTTP_SETTINGS_ERROR. The payload of a SETTINGS frame consists of zero or more parameters. Each parameter consists of a setting identifier and a value, both"} {"id": "q-en-quicwg-base-drafts-0afb6212e6b7f62a6fbb77a07953f5c76d8ce1422d79c20ae10f07500617dd28", "old_text": "ChaCha20 function as defined in Section 2.4 of CHACHA. This uses a 256-bit key and 16 bytes sampled from the packet protection output. The first 4 bytes of the sampled ciphertext are interpreted as a 32-bit number in little-endian order and are used as the block count. The remaining 12 bytes are interpreted as three concatenated 32-bit numbers in little-endian order and used as the nonce. The encryption mask is produced by invoking ChaCha20 to protect 5 zero bytes. In pseudocode:", "comments": "The nonce can just be opaque bytes. The block counter is tricky, as noted in the issue. The \"obvious\" choice is little-endian, as that is consistent with the philosophy of the designer (as I understand it), but that is not anything more than a guess. 
Absent strong evidence that big-endian is a better choice, I'm going to err on the side of not making a substantive change here. But I think that we need a consensus call to support that viewpoint.\nHere's what 5.4.4 says However, RFC 8439 says: This is a little confusing, but I think the correct way to read this is that the ChaCha function takes the nonce as a 96-bit opaque quantity, and the fact that it's little-endian is an internal detail irrelevant to our purposes, so we should just replace the second sentence with \"The remaining 12 bytes are used as the 96-bit ChaCha nonce\". This is reinforced by the fact that the test vectors have the nonce as a byte block. The situation with the counter is more complicated because we are told it is both a counter and 32-bits (which might make you think it's an integer parameter), but it is also treated as a little-endian integer, which makes one think it's a bitstring. And then the test vectors treat it as a value, rather than as a bitstring. If you look at code it typically thinks of this as an integer: See, for instance: URL So, I think the right answer here is to think of ChaCha as having the following API: Note that because the value on the wire is in big-endian format, this would invert the nonce, but I think it's the right answer anyway. Otherwise, you're having to pass in the counter in the wrong format on big-endian platforms.\nAs a completely general note, I don't think it is right to change endianness of crypto wire formats. AES-GCM has a completely backwards neither little nor big endian encoding (byte and bit swapped so no platform wins) this remains so on the wire. Especially for HW processing this is significant. But little endian is also more performant on nearly all relevant platforms today so there is no need to go over board forcing big endian if it can be avoided without insulting RFC's.\nYeah, this should simplify that text some.\nI like the proposed change, the text had somewhat confused me when implementing this in a previous life.\nI don't think that the reinterpretation of the little-endian thing is safe, as per NAME comment. The takes a byte buffer as input rather than a . That is to allow for different block counter sizes. I'll make the editorial change, but leave the suggestion there alone.\nI have a question out to the editor of the PKCS spec. It appears like there are no solid rules in the spec to support any particular position here. I'm going with my best guess at WWDJBD here, but I'm happy to make a change if there is a strong case made for something else.\nNAME NAME can we flip this to please? I don't think that we need to change the substance here, but I want to confirm the little-endian/big-endian choice here. I hope to have more information to share on the question soon, but that might not be definitive. We might just have to make a call ourselves. Absent better information, I'm going to suggest that we continue with little-endian, but I this is far from an obvious conclusion.\nThe API I'm using for ChaCha20 takes a const uint8_t nonce[12], and I currently pass in a pointer (offset by 4 bytes) to the sample as the nonce, without doing any endianness conversions. I'm in favor of clarifying the language for this input. (Given that this has the design label instead of the editorial label, my design opinion is to keep with the simple thing of passing the bytes directly from the sample to the nonce input of ChaCha20 and not do any endianness conversions. 
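[Editor's illustrative note] A sketch of the mask computation being discussed, treating the 16-byte sample as a 4-byte little-endian block counter followed by a 12-byte nonce. This assumes the PyCryptodome ChaCha20 API; the function name is mine.

    from Crypto.Cipher import ChaCha20  # PyCryptodome

    def chacha20_hp_mask(hp_key, sample):
        # hp_key: 32 bytes; sample: 16 bytes taken from the protected packet.
        counter = int.from_bytes(sample[0:4], "little")  # first 4 bytes -> block counter
        nonce = sample[4:16]                             # remaining 12 bytes -> nonce
        cipher = ChaCha20.new(key=hp_key, nonce=nonce)
        cipher.seek(64 * counter)           # position the key stream at that block
        return cipher.encrypt(b"\x00" * 5)  # mask = ChaCha20 applied to five zero bytes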
I'm assuming that other crypto libraries have similar APIs though.)\nThe API I have takes . PKCS takes pointers just like the API NAME refers to (but it fails to specify the endianness, which is a critical oversight, compounded by the fact that the size of the counter can vary). I suspect that - as NAME says - the right idea is to note that ChaCha20 will interpret the nonce as three little-endian integers and that the counter might be expressed as a byte sequence that is then (probably) interpreted as a little-endian integer.\nI understand and appreciate this being marked as \u201cdesign\u201d, though I consider the text that landed in master as editorial, as it (in my view) ended as adding practical advice while retaining the definition we have had.\nYes, but as a design issue, we want to confirm that \"no action\" is the correct outcome. It's cheap enough to do so.\nI like the updated text. It reads good as it gives the practical advice first, then amends that with the formal definition.", "new_text": "ChaCha20 function as defined in Section 2.4 of CHACHA. This uses a 256-bit key and 16 bytes sampled from the packet protection output. The first 4 bytes of the sampled ciphertext are the block counter. A ChaCha20 implementation could take a 32-bit integer in place of a byte sequence, in which case the byte sequence is interpreted as a little-endian value. The remaining 12 bytes are used as the nonce. A ChaCha20 implementation might take an array of three 32-bit integers in place of a byte sequence, in which case the nonce bytes are interpreted as a sequence of 32-bit little-endian integers. The encryption mask is produced by invoking ChaCha20 to protect 5 zero bytes. In pseudocode:"} {"id": "q-en-quicwg-base-drafts-651eb63082a61fa46341734cc08b590f940d955c6b5fb8a0d6d018e1e91a22dd", "old_text": "set to 0. A QUIC packet with a short header includes a Destination Connection ID. The short header does not include the Connection ID Lengths, Source Connection ID, or Version fields. The remainder of the packet has version-specific semantics.", "comments": "Invariants text was unclear on where the Destination Connection ID field was and how big it could be.\nSection 4.2 of the Invariant Short header are not explicit that the Destionation CID follows the first byte: \"A QUIC packet with a short header includes a Destination Connection ID. The short header does not include the Connection ID Lengths, Source Connection ID, or Version fields.\" As see it just says that the destination CID is part of the short header, not where to find it.\nAlso the length of the DCID in the short header is unclear. With the long header's restriction on both source and destination CID I don't think the CID is of arbitrary length as stated in Section 4.3. I think it is 0 to 18 bytes long as set agreed by the connection.\nYes, DCID follows the first octet, and we should say that. As for the length, 0/4-18 is a restriction that we agreed applies only to long headers. You can - if the QUIC version allows it - negotiate other lengths for connection IDs using other methods.\nOk, the spec is in some sense clear that this can be of arbitary length. However, that this is not required to be consistent with the long header is far from obvious and probably should be explicitly remarked. 
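[Editor's illustrative note] A small sketch of what "the endpoint can decide on its own DCID length encoding" can mean in practice for a receiver that issues fixed-length connection IDs. The fixed-length policy and function name are assumptions for illustration, not requirements of the invariants.

    def short_header_dcid(packet, local_cid_length):
        # The wire carries no DCID length in a short header, so the receiving
        # endpoint relies on knowing the length of the connection IDs it issued
        # (here modelled as a single fixed length chosen by that endpoint).
        if packet[0] & 0x80:
            raise ValueError("long header packet")
        return packet[1:1 + local_cid_length]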
A realization I didn't have before over this design is that a given destination IP address and UDP port that deals with multiple QUIC connections will be forced to use one common DCID length across all QUIC connections using that port, otherwise it will have to use prefixes for DCIDs to be able to determine how long the actual DCID field is to enable decoding the header protection. It might have been a mistake to not include a DCID length field.\nThe endpoint can decide on its own DCID length encoding, so that is fine.\nOne editorial suggestion, but LG", "new_text": "set to 0. A QUIC packet with a short header includes a Destination Connection ID immediately following the first byte. The short header does not include the Connection ID Lengths, Source Connection ID, or Version fields. The length of the Destination Connection ID is not specified in packets with a short header and is not constrained by this specification. The remainder of the packet has version-specific semantics."} {"id": "q-en-quicwg-base-drafts-8c8484a600c934974a5094227eeee22fa6866c4336730aa0aa60476b0ae8bc36", "old_text": "Frame types which were used in HTTP/2 where there is no corresponding HTTP/3 frame have also been reserved (iana-frames). These frame types MUST NOT be sent, and receipt MAY be treated as an error of type HTTP_UNEXPECTED_FRAME. 8.", "comments": "This changes HTTPUNEXPECTEDFRAME, which is used only once, to HTTPFRAMEUNEXPECTED that is used multiple times.\nDuring the London interim, it was noted that error codes refer to things being \"malformed,\" \"bad,\" \"invalid,\" and \"unexpected.\" Pick one and use it consistently, or establish precise definitions for each. This applies to both the current document and .\nThe three words are not equivalent: while both \"malformed\" and \"invalid\" are \"bad,\" \"malformed\" is not the same as \"invalid.\"\nMinor correction here, \"bad\" is only used in . The only instance of \"bad\" in current document is a substring of example \"0x1abadaba\". I'll fix the PR before it lands, whatever order we end up doing things in.\nThe different value-judgement terms are corrected by the flurry of settings PRs that NAME already did. HTTPFRAMEERROR has replaced malformed, and UNEXPECTEDFRAME and WRONGSTREAM are reasonably descriptive (though see , ). We do still have some variation between HTTPTHINGDESCRIPTOR, HTTPTHINGERROR, and HTTPDESCRIPTORTHING (e.g. HTTPUNEXPECTEDFRAME versus HTTPREQUESTREJECTED) that might be worth reconciling. Is it worth turning into HTTPFRAMEUNEXPECTED?\nResolved by and something prior to it.\noops, thanks", "new_text": "Frame types which were used in HTTP/2 where there is no corresponding HTTP/3 frame have also been reserved (iana-frames). These frame types MUST NOT be sent, and receipt MAY be treated as an error of type HTTP_FRAME_UNEXPECTED. 8."} {"id": "q-en-quicwg-base-drafts-02caa24f2e5aafcdac85492415c17098cb880e5caaeaebcad8ac24bcc1506fdf", "old_text": "Packets containing only ACK frames are not congestion controlled, so there are limits on how frequently they can be sent. An endpoint MUST NOT send more than one ACK-frame-only packet in response to receiving an ack-eliciting packet (one containing frames other than ACK and/or PADDING). An endpoint MUST NOT send a packet containing only an ACK frame in response to a non-ack-eliciting packet (one containing only ACK and/or PADDING frames), even if there are packet gaps which precede the received packet. 
Limiting ACK frames avoids an infinite feedback loop of acknowledgements, which could prevent the connection from ever becoming idle. However, the endpoint acknowledges non-ack-eliciting packets when it sends an ACK frame. An endpoint SHOULD treat receipt of an acknowledgment for a packet it did not send as a connection error of type PROTOCOL_VIOLATION, if it", "comments": "Fixing\nchanged the rules regarding what kind of packet an endpoint is allowed to send after receiving a non-ack-eliciting packet. The intention of that text used to be to prevent peers from getting stuck in an ACK-loop, but now the text says: Among others, this means that once an endpoint receives a non-ack-eliciting packet, it's not allowed to send a packet containing only PADDING. This would be valuable for defeating traffic analysis. While it would be possible to hack your way out of this by sending a PING + PADDING packet, I'm not sure if this was the intention of .\nTo me, the key phrase here is 'in response to'. If you want to send PADDING for some other reason and the congestion controller allows it, then you can send it.\nThis feels editorial...\nI don't intend to do anything about this one. As Ian notes, you can use other reasons to justify sending of anything, just don't justify the sending on the basis of having received a non-ack-eliciting packet or you will get into the sort trouble that doesn't end.\nI think the text is clear that sending PADDING is allowed at any time. I'm closing this with no action.\nquic-transport draft 13.2.1: An endpoint MUST NOT send a packet containing only an ACK frame in response to a non-ACK-eliciting packet (one containing only ACK and/or PADDING frames), NAME points out that this allows one to send an ACK+PADDING packet, which seems to violate the spirit of the rule. This loophole breaks . I believe this may be Editorial if we all know what we meant.\nYes, it was intended that ACK should not elicit an ACK+PADDING packet.\nI agree with this change, but I disagree that this is a satisfactory fix for .\nThis issue should have been\nLGTM. Though all the \"non-ACK-eliciting packets\" is a bit of a mouth full.", "new_text": "Packets containing only ACK frames are not congestion controlled, so there are limits on how frequently they can be sent. An endpoint MUST NOT send more than one ACK-frame-only packet in response to receiving an ack-eliciting packet. An endpoint MUST NOT send a non- ack-eliciting packet in response to a non-ack-eliciting packet, even if there are packet gaps which precede the received packet. Limiting ACK frames avoids an infinite feedback loop of acknowledgements, which could prevent the connection from ever becoming idle. However, the endpoint acknowledges non-ACK-eliciting packets when it sends an ACK frame. An endpoint SHOULD treat receipt of an acknowledgment for a packet it did not send as a connection error of type PROTOCOL_VIOLATION, if it"} {"id": "q-en-quicwg-base-drafts-fdb9eed3abac2683d63323191fd673910b2939f42da9659ad203d89151bc4437", "old_text": "frames should therefore not be paced, to avoid delaying their delivery to the peer. As an example of a well-known and publicly available implementation of a flow pacer, implementers are referred to the Fair Queue packet scheduler (fq qdisc) in Linux (3.11 onwards).", "comments": "Came from the discussion of\nNAME : I realized after we did this PR yesterday that this does not I It reduces burstiness in general, which is still a good thing, but it doesn't change slow start increase. 
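[Editor's illustrative note] A sketch of the per-ACK clamp being discussed in this thread, modelled loosely on RFC 3465 byte counting for a sender that does not pace. Variable names follow the recovery draft's pseudocode style, but the exact form is mine, not the draft's.

    def on_ack_received_cc(state, acked_bytes):
        # Per-ACK congestion window update with an RFC 3465 style clamp on the
        # slow start increase (sketch only, for a non-pacing sender).
        if state.congestion_window < state.ssthresh:
            # Slow start: grow by at most 2 datagrams per ACK, however many
            # packets that ACK happens to cover.
            state.congestion_window += min(acked_bytes, 2 * state.max_datagram_size)
        else:
            # Congestion avoidance: roughly one datagram per round trip.
            state.congestion_window += (
                state.max_datagram_size * acked_bytes // state.congestion_window
            )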
I'll send a PR or ask Vidhi if she wants to.\nWhen an ACK arrives that acknowledges N packets (where N could be very large), QUIC will increase the congestion window by more than 1MSS per congestion window during Congestion Avoidance causing bursty send. This could also affect the slow start. if (congestionwindow < ssthresh): // Slow start. congestionwindow += URL else: // Congestion avoidance. congestionwindow += kMaxDatagramSize * URL / congestionwindow One could argue that QUIC ACKs every other packet, but there ACK packet could get lost or ACK for ACK packet could get lost. The draft is probably assuming packet pacing but not all implementations support that. I discussed this with Jana and he thinks that we should add something in pseudo code or some specific text in the draft.\nPR added more specific text on avoiding bursts into the network, FYI.\nPlease note that PR addresses the issue of app-limited or pacing-limited cases to not increase cwnd when it is under utilized. My concern is cwnd being inflated due to an ACK frame containing ACKs for large number of packets.\nNAME : The problem here is that we don't talk about burstiness that can occur when pacing is not in place, and a stretch-ack is received. PRR handles this, but we don't do PRR in the draft.\nNo we need to spend lots of time on Reno as everybody appears to be doing Cubic or BBR?\nCubic has this problem. BBR cannot be assumed.\nExcept Cubic opens the window based on time spent, not just ack received\nI don't get the scenario. It seems that NAME is worried about receiving an ACK for a large number of packets. This is not a congestion window problem, because it happens even if the congestion windows size remains constant. The root cause is that the ACK of N packets now reduces the number of packet in flow by N. So this is strictly about pacing, not about congestion control.\nNAME yes, and it can and still does burst because of stretch acks\nNot just stretch Acks -- deliberate Ack compression by the network happens too. Very few alternative to pacing.\nURL describes\nDiscussed in Cupertino. NAME to prepare a PR that hopefully can incorporate some ideas from RFC3465 to add safety features for non-pacing stacks.\nBut 3465 does not solve the burst problem. It limits the growth of the window, but does not do pacing of any kind.\nI think we can change to something like this\nNAME I think we need to do 2MSS clamping per ACK received and not per packet acked. I am thinking that we can remove OnPacketAckedCC and instead do OnAckReceivedCC. Pseudocode change something like this:\nI think we can keep but just move cwnd update to as you propose.\nYup, that's an important missing fix, though it needs to be applied only when not pacing. Additionally, helps with burstiness in general. NAME : I would make a variable, and apply the min here when not pacing.\nThe proposed change has the effect of reducing the effective Window if ACKs are compressed. Say you have a window of 12 packets, acks are delayed, and a single ACK acknowledges 12 packets. Now you have a window of 2 packets, your transmission rate is 6 times slower than before, and it will take you 10 RTT to recover. Is that really what you want?\nNAME This is what ABC (RFC 3465) does for TCP. ACK compression has side-effects, and the way to deal with it is, as you say, to pace packets out. This reduction in performance is only when the endpoint does not pace, and it is the only way to limit bursts in that case.\nAlso, ACK compression does not necessarily imply ACK pruning. 
I have traces in which 1 RTT worth of ACKs is compressed and delivered back to back. In that case, the proposed algorithm will do nothing: you will get 2 MSS added for each ACK, back to back within less than 1 ms.\nFYI. FreeBSD implements rfc3465 (and enabled by default) URL Linux removed tcpabc implementation but saying it does in a different way: URL Considering FreeBSD doesn't have pacing by default, I think current proposed algorithm is not worse (or aggressive) than what tcp implementation does.\nNAME I am working on the PR for pseudocode changes.\nI agree that when not pacing, using an ABC limit is appropriate. FWIW Windows TCP was using a limit of 4 and not 2 for a long time and now uses 8 which is pretty close to IW10 burst in slow start phase.\nFor the WG: The proposed PR attempts to include a mechanism to limit CWND increase during slow start when not pacing. I'll note that by itself, it does not guarantee we follow the existing MUST: \"Implementations MUST either use pacing or limit such bursts to the initial congestion window, which is recommended to be the minimum of 10 maxdatagramsize and max(2 maxdatagramsize, 14720)), where maxdatagramsize is the current maximum size of a datagram for the connection, not including UDP or IP overhead.\" I don't think we should add this to the pseudocode, because we say SHOULD pace packets and the pseudocode currently only covers SHOULDs, not MAYs. Adding all MAYs to the pseudocode is too complex. I don't have a concern about adding a reference to the existing RFC as a mechanism that MAY help limit bursts during slow start. I'd like a decision from the WG on whether to include this change and if so, some criteria on what should be included going forward.\nI agree with the comment that burst from ACK compression or Stretch-ACKs result in bursts. I think the ways to change that is implement a maximum burst size or use pacing. The bursts do not appear a CC function to me.\nBringing conversation in from the list: With regards to , one of the things we found in testing was that it could (a) still allow an implementation to be vulnerable to too much bursting (the original problem), but more importantly (b) locked receivers into an ack pattern of acking every other packet. My main concern is that this causes problems if receivers want to experiment with and change acking strategies going forward in way that otherwise would be fine. One way to look at the problem described by this issue is that the existing text has a minor underspecification, in which it has the notion of a burst limit but says nothing about the frequency of such bursts. Two \"bursts\" of 10MSS that are sent within a couple nanoseconds are really one burst that's too big. Another way of fixing this is to remove even the specification of the 10MSS limit if we really don't want too much text about the non-pacing cases. Essentially, what this comes down to is: pace, or even if you don't pace, you need to not send bursts above some frequency; and if you don't set some pacing timers, you'll likely underutilize the link.\nI'm going to put this out there: the answer is MUST pace. You might formalize this by saying the number of packets sent in any interval cannot exceed by more than the initial cwnd (i.e., the 10MSS). That allows for bursting, but requires a period of quiet either preceding or following any burst.\nNAME Yes, it does largely reduce to \"implementations MUST pace\" (even if the pacing algorithm isn't very nuanced, in which case you'll perform poorly). 
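[Editor's illustrative note] For the "MUST pace" direction, a toy token-bucket burst limiter of the sort being described; the rate and burst policy are assumptions for illustration only.

    import time

    class BurstLimiter:
        # Token-bucket style limiter: release at most `burst` bytes at once,
        # refilling at `rate` bytes per second (toy model of a pacer).
        def __init__(self, rate, burst):
            self.rate = rate
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def can_send(self, size):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens < size:
                return False  # caller should wait for the pacing timer
            self.tokens -= size
            return True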
The formalization of the rate, one option for which was specified in , I think is the heart of what should be done here.\nThere are two routes that we can take: Specify clearly what 'delay` refers to in the below statement, which is done in OR Change MUST pace to SHOULD and remove the 10 MSS burst size limit and provide general recommendation if an implementation doesn't pace. Something like:\nI'm sympathetic to saying MUST pace, and I like the formatlization that NAME suggests. I've heard some pushback on saying MUST pace in the past, but it might be worth giving it a shot again. NAME NAME NAME : What do you think?\nThat said... I will note that NAME 's formalization allows for a burst at the end of an RTT. Consider for example when a single ACK is received that acknowledges an entire window. That would be legal by this definition. I think that's ok.\nDiscussed in ZRH. Proposed resolution is to define and REQUIRE pacing.\nNAME I think requiring pacing makes good sense, with some formalization of what that means.\nTo clarify the status here, my understanding is that the proposal is to close this design issue with no action. The discussions have identified editorial improvements that will be addressed independently.\nLet's continue the editorial discussion on .\nClosing this issue and continuing discussion on\nThis needs to run through the consensus call procedure, so we'll keep it open for another week.\nLet's get this in -- it addresses some burstiness anyway.LGTM", "new_text": "frames should therefore not be paced, to avoid delaying their delivery to the peer. Sending multiple packets into the network without any delay between them creates a packet burst that might cause short-term congestion and losses. Implementations MUST either use pacing or limit such bursts to minimum of 10 * kMaxDatagramSize and max(2* kMaxDatagramSize, 14720)), the same as the recommended initial congestion window. As an example of a well-known and publicly available implementation of a flow pacer, implementers are referred to the Fair Queue packet scheduler (fq qdisc) in Linux (3.11 onwards)."} {"id": "q-en-quicwg-base-drafts-fdb9eed3abac2683d63323191fd673910b2939f42da9659ad203d89151bc4437", "old_text": "sender should not consider itself application limited if it would have fully utilized the congestion window without pacing delay. Sending multiple packets into the network without any delay between them creates a packet burst that might cause short-term congestion and losses. Implementations SHOULD either use pacing or reduce their congestion window to limit such bursts to minimum of 10 * kMaxDatagramSize and max(2* kMaxDatagramSize, 14720)), the same as the recommended initial congestion window. A sender MAY implement alternative mechanisms to update its congestion window after periods of under-utilization, such as those proposed for TCP in RFC7661.", "comments": "Came from the discussion of\nNAME : I realized after we did this PR yesterday that this does not I It reduces burstiness in general, which is still a good thing, but it doesn't change slow start increase. I'll send a PR or ask Vidhi if she wants to.\nWhen an ACK arrives that acknowledges N packets (where N could be very large), QUIC will increase the congestion window by more than 1MSS per congestion window during Congestion Avoidance causing bursty send. This could also affect the slow start. if (congestionwindow < ssthresh): // Slow start. congestionwindow += URL else: // Congestion avoidance. 
congestionwindow += kMaxDatagramSize * URL / congestionwindow One could argue that QUIC ACKs every other packet, but there ACK packet could get lost or ACK for ACK packet could get lost. The draft is probably assuming packet pacing but not all implementations support that. I discussed this with Jana and he thinks that we should add something in pseudo code or some specific text in the draft.\nPR added more specific text on avoiding bursts into the network, FYI.\nPlease note that PR addresses the issue of app-limited or pacing-limited cases to not increase cwnd when it is under utilized. My concern is cwnd being inflated due to an ACK frame containing ACKs for large number of packets.\nNAME : The problem here is that we don't talk about burstiness that can occur when pacing is not in place, and a stretch-ack is received. PRR handles this, but we don't do PRR in the draft.\nNo we need to spend lots of time on Reno as everybody appears to be doing Cubic or BBR?\nCubic has this problem. BBR cannot be assumed.\nExcept Cubic opens the window based on time spent, not just ack received\nI don't get the scenario. It seems that NAME is worried about receiving an ACK for a large number of packets. This is not a congestion window problem, because it happens even if the congestion windows size remains constant. The root cause is that the ACK of N packets now reduces the number of packet in flow by N. So this is strictly about pacing, not about congestion control.\nNAME yes, and it can and still does burst because of stretch acks\nNot just stretch Acks -- deliberate Ack compression by the network happens too. Very few alternative to pacing.\nURL describes\nDiscussed in Cupertino. NAME to prepare a PR that hopefully can incorporate some ideas from RFC3465 to add safety features for non-pacing stacks.\nBut 3465 does not solve the burst problem. It limits the growth of the window, but does not do pacing of any kind.\nI think we can change to something like this\nNAME I think we need to do 2MSS clamping per ACK received and not per packet acked. I am thinking that we can remove OnPacketAckedCC and instead do OnAckReceivedCC. Pseudocode change something like this:\nI think we can keep but just move cwnd update to as you propose.\nYup, that's an important missing fix, though it needs to be applied only when not pacing. Additionally, helps with burstiness in general. NAME : I would make a variable, and apply the min here when not pacing.\nThe proposed change has the effect of reducing the effective Window if ACKs are compressed. Say you have a window of 12 packets, acks are delayed, and a single ACK acknowledges 12 packets. Now you have a window of 2 packets, your transmission rate is 6 times slower than before, and it will take you 10 RTT to recover. Is that really what you want?\nNAME This is what ABC (RFC 3465) does for TCP. ACK compression has side-effects, and the way to deal with it is, as you say, to pace packets out. This reduction in performance is only when the endpoint does not pace, and it is the only way to limit bursts in that case.\nAlso, ACK compression does not necessarily imply ACK pruning. I have traces in which 1 RTT worth of ACKs is compressed and delivered back to back. In that case, the proposed algorithm will do nothing: you will get 2 MSS added for each ACK, back to back within less than 1 ms.\nFYI. 
FreeBSD implements rfc3465 (and enabled by default) URL Linux removed tcpabc implementation but saying it does in a different way: URL Considering FreeBSD doesn't have pacing by default, I think current proposed algorithm is not worse (or aggressive) than what tcp implementation does.\nNAME I am working on the PR for pseudocode changes.\nI agree that when not pacing, using an ABC limit is appropriate. FWIW Windows TCP was using a limit of 4 and not 2 for a long time and now uses 8 which is pretty close to IW10 burst in slow start phase.\nFor the WG: The proposed PR attempts to include a mechanism to limit CWND increase during slow start when not pacing. I'll note that by itself, it does not guarantee we follow the existing MUST: \"Implementations MUST either use pacing or limit such bursts to the initial congestion window, which is recommended to be the minimum of 10 maxdatagramsize and max(2 maxdatagramsize, 14720)), where maxdatagramsize is the current maximum size of a datagram for the connection, not including UDP or IP overhead.\" I don't think we should add this to the pseudocode, because we say SHOULD pace packets and the pseudocode currently only covers SHOULDs, not MAYs. Adding all MAYs to the pseudocode is too complex. I don't have a concern about adding a reference to the existing RFC as a mechanism that MAY help limit bursts during slow start. I'd like a decision from the WG on whether to include this change and if so, some criteria on what should be included going forward.\nI agree with the comment that burst from ACK compression or Stretch-ACKs result in bursts. I think the ways to change that is implement a maximum burst size or use pacing. The bursts do not appear a CC function to me.\nBringing conversation in from the list: With regards to , one of the things we found in testing was that it could (a) still allow an implementation to be vulnerable to too much bursting (the original problem), but more importantly (b) locked receivers into an ack pattern of acking every other packet. My main concern is that this causes problems if receivers want to experiment with and change acking strategies going forward in way that otherwise would be fine. One way to look at the problem described by this issue is that the existing text has a minor underspecification, in which it has the notion of a burst limit but says nothing about the frequency of such bursts. Two \"bursts\" of 10MSS that are sent within a couple nanoseconds are really one burst that's too big. Another way of fixing this is to remove even the specification of the 10MSS limit if we really don't want too much text about the non-pacing cases. Essentially, what this comes down to is: pace, or even if you don't pace, you need to not send bursts above some frequency; and if you don't set some pacing timers, you'll likely underutilize the link.\nI'm going to put this out there: the answer is MUST pace. You might formalize this by saying the number of packets sent in any interval cannot exceed by more than the initial cwnd (i.e., the 10MSS). That allows for bursting, but requires a period of quiet either preceding or following any burst.\nNAME Yes, it does largely reduce to \"implementations MUST pace\" (even if the pacing algorithm isn't very nuanced, in which case you'll perform poorly). 
The formalization of the rate, one option for which was specified in , I think is the heart of what should be done here.\nThere are two routes that we can take: Specify clearly what 'delay` refers to in the below statement, which is done in OR Change MUST pace to SHOULD and remove the 10 MSS burst size limit and provide general recommendation if an implementation doesn't pace. Something like:\nI'm sympathetic to saying MUST pace, and I like the formatlization that NAME suggests. I've heard some pushback on saying MUST pace in the past, but it might be worth giving it a shot again. NAME NAME NAME : What do you think?\nThat said... I will note that NAME 's formalization allows for a burst at the end of an RTT. Consider for example when a single ACK is received that acknowledges an entire window. That would be legal by this definition. I think that's ok.\nDiscussed in ZRH. Proposed resolution is to define and REQUIRE pacing.\nNAME I think requiring pacing makes good sense, with some formalization of what that means.\nTo clarify the status here, my understanding is that the proposal is to close this design issue with no action. The discussions have identified editorial improvements that will be addressed independently.\nLet's continue the editorial discussion on .\nClosing this issue and continuing discussion on\nThis needs to run through the consensus call procedure, so we'll keep it open for another week.\nLet's get this in -- it addresses some burstiness anyway.LGTM", "new_text": "sender should not consider itself application limited if it would have fully utilized the congestion window without pacing delay. A sender MAY implement alternative mechanisms to update its congestion window after periods of under-utilization, such as those proposed for TCP in RFC7661."} {"id": "q-en-quicwg-base-drafts-b1455ede8347f90daf53b6a7e5923f04c156e61012c78ba5a47e4cede9f4db81", "old_text": "be routed to a server instance with more resources available for new connections. A flow showing the use of a Retry packet is shown in fig-retry. 8.1.2.", "comments": "If the Retry token is known to be invalid by the server, then the server can close the connection with INVALID_TOKEN instead of waiting for a timeout. Came out of discussion of Issue Related to\nYou need to update the \"Transport Error Codes\" section too.\nThanks NAME I did that in a subsequent update. Refresh and tell me if it looks correct.\nNAME you're missing the section that gives some text to each error code .\nNAME do you have a suggestion about what the client should do? Clearly it could start a new connection, but it would't want to do that an unlimited number of times, and it may depend upon whether a TCP fallback is available. Given that, it wasn't clear I should recommend something. And I agree making them distinguishable should be a MUST, as stated in\nI think that the best thing we can say is \"the connection attempt failed\". The first version of the PR strongly suggested that making another connection attempt was the right thing to do, but that gets into lots of difficult questions about identifying why. Given that this is an error that only happens if a Retry is spoofed, then I think that all we can say is: !\nWhen the server receives a Token that is corrupted, it may want to close the connection quickly. This issue suggests adding a new error code to indicate the token is corrupted, INVALID_TOKEN.\nThis is - I believe - a design issue. 
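To make the proposed server behaviour concrete, here is a toy sketch of failing fast with INVALID_TOKEN when a Retry token does not validate. validate_retry_token, handle_client_initial, and the code point value are placeholders for this example, not APIs or values from any real stack.

    # Toy sketch of the proposed server behaviour for a bad Retry token.
    # All helper names below are hypothetical placeholders.

    INVALID_TOKEN = 0x0B  # assumed code point, for illustration only

    def validate_retry_token(token, client_addr, dcid):
        """Placeholder: True if the token decrypts and matches the client address."""
        return False  # pretend validation failed

    def handle_client_initial(packet, client_addr):
        token = packet.get("token")
        if token and not validate_retry_token(token, client_addr, packet["dcid"]):
            # Retry tokens are single-shot: the client will not accept another
            # Retry, so fail fast instead of letting the handshake time out.
            return {"action": "close", "error": INVALID_TOKEN,
                    "reason": "invalid Retry token"}
        # Otherwise continue; the address is validated only if the token checked out.
        return {"action": "continue", "address_validated": bool(token)}

    print(handle_client_initial({"token": b"garbage", "dcid": b"\x01"},
                                ("198.51.100.7", 4433)))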
The PR seems to have approval, we just need the appropriate process run.\nWe currently say that these tokens SHOULD be distinguishable by a server on the basis that an error in a NEW_TOKEN token is recoverable, but an error in a Retry token is fatal. I don't see why this can't be MUST. If we make it \"MUST\", then we can rely on this property in the design.\nAgreed, that'd be a useful property to rely on.\nDoes that property need to hold if there are bit flips in the token? If you\u2019re encrypting the token, this would be easy to ensure: put the distinguishing bit under crypto cover. If you\u2019re using a map lookup, you\u2019ll probably have to work a little bit harder.\nPlease ignore my last post, I mixed up Retry packets and Initials. Tokens in the client\u2019s Initial are part of the AAD, so we don\u2019t have to worry about bit flips.\nDepending how the server distinguishes them, a change of method / confusion about what server you're sending it to could make you appear to be presenting a Retry token instead of a regular token. This error code enables the client to recover and retry quickly, but there's no language about what the client should do when it receives the code.Rereading the changes, I have the following editorial suggestion.", "new_text": "be routed to a server instance with more resources available for new connections. If a server receives a client Initial that can be unprotected but contains an invalid Retry token, it knows the client will not accept another Retry token. The server can discard such a packet and allow the client to time out to detect handshake failure, but that could impose a significant latency penalty on the client. A server MAY proceed with the connection without verifying the token, though the server MUST NOT consider the client address validated. If a server chooses not to proceed with the handshake, it SHOULD immediately close (immediate-close) the connection with an INVALID_TOKEN error. Note that a server has not established any state for the connection at this point and so does not enter the closing period. A flow showing the use of a Retry packet is shown in fig-retry. 8.1.2."} {"id": "q-en-quicwg-base-drafts-b1455ede8347f90daf53b6a7e5923f04c156e61012c78ba5a47e4cede9f4db81", "old_text": "An endpoint detected an error with protocol compliance that was not covered by more specific error codes. An endpoint has received more data in CRYPTO frames than it can buffer.", "comments": "If the Retry token is known to be invalid by the server, then the server can close the connection with INVALID_TOKEN instead of waiting for a timeout. Came out of discussion of Issue Related to\nYou need to update the \"Transport Error Codes\" section too.\nThanks NAME I did that in a subsequent update. Refresh and tell me if it looks correct.\nNAME you're missing the section that gives some text to each error code .\nNAME do you have a suggestion about what the client should do? Clearly it could start a new connection, but it would't want to do that an unlimited number of times, and it may depend upon whether a TCP fallback is available. Given that, it wasn't clear I should recommend something. And I agree making them distinguishable should be a MUST, as stated in\nI think that the best thing we can say is \"the connection attempt failed\". The first version of the PR strongly suggested that making another connection attempt was the right thing to do, but that gets into lots of difficult questions about identifying why. 
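One possible way to obtain the Retry/NEW_TOKEN distinguishability property discussed earlier in this thread, sketched loosely below, is to carry a token-type byte under integrity protection so that a corrupted or substituted token cannot change type undetected. A real server would typically encrypt the token body as well; the HMAC construction and all names here are stand-ins for illustration.

    import hmac, hashlib, os

    # Loose illustration of tagging tokens with a protected type byte so that a
    # Retry token can never be mistaken for a NEW_TOKEN token (or vice versa).

    TOKEN_TYPE_RETRY = 0x01
    TOKEN_TYPE_NEW_TOKEN = 0x02
    KEY = os.urandom(32)  # per-server secret (illustrative)

    def make_token(token_type, body):
        plaintext = bytes([token_type]) + body
        tag = hmac.new(KEY, plaintext, hashlib.sha256).digest()
        return plaintext + tag

    def parse_token(token):
        plaintext, tag = token[:-32], token[-32:]
        expected = hmac.new(KEY, plaintext, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return None  # corrupted or forged: treat as invalid
        return plaintext[0], plaintext[1:]  # (type, body)

    tok = make_token(TOKEN_TYPE_RETRY, b"client-addr-and-timestamp")
    print(parse_token(tok))  # (1, b'client-addr-and-timestamp')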
Given that this is an error that only happens if a Retry is spoofed, then I think that all we can say is: !\nWhen the server receives a Token that is corrupted, it may want to close the connection quickly. This issue suggests adding a new error code to indicate the token is corrupted, INVALID_TOKEN.\nThis is - I believe - a design issue. The PR seems to have approval, we just need the appropriate process run.\nWe currently say that these tokens SHOULD be distinguishable by a server on the basis that an error in a NEW_TOKEN token is recoverable, but an error in a Retry token is fatal. I don't see why this can't be MUST. If we make it \"MUST\", then we can rely on this property in the design.\nAgreed, that'd be a useful property to rely on.\nDoes that property need to hold if there are bit flips in the token? If you\u2019re encrypting the token, this would be easy to ensure: put the distinguishing bit under crypto cover. If you\u2019re using a map lookup, you\u2019ll probably have to work a little bit harder.\nPlease ignore my last post, I mixed up Retry packets and Initials. Tokens in the client\u2019s Initial are part of the AAD, so we don\u2019t have to worry about bit flips.\nDepending how the server distinguishes them, a change of method / confusion about what server you're sending it to could make you appear to be presenting a Retry token instead of a regular token. This error code enables the client to recover and retry quickly, but there's no language about what the client should do when it receives the code.Rereading the changes, I have the following editorial suggestion.", "new_text": "An endpoint detected an error with protocol compliance that was not covered by more specific error codes. A server received a Retry Token in a client Initial that is invalid. An endpoint has received more data in CRYPTO frames than it can buffer."} {"id": "q-en-quicwg-base-drafts-7dc2d66bcd63e53cfbba19381557a64f976219e6996f74bc13cba5d0bae74148", "old_text": "4.5.2. An indexed header field representation identifies an entry in either the static table or the dynamic table and causes that header field to be added to the decoded header list. If the entry is in the static table, or in the dynamic table with an absolute index less than the Base, this representation starts with the '1' 1-bit pattern, followed by the 'T' bit indicating whether the reference is into the static or dynamic table. The 6-bit prefix integer (see prefixed-integers) that follows is used to locate the table entry for the header field. When T=1, the number represents the static table index; when T=0, the number is the relative index of the entry in the dynamic table. 4.5.3. If the entry is in the dynamic table with an absolute index greater than or equal to the Base, the representation starts with the '0001' 4-bit pattern, followed by the post-base index (see post-base) of the matching header field, represented as an integer with a 4-bit prefix (see prefixed-integers). 4.5.4. A literal header field with name reference represents a header field where the header field name matches the header field name of an entry stored in the static table or the dynamic table. If the entry is in the static table, or in the dynamic table with an absolute index less than the Base, this representation starts with the '01' two-bit pattern. The following bit, 'N', indicates whether an intermediary is permitted to add this header to the dynamic header table on subsequent hops. 
When the 'N' bit is set, the encoded header MUST always be encoded with a literal representation. In particular, when a peer sends a header field that it received represented as a literal header field with the 'N' bit set, it MUST use a literal representation to forward this header field. This bit is intended for protecting header field values that are not to be put at risk by compressing them (see security-considerations for more details). The fourth ('T') bit indicates whether the reference is to the static or dynamic table. The 4-bit prefix integer (see prefixed-integers)", "comments": "This was inspired by the comment by afrind at URL For each representation, uniformly use the word \"representation\" in the first sentence; use the word \"identifies\" if it's a verbatim entry; \"encodes\" otherwise; identify relative versus post-base indexing in the first sentence; move drawing right after first sentence; use consistent language describing binary format.", "new_text": "4.5.2. An indexed header field representation identifies an entry in the static table, or an entry in the dynamic table with an absolute index less than the Base. This representation starts with the '1' 1-bit pattern, followed by the 'T' bit indicating whether the reference is into the static or dynamic table. The 6-bit prefix integer (see prefixed-integers) that follows is used to locate the table entry for the header field. When T=1, the number represents the static table index; when T=0, the number is the relative index of the entry in the dynamic table. 4.5.3. An indexed header field with post-base index representation identifies an entry in the dynamic table with an absolute index greater than or equal to the Base. This representation starts with the '0001' 4-bit pattern. This is followed by the post-base index (see post-base) of the matching header field, represented as an integer with a 4-bit prefix (see prefixed-integers). 4.5.4. A literal header field with name reference representation encodes a header field where the header field name matches the header field name of an entry in the static table, or the header field name of an entry in the dynamic table with an absolute index less than the Base. This representation starts with the '01' two-bit pattern. The following bit, 'N', indicates whether an intermediary is permitted to add this header to the dynamic header table on subsequent hops. When the 'N' bit is set, the encoded header MUST always be encoded with a literal representation. In particular, when a peer sends a header field that it received represented as a literal header field with the 'N' bit set, it MUST use a literal representation to forward this header field. This bit is intended for protecting header field values that are not to be put at risk by compressing them (see security-considerations for more details). The fourth ('T') bit indicates whether the reference is to the static or dynamic table. The 4-bit prefix integer (see prefixed-integers)"} {"id": "q-en-quicwg-base-drafts-7dc2d66bcd63e53cfbba19381557a64f976219e6996f74bc13cba5d0bae74148", "old_text": "4.5.5. A literal header field with post-base name reference represents a header field where the name matches the header field name of a dynamic table entry with an absolute index greater than or equal to the Base. This representation starts with the '0000' four-bit pattern. 
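For readers following the representations quoted in these excerpts, here is a small, non-normative Python sketch of the prefixed-integer scheme and the leading bit pattern of an indexed field line; it is deliberately simplified and has not been checked against any QPACK implementation.

    # Simplified, non-normative sketch of QPACK's prefixed-integer encoding and
    # the first byte of an indexed header field representation.

    def encode_prefixed_int(value, prefix_bits, first_byte_flags):
        """Encode 'value' with an N-bit prefix; the high bits carry the pattern."""
        limit = (1 << prefix_bits) - 1
        if value < limit:
            return bytes([first_byte_flags | value])
        out = bytearray([first_byte_flags | limit])
        value -= limit
        while value >= 128:
            out.append((value & 0x7F) | 0x80)
            value >>= 7
        out.append(value)
        return bytes(out)

    def encode_indexed_field(index, static_table):
        # '1' bit pattern, then the 'T' bit, then a 6-bit prefix integer.
        flags = 0x80 | (0x40 if static_table else 0x00)
        return encode_prefixed_int(index, 6, flags)

    print(encode_indexed_field(17, static_table=True).hex())    # fits in the prefix
    print(encode_indexed_field(200, static_table=False).hex())  # continues in later bytes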
The fifth bit is the 'N' bit as described in literal-name-reference.", "comments": "This was inspired by the comment by afrind at URL For each representation, uniformly use the word \"representation\" in the first sentence; use the word \"identifies\" if it's a verbatim entry; \"encodes\" otherwise; identify relative versus post-base indexing in the first sentence; move drawing right after first sentence; use consistent language describing binary format.", "new_text": "4.5.5. A literal header field with post-base name reference representation encodes a header field where the header field name matches the header field name of a dynamic table entry with an absolute index greater than or equal to the Base. This representation starts with the '0000' four-bit pattern. The fifth bit is the 'N' bit as described in literal-name-reference."} {"id": "q-en-quicwg-base-drafts-7dc2d66bcd63e53cfbba19381557a64f976219e6996f74bc13cba5d0bae74148", "old_text": "4.5.6. The literal header field without name reference representation encodes a header field name and header field value as string literals. This representation begins with the '001' three-bit pattern. The", "comments": "This was inspired by the comment by afrind at URL For each representation, uniformly use the word \"representation\" in the first sentence; use the word \"identifies\" if it's a verbatim entry; \"encodes\" otherwise; identify relative versus post-base indexing in the first sentence; move drawing right after first sentence; use consistent language describing binary format.", "new_text": "4.5.6. The literal header field without name reference representation encodes a header field name and a header field value as string literals. This representation begins with the '001' three-bit pattern. The"} {"id": "q-en-quicwg-base-drafts-4c7395a421428f4b374a3483f146b8d08b3f76c0a02ad7157b013da6ef0000bd", "old_text": "open a connection, which begins the exchange described in handshake; enable 0-RTT; and be informed when 0-RTT has been accepted or rejected by a server.", "comments": "In , we say that one of the required operations on a connection is \"enable 0-RTT\". NAME : The current text implies that implementations are required to support 0-RTT, which is not the case. It would be more correct to say that applications must be able to prevent early data from being offered on a connection, even if the implementation supports it.\nAlternately, the server section prefaces certain bullets with \"If Early Data is supported, ....\" That formulation might be used here as well. Also, this section uses \"0-RTT\" in the client piece but \"Early Data\" in the server piece. Should be consistent.", "new_text": "open a connection, which begins the exchange described in handshake; enable 0-RTT when available; and be informed when 0-RTT has been accepted or rejected by a server."} {"id": "q-en-quicwg-base-drafts-ff8468e1c609b252700b2f091977e9c747422ec199383beb18f47b85c87197fa", "old_text": "a token that is not applicable to the server that it is connecting to, unless the client has the knowledge that the server that issued the token and the server the client is connecting to are jointly managing the tokens. A token allows a server to correlate activity between the connection where the token was issued and any connection where it is used.", "comments": "It is permissible for a client to do 0-RTT resumption with a NST from one connection and a token from another. 
This clarifies that a client only needs to associate a token (from a NEW_TOKEN frame) with the server it was from and no additional state, and that a server can't require that a token be from the same connection as the NST in use.\nThe token is used for deciding whether the server believes the packet it's receiving actually comes from the address it says it's coming from. I could believe a server might trust a client only if it's doing a 1-RTT handshake, but not trust it if it's doing a 0-RTT handshake, so might encode that bit in the token. I don't think the server should be putting more specific bits like \"this is the NST that should be used\" or \"here are the SETTINGS from the previous connection\" in the token. If the server doesn't like a token it receives with a 0-RTT handshake, it can send a Retry packet but still continue with the 0-RTT crypto handshake (even though it's now 1-RTT from a transport perspective). I don't see the need for there to be fate sharing between accepting a token and accepting 0-RTT - a server can reject one without rejecting the other. I've created to express the problem statement as an issue.\nIf you're going to continue with 0-RTT, but not trust the token, why not just limit the amount sent by the amplification factor? The receipt of a Handshake packet constitutes path validation. Also, a Retry doesn't allow you to acknowledge the client's Initial or any 0-RTT packets.\nFor a 0-RTT resumption in H3, a client needs 4 pieces of state previously received from a server: 1) a NewSessionTicket 2) Transport Parameters 3) SETTINGS 4) a token (from a NEW_TOKEN frame) The last isn't strictly necessary, but without it the server is likely to burn a round trip with a Retry packet to prove the client controls the source address. On a given connection, there is only one server SETTINGS and Transport Parameters, though there may be multiple NewSessionTickets and tokens. The first 3 pieces of state are necessarily required to have come from the same connection. A NST (from any connection) is needed since without it resumption can't happen. Transport Parameters and SETTINGS must come from somewhere for 0-RTT to be useful; the obvious and currently specified place is the SETTINGS and Transport Parameters that were sent on the same connection the NST came from. There's no such need that the 4th piece of state come from the same connection as the others. The token is used to prove that the client controls the address the packet is purportedly coming from. This can be done separately from any crypto state (in the NST), application state (SETTINGS), or the rest of the transport state (Transport Parameters). For a client to store the state needed for 0-RTT resumption, one possible implementation is to create cache entries containing an NST, TP, and SETTINGS. It's easy to do that where the TP and SETTINGS get duplicated N times, assuming N NSTs were received on a connection. If M tokens were received on that connection, and tokens need to be added to those cache entries, either the M tokens need to be paired to the N NSTs somehow, or all M tokens are in all N cache entries. This introduces noticeable complexity to a client's session cache implementation to correlate unrelated things. It should be clarified that the client doesn't need to associate the token with the rest of the state needed for 0-RTT. has suggested wording for that clarification.\nAs pointed out in NAME in URL, the proposed change essentially forbids a server from storing SETTINGS in the token. 
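A minimal sketch of the client-side cache shape described in the comment above, with tickets (and the transport parameters and SETTINGS captured alongside them) kept separately from a per-server pool of NEW_TOKEN tokens; all names are invented for this example.

    from collections import defaultdict, deque

    # Illustrative client cache: tickets carry the crypto state plus the transport
    # parameters and SETTINGS from the connection that issued them, while
    # address-validation tokens live in a separate per-server pool and are not
    # tied to any particular ticket.

    class ResumptionCache:
        def __init__(self):
            self.tickets = defaultdict(deque)  # server -> deque of (nst, tp, settings)
            self.tokens = defaultdict(deque)   # server -> deque of NEW_TOKEN tokens

        def store_ticket(self, server, nst, transport_params, settings):
            self.tickets[server].append((nst, transport_params, settings))

        def store_token(self, server, token):
            self.tokens[server].append(token)

        def start_connection(self, server):
            ticket = self.tickets[server].popleft() if self.tickets[server] else None
            token = self.tokens[server].popleft() if self.tokens[server] else None
            # Any stored token for this server may be combined with any ticket
            # (or with no ticket at all).
            return ticket, token

    cache = ResumptionCache()
    cache.store_token("example.com", b"token-from-an-earlier-connection")
    cache.store_ticket("example.com", b"nst", {"initial_max_data": 65536},
                       {"qpack_max_table_capacity": 4096})
    print(cache.start_connection("example.com"))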
I think I'm persuaded by NAME argument above that we'd better forbid such coordination, as it would be a burden to certain client implementations. OTOH, I think that untying the relationship between session tickets and tokens does open a new issue, see .\nI have some editiorial comments, but the design looks fine to me. While we might embed some information related to 0-RTT configuration in QUIC tokens, they would be decryptable and be usable, so we do not think there would be a problem for us (thanks to NAME for checking).A couple of editorial suggestions.", "new_text": "a token that is not applicable to the server that it is connecting to, unless the client has the knowledge that the server that issued the token and the server the client is connecting to are jointly managing the tokens. A client MAY use a token from any previous connection to that server. A token allows a server to correlate activity between the connection where the token was issued and any connection where it is used."} {"id": "q-en-quicwg-base-drafts-ff8468e1c609b252700b2f091977e9c747422ec199383beb18f47b85c87197fa", "old_text": "limit its use of tokens to only the information needed to validate client addresses. Attackers could replay tokens to use servers as amplifiers in DDoS attacks. To protect against such attacks, servers SHOULD ensure that tokens sent in Retry packets are only accepted for a short time.", "comments": "It is permissible for a client to do 0-RTT resumption with a NST from one connection and a token from another. This clarifies that a client only needs to associate a token (from a NEW_TOKEN frame) with the server it was from and no additional state, and that a server can't require that a token be from the same connection as the NST in use.\nThe token is used for deciding whether the server believes the packet it's receiving actually comes from the address it says it's coming from. I could believe a server might trust a client only if it's doing a 1-RTT handshake, but not trust it if it's doing a 0-RTT handshake, so might encode that bit in the token. I don't think the server should be putting more specific bits like \"this is the NST that should be used\" or \"here are the SETTINGS from the previous connection\" in the token. If the server doesn't like a token it receives with a 0-RTT handshake, it can send a Retry packet but still continue with the 0-RTT crypto handshake (even though it's now 1-RTT from a transport perspective). I don't see the need for there to be fate sharing between accepting a token and accepting 0-RTT - a server can reject one without rejecting the other. I've created to express the problem statement as an issue.\nIf you're going to continue with 0-RTT, but not trust the token, why not just limit the amount sent by the amplification factor? The receipt of a Handshake packet constitutes path validation. Also, a Retry doesn't allow you to acknowledge the client's Initial or any 0-RTT packets.\nFor a 0-RTT resumption in H3, a client needs 4 pieces of state previously received from a server: 1) a NewSessionTicket 2) Transport Parameters 3) SETTINGS 4) a token (from a NEW_TOKEN frame) The last isn't strictly necessary, but without it the server is likely to burn a round trip with a Retry packet to prove the client controls the source address. On a given connection, there is only one server SETTINGS and Transport Parameters, though there may be multiple NewSessionTickets and tokens. The first 3 pieces of state are necessarily required to have come from the same connection. 
A NST (from any connection) is needed since without it resumption can't happen. Transport Parameters and SETTINGS must come from somewhere for 0-RTT to be useful; the obvious and currently specified place is the SETTINGS and Transport Parameters that were sent on the same connection the NST came from. There's no such need that the 4th piece of state come from the same connection as the others. The token is used to prove that the client controls the address the packet is purportedly coming from. This can be done separately from any crypto state (in the NST), application state (SETTINGS), or the rest of the transport state (Transport Parameters). For a client to store the state needed for 0-RTT resumption, one possible implementation is to create cache entries containing an NST, TP, and SETTINGS. It's easy to do that where the TP and SETTINGS get duplicated N times, assuming N NSTs were received on a connection. If M tokens were received on that connection, and tokens need to be added to those cache entries, either the M tokens need to be paired to the N NSTs somehow, or all M tokens are in all N cache entries. This introduces noticeable complexity to a client's session cache implementation to correlate unrelated things. It should be clarified that the client doesn't need to associate the token with the rest of the state needed for 0-RTT. has suggested wording for that clarification.\nAs pointed out in NAME in URL, the proposed change essentially forbids a server from storing SETTINGS in the token. I think I'm persuaded by NAME argument above that we'd better forbid such coordination, as it would be a burden to certain client implementations. OTOH, I think that untying the relationship between session tickets and tokens does open a new issue, see .\nI have some editiorial comments, but the design looks fine to me. While we might embed some information related to 0-RTT configuration in QUIC tokens, they would be decryptable and be usable, so we do not think there would be a problem for us (thanks to NAME for checking).A couple of editorial suggestions.", "new_text": "limit its use of tokens to only the information needed to validate client addresses. Clients MAY use tokens obtained on one connection for any connection attempt using the same version. When selecting a token to use, clients do not need to consider other properties of the connection that is being attempted, including the choice of possible application protocols, session tickets, or other connection properties. Attackers could replay tokens to use servers as amplifiers in DDoS attacks. To protect against such attacks, servers SHOULD ensure that tokens sent in Retry packets are only accepted for a short time."} {"id": "q-en-quicwg-base-drafts-15ddd71e216a34974e9ab57960579ecb7b306c902135b50a9ca4e54029ca6a5f", "old_text": "write data, understanding when stream flow control credit (data- flow-control) has successfully been reserved to send the written data end the stream (clean termination), resulting in a STREAM frame (frame-stream) with the FIN bit set; and reset the stream (abrupt termination), resulting in a RESET_STREAM frame (frame-reset-stream), even if the stream was already ended. 
On the receiving part of a stream, application protocols need to be able to: read data abort reading of the stream and request closure, possibly resulting in a STOP_SENDING frame (frame-stop-sending) Applications also need to be informed of state changes on streams, including when the peer has opened or reset a stream, when a peer", "comments": ", requires transport stacks to support abrupt closure of a stream even after the stream has been cleanly closed. This reflects the possibility to transition from \"Data Sent\" (stream has ended) to \"Reset Sent\" by sending a RESETSTREAM. Based on the state diagram in the transport doc, if an implementation was already in the \"Data Recvd\" state, they wouldn't actually send a RESETSTREAM, as they're already in a terminal state; a reset call would be a no-op or an error of some kind. However, NAME points out that in a handle-based API, this essentially means that handles for closed streams have to be recognizable as such for the lifetime of the connection. This might be burdensome for some implementations. Should we reword the requirement here to say that it's permitted if the stream hasn't yet reached the \"Data Recvd\" state.\nThis was clearly just a miss on the wording. I don't think that the intent was ever to require that streams be kept around indefinitely. Sure, some stacks might be able to send RESET_STREAM at a whim, but it's not necessary", "new_text": "write data, understanding when stream flow control credit (data- flow-control) has successfully been reserved to send the written data; end the stream (clean termination), resulting in a STREAM frame (frame-stream) with the FIN bit set; and reset the stream (abrupt termination), resulting in a RESET_STREAM frame (frame-reset-stream), if the stream was not already in a terminal state. On the receiving part of a stream, application protocols need to be able to: read data; and abort reading of the stream and request closure, possibly resulting in a STOP_SENDING frame (frame-stop-sending). Applications also need to be informed of state changes on streams, including when the peer has opened or reset a stream, when a peer"} {"id": "q-en-quicwg-base-drafts-5e5293d1d4477e9ecc4c702a349dea02ac9a44fd9db808c802bc60586ec6afc9", "old_text": "A Retry packet causes a client to send another Initial packet, effectively restarting the connection process. A Retry packet indicates that the Initial was received, but not processed. A Retry packet MUST NOT be treated as an acknowledgment. Clients that receive a Retry packet reset congestion control and loss recovery state, including resetting any pending timers. Other", "comments": "I didn't notice this change in NAME PR , but I think the older text is more correct. I added a bit more detail about why it cannot be treated as an acknowledgement. URL\nNAME I updated the text to reflect both the lack of processing and packet number.", "new_text": "A Retry packet causes a client to send another Initial packet, effectively restarting the connection process. A Retry packet indicates that the Initial was received, but not processed. A Retry packet cannot be treated as an acknowledgment, because it does not indicate that a packet was processed or specify the packet number. Clients that receive a Retry packet reset congestion control and loss recovery state, including resetting any pending timers. Other"} {"id": "q-en-quicwg-base-drafts-4f027ff651f26f8241c6a05e4337c4bfa84f5609999e54da3b1a2fb2864fbb7f", "old_text": "3.1.3. 
QUIC ends a loss epoch when a packet sent after loss is declared is acknowledged. TCP waits for the gap in the sequence number space to be filled, and so if a segment is lost multiple times in a row, the loss epoch may not end for several round trips. Because both should reduce their congestion windows only once per epoch, QUIC will do it correctly once for every round trip that experiences loss, while TCP may only do it once across multiple round trips. 3.1.4.", "comments": "I was trying to come up with text for this but this PR looks great.\nI believe there's an extra \"is\".\nCan you write a PR? Removing either 'is' doesn't read well to me, but I suspect a sentence rewrite could be an improvement. ie: \"A loss epoch ends when a packet sent after loss is first declared is acknowledged.\"?\nI agree a rewrite would be better.", "new_text": "3.1.3. QUIC starts a loss epoch when a packet is lost and ends one when any packet sent after the epoch starts is acknowledged. TCP waits for the gap in the sequence number space to be filled, and so if a segment is lost multiple times in a row, the loss epoch may not end for several round trips. Because both should reduce their congestion windows only once per epoch, QUIC will do it correctly once for every round trip that experiences loss, while TCP may only do it once across multiple round trips. 3.1.4."} {"id": "q-en-quicwg-base-drafts-b83e18fc43c057e6a230f9a6ae908d30ada402a58456aab7711eee8319a2b962", "old_text": "(request-response). Multiplexing of requests is performed using the QUIC stream abstraction, described in Section 2 of QUIC-TRANSPORT. Each request and response consumes a single QUIC stream. Streams are independent of each other, so one stream that is blocked or suffers packet loss does not prevent progress on other streams.", "comments": "Lest some readers get the impression that each request consumes a single QUIC stream and each response consumes a single QUIC stream.", "new_text": "(request-response). Multiplexing of requests is performed using the QUIC stream abstraction, described in Section 2 of QUIC-TRANSPORT. Each request- response pair consumes a single QUIC stream. Streams are independent of each other, so one stream that is blocked or suffers packet loss does not prevent progress on other streams."} {"id": "q-en-quicwg-base-drafts-64dd189a51c0d40dc290361a49e73409daced9f4a2261d88f86956b849a6adc6", "old_text": "The RECOMMENDED initial value for the packet reordering threshold (kPacketThreshold) is 3, based on best practices for TCP loss detection RFC5681 RFC6675. Some networks may exhibit higher degrees of reordering, causing a sender to detect spurious losses. Implementers MAY use algorithms", "comments": "From Gorry's review.\nGorry suggested that the QUIC recovery draft should specifically recommend against packet reordering thresholds less than 3 because it will not be sufficiently tolerant of real-world reordering. Pull already created.\nFWIW, I think this is likely design because it adds a SHOULD NOT, but it also doesn't substantively change the existing recommendation.", "new_text": "The RECOMMENDED initial value for the packet reordering threshold (kPacketThreshold) is 3, based on best practices for TCP loss detection RFC5681 RFC6675. Implementations SHOULD NOT use a packet threshold less than 3, to keep in line with TCP RFC5681. Some networks may exhibit higher degrees of reordering, causing a sender to detect spurious losses. 
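For illustration, a minimal sketch of the packet-threshold rule being discussed, with the time-threshold side of loss detection omitted; the names and demo values are assumptions.

    # Minimal sketch of packet-threshold loss detection: a sent packet is declared
    # lost once at least K_PACKET_THRESHOLD packets sent after it have been
    # acknowledged. Time-threshold detection is omitted for brevity.

    K_PACKET_THRESHOLD = 3  # SHOULD NOT be lower than 3

    def detect_lost_packets(unacked_packet_numbers, largest_acked):
        lost, still_outstanding = [], []
        for pn in unacked_packet_numbers:
            if pn <= largest_acked - K_PACKET_THRESHOLD:
                lost.append(pn)
            else:
                still_outstanding.append(pn)
        return lost, still_outstanding

    print(detect_lost_packets([1, 2, 4, 6], largest_acked=5))  # ([1, 2], [4, 6])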
Implementers MAY use algorithms"} {"id": "q-en-quicwg-base-drafts-713f3c70d7284e43faef8e67f26c6ec37fdf3700c59a2784bbd98f1044480234", "old_text": "Endpoints validate ECN for packets sent on each network path independently. An endpoint thus validates ECN on new connection establishment, when switching to a new server preferred address, and on active connection migration to a new path. Even if an endpoint does not use ECN markings on packets it transmits, the endpoint MUST provide feedback about ECN markings", "comments": "This turned out to be fairly simple, but I think that it's worth writing down. I don't have concrete recommendations, but I could add them if people thought that they help. See the commented-out bit.\nThanks both for the reviews, I've taken some of the suggestions and moved this to the appendix.\nAfter discussing with editors, we agreed that this was a reasonable interpretation of existing text. We do agree that it would be worth discussing Mirja's views on ECN reliability and possibly changing the algorithm to reflect that, but it would be a design change.\nIn , NAME suggested that a different algorithm might be better to recommend enabling ECN and only disabling it if a failure is detected. That algorithm adopts the same posture as the draft: that ECN is optional and that endpoints are permitted to be cautious in enabling it. If we want to adopt the view that ECN works and endpoints are required to use it unless it breaks, that's a bigger change. I open this issue, while being aware that this is re-opening something we discussed and agreed previously, without there being new information to present. Chairs, if you do think we should discuss this, it's a design issue.\nI would not be in favor of reopening this.\nJust to clarify the proposal was that if you decide to enable ECN, you should just do it and only disable if you notice a failure, instead of having the more complicated probing logic that we have right now (in the appendix I think)\nThe shows that out of 14 implementations doing active tests, only 3 demonstrate ECN support. No ill will, but major places like AWS or the Windows OS do not support ECN, so it is not clear what more text in the spec would change.\nThis is just about this one paragraph: The concrete proposal would be to remove this paragraph.\nDiscussed in ZRH. Proposed resolution is to close with no action.\nI will make a proposal for an editorial PR for the paragraph mention above to make it sounds less scary that this risk exists.\nECN-capable is not a constant condition, but we don't have any algorithm that governs it. I don't think that we can just say \"mark until validation of ECN counts fails\". Maybe the following algorithm is OK: When starting a new path: set ECN state to \"testing\" after N packets or M round trips, set ECN state to \"unknown\" on every ACK, check ECN counts if ECN validation fails set ECN state to \"failed\". if ECN validation passes and ECN state is \"uncertain\", set it to \"capable\". when sending packets, mark with ECT(0) if state is \"testing\" or \"capable\" (I know that this is unchanged text, but I just realized that we could be clearer.) Gorry pointed out that we can add an extra thing: if a path is marked as \"failed\", you MAY choose to occasionally switch back to \"testing\". Mirja pointed out that we need to deal with bleaching more than we need to worry about black holes. We should negotiate that. 
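The state progression sketched in the comment above might look roughly like the following; the testing budget, state names, and validation inputs are placeholders rather than anything specified by the draft.

    # Rough per-path ECN state machine following the progression sketched above
    # (testing -> unknown -> capable or failed). Thresholds are placeholders.

    class EcnPathState:
        TESTING, UNKNOWN, CAPABLE, FAILED = "testing", "unknown", "capable", "failed"

        def __init__(self, testing_packet_budget=10):
            self.state = self.TESTING
            self.marked_sent = 0
            self.testing_packet_budget = testing_packet_budget

        def should_mark_ect0(self):
            # Mark while testing or once the path is known to be capable.
            return self.state in (self.TESTING, self.CAPABLE)

        def on_packet_sent(self):
            if self.state == self.TESTING:
                self.marked_sent += 1
                if self.marked_sent >= self.testing_packet_budget:
                    self.state = self.UNKNOWN

        def on_ack(self, ecn_counts_valid, any_marked_packet_acked):
            if not ecn_counts_valid:
                self.state = self.FAILED      # validation failed: stop marking
            elif any_marked_packet_acked and self.state in (self.TESTING, self.UNKNOWN):
                self.state = self.CAPABLE     # counts check out: keep marking

    path = EcnPathState()
    path.on_packet_sent()
    path.on_ack(ecn_counts_valid=True, any_marked_packet_acked=True)
    print(path.state, path.should_mark_ect0())  # capable True

A path marked "failed" could, as suggested above, occasionally be moved back to "testing" to re-probe for ECN support.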
Originally posted by NAME in URL\nI also would suggest one more intro sentence could please some to say \"When a path supports ECN, this enables ECN marking and uses ECN for a new path\" - to effectively say that the validation is about how to detect and react to problems with a path that is not ECN capable.\nDiscussed in Cupertino.\nPresumably this is just another section in Section 13.4? That would be useful.", "new_text": "Endpoints validate ECN for packets sent on each network path independently. An endpoint thus validates ECN on new connection establishment, when switching to a new server preferred address, and on active connection migration to a new path. ecn-alg describes one possible algorithm for testing paths for ECN support. Even if an endpoint does not use ECN markings on packets it transmits, the endpoint MUST provide feedback about ECN markings"} {"id": "q-en-quicwg-base-drafts-1f1ed2ed65111395c2b3a171b2b38f4656cc3819d2bf1bbd910467daee34410d", "old_text": "carry this connection ID for the duration of the connection or until its peer invalidates the connection ID via a RETIRE_CONNECTION_ID frame (frame-retire-connection-id). Connection IDs that are issued and not retired are considered active; any active connection ID can be used. An endpoint SHOULD ensure that its peer has a sufficient number of available and unused connection IDs. Endpoints store received", "comments": "Note also the editorial changes in , which are great and still needed: this adds a little bit more to the reference from those changes to do the key clarifying for . The remaining clarification about is coming in the SPA PRs, but I wanted to look at this paragraph in one place.\nSection 12.2 makes two quasi-conflicting statements about coalesced packets. In a connection where the sender has multiple CIDs in hand, using two different CIDs in the two coalesced packets is entirely consistent with the guidance to the sender, but would be dropped based on the guidance to the receiver. We should align these two statements.\nFixing this would require a change to normative language. I'm of the opinion that the first should change to different connection IDs rather than connections; the second might change to a MUST NOT, but it could remain a SHOULD and achieve the same end. The main reason for this is that routing will be based on the first packet. But there is also a secondary benefit in not creating a strong correlation between different connection IDs for the same connection.\nBut this complicates the coalescing logic, because you no longer only need to look at packet types but now also CID fields. At least in my stack, that is an ugly change to make at this stage.\nThe alternative, which I can now see is probably a smaller change is:\nWhat motivated this was two observations: we don't coalesce except during the handshake, where we are not migrating some stacks generate packets for coalescing while processing, and so they might generate a datagram with multiple connection IDs\nNAME writes: This is much simpler alternative than what is being proposed. Now an implementation can only drop a packet after doing a connection lookup, whereas before it could just compare the CID to the previous value. If this is just a matter of personal preference, I vote for this approach, so now the score is 1:1 (with NAME being on the opposite side). To take a step back: why is recommendation to drop packets exist in the first place? 
If a sender wants to coalesce packets from different connections, why not let it?\nI think it should not allow having multiple packets for different connections in a single UDP datagram - please. Currently I see implementations (including mine) have something like: Receive UDP packet. Parse packet header. Check CID, find the connection handler. Pass the packet to the respective connection (thread) to handle the packet. I think it makes implementations much more complicated (already). Also I wonder it may cause security issue if having bad assumption?\nThe classic example would be coalescing Handshake packets, whose DCID is the SCID used by the server, and Initial or Zero-RTT packets, whose DCID is the client-chosen Initial DCID. The client MAY switch to using the server chosen CID for Initial packets. Maybe we should say that it MUST do that if it is going to coalesce Initial and Handshake packets. Not so clear about 0-RTT packets. Is it actually allowed to send them with a different DCID than the Initial DCID?\nNAME : Our assumption has been that load balancers, for instance, can simply route on the CID in the first header they see in the UDP datagram. If we allow different connections to be coalesced, that complicates things. It may also start to leak some things about the way load is distributed within the server network: if some CIDs when combined cause loss and some don't, that reveals some things about how the server load is distributed. Is there a strong argument for changing this behavior? This new language in was always the intent. I suspect it didn't change when we started allowing multiple CIDs within a connection. Unless there's an argument for changing behavior, this is really an editorial PR.\nNAME If I understand correctly, every time you change the connection ID, you change the connection ID for all packet number spaces. Assuming that my reading of the spec is correct in this regard, it seems like there's no compelling reason for making this change?\nNAME the spec actually says (7.2): The first flight of 0-RTT packets use the same Destination Connection ID and Source Connection ID values as the client's first Initial packet. Upon first receiving an Initial or Retry packet from the server, the client uses the Source Connection ID supplied by the server as the Destination Connection ID for subsequent packets, including any 0-RTT packets. SO you are correct, there is no ambiguity.\nNAME Thank you for looking this up. Then I'm opposed to making any change here. Not only is it unnecessary (that alone should be enough), it would also make it more difficult to evolve the way we deal with coalesced packets in the future: I can imagine a future QUIC version introducing a way to authenticate the entire coalesced packet, while at the same time compressing the header. This will be easier if we keep the explicit property that all connection IDs of the coalesced packets match.\nNAME I'm not sure if I agree. Coalescing of QUIC packets is a feature specific to v1, entities that operate without the knowledge of the connection (e.g., load balancers, datagram dispatchers) are not expected to consult the CID values of QUIC packets that are appended. Therefore, what we are talking here does not affect what we can do in the future.\nThat argument was based on the thought that it will be easier in a future QUIC version because the diff to v1 would be smaller. 
My main point is that this change is unnecessary because when endpoints change connection IDs, they change it for all packer types at the same time.\nNAME Thank you for the clarification. I think I do not agree to this, too. IIUC, the consensus we reached with was that the CID of a Handshake packet and that of the 1-RTT packet can be different. While it is true that the initial DCID set by the client is expected to change when the client receives the first \"active\" connection ID supplied by the server, the fact does not mean that the CIDs of all QUIC packets within a datagram has to be the same in all the cases.\nI think we need to make a change, in one direction or the other. It's unfortunate to mix CIDs, since it clearly links them to any observer, but doesn't seem like a big deal. For some implementations, it sounds like it's a big deal to avoid mixing them. Therefore, while I'm content with either resolution, it seems like it makes more sense to resolve toward being more permissive; e.g. . Mixing packets from different connections seems like a generally bad idea, since it assumes an implementation structure on the far end for it to work. The more likely outcome is that coalesced packets from other connections will simply be dropped as invalid. (Of course, this very property could be used for padding / misleading observers.)\nHi Mike, Can you say more about why you don't see this as an issue? Is it because this use case is most likely during the handshake? Or is there some other linkability already present that makes this less of an issue? regards, Ted\nSome implementations try to change the CID as soon as possible, to avoid linking 1RTT traffic with the CID visible during the handshake and to improve privacy. There will be a brief window when they still need to send both some handshake packets and some 1RTT packets. But coalescing a Handshake packet with the old CID and a 1RTT packet with the new CID links old and new CID and thus negates the effort. And just in case someone asks, no, you don't really need to decrypt the packets to access the second CID. You just need the coalesced packet and another 1RTT packet. Not hard.\nNAME If that is the concern, wouldn't it be better for endpoints to migrate to a new CID after learning that the peer has dropped the Handshake keys? If an endpoint switches to the new CID before the handshake keys are dropped by the peer, there's chance that the peer might use a new CID in the Handshake packets that the peer sends. That at least leaks when the connection was established for that CID.\nNAME yes, the simplest way to not run into coalescing problems is to not start migrations or CID changes before Handshake Done.\nApologies, I'm getting lost in the weeds here. I agree with NAME that we need to make a change, and that is really meant to fix the text to what the intent always was. NAME NAME NAME Are we all agreeing on this point?\nNAME That is my view. I think to be an editorial amendment of . Regarding linkability problem, the simple and safe way of mitigating the problem (as proposed in URL) does not interfere with the proposed resolution.\nNAME I personally prefer the old version, force single CID in coalesced packet. Loosening the text to allow multiple CID requires a code change in my implementation.\nI'm with NAME Single CID.\nBoth, actually. This occurs during the handshake, when addresses are required to be stable. 
Unless you're running a large number of parallel connections, all CIDs used on that 4-tuple during the handshake belong to the same connection with very high probability. You can break that linkage by deliberately changing CID and port at the same time, but you can't do that until the handshake is confirmed. If you want the linkage broken, you need to do that jump post-handshake anyway.\nHi Mike, I think \"high probability\" is probably app-dependent, because there may be cases where there are significant numbers of parallel connections. In the case of proxied queries, for example, the use of the same 4-tuple but a different CID could turn out to be common, because the proxy may prefer to keep traffic from its different clients separate even if they could be served by the same upstream server. A DNS example: if you have an DNS resolver talking to an upstream proxy or a common authoritative server, that resolver may want to set a different Client Subnet for different query streams, so that input can be used to determine the preferred reply. In the presence of a NAT or CGNAT, there could be interesting interaction with the dwell timers in the state table of the middlebox. If you change it too late, the middlebox may recycle the most recently freed port, which could result in you getting the same 4-tuple to an observer on that part of the path. Speaking strictly as an individual, I think using a single CID per coalesced packet as Christian prefers is cleaner and leaves open app behaviors we may not have yet identified. regards Ted I'm not a big fan of Client Subnet, and these particular parallel connections may or may not pose a risk; it's just a proxy situation where the parallelism might occur.\nThe discussion is active, I'm marking this as design.\nNAME I might be missing something, but if an endpoint wants to cut linkability between the packets used during the handshake and the packets used in 1-RTT, that endpoint has to switch to a new CID after the peer has dropped its Initial and Handshake keys, regardless of the outcome of this issue. That is because if the endpoint switches to a new CID while the peer is still sending Initial or Handshake packets, we allow the peer to use that new CID in the long header packets sent in response. Therefore, I do not think that requiring endpoints to use one DCID in all the coalesced QUIC packets help us in practice.\nIn my implementation, ngtcp2, it processes packet in the way that PR proposes; which means I just over looked existing text, but anyway, I think that processing a packet atomically is cleaner approach in implementation wise rather than passing a some kind of pointer to the very first packet header.\nNAME Once a non-coalesced packet is passed to a connection, in my implementation I don't have to do any checking. The server already makes sure to pass packets to the correct connection based on the DCID. So there's additional validation logic required for coalesced packets. And the easiest way is to just check that all DCIDs match. Sure, it would be possible to somehow query which other DCIDs would also be valid for the same session, but it's more complicated. 
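As an illustration of the "all DCIDs in a datagram must match the first" check being discussed, here is a toy receiver-side routine; real long/short header parsing is omitted and the coalesced packets are represented as pre-parsed dicts.

    # Toy illustration of the receiver-side rule: process the QUIC packets
    # coalesced in one UDP datagram and drop any packet whose Destination
    # Connection ID differs from the first packet's.

    def process_coalesced(datagram_packets):
        accepted, dropped = [], []
        first_dcid = None
        for pkt in datagram_packets:
            if first_dcid is None:
                first_dcid = pkt["dcid"]     # routing decisions key off this one
            if pkt["dcid"] == first_dcid:
                accepted.append(pkt)
            else:
                dropped.append(pkt)          # silently discard, do not reset
        return accepted, dropped

    datagram = [
        {"type": "initial", "dcid": b"\xaa"},
        {"type": "handshake", "dcid": b"\xaa"},
        {"type": "1rtt", "dcid": b"\xbb"},   # mismatched DCID: dropped
    ]
    accepted, dropped = process_coalesced(datagram)
    print(len(accepted), len(dropped))  # 2 1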
And given that nobody has come up with any good reason to coalesce packets with different DCIDs in the first place, I'm wondering how this additional complexitiy would be justified.\nI don't foresee a large problem with the proposed change, but I also don't think it's necessary.\nDo we have a proposed resolution for this one?\nI think we settled on requiring connection IDs to be the same, but I haven't had the time to write that up, so not just yet.\nNAME FTR, discussion happened on the mailing list . My sense is that it's still not clear to some (including me) why the current fix in (requiring \"connections\" to be the same) doesn't work, but nobody is arguing strongly against what NAME said above (requiring \"connection IDs\" to be the same). So that might be the least common denominator.\nadded the text \" Connection IDs that are issued and not retired are considered active; any active connection ID is valid for use at any time, in any packet type. \" As far as I can tell there is no text specifying that an issued connection ID cannot be used with subsequent connections. The introduction to connection IDs state that there must not be correlation across connections, but if an issued connection ID was never consumed, there would be no correlation because it was transmitted encrypted.\nWe're currently not specifying which DCID is supposed to be used on retransmissions of Handshake packets. A client might receive a NEWCONNECTIONID frame in the first couple of 1-RTT packets it receives, before receiving either a HANDSHAKEDONE or an acknowledgement for a 1-RTT packet is sent itself. Therefore, it's still running loss recovery on the Handshake packets it sent out earlier. If there's an acknowledgement outstanding for one of those Handshake packets, the client will have to retransmit that packet. Which DCID is it supposed to use on the retransmission? Is it the DCID that was used on the original packet? Or is it the CID provided in the NEWCONNECTIONID frame? Note that the NEWCONNECTION_ID frame might already have requested the retiring of the CID used during the Handshake.\nAs I can see, there are three options: a) let the receiver of NCID to send Handshake packets using the new CIDs b) require (or recommend) receiver of NCID to continue using the original DCID until the handshake is confirmed c) require (or recommend) the sender to withhold the emission NCID (with a non-zero RTP) to until it sees an ACK for handshake confirmation I prefer (c) (or maybe (b)), because (a) has corner cases even though it might seem easy. Consider the case where a server sends TP.preferredaddress then as the first packet, sends the NCID frame with RTP set to 2. If we adopt (a), this would mean that the client would have only one CID that it can use (the one being embedded in the NCID frame) even though it needs two (one for the original path, and another for the path specified by TP.preferredaddress). As you can see, the design of the NCID has been based on the assumption that providing one new CID alongside a retirement request guarantees that the receiver would have enough active CIDs. But in this particular case, that is not the case. 
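To visualize the bookkeeping being debated, here is a rough sketch of how a receiver of NEW_CONNECTION_ID frames might track active CIDs, honour Retire Prior To, and pick a DCID for outgoing packets (including Handshake retransmissions, if that is what the group settles on); the structure and names are assumptions, not taken from the draft.

    # Sketch of receiver-side bookkeeping for NEW_CONNECTION_ID frames: retire
    # everything below retire_prior_to, keep the rest as active, and use an
    # active CID for outgoing packets. Names are illustrative.

    class PeerCids:
        def __init__(self, initial_cid):
            self.active = {0: initial_cid}   # sequence number -> CID
            self.retire_prior_to = 0

        def on_new_connection_id(self, seq, cid, retire_prior_to):
            retired = []
            self.active[seq] = cid
            if retire_prior_to > self.retire_prior_to:
                self.retire_prior_to = retire_prior_to
                for old_seq in sorted(self.active):
                    if old_seq < retire_prior_to:
                        retired.append(old_seq)   # send RETIRE_CONNECTION_ID for these
                        del self.active[old_seq]
            return retired

        def current_dcid(self):
            # Use the lowest-numbered CID that has not been retired.
            return self.active[min(self.active)]

    cids = PeerCids(initial_cid=b"\x01")
    print(cids.on_new_connection_id(seq=1, cid=b"\x02", retire_prior_to=1))  # [0]
    print(cids.current_dcid())  # b'\x02'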
Based on this observation, and based on my assumption that a non-zero RTP would be sent by only some of the endpoints (the necessity of CID retirement is questionable for HTTP/3, the scope of QUIC v1, as the HTTP connections are typcially short-lived and can most likely be gracefully terminated to induce a new connection when retirement of CIDs is necessary), my preference goes to (c): an endpoint that uses RTP should cover the cost.\nNAME (I think you mean RPT, not RTP) Thank you for the analysis. I prefer (c) as well, except that I do not want to condition it on non-zero RPT, to keep things simpler. Otherwise we're still left with what to do at the receiver. Ensuring that the handshake confirmation is acked means that there are no HS packets pending.\nI'll also note that this issue was found through a resilience test we've built for the interop runner -- high loss handshake test. We're excited to see how everyone's handshake machinery performs under 50% loss :-)\nAt the time of this writing, ngtcp2 does (b).\nNAME for you problem with (a) isn't that just a general problem with having multiple paths and then getting a NEWCONNECTIONID with RPT for all previous CIDs? You end up in a scenario where you only have 1 CID (the new one) and multiple paths that just had all their CIDs retired. If you can get away with using that one CID for the active path, you can still make forward progress. If not, the connection dies.\nI believe we specify (b) already: \"A client MUST only change the value it sends in the Destination Connection ID in response to the first packet of each type it receives from the server (Retry or Initial); a server MUST set its value based on the Initial packet. Any additional changes are not permitted; if subsequent packets of those types include a different Source Connection ID, they MUST be discarded. This avoids problems that might arise from stateless processing of multiple Initial packets producing different connection IDs.\" URL\nNAME that is just indicating when a client should change DCID in response to receiving a Initial/Retry packet with a new CID. I don't think anything restricts either side from sending NEWCONNECTIONID frames in their first 1-RTT packets and their peers immediately using them for any packets (including handshake) they send. Personally, I don't have a problem with (a). I think that no matter what receivers will have to handle the multiple paths but single CID scenario, so I would prefer not to special case the handshake. I also don't think we have handshake confirmation problems because an endpoint should only send a NEWCONNECTIONID out when it's able to accept it. Then, if the peer can decrypt the packet with the new CID, I see no reason why it shouldn't be allowed to use it immediately.\nNAME my reading of the text I quoted is that you cannot use any DCIDs in long header packets that weren't either the original client DCID/SCID or from a server Initial or Retry.\nNAME There's a fourth option: d) require that the sender withholds the NCID frame until it has confirmed the handshake. As soon as an endpoint confirms the handshake, it drops the Handshake keys (I'm assuming here). As a result, the endpoint obviously doesn't care about Handshake packets any longer (and especially doesn't care which DCID was used to send the packet). 
Therefore, there's no problem if the receiver of the NCID frame immediately retires the connection ID used during the handshake, even if the the NCID and the HANDSHAKE_DONE frame were reordered.\nNAME The part about what to do with DCID changes is underspecified. From the part of the spec you quote -- \"Any additional changes are not permitted; if subsequent packets of those types include a different Source Connection ID, they MUST be discarded.\" -- it's not clear what these additional changes are limited to. I think the intent was for all long-form packets, but that's certainly not clear here. In any case, the problem with that approach is that the client will have to carefully construct HS packets with the old DCID while sending everything else with the new DCID.\nNAME : (d) works too, but an endpoint still has to decide which DCID to use for sending HS packets in the case that the HANDSHAKE_DONE frame is lost. I'd rather not leave that unspecified.\n\"of those types\" => Retry or Initial, since that's what it was talking about. That's trying to avoid the case where the server statelessly generates a new Retry (or Initial, in the rare case that would be stateless) with a second SCID upon receiving a retransmission or duplication of the client's Initial. But it doesn't say anything about the server issuing new CIDs via an NCID frame in 1-RTT before the handshake is confirmed. I think (a) or (b) would be fine; the server is clearly intending to accept the other CID if it issued it, but could conceivably use separate logic for long-header versus short-header routing.\nCan someone explain why using the new connection ID is a problem?\nNAME I think that that is a good way at looking into the issue. I agree that the endpoints can start using the new CIDs during handshake, if the peer sends NCID frames. The only problem that I can think of is related to retirement (see URL). When the server provides TP.preferred_address, and also decides to retire the CIDs immediately after sending all the handshake transcript, there's going to be a very small time window in which it has to send two CIDs (one for the current path and another for the path using the preferred address). Unless the QUIC stack allows bundling of two NCID frames when building a QUIC packet, the client might end up not having sufficient amount of CIDs, in which case it might terminate the connection. It is my view that this is very different from the ordinary case of retiring CIDs, in which case there would be enough time window to guarantee that a handful number of new generation CIDs are delivered before the older ones are delivered (cc NAME That said, I now tend to think that the most we might have to do is the following two: clarify that CIDs received using NCID frames can be used for Handshake packets have a cautionary text stating that a server needs to make sure that the client has enough number of new generation CIDs when it retires an older generation of CIDs\nNAME Wouldn't we be able to solve this by using my proposed option d?\nArtificial constraints on when the frame can be sent would be terrible to enforce and would introduce a performance cost for using NEWCONNECTIONID. In a great many cases, the 0.5RTT from the server is idle, which makes it a great way to get things like NEWCONNECTIONID sent without taking capacity from real work. Asking the server to defer sending would mean that NEWCONNECTIONID could compete with HTTP responses and the like. I see no reason not to allow use of connection IDs when they are available. 
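A minimal sketch of that view (illustrative names; not normative behavior): a connection ID is usable from issuance until retirement, regardless of packet type:

    class ActiveCids:
        def __init__(self):
            self.by_seq = {}                  # sequence number -> CID bytes

        def on_new_connection_id(self, seq, cid):
            self.by_seq[seq] = cid            # active as soon as it arrives

        def on_retire(self, seq):
            self.by_seq.pop(seq, None)        # no longer usable once retired

        def pick(self):
            # Any active CID is valid, in any packet type; choose the newest.
            return self.by_seq[max(self.by_seq)]

    pool = ActiveCids()
    pool.on_new_connection_id(0, b"\x0a")     # handshake CID
    pool.on_new_connection_id(2, b"\x0c")     # delivered in a 1-RTT NCID frame
    assert pool.pick() == b"\x0c"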
They apply to the connection as a whole. You will note that we allow changing them with the Initial/Retry mini-protocol in ways that don't correlate with other connection-level events, so this isn't any different in that regard. Forcing the connection ID state to synchronize with other state changes seems more likely to complicate things than help. The caution NAME mentions is always true. Whenever an endpoint is forced to retire connection IDs, a short supply is always a risk to the connection.\nI think QUIC allows more than 1 NCID frames in a single QUIC packet. Now my preference is (a). If server utilizes retire-prior-to, it should know the consequences and should provide sufficient backup connection IDs. Otherwise, it would be misconfiguration of server software.\nNAME The problem is that the client might end up receiving only some of the NCID frames, unless the server always sends (and retransmits) the set of NCID frames as a whole. But anyways, I agree that (a) is probably fine. That is because it is trivial to design a server that never requests CID retirement during the handshake. All you need to do is start issuing the new generation of CIDs at least N seconds before sending NCID frames that asks for the retirement of the previous generation, where N is your handshake timeout. Knowing that this design is why I initially preferred (c), not realizing that it works equally well with (a). But know that I understand that, I prefer (a) as it is simpler.\nGiven a server has to sent NCID to have the CID change, I'm fine with (a), but I thought it was not what we'd previously agreed to and specified, so I believed (b) was the status quo.\nI don't think this is a serious problem, because if the server doesn't want the long header DCID to change, it can wait before sending NEWCONNECTIONID. That said, we do have an issue here in that different readers of the draft had different interpretations of what was allowed. All that to say I don't much care if this is allowed or prohibited, but we should at least make an editorial change to clarify things.\nI agree with NAME here. Any alternative is ok, but the draft should say whether servers can expect that the client might switch DCID and whether the clients can expect the server to send NCID w/ RPT>0 before the handshake is complete.\nLet's keep it simple: no migration before Handshake Done, period. That would include prohibit NCID in 0-RTT Also state that implementations MAY drop any packet received with NCID different from original before Handshake Done.\nNAME The problem is not that simple. Consider the case where the server receives ClientFinished and sends two 1-RTT packets, the first packet containing HANDSHAKE_DONE, and the second packet containing an NCID with RPT set to 1. If the first 1-RTT packet gets lost, the client receives a request to retire the connection ID before it confirms the handshake.\nI think I'd prefer a design that meets the two requirements stated below: A server should not be required to withhold NCID frames until it receives an ACK for HANDSHAKEDONE. A client should be allowed to use the original CID until the handshake is confirmed. The intention is to minimize additional complexity to existing (and future) QUIC stacks. One solution that meets these requirements is to state that a NCID frame confirms the handshake, exactly the same way as HANDSHAKEDONE frame does. That would be a trivial change to any of the QUIC stacks, and there would be no ambiguity regarding the state changes.\nDiscussed in ZRH. 
Proposed resolution is to clarify that issuing a CID means being prepared to seeing it used at any time. Additional Editorial issue on packetization issues.\nEditorial PR on packetization challenges with retransmitted frames is .\nUgh, editorial changes for a design issue. Reopened for consensus calling.\nThis was fixed by . is related to this issue, but that simply clarifies things. To keep things sane, let's just close this when is merged.\nLooks good. URL discovered a minor \"discrepancy\" in our requirements for coalescing packets. We required that: senders not coalesce packets for different connections in the same datagram. receivers reject packets with different connection IDs in the same datagram. URL was the result of a discussion where I was convinced that looser constraints here weren't harmful. The requirements would identify connections, not connection IDs. Originally, my inclination was to fix the requirement on the sender to use connection IDs, which would remove some linkability. But then I was reminded that linkability is already defeated for several reasons. You can't coalesce outside of the handshake and we require a stable path for the handshake AND long headers include a source connection ID that can be used for linkability. So the linkability concern is basically lost. We also learned that some implementations were generating packets while processing incoming packets. Later they would coalesce these into a datagram that might then have different connection IDs on the packets it contains. There has been some opposition to the proposed resolution in PR 3870. Apparently, for some, having multiple connection IDs in the same datagram complicates processing. I don't understand this objection. It seems to me more difficult to retain state across packets than it is to process each atomically. I was hoping that Christian or Nick can explain more about how this affects them. Marten also indicated that there was a reason not to allow this, but if I understand that, this is based on some potential for future optimization, which could be conditional on having consistent connection IDs. If we moved to a scheme that required consistent connection IDs, anyone using that scheme could be required to avoid coalescing different connection IDs. So this seems more of a theoretical concern. And - in anticipation of maybe choosing to make senders use a consistent connection ID - if you might be generating coalesced datagrams with different connection IDs in packets, how hard would it be to ensure that these are consistent? Lars and Ian, I think both of you indicated that you might do this.", "new_text": "carry this connection ID for the duration of the connection or until its peer invalidates the connection ID via a RETIRE_CONNECTION_ID frame (frame-retire-connection-id). Connection IDs that are issued and not retired are considered active; any active connection ID is valid for use at any time, in any packet type. This includes the connection ID issued by the server via the preferred_address transport parameter. An endpoint SHOULD ensure that its peer has a sufficient number of available and unused connection IDs. Endpoints store received"} {"id": "q-en-quicwg-base-drafts-c17687922bfc24d60b84f3f65f63b7e3f1ea86a81076d6d3dc7a7bba8c2f011f", "old_text": "All HTTP/3 requests MUST include exactly one value for the \":method\", \":scheme\", and \":path\" pseudo-header fields, unless it is a CONNECT request (connect). 
An HTTP request that omits mandatory pseudo- header fields or contains invalid values for those fields is malformed (malformed). HTTP/3 does not define a way to carry the version identifier that is included in the HTTP/1.1 request line.", "comments": "From an interop discussion with NAME and NAME HTTP/1.1 the presence of a Host header: If no Host header is present, the section on describes how you determine what host is being asked for, ending with \"guess\" if it's an HTTP/1.0 request. HTTP/2 says that you generate an pseudo-header if sending direct-to-H2 and carry through the Host header if you're relaying HTTP/1.1. There is no discussion about what to do with an HTTP/1.0 (or 0.9) request being relayed over HTTP/2, which wouldn't have a Host header; you're neither required to generate or send one, even an empty one, nor are you given guidance about how to figure it out. HTTP/3 carries forward the H2 language (currently by reference, and explicitly after ). Do we want to have some guidance here? Or does this issue instead belong in URL to move some of the existing guidance to be HTTP version-independent? (NAME\nNAME Thank you for looking into the problem. Regarding if (or how) we should handle the issue, I think my weak preference goes to having one rule that applies to both HTTP/2 and HTTP/3. That would be less surprising to the users, and would also be helpful to implementations. It could well be the case for an HTTP server to have a common logic of handing pseudo headers (or lack of) between HTTP/2 and HTTP/3.\nHost is an HTTP/1.x header field (it was just late to the party), so HTTP/1.0 includes it as an extension. And that requirement is specific to HTTP/1.1, so would theoretically not apply to HTTP/1.2 (and that was intended): it was a short term requirement imposed by the IESG because they wanted to require deployment and didn't understand that it was already deployed. In general, I don't understand why \":authority\" exists (as opposed to always using \":host\"), but I think the HTTP requirement is simply that the identifying data must be present somewhere in the request.\n\":authority\" exists for the purpose of always transporting the equivalent of the , with \":scheme\" and \":path\" transporting the rest. It's mostly a consistency thing, I think. If you have a \"Host\" header instead, that's acceptable, too. So should we say that the target authority MUST be present in either \"Host\" or \":authority\", or do we leave it alone?\nWell, assuming we still allow requests for URIs that do not have an authority component (e.g, a URN or similar), I think it is better to just leave it alone. OTOH, folks are still going to ask what needs to be sent when the request target doesn't have an authority component, so that ought to be defined somewhere here more sensibly than it was for 1.1.\nDiscussed in ZRH. Proposed resolution is that if a URI scheme has a mandatory component, it better have one of those two things, preferably a :authority. Alert httpbis.\nDoesn't this beg the question \"what if the request is for a URI that does not have a mandatory authority component?\" All of the other cases are described. I would add something like: Thanks!", "new_text": "All HTTP/3 requests MUST include exactly one value for the \":method\", \":scheme\", and \":path\" pseudo-header fields, unless it is a CONNECT request (connect). 
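As a rough illustration of the checks this text implies (the authority rules are spelled out in the paragraphs that follow), here is a minimal sketch; the function and field names are hypothetical and not part of the draft:

    MANDATORY = (":method", ":scheme", ":path")

    def request_is_malformed(headers):
        """headers: dict mapping field name to a list of values."""
        if headers.get(":method") == ["CONNECT"]:
            return False                       # CONNECT has its own rules
        for name in MANDATORY:
            if len(headers.get(name, [])) != 1:
                return True                    # missing or duplicated
        authority = headers.get(":authority", [])
        host = headers.get("host", [])         # the Host header field
        if headers[":scheme"][0] in ("http", "https"):
            values = set(authority + host)
            if not values or "" in values or len(values) > 1:
                return True                    # absent, empty, or mismatched
        return False

    ok = {":method": ["GET"], ":scheme": ["https"], ":path": ["/"],
          ":authority": ["example.com"]}
    assert request_is_malformed(ok) is False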
If the \":scheme\" pseudo-header field identifies a scheme which has a mandatory authority component (including \"http\" and \"https\"), the request MUST contain either an \":authority\" pseudo-header field or a \"Host\" header field. If these fields are present, they MUST NOT be empty. If both fields are present, they MUST contain the same value. If the scheme does not have a mandatory authority component and none is provided in the request target, the request MUST NOT contain the \":authority\" pseudo-header and \"Host\" header fields. An HTTP request that omits mandatory pseudo-header fields or contains invalid values for those fields is malformed (malformed). HTTP/3 does not define a way to carry the version identifier that is included in the HTTP/1.1 request line."} {"id": "q-en-quicwg-base-drafts-4022537d9fb7ed8892c510c58cdc70411f7c4199e5527ad88e5c8afa0f5c8fcf", "old_text": "A packet that does not increase the largest received packet number for its packet number space (packet-numbers) by exactly one. A packet can arrive out of order if it is delayed or if earlier packets are lost or delayed. An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types", "comments": "A sender can intentionally skip a packet number, causing the receiver to treat a packet as \"Out-of-order\". From NAME comment on", "new_text": "A packet that does not increase the largest received packet number for its packet number space (packet-numbers) by exactly one. A packet can arrive out of order if it is delayed, if earlier packets are lost or delayed, or if the sender intentionally skips a packet number. An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types"} {"id": "q-en-quicwg-base-drafts-ca4ca8250a0b0ea64fe9361466c7bfccde12c9b17124c9c8ba2919cfecd62449", "old_text": "Servers MUST drop incoming packets under all other circumstances. 5.3. A QUIC connection is a stateful interaction between a client and", "comments": "Addresses . I'm flexible on wording and where exactly it lives in the document. I put it in Security Considerations.\nI just completely rearranged the new section based on your input. See how it reads now. Editors, I don't seem to have permission to request review. Can someone add NAME\nUnfortunately, some of the reviews are hidden as obsolete. I want to respond to this by NAME about disablemigration I tried to not recommend anything here except to send disablemigration. The client is going to decide what to do with regard to NAT rebinding. If wired, I probably send more PINGs or something. If wireless, maybe not. If I'm quite sure I'm not behind a NAT, I probably do nothing. There's nothing normative here but to send the TP, so if people hate this I can just delete the explanation.\nI moved the whole thing to Section 5 as NAME suggested. I've taken most of the other suggestions, but am not sure if the intro needs to change, or what exactly people don't like about the disable_migration explanatory text.\nI'm actually fine with this. We were considering keep alives on the server side when we can't support migration (or NAT rebinding) based on the current infrastructure. So having the spec suggest keep alives to the client would be better. It would allow the client to choose if it would prefer keep alives or just to close the connection.\nAfter taking all suggestions from NAME I simply deleted the explanatory text about sending disablemigration. 
I didn't see the stuff about NAT rebinding as prescriptive or changing the semantics, but it's non-normative, clients are gonna do what they gonna do, and it's not worth holding up the rest of this over. If people want to add NAT rebinding considerations to disablemigration we can fight that out in another issue.\nFWIW, we have a number of cases when the destination IP and port + CID would route correctly, but the original IP(being anycast) would not always route correctly. It's likely there are millions of connections to the destination IP being migrated to, but yes it's a smaller number than the anycast IP.\nSo NAME do you want new or revised language here, or will the other issue address the linkability stuff? I would like to close this out if it's good enough.\nGiven how hard this has been to get right, I want NAME to take a look. But I'm happy with this.\nA reminder for NAME to either take a look or say that this is OK to proceed without another review.\nI could not stand leaving this unmerged any longer. Anyone with more comments can make a new PR. Thanks to everyone who helped out here.\nThe transport draft talks about QUIC being resilient to NAT rebindings. However, if a QUIC server is behind an L3 load balancer which simply routes based on 5-tuple, then connections to this server will (likely) not survive rebinding. I couldn't find any language which addressed whether such a deployment was \"OK\" or not. I think that since this load balance does not support NAT binding resilience, it is implicitly \"bad\" according to the draft, but others might disagree. In any case, I think there should be text to address this. Note, if the server advertised a preferredaddress which routed around the load balancer, and if clients were required to use this address, then that could obviously work. But preferredaddress support is a SHOULD, not a MUST.\nWe did discuss this issue. Use of empty connection IDs and demultiplexing based on source addresses. The conclusion there was that this was OK: people could do that, as long as it was clear that what they were getting was not different than TCP. That is, we made it clear that if you don't use the identifiers that you control (your addresses, the connection ID), then you accept that you can't handle migration at all and connections will drop (well, unless you do something like trial decryption, which has some obvious scaling issues). The outcome of that discussion was captured in . Assuming that Ryan finds this answer satisfactory (and I haven't missed anything), then I suggest we just close this as a duplicate of .\nI don't think this captures Ryan's concern: when the server is using a non-zero CID and also is behind 4-tuple routing. I think he's looking for a section of text to coherently describe what you have to do here: send either disablemigration or preferredaddress; or forward packets between servers; or either don't use a common stateless_reset key, or put client address/port in the reset token. This is sort of ops-drafty but there real requirements on servers to behave in a secure way. I think someone with full command of the transport draft would figure this out, but it is not clearly stated anywhere.\nI agree with Martin Duke and the language he suggests makes sense to me.\nThis does feel a bit ops-drafty to me, but I'd be happy to review a PR.\nI'm sure that this is now basically unrecognizable...Some more suggestions, but overall this LG", "new_text": "Servers MUST drop incoming packets under all other circumstances. 5.2.3. 
A server deployment could load balance among servers using only source and destination IP addresses and ports. Changes to the client's IP address or port could result in packets being forwarded to the wrong server. Such a server deployment could use one of the following methods for connection continuity when a client's address changes. Servers could use an out-of-band mechanism to forward packets to the correct server based on Connection ID. If servers can use other dedicated server IP addresses or ports than the one that the client initially connects to, they could use the preferred_address transport parameter to request that clients move connections to these dedicated addresses. Note that clients could choose not to use the preferred address. A server in a deployment that does not implement a solution to maintain connection continuity during connection migration SHOULD disallow migration using the disable_active_migration transport parameter. Server deployments that use this simple form of load balancing MUST avoid the creation of a stateless reset oracle; see reset-oracle. 5.3. A QUIC connection is a stateful interaction between a client and"} {"id": "q-en-quicwg-base-drafts-ff01d0a590d684df919599b2ddf07c94e07c8fd115d3e45cf2e9ba4dfa20bc6f", "old_text": "in connection failures, as the issuing endpoint might be unable to continue using the connection IDs with the active connection. 5.2. Incoming packets are classified on receipt. Packets can either be", "comments": "This is attempt to write down NAME 's proposal to This was surprisingly hard to write, and undoubtedly could be improved. There are cleaner ways to solve this problem, but this one is lowest-impact to the existing spec.\nNAME thanks for the new text, that helped. I think we've resolved outstanding issues, unless my new text has created new problems. 2 possible ways to enlarge this solution: should we describe this attack in Security Considerations? As these MAYs are an effort to counter something in particular, perhaps we should discuss. it would clean up some accounting issues if each NCID frame MUST set Retire Prior To to more than the maximum continuous sequence number that it had recieved an RCID for. Worthwhile?\nThat's interesting to send at first glance, but then poses the question of what the recipient of NCID should do when it sees an RPT value change. There's a difference in behavior between \"I've seen these\" and \"you need to send these.\"\nI think this PR is sufficient to both document the issue and provide some minimum guidance, so I'd suggest we editorialize a bit and then move forward with a solution along these lines.\nIt seems all the discussion of the problem is happening here and not on the issue, so I will propose my alternative solution here: I know folks have been resistant to changing state based off receiving an acknowledgement of a packet/frame they sent, but what if we changed how the RetirePriorTo works, and just required the packet containing the frame to be acknowledged, instead of requiring the RCID to be sent? Would this solve the problems? This would mean the receiver of the NCID shouldn't have to track any additional state beyond what it takes to acknowledge the packet (which they have to do anyways); it just immediately throws away the state. And the RCID packet would only be used when the endpoint that was given the CID chooses to retire it.\nNAME I think this PR has been an attempt to clarify the \"leaky\" approach, that tries to clarify the amount of state that an endpoint should retain. 
There is indeed discussion of what that amount is, but I think it is essential to have such detailed discussion on the PR in order to determine exactly what a particular approach would be like. To paraphrase, I think we are making progress on this PR to describe a specific approach. If we are to consider something different, I think that discussion should happen on the issue (and emerge as a separate PR).\nOk, trying to put this one in the ground. I think the only remaining dispute is between NAME and NAME on whether activeconnectionid_limit is a sufficient number of RCIDs to remember.\nReading through this proposal, I fear that this would be very difficult to implement. If the ACK for a RETIRECONNECTIONID frame is lost, an endpoint cannot safely send a NEWCONNECTIONID frame with a RetirePriorTo, since this would cause the peer to have more than RETIRECONNECTION_ID frames in flight. In order to safely use RetirePriorTo, this endpoint then would have to track receipt of the ACK frame. I'd very much like to avoid the complexity associated with that.\nI finally caught up with this issue, and I think that implementer advice is my preferred way forward. To that end, I like this PR as the resolution.\nOK -- there has been an enormous amount of discussion. The current state of the PR is that all RCIDs will eventually be sent, but if there are more than activeconnectionid_limit, some may be delayed by an RTT or more. If it gets out of hand the RCID sender can throw a connection error. I believe this reflects what NAME articulated a few dozen messages ago, and in the thread above I have pseudocode for a lightweight implementation of this (i.e. essentially no additional resources for an arbitrarily large backlog of RCIDs). The advantage of this approach is that it is lightweight and is a very subtle change to behavior that will not affect interoperability at all or break the existing contract of a Retire Prior To. There are two alternatives that have some traction: 1) Just bite the bullet and do . This breaks interoperability but is a clean architectural change that addresses the root of the problem. I would be very comfortable with this, and prefer it to other things that break the existing semantics. 2) NAME \"leaky\" approach that allows endpoints to simply not send some RCIDs at all if there are too many pending. I used to prefer this, but given my existence proof above I'm not as excited about it anymore. It breaks the contract that is in draft-27 instead of merely bending it. I hope that covers the decision space. I would like to put this issue out of its misery.\nI'm reluctant to blow away the whole PR with brand new text, so in this comment I will propose four entirely new paragraphs that IMO say what needs to be said. Either the current PR, Martin's revision, or this proposal could serve as the basis of the eventual change: If a peer sends large numbers of NEWCONNECTIONID frames that increase Retire Prior To, and/or acks of packets that contain RETIRECONNECTIONID are lost, the state required at the RETIRECONNECTIONID sender can grow without regard to its activeconnectionidlimit. Therefore, endpoints SHOULD take steps to bound the state associated with needed RETIRECONNECTIONID frames while ensuring that it eventually transmits all required RETIRECONNECTIONID frames. For example, it might limit voluntary retirement of sequence numbers if it has not received enough acknowledgments of packets containing previous retirements. 
It might also restrict the RETIRECONNECTIONID frames in flight to a single packet in order to simplify tracking of what is in flight, what needs retransmission, and what has been acknowledged. An endpoint MAY treat having too many connection IDs to retire as a connection error of type CONNECTIONIDLIMITERROR. The threshold for this error SHOULD be at least twice the endpoint's advertised activeconnectionidlimit. Endpoints SHOULD NOT issue updates of the Retire Prior To field prior to receiving all of the RETIRECONNECTION_ID frames for the previous update to Retire Prior To. ** If people much prefer this or MT's revision, I'll update the PR accordingly.\nI took NAME 's suggested text. I would prefer a little advice on how to implement the lag, but I can live with this text.\nLGTM.\nWhen an endpoint sends a RETIRECONNECTIONID frame carrying a connection ID that has been active until that point, that endpoint is expected to open room for accepting a new connection ID. That means that if the peer does not acknowledge the packets that contains RETIRECONNECTIONID frames, but keeps on sending NEWCONNECTIONID frames with increasing values in the Retire Prior To field, the number of unacknowledged RETIRECONNECTIONID frames (or to be precise, the number of unacked retiring connection IDs) keeps on increasing.\nTo address the issue, I think we need to do one of the following: State that the attack exists, and that QUIC stacks should implement mitigations. Introduce MAXCONNECTIONIDS frame. The reason we do not have similar issue in stream concurrency control is because the credit is controlled by MAX_STREAMS frames, rather than using FIN (or reset) as an implicit signal to indicate availability of new credit. We could do the same for connection IDs.\nMathematically, this might be unbounded, but it's more likely that in practice the limit is as it would be unlikely that unacknowledged RETIRECONNECTIONID frames would remain so once more NEWCONNECTIONID frames start arriving. More so as the issuer of those NEWCONNECTIONID frames has to have received the RETIRECONNECTION_ID frames. Adding new frames for this seems a little unwieldy, so I'd prefer to just acknowledge that the problem exists. Note also that Retire Prior To has the effect of making this limit squishy in other ways. (This is a design issue.)\nI'm not sure I understand the attack here. Why would an implementation need to keep track of acknowledgements of RETIRECONNECTIONID frames (other than for purposes of loss recovery, but then this applies to any retransmittable frame type)? An endpoint sends a RETIRECONNECTIONID frame when it decides to not use the respective CID any longer. At this point, it can forget about the connection ID altogether. It might make sense to hold on to the stateless reset token for a while (3 PTO?) longer, in order to be able to detect stateless resets, but this is an optimization and wouldn't lead to unbounded state anyway.\nNAME I think is fine, assuming that the endpoint would just stop retransmitting oldest RETIRECONNECTIONID frames. That limit would be too stern to be used for detecting potential attacks (and therefore closing the connection with PROTOCOLVIOLATION).\nNAME This is about keeping state for retransmission for recovering loss. 
As stated in , the issue is unique to RETIRECONNECTIONID frames, because this frame type is (ab)used for providing the peer additional credit, compared to other flow control that uses dedicated frame types (e.g., MAX_*) for communicating credit.\nThank you for the explanation, NAME I think I now understood the problem. This would require endpoints to make assumptions about how the peer handles the and how it issues new CIDs. This can get complicated really quickly and leave the connection in an unrecoverable state (or at the least a state with less CIDs than it should have). Introducing a new frame, as NAME suggested, seems a lot cleaner to me.\nJust to be clear, while I used RPT as a way of explaining the design issue, I think it is not a requirement. A malicious client can repeatedly migrate to a new address, initiating the use of new CID pairs, at the same time intentionally not acknowledging packets containing RETIRECONNECTIONID frames sent from the server. If the client does that, the number of CIDs that the server has to track for retirement increases as time goes.\nClarification question: If the person sending RCID isn't getting ACKs back, I think we still need to assume that those CIDs are being replaced? If not, then the non-RPT side of this would just mean the person sending RCID frames runs out of CIDs to use and has to stop. So the problem statement is that: this person is running over their maxactivecid limit from the perspective of the person withholding ACKs, but each side has their own view and so the person retiring them stopped counting them as active once the frame was sent (but not acknowledged). And that can happen via RPT as well, just because it means that they're going to be instructed to send the RCID, but the problem is the same either way? (Just want to make sure I'm understanding the problem correctly)\nNAME Yes, I think your problem statement is correct. Or to be even more concise, the problem is that the issuer of CIDs is given additional credit when it receives RCID, which happens before the consumer of CIDs drops the state it needs to retain for retransmitting RCID. For the attack to work, an attacker has to let the peer consume and retire CIDs. RPT and intentional migration are examples that makes that happen.\nSo the issue here is fundamentally that you can induce the peer to send retransmittable frames which aren't subject to some kind of flow control, and this is the only instance we have of that. STOPSENDING and failing to acknowledge the RESETSTREAM appears to have a similar profile, except that the number of RESETSTREAMs you could have outstanding is bounded by the number of streams the peer has opened, which they control. You could execute the same attack at a larger scale by choosing particular regions of stream data to never acknowledge, forcing the peer to keep them in memory for retransmission, but the peer could stop sending until that region gets through (applying backpressure) or simply reset the stream. Since a RESETSTREAM is smaller than the data region and only one is needed per stream, this collapses into the previous case. What's different here is that the endpoint being attacked doesn't choose whether to retire these CIDs and has no mechanism to apply backpressure. It seems like there are a couple paths to resolving this. As you suggest, we could define an explicit frame that grants the additional CID space, and implementations would not send / could withhold this frame until the RCID is ACK'd. 
This creates a backpressure mechanism and has the side benefit of allowing an implementation to vary the number of CIDs it wants to track over time, but the disadvantage of requiring an extra round trip before CIDs can be replaced in the normal case. We could mimic the \"collapsing\" behavior above by defining a frame that retires all CIDs up to a given sequence number. Then the implementation under memory pressure chooses to retire a few extra CIDs and collapses the whole set waiting to be retransmitted to a single frame.\nNAME I think your summarization of the problem is accurate. Regarding an attacker selectively acking packets carrying STREAM frames, yes, the attack exists, but it the sender can mitigate that problem in certain ways. We already state that in . This issue is the only case that a protocol cannot be implemented properly, with a bounded amount of state. Regarding how we should fix the problem, if we are going make a protocol change, my preference goes to applying the same design principle that we have in other places (i.e. introduce MAXCONNECTIONSIDS frame). I do not like designing a mechanism specific for this purpose, as there is a chance that we might make mistakes. I can also live with what NAME proposed in URL (i.e. encourage leaky behavior).\nSimilar to NAME I'd prefer to acknowledge the problem and indicate that peers may want to limit the number of unacknowledged RETIRECONNECTIONID frames to the number of connection IDs they support. The other option I can imagine is to make the RETIRECONNECTIONID frame cumulative, which provides a strong incentive to use connection IDs in order. Though we may have to make an exception for the connection ID in the received packet if we did that.\nSo the bounds on this aren't that terrible. Say you had to retire 1000 connection IDs all at once. You could track each one individually as you send RETIRECONNECTIONID. That's 1000 things to track, which might be more state than you want (it's not 1000 bits because you need to maintain relationships with packet numbers). Or you could just keep strictly limited, no matter how many connection IDs you need to retire. Tracking that, plus what you have been requested to retire should bound the state required to implement retirement. In other words, you do control how much state you commit to retirement, just like everything else.\nNAME That's my preferred solution, but in my reading of the spec this is not allowed. We MUST send RCID for each sequence number for which the peer has sent retire-prior-to. I think the language in Section 5.1.2 needs to change.\nNAME idea of allowing RCID to be cumulative is also good and computationally cleaner, although we'd need a different frame type or an additional field in the frame (I don't think we can get rid of retiring non-continuous sequence numbers)\nI'm happy to file a PR for the NAME or NAME approach based on the overall sentiment.\nOne purely implementation option: It's always legitimate to repeat a frame. Therefore, if you track only the smallest and largest un-acked RCID sequence numbers, you can just keep retiring the whole range every time you retransmit. ACKs will shrink the range (hopefully to zero). There's no harm beyond unnecessary bytes in retransmitting something a few extra times to avoid excessive state tracking.\nNAME While I won't say that's impossible, I'd argue that the design is going to be complex. When CIDs are used in a legitimate manner, they are not going to be retired one by one. 
A CID is retired when a path using that CID is abandoned. Consider the case where an endpoint has 4 active CIDs (CID0 to CID3), and two paths. One path could be using CID0, the other path could be using CID3. When the endpoint abandon the path that is using CID3, it has to retire CID3 but it cannot retire others. NAME The amount of CIDs that can be sent on wire at a time is restricted by the CWND, therefore there is no guarantee that you can send RCID frames for all CIDs that you want to retire at once. In other words, I think the approach at best collapses into what NAME has proposed (i.e. leaky behavior).\nAt this point, I also think NAME solution is the best approach, since it is a very minor change, and changing RETIRECONNECTIONID to be cumulative is a larger change than I want to accept at this point. NAME thanks for your examples.\nOne idea: Can we simplify everything by not requiring sending RETIRECONNECTIONID for CIDs less than the Retire Prior To? RETIRECONNECTIONID would then only be used when a CID was used and needed to be retired so more CIDs could be issued to the user of the CID.\nNAME see my comment in the PR. More disruptive to current implementations, and eliminates a useful signal. The current proposal will not matter for well-behaved clients but will protect servers from some attacks.\nWe haven't implemented this yet, so it's not disruptive to our implementation FWIW I wonder how useful the signal is. Retire Prior To is really saying \"These are going away sometime soon, whether you like it or not.\" to the peer. If an implementation never sent RETIRECONNECTIONID, would that create a problem of some sort, because I suspect someone will do just that.\nNAME As stated in URL, this attack can be mounted without using Retire Prior To.\nNAME in this case isn't the client limited by activeconnectionid_limit? As a server, if the client has in no way indicated it has accepted my retirement, I'm not going to accept new CIDs. The NCID/RPT case is different because the spec specifically allows the client to NCID speculatively assuming the server will immediately retire the CID.\nNAME The attack here is that when a malicious client retires a CID to which the server has responded, that server would retire the CID that it has used on that path, and also provide a new CID to the client. But the client never ACKs the RCID frames that the server sends. As an example, consider the case where a client uses sCID1 on a new path, the server responds on that path using cCID1, the client abandons that path and sends RCID (seq=sCID1). When receiving this RCID frame, the server would send RCID (seq=cCID1), and also send NCID(sCID2). The client intentionally does not ack the packet carrying these frames, but uses sCID2 on a new path, that carries NCID(cCID2), repeating this procedure.\nAh, I see now that my defense doesn't work, because if I reject the NCID I have to kill the connection, and it may just be because of unlucky ack losses. This whole section is not very precisely written and suffers from two-generals problems about when a CID is actually retired. I will revise to account for this, but I'm not sure how.\nNAME I agree with NAME that there's no valid thread model without Retire Prior To. If the peer doesn't acknowledge your retirement, then you shouldn't be issuing more CIDs, because you might exceed their limit.\nIn my opinion, running out of CIDs is not a big crisis. You can always create a new connection with 0-RTT with a relatively small overhead. 
We should be biased towards simplicity rather than never running out of CIDs.\nNAME unfortunately we are cornered by the spec. Consider two well behaving endpoints. I'm fully stocked with CIDs and then retire one with a frame. The peer gets the RCID and issues an NCID, but the ack of RCID is lost. If I don't count the CID as retired, then I MUST close the connection. That's bad, so I have to treat the CID as retired if I sent RCID. If you drop acks at scale, this becomes an attack. I'm really coming around to your cumulative retire idea, but will have another go at this tomorrow.\nNAME I think the framing is incorrect. The peer can enforce an endpoint to retire a CID (e.g., by changing RPT or discarding a path). As NAME points out, an endpoint is encouraged issue a new CID at this point, rather than withholding than until the ACK of RCID is received. Note also that RCID that an endpoint sends carries the CID that the peer has issued, while NCID that the endpoint carries the CID that that endpoint issues. Therefore, I am not sure what the rationale for withholding the emission of NCID until receiving an ACK for RCID would be. I agree. An endpoint is not expected to consume CID at high rate. We already discourage such behavior; see URL That's why I'm fine with the leaky behavior that NAME has suggested. NAME As previously stated, I am hesitant to have a new machinery that is unique to this purpose. There is a risk of creating something buggy, both in terms of specification and implementation. My preference goes to accepting the risk (and recommending mitigations, such as the leaky behavior), or reusing a design pattern that we already have (i.e. introduce MAXCONNECTIONIDS frame).\nThe cumulative idea is very appealing to me, minus a future with multipath. If two or more paths are active at once, I don't know how the peer would know that if the frame was cumulative. The only option would be to ensure the two(or more) CIDs in use are adjacent, which is possible, but may require switching CIDs more than should be necessary.\nNAME I would suggest a cumulative RCID as a supplement to the individual RCID. A cumulative one would cleanly handle a lot of cases where there is an unboundedly large list of sequence numbers to retire. If there is a low sequence number that is still in use, the sender can choose to retire it to enable use of the cumulative frame.\nI know folks have been resistant to changing state based off receiving an acknowledgement of a packet/frame they sent, but what if we changed how the RetirePriorTo works, and just required the packet containing the frame to be acknowledged, instead of requiring the RCID to be sent? Would this solve the problems? This would mean the receiver of the NCID shouldn't have to track any additional state beyond what it takes to acknowledge the packet (which they have to do anyways); it just immediately throws away the state. And the RCID packet would only be used when the endpoint that was given the CID chooses to retire it.\nFWIW, I have filed that fixes this issue without introducing a leaky behavior like . IMO, the observation is that the endpoints can check that the issuer of CIDs is not intentionally introducing gaps in the sequence numbers they generate. We already prohibit senders from introducing such gaps, detecting them on the receiver side would be a good thing to do. 
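One possible (purely heuristic) shape of such a receiver-side check, assuming reordering is tolerated and only persistent gaps are treated as suspicious; the threshold is invented for illustration and is not anything the draft requires:

    def missing_sequence_numbers(received_seqs):
        if not received_seqs:
            return set()
        return set(range(max(received_seqs) + 1)) - set(received_seqs)

    # Frames can be reordered, so a small transient hole is normal ...
    assert missing_sequence_numbers({0, 1, 3}) == {2}
    # ... but dozens of holes that never fill in would be suspicious.
    GAP_TOLERANCE = 8          # illustrative threshold only
    suspicious = len(missing_sequence_numbers({0, 1, 3})) > GAP_TOLERANCE
    assert suspicious is False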
And if endpoints start doing that, using a cumulative RCID frame is a sensible solution, as the cumulative list of retired CIDs at any time can be represented using a SACK-like structure (i.e., max-retired + small list of active CIDs below that max.).\nI know that you have characterized this as \"leaky\", but I'm not sure that I understand exactly what is being leaked. Nothing is lost in , except that the endpoint requesting retirement has to tolerate a potentially extended period without old connection IDs being retired. In case this helps, I have no big preference between and . There is a certain elegance to that might tip me toward that, but it is slightly more disruptive. I prefer either of those to the more disruptive change in , which also has the downside of strongly encouraging connection ID changes in order to preserve a clean separation between active and retired. doesn't force you to drop old connection IDs. However, the encoding of the frame is just too complicated for me to accept. I would prefer forcing a clean separation rather than having a complex encoding like this. is a non-starter for me for the reasons stated.\nNAME In the approach we take in , there is a possibility that the issuer of the CID might not receive the sequence numbers of all the CIDs that have been retired by the peer, when the issuer is issuing new CIDs at a high rate and the receiver is also consuming them at a high rate. That is the consequence of endpoints limiting the number of in-flight RCID frames they track. I think we can provide advice so that the \"leak\" would not be an issue in practice, and assuming that would happen, I am happy with being the solution. I think is cleaner, but I acknowledge that it is more disruptive than . Part of my intent behind creating is to see what a clean fix (in terms of state machinery) would look like, so that it can be compared against . As stated in URL, by itself only partly fixes the problem. I am not against adopting along with though. is also a partial fix; I also think that it is a non-starter.\nI think that I understand your point; thanks for your patience. I didn't really classify it as a leak though, so the choice of words threw me a little, it's more of a lag. Let's say that you have a pathological endpoint that sends an infinite series of NEWCONNECTIONID frames with Sequence Number N and Retire Prior To of (N-1). That's legal and will never violate any limits. But unless they maintain acknowledgments for RETIRECONNECTIONID frames at a similar rate (which, to be fair, should be easy as for the same values), they could get far ahead of their peer. I don't think that it is strictly a leak, just a lag between the connection IDs being pushed and the connection IDs being successfully retired. Without or something else this could be problematic. And I now understand why is really just an orthogonal refinement, though it might make other defenses less likely to be required by making zero in cases where an endpoint isn't using discretionary retirement. The defense in is to make effectively constant, no matter how many frames need to be retired. My main objection is really that it is a maximal design where only a minimal one is warranted. It's probably worth pointing out that there is also the simplest defense: If you can't keep up with RETIRECONNECTIONID, stop talking to the peer that does this to you. 
This is just another example from that category of abuse we don't have specific protocol mechanisms to mitigate.\nI agree that I'd like to see a minimal solution, so I'm leery of heading towards . In order for a peer to need to send a lot of RETIRECONNECTIONID frames without Retire Prior To, a lot of 5-tuple + CID changes need to be coming in. One way to rate limit that naturally is to stop giving out CIDs if the peer hasn't acknowledged your RETIRECONNECTIONID frames. ie: Limit the number of NEWCONNECTIONID frames in flight to: So if the peer stops acknowledging your retirements, you stop giving them new CIDs. I think this avoids the need for any other limits? I added 1 in my example so one NCID could be sent even if all of the peer's CIDs had just been retired, though that concern disappears if we decided to also do something like\nWhat a mess we've made! Is there interest in doing a Zoom about this? Maybe 0900 Tokyo/1100 Sydney/1700 Seattle/2000 Boston on Tue/Wed? The root cause is that the flow control mechanism isn't all that well designed. IMO is the actual correct fix to this problem, but as people say, it's disruptive. uses ACKs as an implicit signal, which seems like a problem. has a different implicit signal, and I don't think it solves the case where the client is withholding acks for unsolicited RCIDs. I really think we should just use or depending on our appetite for changing the wire image. is great for the RCID sender, and if the receiver won't miss any RCIDs unless it's doing pathological stuff or the ack loss patterns are tremendously unlucky. , OTOH, is a comprehensive fix to the problem.\nI think with some added text about limiting outstanding NCIDs when there are lots of RCIDs would be sufficiently robust.\nI'm fine with trying to see if we can reach an agreement on .\nI'd prefer over . I can't really see why it would more disruptive than . The problem with is that it will likely work find in most cases, but there are some corner case (when the ACKs for RETIRECONNECTIONID frames are lost), in which things will break. Properly accounting for these corner cases requires some non-trivial tracking logic in order to avoid risking the connection being closed with a CONNECTIONIDLIMIT_ERROR. On the other hand, seems to be conceptually simpler. I'd rather implement a few changes to my frame parser than introduce some ACK-tracking logic.\nJust to give another example of how an endpoint might end up required to track the retirement of more than activeconnectionid_limit, when talking to a well-behaving peer: A server tries to supply three active CIDs to the peer. At the moment, it has provided CID0, CID1, CID2 to the client. The client sends RCID(CID1), RCID(CID2). The server provides NCID(CID3), NCID(CID4). The two NCID frames are delivered to the client, but the ACK for the RCID frames gets lost. The client decides to probe two paths concurrently, by using CID3 and CID4, but shortly after the packets are sent using those paths, the underlying network for those two paths disappear. The client now needs to make sure that RCID frames carrying 4 CIDs reach the peer. In this example, we might argue that it would be possible for the client to figure out that CID3 and CID4 were provided as substitutions for CID1 and CID2. But do we want to implement logic that detects such condition into our code? 
Note that tracking of substitutions can become tricky; if RCID(CID1) and RCID(CID2) were sent in different packets, it would be impossible for the client to see if CID4 was provided as a supplement for CID1 or for CID2.\nI agree with NAME that if we go with the limit should be higher than .\nI finally caught up with this issue, and implementer advice is definitely my preferred way forward. We're going to need some complexity to (and some magic numbers) no matter what, let's do this while limiting mechanism to a minimum. To that end, I agree with as a good resolution.\nWhile increasing the limit will certainly help, this introduces a really unpleasant flakiness in the protocol. QUIC, so far, has the property that no amount or pattern of packet loss will lead to a protocol violation (it might lead to a connection timeout if all packets are blackholed, but that's a different kind of error). This means that when running QUIC in production, screening for (transport-level) errors will be very useful for finding bugs in implementations. My fear regarding is that no matter how high we choose the limit, there will always be a loss pattern that will trigger a protocol violation.\nNAME Thank you for raising the concern, I think the recommended behavior when exceeding that limit is to stop sending RCID frames for some sequence numbers rather than triggering a protocol violation. That's actually what I had in mind hence called it \"leaky.\" The benefit of stopping retiring some sequence numbers is that the worst case becomes endpoints running out of CIDs - something that is expected to happen on ordinary connections too. Note also that in HTTP/3, reestablishing a connection in 0-RTT is always a viable option. To summarize, I think that the concern can be resolved for HTTP/3 by changing the recommended behavior from resetting the connection to stop sending RCIF frames for some sequence numbers.\nNAME I don't think not sending RCID frames is that easy. Tracking something that needs to be sent consumes comparable state to tracking it in flight. Or are you suggesting a different algorithm that indicates a RCID frame doesn't need to be sent? If the peer has to wait for one Retire Prior To to take effect before it's increased again, that leaves only the peer migrating really quickly and lost ACKs as the unbounded case. As NAME said, I think it's more likely a well-behaved peer would run out of CIDs in this case than any other failure mode. I previously suggested stopping sending out NCID frames when the RCID frame limit had been reached to force the issue, but that just changes who has to close the connection. FWIW, we already have sanity checks where our implementation closes the connection when a datastructure becomes too large. The limits are high enough that they're almost never hit unless there is a bug, but they've been helpful in finding bugs. I would advocate everyone have these. NAME You'll never get to 0 transport errors. Some implementations will have a bug, and some peers or servers will have faulty or corrupted memory. For example, in Google QUIC we found we received the ClientHello on a stream besides 1 surprisingly often.\nI think that the idea is to reduce the set of RCID frames needed to a range of sequence numbers. You only have to remember the few exceptions for sequence numbers that are in use, sequence numbers for which you have outstanding frames, and a range of sequence numbers that require retirement. As long as your discretionary use is sequential, this is bounded state. 
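A small sketch of that bookkeeping, assuming discretionary retirement is sequential; the class and field names are illustrative rather than anything the draft mandates:

    class PendingRetirements:
        def __init__(self):
            self.lo = 0            # lowest sequence number still unacknowledged
            self.hi = -1           # highest sequence number that must be retired
            self.in_use = set()    # exceptions: CIDs still in active use

        def request(self, retire_prior_to):
            # Peer moved Retire Prior To: everything below it needs retiring.
            self.hi = max(self.hi, retire_prior_to - 1)

        def frames_to_send(self):
            # Repeating a RETIRE_CONNECTION_ID frame is always legitimate, so
            # just (re)announce the whole outstanding range each time.
            return [s for s in range(self.lo, self.hi + 1) if s not in self.in_use]

        def on_ack_through(self, seq):
            # ACK received for a packet that carried retirements up to `seq`.
            self.lo = max(self.lo, seq + 1)

    p = PendingRetirements()
    p.request(retire_prior_to=1000)   # 1000 CIDs to retire, still O(1) state
    p.on_ack_through(999)
    assert p.frames_to_send() == []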
As mentioned, the risk is that lack of acknowledgment leads to potentially large delays in getting new connection IDs, but if you ever run short, abandon the connection. Either you have a pathological loss pattern or your peer is messing with you. In either case, it is probably not worth continuing.\nNAME Yes, this is one method of limiting the state that is being used for retaining sequence numbers that need to be sent. And it works even when discretionary use is not sequential, as the number of \"exceptions\" (i.e. gaps) is bounded by maxconnectionidlimit. The other assumption of using this method is that you would somehow bound the state that is used to track RCID frames being inflight. There are various ways of accomplishing that, e.g., have a limit on the number of RCID frames sent at once, track bunch of inflight RCID frames as one meta-frame. And it is correct that the outcome would be a \"lag\" in this method. The downside of this method is that it might be a disruptive change to existing implementations. It is essentially equivalent to what proposes, with the exception being that tracking RCID frames are difficult (as you track to send many). For existing implementations, it would be easier to adopt a \"leaky\" method, i.e.: let endpoints have a counter counting on the number of sequence numbers that are retired but are yet to be acknowledged when a CID is being retired, if the value of the counter is below a certain threshold (i.e. 2 maxconnectionidlimit), the counter is incremented, and a RCID frame carrying the sequence number of that CID is added to the outstanding queue. If the counter was not below the threshold, no RCID frame will be sent (leak!) when an RCID frame is deemed lost, it will be resubmitted into the outstanding queue when an RCID frame is acknowledged before deemed lost, the counter is decremented when an RCID frame is late-acked, we do nothing Assuming that the solution we prefer is the least-disruptive approach, I think we need to recommend the latter. If we have distaste in the latter and prefer the former, I think we'd better switch to , as it more cleanly reflects the design of the former. NAME I think the latter is not that a big change to existing implementations.\nYep, I think that is a good approach. Obviously, the range encoding is superior in terms of how much state it can hold, but a simple list is fine. And the effects of lag will be limited by the tolerance for retaining retired connection IDs, which seems proportional to me.\nNAME I doubt that's less impactful on existing implementations, certainly from an interoperability standpoint. Today endpoints should hold on to incoming CIDs until they are formally retired. Making them leaky can lead to trapped state. Sending them all, but with a delay, is no added state but solves this problem.\nAfter a lot of back-and-forth, I think that we've all generally converged on .\nStill LG, but I like ekr's suggestionLatest text looks good", "new_text": "in connection failures, as the issuing endpoint might be unable to continue using the connection IDs with the active connection. An endpoint SHOULD limit the number of connection IDs it has retired locally and have not yet been acknowledged. An endpoint SHOULD allow for sending and tracking a number of RETIRE_CONNECTION_ID frames of at least twice the active_connection_id limit. 
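One possible way (a sketch, not required behavior) to apply that recommendation is to compare the count of locally retired but unacknowledged connection IDs against twice the peer's active_connection_id_limit:

    def retirement_backlog_ok(unacked_retirements, active_connection_id_limit):
        # True while the backlog stays within the suggested bound.
        return unacked_retirements <= 2 * active_connection_id_limit

    assert retirement_backlog_ok(unacked_retirements=3, active_connection_id_limit=2)
    assert not retirement_backlog_ok(unacked_retirements=5, active_connection_id_limit=2)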
An endpoint MUST NOT forget a connection ID without retiring it, though it MAY choose to treat having connection IDs in need of retirement that exceed this limit as a connection error of type CONNECTION_ID_LIMIT_ERROR. Endpoints SHOULD NOT issue updates of the Retire Prior To field before receiving RETIRE_CONNECTION_ID frames that retire all connection IDs indicated by the previous Retire Prior To value. 5.2. Incoming packets are classified on receipt. Packets can either be"} {"id": "q-en-quicwg-base-drafts-1e222950496cda1e408548bad3afaeaec30712892ea4be395cb877eb42384ef6", "old_text": "7. TBD. Also see Section 7.1 of RFC7541. While the negotiated limit on the dynamic table size accounts for much of the memory that can be consumed by a QPACK implementation,", "comments": "Behold\nNAME I think this is inches of getting landed\nThe full text of that section is \"TBD.\"\nHPACK covers the following topics: Probing Dynamic Table State Static Huffman Encoding Memory Consumption/DoS Implementation Limits To what degree to we need to retread this vs referring to 7541? There are some new areas to cover related to DoS, like recommending implementation timeouts for blocking requests. Soliciting any other topics, and possibly a passionate security-minded volunteer to draft a PR.\nAll looks good (and familiar); some minor nits.", "new_text": "7. This section describes potential areas of security concern with QPACK: Use of compression as a length-based oracle for verifying guesses about secrets that are compressed into a shared compression context. Denial of service resulting from exhausting processing or memory capacity at a decoder. 7.1. QPACK reduces the length of header field encodings by exploiting the redundancy inherent in protocols like HTTP. The ultimate goal of this is to reduce the amount of data that is required to send HTTP requests or responses. The compression context used to encode header fields can be probed by an attacker who can both define header fields to be encoded and transmitted and observe the length of those fields once they are encoded. When an attacker can do both, they can adaptively modify requests in order to confirm guesses about the dynamic table state. If a guess is compressed into a shorter length, the attacker can observe the encoded length and infer that the guess was correct. This is possible even over the Transport Layer Security Protocol (TLS, see RFC5246), because while TLS provides confidentiality protection for content, it only provides a limited amount of protection for the length of that content. Padding schemes only provide limited protection against an attacker with these capabilities, potentially only forcing an increased number of guesses to learn the length associated with a given guess. Padding schemes also work directly against compression by increasing the number of bits that are transmitted. Attacks like CRIME CRIME demonstrated the existence of these general attacker capabilities. The specific attack exploited the fact that DEFLATE RFC1951 removes redundancy based on prefix matching. This permitted the attacker to confirm guesses a character at a time, reducing an exponential-time attack into a linear-time attack. 7.2. QPACK mitigates but does not completely prevent attacks modeled on CRIME CRIME by forcing a guess to match an entire header field value, rather than individual characters. An attacker can only learn whether a guess is correct or not, so is reduced to a brute force guess for the header field values. 
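The length-oracle behaviour described above can be seen with any shared dictionary compressor; the following toy example uses DEFLATE (zlib) in place of QPACK, and the secret and guesses are invented values.

   # Toy demonstration of the length oracle described above, using DEFLATE
   # (zlib) rather than QPACK; the secret and guesses are invented.
   import zlib

   secret = b"cookie: session=7f3a9c"

   def observed_length(guess):
       # The attacker controls the guess and can only observe the length of
       # the compressed output that shares a context with the secret.
       return len(zlib.compress(secret + b"\ncookie: session=" + guess))

   # A guess that matches the secret shares more redundancy with it and so
   # typically compresses to a shorter output than a non-matching guess.
   print(observed_length(b"000000"), observed_length(b"7f3a9c"))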
The viability of recovering specific header field values therefore depends on the entropy of values. As a result, values with high entropy are unlikely to be recovered successfully. However, values with low entropy remain vulnerable. Attacks of this nature are possible any time that two mutually distrustful entities control requests or responses that are placed onto a single HTTP/3 connection. If the shared QPACK compressor permits one entity to add entries to the dynamic table, and the other to access those entries, then the state of the table can be learned. Having requests or responses from mutually distrustful entities occurs when an intermediary either: sends requests from multiple clients on a single connection toward an origin server, or takes responses from multiple origin servers and places them on a shared connection toward a client. Web browsers also need to assume that requests made on the same connection by different web origins RFC6454 are made by mutually distrustful entities. 7.3. Users of HTTP that require confidentiality for header fields can use values with entropy sufficient to make guessing infeasible. However, this is impractical as a general solution because it forces all users of HTTP to take steps to mitigate attacks. It would impose new constraints on how HTTP is used. Rather than impose constraints on users of HTTP, an implementation of QPACK can instead constrain how compression is applied in order to limit the potential for dynamic table probing. An ideal solution segregates access to the dynamic table based on the entity that is constructing header fields. Header field values that are added to the table are attributed to an entity, and only the entity that created a particular value can extract that value. To improve compression performance of this option, certain entries might be tagged as being public. For example, a web browser might make the values of the Accept-Encoding header field available in all requests. An encoder without good knowledge of the provenance of header fields might instead introduce a penalty for a header field with many different values, such that a large number of attempts to guess a header field value results in the header field not being compared to the dynamic table entries in future messages, effectively preventing further guesses. Simply removing entries corresponding to the header field from the dynamic table can be ineffectual if the attacker has a reliable way of causing values to be reinstalled. For example, a request to load an image in a web browser typically includes the Cookie header field (a potentially highly valued target for this sort of attack), and web sites can easily force an image to be loaded, thereby refreshing the entry in the dynamic table. This response might be made inversely proportional to the length of the header field value. Disabling access to the dynamic table for a header field might occur for shorter values more quickly or with higher probability than for longer values. 7.4. Implementations can also choose to protect sensitive header fields by not compressing them and instead encoding their value as literals. Refusing to insert a header field into the dynamic table is only effective if doing so is avoided on all hops. The never indexed literal bit (see literal-name-reference) can be used to signal to intermediaries that a particular value was intentionally sent as a literal. 
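One way an encoder might act on the guidance above is a simple classification of fields into never-indexed literals. The header names and the length cutoff below are assumptions chosen for illustration, not requirements from the draft.

   # Illustrative policy for choosing the never-indexed literal representation
   # ('N' bit set); the set of names and the cutoff are assumptions only.
   SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization",
                        "proxy-authorization"}
   SHORT_VALUE_CUTOFF = 8  # short values are cheap to recover by brute force

   def use_never_indexed_literal(name, value):
       if name.lower() in SENSITIVE_HEADERS:
           return True           # highly valued targets: never index
       if len(value) < SHORT_VALUE_CUTOFF:
           return True           # low entropy is likely; avoid the table
       return False              # e.g. User-Agent gains little from secrecy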
An intermediary MUST NOT re-encode a value that uses a literal representation with the 'N' bit set with another representation that would index it. If QPACK is used for re-encoding, a literal representation with the 'N' bit set MUST be used. If HPACK is used for re-encoding, the never indexed literal representation (see Section 6.2.3 of RFC7541) MUST be used. The choice to mark that a header field should never be indexed depends on several factors. Since QPACK doesn't protect against guessing an entire header field value, short or low-entropy values are more readily recovered by an adversary. Therefore, an encoder might choose not to index values with low entropy. An encoder might also choose not to index values for header fields that are considered to be highly valuable or sensitive to recovery, such as the Cookie or Authorization header fields. On the contrary, an encoder might prefer indexing values for header fields that have little or no value if they were exposed. For instance, a User-Agent header field does not commonly vary between requests and is sent to any server. In that case, confirmation that a particular User-Agent value has been used provides little value. Note that these criteria for deciding to use a never indexed literal representation will evolve over time as new attacks are discovered. 7.5. There is no currently known attack against a static Huffman encoding. A study has shown that using a static Huffman encoding table created an information leakage, however this same study concluded that an attacker could not take advantage of this information leakage to recover any meaningful amount of information (see PETAL). 7.6. An attacker can try to cause an endpoint to exhaust its memory. QPACK is designed to limit both the peak and stable amounts of memory allocated by an endpoint. The amount of memory used by the encoder is limited by the protocol using QPACK through the definition of the maximum size of the dynamic table, and the maximum number of blocking streams. In HTTP/3, these values are controlled by the decoder through the settings parameters SETTINGS_QPACK_MAX_TABLE_CAPACITY and SETTINGS_QPACK_BLOCKED_STREAMS, respectively (see maximum-dynamic-table-capacity and blocked-streams). The limit on the size of the dynamic table takes into account the size of the data stored in the dynamic table, plus a small allowance for overhead. The limit on the number of blocked streams is only a proxy for the maximum amount of memory required by the decoder. The actual maximum amount of memory will depend on how much memory the decoder uses to track each blocked stream. A decoder can limit the amount of state memory used for the dynamic table by setting an appropriate value for the maximum size of the dynamic table. In HTTP/3, this is realized by setting an appropriate value for the SETTINGS_QPACK_MAX_TABLE_CAPACITY parameter. An encoder can limit the amount of state memory it uses by signaling a lower dynamic table size than the decoder allows (see eviction). A decoder can limit the amount of state memory used for blocked streams by setting an appropriate value for the maximum number of blocked streams. In HTTP/3, this is realized by setting an appropriate value for the SETTINGS_QPACK_BLOCKED_STREAMS parameter. An encoder can limit the amount of state memory by only using as many blocked streams as it wishes to support; no signaling to the decoder is required. The amount of temporary memory consumed by an encoder or decoder can be limited by processing header fields sequentially.
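To make the two decoder-side limits above concrete, here is a minimal sketch; the settings values are placeholders, and eviction of old entries is omitted for brevity.

   # Minimal sketch of the decoder-side limits described above.  The settings
   # values are placeholders; real deployments choose their own.
   QPACK_MAX_TABLE_CAPACITY = 4096   # bytes advertised by the decoder
   QPACK_BLOCKED_STREAMS = 16        # blocked streams the decoder will track
   ENTRY_OVERHEAD = 32               # per-entry allowance for overhead

   class DecoderLimits:
       def __init__(self):
           self.table_size = 0
           self.blocked = set()

       def can_insert(self, name, value):
           # Eviction of older entries is omitted here for brevity.
           size = len(name) + len(value) + ENTRY_OVERHEAD
           return self.table_size + size <= QPACK_MAX_TABLE_CAPACITY

       def block_stream(self, stream_id):
           # A stream referencing a not-yet-received entry must wait; bound
           # how many such streams are tracked at once.
           if len(self.blocked) >= QPACK_BLOCKED_STREAMS:
               return False  # e.g. stop reading, or treat as an error
           self.blocked.add(stream_id)
           return True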
A decoder implementation does not need to retain a complete list of header fields while decoding a header block. An encoder implementation does not need to retain a complete list of header fields while encoding a header block if it is using a single-pass algorithm. Note that it might be necessary for an application to retain a complete header list for other reasons; even if QPACK does not force this to occur, application constraints might make this necessary. While the negotiated limit on the dynamic table size accounts for much of the memory that can be consumed by a QPACK implementation,"} {"id": "q-en-quicwg-base-drafts-1e222950496cda1e408548bad3afaeaec30712892ea4be395cb877eb42384ef6", "old_text": "new streams, reading only from the encoder stream, or closing the connection. 8. 8.1.", "comments": "Behold\nNAME I think this is inches of getting landed\nThe full text of that section is \"TBD.\"\nHPACK covers the following topics: Probing Dynamic Table State Static Huffman Encoding Memory Consumption/DoS Implementation Limits To what degree to we need to retread this vs referring to 7541? There are some new areas to cover related to DoS, like recommending implementation timeouts for blocking requests. Soliciting any other topics, and possibly a passionate security-minded volunteer to draft a PR.\nAll looks good (and familiar); some minor nits.", "new_text": "new streams, reading only from the encoder stream, or closing the connection. 7.7. An implementation of QPACK needs to ensure that large values for integers, long encoding for integers, or long string literals do not create security weaknesses. An implementation has to set a limit for the values it accepts for integers, as well as for the encoded length (see prefixed-integers). In the same way, it has to set a limit to the length it accepts for string literals (see string-literals). 8. 8.1."} {"id": "q-en-quicwg-base-drafts-8e08801cd7ff274aef0a386d56f6163c57ae115e34acca04bcce4a81943a8848", "old_text": "would be too large to fit in a packet, however receivers MAY also limit ACK frame size further to preserve space for other frames. When discarding unacknowledged ACK Ranges, a receiver MUST retain the largest received packet number. A receiver SHOULD retain ACK Ranges containing newly received packets or higher-numbered packets. A receiver that sends only non-ack-eliciting packets, such as ACK frames, might not receive an acknowledgement for a long period of", "comments": "This explains what needs to be kept and why. Specifically, you need to keep ranges unless you have other means of ensuring that you don't accept packets from those ranges again. You also need to keep the largest acknowledged so that you can get a packet number from subsequent packets. This also recommends that ACK frames include the largest acknowledged always. That is primarily to ensure that ECN works properly, and even there, you only disable ECN if you get some weird reordering, so it's probably not a big deal if you don't follow this recommendation. The issue is marked design, but the resolution here is basically a restatement of other text. I think we can run this through the design process, but it isn't really worth flagging this in a change log in my opinion.\nNAME you should probably look at this. I have marked the associated issue anyway, but I'd appreciate another review.\nIn , NAME notes that added this \"MUST\": My understanding is that this prevents the largest acknowledged packet from going backwards. 
We use the monotonic increase of the Largest Acknowledged field in two ways: In ECN validation, this is used to filter out ACK frames that might have arrived out of order, which might have lower counts legitimately. In key updates, where the value of the largest acknowledged is used to drive the change to new keys and to prevent use of old keys after an update. In my view, having ECN validation fail is quite unfortunate, but likely workable. On the other hand, uncertainty about largest acknowledged could introduce instability in the key update process. If an endpoint's conception of what the value is changes, I doubt that we'll see extra key updates (I think that the AEAD would protect us from that), but we might leave open the possibility of interleaving of packets from two different key phases. I would suggest that we retain this \"MUST\".\nFor out of order ACK frames, you can use the packet number the ACK arrives in. I realize that we have tried to allow frames to be retransmitted, but I think retransmitting ACK frames without modifying the ACK delay/etc is so harmful we should probably prohibit that one frame from being retransmitted, since it creates a bunch of edge cases. Or we could let ECN validation fail, but that'd be a bit sad. I'm confused about what the issue with key updates is? The largest acked you've ever seen doesn't change, just the largest acked from the last ACK frame. When determining the size of the packet number, we say: \"The sender MUST use a packet number size able to represent more than twice as large a range than the difference between the largest acknowledged packet and packet number being sent.\" I'd rather not make this a MUST unless we have a good reason. There are valid use cases that become more complex with this constraint.\nI really want to understand the use cases that you think that this might break. Because key updates very much depend on tracking largest acknowledged. The key updates case has a fairly simple example. Say you receive packet 10 in key phase 1. Then 11 in key phase 2. That's valid. But if you ACK those and then only remember up to packet 8, packet 9 can use key phase 2 and you won't know that it is invalid.\nI'd like to suggest separating tracking largest acknowledged from forcing all ACK frames to contain the largest acknowledged and have that value never decrease. Given you need to track largest acknowledged locally (i.e., not only in the in-flight ACK frame), I don't think there are any problems there. I have two cases in mind where the largest acked could validly decrease: 1) A receiver that tracks a limited number of ACK blocks and the oldest block happens to be the one containing largest acked. 2) If an application was reordering tolerant (i.e., from URL) then two different senders on a single 'connection' may be sending packets with disparate packet numbers, causing what appears to be very large scale reordering. In this case, it'd be valuable to discard the older ACK ranges even if they contained the largest acked. Maybe the first is an edge case we don't care about in a use case we aren't optimizing for and the latter is solvable with careful issuing of packet number ranges. Even so, I'm not sure what the motivation is for this change? I guess the ECN case could be, but using largest acked as a proxy for retransmitted ACKs is a very bad design in my opinion.\nI don't think we really need to cater for retransmitted ACK frames in any way. What I'm concerned about is reordering.
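The packet number sizing rule quoted a little earlier can be turned into a small, deliberately conservative helper; this is only a sketch of the idea, not the sample algorithm from the draft.

   # Conservative sketch of the packet number sizing rule quoted above: use an
   # encoding that can represent more than twice the span between the packet
   # number being sent and the largest acknowledged packet number.
   def packet_number_length(full_pn, largest_acked=None):
       if largest_acked is None:
           num_unacked = full_pn + 1
       else:
           num_unacked = full_pn - largest_acked
       min_bits = num_unacked.bit_length() + 1   # strictly more than 2x fits
       return (min_bits + 7) // 8                # QUIC allows 1..4 bytes

   # Example: 800 packets potentially unacknowledged needs 11 bits -> 2 bytes.
   assert packet_number_length(1000, 200) == 2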
Yes, using the increase of largest acknowledged as a proxy for strict ordering is a little weird, but the assumption is that frame handling is distinct from packet handling and packet numbers are not available when frames are handled. (I think that's the principle we agreed on; that's why it's OK to use packet numbers in key updates, for example, something that I can confirm is borne out by implementation experience.) The case you describe here has the appearance of exactly that type of reordering if the two senders also generate their own ACK frames (though this seems very strange). But I think that the consequences of that sort of design are not our responsibility to deal with.\nI don't recall on us agreeing that packet numbers weren't available when frames are handled, can you clarify why that property is important?\nI agree with NAME on the principle of not requiring the enclosing packet's packet number during frame handling, and we use sequence numbers within frames to help with reordering. This was at least true in an earlier version of NCID frames, and we had discussed this principle then. This is also why the ACK_FREQUENCY frame in URL includes a sequence number. That said, I don't understand the issue with key updates here. Largest acked so far is a connection-level state variable that can be used for key updates. I suspect that I'm not understanding the example scenario. NAME can you elaborate on your key update example? NAME Is your example motivated by a receiver that does not remember the largest acked or one that does not wish to ack anything but the packet that was just received?\nI was under the impression that the argument was that largest acknowledged was not being tracked, but Ian made it clear that the concern was about not signaling it. That removes key updates as a concern. If remains unclear, then I can't really do better.\nNAME my example was motivated by either case, but primarily not wanting to ACK anything besides what was just received.\nPerhaps it's a naive model, but I had envisioned an implementation remembering a chain of ranges of packet numbers it had received or not. The implementation would save space by forgetting the earliest packet numbers when the chain got too long, and would construct an ACK frame out of the latest set of these ranges it had room for. In that mental model, at least, there's not really room for the idea that you might drop the largest-acked frame number from your memory because it's too old -- it's always at the most-recent end of your chain, even if you currently happen to be receiving packets that fill in the middle for some implementation-specific reason. (Implementation-specific to the peer, which you obviously can't know about or code for!)\nThat matches my model for this. It is also likely the case that the next packet you receive is largest+1, so there is no real advantage to forgetting the largest acknowledged. Not sending largest acknowledged is slightly different though. I can understand how an implementation might choose to send only ranges that have recently changed. My contention is that this is not a good choice and we should recommend against that.\nNAME to produce some text here that separates retention policy (MUST) from signaling policy (SHOULD).", "new_text": "would be too large to fit in a packet, however receivers MAY also limit ACK frame size further to preserve space for other frames. 
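The "chain of ranges" receiver model described in the comments above can be sketched as follows; the trimming bound is invented, and the point is only that the largest received packet number is never forgotten while a floor records what has been.

   # Sketch of the chain-of-ranges model from the discussion above.  The bound
   # is illustrative; the key property is that the largest received packet
   # number is retained and forgotten numbers can never be accepted again.
   MAX_TRACKED = 1000

   class AckState:
       def __init__(self):
           self.received = set()
           self.min_accepted = 0   # packets below this are rejected outright

       def on_packet(self, pn):
           if pn < self.min_accepted or pn in self.received:
               return False        # below the floor, or a duplicate
           self.received.add(pn)
           while len(self.received) > MAX_TRACKED:
               oldest = min(self.received)
               self.received.discard(oldest)
               self.min_accepted = oldest + 1   # advance the floor
           return True

       def ack_ranges(self):
           # Newest range first; the range holding the largest received packet
           # number is always present.
           ranges = []
           for pn in sorted(self.received, reverse=True):
               if ranges and ranges[-1][0] == pn + 1:
                   ranges[-1] = (pn, ranges[-1][1])
               else:
                   ranges.append((pn, pn))
           return ranges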
A receiver MUST retain an ACK Range unless it can ensure that it will not subsequently accept packets with numbers in that range. Maintaining a minimum packet number that increases as ranges are discarded is one way to achieve this with minimal state. Receivers can discard all ACK Ranges, but they MUST retain the largest packet number that has been successfully processed as that is used to recover packet numbers from subsequent packets; see packet- encoding. A receiver SHOULD include an ACK Range containing the largest received packet number in every ACK frame. The Largest Acknowledged field is used in ECN validation at a sender and including a lower value than what was included in a previous ACK frame could cause ECN to be unnecessarily disabled; see ecn-validation. A receiver that sends only non-ack-eliciting packets, such as ACK frames, might not receive an acknowledgement for a long period of"} {"id": "q-en-quicwg-base-drafts-db3ccae93d2dfe3cc9431e962c1673f9b53f46590f5e720fa4f29cab954781f7", "old_text": "21.12.1. The QUIC handshake incorporates the TLS 1.3 handshake and enjoys the cryptographic properties described in Appendix E.1 of TLS13. In addition to those properties, the handshake is intended to provide some defense against DoS attacks on the handshake, as described below. 21.12.1.1.", "comments": "We don't ever really say that QUIC depends on TLS providing the properties that we claim TLS provides. So it's worth pointing that exposure. This is more so because some of the properties we claim are dependent on these properties.\n(Cleaned up duplicate instances of the same comment.)\nAs per comment URL I'll note that there is MITM protection in QUIC, but it is not immediate which is roughly why we have the concept of handshake confirmed.\nThe options for triage are close it, mark as editorial, or mark as design. This has a needs-discussion, and I'm seeing a lack of any discussion. So you might see where my thought process is going... NAME this relates to your PR on the QUIC threat model. Are you willing to provide some comment that helps us triage the issue?\nI think that we can take this as editorial, as it merely requires that we capture already agreed principles (we don't protect against an adversary who can inject packets). NAME assistance would be useful, but I can produce text if nothing is forthcoming.\nI can always provide comments, no guarantees on how much they'll help triage IIRC, this is an issue filed from a comment on the threat model PR that was several-months merged to potentially increase the scope of what that covered. I think that the original threat model that NAME and I put together, covering the handshake and migration respectively, is intended to be the start of that section -- as additional items are discovered that could use coverage there to help clarify QUIC's stance on whatever issue, they should totally be added. I agree with NAME that adding such items seem editorial, as they aren't introducing any new requirements and are only capturing current realities as specified elsewhere. (Of course, as that text is written, it seems logical that putting it all in words next to each other may highlight issues, and if the WG decides that we don't like how the current spec results in a particular security stance, we can open a design issue to change normative text elsewhere.) 
So, I'd probably consider this editorial, and ask NAME to potentially contribute some text or clarify what he's looking to have covered by the original comment -- once that's clear, happy to help get a PR up if needed (or NAME text is always good too!).\nSince this concerns the handshake, I'm not too confident in contributing text but I'd be happy to review and discuss. I believe it is correct that the issue is covered in existing text elsewhere but as NAME says, it makes sense to cover it in one place. The risk is that the reader is not aware how important it is to reach a certain point in the connection before trusting data, and especially to those seeking to create derived versions. From various discussions I'm not sure the importance is broadly understood. Additionally, I'm not entirely convinced that the current text fully captures the problem - not that we necessarily can - for example various stateless transmissions / redirects / token issues - might assume a level of privacy that doesn't exist in all cases, thus requiring stronger integrity elsewhere such as in CIDs, tokens etc..\nThanks folks. It seems we have a path forward, so I'm removing the needs-discussion label. The editorial process will naturally generate some back and forth but that is the same for all editorial issues.", "new_text": "21.12.1. The QUIC handshake incorporates the TLS 1.3 handshake and enjoys the cryptographic properties described in Appendix E.1 of TLS13. Many of the security properties of QUIC depend on the TLS handshake providing these properties. Any attack on the TLS handshake could affect QUIC. Any attack on the TLS handshake that compromises the secrecy or uniqueness of session keys affects other security guarantees provided by QUIC that depends on these keys. For instance, migration (migration) depends on the efficacy of confidentiality protections, both for the negotiation of keys using the TLS handshake and for QUIC packet protection, to avoid linkability across network paths. An attack on the integrity of the TLS handshake might allow an attacker to affect the selection of application protocol or QUIC version. In addition to the properties provided by TLS, the QUIC handshake provides some defense against DoS attacks on the handshake. 21.12.1.1."} {"id": "q-en-quicwg-base-drafts-db3ccae93d2dfe3cc9431e962c1673f9b53f46590f5e720fa4f29cab954781f7", "old_text": "packets being encrypted with per-version keys and the Handshake and later packets being encrypted with keys derived from the TLS key exchange. Further, parameter negotiation is folded into the TLS transcript and thus provides the same security guarantees as ordinary TLS negotiation. Thus, an attacker can observe the client's transport parameters (as long as it knows the version-specific salt) but cannot observe the server's transport parameters and cannot influence parameter negotiation.", "comments": "We don't ever really say that QUIC depends on TLS providing the properties that we claim TLS provides. So it's worth pointing that exposure. This is more so because some of the properties we claim are dependent on these properties.\n(Cleaned up duplicate instances of the same comment.)\nAs per comment URL I'll note that there is MITM protection in QUIC, but it is not immediate which is roughly why we have the concept of handshake confirmed.\nThe options for triage are close it, mark as editorial, or mark as design. This has a needs-discussion, and I'm seeing a lack of any discussion. So you might see where my thought process is going... 
NAME this relates to your PR on the QUIC threat model. Are you willing to provide some comment that helps us triage the issue?\nI think that we can take this as editorial, as it merely requires that we capture already agreed principles (we don't protect against an adversary who can inject packets). NAME assistance would be useful, but I can produce text if nothing is forthcoming.\nI can always provide comments, no guarantees on how much they'll help triage IIRC, this is an issue filed from a comment on the threat model PR that was several-months merged to potentially increase the scope of what that covered. I think that the original threat model that NAME and I put together, covering the handshake and migration respectively, is intended to be the start of that section -- as additional items are discovered that could use coverage there to help clarify QUIC's stance on whatever issue, they should totally be added. I agree with NAME that adding such items seem editorial, as they aren't introducing any new requirements and are only capturing current realities as specified elsewhere. (Of course, as that text is written, it seems logical that putting it all in words next to each other may highlight issues, and if the WG decides that we don't like how the current spec results in a particular security stance, we can open a design issue to change normative text elsewhere.) So, I'd probably consider this editorial, and ask NAME to potentially contribute some text or clarify what he's looking to have covered by the original comment -- once that's clear, happy to help get a PR up if needed (or NAME text is always good too!).\nSince this concerns the handshake, I'm not too confident in contributing text but I'd be happy to review and discuss. I believe it is correct that the issue is covered in existing text elsewhere but as NAME says, it makes sense to cover it in one place. The risk is that the reader is not aware how important it is to reach a certain point in the connection before trusting data, and especially to those seeking to create derived versions. From various discussions I'm not sure the importance is broadly understood. Additionally, I'm not entirely convinced that the current text fully captures the problem - not that we necessarily can - for example various stateless transmissions / redirects / token issues - might assume a level of privacy that doesn't exist in all cases, thus requiring stronger integrity elsewhere such as in CIDs, tokens etc..\nThanks folks. It seems we have a path forward, so I'm removing the needs-discussion label. The editorial process will naturally generate some back and forth but that is the same for all editorial issues.", "new_text": "packets being encrypted with per-version keys and the Handshake and later packets being encrypted with keys derived from the TLS key exchange. Further, parameter negotiation is folded into the TLS transcript and thus provides the same integrity guarantees as ordinary TLS negotiation. An attacker can observe the client's transport parameters (as long as it knows the version-specific keys) but cannot observe the server's transport parameters and cannot influence parameter negotiation."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "6.2.3. Stream types of the format \"0x1f * N + 0x21\" for integer values of N are reserved to exercise the requirement that unknown types be ignored. 
These streams have no semantics, and can be sent when application-layer padding is desired. They MAY also be sent on connections where no data is currently being transferred. Endpoints MUST NOT consider these streams to have any meaning upon receipt.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "6.2.3. Stream types of the format \"0x1f * N + 0x21\" for non-negative integer values of N are reserved to exercise the requirement that unknown types be ignored. These streams have no semantics, and can be sent when application-layer padding is desired. They MAY also be sent on connections where no data is currently being transferred. Endpoints MUST NOT consider these streams to have any meaning upon receipt."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "The default value is unlimited. See header-formatting for usage. Setting identifiers of the format \"0x1f * N + 0x21\" for integer values of N are reserved to exercise the requirement that unknown identifiers be ignored. Such settings have no defined meaning. Endpoints SHOULD include at least one such setting in their SETTINGS frame. Endpoints MUST NOT consider such settings to have any meaning upon receipt. Because the setting has no defined meaning, the value of the setting can be any value the implementation selects.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "The default value is unlimited. See header-formatting for usage. Setting identifiers of the format \"0x1f * N + 0x21\" for non-negative integer values of N are reserved to exercise the requirement that unknown identifiers be ignored. Such settings have no defined meaning. 
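A one-line check captures the corrected wording: a codepoint is reserved exactly when it is at least 0x21 and sits on the 0x1f grid anchored at 0x21, which excludes the 0x02 (N = -1) reading discussed above.

   # Reserved codepoints are 0x1f * N + 0x21 for non-negative integers N,
   # i.e. 0x21, 0x40, 0x5f, ... ; 0x02 (the N = -1 reading) is not reserved.
   def is_reserved(value):
       return value >= 0x21 and (value - 0x21) % 0x1f == 0

   assert is_reserved(0x21) and is_reserved(0x40) and is_reserved(0x5f)
   assert not is_reserved(0x02)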
Endpoints SHOULD include at least one such setting in their SETTINGS frame. Endpoints MUST NOT consider such settings to have any meaning upon receipt. Because the setting has no defined meaning, the value of the setting can be any value the implementation selects."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "7.2.8. Frame types of the format \"0x1f * N + 0x21\" for integer values of N are reserved to exercise the requirement that unknown types be ignored (extensions). These frames have no semantics, and can be sent on any open stream when application-layer padding is desired. They MAY also be sent on connections where no data is currently being transferred. Endpoints MUST NOT consider these frames to have any meaning upon receipt. The payload and length of the frames are selected in any manner the implementation chooses.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "7.2.8. Frame types of the format \"0x1f * N + 0x21\" for non-negative integer values of N are reserved to exercise the requirement that unknown types be ignored (extensions). These frames have no semantics, and can be sent on any open stream when application-layer padding is desired. They MAY also be sent on connections where no data is currently being transferred. Endpoints MUST NOT consider these frames to have any meaning upon receipt. The payload and length of the frames are selected in any manner the implementation chooses."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "The requested operation cannot be served over HTTP/3. The peer should retry over HTTP/1.1. Error codes of the format \"0x1f * N + 0x21\" for integer values of N are reserved to exercise the requirement that unknown error codes be treated as equivalent to H3_NO_ERROR (extensions). Implementations SHOULD select an error code from this space with some probability when they would have sent H3_NO_ERROR. 9.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. 
Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "The requested operation cannot be served over HTTP/3. The peer should retry over HTTP/1.1. Error codes of the format \"0x1f * N + 0x21\" for non-negative integer values of N are reserved to exercise the requirement that unknown error codes be treated as equivalent to H3_NO_ERROR (extensions). Implementations SHOULD select an error code from this space with some probability when they would have sent H3_NO_ERROR. 9."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "The entries in iana-frame-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 11.2.2.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "The entries in iana-frame-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 11.2.2."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "The entries in iana-setting-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 11.2.3.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. 
Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "The entries in iana-setting-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 11.2.3."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "The entries in the iana-error-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 11.2.4.", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "The entries in the iana-error-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 11.2.4."} {"id": "q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7", "old_text": "The entries in the following table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 12. References", "comments": "Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. 
Alternative wordings could be: \"0x1f N + 0x21 for N = 0, 1, 2, ...\" \"0x1f N + 0x02 for positive integer values of N\" \"0x1f N + 0x02 for N = 1, 2, 3, ...\" \"0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ...\" \"a value of at least 0x21 with a remainder of 0x02 modulo 0x1f\" \"a value of at least 0x21 that is congruent to 0x21 modulo 0x1f\" none of which is better than what this PR proposes.", "new_text": "The entries in the following table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, \"0x21\", \"0x40\", ..., through \"0x3FFFFFFFFFFFFFFE\") MUST NOT be assigned by IANA. 12. References"} {"id": "q-en-quicwg-base-drafts-893e9ddeda21a96f12134ef2bdbb80ec4b47c361f77bfc0dff538fff495846d0", "old_text": "If TLS experiences an error, it generates an appropriate alert as defined in Section 6 of TLS13. A TLS alert is turned into a QUIC connection error by converting the one-byte alert description into a QUIC error code. The alert description is added to 0x100 to produce a QUIC error code from the range reserved for CRYPTO_ERROR. The resulting value is sent in a QUIC CONNECTION_CLOSE frame of type 0x1c.", "comments": "This defines a limit on the number of packets that can fail authentication before you have to use new keys. There is a big hole here in that AES-CCM (that is, the AEAD based on CBC-MAC) is currently permitted, but we have no analysis to support either the confidentiality limits in TLS 1.3 or the integrity limits in this document. It is probably OK, but that is not the standard we apply here. So this might have to remain open until we get some sort of resolution on that issue. My initial opinion is to cut CCM from the draft until/unless an analysis is produced.\nIn the interest of transparency, I have added one more commit here. When I did the same for DTLS, ekr observed that if you have updated, you might safely stop trying to accept packets with the old keys rather than killing the connection. That results in loss for any packets that genuinely did want to use the exhausted keys, but that is probably less disruptive than having the connection drop. Of course, you can't always update, so closure is still likely necessary in some cases.\nFor what it's worth, Martin, Felix, and I put together a document which might help users do this themselves: URL\nNAME Could you use that to come up with 2^16 limits and put them in the doc? I\u2019m really wary of letting implementors roll their own\nSure! Can you please file an issue with that request? Noting anything else you'd like spelled out in detail would be helpful, too.\ntl;dr We need to recommend limits on the number of failed decryptions. At one of the side discussions concentrated on the quality of the analysis of the and the applicability of that analysis to QUIC. Felix G\u00fcnther, along with Marc Fischlin, Christian Janson, and Kenny Paterson have done a little analysis, based on and the we used to set limits in TLS. There are two relevant limits in this analysis: The confidentiality bound, which is usually expressed in terms of advantage an attacker gains in being able to break confidentiality of records/packets as they get more data from valid packets. For instance, in AES-GCM, the analysis shows that after seeing 2^24.5 packets (almost 24 million) the attacker has a 2^-60 better chance than pure chance of winning the IND-CPA game. The forgery bound, which is again expressed as an probably of an attacker successfully creating a forged record/packet. 
Thanks to the analysis done previously, we know that AES-GCM and ChaCha20+Poly1305 have a forgery chance of 2^-57 after 2^60 and 2^36 forgery attempts respectively. The current text just points to TLS 1.3, which recommends the use of key updates to protect confidentiality. But TLS has a forgery tolerance of 0: a single failed forgery attempt causes the connection to break. QUIC (and DTLS) allow for multiple forgeries to be ignored, so we need to factor that in. As discussed in if we want to maintain security, we need to consider both confidentiality and integrity. For that we need two independent limits: The number of packets that are successfully encrypted and decrypted. The number of packets that are unsuccessfully decrypted. We have the former already. The latter is what this issue tracks. Once the limit is at risk of being exceeded, updating keys will force the attacker to start over. Based on Felix's recommendation I am going to suggest that we limit forgery attempts to 2^36. I have confirmed this with a close reading of the paper and concur. There is a significant difference here between AES-GCM and ChaCha20+Poly1305 but we believe that setting a single limit is the best approach. I will create a pull request to document this approach shortly. We could split the recommendation and allow a larger limit for AES-GCM, but I don't believe that 2^36 (~68 billion) is a significant barrier. For reference, if you were able to saturate a 10Gbps link with 50 byte IP packets that contained plausible QUIC packets, and the recipient didn't treat this as DoS traffic, it would take 5 minutes. If we assume that are more likely representative of potential packet rates, then the limit goes up to 10s of hours. I would point out that that volume of unproductive traffic is probably a good reason to use something other than a key update to deal with the sender. A reminder that the underlying analysis assumes that packets are all 2^14 in size, which is the record size in TLS. This is ~10 times larger than typical MTU on the Internet and many QUIC packets might be smaller than that, so - at least for AES-GCM - recommending a 2^36 limit is more conservative than necessary. However, if we were to allow for different limits, we'd have to address the question of packet size, as a 2^16 byte packet will be used by some deployments. For ChaCha20+Poly1305, the only value that matters is the number of packets, so we avoid that question. Quoting from Felix's email for reference (with permission):\nNAME Can I request that you review this analysis for correctness? I know that you had reservations about the claims here.\nAnother (simplier?) option could be to just update keys much more often. I've done performance tests with msquic on the effect of updating the key ~once per round trip. There was no noticeable effect on performance. In fact, more often than not, the key update perf tests happen to perform slightly better (within noise tolerance) than without key update. I'm not suggesting we advocate once a round trip or anything near that; more like every 2^20 packets (or 2^30 bytes?) sent or received (including failed decryptions). As I understand it, this is an ultra conservative number as far as protecting from any kind of attacks, and should be free as far as any performance impact on the connection. 
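Mechanically, the two independent limits described above amount to two counters per connection; the thresholds below simply reuse figures from the discussion and are not normative.

   # Sketch of the two independent limits discussed above: packets protected
   # with the current keys, and packets that fail authentication.  Thresholds
   # reuse figures from the discussion and are illustrative only.
   CONFIDENTIALITY_LIMIT = 2 ** 24   # roughly the 2^24.5 figure, rounded down
   INTEGRITY_LIMIT = 2 ** 36         # suggested cap on forgery attempts

   class AeadUsage:
       def __init__(self):
           self.protected = 0
           self.failed = 0

       def on_packet_protected(self):
           self.protected += 1
           # Approaching the limit: initiate a key update.
           return self.protected >= CONFIDENTIALITY_LIMIT

       def on_auth_failure(self):
           self.failed += 1
           # Approaching the limit: update keys (forcing the attacker to start
           # over) or, if an update is not possible, close the connection.
           return self.failed >= INTEGRITY_LIMIT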
This also has the added benefit of exercising the key update scenario more often, which should hopefully improve interoperability of the feature.\nI support a change like this, but I am not qualified to say what the thresholds should be. There certainly isn't a reason to push it to very high limits. This change would not break interoperability, so it's a good one. Banning CCM might break interoperability for someone, but I'm not too concerned, particularly if we can update all the browsers to not offer it. A somewhat bigger concern is the lack of key update support in a lot of implementations: I read 9 of 15 servers as supporting it.\n(Tentative) good news on CCM: Kenny identified a security analysis paper and it seems relatively simple to go from that to numbers. I am not personally concerned about support for key update. 9/15 isn't 15, but that doesn't bother me. It turns out to be a little fiddly to implement, so if people decide to drop connections that hit these limits instead, the effect isn't that bad. Of course, if you want to follow Nick's suggestion, that works too (I remember Dan Bernstein seriously suggesting a cipher that updated every record, back when I first proposed this scheme for TLS).\nIf we're looking for a new bound to initiate a key update, there's precedent in the SSH RFC to rekey after every 1 GB of data transferred. See\nI should point out that my \"9 of 15\" figure is based on the interop matrix, where the client initiates an early update and the server responds. That is a different thing than a server initiating an update when required by the spec, which fewer than 9 servers might do.\nBased on by Jonsson, I have calculated the bounds on the number of forgeries and encryptions for CCM. The numbers aren't great, but that probably doesn't matter much for the purposes to which CCM is generally put. (I'm using the notation from the here because that is easier to type than some of what is being used in other papers. For reference, q is the number of genuine encryptions, l is the length of each in 16 byte blocks, v is the number of forgeries, n is the key length in bits, t is the tag length in bits.) Jonsson puts the advantage an attacker has over a generic PRF at . If we assume the same numbers in the AEBounds paper for record/packet size, (this is for consistency, most QUIC packets will be much smaller, in the order of 2^7, but we might also go as high as 2^12; this being dictated by the MTU). For AES, . To keep the advantage an attacker has to 2^-60 (to match the analysis in TLS), we therefore need to keep q to below 2^24.5. Somewhat unsurprisingly, that matches the numbers we have for AES-GCM. Jonsson puts the advantage for an attacker over a generic PRF at . As the first term is negligible even for large v (up to 2^64), we consider the second term alone and aim for a bound on the advantage of 2^-57 to match the analysis for other ciphers. That leaves us with . As q is already established to be 2^24.5, we can say that v should be limited to 2^25 (or if you want to be precise). (As a side note, for t=64, as in AEADAES128CCM8, the first term becomes relevant and our security bound limits the number of forgeries to 2^7, which is probably a bit limiting in practice. That's good justification for not enabling AEADAES128CCM_8 by default; though different applications might use a different target bound than 2^-57.) I'm an not a cryptographer. These papers all read like chicken-chicken-chicken. 
My calculations are not infallible either.\nThe shape of the bound seems plausible but I'll start a thread with the cryptographers to check everything\nI reproduced Martin's analysis of CCM based on of the bounds by . (Myself not being an expert in producing such bounds, I found the second-level confirmation by Rogaway to be helpful.) I essentially agree with the numbers, except that --when following the same metric as the -- the advantage over an ideal PRP (instead of PRF) should be considered. This simply means losing a 1/2 factor in the bound: Confidentiality bound: yields an advantage of . I'd hence say it's fine to keep TLS 1.3's recommendation on sending max. full-sized records. Integrity bound: , hence yields an advantage of (matching the other analyses). Most importantly, I concur with Martin that AEADAES128CCM8 would need either much lower bounds or a different risk assessment: applying the same limit of would bring the advantage incurred through the bound up to . Would it still be the best approach to define the same anti-forgery limit across all cipher suites, even if this means going down to ? Of course, I'm happy to discuss more with NAME !\nThanks for checking this NAME I was a little concerned about the PRP/PRF split, so I'm glad to have you correct me. I ultimately went with per-AEAD recommendations, and a lower limit for CCM. That's mainly to establish some sort of uniformity around 2^-60 and 2^-57. Even if those are basically arbitrary choices, at least they are defensible (I think!) and using them uniformly establishes the right expectations about what the standard is. I'm comfortable with specifying different limits for each AEAD in specifications. In practice, however, I expect that a far lower tolerance for forgery attempts. Assuming that these numbers work out, a review of the pull request would be greatly appreciated. If there are other relevant papers, I'm always happy to pull those in as well.\nI realize that I didn't respond to NAME That advice applies to the limits that TLS specifies already; this issue is primarily about responding to forgeries. That said, 1G is pretty good advice. However what we've seen from this analysis that 1G results in a lower safety margin than TLS aims for. But it's close enough that if the AEAD is as good as the set we have here, you are probably OK. However, this does depend a lot on the specific AEAD. I'd be uncomfortable using so simple a guide in the general case.\nNAME points out an error in my calculation. I had based my calculations on direct mapping of in the CCM analysis to in our calculations. That is, was the number of blocks in the message. But the analysis by Jonsson defines this as: ! The definition of Beta is the CCM function, so this function really reduces to 2 times the length of the message (in blocks), plus 1 (to account for the additional encryption). We can ignore the extra 1 as that is absorbed by the tag length. I'm updating the numbers in the PR. The result is another halving of the number of packets. NAME NAME it's pretty clear that I'm out of my depth here, so your input would be highly valued.\nGood catch, yes, the double passing over the message requires a factor 2 here. (The AD a may also be up to 3 blocks if I understand the header format correctly, but I guess that's negligible compared to the conservative 2^10 message size.)\nGood point about the AD. The AD - at least for QUIC - is always sent, so that fits in the same space. 
We don't necessarily need to double count the tiny number of blocks, but as you say, counting that only once yields a negligible difference. If we had a large AD that wasn't also transmitted, that would be different.\nAgreed. To be precise: if I'm not mistaken the AD isn't counted at all right now ( accounts for the max. payload blocks), but of course the difference is still negligible as long as the AD is small.\nThanks for the edits!", "new_text": "If TLS experiences an error, it generates an appropriate alert as defined in Section 6 of TLS13. A TLS alert is converted into a QUIC connection error. The alert description is added to 0x100 to produce a QUIC error code from the range reserved for CRYPTO_ERROR. The resulting value is sent in a QUIC CONNECTION_CLOSE frame of type 0x1c."} {"id": "q-en-quicwg-base-drafts-893e9ddeda21a96f12134ef2bdbb80ec4b47c361f77bfc0dff538fff495846d0", "old_text": "The alert level of all TLS alerts is \"fatal\"; a TLS stack MUST NOT generate alerts at the \"warning\" level. 4.11. After QUIC moves to a new encryption level, packet protection keys", "comments": "This defines a limit on the number of packets that can fail authentication before you have to use new keys. There is a big hole here in that AES-CCM (that is, the AEAD based on CBC-MAC) is currently permitted, but we have no analysis to support either the confidentiality limits in TLS 1.3 or the integrity limits in this document. It is probably OK, but that is not the standard we apply here. So this might have to remain open until we get some sort of resolution on that issue. My initial opinion is to cut CCM from the draft until/unless an analysis is produced.\nIn the interest of transparency, I have added one more commit here. When I did the same for DTLS, ekr observed that if you have updated, you might safely stop trying to accept packets with the old keys rather than killing the connection. That results in loss for any packets that genuinely did want to use the exhausted keys, but that is probably less disruptive than having the connection drop. Of course, you can't always update, so closure is still likely necessary in some cases.\nFor what it's worth, Martin, Felix, and I put together a document which might help users do this themselves: URL\nNAME Could you use that to come up with 2^16 limits and put them in the doc? I\u2019m really wary of letting implementors roll their own\nSure! Can you please file an issue with that request? Noting anything else you'd like spelled out in detail would be helpful, too.\ntl;dr We need to recommend limits on the number of failed decryptions. At one of the side discussions concentrated on the quality of the analysis of the and the applicability of that analysis to QUIC. Felix G\u00fcnther, along with Marc Fischlin, Christian Janson, and Kenny Paterson have done a little analysis, based on and the we used to set limits in TLS. There are two relevant limits in this analysis: The confidentiality bound, which is usually expressed in terms of advantage an attacker gains in being able to break confidentiality of records/packets as they get more data from valid packets. For instance, in AES-GCM, the analysis shows that after seeing 2^24.5 packets (almost 24 million) the attacker has a 2^-60 better chance than pure chance of winning the IND-CPA game. The forgery bound, which is again expressed as an probably of an attacker successfully creating a forged record/packet. 
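Stepping back to the draft text earlier in this record: the TLS-alert-to-QUIC-error mapping it describes is mechanical (0x100 plus the alert description, carried in a CONNECTION_CLOSE frame of type 0x1c), and a minimal sketch makes the two alert codes that appear in this section concrete. The helper name is illustrative only.

```python
# Minimal sketch of the mapping described in the draft text above: a QUIC
# crypto error is 0x100 plus the TLS alert description, carried in a
# CONNECTION_CLOSE frame of type 0x1c.  The helper name is illustrative.
CRYPTO_ERROR_BASE = 0x100

HANDSHAKE_FAILURE = 40            # TLS alert 0x28
NO_APPLICATION_PROTOCOL = 120     # TLS alert 0x78

def quic_error_from_alert(alert_description: int) -> int:
    if not 0 <= alert_description <= 0xFF:
        raise ValueError("TLS alert descriptions are a single byte")
    return CRYPTO_ERROR_BASE + alert_description

assert quic_error_from_alert(HANDSHAKE_FAILURE) == 0x128
assert quic_error_from_alert(NO_APPLICATION_PROTOCOL) == 0x178
```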
Thanks to the analysis done previously, we know that AES-GCM and ChaCha20+Poly1305 have a forgery chance of 2^-57 after 2^60 and 2^36 forgery attempts respectively. The current text just points to TLS 1.3, which recommends the use of key updates to protect confidentiality. But TLS has a forgery tolerance of 0: a single failed forgery attempt causes the connection to break. QUIC (and DTLS) allow for multiple forgeries to be ignored, so we need to factor that in. As discussed in if we want to maintain security, we need to consider both confidentiality and integrity. For that we need two independent limits: The number of packets that are successfully encrypted and decrypted. The number of packets that are unsuccessfully decrypted. We have the former already. The latter is what this issue tracks. Once the limit is at risk of being exceeded, updating keys will force the attacker to start over. Based on Felix's recommendation I am going to suggest that we limit forgery attempts to 2^36. I have confirmed this with a close reading of the paper and concur. There is a significant difference here between AES-GCM and ChaCha20+Poly1305 but we believe that setting a single limit is the best approach. I will create a pull request to document this approach shortly. We could split the recommendation and allow a larger limit for AES-GCM, but I don't believe that 2^36 (~68 billion) is a significant barrier. For reference, if you were able to saturate a 10Gbps link with 50 byte IP packets that contained plausible QUIC packets, and the recipient didn't treat this as DoS traffic, it would take 5 minutes. If we assume that are more likely representative of potential packet rates, then the limit goes up to 10s of hours. I would point out that that volume of unproductive traffic is probably a good reason to use something other than a key update to deal with the sender. A reminder that the underlying analysis assumes that packets are all 2^14 in size, which is the record size in TLS. This is ~10 times larger than typical MTU on the Internet and many QUIC packets might be smaller than that, so - at least for AES-GCM - recommending a 2^36 limit is more conservative than necessary. However, if we were to allow for different limits, we'd have to address the question of packet size, as a 2^16 byte packet will be used by some deployments. For ChaCha20+Poly1305, the only value that matters is the number of packets, so we avoid that question. Quoting from Felix's email for reference (with permission):\nNAME Can I request that you review this analysis for correctness? I know that you had reservations about the claims here.\nAnother (simplier?) option could be to just update keys much more often. I've done performance tests with msquic on the effect of updating the key ~once per round trip. There was no noticeable effect on performance. In fact, more often than not, the key update perf tests happen to perform slightly better (within noise tolerance) than without key update. I'm not suggesting we advocate once a round trip or anything near that; more like every 2^20 packets (or 2^30 bytes?) sent or received (including failed decryptions). As I understand it, this is an ultra conservative number as far as protecting from any kind of attacks, and should be free as far as any performance impact on the connection. 
This also has the added benefit of exercising the key update scenario more often, which should hopefully improve interoperability of the feature.\nI support a change like this, but I am not qualified to say what the thresholds should be. There certainly isn't a reason to push it to very high limits. This change would not break interoperability, so it's a good one. Banning CCM might break interoperability for someone, but I'm not too concerned, particularly if we can update all the browsers to not offer it. A somewhat bigger concern is the lack of key update support in a lot of implementations: I read 9 of 15 servers as supporting it.\n(Tentative) good news on CCM: Kenny identified a security analysis paper and it seems relatively simple to go from that to numbers. I am not personally concerned about support for key update. 9/15 isn't 15, but that doesn't bother me. It turns out to be a little fiddly to implement, so if people decide to drop connections that hit these limits instead, the effect isn't that bad. Of course, if you want to follow Nick's suggestion, that works too (I remember Dan Bernstein seriously suggesting a cipher that updated every record, back when I first proposed this scheme for TLS).\nIf we're looking for a new bound to initiate a key update, there's precedent in the SSH RFC to rekey after every 1 GB of data transferred. See\nI should point out that my \"9 of 15\" figure is based on the interop matrix, where the client initiates an early update and the server responds. That is a different thing than a server initiating an update when required by the spec, which fewer than 9 servers might do.\nBased on by Jonsson, I have calculated the bounds on the number of forgeries and encryptions for CCM. The numbers aren't great, but that probably doesn't matter much for the purposes to which CCM is generally put. (I'm using the notation from the here because that is easier to type than some of what is being used in other papers. For reference, q is the number of genuine encryptions, l is the length of each in 16 byte blocks, v is the number of forgeries, n is the key length in bits, t is the tag length in bits.) Jonsson puts the advantage an attacker has over a generic PRF at . If we assume the same numbers in the AEBounds paper for record/packet size, (this is for consistency, most QUIC packets will be much smaller, in the order of 2^7, but we might also go as high as 2^12; this being dictated by the MTU). For AES, . To keep the advantage an attacker has to 2^-60 (to match the analysis in TLS), we therefore need to keep q to below 2^24.5. Somewhat unsurprisingly, that matches the numbers we have for AES-GCM. Jonsson puts the advantage for an attacker over a generic PRF at . As the first term is negligible even for large v (up to 2^64), we consider the second term alone and aim for a bound on the advantage of 2^-57 to match the analysis for other ciphers. That leaves us with . As q is already established to be 2^24.5, we can say that v should be limited to 2^25 (or if you want to be precise). (As a side note, for t=64, as in AEADAES128CCM8, the first term becomes relevant and our security bound limits the number of forgeries to 2^7, which is probably a bit limiting in practice. That's good justification for not enabling AEADAES128CCM_8 by default; though different applications might use a different target bound than 2^-57.) I'm an not a cryptographer. These papers all read like chicken-chicken-chicken. 
My calculations are not infallible either.\nThe shape of the bound seems plausible but I'll start a thread with the cryptographers to check everything\nI reproduced Martin's analysis of CCM based on of the bounds by . (Myself not being an expert in producing such bounds, I found the second-level confirmation by Rogaway to be helpful.) I essentially agree with the numbers, except that --when following the same metric as the -- the advantage over an ideal PRP (instead of PRF) should be considered. This simply means losing a 1/2 factor in the bound: Confidentiality bound: yields an advantage of . I'd hence say it's fine to keep TLS 1.3's recommendation on sending max. full-sized records. Integrity bound: , hence yields an advantage of (matching the other analyses). Most importantly, I concur with Martin that AEADAES128CCM8 would need either much lower bounds or a different risk assessment: applying the same limit of would bring the advantage incurred through the bound up to . Would it still be the best approach to define the same anti-forgery limit across all cipher suites, even if this means going down to ? Of course, I'm happy to discuss more with NAME !\nThanks for checking this NAME I was a little concerned about the PRP/PRF split, so I'm glad to have you correct me. I ultimately went with per-AEAD recommendations, and a lower limit for CCM. That's mainly to establish some sort of uniformity around 2^-60 and 2^-57. Even if those are basically arbitrary choices, at least they are defensible (I think!) and using them uniformly establishes the right expectations about what the standard is. I'm comfortable with specifying different limits for each AEAD in specifications. In practice, however, I expect that a far lower tolerance for forgery attempts. Assuming that these numbers work out, a review of the pull request would be greatly appreciated. If there are other relevant papers, I'm always happy to pull those in as well.\nI realize that I didn't respond to NAME That advice applies to the limits that TLS specifies already; this issue is primarily about responding to forgeries. That said, 1G is pretty good advice. However what we've seen from this analysis that 1G results in a lower safety margin than TLS aims for. But it's close enough that if the AEAD is as good as the set we have here, you are probably OK. However, this does depend a lot on the specific AEAD. I'd be uncomfortable using so simple a guide in the general case.\nNAME points out an error in my calculation. I had based my calculations on direct mapping of in the CCM analysis to in our calculations. That is, was the number of blocks in the message. But the analysis by Jonsson defines this as: ! The definition of Beta is the CCM function, so this function really reduces to 2 times the length of the message (in blocks), plus 1 (to account for the additional encryption). We can ignore the extra 1 as that is absorbed by the tag length. I'm updating the numbers in the PR. The result is another halving of the number of packets. NAME NAME it's pretty clear that I'm out of my depth here, so your input would be highly valued.\nGood catch, yes, the double passing over the message requires a factor 2 here. (The AD a may also be up to 3 blocks if I understand the header format correctly, but I guess that's negligible compared to the conservative 2^10 message size.)\nGood point about the AD. The AD - at least for QUIC - is always sent, so that fits in the same space. 
We don't necessarily need to double count the tiny number of blocks, but as you say, counting that only once yields a negligible difference. If we had a large AD that wasn't also transmitted, that would be different.\nAgreed. To be precise: if I'm not mistaken the AD isn't counted at all right now ( accounts for the max. payload blocks), but of course the difference is still negligible as long as the AD is small.\nThanks for the edits!", "new_text": "The alert level of all TLS alerts is \"fatal\"; a TLS stack MUST NOT generate alerts at the \"warning\" level. QUIC permits the use of a generic code in place of a specific error code; see Section 11 of QUIC-TRANSPORT. For TLS alerts, this includes replacing any alert with a generic alert, such as handshake_failure (0x128 in QUIC). Endpoints MAY use a generic error code to avoid possibly exposing confidential information. 4.11. After QUIC moves to a new encryption level, packet protection keys"} {"id": "q-en-quicwg-base-drafts-893e9ddeda21a96f12134ef2bdbb80ec4b47c361f77bfc0dff538fff495846d0", "old_text": "Key updates MUST be initiated before usage limits on packet protection keys are exceeded. For the cipher suites mentioned in this document, the limits in Section 5.5 of TLS13 apply. Other cipher suites MUST define usage limits in order to be used with QUIC. 6.7.", "comments": "This defines a limit on the number of packets that can fail authentication before you have to use new keys. There is a big hole here in that AES-CCM (that is, the AEAD based on CBC-MAC) is currently permitted, but we have no analysis to support either the confidentiality limits in TLS 1.3 or the integrity limits in this document. It is probably OK, but that is not the standard we apply here. So this might have to remain open until we get some sort of resolution on that issue. My initial opinion is to cut CCM from the draft until/unless an analysis is produced.\nIn the interest of transparency, I have added one more commit here. When I did the same for DTLS, ekr observed that if you have updated, you might safely stop trying to accept packets with the old keys rather than killing the connection. That results in loss for any packets that genuinely did want to use the exhausted keys, but that is probably less disruptive than having the connection drop. Of course, you can't always update, so closure is still likely necessary in some cases.\nFor what it's worth, Martin, Felix, and I put together a document which might help users do this themselves: URL\nNAME Could you use that to come up with 2^16 limits and put them in the doc? I\u2019m really wary of letting implementors roll their own\nSure! Can you please file an issue with that request? Noting anything else you'd like spelled out in detail would be helpful, too.\ntl;dr We need to recommend limits on the number of failed decryptions. At one of the side discussions concentrated on the quality of the analysis of the and the applicability of that analysis to QUIC. Felix G\u00fcnther, along with Marc Fischlin, Christian Janson, and Kenny Paterson have done a little analysis, based on and the we used to set limits in TLS. There are two relevant limits in this analysis: The confidentiality bound, which is usually expressed in terms of advantage an attacker gains in being able to break confidentiality of records/packets as they get more data from valid packets. 
For instance, in AES-GCM, the analysis shows that after seeing 2^24.5 packets (almost 24 million) the attacker has a 2^-60 better chance than pure chance of winning the IND-CPA game. The forgery bound, which is again expressed as an probably of an attacker successfully creating a forged record/packet. Thanks to the analysis done previously, we know that AES-GCM and ChaCha20+Poly1305 have a forgery chance of 2^-57 after 2^60 and 2^36 forgery attempts respectively. The current text just points to TLS 1.3, which recommends the use of key updates to protect confidentiality. But TLS has a forgery tolerance of 0: a single failed forgery attempt causes the connection to break. QUIC (and DTLS) allow for multiple forgeries to be ignored, so we need to factor that in. As discussed in if we want to maintain security, we need to consider both confidentiality and integrity. For that we need two independent limits: The number of packets that are successfully encrypted and decrypted. The number of packets that are unsuccessfully decrypted. We have the former already. The latter is what this issue tracks. Once the limit is at risk of being exceeded, updating keys will force the attacker to start over. Based on Felix's recommendation I am going to suggest that we limit forgery attempts to 2^36. I have confirmed this with a close reading of the paper and concur. There is a significant difference here between AES-GCM and ChaCha20+Poly1305 but we believe that setting a single limit is the best approach. I will create a pull request to document this approach shortly. We could split the recommendation and allow a larger limit for AES-GCM, but I don't believe that 2^36 (~68 billion) is a significant barrier. For reference, if you were able to saturate a 10Gbps link with 50 byte IP packets that contained plausible QUIC packets, and the recipient didn't treat this as DoS traffic, it would take 5 minutes. If we assume that are more likely representative of potential packet rates, then the limit goes up to 10s of hours. I would point out that that volume of unproductive traffic is probably a good reason to use something other than a key update to deal with the sender. A reminder that the underlying analysis assumes that packets are all 2^14 in size, which is the record size in TLS. This is ~10 times larger than typical MTU on the Internet and many QUIC packets might be smaller than that, so - at least for AES-GCM - recommending a 2^36 limit is more conservative than necessary. However, if we were to allow for different limits, we'd have to address the question of packet size, as a 2^16 byte packet will be used by some deployments. For ChaCha20+Poly1305, the only value that matters is the number of packets, so we avoid that question. Quoting from Felix's email for reference (with permission):\nNAME Can I request that you review this analysis for correctness? I know that you had reservations about the claims here.\nAnother (simplier?) option could be to just update keys much more often. I've done performance tests with msquic on the effect of updating the key ~once per round trip. There was no noticeable effect on performance. In fact, more often than not, the key update perf tests happen to perform slightly better (within noise tolerance) than without key update. I'm not suggesting we advocate once a round trip or anything near that; more like every 2^20 packets (or 2^30 bytes?) sent or received (including failed decryptions). 
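Whatever thresholds end up being chosen, the bookkeeping an endpoint needs is small: one counter for packets protected under the current keys and one for packets that failed authentication, each compared against a per-AEAD limit. A rough sketch follows; the limit values are the ones discussed in this thread, and the class layout and names are purely illustrative.

```python
# Rough sketch of the bookkeeping: count packets protected under the current
# keys and packets that failed authentication, and ask for a key update (or
# close the connection) before either per-AEAD limit is reached.  Limit
# values are the ones discussed in this thread; the class layout and names
# are illustrative, not taken from any existing stack.

AEAD_LIMITS = {
    # aead: (confidentiality_limit, integrity_limit) in packets
    "AEAD_AES_128_GCM":       (2 ** 24.5, 2 ** 36),
    "AEAD_AES_256_GCM":       (2 ** 24.5, 2 ** 36),
    "AEAD_CHACHA20_POLY1305": (None,      2 ** 36),  # no packet-count confidentiality figure quoted here
    "AEAD_AES_128_CCM":       (2 ** 23,   2 ** 23.5),
}

class KeyPhaseUsage:
    """Per-key-phase usage counters compared against per-AEAD limits."""

    def __init__(self, aead: str):
        self.conf_limit, self.integ_limit = AEAD_LIMITS[aead]
        self.protected = 0      # packets encrypted or successfully decrypted
        self.failed_auth = 0    # packets that failed authentication

    def on_packet_protected(self) -> None:
        self.protected += 1

    def on_auth_failure(self) -> None:
        self.failed_auth += 1

    def needs_key_update(self) -> bool:
        over_conf = self.conf_limit is not None and self.protected >= self.conf_limit
        return over_conf or self.failed_auth >= self.integ_limit

# When needs_key_update() reports True, initiate a key update; if an update
# is not possible, close the connection instead.
```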
As I understand it, this is an ultra conservative number as far as protecting from any kind of attacks, and should be free as far as any performance impact on the connection. This also has the added benefit of exercising the key update scenario more often, which should hopefully improve interoperability of the feature.\nI support a change like this, but I am not qualified to say what the thresholds should be. There certainly isn't a reason to push it to very high limits. This change would not break interoperability, so it's a good one. Banning CCM might break interoperability for someone, but I'm not too concerned, particularly if we can update all the browsers to not offer it. A somewhat bigger concern is the lack of key update support in a lot of implementations: I read 9 of 15 servers as supporting it.\n(Tentative) good news on CCM: Kenny identified a security analysis paper and it seems relatively simple to go from that to numbers. I am not personally concerned about support for key update. 9/15 isn't 15, but that doesn't bother me. It turns out to be a little fiddly to implement, so if people decide to drop connections that hit these limits instead, the effect isn't that bad. Of course, if you want to follow Nick's suggestion, that works too (I remember Dan Bernstein seriously suggesting a cipher that updated every record, back when I first proposed this scheme for TLS).\nIf we're looking for a new bound to initiate a key update, there's precedent in the SSH RFC to rekey after every 1 GB of data transferred. See\nI should point out that my \"9 of 15\" figure is based on the interop matrix, where the client initiates an early update and the server responds. That is a different thing than a server initiating an update when required by the spec, which fewer than 9 servers might do.\nBased on by Jonsson, I have calculated the bounds on the number of forgeries and encryptions for CCM. The numbers aren't great, but that probably doesn't matter much for the purposes to which CCM is generally put. (I'm using the notation from the here because that is easier to type than some of what is being used in other papers. For reference, q is the number of genuine encryptions, l is the length of each in 16 byte blocks, v is the number of forgeries, n is the key length in bits, t is the tag length in bits.) Jonsson puts the advantage an attacker has over a generic PRF at . If we assume the same numbers in the AEBounds paper for record/packet size, (this is for consistency, most QUIC packets will be much smaller, in the order of 2^7, but we might also go as high as 2^12; this being dictated by the MTU). For AES, . To keep the advantage an attacker has to 2^-60 (to match the analysis in TLS), we therefore need to keep q to below 2^24.5. Somewhat unsurprisingly, that matches the numbers we have for AES-GCM. Jonsson puts the advantage for an attacker over a generic PRF at . As the first term is negligible even for large v (up to 2^64), we consider the second term alone and aim for a bound on the advantage of 2^-57 to match the analysis for other ciphers. That leaves us with . As q is already established to be 2^24.5, we can say that v should be limited to 2^25 (or if you want to be precise). (As a side note, for t=64, as in AEADAES128CCM8, the first term becomes relevant and our security bound limits the number of forgeries to 2^7, which is probably a bit limiting in practice. 
That's good justification for not enabling AEADAES128CCM_8 by default; though different applications might use a different target bound than 2^-57.) I'm an not a cryptographer. These papers all read like chicken-chicken-chicken. My calculations are not infallible either.\nThe shape of the bound seems plausible but I'll start a thread with the cryptographers to check everything\nI reproduced Martin's analysis of CCM based on of the bounds by . (Myself not being an expert in producing such bounds, I found the second-level confirmation by Rogaway to be helpful.) I essentially agree with the numbers, except that --when following the same metric as the -- the advantage over an ideal PRP (instead of PRF) should be considered. This simply means losing a 1/2 factor in the bound: Confidentiality bound: yields an advantage of . I'd hence say it's fine to keep TLS 1.3's recommendation on sending max. full-sized records. Integrity bound: , hence yields an advantage of (matching the other analyses). Most importantly, I concur with Martin that AEADAES128CCM8 would need either much lower bounds or a different risk assessment: applying the same limit of would bring the advantage incurred through the bound up to . Would it still be the best approach to define the same anti-forgery limit across all cipher suites, even if this means going down to ? Of course, I'm happy to discuss more with NAME !\nThanks for checking this NAME I was a little concerned about the PRP/PRF split, so I'm glad to have you correct me. I ultimately went with per-AEAD recommendations, and a lower limit for CCM. That's mainly to establish some sort of uniformity around 2^-60 and 2^-57. Even if those are basically arbitrary choices, at least they are defensible (I think!) and using them uniformly establishes the right expectations about what the standard is. I'm comfortable with specifying different limits for each AEAD in specifications. In practice, however, I expect that a far lower tolerance for forgery attempts. Assuming that these numbers work out, a review of the pull request would be greatly appreciated. If there are other relevant papers, I'm always happy to pull those in as well.\nI realize that I didn't respond to NAME That advice applies to the limits that TLS specifies already; this issue is primarily about responding to forgeries. That said, 1G is pretty good advice. However what we've seen from this analysis that 1G results in a lower safety margin than TLS aims for. But it's close enough that if the AEAD is as good as the set we have here, you are probably OK. However, this does depend a lot on the specific AEAD. I'd be uncomfortable using so simple a guide in the general case.\nNAME points out an error in my calculation. I had based my calculations on direct mapping of in the CCM analysis to in our calculations. That is, was the number of blocks in the message. But the analysis by Jonsson defines this as: ! The definition of Beta is the CCM function, so this function really reduces to 2 times the length of the message (in blocks), plus 1 (to account for the additional encryption). We can ignore the extra 1 as that is absorbed by the tag length. I'm updating the numbers in the PR. The result is another halving of the number of packets. NAME NAME it's pretty clear that I'm out of my depth here, so your input would be highly valued.\nGood catch, yes, the double passing over the message requires a factor 2 here. 
(The AD a may also be up to 3 blocks if I understand the header format correctly, but I guess that's negligible compared to the conservative 2^10 message size.)\nGood point about the AD. The AD - at least for QUIC - is always sent, so that fits in the same space. We don't necessarily need to double count the tiny number of blocks, but as you say, counting that only once yields a negligible difference. If we had a large AD that wasn't also transmitted, that would be different.\nAgreed. To be precise: if I'm not mistaken the AD isn't counted at all right now ( accounts for the max. payload blocks), but of course the difference is still negligible as long as the AD is small.\nThanks for the edits!", "new_text": "Key updates MUST be initiated before usage limits on packet protection keys are exceeded. For the cipher suites mentioned in this document, the limits in Section 5.5 of TLS13 apply. TLS13 does not specify a limit for AEAD_AES_128_CCM, but the analysis in ccm- bounds shows that a limit of 2^23 packets can be used to obtain the same confidentiality protection as the limits specified in TLS. The usage limits defined in TLS 1.3 exist for protection against attacks on confidentiality and apply to successful applications of AEAD protection. The integrity protections in authenticated encryption also depend on limiting the number of attempts to forge packets. TLS achieves this by closing connections after any record fails an authentication check. In comparison, QUIC ignores any packet that cannot be authenticated, allowing multiple forgery attempts. Endpoints MUST count the number of received packets that fail authentication for each set of keys. If the number of packets that fail authentication with the same key exceeds a limit that is specific to the AEAD in use, the endpoint MUST stop using those keys. Endpoints MUST initiate a key update before reaching this limit. If a key update is not possible, the endpoint MUST immediately close the connection. Applying a limit reduces the probability that an attacker is able to successfully forge a packet; see AEBounds and ROBUST. Due to the way that header protection protects the Key Phase, packets that are discarded are likely to have an even distribution of both Key Phase values. This means that packets that fail authentication will often use the packet protection keys from the next key phase. It is therefore necessary to also track the number of packets that fail authentication with the next set of packet protection keys. To avoid exhaustion of both sets of keys, it might be necessary to initiate two key updates in succession. For AEAD_AES_128_GCM, AEAD_AES_256_GCM, and AEAD_CHACHA20_POLY1305, the limit on the number of packets that fail authentication is 2^36. Note that the analysis in AEBounds supports a higher limit for the AEAD_AES_128_GCM and AEAD_AES_256_GCM, but this specification recommends a lower limit. For AEAD_AES_128_CCM, the limit on the number of packets that fail authentication is 2^23.5; see ccm-bounds. These limits were originally calculated using assumptions about the limits on TLS record size. The maximum size of a TLS record is 2^14 bytes. In comparison, QUIC packets can be up to 2^16 bytes. However, it is expected that QUIC packets will generally be smaller than TLS records. Where packets might be larger than 2^14 bytes in length, smaller limits might be needed. 
Any TLS cipher suite that is specified for use with QUIC MUST define limits on the use of the associated AEAD function that preserves margins for confidentiality and integrity. That is, limits MUST be specified for the number of packets that can be authenticated and for the number packets that can fail authentication. Providing a reference to any analysis upon which values are based - and any assumptions used in that analysis - allows limits to be adapted to varying usage conditions. 6.7."} {"id": "q-en-quicwg-base-drafts-893e9ddeda21a96f12134ef2bdbb80ec4b47c361f77bfc0dff538fff495846d0", "old_text": "protocol negotiation. TLS uses Application Layer Protocol Negotiation (ALPN) ALPN to select an application protocol. Unless another mechanism is used for agreeing on an application protocol, endpoints MUST use ALPN for this purpose. When using ALPN, endpoints MUST immediately close a connection (see Section 10.3 in QUIC- TRANSPORT) if an application protocol is not negotiated with a no_application_protocol TLS alert (QUIC error code 0x178, see tls- errors). While ALPN only specifies that servers use this alert, QUIC clients MUST also use it to terminate a connection when ALPN negotiation fails. An application protocol MAY restrict the QUIC versions that it can operate over. Servers MUST select an application protocol compatible", "comments": "This defines a limit on the number of packets that can fail authentication before you have to use new keys. There is a big hole here in that AES-CCM (that is, the AEAD based on CBC-MAC) is currently permitted, but we have no analysis to support either the confidentiality limits in TLS 1.3 or the integrity limits in this document. It is probably OK, but that is not the standard we apply here. So this might have to remain open until we get some sort of resolution on that issue. My initial opinion is to cut CCM from the draft until/unless an analysis is produced.\nIn the interest of transparency, I have added one more commit here. When I did the same for DTLS, ekr observed that if you have updated, you might safely stop trying to accept packets with the old keys rather than killing the connection. That results in loss for any packets that genuinely did want to use the exhausted keys, but that is probably less disruptive than having the connection drop. Of course, you can't always update, so closure is still likely necessary in some cases.\nFor what it's worth, Martin, Felix, and I put together a document which might help users do this themselves: URL\nNAME Could you use that to come up with 2^16 limits and put them in the doc? I\u2019m really wary of letting implementors roll their own\nSure! Can you please file an issue with that request? Noting anything else you'd like spelled out in detail would be helpful, too.\ntl;dr We need to recommend limits on the number of failed decryptions. At one of the side discussions concentrated on the quality of the analysis of the and the applicability of that analysis to QUIC. Felix G\u00fcnther, along with Marc Fischlin, Christian Janson, and Kenny Paterson have done a little analysis, based on and the we used to set limits in TLS. There are two relevant limits in this analysis: The confidentiality bound, which is usually expressed in terms of advantage an attacker gains in being able to break confidentiality of records/packets as they get more data from valid packets. 
For instance, in AES-GCM, the analysis shows that after seeing 2^24.5 packets (almost 24 million) the attacker has a 2^-60 better chance than pure chance of winning the IND-CPA game. The forgery bound, which is again expressed as an probably of an attacker successfully creating a forged record/packet. Thanks to the analysis done previously, we know that AES-GCM and ChaCha20+Poly1305 have a forgery chance of 2^-57 after 2^60 and 2^36 forgery attempts respectively. The current text just points to TLS 1.3, which recommends the use of key updates to protect confidentiality. But TLS has a forgery tolerance of 0: a single failed forgery attempt causes the connection to break. QUIC (and DTLS) allow for multiple forgeries to be ignored, so we need to factor that in. As discussed in if we want to maintain security, we need to consider both confidentiality and integrity. For that we need two independent limits: The number of packets that are successfully encrypted and decrypted. The number of packets that are unsuccessfully decrypted. We have the former already. The latter is what this issue tracks. Once the limit is at risk of being exceeded, updating keys will force the attacker to start over. Based on Felix's recommendation I am going to suggest that we limit forgery attempts to 2^36. I have confirmed this with a close reading of the paper and concur. There is a significant difference here between AES-GCM and ChaCha20+Poly1305 but we believe that setting a single limit is the best approach. I will create a pull request to document this approach shortly. We could split the recommendation and allow a larger limit for AES-GCM, but I don't believe that 2^36 (~68 billion) is a significant barrier. For reference, if you were able to saturate a 10Gbps link with 50 byte IP packets that contained plausible QUIC packets, and the recipient didn't treat this as DoS traffic, it would take 5 minutes. If we assume that are more likely representative of potential packet rates, then the limit goes up to 10s of hours. I would point out that that volume of unproductive traffic is probably a good reason to use something other than a key update to deal with the sender. A reminder that the underlying analysis assumes that packets are all 2^14 in size, which is the record size in TLS. This is ~10 times larger than typical MTU on the Internet and many QUIC packets might be smaller than that, so - at least for AES-GCM - recommending a 2^36 limit is more conservative than necessary. However, if we were to allow for different limits, we'd have to address the question of packet size, as a 2^16 byte packet will be used by some deployments. For ChaCha20+Poly1305, the only value that matters is the number of packets, so we avoid that question. Quoting from Felix's email for reference (with permission):\nNAME Can I request that you review this analysis for correctness? I know that you had reservations about the claims here.\nAnother (simplier?) option could be to just update keys much more often. I've done performance tests with msquic on the effect of updating the key ~once per round trip. There was no noticeable effect on performance. In fact, more often than not, the key update perf tests happen to perform slightly better (within noise tolerance) than without key update. I'm not suggesting we advocate once a round trip or anything near that; more like every 2^20 packets (or 2^30 bytes?) sent or received (including failed decryptions). 
As I understand it, this is an ultra conservative number as far as protecting from any kind of attacks, and should be free as far as any performance impact on the connection. This also has the added benefit of exercising the key update scenario more often, which should hopefully improve interoperability of the feature.\nI support a change like this, but I am not qualified to say what the thresholds should be. There certainly isn't a reason to push it to very high limits. This change would not break interoperability, so it's a good one. Banning CCM might break interoperability for someone, but I'm not too concerned, particularly if we can update all the browsers to not offer it. A somewhat bigger concern is the lack of key update support in a lot of implementations: I read 9 of 15 servers as supporting it.\n(Tentative) good news on CCM: Kenny identified a security analysis paper and it seems relatively simple to go from that to numbers. I am not personally concerned about support for key update. 9/15 isn't 15, but that doesn't bother me. It turns out to be a little fiddly to implement, so if people decide to drop connections that hit these limits instead, the effect isn't that bad. Of course, if you want to follow Nick's suggestion, that works too (I remember Dan Bernstein seriously suggesting a cipher that updated every record, back when I first proposed this scheme for TLS).\nIf we're looking for a new bound to initiate a key update, there's precedent in the SSH RFC to rekey after every 1 GB of data transferred. See\nI should point out that my \"9 of 15\" figure is based on the interop matrix, where the client initiates an early update and the server responds. That is a different thing than a server initiating an update when required by the spec, which fewer than 9 servers might do.\nBased on by Jonsson, I have calculated the bounds on the number of forgeries and encryptions for CCM. The numbers aren't great, but that probably doesn't matter much for the purposes to which CCM is generally put. (I'm using the notation from the here because that is easier to type than some of what is being used in other papers. For reference, q is the number of genuine encryptions, l is the length of each in 16 byte blocks, v is the number of forgeries, n is the key length in bits, t is the tag length in bits.) Jonsson puts the advantage an attacker has over a generic PRF at . If we assume the same numbers in the AEBounds paper for record/packet size, (this is for consistency, most QUIC packets will be much smaller, in the order of 2^7, but we might also go as high as 2^12; this being dictated by the MTU). For AES, . To keep the advantage an attacker has to 2^-60 (to match the analysis in TLS), we therefore need to keep q to below 2^24.5. Somewhat unsurprisingly, that matches the numbers we have for AES-GCM. Jonsson puts the advantage for an attacker over a generic PRF at . As the first term is negligible even for large v (up to 2^64), we consider the second term alone and aim for a bound on the advantage of 2^-57 to match the analysis for other ciphers. That leaves us with . As q is already established to be 2^24.5, we can say that v should be limited to 2^25 (or if you want to be precise). (As a side note, for t=64, as in AEADAES128CCM8, the first term becomes relevant and our security bound limits the number of forgeries to 2^7, which is probably a bit limiting in practice. 
That's good justification for not enabling AEADAES128CCM_8 by default; though different applications might use a different target bound than 2^-57.) I'm an not a cryptographer. These papers all read like chicken-chicken-chicken. My calculations are not infallible either.\nThe shape of the bound seems plausible but I'll start a thread with the cryptographers to check everything\nI reproduced Martin's analysis of CCM based on of the bounds by . (Myself not being an expert in producing such bounds, I found the second-level confirmation by Rogaway to be helpful.) I essentially agree with the numbers, except that --when following the same metric as the -- the advantage over an ideal PRP (instead of PRF) should be considered. This simply means losing a 1/2 factor in the bound: Confidentiality bound: yields an advantage of . I'd hence say it's fine to keep TLS 1.3's recommendation on sending max. full-sized records. Integrity bound: , hence yields an advantage of (matching the other analyses). Most importantly, I concur with Martin that AEADAES128CCM8 would need either much lower bounds or a different risk assessment: applying the same limit of would bring the advantage incurred through the bound up to . Would it still be the best approach to define the same anti-forgery limit across all cipher suites, even if this means going down to ? Of course, I'm happy to discuss more with NAME !\nThanks for checking this NAME I was a little concerned about the PRP/PRF split, so I'm glad to have you correct me. I ultimately went with per-AEAD recommendations, and a lower limit for CCM. That's mainly to establish some sort of uniformity around 2^-60 and 2^-57. Even if those are basically arbitrary choices, at least they are defensible (I think!) and using them uniformly establishes the right expectations about what the standard is. I'm comfortable with specifying different limits for each AEAD in specifications. In practice, however, I expect that a far lower tolerance for forgery attempts. Assuming that these numbers work out, a review of the pull request would be greatly appreciated. If there are other relevant papers, I'm always happy to pull those in as well.\nI realize that I didn't respond to NAME That advice applies to the limits that TLS specifies already; this issue is primarily about responding to forgeries. That said, 1G is pretty good advice. However what we've seen from this analysis that 1G results in a lower safety margin than TLS aims for. But it's close enough that if the AEAD is as good as the set we have here, you are probably OK. However, this does depend a lot on the specific AEAD. I'd be uncomfortable using so simple a guide in the general case.\nNAME points out an error in my calculation. I had based my calculations on direct mapping of in the CCM analysis to in our calculations. That is, was the number of blocks in the message. But the analysis by Jonsson defines this as: ! The definition of Beta is the CCM function, so this function really reduces to 2 times the length of the message (in blocks), plus 1 (to account for the additional encryption). We can ignore the extra 1 as that is absorbed by the tag length. I'm updating the numbers in the PR. The result is another halving of the number of packets. NAME NAME it's pretty clear that I'm out of my depth here, so your input would be highly valued.\nGood catch, yes, the double passing over the message requires a factor 2 here. 
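The "factor 2" is CCM making two passes over the payload with the block cipher — one for CBC-MAC and one for CTR encryption — plus one extra invocation; a tiny helper makes the counting explicit. How the associated data enters the count is my own approximation, and the next comment raises exactly that point.

```python
# The "factor 2": CCM runs the block cipher over the payload twice
# (once for CBC-MAC, once for CTR encryption) plus one extra call.
# How the associated data enters the count is my own approximation.
def ccm_block_cipher_calls(payload_len: int, ad_len: int = 0) -> int:
    payload_blocks = -(-payload_len // 16)   # ceil(payload_len / 16)
    ad_blocks = -(-ad_len // 16)
    return 2 * payload_blocks + ad_blocks + 1

# A full-sized 2^14-byte record costs about 2 * 2^10 + 1 calls:
print(ccm_block_cipher_calls(2 ** 14))       # 2049
```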
(The AD a may also be up to 3 blocks if I understand the header format correctly, but I guess that's negligible compared to the conservative 2^10 message size.)\nGood point about the AD. The AD - at least for QUIC - is always sent, so that fits in the same space. We don't necessarily need to double count the tiny number of blocks, but as you say, counting that only once yields a negligible difference. If we had a large AD that wasn't also transmitted, that would be different.\nAgreed. To be precise: if I'm not mistaken the AD isn't counted at all right now ( accounts for the max. payload blocks), but of course the difference is still negligible as long as the AD is small.\nThanks for the edits!", "new_text": "protocol negotiation. TLS uses Application Layer Protocol Negotiation (ALPN) ALPN to select an application protocol. Unless another mechanism is used for agreeing on an application protocol, endpoints MUST use ALPN for this purpose. When using ALPN, endpoints MUST immediately close a connection (see Section 10.3 of QUIC-TRANSPORT) with a no_application_protocol TLS alert (QUIC error code 0x178; see tls-errors) if an application protocol is not negotiated. While ALPN only specifies that servers use this alert, QUIC clients MUST use error 0x178 to terminate a connection when ALPN negotiation fails. An application protocol MAY restrict the QUIC versions that it can operate over. Servers MUST select an application protocol compatible"} {"id": "q-en-quicwg-base-drafts-893e9ddeda21a96f12134ef2bdbb80ec4b47c361f77bfc0dff538fff495846d0", "old_text": "Including transport parameters in the TLS handshake provides integrity protection for these values. The \"extension_data\" field of the quic_transport_parameters extension contains a value that is defined by the version of QUIC that is in use.", "comments": "This defines a limit on the number of packets that can fail authentication before you have to use new keys. There is a big hole here in that AES-CCM (that is, the AEAD based on CBC-MAC) is currently permitted, but we have no analysis to support either the confidentiality limits in TLS 1.3 or the integrity limits in this document. It is probably OK, but that is not the standard we apply here. So this might have to remain open until we get some sort of resolution on that issue. My initial opinion is to cut CCM from the draft until/unless an analysis is produced.\nIn the interest of transparency, I have added one more commit here. When I did the same for DTLS, ekr observed that if you have updated, you might safely stop trying to accept packets with the old keys rather than killing the connection. That results in loss for any packets that genuinely did want to use the exhausted keys, but that is probably less disruptive than having the connection drop. Of course, you can't always update, so closure is still likely necessary in some cases.\nFor what it's worth, Martin, Felix, and I put together a document which might help users do this themselves: URL\nNAME Could you use that to come up with 2^16 limits and put them in the doc? I\u2019m really wary of letting implementors roll their own\nSure! Can you please file an issue with that request? Noting anything else you'd like spelled out in detail would be helpful, too.\ntl;dr We need to recommend limits on the number of failed decryptions. At one of the side discussions concentrated on the quality of the analysis of the and the applicability of that analysis to QUIC. 
Felix G\u00fcnther, along with Marc Fischlin, Christian Janson, and Kenny Paterson have done a little analysis, based on and the we used to set limits in TLS. There are two relevant limits in this analysis: The confidentiality bound, which is usually expressed in terms of advantage an attacker gains in being able to break confidentiality of records/packets as they get more data from valid packets. For instance, in AES-GCM, the analysis shows that after seeing 2^24.5 packets (almost 24 million) the attacker has a 2^-60 better chance than pure chance of winning the IND-CPA game. The forgery bound, which is again expressed as an probably of an attacker successfully creating a forged record/packet. Thanks to the analysis done previously, we know that AES-GCM and ChaCha20+Poly1305 have a forgery chance of 2^-57 after 2^60 and 2^36 forgery attempts respectively. The current text just points to TLS 1.3, which recommends the use of key updates to protect confidentiality. But TLS has a forgery tolerance of 0: a single failed forgery attempt causes the connection to break. QUIC (and DTLS) allow for multiple forgeries to be ignored, so we need to factor that in. As discussed in if we want to maintain security, we need to consider both confidentiality and integrity. For that we need two independent limits: The number of packets that are successfully encrypted and decrypted. The number of packets that are unsuccessfully decrypted. We have the former already. The latter is what this issue tracks. Once the limit is at risk of being exceeded, updating keys will force the attacker to start over. Based on Felix's recommendation I am going to suggest that we limit forgery attempts to 2^36. I have confirmed this with a close reading of the paper and concur. There is a significant difference here between AES-GCM and ChaCha20+Poly1305 but we believe that setting a single limit is the best approach. I will create a pull request to document this approach shortly. We could split the recommendation and allow a larger limit for AES-GCM, but I don't believe that 2^36 (~68 billion) is a significant barrier. For reference, if you were able to saturate a 10Gbps link with 50 byte IP packets that contained plausible QUIC packets, and the recipient didn't treat this as DoS traffic, it would take 5 minutes. If we assume that are more likely representative of potential packet rates, then the limit goes up to 10s of hours. I would point out that that volume of unproductive traffic is probably a good reason to use something other than a key update to deal with the sender. A reminder that the underlying analysis assumes that packets are all 2^14 in size, which is the record size in TLS. This is ~10 times larger than typical MTU on the Internet and many QUIC packets might be smaller than that, so - at least for AES-GCM - recommending a 2^36 limit is more conservative than necessary. However, if we were to allow for different limits, we'd have to address the question of packet size, as a 2^16 byte packet will be used by some deployments. For ChaCha20+Poly1305, the only value that matters is the number of packets, so we avoid that question. Quoting from Felix's email for reference (with permission):\nNAME Can I request that you review this analysis for correctness? I know that you had reservations about the claims here.\nAnother (simplier?) option could be to just update keys much more often. I've done performance tests with msquic on the effect of updating the key ~once per round trip. 
There was no noticeable effect on performance. In fact, more often than not, the key update perf tests happen to perform slightly better (within noise tolerance) than without key update. I'm not suggesting we advocate once a round trip or anything near that; more like every 2^20 packets (or 2^30 bytes?) sent or received (including failed decryptions). As I understand it, this is an ultra conservative number as far as protecting from any kind of attacks, and should be free as far as any performance impact on the connection. This also has the added benefit of exercising the key update scenario more often, which should hopefully improve interoperability of the feature.\nI support a change like this, but I am not qualified to say what the thresholds should be. There certainly isn't a reason to push it to very high limits. This change would not break interoperability, so it's a good one. Banning CCM might break interoperability for someone, but I'm not too concerned, particularly if we can update all the browsers to not offer it. A somewhat bigger concern is the lack of key update support in a lot of implementations: I read 9 of 15 servers as supporting it.\n(Tentative) good news on CCM: Kenny identified a security analysis paper and it seems relatively simple to go from that to numbers. I am not personally concerned about support for key update. 9/15 isn't 15, but that doesn't bother me. It turns out to be a little fiddly to implement, so if people decide to drop connections that hit these limits instead, the effect isn't that bad. Of course, if you want to follow Nick's suggestion, that works too (I remember Dan Bernstein seriously suggesting a cipher that updated every record, back when I first proposed this scheme for TLS).\nIf we're looking for a new bound to initiate a key update, there's precedent in the SSH RFC to rekey after every 1 GB of data transferred. See\nI should point out that my \"9 of 15\" figure is based on the interop matrix, where the client initiates an early update and the server responds. That is a different thing than a server initiating an update when required by the spec, which fewer than 9 servers might do.\nBased on by Jonsson, I have calculated the bounds on the number of forgeries and encryptions for CCM. The numbers aren't great, but that probably doesn't matter much for the purposes to which CCM is generally put. (I'm using the notation from the here because that is easier to type than some of what is being used in other papers. For reference, q is the number of genuine encryptions, l is the length of each in 16 byte blocks, v is the number of forgeries, n is the key length in bits, t is the tag length in bits.) Jonsson puts the advantage an attacker has over a generic PRF at . If we assume the same numbers in the AEBounds paper for record/packet size, (this is for consistency, most QUIC packets will be much smaller, in the order of 2^7, but we might also go as high as 2^12; this being dictated by the MTU). For AES, . To keep the advantage an attacker has to 2^-60 (to match the analysis in TLS), we therefore need to keep q to below 2^24.5. Somewhat unsurprisingly, that matches the numbers we have for AES-GCM. Jonsson puts the advantage for an attacker over a generic PRF at . As the first term is negligible even for large v (up to 2^64), we consider the second term alone and aim for a bound on the advantage of 2^-57 to match the analysis for other ciphers. That leaves us with . 
As q is already established to be 2^24.5, we can say that v should be limited to 2^25 (or if you want to be precise). (As a side note, for t=64, as in AEADAES128CCM8, the first term becomes relevant and our security bound limits the number of forgeries to 2^7, which is probably a bit limiting in practice. That's good justification for not enabling AEADAES128CCM_8 by default; though different applications might use a different target bound than 2^-57.) I'm an not a cryptographer. These papers all read like chicken-chicken-chicken. My calculations are not infallible either.\nThe shape of the bound seems plausible but I'll start a thread with the cryptographers to check everything\nI reproduced Martin's analysis of CCM based on of the bounds by . (Myself not being an expert in producing such bounds, I found the second-level confirmation by Rogaway to be helpful.) I essentially agree with the numbers, except that --when following the same metric as the -- the advantage over an ideal PRP (instead of PRF) should be considered. This simply means losing a 1/2 factor in the bound: Confidentiality bound: yields an advantage of . I'd hence say it's fine to keep TLS 1.3's recommendation on sending max. full-sized records. Integrity bound: , hence yields an advantage of (matching the other analyses). Most importantly, I concur with Martin that AEADAES128CCM8 would need either much lower bounds or a different risk assessment: applying the same limit of would bring the advantage incurred through the bound up to . Would it still be the best approach to define the same anti-forgery limit across all cipher suites, even if this means going down to ? Of course, I'm happy to discuss more with NAME !\nThanks for checking this NAME I was a little concerned about the PRP/PRF split, so I'm glad to have you correct me. I ultimately went with per-AEAD recommendations, and a lower limit for CCM. That's mainly to establish some sort of uniformity around 2^-60 and 2^-57. Even if those are basically arbitrary choices, at least they are defensible (I think!) and using them uniformly establishes the right expectations about what the standard is. I'm comfortable with specifying different limits for each AEAD in specifications. In practice, however, I expect that a far lower tolerance for forgery attempts. Assuming that these numbers work out, a review of the pull request would be greatly appreciated. If there are other relevant papers, I'm always happy to pull those in as well.\nI realize that I didn't respond to NAME That advice applies to the limits that TLS specifies already; this issue is primarily about responding to forgeries. That said, 1G is pretty good advice. However what we've seen from this analysis that 1G results in a lower safety margin than TLS aims for. But it's close enough that if the AEAD is as good as the set we have here, you are probably OK. However, this does depend a lot on the specific AEAD. I'd be uncomfortable using so simple a guide in the general case.\nNAME points out an error in my calculation. I had based my calculations on direct mapping of in the CCM analysis to in our calculations. That is, was the number of blocks in the message. But the analysis by Jonsson defines this as: ! The definition of Beta is the CCM function, so this function really reduces to 2 times the length of the message (in blocks), plus 1 (to account for the additional encryption). We can ignore the extra 1 as that is absorbed by the tag length. I'm updating the numbers in the PR. 
The result is another halving of the number of packets. NAME NAME it's pretty clear that I'm out of my depth here, so your input would be highly valued.\nGood catch, yes, the double passing over the message requires a factor 2 here. (The AD a may also be up to 3 blocks if I understand the header format correctly, but I guess that's negligible compared to the conservative 2^10 message size.)\nGood point about the AD. The AD - at least for QUIC - is always sent, so that fits in the same space. We don't necessarily need to double count the tiny number of blocks, but as you say, counting that only once yields a negligible difference. If we had a large AD that wasn't also transmitted, that would be different.\nAgreed. To be precise: if I'm not mistaken the AD isn't counted at all right now ( accounts for the max. payload blocks), but of course the difference is still negligible as long as the AD is small.\nThanks for the edits!", "new_text": "Including transport parameters in the TLS handshake provides integrity protection for these values. The extension_data field of the quic_transport_parameters extension contains a value that is defined by the version of QUIC that is in use."} {"id": "q-en-quicwg-base-drafts-c5ce1c742b7429aa9c38e9b9478ae1f377cc346cc12805053a445e6d57124e42", "old_text": "output of AEAD_AES_128_GCM AEAD used with the following inputs: The secret key, K, is 128 bits equal to 0x4d32ecdb2a2133c841e4043df27d4430. The nonce, N, is 96 bits equal to 0x4d1611d05513a552c587d575. The plaintext, P, is empty.", "comments": "Any chance we could also update the initial salt?\nI was saving that for the final version, by popular demand. If you have a reason to change, then I can do that (I'll also want to update the keys for Retry in that case).\nI agree with NAME Let's leave the salt update to the final version.\nWith Google's QUIC deployment, we're already experiencing version ossification due to middleboxes inspecting initial packets - some of our traffic is stuck on older versions of GoogleQUIC because of this. Changing the salts every time we change the version number will help remind everyone that this is not an ossified aspect of the protocol. The cost to implementors is pretty much zero. NAME out of curiosity, why the pushback?\nWhat NAME says makes sense to me, so I would be in favor of shipping draft-29 with new salts.\nI updated the salts and the test vectors in quic-go, and I can confirm that the values match.\nThe draft describes that a client SHOULD NOT reuse a token in different connections (URL). However, I miss the argument that the server SHOULD NOT construct the same token multiple times. This argument addresses the risks of client tracking via tokens if a deterministic approach is used to construct them.\nMaybe ?\nNAME do you mean already covers this?\nYes, my view is that and the corresponding mailing list thread (URL) covers this issue. They discuss about unlinkability between the tokens issued by a server.\nNAME do you agree that this is (mostly) covered by ? If yes, can we close this issue and resolve it as part of ?\nNo, does not cover this particular point. makes the point that a network observer should not be able to derive information such as a timestamp from observed tokens. However, this issue says that the construction of the token should not be deterministic. To illustrate this problem, assume that the token is more or less the hash of the client's source address. 
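As a concrete contrast (purely illustrative; none of these names or layouts come from the draft), a deterministic construction versus one sealed under a fresh random nonce might look like this, using Go's standard AEAD support:

```go
package quicsketch

import (
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
)

// Deterministic (the construction being criticized): the same client
// address always yields the same token, so repeat connections from that
// address are trivially linkable by anyone who sees the tokens.
func deterministicToken(clientAddr []byte) []byte {
	sum := sha256.Sum256(clientAddr)
	return sum[:]
}

// Randomized: seal the same information under a server-held AEAD key with
// a fresh random nonce, so every issuance produces a different token and
// an observer cannot match tokens across connections.
func sealedToken(aead cipher.AEAD, clientAddr []byte) ([]byte, error) {
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return append(nonce, aead.Seal(nil, nonce, clientAddr, nil)...), nil
}
```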
Thus, the client receives upon repeat connections each time the same token, if its source address is identical. As a result, even if the client uses each token only a single time the corresponding connections established with these tokens can be correlated to each other by a network observer.\nNAME could be modified to cover your issue? could you comment there?\nNAME Ok, I will make proposals within to cover this issue.\nNAME the automation failed here - this should not be moved to \"Text Incorporated\", it should just be closed As stated in URL, my understanding is that a token sent in a NEW_TOKEN frame SHOULD be encrypted. The only exception would be the case of the token carrying only an opaque and randomly-generated key identifier in a stateful design (i.e. a server using a key-value store to retain the information of previous connections). Otherwise, an observer can use the information contained in the token sent in an resuming connection, to correlate that connection to the previous connection that issued the token. For example, if a server includes the generation time of the token in plaintext, an observer can extract that information found in an Initial packet to nail down the connection that issued the token. While I think we might be call this an editorial issue, I think we should clarify this as a normative requirement (e.g., SHOULD). Hence bringing the discussion to the mailing list as well. Kazuho Oku", "new_text": "output of AEAD_AES_128_GCM AEAD used with the following inputs: The secret key, K, is 128 bits equal to 0xccce187ed09a09d05728155a6cb96be1. The nonce, N, is 96 bits equal to 0xe54930f97f2136f0530a8c1c. The plaintext, P, is empty."} {"id": "q-en-quicwg-base-drafts-c5ce1c742b7429aa9c38e9b9478ae1f377cc346cc12805053a445e6d57124e42", "old_text": "The secret key and the nonce are values derived by calling HKDF- Expand-Label using 0x656e61e336ae9417f7f0edd8d78d461e2aa7084aba7a14c1e9f726d55709169a as the secret, with labels being \"quic key\" and \"quic iv\" (protection- keys).", "comments": "Any chance we could also update the initial salt?\nI was saving that for the final version, by popular demand. If you have a reason to change, then I can do that (I'll also want to update the keys for Retry in that case).\nI agree with NAME Let's leave the salt update to the final version.\nWith Google's QUIC deployment, we're already experiencing version ossification due to middleboxes inspecting initial packets - some of our traffic is stuck on older versions of GoogleQUIC because of this. Changing the salts every time we change the version number will help remind everyone that this is not an ossified aspect of the protocol. The cost to implementors is pretty much zero. NAME out of curiosity, why the pushback?\nWhat NAME says makes sense to me, so I would be in favor of shipping draft-29 with new salts.\nI updated the salts and the test vectors in quic-go, and I can confirm that the values match.\nThe draft describes that a client SHOULD NOT reuse a token in different connections (URL). However, I miss the argument that the server SHOULD NOT construct the same token multiple times. This argument addresses the risks of client tracking via tokens if a deterministic approach is used to construct them.\nMaybe ?\nNAME do you mean already covers this?\nYes, my view is that and the corresponding mailing list thread (URL) covers this issue. They discuss about unlinkability between the tokens issued by a server.\nNAME do you agree that this is (mostly) covered by ? 
If yes, can we close this issue and resolve it as part of ?\nNo, does not cover this particular point. makes the point that a network observer should not be able to derive information such as a timestamp from observed tokens. However, this issue says that the construction of the token should not be deterministic. To illustrate this problem, assume that the token is more or less the hash of the client's source address. Thus, the client receives upon repeat connections each time the same token, if its source address is identical. As a result, even if the client uses each token only a single time the corresponding connections established with these tokens can be correlated to each other by a network observer.\nNAME could be modified to cover your issue? could you comment there?\nNAME Ok, I will make proposals within to cover this issue.\nNAME the automation failed here - this should not be moved to \"Text Incorporated\", it should just be closed As stated in URL, my understanding is that a token sent in a NEW_TOKEN frame SHOULD be encrypted. The only exception would be the case of the token carrying only an opaque and randomly-generated key identifier in a stateful design (i.e. a server using a key-value store to retain the information of previous connections). Otherwise, an observer can use the information contained in the token sent in an resuming connection, to correlate that connection to the previous connection that issued the token. For example, if a server includes the generation time of the token in plaintext, an observer can extract that information found in an Initial packet to nail down the connection that issued the token. While I think we might be call this an editorial issue, I think we should clarify this as a normative requirement (e.g., SHOULD). Hence bringing the discussion to the mailing list as well. Kazuho Oku", "new_text": "The secret key and the nonce are values derived by calling HKDF- Expand-Label using 0x8b0d37eb8535022ebc8d76a207d80df22646ec06dc809642c30a8baa2baaff4c as the secret, with labels being \"quic key\" and \"quic iv\" (protection- keys)."} {"id": "q-en-quicwg-base-drafts-5dbed44c140a9a58effd88fb623334977ba5a4969a9e86e08c2ebce789192771", "old_text": "The payload of an Initial packet includes a CRYPTO frame (or frames) containing a cryptographic handshake message, ACK frames, or both. PING, PADDING, and CONNECTION_CLOSE frames are also permitted. An endpoint that receives an Initial packet containing other frames can either discard the packet as spurious or treat it as a connection error. The first packet sent by a client always includes a CRYPTO frame that contains the start or all of the first cryptographic handshake", "comments": "When we prohibited the use of application-layer CONNECTION_CLOSE in these packets, we should have updated the text in the description of the packets also.\nI think the spec is ambiguous when it says in section 17.2: and: >Handshake packets MAY contain CONNECTIONCLOSE frames. But CONNECTIONCLOSE can mean 0x1c or 0x1d. Are both permitted during the handshake?\nNAME In this case, the intent is to allow both. Unless explicitly specified, I would read the text to mean both types of CONNECTION_CLOSE frames.\nOkay, if more people think that way, we can close with no action.\nSo on the surface, there is no inconsistency: the two sections permit CONNECTIONCLOSE. But the question about the different types of CONNECTIONCLOSE is more relevant than NAME response might imply. 
We currently require that CONNECTIONCLOSE of type 0x1d (application close) is not sent in Initial or Handshake packets. So that's a drafting error, and we might reasonably take this as an editorial issue. However, if the question is about what a reaction to receiving an application CONNECTIONCLOSE in one of those packets, we might consider a different posture. If you get an application CONNECTIONCLOSE in an Initial packet that is otherwise valid, you probably want to drop the connection. And you probably don't want to start sending CONNECTIONCLOSE in response every time, because that leads to madness. But this is a general problem: if you detect and error and that error keeps being repeated, it probably isn't a good idea to generate a CONNECTION_CLOSE in response to every infraction.\n(removing my comment as NAME posted what needs to be said couple of seconds earlier).\nI'm going to re-close the can of worms I half-opened in my earlier comment and take this as editorial. From memory, the larger question of how to deal with junk is something we've discussed several times. If anyone wants to take that up, I'm happy to do that, but let's at least correct the obvious error.\nIt does make sense that we only allow 0x1c during the handshake.\nNAME : Apologies, I misspoke -- I was speaking to interpretation of text, rather than the actual question itself.", "new_text": "The payload of an Initial packet includes a CRYPTO frame (or frames) containing a cryptographic handshake message, ACK frames, or both. PING, PADDING, and CONNECTION_CLOSE frames of type 0x1c are also permitted. An endpoint that receives an Initial packet containing other frames can either discard the packet as spurious or treat it as a connection error. The first packet sent by a client always includes a CRYPTO frame that contains the start or all of the first cryptographic handshake"} {"id": "q-en-quicwg-base-drafts-5dbed44c140a9a58effd88fb623334977ba5a4969a9e86e08c2ebce789192771", "old_text": "The payload of this packet contains CRYPTO frames and could contain PING, PADDING, or ACK frames. Handshake packets MAY contain CONNECTION_CLOSE frames. Endpoints MUST treat receipt of Handshake packets with other frames as a connection error. Like Initial packets (see discard-initial), data in CRYPTO frames for Handshake packets is discarded - and no longer retransmitted - when", "comments": "When we prohibited the use of application-layer CONNECTION_CLOSE in these packets, we should have updated the text in the description of the packets also.\nI think the spec is ambiguous when it says in section 17.2: and: >Handshake packets MAY contain CONNECTIONCLOSE frames. But CONNECTIONCLOSE can mean 0x1c or 0x1d. Are both permitted during the handshake?\nNAME In this case, the intent is to allow both. Unless explicitly specified, I would read the text to mean both types of CONNECTION_CLOSE frames.\nOkay, if more people think that way, we can close with no action.\nSo on the surface, there is no inconsistency: the two sections permit CONNECTIONCLOSE. But the question about the different types of CONNECTIONCLOSE is more relevant than NAME response might imply. We currently require that CONNECTIONCLOSE of type 0x1d (application close) is not sent in Initial or Handshake packets. So that's a drafting error, and we might reasonably take this as an editorial issue. However, if the question is about what a reaction to receiving an application CONNECTIONCLOSE in one of those packets, we might consider a different posture. 
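Under the resolution captured in the new text of these records, only the transport-level close (type 0x1c) remains permitted in Initial and Handshake packets alongside PADDING, PING, ACK and CRYPTO; a rough receiver-side check, whose shape is an assumption rather than anything the draft mandates, could look like:

```go
package quicsketch

// frameAllowedDuringHandshake reports whether a frame type may appear in an
// Initial or Handshake packet under the rule discussed here: PADDING, PING,
// ACK, CRYPTO and CONNECTION_CLOSE of type 0x1c are permitted; the
// application-level CONNECTION_CLOSE (0x1d) and everything else are not.
func frameAllowedDuringHandshake(frameType uint64) bool {
	switch frameType {
	case 0x00, 0x01: // PADDING, PING
		return true
	case 0x02, 0x03: // ACK, ACK with ECN counts
		return true
	case 0x06: // CRYPTO
		return true
	case 0x1c: // CONNECTION_CLOSE signalling a transport error
		return true
	default: // includes 0x1d, CONNECTION_CLOSE with an application error
		return false
	}
}
```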
If you get an application CONNECTIONCLOSE in an Initial packet that is otherwise valid, you probably want to drop the connection. And you probably don't want to start sending CONNECTIONCLOSE in response every time, because that leads to madness. But this is a general problem: if you detect and error and that error keeps being repeated, it probably isn't a good idea to generate a CONNECTION_CLOSE in response to every infraction.\n(removing my comment as NAME posted what needs to be said couple of seconds earlier).\nI'm going to re-close the can of worms I half-opened in my earlier comment and take this as editorial. From memory, the larger question of how to deal with junk is something we've discussed several times. If anyone wants to take that up, I'm happy to do that, but let's at least correct the obvious error.\nIt does make sense that we only allow 0x1c during the handshake.\nNAME : Apologies, I misspoke -- I was speaking to interpretation of text, rather than the actual question itself.", "new_text": "The payload of this packet contains CRYPTO frames and could contain PING, PADDING, or ACK frames. Handshake packets MAY contain CONNECTION_CLOSE frames of type 0x1c. Endpoints MUST treat receipt of Handshake packets with other frames as a connection error. Like Initial packets (see discard-initial), data in CRYPTO frames for Handshake packets is discarded - and no longer retransmitted - when"} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "Indicates that x is repeated zero or more times (and that each instance is length E) fig-ex-format shows an example structure: 5.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... 
: /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "Indicates that x is repeated zero or more times (and that each instance is length E) This document uses network byte order (that is, big endian) values. Fields are placed starting from the high-order bits of each byte. fig-ex-format shows an example structure: 5."} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "A QUIC packet with a long header has the high bit of the first byte set to 1. All other bits in that byte are version specific. The next four bytes include a 32-bit Version field (see version). The next byte contains the length in bytes of the Destination Connection ID (see connection-id) field that follows it. This length is encoded as an 8-bit unsigned integer. The Destination Connection ID field follows the DCID Length field and is between 0 and 255 bytes in length. The next byte contains the length in bytes of the Source Connection ID field that follows it. This length is encoded as a 8-bit unsigned integer. The Source Connection ID field follows the SCID Length field and is between 0 and 255 bytes in length. The remainder of the packet contains version-specific content.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... 
forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "A QUIC packet with a long header has the high bit of the first byte set to 1. All other bits in that byte are version specific. The next four bytes include a 32-bit Version field. Versions are described in version. The next byte contains the length in bytes of the Destination Connection ID field that follows it. This length is encoded as an 8-bit unsigned integer. The Destination Connection ID field follows the Destination Connection ID Length field and is between 0 and 255 bytes in length. Connection IDs are described in connection-id. The next byte contains the length in bytes of the Source Connection ID field that follows it. This length is encoded as a 8-bit unsigned integer. The Source Connection ID field follows the Source Connection ID Length field and is between 0 and 255 bytes in length. The remainder of the packet contains version-specific content."} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "A QUIC packet with a short header includes a Destination Connection ID immediately following the first byte. The short header does not include the Connection ID Lengths, Source Connection ID, or Version fields. The length of the Destination Connection ID is not specified in packets with a short header and is not constrained by this specification.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. 
Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "A QUIC packet with a short header includes a Destination Connection ID immediately following the first byte. The short header does not include the Connection ID Lengths, Source Connection ID, or Version fields. The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification."} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "The primary function of a connection ID is to ensure that changes in addressing at lower protocol layers (UDP, IP, and below) don't cause packets for a QUIC connection to be delivered to the wrong endpoint. The connection ID is used by endpoints and the intermediaries that support them to ensure that each QUIC packet can be delivered to the correct instance of an endpoint. At the endpoint, the connection ID is used to identify which QUIC connection the packet is intended for. The connection ID is chosen by each endpoint using version-specific methods. Packets for the same QUIC connection might use different", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "The primary function of a connection ID is to ensure that changes in addressing at lower protocol layers (UDP, IP, and below) don't cause packets for a QUIC connection to be delivered to the wrong QUIC endpoint. 
The connection ID is used by endpoints and the intermediaries that support them to ensure that each QUIC packet can be delivered to the correct instance of an endpoint. At the endpoint, the connection ID is used to identify which QUIC connection the packet is intended for. The connection ID is chosen by each endpoint using version-specific methods. Packets for the same QUIC connection might use different"} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "Indicates that x is repeated zero or more times (and that each instance is length E) By convention, individual fields reference a complex field by using the name of the complex field.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "Indicates that x is repeated zero or more times (and that each instance is length E) This document uses network byte order (that is, big endian) values. Fields are placed starting from the high-order bits of each byte. By convention, individual fields reference a complex field by using the name of the complex field."} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "packet protection keys for Initial packets. The client populates the Source Connection ID field with a value of its choosing and sets the SCID Length field to indicate the length. 
The first flight of 0-RTT packets use the same Destination Connection ID and Source Connection ID values as the client's first Initial", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "packet protection keys for Initial packets. The client populates the Source Connection ID field with a value of its choosing and sets the Source Connection ID Length field to indicate the length. The first flight of 0-RTT packets use the same Destination Connection ID and Source Connection ID values as the client's first Initial"} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "If a zero-length connection ID is selected, the corresponding transport parameter is included with a zero-length value. fig-auth-cid shows the connection IDs that are used in a complete handshake. The exchange of Initial packets is shown, plus the later exchange of 1-RTT packets that includes the connection ID established during the handshake.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? 
This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "If a zero-length connection ID is selected, the corresponding transport parameter is included with a zero-length value. fig-auth-cid shows the connection IDs (with DCID=Destination Connection ID, SCID=Source Connection ID) that are used in a complete handshake. The exchange of Initial packets is shown, plus the later exchange of 1-RTT packets that includes the connection ID established during the handshake."} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "SHOULD be able to read longer connection IDs from other QUIC versions in order to properly form a version negotiation packet. The Destination Connection ID field follows the DCID Length field and is between 0 and 20 bytes in length. negotiating-connection- ids describes the use of this field in more detail. The byte following the Destination Connection ID contains the length in bytes of the Source Connection ID field that follows it.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 
4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "SHOULD be able to read longer connection IDs from other QUIC versions in order to properly form a version negotiation packet. The Destination Connection ID field follows the Destination Connection ID Length field and is between 0 and 20 bytes in length. negotiating-connection-ids describes the use of this field in more detail. The byte following the Destination Connection ID contains the length in bytes of the Source Connection ID field that follows it."} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "IDs from other QUIC versions in order to properly form a version negotiation packet. The Source Connection ID field follows the SCID Length field and is between 0 and 20 bytes in length. negotiating-connection-ids describes the use of this field in more detail. In this version of QUIC, the following packet types with the long header are defined:", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 
6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "IDs from other QUIC versions in order to properly form a version negotiation packet. The Source Connection ID field follows the Source Connection ID Length field and is between 0 and 20 bytes in length. negotiating- connection-ids describes the use of this field in more detail. In this version of QUIC, the following packet types with the long header are defined:"} {"id": "q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b", "old_text": "The Initial packet contains a long header as well as the Length and Packet Number fields. The first byte contains the Reserved and Packet Number Length bits. Between the SCID and Length fields, there are two additional fields specific to the Initial packet. A variable-length integer specifying the length of the Token field, in bytes. This value is zero if no token is present.", "comments": "Editorial NiTs when reviewing the invariant text. These NiTs address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful. 1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1. 2) There's mention of \"the high bit\", but there doesn't seem to be any statement of bit order saying. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise. 3) Personally, I don't think: \"The length of the Destination Connection ID is not specified\" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification. 4) The brackets here seem to point to connection IDs in general, not the DCID \"(see Section 5.3)\". Is it worth writing that as \"Connection IDs are described in Section 5\", rather than use brackets? The same is true for versions. 5) In section 5.3: \"the wrong endpoint\" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of \"transport\" would perhaps help? 6) In the appendix A, this: \"QUIC forbids acknowledgments of packets\" - still has me wondering what is actually intended to be understood, because the list of bullets is not guarenteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here I think the combination of not... forbids threw me off the meaning. 
Changes proposed in\nThanks for changing SCID/DCID to \"Source Connection ID\" and \"Destination Connection ID\", that seems like a good change.", "new_text": "The Initial packet contains a long header as well as the Length and Packet Number fields. The first byte contains the Reserved and Packet Number Length bits. Between the Source Connection ID and Length fields, there are two additional fields specific to the Initial packet. A variable-length integer specifying the length of the Token field, in bytes. This value is zero if no token is present."} {"id": "q-en-quicwg-base-drafts-d3abad4ea4d4e6a17a4e3521b523b7b281b524f231c3ae9c8096bb3d8523639f", "old_text": "transport parameter; see termination. However, state in middleboxes might time out earlier than that. Though REQ-5 in RFC4787 recommends a 2 minute timeout interval, experience shows that sending packets every 15 to 30 seconds is necessary to prevent the majority of middleboxes from losing state for UDP flows. 10.3.", "comments": "Thanks to Lucas for finding the study. Note that I found at least one middlebox with a default timeout of 30s; that was not a home gateway included in this study.\nsays that \"experience shows\" you should send packets every 15-30 seconds, RFC4787 notwithstanding. Ideally, we would have a source that we could reference for this \"experience.\"\nMaybe see URL\nI'm going to say that while the 15s floor is a good requirement, I don't want to cite RFC 8085 for this gem, which I think is straight-up misinformation: \"NATs require a state timeout of 2 minutes or longer [RFC4787].\" We do cite RFC 8085 elsewhere and that's fine, but the fact here is that the IETF has requested that NATs provide a state timeout of 2 minutes. That does not mean that they do.\nI have seen anecdata as well that showing 30 seconds is an important threshold. I think the ref we now have is good.", "new_text": "transport parameter; see termination. However, state in middleboxes might time out earlier than that. Though REQ-5 in RFC4787 recommends a 2 minute timeout interval, experience shows that sending packets every 30 seconds is necessary to prevent the majority of middleboxes from losing state for UDP flows GATEWAY. 10.3."} {"id": "q-en-quicwg-base-drafts-0f8ff62f22cbf1e4272aa9378a65bcbf3b7e914c439237b08dee8014d7fb0001", "old_text": "After a client receives a Retry packet, 0-RTT packets are likely to have been lost or discarded by the server. A client SHOULD attempt to resend data in 0-RTT packets after it sends a new Initial packet. A client MUST NOT reset the packet number it uses for 0-RTT packets, since the keys used to protect 0-RTT packets will not change as a result of responding to a Retry packet. Sending packets with the same packet number in that case is likely to compromise the packet protection for all 0-RTT packets because the same key and nonce could be used to protect different content. A client only receives acknowledgments for its 0-RTT packets once the handshake is complete. Consequently, a server might expect 0-RTT", "comments": "This moves text from the section on 0-RTT to the section on continuing after Retry. Though 0-RTT is the only thing that is seriously affected in the current design, Initial packets might also be affected if we ever wanted to protect those with anything other than a static key.\nsays that after receiving a Retry, clients MUST NOT reset the packet number for any packet number space, and references 17.2.3 for more info. However, 17.2.3 only explains why this is the case for 0-RTT packets. 
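For the 0-RTT case, the hazard is easy to see from the nonce construction: the per-packet nonce is the static IV XORed with the packet number, so reusing a packet number under the unchanged 0-RTT key reuses a nonce for different content. A small illustration (the helper is hypothetical; only the XOR construction is taken from the TLS mapping):

```go
package quicsketch

// packetNonce derives the AEAD nonce for a packet by XORing the packet
// number into the low-order bytes of the static IV, as the TLS mapping
// describes. With an unchanged 0-RTT key and IV, sending packet number 0
// again after a Retry produces exactly the same nonce for new plaintext,
// which is a nonce reuse under that key.
func packetNonce(iv []byte, packetNumber uint64) []byte {
	nonce := make([]byte, len(iv))
	copy(nonce, iv)
	for i := 0; i < 8 && i < len(nonce); i++ {
		nonce[len(nonce)-1-i] ^= byte(packetNumber >> (8 * i))
	}
	return nonce
}
```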
We should either narrow the prohibition or expand the explanation.\nOr maybe move the explanation that currently exists 17.2.3 to 17.2.5.3 and use it as an example (to avoid it being read as the only reason).\nNAME does NAME editorial change address your issue?\nYes, this is fine.", "new_text": "After a client receives a Retry packet, 0-RTT packets are likely to have been lost or discarded by the server. A client SHOULD attempt to resend data in 0-RTT packets after it sends a new Initial packet. New packet numbers MUST be used for any new packets that are sent; as described in retry-continue, reusing packet numbers could compromise packet protection. A client only receives acknowledgments for its 0-RTT packets once the handshake is complete. Consequently, a server might expect 0-RTT"} {"id": "q-en-quicwg-base-drafts-0f8ff62f22cbf1e4272aa9378a65bcbf3b7e914c439237b08dee8014d7fb0001", "old_text": "response to receiving a Retry. A client MUST NOT reset the packet number for any packet number space after processing a Retry packet; packet-0rtt contains more information on this. A server acknowledges the use of a Retry packet for a connection using the retry_source_connection_id transport parameter; see", "comments": "This moves text from the section on 0-RTT to the section on continuing after Retry. Though 0-RTT is the only thing that is seriously affected in the current design, Initial packets might also be affected if we ever wanted to protect those with anything other than a static key.\nsays that after receiving a Retry, clients MUST NOT reset the packet number for any packet number space, and references 17.2.3 for more info. However, 17.2.3 only explains why this is the case for 0-RTT packets. We should either narrow the prohibition or expand the explanation.\nOr maybe move the explanation that currently exists 17.2.3 to 17.2.5.3 and use it as an example (to avoid it being read as the only reason).\nNAME does NAME editorial change address your issue?\nYes, this is fine.", "new_text": "response to receiving a Retry. A client MUST NOT reset the packet number for any packet number space after processing a Retry packet. In particular, 0-RTT packets contain confidential information that will most likely be retransmitted on receiving a Retry packet. The keys used to protect these new 0-RTT packets will not change as a result of responding to a Retry packet. However, the data sent in these packets could be different than what was sent earlier. Sending these new packets with the same packet number is likely to compromise the packet protection for those packets because the same key and nonce could be used to protect different content. A server acknowledges the use of a Retry packet for a connection using the retry_source_connection_id transport parameter; see"} {"id": "q-en-quicwg-base-drafts-bfd3df17e81a128d19612a3151c67988747198f2b9f2116a2b8ceb90b5ff4d95", "old_text": "12.1. All QUIC packets except Version Negotiation packets use authenticated encryption with additional data (AEAD) RFC5116 to provide confidentiality and integrity protection. Retry packets use AEAD to provide integrity protection. Details of packet protection are found in QUIC-TLS; this section includes an overview of the process.", "comments": "I'd always heard this as \"Authenticated Encryption with Additional Data,\" so it was -tls that sounded funny. But looking at 5116, it's actually \"Associated\" data and -tls is correct. Fixing in -transport.\nI read this as referring to Authenticated at first.", "new_text": "12.1. 
All QUIC packets except Version Negotiation packets use authenticated encryption with associated data (AEAD) RFC5116 to provide confidentiality and integrity protection. Retry packets use AEAD to provide integrity protection. Details of packet protection are found in QUIC-TLS; this section includes an overview of the process."} {"id": "q-en-quicwg-base-drafts-e9fe73e034bfb871c42b9693a8fa7960cb3ec8fc3629a55430a684abc759311b", "old_text": "Authenticated and encrypted header and payload QUIC uses UDP as a substrate to avoid requiring changes to legacy client operating systems and middleboxes. QUIC authenticates all of its headers and encrypts most of the data it exchanges, including its signaling, to avoid incurring a dependency on middleboxes. 1.1.", "comments": "This does a little shuffling to move some of the context-setting up. It then adds some more meat to the introductory section.\nThanks for the review Jana. I've taken most of this, with small tweaks only. Two things I didn't take: Interleaving explanation with references to sections sounded appealing, but it turns out to be both harder to read as the links break up the flow, and harder to scan for links (the structure section serves as an abbreviated and accessible ToC). I didn't move the connection piece below streams as I think that - at this level - talking about the connection is important context. I've moved the picture to the section, which didn't have that simple diagram (just some more complex ones), and I think benefits from it.\nWhen I was reading quic-transport I found it very difficult to deal with the forward references. Streams are presented before connections, the overall state machine isn't introduced up front, packets come long after frames, and both are long before the wire formats are described. It might worth revisiting the section ordering.\nAt some point, transport was hugely restructured, and I believe many of those changes were good, but some didn't work out that well. I find streams coming before connections particularly odd.\nThe larger idea is \"high-level concepts, then wire encoding\" which I think makes sense. It puts the things that someone needs for a conceptual understanding of the protocol in one place, and the things that someone needs to implement the protocol elsewhere. Within the high-level concepts, I also find it strange that connections don't come before streams. However, the \"connections\" group also includes things that really don't deserve pride-of-place as the first couple things to be discussed, either.\nI think that the main concept you need is that QUIC has a connection that is shared state between two endpoints. The later sections about connections go into more detail than you need to understand the stream sections. But I note that we don't properly provide that basic knowledge in our introductory parts, which are quite lean. Leanness is good, but we might have overdone it slightly in this regard.\nHaving worked my way through the spec, my suggested resolution to this is that Section 5.3 needs to be earlier, perhaps much earlier. At the least, it should open Section 5; some or all of it might belong in Section 1.\nThanks for working through this!", "new_text": "Authenticated and encrypted header and payload QUIC establishes a connection, which is a stateful interaction between a client and server. The primary purpose of a connection is to support the structured exchange of data by an application protocol. Streams are means by which an application protocol exchanges information. 
Streams are ordered sequences of bytes. Two types of stream can be created: bidirectional streams, which allow both endpoints to send data; and unidirectional streams, which allow a single endpoint to send. A credit-based scheme is used to limit stream creation and to bound the amount of data that can be sent. The QUIC handshake combines negotiation of cryptographic and transport parameters. The handshake is structured to permit the exchange of application data as soon as possible. This includes an option for clients to send data immediately (0-RTT), which might require prior communication to enable. QUIC connections are not strictly bound to a single network path. Connection migration uses connection identifiers to allow connections to transfer to a new network path. Frames are used in QUIC to communicate between endpoints. One or more frames are assembled into packets. QUIC authenticates all packets and encrypts as much as is practical. QUIC packets are carried in UDP datagrams to better facilitate deployment in existing systems and networks. Once established, multiple options are provided for connection termination. Applications can manage a graceful shutdown, endpoints can negotiate a timeout period, errors can cause immediate connection teardown, and a stateless mechanism provides for termination of connections after one endpoint has lost state. 1.1."} {"id": "q-en-quicwg-base-drafts-e9fe73e034bfb871c42b9693a8fa7960cb3ec8fc3629a55430a684abc759311b", "old_text": "5. QUIC's connection establishment combines version negotiation with the cryptographic and transport handshakes to reduce connection establishment latency, as described in handshake. During connection establishment, each side validates the peer's address, as described in address-validation. Once established, a connection may migrate to a different IP or port at either endpoint as described in migration. Finally, a connection may be terminated by either endpoint, as described in termination. 5.1.", "comments": "This does a little shuffling to move some of the context-setting up. It then adds some more meat to the introductory section.\nThanks for the review Jana. I've taken most of this, with small tweaks only. Two things I didn't take: Interleaving explanation with references to sections sounded appealing, but it turns out to be both harder to read as the links break up the flow, and harder to scan for links (the structure section serves as an abbreviated and accessible ToC). I didn't move the connection piece below streams as I think that - at this level - talking about the connection is important context. I've moved the picture to the section, which didn't have that simple diagram (just some more complex ones), and I think benefits from it.\nWhen I was reading quic-transport I found it very difficult to deal with the forward references. Streams are presented before connections, the overall state machine isn't introduced up front, packets come long after frames, and both are long before the wire formats are described. It might worth revisiting the section ordering.\nAt some point, transport was hugely restructured, and I believe many of those changes were good, but some didn't work out that well. I find streams coming before connections particularly odd.\nThe larger idea is \"high-level concepts, then wire encoding\" which I think makes sense. It puts the things that someone needs for a conceptual understanding of the protocol in one place, and the things that someone needs to implement the protocol elsewhere. 
Within the high-level concepts, I also find it strange that connections don't come before streams. However, the \"connections\" group also includes things that really don't deserve pride-of-place as the first couple things to be discussed, either.\nI think that the main concept you need is that QUIC has a connection that is shared state between two endpoints. The later sections about connections go into more detail than you need to understand the stream sections. But I note that we don't properly provide that basic knowledge in our introductory parts, which are quite lean. Leanness is good, but we might have overdone it slightly in this regard.\nHaving worked my way through the spec, my suggested resolution to this is that Section 5.3 needs to be earlier, perhaps much earlier. At the least, it should open Section 5; some or all of it might belong in Section 1.\nThanks for working through this!", "new_text": "5. A QUIC connection is shared state between a client and a server. Each connection starts with a handshake phase, during which the two endpoints establish a shared secret using the cryptographic handshake protocol QUIC-TLS and negotiate the application protocol. The handshake (handshake) confirms that both endpoints are willing to communicate (validate-handshake) and establishes parameters for the connection (transport-parameters). An application protocol can use the connection during the handshake phase with some limitations. 0-RTT allows application data to be sent by a client before receiving a response from the server. However, 0-RTT provides no protection against replay attacks; see Section 9.2 of QUIC-TLS. A server can also send application data to a client before it receives the final cryptographic handshake messages that allow it to confirm the identity and liveness of the client. These capabilities allow an application protocol to offer the option of trading some security guarantees for reduced latency. The use of connection IDs (connection-id) allows connections to migrate to a new network path, both as a direct choice of an endpoint and when forced by a change in a middlebox. migration describes mitigations for the security and privacy issues associated with migration. For connections that are no longer needed or desired, there are several ways for a client and server to terminate a connection (termination). 5.1."} {"id": "q-en-quicwg-base-drafts-e9fe73e034bfb871c42b9693a8fa7960cb3ec8fc3629a55430a684abc759311b", "old_text": "5.3. A QUIC connection is a stateful interaction between a client and server, the primary purpose of which is to support the exchange of data by an application protocol. Streams (streams) are the primary means by which an application protocol exchanges information. Each connection starts with a handshake phase, during which client and server establish a shared secret using the cryptographic handshake protocol QUIC-TLS and negotiate the application protocol. The handshake (handshake) confirms that both endpoints are willing to communicate (validate-handshake) and establishes parameters for the connection (transport-parameters). An application protocol can also operate in a limited fashion during the handshake phase. 0-RTT allows application data to be sent by a client before receiving any response from the server. However, 0-RTT lacks certain key security guarantees. In particular, there is no protection against replay attacks in 0-RTT; see QUIC-TLS. 
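Aside: the draft text quoted just above notes that 0-RTT carries no replay protection. As a purely illustrative sketch (not taken from the quoted drafts; the policy and names are hypothetical), an application mapping might gate which requests it is willing to put into early data roughly like this:

    # Illustrative only: keep anything with side effects out of 0-RTT,
    # where replay protection is absent. Policy and names are hypothetical.
    REPLAY_SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

    def safe_for_early_data(method: str, has_body: bool) -> bool:
        """A request is eligible for 0-RTT only if replaying it is harmless."""
        return method.upper() in REPLAY_SAFE_METHODS and not has_body

    if __name__ == "__main__":
        print(safe_for_early_data("GET", False))   # True: may go in 0-RTT
        print(safe_for_early_data("POST", True))   # False: wait for 1-RTT keys

Requests that fail the check would simply be queued until the handshake completes, trading a round trip for the stronger guarantees.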
Separately, a server can also send application data to a client before it receives the final cryptographic handshake messages that allow it to confirm the identity and liveness of the client. These capabilities allow an application protocol to offer the option to trade some security guarantees for reduced latency. The use of connection IDs (connection-id) allows connections to migrate to a new network path, both as a direct choice of an endpoint and when forced by a change in a middlebox. migration describes mitigations for the security and privacy issues associated with migration. For connections that are no longer needed or desired, there are several ways for a client and server to terminate a connection (termination). 5.4. There are certain operations that an application MUST be able to perform when interacting with the QUIC transport. This document does not specify an API, but any implementation of this version of QUIC", "comments": "This does a little shuffling to move some of the context-setting up. It then adds some more meat to the introductory section.\nThanks for the review Jana. I've taken most of this, with small tweaks only. Two things I didn't take: Interleaving explanation with references to sections sounded appealing, but it turns out to be both harder to read as the links break up the flow, and harder to scan for links (the structure section serves as an abbreviated and accessible ToC). I didn't move the connection piece below streams as I think that - at this level - talking about the connection is important context. I've moved the picture to the section, which didn't have that simple diagram (just some more complex ones), and I think benefits from it.\nWhen I was reading quic-transport I found it very difficult to deal with the forward references. Streams are presented before connections, the overall state machine isn't introduced up front, packets come long after frames, and both are long before the wire formats are described. It might worth revisiting the section ordering.\nAt some point, transport was hugely restructured, and I believe many of those changes were good, but some didn't work out that well. I find streams coming before connections particularly odd.\nThe larger idea is \"high-level concepts, then wire encoding\" which I think makes sense. It puts the things that someone needs for a conceptual understanding of the protocol in one place, and the things that someone needs to implement the protocol elsewhere. Within the high-level concepts, I also find it strange that connections don't come before streams. However, the \"connections\" group also includes things that really don't deserve pride-of-place as the first couple things to be discussed, either.\nI think that the main concept you need is that QUIC has a connection that is shared state between two endpoints. The later sections about connections go into more detail than you need to understand the stream sections. But I note that we don't properly provide that basic knowledge in our introductory parts, which are quite lean. Leanness is good, but we might have overdone it slightly in this regard.\nHaving worked my way through the spec, my suggested resolution to this is that Section 5.3 needs to be earlier, perhaps much earlier. At the least, it should open Section 5; some or all of it might belong in Section 1.\nThanks for working through this!", "new_text": "5.3. There are certain operations that an application MUST be able to perform when interacting with the QUIC transport. 
This document does not specify an API, but any implementation of this version of QUIC"} {"id": "q-en-quicwg-base-drafts-e9fe73e034bfb871c42b9693a8fa7960cb3ec8fc3629a55430a684abc759311b", "old_text": "ordered delivery of cryptographic handshake data start from zero in each packet number space. Endpoints MUST explicitly negotiate an application protocol. This avoids situations where there is a disagreement about the protocol that is in use.", "comments": "This does a little shuffling to move some of the context-setting up. It then adds some more meat to the introductory section.\nThanks for the review Jana. I've taken most of this, with small tweaks only. Two things I didn't take: Interleaving explanation with references to sections sounded appealing, but it turns out to be both harder to read as the links break up the flow, and harder to scan for links (the structure section serves as an abbreviated and accessible ToC). I didn't move the connection piece below streams as I think that - at this level - talking about the connection is important context. I've moved the picture to the section, which didn't have that simple diagram (just some more complex ones), and I think benefits from it.\nWhen I was reading quic-transport I found it very difficult to deal with the forward references. Streams are presented before connections, the overall state machine isn't introduced up front, packets come long after frames, and both are long before the wire formats are described. It might worth revisiting the section ordering.\nAt some point, transport was hugely restructured, and I believe many of those changes were good, but some didn't work out that well. I find streams coming before connections particularly odd.\nThe larger idea is \"high-level concepts, then wire encoding\" which I think makes sense. It puts the things that someone needs for a conceptual understanding of the protocol in one place, and the things that someone needs to implement the protocol elsewhere. Within the high-level concepts, I also find it strange that connections don't come before streams. However, the \"connections\" group also includes things that really don't deserve pride-of-place as the first couple things to be discussed, either.\nI think that the main concept you need is that QUIC has a connection that is shared state between two endpoints. The later sections about connections go into more detail than you need to understand the stream sections. But I note that we don't properly provide that basic knowledge in our introductory parts, which are quite lean. Leanness is good, but we might have overdone it slightly in this regard.\nHaving worked my way through the spec, my suggested resolution to this is that Section 5.3 needs to be earlier, perhaps much earlier. At the least, it should open Section 5; some or all of it might belong in Section 1.\nThanks for working through this!", "new_text": "ordered delivery of cryptographic handshake data start from zero in each packet number space. fig-hs shows a simplied handshake and the exchange of packets and frames that are used to advance the handshake. Exchange of application data during the handshake is enabled where possible, shown with a '*'. Once completed, endpoints are able to exchange application data. Endpoints MUST explicitly negotiate an application protocol. 
This avoids situations where there is a disagreement about the protocol that is in use."} {"id": "q-en-quicwg-base-drafts-3aa95478e0b778b217085ab4e4dc734a4d7a005ea3c908eb4dab1e64051838a2", "old_text": "A stateless reset uses an entire UDP datagram, starting with the first two bits of the packet header. The remainder of the first byte and an arbitrary number of bytes following it that are set to unpredictable values. The last 16 bytes of the datagram contain a Stateless Reset Token. To entities other than its intended recipient, a stateless reset will appear to be a packet with a short header. For the stateless reset", "comments": "I see a couple of places where we require that values have high entropy. For instance of the packet header. The remainder of the first byte and an arbitrary number of bytes following it that are set to unpredictable values. The last 16 bytes of the datagram contain a Stateless Reset Token. and to ensure that it is easier to receive the packet than it is to guess the value correctly. In general, this means that things contain sufficient entropy (so, for instance, you might have a value with some fixed values and some high entropy values). In any case, I think the first of these should be \"indistinguishable from random\" and the second should be \"containing at least 64 bits worth of entropy\"\nLGTM", "new_text": "A stateless reset uses an entire UDP datagram, starting with the first two bits of the packet header. The remainder of the first byte and an arbitrary number of bytes following it that are set to values that SHOULD be indistinguishable from random. The last 16 bytes of the datagram contain a Stateless Reset Token. To entities other than its intended recipient, a stateless reset will appear to be a packet with a short header. For the stateless reset"} {"id": "q-en-quicwg-base-drafts-3aa95478e0b778b217085ab4e4dc734a4d7a005ea3c908eb4dab1e64051838a2", "old_text": "This 8-byte field contains arbitrary data. A PATH_CHALLENGE frame containing 8 bytes that are hard to guess is sufficient to ensure that it is easier to receive the packet than it is to guess the value correctly. The recipient of this frame MUST generate a PATH_RESPONSE frame (frame-path-response) containing the same Data.", "comments": "I see a couple of places where we require that values have high entropy. For instance of the packet header. The remainder of the first byte and an arbitrary number of bytes following it that are set to unpredictable values. The last 16 bytes of the datagram contain a Stateless Reset Token. and to ensure that it is easier to receive the packet than it is to guess the value correctly. In general, this means that things contain sufficient entropy (so, for instance, you might have a value with some fixed values and some high entropy values). In any case, I think the first of these should be \"indistinguishable from random\" and the second should be \"containing at least 64 bits worth of entropy\"\nLGTM", "new_text": "This 8-byte field contains arbitrary data. Including 64 bits of entropy in a PATH_CHALLENGE frame ensures that it is easier to receive the packet than it is to guess the value correctly. The recipient of this frame MUST generate a PATH_RESPONSE frame (frame-path-response) containing the same Data."} {"id": "q-en-quicwg-base-drafts-9b0ae3a6c445b6c16701623b50bb7cabd751468d1f859ec230a1ea2bf8c8e9e4", "old_text": "Any received 0-RTT data that the server responds to might be due to a replay attack. 
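Aside: the stateless reset and PATH_CHALLENGE records above both require values that are effectively unguessable (indistinguishable from random, or carrying 64 bits of entropy). A minimal sketch of how an endpoint could generate and check such values; this is illustrative only and not from any particular implementation:

    # Illustrative: unpredictable values for PATH_CHALLENGE data and
    # stateless reset padding, plus a constant-time response check.
    import hmac
    import secrets

    def new_path_challenge() -> bytes:
        """8 bytes of unpredictable data for a PATH_CHALLENGE frame."""
        return secrets.token_bytes(8)

    def path_response_matches(sent_challenge: bytes, received_data: bytes) -> bool:
        """Compare a PATH_RESPONSE against the stored challenge in constant time."""
        return hmac.compare_digest(sent_challenge, received_data)

    def stateless_reset_padding(length: int) -> bytes:
        """Filler bytes that should be indistinguishable from random."""
        return secrets.token_bytes(length)

    if __name__ == "__main__":
        challenge = new_path_challenge()
        print(path_response_matches(challenge, challenge))   # True
        print(path_response_matches(challenge, bytes(8)))    # almost surely False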
Therefore, the server's use of 1-RTT keys MUST be limited to sending data before the handshake is complete. A server MUST NOT process incoming 1-RTT protected packets before the TLS handshake is complete. Because sending acknowledgments indicates that all frames in a packet have been processed, a server cannot send acknowledgments", "comments": "This was never explicit for the client as it is usually not possible. in a way that perhaps was not anticipated.\nis complete. Consequently, a server might expect 0-RTT packets to start with a However, this is not necessarily strictly true, because you might have the ACKs in 0.5RTT. Now you might say that the TLS stack reports the handshake as complete as soon as it receives Finished but that's an implementation detail\nI think that this is true. The server can send before then, but we require that the client not install 1-RTT keys until it considers the handshake complete.\nThat's not entirely true. It's the server that needs to hold back keys (namely, the 1-RTT read key) until the handshake completes. The client can use keys as soon as they become available.\nI think NAME is correct in sense that the following three events happen at the same moment on the client side: handshake is declared complete TLS stack provides the 1-RTT read and write keys TLS stack provides ClientFinished\nMartin is correct. There are two different forms of handshake completion: complete when 1rtt can be sent and received, confirmed when the peer is known to be complete. The client cannot receive 0rtt acks before installing 1rtt keys, hence before hanshake is complete.\na nit, but LGTM", "new_text": "Any received 0-RTT data that the server responds to might be due to a replay attack. Therefore, the server's use of 1-RTT keys before the handshake is complete is limited to sending data. A server MUST NOT process incoming 1-RTT protected packets before the TLS handshake is complete. Because sending acknowledgments indicates that all frames in a packet have been processed, a server cannot send acknowledgments"} {"id": "q-en-quicwg-base-drafts-9b0ae3a6c445b6c16701623b50bb7cabd751468d1f859ec230a1ea2bf8c8e9e4", "old_text": "receiving a TLS ClientHello. The server MAY retain these packets for later decryption in anticipation of receiving a ClientHello. 5.8. Retry packets (see the Retry Packet section of QUIC-TRANSPORT) carry", "comments": "This was never explicit for the client as it is usually not possible. in a way that perhaps was not anticipated.\nis complete. Consequently, a server might expect 0-RTT packets to start with a However, this is not necessarily strictly true, because you might have the ACKs in 0.5RTT. Now you might say that the TLS stack reports the handshake as complete as soon as it receives Finished but that's an implementation detail\nI think that this is true. The server can send before then, but we require that the client not install 1-RTT keys until it considers the handshake complete.\nThat's not entirely true. It's the server that needs to hold back keys (namely, the 1-RTT read key) until the handshake completes. The client can use keys as soon as they become available.\nI think NAME is correct in sense that the following three events happen at the same moment on the client side: handshake is declared complete TLS stack provides the 1-RTT read and write keys TLS stack provides ClientFinished\nMartin is correct. 
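Aside: the surrounding record is about holding back processing of 1-RTT packets until the handshake is complete, while optionally retaining early-arriving packets for later decryption. A rough sketch of that buffering, with hypothetical names (not the drafts' own pseudocode):

    # Illustrative: retain 1-RTT packets that arrive before the handshake
    # completes, then process them in arrival order once it does.
    class OneRttGate:
        def __init__(self):
            self.handshake_complete = False
            self.pending = []          # retained 1-RTT packets
            self.processed = []        # stand-in for real packet processing

        def on_1rtt_packet(self, packet: bytes) -> None:
            if self.handshake_complete:
                self.processed.append(packet)
            else:
                self.pending.append(packet)   # MAY retain for later decryption

        def on_handshake_complete(self) -> None:
            self.handshake_complete = True
            while self.pending:
                self.processed.append(self.pending.pop(0))

    if __name__ == "__main__":
        gate = OneRttGate()
        gate.on_1rtt_packet(b"early-arriving 1-RTT packet")
        gate.on_handshake_complete()
        print(len(gate.processed))  # 1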
There are two different forms of handshake completion: complete when 1rtt can be sent and received, confirmed when the peer is known to be complete. The client cannot receive 0rtt acks before installing 1rtt keys, hence before hanshake is complete.\na nit, but LGTM", "new_text": "receiving a TLS ClientHello. The server MAY retain these packets for later decryption in anticipation of receiving a ClientHello. A client generally receives 1-RTT keys at the same time as the handshake completes. Even if it has 1-RTT secrets, a client MUST NOT process incoming 1-RTT protected packets before the TLS handshake is complete. 5.8. Retry packets (see the Retry Packet section of QUIC-TRANSPORT) carry"} {"id": "q-en-quicwg-base-drafts-213d8a56d76b11bc07300610ef1ca02bb88662432eed518b74e39850cc5c6b2f", "old_text": "Invalid packets without packet protection, such as Initial, Retry, or Version Negotiation, MAY be discarded. An endpoint MUST generate a connection error if it commits changes to state before discovering an error. 5.2.1.", "comments": "Those packets that are not authenticated can be discarded, but only if processing them has no permanent effect on existing state. This attempts to explain that.\nS 5.2. says: Invalid packets without packet protection, such as Initial, Retry, or Version Negotiation, MAY be discarded. An endpoint MUST generate a connection error if it commits changes to state before discovering an error. I'm having trouble parsing this.\nLGTM", "new_text": "Invalid packets without packet protection, such as Initial, Retry, or Version Negotiation, MAY be discarded. An endpoint MUST generate a connection error if processing of the contents of these packets prior to discovering an error resulted in changes to the state of a connection that cannot be reverted. 5.2.1."} {"id": "q-en-quicwg-base-drafts-6a7c226a2a349e8725a67424f5c4c4ec81f5e702fe7c404240ac52dd4ad14d8f", "old_text": "On the other hand, reducing the frequency of packets that carry only acknowledgements reduces packet transmission and processing cost at both endpoints. It can also improve connection throughput on severely asymmetric links; see Section 3 of RFC3449. A receiver SHOULD send an ACK frame after receiving at least two ack- eliciting packets. This recommendation is general in nature and", "comments": "This is a small tweak of Gorry's suggestion.\nLooks good to me.\nI promised to raise an issue relating to this text: \"It can also improve connection throughput on severely asymmetric links; see Section 3 of [RFC3449].\" This partly addresss the case, but it does not yet note the implications of the return path traffic on congestion of the return path. I think this is a useful addition, because it helps alert the reader of the tradeoff ... which can be quite important for half-duplex/shared radio, etc where the spectrum consumed sending an ACK can even outweigh the cost of sending a data packet (because they use different design of PHY). I suggest: \"It can improve connection throughput using severely asymmetric links and can also reduce the volume of acknowledgment traffic using return path capacity; see Section 3 of [RFC3449].\"", "new_text": "On the other hand, reducing the frequency of packets that carry only acknowledgements reduces packet transmission and processing cost at both endpoints. It can improve connection throughput on severely asymmetric links and reduce the volume of acknowledgment traffic using return path capacity; see Section 3 of RFC3449. 
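Aside: the acknowledgement-frequency record above weighs the cost of ACK-only packets on constrained return paths against the delay they add to loss detection. A toy model of the usual compromise, acknowledging every second ack-eliciting packet or after a small delay; the 25 ms value is an assumed default, not mandated by the quoted text:

    # Illustrative receiver-side ACK policy; names and the delay are assumptions.
    MAX_ACK_DELAY = 0.025  # seconds

    class AckScheduler:
        def __init__(self):
            self.unacked_ack_eliciting = 0
            self.ack_deadline = None

        def on_packet(self, now: float, ack_eliciting: bool) -> bool:
            """Return True if an ACK frame should be sent immediately."""
            if not ack_eliciting:
                return False
            self.unacked_ack_eliciting += 1
            if self.unacked_ack_eliciting >= 2:
                return self._send()
            if self.ack_deadline is None:
                self.ack_deadline = now + MAX_ACK_DELAY
            return False

        def on_timer(self, now: float) -> bool:
            if self.ack_deadline is not None and now >= self.ack_deadline:
                return self._send()
            return False

        def _send(self) -> bool:
            self.unacked_ack_eliciting = 0
            self.ack_deadline = None
            return True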
A receiver SHOULD send an ACK frame after receiving at least two ack- eliciting packets. This recommendation is general in nature and"} {"id": "q-en-quicwg-base-drafts-8ab194b7a5c73b128004a98751095379c72b24838acc49fd758ab6d4856af565", "old_text": "constants-of-interest) for a new path, but the delay SHOULD NOT be considered an RTT sample. Prior to handshake completion, when few to none RTT samples have been generated, it is possible that the probe timer expiration is due to an incorrect RTT estimate at the client. To allow the client to improve its RTT estimate, the new packet that it sends MUST be ack- eliciting. Initial packets and Handshake packets could be never acknowledged, but they are removed from bytes in flight when the Initial and Handshake keys are discarded, as described below in discarding-", "comments": "PTO packets always have to be ACK-eliciting, not just during the handshake, per existing text: \"When a PTO timer expires, a sender MUST send at least one ack-eliciting packet in the packet number space as a probe,...\" Also removed an \"unless\" clause, because if there's nothing to send, you don't arm the PTO timer.", "new_text": "constants-of-interest) for a new path, but the delay SHOULD NOT be considered an RTT sample. Initial packets and Handshake packets could be never acknowledged, but they are removed from bytes in flight when the Initial and Handshake keys are discarded, as described below in discarding-"} {"id": "q-en-quicwg-base-drafts-8ab194b7a5c73b128004a98751095379c72b24838acc49fd758ab6d4856af565", "old_text": "6.2.4. When a PTO timer expires, a sender MUST send at least one ack- eliciting packet in the packet number space as a probe, unless there is no data available to send. An endpoint MAY send up to two full- sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces. All probe packets sent on a PTO MUST be ack-eliciting. In addition to sending data in the packet number space for which the timer expired, the sender SHOULD send ack-eliciting packets from", "comments": "PTO packets always have to be ACK-eliciting, not just during the handshake, per existing text: \"When a PTO timer expires, a sender MUST send at least one ack-eliciting packet in the packet number space as a probe,...\" Also removed an \"unless\" clause, because if there's nothing to send, you don't arm the PTO timer.", "new_text": "6.2.4. When a PTO timer expires, a sender MUST send at least one ack- eliciting packet in the packet number space as a probe. An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces. All probe packets sent on a PTO MUST be ack-eliciting. In addition to sending data in the packet number space for which the timer expired, the sender SHOULD send ack-eliciting packets from"} {"id": "q-en-quicwg-base-drafts-2625890c489b20d5fc6e6c0d87ee8a536a4d118835cf7f46ee488cc5954733ab", "old_text": "5.2. min_rtt is the minimum RTT observed for a given network path. min_rtt is set to the latest_rtt on the first RTT sample, and to the lesser of min_rtt and latest_rtt on subsequent samples. In this document, min_rtt is used by loss detection to reject implausibly small rtt samples. 
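Aside: among the records above is the probe-timeout change requiring every packet sent on PTO expiry to be ack-eliciting, with up to two full-sized datagrams permitted. A compact sketch of that rule (illustrative; the frame-building details are placeholders):

    # Illustrative PTO-expiry behaviour: send one or two ack-eliciting probes,
    # preferring unacknowledged data and falling back to a PING frame.
    def build_pto_probes(unacked_data_chunks, max_probes: int = 2):
        """Return probe payloads; every probe must elicit an acknowledgement."""
        probes = []
        for chunk in unacked_data_chunks[:max_probes]:
            probes.append({"frames": ["STREAM"], "payload": chunk})
        while len(probes) < 1:             # at least one probe is mandatory
            probes.append({"frames": ["PING"], "payload": b""})
        return probes

    if __name__ == "__main__":
        print(build_pto_probes([b"retransmit-me", b"and-me", b"not-yet"]))
        print(build_pto_probes([]))  # nothing outstanding: a PING still goes out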
An endpoint uses only locally observed times in computing the min_rtt and does not adjust for acknowledgment delays reported by the peer.", "comments": "I believe this covers the discussion on the issue. It also introduces some MUSTs which should've always been there. I don't have a strong opinion on the use of RFC 2119 language here, and I'm happy to remove all of that language if folks think that's better. I would like not to bikeshed this, but I would also like to be on a beach right now, enjoying a lovely drink and watching the sunset.\nNAME wrote the following in his WGLC review (): Splitting this off from as a potential design issue.\nThis seems like a reasonable thing to do.\nI'd support a can or a MAY, as an example when you might reset min_rtt, but I don't think we need a SHOULD here.\nminrtt is currently only used for filtering unexpectedly large ackdelay's reported by the peer. If the path RTT truly inflates then the peer will be able to report much larger ackdelay values and get a much lower RTT sample accepted. But I don't think this is a security problem in practice. The only other consideration is that in future QUIC implementations may reuse this computed value for other purposes like delay based congestion control. It's hard to make a judgement call on how important it is to reset minrtt. I guess I'd be happy with a SHOULD but I can live with a MAY.\nJana made a good argument that persistent congestion could be declared due to an overly small RTT, so resetting minrtt could avoid an endpoint repeatedly declaring persistent congestion. That being said, the RTT samples also need to be too low to spuriously declare persistent congestion, so this is increasingly sounding like an attack scenario, rather than having a poor estimate of minrtt. I think a SHOULD is Ok now, but I'd like to state the best rationale we have for why it's a SHOULD and neither a MUST or a MAY.\nI think there's one tricky point in resetting the minrtt after handshake confirmation. That is that all the packets can be acked with delays. IIRC, one of the reasons we were happy with using minrtt for capping latestrtt is that Initial and Handshake packets do not have ackdelays. If we are to suggest that an endpoint MAY reset minrtt when persistent congestion is declared, I think we should consider the risk of not having good estimate of minrtt, as we would have during the handshake.\nI don't see that as a risk here. min_rtt doesn't consider any discretionary delays.\nNAME That's exactly the problem. will be inflated by , unless and until the peer sends an immediate acknowledgement. And until that happens, will as well.\nI think that's OK. In practice, immediate ACKs do happen, and even at low rates, this should resolve to a \"true\" minimum. And if it doesn't, then you have worse performance than you might had otherwise. You have to have a pretty poor network (or a terrible ACK generation regime) for minrtt not to eventually resolve to a useful minimum. Explaining this risk is fine, but the endpoint that does this made the choice to throw out minrtt and that's just a consequence of that.\nOn one hand I agree that we can expect to see immediate ACKs, especially when HTTP/3 is used. At the same time, I am not sure if resetting minrtt when observing persistent congestion is better than other cases where we might reset minrtt, but that we do not currently mention. Change of IPTTL could be one of the examples in this regard (IIRC NAME pointed out that we do not use that value for detecting path changes). 
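Aside: as context for the min_rtt discussion in this record, here is a condensed RTT update in the style of the recovery draft's pseudocode, showing how min_rtt bounds the effect of the peer-reported ack_delay. It is a sketch, not the normative algorithm, and omits the max_ack_delay cap:

    # Condensed RTT estimator; illustrative only.
    class RttEstimator:
        def __init__(self):
            self.has_sample = False
            self.min_rtt = 0.0
            self.smoothed_rtt = 0.0
            self.rttvar = 0.0

        def on_ack(self, latest_rtt: float, ack_delay: float) -> None:
            if not self.has_sample:
                self.has_sample = True
                self.min_rtt = latest_rtt
                self.smoothed_rtt = latest_rtt
                self.rttvar = latest_rtt / 2
                return
            self.min_rtt = min(self.min_rtt, latest_rtt)
            adjusted_rtt = latest_rtt
            if latest_rtt - ack_delay >= self.min_rtt:
                # ack_delay is only honoured while the result stays above min_rtt
                adjusted_rtt = latest_rtt - ack_delay
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.smoothed_rtt - adjusted_rtt)
            self.smoothed_rtt = 0.875 * self.smoothed_rtt + 0.125 * adjusted_rtt

    if __name__ == "__main__":
        est = RttEstimator()
        for sample, delay in [(0.050, 0.0), (0.060, 0.020), (0.048, 0.005)]:
            est.on_ack(sample, delay)
        print(round(est.min_rtt, 3), round(est.smoothed_rtt, 3))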
To summarize, I think talking about resetting minrtt might be fine, though I am not sure if that discussion should be tied to persistent congestion.\nTo give a specific example, one problem that I know with the current definition of minrtt is that it cannot detect and correct RTT estimate of a congested path. The sole purpose of minrtt is to discard excessive ack-delays being reported by the receiver. This works by capping the lower bound of adjustedrtt to minrtt. The problem here is that minrtt is a value that is typically collected when the path is almost idle. Once the path becomes congested, the true RTT becomes much greater than minrtt. For instance, if RTT-on-idle is 10ms and if RTT-on-congestion is 50ms, minrtt would be 10ms, and therefore cannot be used to correct RTT estimates when the peer reports an excessive ackdelay of, say 25ms. The RTT estimate would become 25ms in such case. I tend to think that this might be a bigger problem of the original issue, and therefore that we should state that an endpoint can periodically re-collect minrtt estimates, if we are to make a change to the current definition of minrtt.\nBased on agreement on the resolution in , which introduces normative language, let's mark this design.\nSeems fine. I'm not 100% on the strength of the recommendation to reset after persistent congestion, but it's sensible advice.The resetting of min RTT is also specified with LEDBAT or BBR, which differ from each other and from this proposed spec. This is a complex issue, because resetting min RTT in LEDBAT \"because of congestion\" is a variation of the latecomer advantage. But as long as the text applies only to RENO as specified here, then I am fine with it.I like the cautionary text. Just one suggestion but LGTM regardless. Thank you for working on this.", "new_text": "5.2. min_rtt is the sender's estimate of the minimum RTT observed for a given network path. In this document, min_rtt is used by loss detection to reject implausibly small rtt samples. min_rtt MUST be set to the latest_rtt on the first RTT sample. min_rtt MUST be set to the lesser of min_rtt and latest_rtt (latest- rtt) on all other samples. An endpoint uses only locally observed times in computing the min_rtt and does not adjust for acknowledgment delays reported by the peer."} {"id": "q-en-quicwg-base-drafts-2625890c489b20d5fc6e6c0d87ee8a536a4d118835cf7f46ee488cc5954733ab", "old_text": "The RTT for a network path may change over time. If a path's actual RTT decreases, the min_rtt will adapt immediately on the first low sample. If the path's actual RTT increases, the min_rtt will not adapt to it, allowing future RTT samples that are smaller than the new RTT to be included in smoothed_rtt. 5.3.", "comments": "I believe this covers the discussion on the issue. It also introduces some MUSTs which should've always been there. I don't have a strong opinion on the use of RFC 2119 language here, and I'm happy to remove all of that language if folks think that's better. I would like not to bikeshed this, but I would also like to be on a beach right now, enjoying a lovely drink and watching the sunset.\nNAME wrote the following in his WGLC review (): Splitting this off from as a potential design issue.\nThis seems like a reasonable thing to do.\nI'd support a can or a MAY, as an example when you might reset min_rtt, but I don't think we need a SHOULD here.\nminrtt is currently only used for filtering unexpectedly large ackdelay's reported by the peer. 
If the path RTT truly inflates then the peer will be able to report much larger ackdelay values and get a much lower RTT sample accepted. But I don't think this is a security problem in practice. The only other consideration is that in future QUIC implementations may reuse this computed value for other purposes like delay based congestion control. It's hard to make a judgement call on how important it is to reset minrtt. I guess I'd be happy with a SHOULD but I can live with a MAY.\nJana made a good argument that persistent congestion could be declared due to an overly small RTT, so resetting minrtt could avoid an endpoint repeatedly declaring persistent congestion. That being said, the RTT samples also need to be too low to spuriously declare persistent congestion, so this is increasingly sounding like an attack scenario, rather than having a poor estimate of minrtt. I think a SHOULD is Ok now, but I'd like to state the best rationale we have for why it's a SHOULD and neither a MUST or a MAY.\nI think there's one tricky point in resetting the minrtt after handshake confirmation. That is that all the packets can be acked with delays. IIRC, one of the reasons we were happy with using minrtt for capping latestrtt is that Initial and Handshake packets do not have ackdelays. If we are to suggest that an endpoint MAY reset minrtt when persistent congestion is declared, I think we should consider the risk of not having good estimate of minrtt, as we would have during the handshake.\nI don't see that as a risk here. min_rtt doesn't consider any discretionary delays.\nNAME That's exactly the problem. will be inflated by , unless and until the peer sends an immediate acknowledgement. And until that happens, will as well.\nI think that's OK. In practice, immediate ACKs do happen, and even at low rates, this should resolve to a \"true\" minimum. And if it doesn't, then you have worse performance than you might had otherwise. You have to have a pretty poor network (or a terrible ACK generation regime) for minrtt not to eventually resolve to a useful minimum. Explaining this risk is fine, but the endpoint that does this made the choice to throw out minrtt and that's just a consequence of that.\nOn one hand I agree that we can expect to see immediate ACKs, especially when HTTP/3 is used. At the same time, I am not sure if resetting minrtt when observing persistent congestion is better than other cases where we might reset minrtt, but that we do not currently mention. Change of IPTTL could be one of the examples in this regard (IIRC NAME pointed out that we do not use that value for detecting path changes). To summarize, I think talking about resetting minrtt might be fine, though I am not sure if that discussion should be tied to persistent congestion.\nTo give a specific example, one problem that I know with the current definition of minrtt is that it cannot detect and correct RTT estimate of a congested path. The sole purpose of minrtt is to discard excessive ack-delays being reported by the receiver. This works by capping the lower bound of adjustedrtt to minrtt. The problem here is that minrtt is a value that is typically collected when the path is almost idle. Once the path becomes congested, the true RTT becomes much greater than minrtt. For instance, if RTT-on-idle is 10ms and if RTT-on-congestion is 50ms, minrtt would be 10ms, and therefore cannot be used to correct RTT estimates when the peer reports an excessive ackdelay of, say 25ms. The RTT estimate would become 25ms in such case. 
I tend to think that this might be a bigger problem of the original issue, and therefore that we should state that an endpoint can periodically re-collect minrtt estimates, if we are to make a change to the current definition of minrtt.\nBased on agreement on the resolution in , which introduces normative language, let's mark this design.\nSeems fine. I'm not 100% on the strength of the recommendation to reset after persistent congestion, but it's sensible advice.The resetting of min RTT is also specified with LEDBAT or BBR, which differ from each other and from this proposed spec. This is a complex issue, because resetting min RTT in LEDBAT \"because of congestion\" is a variation of the latecomer advantage. But as long as the text applies only to RENO as specified here, then I am fine with it.I like the cautionary text. Just one suggestion but LGTM regardless. Thank you for working on this.", "new_text": "The RTT for a network path may change over time. If a path's actual RTT decreases, the min_rtt will adapt immediately on the first low sample. If the path's actual RTT increases however, the min_rtt will not adapt to it, allowing future RTT samples that are smaller than the new RTT to be included in smoothed_rtt. Endpoints SHOULD set the min_rtt to the newest RTT sample after persistent congestion is established. This is to allow a connection to reset its estimate of min_rtt and smoothed_rtt (smoothed-rtt) after a disruptive network event, and because it is possible that an increase in path delay resulted in persistent congestion being incorrectly declared. Endpoints MAY re-establish the min_rtt at other times in the connection, such as when traffic volume is low and an acknowledgement is received with a low acknowledgement delay. Implementations SHOULD NOT refresh the min_rtt value too often, since the actual minimum RTT of the path is not frequently observable. 5.3."} {"id": "q-en-quicwg-base-drafts-4ccd572f6df056c52b21b9e862c59b973a35fadabc37b9ee0d6667022fb9ab59", "old_text": "with a value greater than 2^60, this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see integer- encoding. If either is received, the connection MUST be closed immediately with a connection error of type STREAM_LIMIT_ERROR; see immediate-close. Endpoints MUST NOT exceed the limit set by their peer. An endpoint", "comments": "This is an error that can be detected in frame parsing without additional context.\nOver on URL we had a discussion about whether a MAXSTREAMS frame with value exceeding 2^60 generates a FRAMEENCODING or a STREAMLIMIT error. Draft 29 Section 19.11 says: corresponding type that can be opened over the lifetime of the connection. This value cannot exceed 2^60, as it is not possible to encode stream IDs larger than 2^62-1. Receipt of a frame that permits opening of a stream larger than this limit MUST be treated as a FRAMEENCODINGERROR. but as NAME points out, Section 4.5 says with a value greater than 2^60, this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see Section 16. If either is received, the connection MUST be closed immediately with a connection error of type STREAMLIMITERROR Meanwhile, for the STREAMSBLOCKED frame in Section 19.15 we have: I suspect both Vidhi and I are correct, bogus MAXSTREAMS can be treated as either STREAMLIMIT or FRAME_ENCODING. 
If so, its probably a minor editorial fix to state either/or in both the sections.\nNAME suggested that it should only be a FRAMEENCODINGERROR, as it's invalid to send a MAX_STREAMS larger than 2^60 under any circumstance, regardless of connection state, and I agree that's the clearest application of our principles.\nI agree that requiring the use of FRAMEENOCDINGERROR is fine. Even though we occasionally allow an endpoint to use multiple error codes when there is more than one way of checking the status (i.e. FLOWCONTROLERROR vs. FRAMEENCODINGERROR for STREAM frames), the check required in this particular case is purely unrelated to the connection state. Hence my +1.\nI would be happy with a single error too. I'm not fussy about the actual code used as long as the spec is clear and consistent. That means that we'd need to reduce STREAMS_BLOCKED to one error code too, and potentially any other situation that allows either.\nI'm going to drop . I agree with the comments suggesting that FRAMEENCODINGERROR is the right answer. Note that you might still receive a different error, as we permit endpoints that detect multiple errors to send any that it detects, and this case clearly justifies use of STREAMLIMITERROR.", "new_text": "with a value greater than 2^60, this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see integer- encoding. If either is received, the connection MUST be closed immediately with a connection error of type FRAME_ENCODING_ERROR; see immediate-close. Endpoints MUST NOT exceed the limit set by their peer. An endpoint"} {"id": "q-en-quicwg-base-drafts-b5b12890d1a66a9f72ce8654350cb85b43ee44b3ed5283dd7e2a20db88b858e5", "old_text": "address validation by sending a Retry packet (packet-retry) containing a token. This token MUST be repeated by the client in all Initial packets it sends for that connection after it receives the Retry packet. In response to processing an Initial containing a token, a server can either abort the connection or permit it to proceed. As long as it is not possible for an attacker to generate a valid token for its own address (see token-integrity) and the client is", "comments": "The original text here might be read to imply that a server can't send Retry if it receives an Initial that contains a token that came from NEW_TOKEN. That's wrong.", "new_text": "address validation by sending a Retry packet (packet-retry) containing a token. This token MUST be repeated by the client in all Initial packets it sends for that connection after it receives the Retry packet. In response to processing an Initial containing a token that was provided in a Retry packet, a server cannot send another Retry packet; it can only refuse the connection or permit it to proceed. As long as it is not possible for an attacker to generate a valid token for its own address (see token-integrity) and the client is"} {"id": "q-en-quicwg-base-drafts-c0444bd4d2c090276002b62d2fe0674a5fa041b705eb9c2fc255890ab144f30c", "old_text": "To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO). This allows for multiple PTOs to expire, and therefore multiple probes to be sent in the event of loss, prior to idle timeout. 
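Aside: the idle-timeout text quoted just above ties the effective idle period to the probe timeout. The arithmetic is simple; as a sketch, using the recovery draft's PTO formula as an assumption:

    # Illustrative: the idle timeout actually used is never less than 3 * PTO.
    def probe_timeout(smoothed_rtt, rttvar, max_ack_delay, granularity=0.001):
        return smoothed_rtt + max(4 * rttvar, granularity) + max_ack_delay

    def effective_idle_timeout(negotiated_idle_timeout, smoothed_rtt, rttvar, max_ack_delay):
        pto = probe_timeout(smoothed_rtt, rttvar, max_ack_delay)
        return max(negotiated_idle_timeout, 3 * pto)

    if __name__ == "__main__":
        # A 200 ms idle timeout on a 300 ms RTT path is raised to about 1.575 s.
        print(effective_idle_timeout(0.200, 0.300, 0.050, 0.025))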
10.1.1.", "comments": "An update to I read the 4091 text as implying that probes were sent upon a loss.", "new_text": "To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO). This allows for multiple PTOs to expire, and therefore multiple probes to be sent and lost, prior to idle timeout. 10.1.1."} {"id": "q-en-quicwg-base-drafts-cea706f20e2ed062fcaa9fb750179f98e98913ee45e8cf7baf4268e1d3031896", "old_text": "MAX_STREAMS frames; see frame-max-streams. Separate limits apply to unidirectional and bidirectional streams. If a max_streams transport parameter or MAX_STREAMS frame is received with a value greater than 2^60, this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see integer- encoding. If either is received, the connection MUST be closed immediately with a connection error of type FRAME_ENCODING_ERROR; see immediate-close. Endpoints MUST NOT exceed the limit set by their peer. An endpoint that receives a frame with a stream ID exceeding the limit it has", "comments": "I noticed this in the Transport Draft (ID-30): Is this intentional? Should an invalid transport parameter result in a frame encoding error?\nI believe this aligns with our principles that a framing error which can be detected with no connection context is a FRAMEENCODINGERROR. Is there another error you'd suggest?\nMy concern is that transport parameter values have nothing to do with framing and using FRAMEENCODINGERROR is therefore misleading.\nWe only ever applied this principle to frames though, so I'm not sure we can use it as a precedent here. The name is indeed confusing, as it's FRAMEENCODINGERROR and not FRAMINGERROR or ENCODINGERROR.\nI always rationalized this as \"the encoded content of the CRYPTO frame is erroneous\".\nWell, if we've been through this before, I won't insist -- it's just something that jumped out at me.\nThat's even more confusing. The CRYPTO frame itself is totally fine. I'd be in favor of changing it to a TRANSPORTPARAMETERERROR.", "new_text": "MAX_STREAMS frames; see frame-max-streams. Separate limits apply to unidirectional and bidirectional streams. If a max_streams transport parameter or a MAX_STREAMS frame is received with a value greater than 2^60, this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see integer-encoding. If either is received, the connection MUST be closed immediately with a connection error of type TRANSPORT_PARAMETER_ERROR if the offending value was received in a transport parameter or of type FRAME_ENCODING_ERROR if it was received in a frame; see immediate-close. Endpoints MUST NOT exceed the limit set by their peer. An endpoint that receives a frame with a stream ID exceeding the limit it has"} {"id": "q-en-quicwg-base-drafts-05e47be171b7536e987cc05083b4ae42340aa982c87e24ad951d6eedcdc2177f", "old_text": "Clients SHOULD send a CANCEL_PUSH frame upon receipt of a PUSH_PROMISE frame carrying a request that is not cacheable, is not known to be safe, that indicates the presence of a request body, or for which it does not consider the server authoritative. Each pushed response is associated with one or more client requests. 
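Aside: the MAX_STREAMS records above converge on rejecting values greater than 2^60, with the error code depending on whether the value arrived in a frame or in a transport parameter. A small sketch of that check; the numeric codepoints follow the transport draft's error-code list and are otherwise an assumption of this example:

    # Illustrative validation of a max_streams value.
    MAX_STREAM_LIMIT = 2 ** 60
    FRAME_ENCODING_ERROR = 0x07
    TRANSPORT_PARAMETER_ERROR = 0x08

    def check_max_streams(value: int, from_transport_parameter: bool):
        """Return None if acceptable, otherwise the connection error to raise."""
        if value > MAX_STREAM_LIMIT:
            if from_transport_parameter:
                return TRANSPORT_PARAMETER_ERROR
            return FRAME_ENCODING_ERROR
        return None

    if __name__ == "__main__":
        print(check_max_streams(100, False))          # None: acceptable
        print(check_max_streams(2 ** 60 + 1, True))   # 8 (transport parameter)
        print(check_max_streams(2 ** 60 + 1, False))  # 7 (frame)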
The push is associated with the request stream on which the", "comments": "In the HTTP/3 draft 30, section 4.4 on Server Push says and then adds in 10.4 (URL) I don't think normative requirements belong in security considerations except in reference to that requirement elsewhere. Also, requirements in general should clearly state who is responsible for implementing them, whereas the latter requirement implies a client (maybe). And sending CANCEL_PUSH is kind of like the MUST NOT, but not really the same thing; it seems a bit weak in response given that the server just tried to poison the client. I think the server MUST is an interop requirement because the client MUST NOT is a critical security requirement. I think both should be described in Server Push and merely noted in the Security Considerations, since they are really important. Way more important than the spec reads now. My guess is that this is editorial, but is the kind of thing that should be fixed before IESG last call.\nThe reason it's only a SHOULD CANCEL_PUSH is that the server can't know with certainty for which origins the client considers it authoritative. The server only knows for which origins it believes that it's authoritative. So having received a push the client considers out-of-bounds, the client can't know for sure whether it's a poisoning attempt or simply a difference in view. Yes, I think this is editorial, since the core request is to relocate existing text without changing the normative requirements.", "new_text": "Clients SHOULD send a CANCEL_PUSH frame upon receipt of a PUSH_PROMISE frame carrying a request that is not cacheable, is not known to be safe, that indicates the presence of a request body, or for which it does not consider the server authoritative. Any corresponding responses MUST NOT be used or cached. Each pushed response is associated with one or more client requests. The push is associated with the request stream on which the"} {"id": "q-en-quicwg-base-drafts-05e47be171b7536e987cc05083b4ae42340aa982c87e24ad951d6eedcdc2177f", "old_text": "served out of cache, overriding the actual representation that the authoritative tenant provides. Pushed responses for which an origin server is not authoritative (see connection-reuse) MUST NOT be used or cached. 10.5.", "comments": "In the HTTP/3 draft 30, section 4.4 on Server Push says and then adds in 10.4 (URL) I don't think normative requirements belong in security considerations except in reference to that requirement elsewhere. Also, requirements in general should clearly state who is responsible for implementing them, whereas the latter requirement implies a client (maybe). And sending CANCEL_PUSH is kind of like the MUST NOT, but not really the same thing; it seems a bit weak in response given that the server just tried to poison the client. I think the server MUST is an interop requirement because the client MUST NOT is a critical security requirement. I think both should be described in Server Push and merely noted in the Security Considerations, since they are really important. Way more important than the spec reads now. My guess is that this is editorial, but is the kind of thing that should be fixed before IESG last call.\nThe reason it's only a SHOULD CANCEL_PUSH is that the server can't know with certainty for which origins the client considers it authoritative. The server only knows for which origins it believes that it's authoritative. 
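Aside: the server-push record here requires a client to reject, and never cache, pushed responses from an origin it does not consider the server authoritative for, and suggests CANCEL_PUSH for promises it cannot use. A sketch of the client-side decision; the authority set is a stand-in for real certificate and origin checks:

    # Illustrative client-side filter for PUSH_PROMISE handling.
    SAFE_CACHEABLE_METHODS = {"GET", "HEAD"}

    def accept_push(promised_method: str, promised_origin: str,
                    authoritative_origins: set, has_body: bool) -> bool:
        """True only if the pushed response may be used and cached."""
        if promised_origin not in authoritative_origins:
            return False                 # reject: server not authoritative here
        if promised_method not in SAFE_CACHEABLE_METHODS or has_body:
            return False                 # reject: not cacheable / not known safe
        return True

    if __name__ == "__main__":
        origins = {"https://example.com"}
        print(accept_push("GET", "https://example.com", origins, False))   # True
        print(accept_push("GET", "https://evil.example", origins, False))  # False -> send CANCEL_PUSH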
So having received a push the client considers out-of-bounds, the client can't know for sure whether it's a poisoning attempt or simply a difference in view. Yes, I think this is editorial, since the core request is to relocate existing text without changing the normative requirements.", "new_text": "served out of cache, overriding the actual representation that the authoritative tenant provides. Clients are required to reject pushed responses for which an origin server is not authoritative; see server-push. 10.5."} {"id": "q-en-quicwg-base-drafts-1fd6faf975b50084359f2d7e4577aaea36b8ec7f82df29309e92146221e794f6", "old_text": "21.6. An adversarial sender might intentionally send fragments of stream data in an attempt to cause disproportionate receive buffer memory commitment and/or creation of a large and inefficient data structure. An adversarial receiver might intentionally not acknowledge packets containing stream data in an attempt to force the sender to store the", "comments": "Proposal to clarify \"stream fragmentation\". Suggested by Issue .\nI don\u2019t understand what section 21.6 is about\u2026 the title seems to suggests it is about IP Fragmentation and Reassembly Attacks, but elsewhere the spec already prohibits IP Fragmentation in section 14? And I have seen no encouragement to do any form of network-layer fragmentation - so I don\u2019t understand a security consideration in this topic. Is this therefiore some other thing? It may about something different where a sender originates data with \u201choles\u201d of missing packet to try and exercise the server? or soemthing? What is this?\nHi NAME 21.6 is titled \"Stream Fragmentation and Reassembly Attacks\". It refers to streams, not IP packets. The attack involves sending stream frames with holes such as and some receivers might allocate the entire stream receive buffer between 0 and 10000001. I found the section pretty clear, but we should make it clear to everyone - could you perhaps suggest text that would make it clearer?\nOK. I see. I can try a PR, if I can find words that avoid \"fragment\": Is this close enough to start: /An adversarial sender might intentionally omit to send portions of the stream data causing the receiver to commit resources for the omitted data, this could cause a disproportionate receive buffer memory commitment and/or creation of a large and inefficient data structure.\nDo we really need to avoid the term \"fragment\"? It's not the only overloaded networking term in the spec. Perhaps adding a \"Note that stream fragmentation is unrelated to IP fragmentation\" could suffice?\nI don't think this clarification is necessary. The context makes it clear what fragmentation this is, and talking about IP fragmentation only leads to confuse the reader. (I like the first part of your PR though, as I've noted on my review of it.)", "new_text": "21.6. An adversarial sender might intentionally not send portions of the stream data, causing the receiver to commit resources for the unsent data. This could cause a disproportionate receive buffer memory commitment and/or the creation of a large and inefficient data structure at the receiver. 
An adversarial receiver might intentionally not acknowledge packets containing stream data in an attempt to force the sender to store the"} {"id": "q-en-quicwg-base-drafts-6540593ae4677aef7585de6160765d9d3164b21a631d52b8ef09d13f9ef6c96e", "old_text": "Using a value for \"N\" that is small, but at least 1 (for example, 1.25) ensures that variations in round-trip time do not result in under-utilization of the congestion window. Values of 'N' larger than 1 ultimately result in sending packets as acknowledgments are received rather than when timers fire, provided the congestion window is fully utilized and acknowledgments arrive at regular intervals. Practical considerations, such as packetization, scheduling delays, and computational efficiency, can cause a sender to deviate from this", "comments": "N was introduced in to describe how a sender might pace at a rate that allows it to send a congestion window's worth of data in a period that is shorter than an RTT. This PR clarifies text that goes into unnecessary detail, trying to describe assumptions (the network is spreading the packets out, for example, resulting in an ack clock).\nCan we make progress in this one?\nYes, I'm waiting for the design issues to get done before moving on the remaining editorial ones.\nI think that has happened now :-)\nNudge", "new_text": "Using a value for \"N\" that is small, but at least 1 (for example, 1.25) ensures that variations in round-trip time do not result in under-utilization of the congestion window. Practical considerations, such as packetization, scheduling delays, and computational efficiency, can cause a sender to deviate from this"} {"id": "q-en-quicwg-base-drafts-ff9da4c64eeb20fcd23ba8879c3eab6a778713074e7ce6abc23518f241ff88d8", "old_text": "If a path has been validated to support ECN (RFC3168, RFC8311), QUIC treats a Congestion Experienced (CE) codepoint in the IP header as a signal of congestion. This document specifies an endpoint's response when its peer receives packets with the ECN-CE codepoint. 7.2.", "comments": "Also removes odd MAY when suggesting that a sender can use TCP's pipeACK method. Addresses .\nIn Recovery-31 there are several references that I judge as normative and not informative ones. Section 7.8: \"A sender MAY use the pipeACK method described in Section 4.3 of [RFC7661] to determine if the congestion window is sufficiently utilized.\" A normative reference necessary to follow if one selects to use the optional method.Sectiom 7.3.2: \"Implementations MAY reduce the congestion window immediately upon entering a recovery period or use other mechanisms, such as Proportional Rate Reduction ([PRR]), to reduce the congestion window more gradually.\" Another case of a suggested mechanism, which if one is to follow it requires one to read that PPR reference. Secondly, why the alternative reference style.Section 8.3: \"Markings can be treated as equivalent to loss ([RFC3168]), but other responses can be specified, such as ([RFC8511]) or ([RFC8311]).\" So RFC3168 is the normative specification for the treat it equal to loss. RFC8511 is the TCP alternative back-off an experimental specification for the alternative. And RFC8311 is the process for performing other experiments with alternative responses. I judge all necessary depending on what one intendeds to do. So to my understanding this results in two downrefs to experimental specifications. I think that is acceptable in the context they are used in this document. But to be tried in IETF last call. 
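Aside: the reworded record above keeps the underlying concern: a peer that sends stream data with large holes can make a naive receiver reserve memory for the whole gap. A sketch of interval-based buffering that only pays for bytes actually received (illustrative; real code would also merge overlapping ranges and enforce flow-control limits):

    # Illustrative: store received stream data as (offset -> bytes) ranges
    # instead of preallocating a contiguous buffer up to the highest offset.
    class SparseStreamBuffer:
        def __init__(self):
            self.ranges = {}

        def on_stream_frame(self, offset: int, data: bytes) -> None:
            self.ranges[offset] = data

        def buffered_bytes(self) -> int:
            return sum(len(d) for d in self.ranges.values())

    if __name__ == "__main__":
        buf = SparseStreamBuffer()
        buf.on_stream_frame(0, b"x")
        buf.on_stream_frame(10_000_000, b"y")  # the ~10 MB hole is not allocated
        print(buf.buffered_bytes())            # 2, not 10000002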
However, I think I don't want to generally recommend them to be acceptable as downrefs so I have no intentions to promote them for the Downref registry.\nMoving discussion from the PR to the issue. NAME : I'm not the expert on what should be normative, but these were intentionally marked as informative references since they're all TCP specific RFCs. The pipeACK reference in particular seems like an odd one. If this is what the chairs want, then I'm ok with it as an editor, but these choices were intentional. NAME : I see what NAME is saying, but I also see some wiggle room in RFC 3967, Section 1.1: When a reference to an example is made, such a reference need not be normative. For example, text such as \"an algorithm such as the one specified in [RFCxxxx] would be acceptable\" indicates an informative reference, since that cited algorithm is just one of several possible algorithms that could be used. I am also somewhat concerned that trying to get the downrefs into the registry is more likely to run into trouble (Sec 3) This procedure should not be used if the proper step is to move the document to which the reference is being made into the appropriate category. It is not intended as an easy way out of normal process. Rather, the procedure is intended for dealing with specific cases where putting particular documents into the required category is problematic and unlikely ever to happen.\nThanks for digging up the relevant text, NAME Based on that text from RFC 3967, it doesn't seem like we need to make these references normative. I agree that the pipeACK reference is odd, but I think that's probably because of the use of MAY there. I'm happy to replace the MAY with a can -- it is meant to present an option, not a permission -- and make that reference informative as well. NAME : thoughts?\nI think at last RFC3168 should be referenced normatively somewhere (probably section 7.1 though) because you need to know what the ECN bits are in order to do anything with it. I guess because of section 7.1. RFC8311 should also be a normative reference.\nNAME RFC3168 is already a normative reference in the transport doc, which in my view is appropriate.\nWas wondering about this as well given the transport doc is of course a normative reference of this doc. However, as this doc uses terms like CE also directly itself I think the right practice is to also have it as a normative reference here, and as you have to read it no matter what, it definitely shouldn't hurt.\ntalks about ECN-CE markings, but really all of Section 7 talks about a response to an increase in ECN-CE count by the peer, which is defined in the transport doc. This does point to a small editorial issue. The text in Section 4.1 says: This really should be:\nGood point on the editorial issue.\nWith that change, the only thing we're left with is . This does require an understanding of ECN, and it has RFC 2119 keywords, but it's really an optional mechanism that is suggested. We could remove the RFC 2119 words there, since they're really unnecessary.\nMy point is that RFC3168 should be normative because not matter what you need to understand what an ECN-CE mark is and that is defined/specified in RFC3168.\nMirja has a point, see URL\n-transport is a normative reference. So 3168 is already (transitively) a normative reference. The question is whether this specific text requires anything from 3168 to understand. 
I don't see that it does, especially given the suggested change, which talks about a concept defined in -transport, not 3168.\nhas made these editorial changes; NAME can you see if anything else needs to be done here?\nI will note that it was never my intention to let any of theses Experimental or informational TCP specs to be added into the downref registry by this last call. So, the pipeack reference needs to be reformulated to not be a normative one. As currently stated it is a clear normative reference per IESG Statement ([URL]) , and especially \"Note 1: Even references that are relevant only for optional features must be classified as normative if they meet the above conditions for normative references. \" You need to find a formulation that normative describes what the implementation MAY do, and then you can exemplify with the TCP version. Then one in Section 7.3.2 I can accept as still being informative as the MAY statement is related to the primary aspect of the sentence and the reference to PPR is exemplifying. Still this is very much on the border and may still generate a comment in IESG. For RFC 3168 I don't understand why not making it to a normative one. There are no issues with its status as it is proposed standard. And simply the security consideration section still implies that you need to know what a ECN-CE mark is to interpret this. So please make this one normative.\nI'm ok with RFC 3168 being normative, particularly since it's already normative in transport, but would like to avoid making the others normative.\nNAME if pipeack is not normative then a reformulation is necessary of that sentence in Section 7.8 to not state is as a mechanism that MAY be implemented.\nThe sentence currently has a can, not a MAY: \"A sender can use the pipeACK method described in Section 4.3 of {{?RFC7661}} to determine if the congestion window is sufficiently utilized.\" But I sent out PR to make it clearer that pipeACK is an example, PTAL and see if that helps.\nNAME what PR proposes removing that first sentence do resolve the pipeack to me. So merging , and moving RFC3168 to normative will resolve this issue to my understanding.\nThanks! I'll update PR to include moving RFC3168 to normative.\nNAME NAME : I believe this is purely editorial, but I'd like to confirm.\nWe've meandered to a route that AFAICT makes reference types accurate to their existing intent, or avoids them becoming normative. I'm happy for this to be editorials.\nI think that this is the most appropriate response.", "new_text": "If a path has been validated to support ECN (RFC3168, RFC8311), QUIC treats a Congestion Experienced (CE) codepoint in the IP header as a signal of congestion. This document specifies an endpoint's response when the peer-reported ECN-CE count increases; see Section 13.4.2 of QUIC-TRANSPORT. 7.2."} {"id": "q-en-quicwg-base-drafts-ff9da4c64eeb20fcd23ba8879c3eab6a778713074e7ce6abc23518f241ff88d8", "old_text": "the congestion window. The minimum congestion window is the smallest value the congestion window can decrease to as a response to loss, ECN-CE, or persistent congestion. The RECOMMENDED value is 2 * max_datagram_size. 7.3.", "comments": "Also removes odd MAY when suggesting that a sender can use TCP's pipeACK method. Addresses .\nIn Recovery-31 there are several references that I judge as normative and not informative ones. 
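A minimal sketch may help connect the two pieces of draft text in this record pair: an increase in the ECN-CE count reported by the peer is handled as a congestion signal, and any resulting reduction is floored at the recommended minimum congestion window of 2 * max_datagram_size. The sketch assumes a NewReno-style halving and a simplified recovery flag; a real sender keys its recovery period off packet send times, and all names here are illustrative rather than the normative pseudocode of either draft.

from dataclasses import dataclass

kMinimumWindow = 2 * 1_200   # 2 * max_datagram_size, the recommended floor

@dataclass
class CcState:
    cwnd: int = 12_000
    ce_count: int = 0
    in_recovery: bool = False  # simplification; real senders track a recovery start time

def on_ecn_ce_increase(state: CcState, reported_ce_count: int) -> None:
    """Treat an increase in the peer-reported ECN-CE count as a congestion event."""
    if reported_ce_count <= state.ce_count:
        return                               # no new CE marks reported
    state.ce_count = reported_ce_count
    if not state.in_recovery:                # at most one reduction per recovery period
        state.in_recovery = True
        state.cwnd = max(state.cwnd // 2, kMinimumWindow)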
Section 7.8: \"A sender MAY use the pipeACK method described in Section 4.3 of [RFC7661] to determine if the congestion window is sufficiently utilized.\" A normative reference necessary to follow if one selects to use the optional method.Sectiom 7.3.2: \"Implementations MAY reduce the congestion window immediately upon entering a recovery period or use other mechanisms, such as Proportional Rate Reduction ([PRR]), to reduce the congestion window more gradually.\" Another case of a suggested mechanism, which if one is to follow it requires one to read that PPR reference. Secondly, why the alternative reference style.Section 8.3: \"Markings can be treated as equivalent to loss ([RFC3168]), but other responses can be specified, such as ([RFC8511]) or ([RFC8311]).\" So RFC3168 is the normative specification for the treat it equal to loss. RFC8511 is the TCP alternative back-off an experimental specification for the alternative. And RFC8311 is the process for performing other experiments with alternative responses. I judge all necessary depending on what one intendeds to do. So to my understanding this results in two downrefs to experimental specifications. I think that is acceptable in the context they are used in this document. But to be tried in IETF last call. However, I think I don't want to generally recommend them to be acceptable as downrefs so I have no intentions to promote them for the Downref registry.\nMoving discussion from the PR to the issue. NAME : I'm not the expert on what should be normative, but these were intentionally marked as informative references since they're all TCP specific RFCs. The pipeACK reference in particular seems like an odd one. If this is what the chairs want, then I'm ok with it as an editor, but these choices were intentional. NAME : I see what NAME is saying, but I also see some wiggle room in RFC 3967, Section 1.1: When a reference to an example is made, such a reference need not be normative. For example, text such as \"an algorithm such as the one specified in [RFCxxxx] would be acceptable\" indicates an informative reference, since that cited algorithm is just one of several possible algorithms that could be used. I am also somewhat concerned that trying to get the downrefs into the registry is more likely to run into trouble (Sec 3) This procedure should not be used if the proper step is to move the document to which the reference is being made into the appropriate category. It is not intended as an easy way out of normal process. Rather, the procedure is intended for dealing with specific cases where putting particular documents into the required category is problematic and unlikely ever to happen.\nThanks for digging up the relevant text, NAME Based on that text from RFC 3967, it doesn't seem like we need to make these references normative. I agree that the pipeACK reference is odd, but I think that's probably because of the use of MAY there. I'm happy to replace the MAY with a can -- it is meant to present an option, not a permission -- and make that reference informative as well. NAME : thoughts?\nI think at last RFC3168 should be referenced normatively somewhere (probably section 7.1 though) because you need to know what the ECN bits are in order to do anything with it. I guess because of section 7.1. 
RFC8311 should also be a normative reference.\nNAME RFC3168 is already a normative reference in the transport doc, which in my view is appropriate.\nWas wondering about this as well given the transport doc is of course a normative reference of this doc. However, as this doc uses terms like CE also directly itself I think the right practice is to also have it as a normative reference here, and as you have to read it no matter what, it definitely shouldn't hurt.\ntalks about ECN-CE markings, but really all of Section 7 talks about a response to an increase in ECN-CE count by the peer, which is defined in the transport doc. This does point to a small editorial issue. The text in Section 4.1 says: This really should be:\nGood point on the editorial issue.\nWith that change, the only thing we're left with is . This does require an understanding of ECN, and it has RFC 2119 keywords, but it's really an optional mechanism that is suggested. We could remove the RFC 2119 words there, since they're really unnecessary.\nMy point is that RFC3168 should be normative because not matter what you need to understand what an ECN-CE mark is and that is defined/specified in RFC3168.\nMirja has a point, see URL\n-transport is a normative reference. So 3168 is already (transitively) a normative reference. The question is whether this specific text requires anything from 3168 to understand. I don't see that it does, especially given the suggested change, which talks about a concept defined in -transport, not 3168.\nhas made these editorial changes; NAME can you see if anything else needs to be done here?\nI will note that it was never my intention to let any of theses Experimental or informational TCP specs to be added into the downref registry by this last call. So, the pipeack reference needs to be reformulated to not be a normative one. As currently stated it is a clear normative reference per IESG Statement ([URL]) , and especially \"Note 1: Even references that are relevant only for optional features must be classified as normative if they meet the above conditions for normative references. \" You need to find a formulation that normative describes what the implementation MAY do, and then you can exemplify with the TCP version. Then one in Section 7.3.2 I can accept as still being informative as the MAY statement is related to the primary aspect of the sentence and the reference to PPR is exemplifying. Still this is very much on the border and may still generate a comment in IESG. For RFC 3168 I don't understand why not making it to a normative one. There are no issues with its status as it is proposed standard. And simply the security consideration section still implies that you need to know what a ECN-CE mark is to interpret this. So please make this one normative.\nI'm ok with RFC 3168 being normative, particularly since it's already normative in transport, but would like to avoid making the others normative.\nNAME if pipeack is not normative then a reformulation is necessary of that sentence in Section 7.8 to not state is as a mechanism that MAY be implemented.\nThe sentence currently has a can, not a MAY: \"A sender can use the pipeACK method described in Section 4.3 of {{?RFC7661}} to determine if the congestion window is sufficiently utilized.\" But I sent out PR to make it clearer that pipeACK is an example, PTAL and see if that helps.\nNAME what PR proposes removing that first sentence do resolve the pipeack to me. 
So merging , and moving RFC3168 to normative will resolve this issue to my understanding.\nThanks! I'll update PR to include moving RFC3168 to normative.\nNAME NAME : I believe this is purely editorial, but I'd like to confirm.\nWe've meandered to a route that AFAICT makes reference types accurate to their existing intent, or avoids them becoming normative. I'm happy for this to be editorials.\nI think that this is the most appropriate response.", "new_text": "the congestion window. The minimum congestion window is the smallest value the congestion window can decrease to as a response to loss, increase in the peer- reported ECN-CE count, or persistent congestion. The RECOMMENDED value is 2 * max_datagram_size. 7.3."} {"id": "q-en-quicwg-base-drafts-ff9da4c64eeb20fcd23ba8879c3eab6a778713074e7ce6abc23518f241ff88d8", "old_text": "increased in either slow start or congestion avoidance. This can happen due to insufficient application data or flow control limits. A sender MAY use the pipeACK method described in Section 4.3 of RFC7661 to determine if the congestion window is sufficiently utilized.", "comments": "Also removes odd MAY when suggesting that a sender can use TCP's pipeACK method. Addresses .\nIn Recovery-31 there are several references that I judge as normative and not informative ones. Section 7.8: \"A sender MAY use the pipeACK method described in Section 4.3 of [RFC7661] to determine if the congestion window is sufficiently utilized.\" A normative reference necessary to follow if one selects to use the optional method.Sectiom 7.3.2: \"Implementations MAY reduce the congestion window immediately upon entering a recovery period or use other mechanisms, such as Proportional Rate Reduction ([PRR]), to reduce the congestion window more gradually.\" Another case of a suggested mechanism, which if one is to follow it requires one to read that PPR reference. Secondly, why the alternative reference style.Section 8.3: \"Markings can be treated as equivalent to loss ([RFC3168]), but other responses can be specified, such as ([RFC8511]) or ([RFC8311]).\" So RFC3168 is the normative specification for the treat it equal to loss. RFC8511 is the TCP alternative back-off an experimental specification for the alternative. And RFC8311 is the process for performing other experiments with alternative responses. I judge all necessary depending on what one intendeds to do. So to my understanding this results in two downrefs to experimental specifications. I think that is acceptable in the context they are used in this document. But to be tried in IETF last call. However, I think I don't want to generally recommend them to be acceptable as downrefs so I have no intentions to promote them for the Downref registry.\nMoving discussion from the PR to the issue. NAME : I'm not the expert on what should be normative, but these were intentionally marked as informative references since they're all TCP specific RFCs. The pipeACK reference in particular seems like an odd one. If this is what the chairs want, then I'm ok with it as an editor, but these choices were intentional. NAME : I see what NAME is saying, but I also see some wiggle room in RFC 3967, Section 1.1: When a reference to an example is made, such a reference need not be normative. For example, text such as \"an algorithm such as the one specified in [RFCxxxx] would be acceptable\" indicates an informative reference, since that cited algorithm is just one of several possible algorithms that could be used. 
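For readers who do not want to chase the RFC 7661 reference debated in this thread, the following sketch shows roughly what a pipeACK-style utilization check looks like: the sender samples how much data is actually acknowledged per RTT and compares the largest recent sample against the congestion window. The one-half threshold, the helper name, and the sampling period are illustrative assumptions, not text from either document.

def cwnd_is_utilized(acked_bytes_per_rtt, cwnd):
    """Rough pipeACK-style check (after RFC 7661): the window counts as
    sufficiently utilized when the largest recent per-RTT acknowledged
    volume reaches at least half of cwnd.

    acked_bytes_per_rtt: samples from the last few RTTs (the sampling period).
    """
    if not acked_bytes_per_rtt:
        return False
    pipe_ack = max(acked_bytes_per_rtt)
    return pipe_ack >= cwnd / 2

# Example: cwnd of 60 kB, but the application only ever has ~20 kB in flight,
# so the window is not considered utilized and should not keep growing.
assert cwnd_is_utilized([18_000, 20_500, 19_200], cwnd=60_000) is False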
I am also somewhat concerned that trying to get the downrefs into the registry is more likely to run into trouble (Sec 3) This procedure should not be used if the proper step is to move the document to which the reference is being made into the appropriate category. It is not intended as an easy way out of normal process. Rather, the procedure is intended for dealing with specific cases where putting particular documents into the required category is problematic and unlikely ever to happen.\nThanks for digging up the relevant text, NAME Based on that text from RFC 3967, it doesn't seem like we need to make these references normative. I agree that the pipeACK reference is odd, but I think that's probably because of the use of MAY there. I'm happy to replace the MAY with a can -- it is meant to present an option, not a permission -- and make that reference informative as well. NAME : thoughts?\nI think at last RFC3168 should be referenced normatively somewhere (probably section 7.1 though) because you need to know what the ECN bits are in order to do anything with it. I guess because of section 7.1. RFC8311 should also be a normative reference.\nNAME RFC3168 is already a normative reference in the transport doc, which in my view is appropriate.\nWas wondering about this as well given the transport doc is of course a normative reference of this doc. However, as this doc uses terms like CE also directly itself I think the right practice is to also have it as a normative reference here, and as you have to read it no matter what, it definitely shouldn't hurt.\ntalks about ECN-CE markings, but really all of Section 7 talks about a response to an increase in ECN-CE count by the peer, which is defined in the transport doc. This does point to a small editorial issue. The text in Section 4.1 says: This really should be:\nGood point on the editorial issue.\nWith that change, the only thing we're left with is . This does require an understanding of ECN, and it has RFC 2119 keywords, but it's really an optional mechanism that is suggested. We could remove the RFC 2119 words there, since they're really unnecessary.\nMy point is that RFC3168 should be normative because not matter what you need to understand what an ECN-CE mark is and that is defined/specified in RFC3168.\nMirja has a point, see URL\n-transport is a normative reference. So 3168 is already (transitively) a normative reference. The question is whether this specific text requires anything from 3168 to understand. I don't see that it does, especially given the suggested change, which talks about a concept defined in -transport, not 3168.\nhas made these editorial changes; NAME can you see if anything else needs to be done here?\nI will note that it was never my intention to let any of theses Experimental or informational TCP specs to be added into the downref registry by this last call. So, the pipeack reference needs to be reformulated to not be a normative one. As currently stated it is a clear normative reference per IESG Statement ([URL]) , and especially \"Note 1: Even references that are relevant only for optional features must be classified as normative if they meet the above conditions for normative references. \" You need to find a formulation that normative describes what the implementation MAY do, and then you can exemplify with the TCP version. Then one in Section 7.3.2 I can accept as still being informative as the MAY statement is related to the primary aspect of the sentence and the reference to PPR is exemplifying. 
Still this is very much on the border and may still generate a comment in IESG. For RFC 3168 I don't understand why not making it to a normative one. There are no issues with its status as it is proposed standard. And simply the security consideration section still implies that you need to know what a ECN-CE mark is to interpret this. So please make this one normative.\nI'm ok with RFC 3168 being normative, particularly since it's already normative in transport, but would like to avoid making the others normative.\nNAME if pipeack is not normative then a reformulation is necessary of that sentence in Section 7.8 to not state is as a mechanism that MAY be implemented.\nThe sentence currently has a can, not a MAY: \"A sender can use the pipeACK method described in Section 4.3 of {{?RFC7661}} to determine if the congestion window is sufficiently utilized.\" But I sent out PR to make it clearer that pipeACK is an example, PTAL and see if that helps.\nNAME what PR proposes removing that first sentence do resolve the pipeack to me. So merging , and moving RFC3168 to normative will resolve this issue to my understanding.\nThanks! I'll update PR to include moving RFC3168 to normative.\nNAME NAME : I believe this is purely editorial, but I'd like to confirm.\nWe've meandered to a route that AFAICT makes reference types accurate to their existing intent, or avoids them becoming normative. I'm happy for this to be editorials.\nI think that this is the most appropriate response.", "new_text": "increased in either slow start or congestion avoidance. This can happen due to insufficient application data or flow control limits. A sender can use the pipeACK method described in Section 4.3 of RFC7661 to determine if the congestion window is sufficiently utilized."} {"id": "q-en-quicwg-base-drafts-ff9da4c64eeb20fcd23ba8879c3eab6a778713074e7ce6abc23518f241ff88d8", "old_text": "cause a sender to increase their send rate. This increase could result in congestion and loss. A sender MAY attempt to detect suppression of reports by marking occasional packets that they send with ECN-CE. If a packet sent with ECN-CE is not reported as having been CE marked when the packet is acknowledged, then the sender SHOULD disable ECN for that path. Reporting additional ECN-CE markings will cause a sender to reduce their sending rate, which is similar in effect to advertising reduced", "comments": "Also removes odd MAY when suggesting that a sender can use TCP's pipeACK method. Addresses .\nIn Recovery-31 there are several references that I judge as normative and not informative ones. Section 7.8: \"A sender MAY use the pipeACK method described in Section 4.3 of [RFC7661] to determine if the congestion window is sufficiently utilized.\" A normative reference necessary to follow if one selects to use the optional method.Sectiom 7.3.2: \"Implementations MAY reduce the congestion window immediately upon entering a recovery period or use other mechanisms, such as Proportional Rate Reduction ([PRR]), to reduce the congestion window more gradually.\" Another case of a suggested mechanism, which if one is to follow it requires one to read that PPR reference. Secondly, why the alternative reference style.Section 8.3: \"Markings can be treated as equivalent to loss ([RFC3168]), but other responses can be specified, such as ([RFC8511]) or ([RFC8311]).\" So RFC3168 is the normative specification for the treat it equal to loss. RFC8511 is the TCP alternative back-off an experimental specification for the alternative. 
And RFC8311 is the process for performing other experiments with alternative responses. I judge all necessary depending on what one intendeds to do. So to my understanding this results in two downrefs to experimental specifications. I think that is acceptable in the context they are used in this document. But to be tried in IETF last call. However, I think I don't want to generally recommend them to be acceptable as downrefs so I have no intentions to promote them for the Downref registry.\nMoving discussion from the PR to the issue. NAME : I'm not the expert on what should be normative, but these were intentionally marked as informative references since they're all TCP specific RFCs. The pipeACK reference in particular seems like an odd one. If this is what the chairs want, then I'm ok with it as an editor, but these choices were intentional. NAME : I see what NAME is saying, but I also see some wiggle room in RFC 3967, Section 1.1: When a reference to an example is made, such a reference need not be normative. For example, text such as \"an algorithm such as the one specified in [RFCxxxx] would be acceptable\" indicates an informative reference, since that cited algorithm is just one of several possible algorithms that could be used. I am also somewhat concerned that trying to get the downrefs into the registry is more likely to run into trouble (Sec 3) This procedure should not be used if the proper step is to move the document to which the reference is being made into the appropriate category. It is not intended as an easy way out of normal process. Rather, the procedure is intended for dealing with specific cases where putting particular documents into the required category is problematic and unlikely ever to happen.\nThanks for digging up the relevant text, NAME Based on that text from RFC 3967, it doesn't seem like we need to make these references normative. I agree that the pipeACK reference is odd, but I think that's probably because of the use of MAY there. I'm happy to replace the MAY with a can -- it is meant to present an option, not a permission -- and make that reference informative as well. NAME : thoughts?\nI think at last RFC3168 should be referenced normatively somewhere (probably section 7.1 though) because you need to know what the ECN bits are in order to do anything with it. I guess because of section 7.1. RFC8311 should also be a normative reference.\nNAME RFC3168 is already a normative reference in the transport doc, which in my view is appropriate.\nWas wondering about this as well given the transport doc is of course a normative reference of this doc. However, as this doc uses terms like CE also directly itself I think the right practice is to also have it as a normative reference here, and as you have to read it no matter what, it definitely shouldn't hurt.\ntalks about ECN-CE markings, but really all of Section 7 talks about a response to an increase in ECN-CE count by the peer, which is defined in the transport doc. This does point to a small editorial issue. The text in Section 4.1 says: This really should be:\nGood point on the editorial issue.\nWith that change, the only thing we're left with is . This does require an understanding of ECN, and it has RFC 2119 keywords, but it's really an optional mechanism that is suggested. 
We could remove the RFC 2119 words there, since they're really unnecessary.\nMy point is that RFC3168 should be normative because not matter what you need to understand what an ECN-CE mark is and that is defined/specified in RFC3168.\nMirja has a point, see URL\n-transport is a normative reference. So 3168 is already (transitively) a normative reference. The question is whether this specific text requires anything from 3168 to understand. I don't see that it does, especially given the suggested change, which talks about a concept defined in -transport, not 3168.\nhas made these editorial changes; NAME can you see if anything else needs to be done here?\nI will note that it was never my intention to let any of theses Experimental or informational TCP specs to be added into the downref registry by this last call. So, the pipeack reference needs to be reformulated to not be a normative one. As currently stated it is a clear normative reference per IESG Statement ([URL]) , and especially \"Note 1: Even references that are relevant only for optional features must be classified as normative if they meet the above conditions for normative references. \" You need to find a formulation that normative describes what the implementation MAY do, and then you can exemplify with the TCP version. Then one in Section 7.3.2 I can accept as still being informative as the MAY statement is related to the primary aspect of the sentence and the reference to PPR is exemplifying. Still this is very much on the border and may still generate a comment in IESG. For RFC 3168 I don't understand why not making it to a normative one. There are no issues with its status as it is proposed standard. And simply the security consideration section still implies that you need to know what a ECN-CE mark is to interpret this. So please make this one normative.\nI'm ok with RFC 3168 being normative, particularly since it's already normative in transport, but would like to avoid making the others normative.\nNAME if pipeack is not normative then a reformulation is necessary of that sentence in Section 7.8 to not state is as a mechanism that MAY be implemented.\nThe sentence currently has a can, not a MAY: \"A sender can use the pipeACK method described in Section 4.3 of {{?RFC7661}} to determine if the congestion window is sufficiently utilized.\" But I sent out PR to make it clearer that pipeACK is an example, PTAL and see if that helps.\nNAME what PR proposes removing that first sentence do resolve the pipeack to me. So merging , and moving RFC3168 to normative will resolve this issue to my understanding.\nThanks! I'll update PR to include moving RFC3168 to normative.\nNAME NAME : I believe this is purely editorial, but I'd like to confirm.\nWe've meandered to a route that AFAICT makes reference types accurate to their existing intent, or avoids them becoming normative. I'm happy for this to be editorials.\nI think that this is the most appropriate response.", "new_text": "cause a sender to increase their send rate. This increase could result in congestion and loss. A sender can detect suppression of reports by marking occasional packets that it sends with an ECN-CE marking. If a packet sent with an ECN-CE marking is not reported as having been CE marked when the packet is acknowledged, then the sender can disable ECN for that path by not setting ECT codepoints in subsequent packets sent on that path RFC3168. 
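The report-suppression check described in the sentence just above is simple to express in code. In this sketch the sender remembers which packet numbers it deliberately sent with the CE codepoint and expects the peer's reported ECN-CE count to have grown by the time such a packet is acknowledged; the bookkeeping and names are illustrative assumptions, not the drafts' own pseudocode.

def check_ce_reporting(sent_ce_packets, acked_packet_numbers, reported_ce_count, expected_min_ce):
    """Return False (stop setting ECT on this path) if a deliberately CE-marked
    packet was acknowledged without the peer's ECN-CE count reflecting it.

    sent_ce_packets: packet numbers the sender transmitted with the CE codepoint.
    acked_packet_numbers: packet numbers covered by the ACK being processed.
    reported_ce_count: ECN-CE count carried in that ACK frame.
    expected_min_ce: CE marks the peer must have seen once those packets arrived.
    """
    ce_acked = sent_ce_packets & acked_packet_numbers
    if ce_acked and reported_ce_count < expected_min_ce:
        return False          # report suppressed or mangled: disable ECN here
    return True

# Example: packet 7 was sent CE-marked and is now acknowledged, but the peer
# still reports zero CE marks, so the sender stops using ECN on this path.
assert check_ce_reporting({7}, {5, 6, 7}, reported_ce_count=0, expected_min_ce=1) is False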
Reporting additional ECN-CE markings will cause a sender to reduce their sending rate, which is similar in effect to advertising reduced"} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "packetization defines models for the transmission, retransmission, and acknowledgement of data, and packet-size specifies rules for managing the size of packets. Finally, encoding details of QUIC protocol elements are described in:", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. 
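The two Section 5.3 items raised in the review list above concern how the ACK Delay field is decoded and how the first RTT sample seeds the estimator. A condensed sketch of both follows, assuming the usual exponent-scaled decoding (with the default ack_delay_exponent of 3) and the adopt-the-first-sample initialisation; variable names are illustrative.

def decode_ack_delay(ack_delay_field, ack_delay_exponent=3):
    """The ACK Delay field is an integer the peer scales by 2^ack_delay_exponent;
    decoding it yields microseconds of intentional acknowledgement delay."""
    return ack_delay_field * (2 ** ack_delay_exponent)

def first_rtt_sample(rtt_sample_us):
    """On the very first sample for a path the estimator simply adopts it,
    erasing any persisted or default values."""
    smoothed_rtt = rtt_sample_us
    rttvar = rtt_sample_us / 2
    return smoothed_rtt, rttvar

# Example: a field value of 1000 with the default exponent means 8 ms of delay.
assert decode_ack_delay(1000) == 8_000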
We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "packetization defines models for the transmission, retransmission, and acknowledgement of data, and datagram-size specifies rules for managing the size of datagrams carrying QUIC packets. Finally, encoding details of QUIC protocol elements are described in:"} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "14. The QUIC packet size includes the QUIC header and protected payload, but not the UDP or IP headers. QUIC depends upon a minimum IP packet size of at least 1280 bytes. This is the IPv6 minimum size (IPv6) and is also supported by most modern IPv4 networks. Assuming the minimum IP header size, this results in a QUIC maximum packet size of 1232 bytes for IPv6 and 1252 bytes for IPv4. The QUIC maximum packet size is the largest size of QUIC packet that can be sent across a network path using a single packet. Any maximum packet size larger than 1200 bytes can be discovered using Path Maximum Transmission Unit Discovery (PMTUD; see pmtud) or Datagram Packetization Layer PMTU Discovery (DPLPMTUD; see dplpmtud). Enforcement of the max_udp_payload_size transport parameter (transport-parameter-definitions) might act as an additional limit on the maximum packet size. A sender can avoid exceeding this limit, once the value is known. However, prior to learning the value of the transport parameter, endpoints risk datagrams being lost if they send packets larger than the smallest allowed maximum packet size of 1200 bytes. UDP datagrams MUST NOT be fragmented at the IP layer. In IPv4 (IPv4), the DF bit MUST be set if possible, to prevent fragmentation", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? 
Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "14. A UDP datagram can include one or more QUIC packets. The datagram size refers to the total UDP payload size of a single UDP datagram carrying QUIC packets. The datagram size includes one or more QUIC packet headers and protected payloads, but not the UDP or IP headers. The maximum datagram size is defined as the largest size of UDP payload that can be sent across a network path using a single UDP datagram. QUIC depends upon a minimum IP packet size of at least 1280 bytes. This is the IPv6 minimum size (IPv6) and is also supported by most modern IPv4 networks. Assuming the minimum IP header size of 40 bytes for IPv6 and 20 bytes for IPv4 and a UDP header size of 8 bytes, this results in a maximum datagram size of 1232 bytes for IPv6 and 1252 bytes for IPv4. The maximum datagram size MUST be at least 1200 bytes. Any maximum datagram size larger than 1200 bytes can be discovered using Path Maximum Transmission Unit Discovery (PMTUD; see pmtud) or Datagram Packetization Layer PMTU Discovery (DPLPMTUD; see dplpmtud). Enforcement of the max_udp_payload_size transport parameter (transport-parameter-definitions) might act as an additional limit on the maximum datagram size. 
A sender can avoid exceeding this limit, once the value is known. However, prior to learning the value of the transport parameter, endpoints risk datagrams being lost if they send datagrams larger than the smallest allowed maximum datagram size of 1200 bytes. UDP datagrams MUST NOT be fragmented at the IP layer. In IPv4 (IPv4), the DF bit MUST be set if possible, to prevent fragmentation"} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "14.1. A client MUST expand the payload of all UDP datagrams carrying Initial packets to at least the smallest allowed maximum packet size (1200 bytes) by adding PADDING frames to the Initial packet or by coalescing the Initial packet; see packet-coalesce. Sending a UDP datagram of this size ensures that the network path from the client to the server supports a reasonable Path Maximum Transmission Unit (PMTU). This also helps reduce the amplitude of amplification", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. 
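The size arithmetic and the Initial-datagram rule in the surrounding record pair can be summarised in a few lines. The constants below come straight from the quoted draft text (1280-byte minimum IP packet, 40- or 20-byte IP header, 8-byte UDP header, 1200-byte floor for datagrams carrying Initial packets); the helper names are illustrative assumptions.

MIN_IP_PACKET = 1280          # bytes every QUIC path is assumed to carry
UDP_HEADER = 8
IP_HEADER = {"ipv6": 40, "ipv4": 20}
SMALLEST_MAX_DATAGRAM = 1200  # floor for datagrams carrying Initial packets

def max_datagram_size(ip_version):
    """UDP payload available inside the minimum 1280-byte IP packet."""
    return MIN_IP_PACKET - IP_HEADER[ip_version] - UDP_HEADER

assert max_datagram_size("ipv6") == 1232
assert max_datagram_size("ipv4") == 1252

def server_accepts_initial(datagram_len):
    """A server discards an Initial carried in a UDP datagram smaller than 1200
    bytes, so a client pads (or coalesces) until the datagram reaches that size."""
    return datagram_len >= SMALLEST_MAX_DATAGRAM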
Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "14.1. A client MUST expand the payload of all UDP datagrams carrying Initial packets to at least the smallest allowed maximum datagram size (1200 bytes) by adding PADDING frames to the Initial packet or by coalescing the Initial packet; see packet-coalesce. Sending a UDP datagram of this size ensures that the network path from the client to the server supports a reasonable Path Maximum Transmission Unit (PMTU). This also helps reduce the amplitude of amplification"} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "that it chooses. A server MUST discard an Initial packet that is carried in a UDP datagram with a payload that is less than the smallest allowed maximum packet size of 1200 bytes. A server MAY also immediately close the connection by sending a CONNECTION_CLOSE frame with an error code of PROTOCOL_VIOLATION; see immediate-close-hs.", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? 
Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "that it chooses. A server MUST discard an Initial packet that is carried in a UDP datagram with a payload that is smaller than the smallest allowed maximum datagram size of 1200 bytes. A server MAY also immediately close the connection by sending a CONNECTION_CLOSE frame with an error code of PROTOCOL_VIOLATION; see immediate-close-hs."} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "The Path Maximum Transmission Unit (PMTU) is the maximum size of the entire IP packet including the IP header, UDP header, and UDP payload. The UDP payload includes the QUIC packet header, protected payload, and any authentication fields. The PMTU can depend on path characteristics, and can therefore change over time. The largest UDP payload an endpoint sends at any given time is referred to as the endpoint's maximum packet size. An endpoint SHOULD use DPLPMTUD (dplpmtud) or PMTUD (pmtud) to determine whether the path to a destination will support a desired maximum packet size without fragmentation. In the absence of these mechanisms, QUIC endpoints SHOULD NOT send IP packets larger than the smallest allowed maximum packet size. Both DPLPMTUD and PMTUD send IP packets that are larger than the current maximum packet size, referred to as PMTU probes. 
All QUIC packets that are not sent in a PMTU probe SHOULD be sized to fit within the maximum packet size to avoid the packet being fragmented or dropped (RFC8085). If a QUIC endpoint determines that the PMTU between any pair of local and remote IP addresses has fallen below the smallest allowed maximum packet size of 1200 bytes, it MUST immediately cease sending QUIC packets, except for those in PMTU probes or those containing CONNECTION_CLOSE frames, on the affected path. An endpoint MAY terminate the connection if an alternative path cannot be found. Each pair of local and remote addresses could have a different PMTU. QUIC implementations that implement any kind of PMTU discovery therefore SHOULD maintain a maximum packet size for each combination of local and remote IP addresses. A QUIC implementation MAY be more conservative in computing the maximum packet size to allow for unknown tunnel overheads or IP header options/extensions. 14.2.1. Path Maximum Transmission Unit Discovery (PMTUD; RFC1191, RFC8201) relies on reception of ICMP messages (e.g., IPv6 Packet Too Big messages) that indicate when a packet is dropped because it is larger than the local router MTU. DPLPMTUD can also optionally use these messages. This use of ICMP messages is potentially vulnerable to off-path attacks that successfully guess the addresses used on the path and reduce the PMTU to a bandwidth-inefficient value. An endpoint MUST ignore an ICMP message that claims the PMTU has decreased below the minimum QUIC packet size. The requirements for generating ICMP (RFC1812, RFC4443) state that the quoted packet should contain as much of the original packet as", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. 
Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "The Path Maximum Transmission Unit (PMTU) is the maximum size of the entire IP packet including the IP header, UDP header, and UDP payload. The UDP payload includes one or more QUIC packet headers and protected payloads. The PMTU can depend on path characteristics, and can therefore change over time. The largest UDP payload an endpoint sends at any given time is referred to as the endpoint's maximum datagram size. An endpoint SHOULD use DPLPMTUD (dplpmtud) or PMTUD (pmtud) to determine whether the path to a destination will support a desired maximum datagram size without fragmentation. In the absence of these mechanisms, QUIC endpoints SHOULD NOT send datagrams larger than the smallest allowed maximum datagram size. Both DPLPMTUD and PMTUD send datagrams that are larger than the current maximum datagram size, referred to as PMTU probes. All QUIC packets that are not sent in a PMTU probe SHOULD be sized to fit within the maximum datagram size to avoid the datagram being fragmented or dropped (RFC8085). If a QUIC endpoint determines that the PMTU between any pair of local and remote IP addresses has fallen below the smallest allowed maximum datagram size of 1200 bytes, it MUST immediately cease sending QUIC packets, except for those in PMTU probes or those containing CONNECTION_CLOSE frames, on the affected path. An endpoint MAY terminate the connection if an alternative path cannot be found. Each pair of local and remote addresses could have a different PMTU. QUIC implementations that implement any kind of PMTU discovery therefore SHOULD maintain a maximum datagram size for each combination of local and remote IP addresses. 
A QUIC implementation MAY be more conservative in computing the maximum datagram size to allow for unknown tunnel overheads or IP header options/extensions. 14.2.1. Path Maximum Transmission Unit Discovery (PMTUD; RFC1191, RFC8201) relies on reception of ICMP messages (e.g., IPv6 Packet Too Big messages) that indicate when an IP packet is dropped because it is larger than the local router MTU. DPLPMTUD can also optionally use these messages. This use of ICMP messages is potentially vulnerable to off-path attacks that successfully guess the addresses used on the path and reduce the PMTU to a bandwidth-inefficient value. An endpoint MUST ignore an ICMP message that claims the PMTU has decreased below QUIC's smallest allowed maximum datagram size. The requirements for generating ICMP (RFC1812, RFC4443) state that the quoted packet should contain as much of the original packet as"} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "validation. An endpoint MUST NOT increase PMTU based on ICMP messages; see Section 3, clause 6 of DPLPMTUD. Any reduction in the QUIC maximum packet size in response to ICMP messages MAY be provisional until QUIC's loss detection algorithm determines that the quoted packet has actually been lost.", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. 
Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "validation. An endpoint MUST NOT increase PMTU based on ICMP messages; see Section 3, clause 6 of DPLPMTUD. Any reduction in QUIC's maximum datagram size in response to ICMP messages MAY be provisional until QUIC's loss detection algorithm determines that the quoted packet has actually been lost."} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "PADDING frame implement \"Probing using padding data\", as defined in Section 4.1 of DPLPMTUD. Endpoints SHOULD set the initial value of BASE_PMTU (see Section 5.1 of DPLPMTUD) to be consistent with the minimum QUIC packet size. The MIN_PLPMTU is the same as the BASE_PMTU. QUIC endpoints implementing DPLPMTUD maintain a maximum packet size (DPLPMTUD MPS) for each combination of local and remote IP addresses. 14.3.1. From the perspective of DPLPMTUD, QUIC is an acknowledged packetization layer (PL). A sender can therefore enter the DPLPMTUD BASE state when the QUIC connection handshake has been completed. 14.3.2. QUIC provides an acknowledged PL, therefore a sender does not implement the DPLPMTUD CONFIRMATION_TIMER while in the SEARCH_COMPLETE state; see Section 5.2 of DPLPMTUD. 14.3.3.", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. 
Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "PADDING frame implement \"Probing using padding data\", as defined in Section 4.1 of DPLPMTUD. Endpoints SHOULD set the initial value of BASE_PLPMTU (Section 5.1 of DPLPMTUD) to be consistent with QUIC's smallest allowed maximum datagram size. The MIN_PLPMTU is the same as the BASE_PLPMTU. QUIC endpoints implementing DPLPMTUD maintain a DPLPMTUD Maximum Packet Size (MPS, Section 4.4 of DPLPMTUD) for each combination of local and remote IP addresses. This corresponds to the maximum datagram size. 14.3.1. From the perspective of DPLPMTUD, QUIC is an acknowledged Packetization Layer (PL). A QUIC sender can therefore enter the DPLPMTUD BASE state (Section 5.2 of DPLPMTUD) when the QUIC connection handshake has been completed. 14.3.2. 
QUIC is an acknowledged PL, therefore a QUIC sender does not implement a DPLPMTUD CONFIRMATION_TIMER while in the SEARCH_COMPLETE state; see Section 5.2 of DPLPMTUD. 14.3.3."} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "PMTU probes are ack-eliciting packets. Endpoints could limit the content of PMTU probes to PING and PADDING frames as packets that are larger than the current maximum packet size are more likely to be dropped by the network. Loss of a QUIC packet that is carried in a PMTU probe is therefore not a reliable indication of congestion and SHOULD NOT trigger a congestion control reaction; see Section 3, Bullet 7 of DPLPMTUD. However, PMTU probes consume congestion window, which could delay subsequent transmission by an application. 14.4.1.", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! 
I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "PMTU probes are ack-eliciting packets. Endpoints could limit the content of PMTU probes to PING and PADDING frames, since packets that are larger than the current maximum datagram size are more likely to be dropped by the network. Loss of a QUIC packet that is carried in a PMTU probe is therefore not a reliable indication of congestion and SHOULD NOT trigger a congestion control reaction; see Section 3, Bullet 7 of DPLPMTUD. However, PMTU probes consume congestion window, which could delay subsequent transmission by an application. 14.4.1."} {"id": "q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af", "old_text": "This limit does act as an additional constraint on datagram size in the same way as the path MTU, but it is a property of the endpoint and not the path; see packet-size. It is expected that this is the space an endpoint dedicates to holding incoming packets.", "comments": "Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it is described the calculation, only the definition of the field. Please reformulate to imply only field definition not calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rttsample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? 
Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume intended to generate additional datagrams to prevent single loss to impact. However, due to coalescing this is unclear in this context. Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good.", "new_text": "This limit does act as an additional constraint on datagram size in the same way as the path MTU, but it is a property of the endpoint and not the path; see datagram-size. It is expected that this is the space an endpoint dedicates to holding incoming packets."} {"id": "q-en-quicwg-base-drafts-2ac9508f8cbf28edc7ef3ba3854add1339158101f93b34390ef7283138fb400c", "old_text": "tuple, as shown in transport-parameter-encoding-fig: The Transport Parameter Length field contains the length of the Transport Parameter Value field. QUIC encodes transport parameters into a sequence of bytes, which is then included in the cryptographic handshake.", "comments": "Does this win the record for least amount of bytes touched in a PR?\nLars, I can't find it easily, but I believe that I pushed a 1 bit change at one point in this process.\nParameter Value field. Presumably the length in bytes.", "new_text": "tuple, as shown in transport-parameter-encoding-fig: The Transport Parameter Length field contains the length of the Transport Parameter Value field in bytes. 
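As a small illustration of the preceding sentence (the Length field counts bytes of the Value field), here is a hedged sketch of encoding one transport parameter. The helper names are invented; only the variable-length integer rules (2-bit length prefix) and the use of an identifier, a length in bytes, and a value are taken from the specification, and initial_max_data (0x04) is used as the example parameter.

```python
# Sketch: a transport parameter is encoded as identifier || length || value,
# where identifier and length are QUIC variable-length integers and the
# length is the size of the value in bytes.

def encode_varint(value: int) -> bytes:
    """QUIC variable-length integer with a 2-bit length prefix."""
    if value < 2 ** 6:
        return value.to_bytes(1, "big")
    if value < 2 ** 14:
        return (value | (0x1 << 14)).to_bytes(2, "big")
    if value < 2 ** 30:
        return (value | (0x2 << 30)).to_bytes(4, "big")
    if value < 2 ** 62:
        return (value | (0x3 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a varint")


def encode_transport_parameter(param_id: int, value: bytes) -> bytes:
    # The Transport Parameter Length field is len(value) in bytes.
    return encode_varint(param_id) + encode_varint(len(value)) + value


# Example: initial_max_data (0x04) set to 1,048,576, itself varint-encoded.
example = encode_transport_parameter(0x04, encode_varint(1_048_576))
```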
QUIC encodes transport parameters into a sequence of bytes, which is then included in the cryptographic handshake."} {"id": "q-en-quicwg-base-drafts-c8aed42aa1cee5b120e04eea2694e643074a44651d0cecba348e66fa9940cb23", "old_text": "entity that receives an Initial packet from a client can recover the keys that will allow them to both read the contents of the packet and generate Initial packets that will be successfully authenticated at either endpoint. All other packets are protected with keys derived from the cryptographic handshake. The cryptographic handshake ensures that", "comments": "The description neglected to mention this. Obviously, the confidentiality protection (specifically, the lack thereof) is the part that needs to be highlighted, but we shouldn't neglect the other purpose of using an AEAD here.", "new_text": "entity that receives an Initial packet from a client can recover the keys that will allow them to both read the contents of the packet and generate Initial packets that will be successfully authenticated at either endpoint. The AEAD also protects Initial packets against accidental modification. All other packets are protected with keys derived from the cryptographic handshake. The cryptographic handshake ensures that"} {"id": "q-en-quicwg-base-drafts-e234c15427075c7e5e2ea666be43ceb47d59b62e26f59c825f5a17734751eafa", "old_text": "the inclusion of uppercase field names, or the inclusion of invalid characters in field names or values A request or response that includes a payload body can include a Content-Length header field. A request or response is also malformed", "comments": "if we want to do that. There was a hole in the error code space, so I filled it with this. I haven't pulled Push out to its own error; it doesn't seem like it belongs with this one.\nWe might want to add a mention of this in A.4 where we talk about PROTOCOL_ERROR\n(I couldn't find anything on this, but I'm prepared to be corrected. Also, it's late, so if this is closed, I'm totally OK with that.) We're just in the process of doing full validation of pseudo-header fields (I know, seems late, what are you gonna do about that). And we realized that there is no error code for indicating this sort of error. That is, if pseudo-fields are out of order, or a request contains , or there are uppercase characters. contains lots of MUSTs, but it seems to rely entirely on the definition of \"malformed\" which leads to H3GENERALPROTOCOL_ERROR. This is fine, but not especially helpful in isolating problems. Could we define an error code for this specific case?\nH2 just says malformed is treated as PROTOCOL_ERROR. So H3 is providing parity. There might be an argument for more fidelity but I'm not convinced it would help much. It could require more contortions in the text - do you have different error for different types of malformed?\nInternally, we use a different error code. Of course, all internal error codes map to the same code for sending in RESETSTREAM/CONNECTIONCLOSE.\nI'm inclined not to do much, if anything, here. They all trace back to \"malformed,\" as you note, which leads to GENERALPROTOCOLERROR. The instances in which we use GPE are: Malformed HTTP messages Push whose request headers change across multiple promises As a substitute for more specific codes I wouldn't be opposed to splitting the first two off into a new / two new codes, because that would be consistent with our principle of indicating the general area where the failure occurred. 
I would push back on having separate error codes for specific ways the message could be malformed, because that's pretty expansive rabbit hole.\nOK, I'm satisfied with that response. I think that we'll just have to signal extra information in the text field. Happy to close this.\nNAME do you think an error code for \"malformed message\" would be useful, if we don't go into the various types of malformed?\nThat might be useful, yes.\nThis is a design change, albeit a marginal one.\nNAME let's merge the associated PR but keep this issue open and marked as \"call-issued\"\nKeeping open, as requested.\nClosing this now that the IESG review of draft 33 has concluded.\nIn fairness, I question whether the IESG review of the HTTP drafts has concluded. But as this is not an IESG comment, I'm content to close it. People have had time to object.\nYes I agree Mike. This was due to me being over eager with the close button. The issue may not have strictly been raised in the period that the IESG were directed to review the doc, and they have yet to ballot on draft 33. But either way there was no push back from the WG on this change since draft 33 was published.", "new_text": "the inclusion of uppercase field names, or the inclusion of invalid characters in field names or values. A request or response that includes a payload body can include a Content-Length header field. A request or response is also malformed"} {"id": "q-en-quicwg-base-drafts-e234c15427075c7e5e2ea666be43ceb47d59b62e26f59c825f5a17734751eafa", "old_text": "intermediary not acting as a tunnel) MUST NOT forward a malformed request or response. Malformed requests or responses that are detected MUST be treated as a stream error (errors) of type H3_GENERAL_PROTOCOL_ERROR. For malformed requests, a server MAY send an HTTP response indicating the error prior to closing or resetting the stream. Clients MUST NOT", "comments": "if we want to do that. There was a hole in the error code space, so I filled it with this. I haven't pulled Push out to its own error; it doesn't seem like it belongs with this one.\nWe might want to add a mention of this in A.4 where we talk about PROTOCOL_ERROR\n(I couldn't find anything on this, but I'm prepared to be corrected. Also, it's late, so if this is closed, I'm totally OK with that.) We're just in the process of doing full validation of pseudo-header fields (I know, seems late, what are you gonna do about that). And we realized that there is no error code for indicating this sort of error. That is, if pseudo-fields are out of order, or a request contains , or there are uppercase characters. contains lots of MUSTs, but it seems to rely entirely on the definition of \"malformed\" which leads to H3GENERALPROTOCOL_ERROR. This is fine, but not especially helpful in isolating problems. Could we define an error code for this specific case?\nH2 just says malformed is treated as PROTOCOL_ERROR. So H3 is providing parity. There might be an argument for more fidelity but I'm not convinced it would help much. It could require more contortions in the text - do you have different error for different types of malformed?\nInternally, we use a different error code. Of course, all internal error codes map to the same code for sending in RESETSTREAM/CONNECTIONCLOSE.\nI'm inclined not to do much, if anything, here. They all trace back to \"malformed,\" as you note, which leads to GENERALPROTOCOLERROR. 
The instances in which we use GPE are: Malformed HTTP messages Push whose request headers change across multiple promises As a substitute for more specific codes I wouldn't be opposed to splitting the first two off into a new / two new codes, because that would be consistent with our principle of indicating the general area where the failure occurred. I would push back on having separate error codes for specific ways the message could be malformed, because that's pretty expansive rabbit hole.\nOK, I'm satisfied with that response. I think that we'll just have to signal extra information in the text field. Happy to close this.\nNAME do you think an error code for \"malformed message\" would be useful, if we don't go into the various types of malformed?\nThat might be useful, yes.\nThis is a design change, albeit a marginal one.\nNAME let's merge the associated PR but keep this issue open and marked as \"call-issued\"\nKeeping open, as requested.\nClosing this now that the IESG review of draft 33 has concluded.\nIn fairness, I question whether the IESG review of the HTTP drafts has concluded. But as this is not an IESG comment, I'm content to close it. People have had time to object.\nYes I agree Mike. This was due to me being over eager with the close button. The issue may not have strictly been raised in the period that the IESG were directed to review the doc, and they have yet to ballot on draft 33. But either way there was no push back from the WG on this change since draft 33 was published.", "new_text": "intermediary not acting as a tunnel) MUST NOT forward a malformed request or response. Malformed requests or responses that are detected MUST be treated as a stream error (errors) of type H3_MESSAGE_ERROR. For malformed requests, a server MAY send an HTTP response indicating the error prior to closing or resetting the stream. Clients MUST NOT"} {"id": "q-en-quicwg-base-drafts-e234c15427075c7e5e2ea666be43ceb47d59b62e26f59c825f5a17734751eafa", "old_text": "The client's stream terminated without containing a fully-formed request. The TCP connection established in response to a CONNECT request was reset or abnormally closed.", "comments": "if we want to do that. There was a hole in the error code space, so I filled it with this. I haven't pulled Push out to its own error; it doesn't seem like it belongs with this one.\nWe might want to add a mention of this in A.4 where we talk about PROTOCOL_ERROR\n(I couldn't find anything on this, but I'm prepared to be corrected. Also, it's late, so if this is closed, I'm totally OK with that.) We're just in the process of doing full validation of pseudo-header fields (I know, seems late, what are you gonna do about that). And we realized that there is no error code for indicating this sort of error. That is, if pseudo-fields are out of order, or a request contains , or there are uppercase characters. contains lots of MUSTs, but it seems to rely entirely on the definition of \"malformed\" which leads to H3GENERALPROTOCOL_ERROR. This is fine, but not especially helpful in isolating problems. Could we define an error code for this specific case?\nH2 just says malformed is treated as PROTOCOL_ERROR. So H3 is providing parity. There might be an argument for more fidelity but I'm not convinced it would help much. It could require more contortions in the text - do you have different error for different types of malformed?\nInternally, we use a different error code. 
Of course, all internal error codes map to the same code for sending in RESETSTREAM/CONNECTIONCLOSE.\nI'm inclined not to do much, if anything, here. They all trace back to \"malformed,\" as you note, which leads to GENERALPROTOCOLERROR. The instances in which we use GPE are: Malformed HTTP messages Push whose request headers change across multiple promises As a substitute for more specific codes I wouldn't be opposed to splitting the first two off into a new / two new codes, because that would be consistent with our principle of indicating the general area where the failure occurred. I would push back on having separate error codes for specific ways the message could be malformed, because that's pretty expansive rabbit hole.\nOK, I'm satisfied with that response. I think that we'll just have to signal extra information in the text field. Happy to close this.\nNAME do you think an error code for \"malformed message\" would be useful, if we don't go into the various types of malformed?\nThat might be useful, yes.\nThis is a design change, albeit a marginal one.\nNAME let's merge the associated PR but keep this issue open and marked as \"call-issued\"\nKeeping open, as requested.\nClosing this now that the IESG review of draft 33 has concluded.\nIn fairness, I question whether the IESG review of the HTTP drafts has concluded. But as this is not an IESG comment, I'm content to close it. People have had time to object.\nYes I agree Mike. This was due to me being over eager with the close button. The issue may not have strictly been raised in the period that the IESG were directed to review the doc, and they have yet to ballot on draft 33. But either way there was no push back from the WG on this change since draft 33 was published.", "new_text": "The client's stream terminated without containing a fully-formed request. An HTTP message was malformed and cannot be processed. The TCP connection established in response to a CONNECT request was reset or abnormally closed."} {"id": "q-en-quicwg-base-drafts-11df47ad2c21a9ba3fb74794acf5361660155c081cd75e1d50af7410992c296f", "old_text": "The payload of this packet contains CRYPTO frames and could contain PING, PADDING, or ACK frames. Handshake packets MAY contain CONNECTION_CLOSE frames of type 0x1c. Endpoints MUST treat receipt of Handshake packets with other frames as a connection error. Like Initial packets (see discard-initial), data in CRYPTO frames for Handshake packets is discarded - and no longer retransmitted - when", "comments": "Because it always is.\nI'm checking \"MUST treat\" in the transport draft. Sec 17.2.4 says: This is only the place where the type of a connection error is NOT specified. This part should be modified as \"a connection error of XXX\".\nThis was deliberate, but you can now send CONNECTION_CLOSE in this case, so it makes sense to use a code here.", "new_text": "The payload of this packet contains CRYPTO frames and could contain PING, PADDING, or ACK frames. Handshake packets MAY contain CONNECTION_CLOSE frames of type 0x1c. Endpoints MUST treat receipt of Handshake packets with other frames as a connection error of type PROTOCOL_VIOLATION. Like Initial packets (see discard-initial), data in CRYPTO frames for Handshake packets is discarded - and no longer retransmitted - when"} {"id": "q-en-quicwg-base-drafts-0f3a71de3488ebccc67b66178cd5535935f57b2273471dc5b8e6b40a3a162f02", "old_text": "control across the entire connection, QUIC has the capability to improve the performance of HTTP compared to a TCP mapping. 
QUIC also incorporates TLS 1.3 (TLS13) at the transport layer, offering comparable security to running TLS over TCP, with the improved connection setup latency of TCP Fast Open (TFO). This document defines a mapping of HTTP semantics over the QUIC transport protocol, drawing heavily on the design of HTTP/2. While", "comments": "Mostly because QUIC doesn't prevent all forms of request forgery. Thankfully, this is dealt with in detail in the transport draft.\nNAME this looks good. Can you also change one sentence in Sec 1.2, from: to .\nHilarie Orman wrote: all this is assumed to be provided by QUIC. Section 10.2 says that all QUIC packets are encrypted; I'm not sure if that's true, or if QUIC has an option for \"non-modifiable\" without encryption.\nActually, we now know that this is not true, so this really needs to be fixed: -- from . The real solution here is to cite of the transport draft.\nThe version of QUIC required by this draft is always encrypted; that seems sufficient for the original issue. NAME issue is distinct, and worth fixing. I'll use this issue to track it.\n... and we could add some text stating the integrity point in Section 1.2. NAME suggests: \"comparable integrity and confidentiality to TLS over TCP\"\nCould we make some progress on this one?\nProgress made; the PR now needs review.", "new_text": "control across the entire connection, QUIC has the capability to improve the performance of HTTP compared to a TCP mapping. QUIC also incorporates TLS 1.3 (TLS13) at the transport layer, offering comparable confidentiality and integrity to running TLS over TCP, with the improved connection setup latency of TCP Fast Open (TFO). This document defines a mapping of HTTP semantics over the QUIC transport protocol, drawing heavily on the design of HTTP/2. While"} {"id": "q-en-quicwg-base-drafts-0f3a71de3488ebccc67b66178cd5535935f57b2273471dc5b8e6b40a3a162f02", "old_text": "The use of ALPN in the TLS and QUIC handshakes establishes the target application protocol before application-layer bytes are processed. Because all QUIC packets are encrypted, it is difficult for an attacker to control the plaintext bytes of an HTTP/3 connection, which could be used in a cross-protocol attack on a plaintext protocol. 10.3.", "comments": "Mostly because QUIC doesn't prevent all forms of request forgery. Thankfully, this is dealt with in detail in the transport draft.\nNAME this looks good. Can you also change one sentence in Sec 1.2, from: to .\nHilarie Orman wrote: all this is assumed to be provided by QUIC. Section 10.2 says that all QUIC packets are encrypted; I'm not sure if that's true, or if QUIC has an option for \"non-modifiable\" without encryption.\nActually, we now know that this is not true, so this really needs to be fixed: -- from . The real solution here is to cite of the transport draft.\nThe version of QUIC required by this draft is always encrypted; that seems sufficient for the original issue. NAME issue is distinct, and worth fixing. I'll use this issue to track it.\n... and we could add some text stating the integrity point in Section 1.2. NAME suggests: \"comparable integrity and confidentiality to TLS over TCP\"\nCould we make some progress on this one?\nProgress made; the PR now needs review.", "new_text": "The use of ALPN in the TLS and QUIC handshakes establishes the target application protocol before application-layer bytes are processed. This ensures that endpoints have strong assurances that peers are using the same protocol. 
This does not guarantee protection from all cross-protocol attacks. Section 21.5 of QUIC-TRANSPORT describes some ways in which the plaintext of QUIC packets can be used to perform request forgery against endpoints that don't use authenticated transports. 10.3."} {"id": "q-en-quicwg-base-drafts-879d9b7ee7c9e7ca0b64974745784b759ed9ea0be8f49604c4fbc29d0a2a673f", "old_text": "A PATH_RESPONSE frame received on any network path validates the path on which the PATH_CHALLENGE was sent. If the PATH_CHALLENGE frame that resulted in successful path validation was sent in a datagram that was not expanded to at least 1200 bytes, the endpoint can regard the address as valid. The endpoint is then able to send more than three times the amount of data that has been received. However, the endpoint MUST initiate another path validation with an expanded datagram to verify that the path supports required MTU. Receipt of an acknowledgment for a packet containing a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be", "comments": "This was definitely odd. Let's take another tilt at making it comprehensible.\nErik Kline said:\nYep, when it's called out, it's distinctly odd. Hopefully it's an easy fix, but writing is hardd.", "new_text": "A PATH_RESPONSE frame received on any network path validates the path on which the PATH_CHALLENGE was sent. If an endpoint sends a PATH_CHALLENGE frame in a datagram that is not expanded to at least 1200 bytes, and if the response to it validates the peer address, the path is validated but not the path MTU. As a result, the endpoint can now send more than three times the amount of data that has been received. However, the endpoint MUST initiate another path validation with an expanded datagram to verify that the path supports the required MTU. Receipt of an acknowledgment for a packet containing a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be"} {"id": "q-en-quicwg-base-drafts-ccec8d29d156ab7b60ccaf21ccc04fcd86d8f73bc345da0f869abdb3dd856c19", "old_text": "authenticated negotiation of an application protocol (TLS uses ALPN ALPN for this purpose) Endpoints can use packets sent during the handshake to test for Explicit Congestion Notification (ECN) support; see ecn. An endpoint verifies support for ECN by observing whether the ACK frames acknowledging the first packets it sends carry ECN counts, as described in ecn-validation. The CRYPTO frame can be sent in different packet number spaces (packet-numbers). The offsets used by CRYPTO frames to ensure ordered delivery of cryptographic handshake data start from zero in", "comments": "This is my best stab at it.\nErik Kline said:\nI thought that section reference was wrong, but Section 7 does indeed mention ECN twice. Once is almost too much for this section. I checked the history and the commits that affected this text likely are the result of a merge, and one that either git resolved incorrectly for us, or one that we messed up. Either way, this is worth fixing.\none nit", "new_text": "authenticated negotiation of an application protocol (TLS uses ALPN ALPN for this purpose) The CRYPTO frame can be sent in different packet number spaces (packet-numbers). The offsets used by CRYPTO frames to ensure ordered delivery of cryptographic handshake data start from zero in"} {"id": "q-en-quicwg-base-drafts-ccec8d29d156ab7b60ccaf21ccc04fcd86d8f73bc345da0f869abdb3dd856c19", "old_text": "shown with a '*'. Once completed, endpoints are able to exchange application data. 
An endpoint validates support for Explicit Congestion Notification (ECN) by observing whether the ACK frames acknowledging the first packets it sends carry ECN counts, as described in ecn-validation. Endpoints MUST explicitly negotiate an application protocol. This avoids situations where there is a disagreement about the protocol", "comments": "This is my best stab at it.\nErik Kline said:\nI thought that section reference was wrong, but Section 7 does indeed mention ECN twice. Once is almost too much for this section. I checked the history and the commits that affected this text likely are the result of a merge, and one that either git resolved incorrectly for us, or one that we messed up. Either way, this is worth fixing.\none nit", "new_text": "shown with a '*'. Once completed, endpoints are able to exchange application data. Endpoints can use packets sent during the handshake to test for Explicit Congestion Notification (ECN) support; see ecn. An endpoint validates support for ECN by observing whether the ACK frames acknowledging the first packets it sends carry ECN counts, as described in ecn-validation. Endpoints MUST explicitly negotiate an application protocol. This avoids situations where there is a disagreement about the protocol"} {"id": "q-en-quicwg-base-drafts-6042864a4839c6073412ce88d2fa4c102fcb81e75e00ab2f42958802ef0eed2a", "old_text": "A token allows a server to correlate activity between the connection where the token was issued and any connection where it is used. Clients that want to break continuity of identity with a server MAY discard tokens provided using the NEW_TOKEN frame. In comparison, a token obtained in a Retry packet MUST be used immediately during the connection attempt and cannot be used in subsequent connection", "comments": "NAME said:\nI think \"can\" works equally well, given the precondition. I'll make that change.", "new_text": "A token allows a server to correlate activity between the connection where the token was issued and any connection where it is used. Clients that want to break continuity of identity with a server can discard tokens provided using the NEW_TOKEN frame. In comparison, a token obtained in a Retry packet MUST be used immediately during the connection attempt and cannot be used in subsequent connection"} {"id": "q-en-quicwg-base-drafts-6f0e7da6bde8e04727cc4dcd984e72bfe28d64d5122e842cdbec7852054a3c3c", "old_text": "The next bit (0x40) of byte 0 is set to 1. Packets containing a zero value for this bit are not valid packets in this version and MUST be discarded. The next two bits (those with a mask of 0x30) of byte 0 contain a packet type. Packet types are listed in long-packet-types.", "comments": "NAME said:\nThis particular choice has not a lot to do with QUIC and would therefore bring in a lot of non-sequitur references, but a single sentence won't hurt. It has to be repeated, which is gross, but I can live with that.", "new_text": "The next bit (0x40) of byte 0 is set to 1. Packets containing a zero value for this bit are not valid packets in this version and MUST be discarded. A value of 1 for this bit allows QUIC to coexist with other protocols; see RFC7983. The next two bits (those with a mask of 0x30) of byte 0 contain a packet type. Packet types are listed in long-packet-types."} {"id": "q-en-quicwg-base-drafts-6f0e7da6bde8e04727cc4dcd984e72bfe28d64d5122e842cdbec7852054a3c3c", "old_text": "The next bit (0x40) of byte 0 is set to 1. 
Packets containing a zero value for this bit are not valid packets in this version and MUST be discarded. The third most significant bit (0x20) of byte 0 is the latency spin bit, set as described in spin-bit.", "comments": "NAME said:\nThis particular choice has not a lot to do with QUIC and would therefore bring in a lot of non-sequitur references, but a single sentence won't hurt. It has to be repeated, which is gross, but I can live with that.", "new_text": "The next bit (0x40) of byte 0 is set to 1. Packets containing a zero value for this bit are not valid packets in this version and MUST be discarded. A value of 1 for this bit allows QUIC to coexist with other protocols; see RFC7983. The third most significant bit (0x20) of byte 0 is the latency spin bit, set as described in spin-bit."} {"id": "q-en-quicwg-base-drafts-47f5105f5df8e9d356e0868f8034c0c57f2a8cb95deae3e3d006ca449268a8d3", "old_text": "newer encryption level are available. An endpoint cannot discard keys for a given encryption level unless it has both received and acknowledged all CRYPTO frames for that encryption level and when all CRYPTO frames for that encryption level have been acknowledged by its peer. However, this does not guarantee that no further packets will need to be received or sent at that encryption level because a peer might not have received all the acknowledgments necessary to reach the same state. Though an endpoint might retain older keys, new data MUST be sent at the highest currently-available encryption level. Only ACK frames", "comments": "The three points are wordy enough it might be worth making this a bulleted list, however. Thoughts?\nNAME said:\nWith NAME fixes, this is good.", "new_text": "newer encryption level are available. An endpoint cannot discard keys for a given encryption level unless it has received all the cryptographic handshake messages from its peer at that encryption level and its peer has done the same. Different methods for determining this are provided for Initial keys (discard-initial) and Handshake keys (discard-handshake). These methods do not prevent packets from being received or sent at that encryption level because a peer might not have received all the acknowledgments necessary. Though an endpoint might retain older keys, new data MUST be sent at the highest currently-available encryption level. Only ACK frames"} {"id": "q-en-quicwg-base-drafts-950a2377f473ac8551fbeb0825ed7352d3cc562f7b46d4264a57d577e203fe04", "old_text": "21.13. Deployments should limit the ability of an attacker to target a new connection to a particular server instance. This means that client- controlled fields, such as the initial Destination Connection ID used on Initial and 0-RTT packets SHOULD NOT be used by themselves to make routing decisions. Ideally, routing decisions are made independently of client-selected values; a Source Connection ID can be selected to route later packets to the same server. 21.14.", "comments": "This was just old and needed a little bit of a refresh. Removing the recommendation, which was counter to established views and unnecessary. I will let others determine whether this is editorial/design.\nNAME said:\nI think that the point of this section is to highlight the possibility that routing based on client-selected values exposes servers to load that is balanced under client - and therefore attacker - control. I don't think that it needs to include such a strong recommendation. If people deploying load balancers are not concerned about this, then that is good. 
That makes the recommendation less good. As this is very old text, it needs a bit of a cleanup anyway. I'm going to suggest something along the lines of my statement above in a pull request and we can polish the wording there. I don't think that we should remove the Retry SCID language. That remains an option for servers, even if they don't want to exercise the option (whether load balancers look at Retry tokens is very much down to how the various functions are distributed). What could happen is that the Retry SCID is chosen so that the next initial can be routed as though it were chosen by a client, but in such a way as it results in the connection being sent to a lightly-loaded server instance. That is, if the load balancer routes inchoate Initials based on connection ID, knowledge of that routing algorithm is used to direct traffic. That said, I understand that what is more likely here is that Initials with inauthentic Destination Connection ID fields will be routed using other information: round-robin, random, five-tuple, or whatever.\nI think you're proposing that the text I quoted be deleted. That is satisfactory, and what I would recommend as well.\nReturning to an open state as the chairs directed in their plan.\nClosing this now that the IESG have approved the document(s).\nTo be safe, let's mark this design.", "new_text": "21.13. Deployments should limit the ability of an attacker to target a new connection to a particular server instance. Ideally, routing decisions are made independently of client-selected values, including addresses. Once an instance is selected, a connection ID can be selected so that later packets are routed to the same instance. 21.14."} {"id": "q-en-quicwg-base-drafts-4deb661a1607aedfbe35ea3ad3f5252484928306a6caed434ec9bb386fb6b2e3", "old_text": "The endpoint uses RTT samples and peer-reported host delays (see Section 13.2 of QUIC-TRANSPORT) to generate a statistical description of the network path's RTT. An endpoint computes the following three values for each path: the minimum value observed over the lifetime of the path (min_rtt), an exponentially-weighted moving average (smoothed_rtt), and the mean deviation (referred to as \"variation\" in the rest of this document) in the observed RTT samples (rttvar). 5.1.", "comments": "This should be fixed. I've sent out .\nThe natural question that follows will be \"which period of time\", but I think that is already.Thanks, LG", "new_text": "The endpoint uses RTT samples and peer-reported host delays (see Section 13.2 of QUIC-TRANSPORT) to generate a statistical description of the network path's RTT. An endpoint computes the following three values for each path: the minimum value over a period of time (min_rtt), an exponentially-weighted moving average (smoothed_rtt), and the mean deviation (referred to as \"variation\" in the rest of this document) in the observed RTT samples (rttvar). 5.1."} {"id": "q-en-quicwg-base-drafts-4deb661a1607aedfbe35ea3ad3f5252484928306a6caed434ec9bb386fb6b2e3", "old_text": "5.2. min_rtt is the sender's estimate of the minimum RTT observed for a given network path. In this document, min_rtt is used by loss detection to reject implausibly small rtt samples. min_rtt MUST be set to the latest_rtt on the first RTT sample. min_rtt MUST be set to the lesser of min_rtt and latest_rtt (latest-", "comments": "This should be fixed. I've sent out .\nThe natural question that follows will be \"which period of time\", but I think that is already.Thanks, LG", "new_text": "5.2. 
min_rtt is the sender's estimate of the minimum RTT observed for a given network path over a period of time. In this document, min_rtt is used by loss detection to reject implausibly small rtt samples. min_rtt MUST be set to the latest_rtt on the first RTT sample. min_rtt MUST be set to the lesser of min_rtt and latest_rtt (latest-"} {"id": "q-en-quicwg-base-drafts-bb216150e7b688c2675247295d5cd2639066db8625a3779c082cc63606dd804f", "old_text": "in compliance with RFC6437, unless the local API does not allow setting IPv6 flow labels. The IPv6 flow label SHOULD be a pseudo-random function of the source and destination addresses, source and destination UDP ports, and the Destination Connection ID field. The flow label generation MUST be designed to minimize the chances of linkability with a previously used flow label, as a stable flow label would enable correlating activity on multiple paths; see migration-linkability. A possible implementation is to compute the flow label as a cryptographic hash function of the source and destination addresses, source and destination UDP ports, Destination Connection ID field, and a local secret. 10.", "comments": "This was a little tricky as the current text already had what we wanted. But it wasn't quite clear on rationale, so I added some of that.\n\u00c9ric Vyncke said:\nI believe that the view was that a PRNG was inferior to a PRF because it required state, but yes, it should be fine. Should we have avoided repeating advice from 6437? (I forget who contributed this text, but I think that it was an RFC 6437 author.)\nText originates with Christian NAME in . A few people have touched it since for editorial purposes, but that doesn't change the point of what you're asking about.", "new_text": "in compliance with RFC6437, unless the local API does not allow setting IPv6 flow labels. The flow label generation MUST be designed to minimize the chances of linkability with a previously used flow label, as a stable flow label would enable correlating activity on multiple paths; see migration- linkability. RFC6437 suggests deriving values using a pseudorandom function to generate flow labels. Including the Destination Connection ID field in addition to source and destination addresses when generating flow labels ensures that changes are synchronized with changes in other observable identifiers. A cryptographic hash function that combines these inputs with a local secret is one way this might be implemented. 10."} {"id": "q-en-quicwg-base-drafts-3b2f391ea494fdcfed3c714b00a845bf3d8f6e8dade9b7e12e210eb81c9e2d2d", "old_text": "QUIC begins every connection in slow start with the congestion window set to an initial value. Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (max_datagram_size), limited to the larger of 14720 bytes or twice the maximum datagram size. This follows the analysis and recommendations in RFC6928, increasing the byte limit to account for the smaller 8-byte overhead of UDP compared to the 20-byte overhead for TCP. 
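The sentence above about the initial congestion window reduces to a simple min/max computation; the following sketch spells it out. It is illustrative only (kInitialWindow is the constant name used in the recovery draft's pseudocode appendix, but this helper function is not).

```python
# Illustrative sketch of the recommended initial congestion window:
# 10 times the maximum datagram size, limited to the larger of
# 14720 bytes or twice the maximum datagram size.

def initial_congestion_window(max_datagram_size: int) -> int:
    return min(10 * max_datagram_size,
               max(14720, 2 * max_datagram_size))


# Examples: a 1200-byte maximum datagram size gives 12000 bytes, while a
# 1500-byte size is capped at 14720.  If the maximum datagram size changes
# during the connection, the window is recalculated with the new size.
```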
If the maximum datagram size changes during the connection, the initial congestion window SHOULD be recalculated with the new size.", "comments": "\u00c9ric Vyncke said:\nI think the ambiguity is whether the \"limited to\" is saying: the maximum datagram size is no more than twice the maximum datagram size (which is nonsensical) the initial congestion window is min(10maxdatagramsize, max(14720 bytes, 2maxdatagramsize)) Given that one of the possible interpretations is logically inconsistent, the other one must be correct. I'm not sure how to make this sentence clearer except through pseudo-code. Looking at the pseudo-code, there isn't any -- kInitialWindow is declared to be a constant, value defined by this text, but the definition of this constant is given in terms of a variable (maxdatagramsize). Maybe kInitialWindow should become a variable itself, and then have pseudo-code with the max/min relationship added to B.3?\nThis is clearer; thanks.", "new_text": "QUIC begins every connection in slow start with the congestion window set to an initial value. Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (max_datagram_size), while limiting the window to the larger of 14720 bytes or twice the maximum datagram size. This follows the analysis and recommendations in RFC6928, increasing the byte limit to account for the smaller 8-byte overhead of UDP compared to the 20-byte overhead for TCP. If the maximum datagram size changes during the connection, the initial congestion window SHOULD be recalculated with the new size."} {"id": "q-en-quicwg-base-drafts-804e5899d677f62fbff964740e5b08a712678a6431f09751ec52c617b85a2060", "old_text": "If the recipient permits the migration, it MUST send subsequent packets to the new peer address and MUST initiate path validation (migrate-validate) to verify the peer's ownership of the address if validation is not already underway. An endpoint only changes the address to which it sends packets in response to the highest-numbered non-probing packet. This ensures", "comments": "Section 9.3 of Transport(URL) says: \"If the recipient permits the migration, it MUST send subsequent packets to the new peer address and MUST initiate path validation (Section 8.2) to verify the peer's ownership of the address if validation is not already underway.\" In section 9.5(URL) \"Similarly, an endpoint MUST NOT reuse a connection ID when sending to more than one destination address. Due to network changes outside the control of its peer, an endpoint might receive packets from a new source address with the same destination connection ID, in which case it MAY continue to use the current connection ID with the new remote address while still sending from the same local address.\" So as long as it's an unintentional change, everything is clear. But if the destination address changes and the incoming CID changes, but the sender(ie: server) doesn't have any more CIDs, it's not clear what should be done, as the two MUSTs seem to contradict one another. Or maybe they imply the server can't send anything?\nLater in 9.5, there's also this text that is relevant, but I don't think it clarifies how to resolve the MUSTs. \" An endpoint that exhausts available connection IDs cannot probe new paths or initiate migration, nor can it respond to probes or attempts by its peer to migrate. To ensure that migration is possible and packets sent on different paths cannot be correlated, endpoints SHOULD provide new connection IDs before peers migrate; see Section 5.1.1. 
If a peer might have exhausted available connection IDs, a migrating endpoint could include a NEWCONNECTIONID frame in all packets sent on a new network path.\"\nSo the second quotation has that clarification (with emphasis): Isn't this all coherent?\nYes, that clarifies the issue when the CID does not change. But that does not clarify the case when the CID does change and the responder has no more CIDs.\nIsn't that addressed by this: ?\nSo the answer is: 1) It can't send anything 2) It can send on the old path, but not the new one\nIt can probe the old path - but that is all it can do.\nWhy would it probe the old path that it's already validated?\nI didn't say that it was useful, just possible :)\nI don't see a pressing reason to do anything about this. That said, it's an unfortunate edge case, perhaps we could call it out explicitly, mentioning that this shouldn't arise if the other parts of the migration machinery are followed?\nTo be clear, I'm not looking for a fix as much as clarity on what is supposed to occur. I'll write a PR that I think will clarify it some and call out the edge case.\nI'd argue this is really a MUST unless case, but people tend not to like those, so I tried to stick to the facts in my PR.\nWFM", "new_text": "If the recipient permits the migration, it MUST send subsequent packets to the new peer address and MUST initiate path validation (migrate-validate) to verify the peer's ownership of the address if validation is not already underway. If the recipient has no unused connection IDs from the peer, it will not be able to send anything on the new path until the peer provides one; see migration-linkability. An endpoint only changes the address to which it sends packets in response to the highest-numbered non-probing packet. This ensures"} {"id": "q-en-quicwg-base-drafts-0731ba321efd8bd48d4e62d21fec228a95ad69fe232d2f3a830cff8ed24a5302", "old_text": "SHOULD begin processing partial HTTP messages once enough of the message has been received to make progress. If a client-initiated stream terminates without enough of the HTTP message to provide a complete response, the server SHOULD abort its response with the error code H3_REQUEST_INCOMPLETE; see errors. A server can send a complete response prior to the client sending an entire request if the response does not depend on any portion of the", "comments": "to the extent I'm willing to do so.\nSec 4.1 and 4.1.2 do not explicitly define , , as stream errors. A.4.1 says that is a stream error. It would be nice if they are explicitly defined as stream errors in Sec 4.1 and 4.1.2. Also, it would be nice if each error code is categorized to either a connection error or a stream error in Table 4.\nCommenting as an individual. The error code space is shared between connection and streams. The action associated with a stream error or connection error cause QUIC transport frames to be emitted, which can use a code. H3 inherits H2's design philosophy that endpoints can generate a connection error at will. So while there are specific cases that describe \"treat X as stream error foo\", endpoints are also allowed to \"treat X as connection error foo\". Constraining the error codes to different uses has a certain appeal. But then what do we say the the receiver of an error code of the wrong type should do? Generate its own error? 
I think this has come up a couple of times during H3 drafts but I'm satisfied that the flexibility we have today is about the best we can do.\nI also think \"stream error\" and \"abort a stream\" are unique and consistent terms used through the document. But I'll let Mike spec to that point with his editor hat.\nEven though categorization is not reasonable, Sec 4.1, for instance, should say like \"the server SHOULD abort its response and treat as a stream error of type H3REQUESTINCOMPLETE\".\n\"Abort\" is a defined term in section 2.2. I personally don't see much clarity in the proposal, it threatens to add inconsistency to other places where streams are aborted due to application decisions rather than a protocol error.\nMy example proposal is to replace \"the server SHOULD abort its response with the error code H3REQUESTINCOMPLETE\" with \"the server SHOULD abort its response and treat as a stream error of type H3REQUESTINCOMPLETE\" in Sec 4.1. I don't understand why this would introduce inconsistency.\nNAME For that particular instance, I think it is not unreasonable to change the text from \"SHOULD abort its response with the error code ...\" to \"SHOULD abort its response stream with the error code ...\", as the latter is the pattern that we use. Though I am not sure if such change is necessary; I tend to agree with NAME that the sentence is already clear; \"abort\" is a defined word and use of \"response\" implies a stream.\nWRT inconsistency, the editor would need to sweep for all instances of abort and reason about whether they need additional text or not. And there's a risk that someone later on picks up on an inconsistency of usage because they didn't appreciate the nuance. Since adding more detail after an instance of seems duplicative, I would favour not adding any more.\nI'm okay with NAME addition of \"stream\" since abort is defined as a stream operation. I don't think we need to separate out \"abort\" and \"stream error\" as separate statements, because abort already describes what needs to be done.\nIf I was worried too much, no change is necessary. One alternative approach is to add a sentence like \"errors in this section are stream errors\" in the beginning of Sec 4.", "new_text": "SHOULD begin processing partial HTTP messages once enough of the message has been received to make progress. If a client-initiated stream terminates without enough of the HTTP message to provide a complete response, the server SHOULD abort its response stream with the error code H3_REQUEST_INCOMPLETE; see errors. A server can send a complete response prior to the client sending an entire request if the response does not depend on any portion of the"} {"id": "q-en-quicwg-base-drafts-e3c560be66a567b44621380df3be701a344b51f781c731dbe504a6585bd2682d", "old_text": "data received, as specified in Section 8.1 of QUIC-TRANSPORT. If no additional data can be sent, the server's PTO timer MUST NOT be armed until datagrams have been received from the client, because packets sent on PTO count against the anti-amplification limit. Note that the server could fail to validate the client's address even if 0-RTT is accepted. 
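A minimal sketch of the timer interaction this record is concerned with, using illustrative field and method names rather than the draft's pseudocode, might look like this:

    class ServerLossTimer:
        def __init__(self):
            self.bytes_received = 0
            self.address_validated = False
            self.pto_deadline = None  # None means the PTO timer is not armed

        def on_datagram_received(self, size, now, last_send_time, pto_period):
            self.bytes_received += size
            # Receiving data raises the 3x anti-amplification allowance,
            # so the PTO timer may be armed again.
            self.pto_deadline = last_send_time + pto_period
            if self.pto_deadline <= now:
                self.on_pto_expired(now)  # already overdue: probe at once

        def on_pto_expired(self, now):
            # Probe with Initial/Handshake data before any new 1-RTT data,
            # so handshake-critical packets are not starved.
            self.pto_deadline = None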
Since the server could be blocked until more datagrams are received from the client, it is the client's responsibility to send packets to", "comments": "Text and pseudocode have been updated.\nI just updated the pseudocode as well, since it's quite straightforward.\nWhen the PTO is not armed due to the amplification limit and then a packet is received: URL, it's important to send data for the PN spaces the PTO is armed for(Initial or Handshake) prior to new application data. Typically PTO suggests writing new data instead of retransmitting data which may or may not be lost, but that's not true during the handshake. This is only an issue for the server, since it's caused by enforcing the amplification limits prior to address confirmation. This is most evident when 0-RTT is accepted, but address validation fails, since otherwise a server would not usually have enough data to write to create a problem.\nI went and double checked the msquic code , and we always re-enable the loss detection timer when we become \"unblocked\" by amplification protection logic. I thought this logic was already driven by some statement in the spec. I just went back and found this: Do we need any more than this?\nThe potential issue is that if the PTO is armed, but should have already expired, it's important to execute it prior to sending any pending data. I'm working on a PR to make this clearer, and we can whether the extra clarification is necessary?\nI wonder whether there is something else going on. Amplification or not, 0-RTT packets can only be acknowledged by ACK frames in 1-RTT packets. Those can only be received after the client gets the 1-rtt keys. So implementation should just start a short timer after the handshake completes.\nThis seems like an editorial change, please confirm. Either way, lets ensure NAME has an opportunity to comment before landing any changes.\nngtcp2 has dedicated code for this situation: URL I feel this is a kind of optimization or implementation techniques.\nIn response to NAME If the WG establish a consensus on the need for a change a particular fix then this PR can be applied in the AUTH48 with AD approval. I think we need to figure out if not merging the PR until AUTH48 is the best? It makes it simple to provide the changes to the RFC-editor, but maybe makes it harder for the rest of the community to keep track of the changes if there are no editors copy that contain all agreed things. However, considering the hopefully limited time that may make sense.\nWe can tag the versions that entered the RFC Editor queue, and then generate a diff against those at AUTH48.\nThat tag already exists, so we are good.\nAt the moment, states, quote: \"Clients MUST ensure that UDP datagrams containing only Initial packets are sized to at least 1200 bytes, adding padding to packets in the datagram as necessary.\" I think this is incorrect and that \"only\" should be dropped, as it excludes a UDP datagram that contains an Initial and a 0-RTT packet. Pointed out by NAME in .\nTechnically a design change. I think that it should be any packets that contain Initial and don't contain Handshake or 1-RTT packets. At least, that's what I implemented.\nI'll note that I think there are other requirements formulated in other sections. Perhaps define a term such a minimal initial datagram size and have the requirement in one place rather than sprinkling 1200 bytes in multiple places where it can be difficult to get the finer points (such as UDP vs QUIC packets and with or without header size). 
For example section 14 in transport:\nNAME What you implemented should be fine. However, Initial + 0-RTT coalesced in a datagram should be padded I believe and I don't think we're clear on that today.\nIt might be simpler to require every UDP packet that contains an Initial be padded to at least 1200? I'm worried that only requiring that of Initial if there is no Handshake/1-RTT present might be tricky on the receiver: what do I do if I receive a non-padded Initial with Handshake but I can't decrypt the Handshake data?\nNAME What we are talking here is a requirement specific to clients, and I am not sure if a client has a chance of sending Initial and Handshake packets at the same time. Quoting of the TLS draft, \"a client MUST discard Initial keys when it first sends a Handshake packet.\" And even if there is a possibility of a client sending Initial and Handshake at the same moment, I do not think requiring the client to pad a packet that contains only an Initial is correct, as coalescing is an optional behavior to the send side. It would mean that clients coalescing the QUIC packets would send a non-padded datagram, while clients not coalescing would send a padded datagram. That seems like an odd behavior.\nAnd yet another issue gets sucked into the wormhole that is the key discard mess. I guess that we will have to make a final determination based on the outcome of that stuff.\nOn consideration, I think that the requirement is simple and we don't have to stress about interactions with discarding keys. If the packet contains an Initial, it's possible that the Initial packet is the only packet that can be used by a server. Therefore, the datagram should be padded to 1200, regardless of whatever else is included. So we strike the \"only\" and we're good. See 6cfcbe26385f17072ccd330aea619f65bf3bdcb4.\nI wonder - if 0-RTT does not have huge certificates, would it not make sense to permit small packets in this case, provided the auth token validates. But in other cases 1200 bytes must be provided. But since 0-RTT might fail, this could then require a retransmission. Resource constrained devices might be prefer the latter.\nNAME Regardless of 0-RTT, for bandwidth constrained deployments, it makes sense for a client to always send a small packet and let the server validate the path. That said, the chartered goal of V1 is to reduce the connection establishment latency, and we have explicitly decided to require clients to send full-sized packets until the path is validated. I do not think we should revisit that design. Maybe we can in v2.\nI do not believe that this is necessarily correct. The reason is that we have not yet nailed down Initial key discard and in at least one version, an Initial ACK would appear in the client's second flight, but that need not be padded.\nI think the specification should mandate padding of server initial packets as well. When I was traveling a couple weeks ago, I was on a network where, when using IPv6, it allowed 1200-byte UDP packets through from client to server but blackholed them from server to client. Since our implementation pads initials in both directions, we were able to detect this failure during the handshake and fall back to TCP/TLS. If the server didn't pad its initials, we would have succeeded at the handshake but never gotten an HTTP response.\nThis definitely seems like a design issue. The purpose of the padding rule is solely to limit amplification, not to establish a minimum MTU. 
If you want some sort of MTU rule, that should be discussed separately.\nThis issue is marked as design. Are you suggesting I should open a separate one?\nNAME I believe it is dual purpose: From the current transport draft: >The payload of a UDP datagram carrying the first Initial packet MUST be expanded to at least 1200 bytes, by adding PADDING frames to the Initial packet and/or by coalescing the Initial packet (see Section 12.2). Sending a UDP datagram of this size ensures that the network path supports a reasonable Maximum Transmission Unit (MTU), and helps reduce the amplitude of amplification attacks caused by server responses toward an unverified client address; see Section 8.\nYeah, that's dicta as far as I am concerned. I don't think we ever had consensus that we were going to enforce MTU here.\nThe insight I had here was this: If you are sending Initial, it is possible that this is the only packet in the datagram that can be processed by your peer. If that is the case, then a client needs to ensure that it pads the datagram as though no other packets were present. You can infer from the fact that you have Handshake keys that the server is able to process Handshake packets, and then extend that to say that if you include Handshake packets then the server won't need padding. But then you can extend that inference to saying that there is no need to send any Initial packets (as the current draft does; currently, you are not required to keep Initial keys once you are sending with Handshake keys). Either way, I figure that it is better to keep things simple and say \"if Initial, pad\". In those cases where padding ends up being excessive, it's only excessive for a short while.\nI don't know where you get this from. It's total datagram size that matters, not the contents of the rest of the datagram (which is why the neqo padding strategy works). Anyway, the algorithm you propose seems fine, but it's not necessary, and therefore there's no need to specify it in the spec.\nThis was my thought too.\nI'm not seeing a conflict between these statements. If the packet contains an Initial, the total datagram size must be at least 1200 bytes. You can achieve this by padding any packet in the payload; I don't think anyone is advocating for requiring the padding be in a particular spot.\nI don't understand what \"pads the datagram as though no other packets were present\" is supposed to mean, then. The way neqo does this is that it puts the packets in the datagram and then pads the rest, without any regard to what keys the server might have.\nImprecise wording on my part. \"pads the datagram as though no other packets were present\" might be better as \"as though no other packets can be processed by the server\". Let's keep in mind that the proposed changes is just striking \"only\" in \"Clients MUST ensure that UDP datagrams containing Initial packets have UDP payloads of at least 1200 bytes, [...]\". I agree that this will result in larger packets than are ideal in a couple of cases. But the ACK-only Initial packets a client sent are most likely coalesced with other packets outside of cases where the padding is genuinely required.\nI'm sorry, but I still don't see what benefit this change provides. Perhaps this would be easier next week.\nThe change makes it clear that if you send Initial+0-RTT then you should still pad to 1200. That's all. Previously, you would not have been required to pad, which would have been bad. 
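The simplest packetization strategy compatible with the rule being debated (and with the total-datagram-size reading mentioned above) can be sketched roughly as follows; the packet representation is a placeholder:

    MIN_INITIAL_DATAGRAM_SIZE = 1200

    def build_datagram(packets):
        payload = b"".join(p["bytes"] for p in packets)
        if any(p["type"] == "Initial" for p in packets):
            shortfall = MIN_INITIAL_DATAGRAM_SIZE - len(payload)
            if shortfall > 0:
                # A real packetizer would add PADDING frames inside one of
                # the coalesced packets rather than trailing raw bytes.
                payload += b"\x00" * shortfall
        return payload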
Most of the debate and discussion is about the knock-on consequences of applying the broader rule, which I don't think are that important. And yes, the whole \"when do you discard Initial keys\" thing does confound things a little.\nDiscussed in Cupertino. The word \"only\" is to be removed from the respective sentence. Moving to Editorial.\nThanks Ian.", "new_text": "data received, as specified in Section 8.1 of QUIC-TRANSPORT. If no additional data can be sent, the server's PTO timer MUST NOT be armed until datagrams have been received from the client, because packets sent on PTO count against the anti-amplification limit. When the server receives a datagram from the client, the amplification limit is increased and the server resets the PTO timer. If the PTO timer is then set to a time in the past, it is executed immediately. Doing so avoids sending new 1-RTT packets prior to packets critical to the completion of the handshake. In particular, this can happen when 0-RTT is accepted but the server fails to validate the client's address. Since the server could be blocked until more datagrams are received from the client, it is the client's responsibility to send packets to"} {"id": "q-en-quicwg-base-drafts-33e77dacdb2c1a48f142fc18f4a11a300f202c0f0885983823b425e3d17dd48c", "old_text": "An endpoint MUST include the value from the Source Connection ID field of the packet it receives in the Destination Connection ID field. The value for Source Connection ID MUST be copied from the Destination Connection ID of the received packet, which is initially randomly selected by a client. Echoing both connection IDs gives clients some assurance that the server received the packet and that the Version Negotiation packet was not generated by an attacker that is unable to observe packets. An endpoint that receives a Version Negotiation packet might change the version that it decides to use for subsequent packets. The", "comments": "The names are capitalized, but we don't say \"the ... field.\" We probably should.", "new_text": "An endpoint MUST include the value from the Source Connection ID field of the packet it receives in the Destination Connection ID field. The value for the Source Connection ID field MUST be copied from the Destination Connection ID field of the received packet, which is initially randomly selected by a client. Echoing both connection IDs gives clients some assurance that the server received the packet and that the Version Negotiation packet was not generated by an attacker that is unable to observe packets. An endpoint that receives a Version Negotiation packet might change the version that it decides to use for subsequent packets. The"} {"id": "q-en-quicwg-base-drafts-2e6e030d9db326e16b26f01147564449df6b4d29750658a505e7f0be9a06e92d", "old_text": "The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 BCP14 when, and only when, they appear in all capitals, as shown here. This document uses the variable-length integer encoding from QUIC- TRANSPORT.", "comments": "I've had to make some changes in the template to deal with some odd regressions in CircleCI (see URL for details). This is the updated configuration.\nYeah, given how this breaks in Circle; that's probably wise.\nNAME NAME do you think that you could stop following this project in CircleCI? 
Then we can merge this one.\nIs there a reason not to simply move this repo over to GitHub Actions as well?", "new_text": "The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. This document uses the variable-length integer encoding from QUIC- TRANSPORT."} {"id": "q-en-quicwg-base-drafts-2e6e030d9db326e16b26f01147564449df6b4d29750658a505e7f0be9a06e92d", "old_text": "The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 BCP14 when, and only when, they appear in all capitals, as shown here. Definitions of terms that are used in this document:", "comments": "I've had to make some changes in the template to deal with some odd regressions in CircleCI (see URL for details). This is the updated configuration.\nYeah, given how this breaks in Circle; that's probably wise.\nNAME NAME do you think that you could stop following this project in CircleCI? Then we can merge this one.\nIs there a reason not to simply move this repo over to GitHub Actions as well?", "new_text": "The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. Definitions of terms that are used in this document:"} {"id": "q-en-quicwg-base-drafts-226295e3516c58bf1f1f2b6f1695254ba7d37fc374f8eaba3c597ddc102ae280", "old_text": "The entries in iana-setting-table are registered by this document. For fomatting reasons, setting names can be abbreviated by removing the 'SETTING_' prefix. Each code of the format \"0x1f * N + 0x21\" for non-negative integer values of \"N\" (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe)", "comments": "which also occurs in H3 but apparently wasn't caught.\nThroughout, the document uses \"SETTINGS\" as the first part of thte HTTP/3 setting names. We have updated this sentence for consistency. Please let us know if any corrections are needed. Original: For fomatting reasons, the setting names here are abbreviated by removing the 'SETTING' prefix. Current: For formatting reasons, the setting names here are abbreviated by removing the 'SETTINGS_' prefix.\nFixed in .", "new_text": "The entries in iana-setting-table are registered by this document. For formatting reasons, setting names can be abbreviated by removing the 'SETTINGS_' prefix. Each code of the format \"0x1f * N + 0x21\" for non-negative integer values of \"N\" (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe)"} {"id": "q-en-quicwg-datagram-53329bf1c1bd1557291c8c8238ab849fef123afcde01682885e44d4ac2b91deb", "old_text": "bytes. An endpoint that includes this parameter supports the DATAGRAM frame types and is willing to receive such frames on this connection. Endpoints MUST NOT send DATAGRAM frames until they have sent and received the max_datagram_frame_size transport parameter. Endpoints MUST NOT send DATAGRAM frames of size strictly larger than the value of max_datagram_frame_size the endpoint has received from its peer. 
An endpoint that receives a DATAGRAM frame when it has not sent the max_datagram_frame_size transport parameter MUST terminate the connection with error PROTOCOL_VIOLATION. An endpoint that receives a DATAGRAM frame that is strictly larger than the value it sent in its max_datagram_frame_size transport parameter MUST terminate the connection with error PROTOCOL_VIOLATION. Endpoints that wish to use DATAGRAM frames need to ensure they send a max_datagram_frame_size value sufficient to allow their peer to use them. It is RECOMMENDED to send the value 65536 in the max_datagram_frame_size transport parameter as that indicates to the peer that this endpoint will accept any DATAGRAM frame that fits inside a QUIC packet. When clients use 0-RTT, they MAY store the value of the server's max_datagram_frame_size transport parameter. Doing so allows the", "comments": "Defines the transport parameter as a unidirectional configuration.\nThe spec clearly states the following: But it's not completely clear what happens if both sides send different values. Is the value purely a unidirectional configuration? For instance, if the server advertises a value of 500, and the client advertises a value of 100, can the client still send 500 byte datagrams? Or does this essentially negotiate the max value either side can use to 100? If this is a unidirectional configuration, why require the peer to send the TP at all, if all they want to do is send datagrams, and not receive them? I'm loosely basing my thoughts on what the design could be on how we negotiate the number of streams an endpoint is willing to accept. Following that model, I'd recommend a design where, if an endpoint is willing to receive datagrams, it advertises a it's willing to accept. The TP has absolutely no meaning for the send direction. The protocol on top of QUIC decides how to interpret only a single direction allowing datagrams to be sent.\nI agree with NAME principly due to this clause in the present draft: My read of the current text is that it is unidirectional. An endpoint tells it's peer that it is willing to receive a DATAGRAM up to size N by using . Having asymmetric values is fine because the paths can be asymmetric and other TPs also behave asymmetrically. The draft also says: For an application protocol like siduck, both ends need to support reception of DATAGRAM or else the application will fail. For something like an IoT sensor feed, it might be fine to support a send-only/receive-only model. The current text basically requires the application protocol to mandate that is 0 on the side that is send-only AND to describe what happens if the TP is missing - that seems like it will cause a duplication of effort.\nI agree that we should make this purely unidirectional. I would resolve this by replacing with", "new_text": "bytes. An endpoint that includes this parameter supports the DATAGRAM frame types and is willing to receive such frames on this connection. Endpoints MUST NOT send DATAGRAM frames until they have received the max_datagram_frame_size transport parameter. Endpoints MUST NOT send DATAGRAM frames of size strictly larger than the value of max_datagram_frame_size the endpoint has received from its peer. An endpoint that receives a DATAGRAM frame when it has not sent the max_datagram_frame_size transport parameter MUST terminate the connection with error PROTOCOL_VIOLATION. 
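The two directions of these checks can be sketched as follows; the parameter plumbing and the error signalling are illustrative only:

    PROTOCOL_VIOLATION = 0x0A  # QUIC transport error code

    def can_send_datagram(peer_max_datagram_frame_size, frame_size):
        # Send side: only if the peer advertised the parameter, and only
        # up to the advertised size.
        return (peer_max_datagram_frame_size is not None
                and frame_size <= peer_max_datagram_frame_size)

    def on_datagram_frame_received(local_max_datagram_frame_size, frame_size):
        # Receive side: reject frames when we never advertised support,
        # or when the frame exceeds what we advertised.
        if (local_max_datagram_frame_size is None
                or frame_size > local_max_datagram_frame_size):
            raise ValueError("connection error: PROTOCOL_VIOLATION (0x0a)")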
An endpoint that receives a DATAGRAM frame that is strictly larger than the value it sent in its max_datagram_frame_size transport parameter MUST terminate the connection with error PROTOCOL_VIOLATION. Endpoints that wish to use DATAGRAM frames need to ensure they send a max_datagram_frame_size value sufficient to allow their peer to use them. It is RECOMMENDED to send the value 65536 in the max_datagram_frame_size transport parameter as that indicates to the peer that this endpoint will accept any DATAGRAM frame that fits inside a QUIC packet. The max_datagram_frame_size transport parameter is a unidirectional limit and indication of support of DATAGRAM frames. Application protocols that use DATAGRAM frames MAY choose to only negotiate and use them in a single direction. When clients use 0-RTT, they MAY store the value of the server's max_datagram_frame_size transport parameter. Doing so allows the"} {"id": "q-en-quicwg-datagram-0c36c919e72bc6caf088d1c5f0fafc3ad6ad39d5bfdceea7be6c521f741d9bd0", "old_text": "to receiving packets that only contain DATAGRAM frames, since the timing of these acknowledgements is not used for loss recovery. If a sender detects that a packet containing a specific DATAGRAM frame might have been lost, the implementation MAY notify the application that it believes the datagram was lost.", "comments": "The draft has some good text in the \"Acknowledgement Handling\" and \"Congestion Control\" sections that essentially is stating that DATAGRAM is just like any other ACK-eliciting packet, but is not automatically retransmitted by the transport. That's all reasonable, but I think there needs to be more text on how the (suspected) loss of a packet with a DATAGRAM frame or a PTO with only an outstanding DATAGRAM packet should be handled. I can see two possible models: It's just like any other packet. The goal is to elicit some ACK from the peer to get accurate loss information about the outstanding packet as soon as possible. If there is nothing outstanding we could use to send in a new packet, just send a PING frame. It's special. Because we don't necessarily intend to retransmit the data in the packet if it is actually lost, we don't actually care about immediate loss information/feedback. Don't force anything to be sent immediately to elicit the ACK. So far as I have it coded up in MsQuic, I've assumed (1). This essentially results in an immediate PING frame/packet being sent out if I have nothing else to retransmit to try to elicit an ACK for the DATAGRAM frame/packet. This could result in a slightly noisier connection if the app doesn't care about all loss information about their datagrams, but, IMO makes for a cleaner design. I don't know what the general consequences to congestion control might be if we don't do this. Assuming folks are in agreement, we should have some text on this topic in the draft.\nNAME NAME as authors of the recovery draft do you have an opinion here?\nI support NAME choice of 1 and it aligns with the recovery draft. It also matches our implementation. Adding clarifying text around this SG and I'd be happy to review it.\nI agree -- choice (1) is cleanest and I would argue that choice(2) is really not principled.\nLate to the issue, but I think (1) is the right choice here. It's not simply that datagrams aren't retransmitted, but that they might be retransmitted by the application if they're declared lost. 
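Option (1) above, which the thread converges on, can be sketched like this; the connection and packet objects are illustrative:

    def on_packet_suspected_lost(conn, packet):
        if all(frame["type"] == "DATAGRAM" for frame in packet["frames"]):
            # Probe (for example with a PING) so acknowledgement state is
            # learned quickly, just as for any other ack-eliciting packet.
            conn.send_ping_probe()
        for frame in packet["frames"]:
            if frame["type"] == "DATAGRAM":
                # The transport does not retransmit the payload; it only
                # tells the application, which may resend it itself.
                conn.notify_datagram_lost(frame)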
If you don't have prompt feedback, that scenario suffers.\nSeems like we have good agreement on this, just need to write the text\nNAME -- To NAME 's comment at the mic, you should cite .\nThis text sounds good, though I'd love a little bit more text mentioning the benefit of this probeThis text is fine with me. I am wondering if we should say something additional, like that packets with datagram frames shouldn't be treated special, with respect to loss recovery.", "new_text": "to receiving packets that only contain DATAGRAM frames, since the timing of these acknowledgements is not used for loss recovery. As with any ack-eliciting frame, when a sender suspects that a packet containing only DATAGRAM frames has been lost, it MAY send probe packets to elicit a faster acknowledgement as described in Section 6.2.4 of RFC9002. If a sender detects that a packet containing a specific DATAGRAM frame might have been lost, the implementation MAY notify the application that it believes the datagram was lost."} {"id": "q-en-ratelimit-headers-40e7d30d182b4b0b29ccd24e08c6a3c176a2aee87ee88865418f1e208b6e52f3", "old_text": "2.1. A quota policy is described in terms of service-limit and a time- window. It is an Item whose bare item is a service-limit, along with associated Parameters. The following parameters are defined in this specification: The REQUIRED \"w\" parameter value conveys a time window value as defined in time-window. Other parameters are allowed and can be regarded as comments. They ought to be registered within the \"Hypertext Transfer Protocol (HTTP) RateLimit Parameters Registry\", as described in iana-ratelimit- parameters. For example, a quota policy of 100 quota units per minute: The definition of a quota policy does not imply any specific distribution of quota units within the time window. If applicable, these details can be conveyed as extension parameters. For example, two quota policies containing further details via extension parameters: To avoid clashes, implementers SHOULD prefix unregistered parameters with a vendor identifier, e.g. \"acme-policy\", \"acme-burst\". While it is useful to define a clear syntax and semantics even for custom parameters, it is important to note that user agents are not required to process quota policy information. 2.2. Rate limit policies limit the number of acceptable requests within a given time interval, known as a time window. The time window is a non-negative Integer value expressing that interval in seconds, similar to the \"delay-seconds\" rule defined in Section 10.2.3 of HTTP. Subsecond precision is not supported. 2.3. The service limit is associated with the maximum number of requests that the server is willing to accept from one or more clients on a given basis (originating IP, authenticated user, geographical, ..) during a time-window. The service limit is a non-negative Integer expressed in quota units. The service limit SHOULD match the maximum number of acceptable requests. However, the service limit MAY differ from the total number of acceptable requests when weight mechanisms, bursts, or other server policies are implemented. If the service limit does not match the maximum number of acceptable requests the relation with that SHOULD be communicated out-of-band. 
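A small sketch of quota-unit accounting with per-request weights, in the spirit of the /books example that follows; the weighting rule itself is illustrative, not mandated by the specification:

    def quota_units_for(path: str, query: str = "") -> int:
        return 2 if query else 1  # e.g. searches cost two units

    def consume(remaining: int, path: str, query: str = "") -> int:
        cost = quota_units_for(path, query)
        if cost > remaining:
            raise RuntimeError("quota exceeded: reply 429 with Retry-After")
        return remaining - cost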
Example: A server could count once requests like \"/books/{id}\" count twice search requests like \"/books?author=WuMing\" so that we have the following counters 3.", "comments": "Section 2.2: Section 2.3: This appears to be a circular definition -- what's actually being defined here?\n(noting that 2.2 doesn't actually define quota-units)\nQuota unit is now defined as the measurement unit for service limits.\nPTAL - I tried to rearrange to address this a bit more holistically.\nLooks good!LGTM", "new_text": "2.1. A quota policy is maintained by a server to limit the activity (counted in quota units) of a given client over a period of time (known as the time-window) to a specified amount (known as the service-limit). Quota policies can be advertised by servers (see ratelimit-policy- field), but they are not required to be, and more than one quota policy can affect a given request from a client to a server. A quota policy is expressed in Structured Fields STRUCTURED-FIELDS as an Integer that indicates the service limit with associated parameters. The following Parameters are defined in this specification: The REQUIRED \"w\" parameter value conveys a time window (time- window). For example, a quota policy of 100 quota units per minute is expressed as: Other parameters are allowed and can be regarded as comments. Parameters for use by more than one implementation or service ought to be registered within the \"Hypertext Transfer Protocol (HTTP) RateLimit Parameters Registry\", as described in iana-ratelimit- parameters. Implementation- or service-specific parameters SHOULD be prefixed parameters with a vendor identifier, e.g. \"acme-policy\", \"acme- burst\". 2.2. The service limit is a non-negative Integer indicating the maximum amount of activity that the server is willing to accept from what it identifies as the client (e.g., based upon originating IP or user authentication) during a time-window. The activity being limited is usually the HTTP requests made by the client; for example \"you can make 100 requests per minute\". However, a server might only rate limit some requests (based upon URI, method, user identity, etc.), and it might weigh requests differently. Therefore, quota policies are defined in terms of \"quota units\". Servers SHOULD document how they count quota units. For example, a server could count requests like \"/books/{id}\" once, but count search requests like \"/books?author=WuMing\" twice. This might result in the following counters: Often, the service limit advertised will match the server's actual limit. However, it MAY differ when weight mechanisms, bursts, or other server policies are implemented. In that case the difference SHOULD be communicated using an extension or documented separately. 2.3. Quota policies limit the number of acceptable requests within a given time interval, known as a time window. The time window is a non-negative Integer value expressing that interval in seconds, similar to the \"delay-seconds\" rule defined in Section 10.2.3 of HTTP. Subsecond precision is not supported. By default, a quota policy does not constrain the distribution of quota units within the time window. If necessary, these details can be conveyed as extension parameters. For example, two quota policies containing further details via extension parameters: 3."} {"id": "q-en-ratelimit-headers-19a3eddecbff179607b62df9f3d4648c4932a9314b6d49312f6a864e46d83c1f", "old_text": "header field names proliferates. 
Client applications interfacing with different servers may thus need to process different headers, or the very same application interface that sits behind different reverse proxies may reply with different throttling headers.", "comments": "Improves terminology\nIn other parts of the document we still use client as we are not sure whether to refer explicitly to a User Agent or not. Ideas welcome...\nSee RFC 7230 terminology definitions (, not changing in the new specs, FWIW).", "new_text": "header field names proliferates. User Agents interfacing with different servers may thus need to process different headers, or the very same application interface that sits behind different reverse proxies may reply with different throttling headers."} {"id": "q-en-ratelimit-headers-b26bbcb68200cdebb5ca10e71218380d209aab60c06eb7e1d9b35bdb5ae5adc5", "old_text": "\"RateLimit-Reset\": containing the time remaining in the current window, specified in seconds. The behavior of \"RateLimit-Reset\" is compatible with the \"delta- seconds\" notation of \"Retry-After\". The fields definition allows to describe complex policies, including", "comments": "uses http-core semantics fixes Sections refs cc: NAME\nI'll wait for the new i-d-template image, but will do.\nPlease use either [RFCxxxx], Section yyy or Section yyy of [RFCxxxx]\nAddressed by now.\nIf you start using the new kramdown-rcf2629 section ref syntax, you may want to do that consistently...", "new_text": "\"RateLimit-Reset\": containing the time remaining in the current window, specified in seconds. The behavior of \"RateLimit-Reset\" is compatible with the \"delay- seconds\" notation of \"Retry-After\". The fields definition allows to describe complex policies, including"} {"id": "q-en-ratelimit-headers-b26bbcb68200cdebb5ca10e71218380d209aab60c06eb7e1d9b35bdb5ae5adc5", "old_text": "This specification does not cover the throttling scope, that may be the given resource-target, its parent path or the whole Origin RFC6454 section 7. The rate-limit headers may be returned in both Successful and non Successful responses. This specification does not cover whether", "comments": "uses http-core semantics fixes Sections refs cc: NAME\nI'll wait for the new i-d-template image, but will do.\nPlease use either [RFCxxxx], Section yyy or Section yyy of [RFCxxxx]\nAddressed by now.\nIf you start using the new kramdown-rcf2629 section ref syntax, you may want to do that consistently...", "new_text": "This specification does not cover the throttling scope, that may be the given resource-target, its parent path or the whole Origin (see Section 7 of RFC6454). The rate-limit headers may be returned in both Successful and non Successful responses. This specification does not cover whether"} {"id": "q-en-ratelimit-headers-b26bbcb68200cdebb5ca10e71218380d209aab60c06eb7e1d9b35bdb5ae5adc5", "old_text": "capitals, as shown here. This document uses the Augmented BNF defined in RFC5234 and updated by RFC7405 along with the \"#rule\" extension defined in Section 7 of MESSAGING. The term Origin is to be interpreted as described in RFC6454 section 7. The \"delta-seconds\" rule is defined in CACHING section 1.2.1. 2.", "comments": "uses http-core semantics fixes Sections refs cc: NAME\nI'll wait for the new i-d-template image, but will do.\nPlease use either [RFCxxxx], Section yyy or Section yyy of [RFCxxxx]\nAddressed by now.\nIf you start using the new kramdown-rcf2629 section ref syntax, you may want to do that consistently...", "new_text": "capitals, as shown here. 
This document uses the Augmented BNF defined in RFC5234 and updated by RFC7405 along with the \"#rule\" extension defined in Section 5.6.1 of SEMANTICS. The term Origin is to be interpreted as described in Section 7 of RFC6454. The \"delay-seconds\" rule is defined in Section 10.2.4 of SEMANTICS. 2."} {"id": "q-en-ratelimit-headers-b26bbcb68200cdebb5ca10e71218380d209aab60c06eb7e1d9b35bdb5ae5adc5", "old_text": "The header value is The delta-seconds format is used because: it does not rely on clock synchronization and is resilient to clock adjustment and clock skew between client and server (see SEMANTICS Section 4.1.1.1); it mitigates the risk related to thundering herd when too many clients are serviced with the same timestamp.", "comments": "uses http-core semantics fixes Sections refs cc: NAME\nI'll wait for the new i-d-template image, but will do.\nPlease use either [RFCxxxx], Section yyy or Section yyy of [RFCxxxx]\nAddressed by now.\nIf you start using the new kramdown-rcf2629 section ref syntax, you may want to do that consistently...", "new_text": "The header value is The delay-seconds format is used because: it does not rely on clock synchronization and is resilient to clock adjustment and clock skew between client and server (see Section 5.6.7 of SEMANTICS); it mitigates the risk related to thundering herd when too many clients are serviced with the same timestamp."} {"id": "q-en-ratelimit-headers-b26bbcb68200cdebb5ca10e71218380d209aab60c06eb7e1d9b35bdb5ae5adc5", "old_text": "5. This section documents the considerations advised in Section 15.3.3 of SEMANTICS. An intermediary that is not part of the originating service", "comments": "uses http-core semantics fixes Sections refs cc: NAME\nI'll wait for the new i-d-template image, but will do.\nPlease use either [RFCxxxx], Section yyy or Section yyy of [RFCxxxx]\nAddressed by now.\nIf you start using the new kramdown-rcf2629 section ref syntax, you may want to do that consistently...", "new_text": "5. This section documents the considerations advised in Section 16.3.3 of SEMANTICS. An intermediary that is not part of the originating service"} {"id": "q-en-ratelimit-headers-47fb81b02164bea688e8cf51a48876dec146d0108f03eb1597d04b6970947f93", "old_text": "8.3.1. The server does not expose \"RateLimit-Remaining\" values, but resets the limit counter every second. It communicates to the client the limit of 10 quota-units per second always returning the couple \"RateLimit-Limit\" and \"RateLimit-Reset\".", "comments": "Improves example description\nhttps://ietf-wg-URL The example with just RateLimit-Limit and RateLimit-Reset In it seems this is a case where the RateLimit-Reset value is not able to be used to determine how long one should wait, but is rather just a Limit definition ? Why not then just use RateLimit-Limit: 10;w=1 and omit the other two headers ? Is this to cater for the inevitable 429 to provide a RateLimit-Reset value at that point ? If that is the case I question the value of using the two headers in \u201csuccessful\u201d responses.\nThis example is related to an implementation that uses the \"old\" ratelimit field set, that do not support . We should improve the description.\n:+1:", "new_text": "8.3.1. The server does not expose \"RateLimit-Remaining\" values (for example, because the underlying counters are not available). Instead, it resets the limit counter every second. 
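For instance, under a limit of 10 quota units per second, the fields described next could plausibly appear as (values illustrative, syntax per the draft revision under discussion):

    RateLimit-Limit: 10
    RateLimit-Reset: 1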
It communicates to the client the limit of 10 quota-units per second always returning the couple \"RateLimit-Limit\" and \"RateLimit-Reset\"."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "quotas. This was partially addressed by the \"Retry-After\" header field defined in SEMANTICS to be returned in \"429 Too Many Requests\" (see STATUS429) or \"503 Service Unavailable\" responses. Widely deployed quota mechanisms limit the number of acceptable requests in a given time window, e.g. 10 requests per second;", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. 
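To make the contrast concrete: with the policy kept inside the limit field a response might carry something like "RateLimit-Limit: 100, 100;w=60", whereas the split discussed here would instead be expressed along these lines (values illustrative):

    RateLimit-Limit: 100
    RateLimit-Policy: 100;w=60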
No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "quotas. This was partially addressed by the \"Retry-After\" header field defined in SEMANTICS to be returned in 429 (Too Many Request) (see STATUS429) or 503 (Service Unavailable) responses. Widely deployed quota mechanisms limit the number of acceptable requests in a given time window, e.g. 10 requests per second;"} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "the current window; \"RateLimit-Reset\": containing the time remaining in the current window, specified in seconds. The behavior of \"RateLimit-Reset\" is compatible with the \"delay- seconds\" notation of \"Retry-After\".", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. 
I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. 
I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "the current window; \"RateLimit-Reset\": containing the time remaining in the current window, specified in seconds; \"RateLimit-Policy\": containing the quota policy information. The behavior of \"RateLimit-Reset\" is compatible with the \"delay- seconds\" notation of \"Retry-After\"."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "The term Origin is to be interpreted as described in Section 7 of RFC6454. This specification uses Structured Fields SF to specify syntax. The terms sf-list, sf-item, sf-string, sf-token, sf-integer, bare- item and key refer to the structured types defined therein. 2.", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. 
I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "The term Origin is to be interpreted as described in Section 7 of RFC6454. This document uses the following terminology from Section 3 of SF to specify syntax and parsing: List, Item, String, Token and Integer together with the concept of bare item. 
2."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "A time window is expressed in seconds, using the following syntax: Where \"delay-seconds\" is a non-negative sf-integer compatible with the \"delay-seconds\" rule defined in Section 10.2.3 of SEMANTICS. Subsecond precision is not supported.", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. 
NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "A time window is expressed in seconds, using the following syntax: Where \"delay-seconds\" is a non-negative Integer compatible with the \"delay-seconds\" rule defined in Section 10.2.3 of SEMANTICS. Subsecond precision is not supported."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "The \"service-limit\" is expressed in \"quota-units\" and has the following syntax: where \"quota-units\" is a non-negative sf-integer. The \"service-limit\" SHOULD match the maximum number of acceptable requests.", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. 
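To make the guideline discussed just above concrete, a hypothetical client-side sanity check on RateLimit-Reset; the ceiling is a local implementation choice, not a value taken from the draft.

```python
# Sketch: clamp an advertised delay-seconds value to a locally configured
# ceiling instead of trusting arbitrarily large (or negative) values.

MAX_ACCEPTED_RESET = 3600  # seconds; a deployment-specific threshold

def effective_reset(field_value: str, ceiling: int = MAX_ACCEPTED_RESET) -> int:
    seconds = int(field_value)          # delay-seconds is a non-negative Integer
    if seconds < 0:
        raise ValueError("delay-seconds must be non-negative")
    return min(seconds, ceiling)        # guard against implausible values

print(effective_reset("120"))      # 120
print(effective_reset("999999"))   # clamped to 3600
```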
Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . 
It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "The \"service-limit\" is expressed in \"quota-units\" and has the following syntax: where \"quota-units\" is a non-negative Integer. The \"service-limit\" SHOULD match the maximum number of acceptable requests."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "available in that moment. Nonetheless servers MAY decide to send the \"RateLimit\" fields in a trailer section. To ease the migration from existing rate limit headers, a server SHOULD be able to provide the \"RateLimit-Limit\" field even without the optional \"quota-policy\" section. 3.1. Servers are not required to return \"RateLimit\" fields in every", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? 
if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "available in that moment. Nonetheless servers MAY decide to send the \"RateLimit\" fields in a trailer section. 3.1. Servers are not required to return \"RateLimit\" fields in every"} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "fields only when a given quota is going to expire. Implementers concerned with response fields' size, might take into account their ratio with respect to the payload data, or use header- compression http features such as HPACK. 4.", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? 
While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. 
A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "fields only when a given quota is going to expire. Implementers concerned with response fields' size, might take into account their ratio with respect to the content length, or use header-compression HTTP features such as HPACK. 4."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "otherwise gathered metrics) to better estimate the \"RateLimit-Reset\" moment intended by the server. The \"quota-policy\" values and comments provided in \"RateLimit-Limit\" are informative and MAY be ignored. If a response contains both the \"RateLimit-Reset\" and \"Retry-After\" fields, \"Retry-After\" MUST take precedence and \"RateLimit-Reset\" MAY", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. 
Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "otherwise gathered metrics) to better estimate the \"RateLimit-Reset\" moment intended by the server. The details provided in \"RateLimit-Policy\" are informative and MAY be ignored. 
If a response contains both the \"RateLimit-Reset\" and \"Retry-After\" fields, \"Retry-After\" MUST take precedence and \"RateLimit-Reset\" MAY"} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "If the client exceeds that limit, it MAY not be served. The field is a List Structured Field of positive length. The first member is named \"expiring-limit\" and its syntax is \"service-limit\", while the syntax of the other optional members is \"quota-policy\" The \"expiring-limit\" value MUST be set to the \"service-limit\" that is closer to reach its limit. The \"quota-policy\" is defined in quota-policy, and its values are informative. A \"time-window\" associated to \"expiring-limit\" can be communicated via an optional \"quota-policy\" value, like shown in the following example If the \"expiring-limit\" is not associated to a \"time-window\", the \"time-window\" MUST either be: inferred by the value of \"RateLimit-Reset\" at the moment of the reset, or communicated out-of-band (e.g. in the documentation). Policies using multiple quota limits MAY be returned using multiple \"quota-policy\" items, like shown in the following two examples: This field MUST NOT occur multiple times and can be sent in a trailer section. 5.2. The \"RateLimit-Remaining\" response field indicates the remaining \"quota-units\" defined in service-limit associated to the client. The field is an Integer Structured Field and its value is This field MUST NOT occur multiple times and can be sent in a trailer section.", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. 
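As an aside to the precedence rule quoted above (Retry-After takes precedence over RateLimit-Reset), a hypothetical client helper; only the delay-seconds form of Retry-After is handled here for brevity.

```python
# Sketch: pick the wait time, letting Retry-After win when both fields are present.

from typing import Optional

def seconds_to_wait(headers: dict) -> Optional[int]:
    retry_after = headers.get("Retry-After", "")
    if retry_after.isdigit():
        return int(retry_after)            # Retry-After takes precedence
    reset = headers.get("RateLimit-Reset", "")
    if reset.isdigit():
        return int(reset)
    return None                            # no throttling hint available

print(seconds_to_wait({"Retry-After": "30", "RateLimit-Reset": "10"}))  # 30
print(seconds_to_wait({"RateLimit-Reset": "10"}))                       # 10
```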
I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "If the client exceeds that limit, it MAY not be served. The field is a non-negative Integer. Its value is named \"expiring- limit\". The \"expiring-limit\" value MUST be set to the \"service-limit\" that is closer to reach its limit, and the associated \"time-window\" MUST either be: inferred by the value of \"RateLimit-Reset\" at the moment of the reset, or communicated out-of-band (e.g. in the documentation). 
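A hypothetical server-side sketch of picking the "expiring-limit" described above, interpreting "closer to reach its limit" as the policy with the smallest remaining fraction of its quota; the policies and counters are invented for illustration.

```python
# Sketch: among several active quota policies, advertise the one closest to exhaustion.

policies = [
    {"limit": 100,  "window": 60,   "used": 90},    # 10% left in the short window
    {"limit": 5000, "window": 3600, "used": 2500},  # 50% left in the long window
]

def expiring_limit(policies):
    return min(policies, key=lambda p: (p["limit"] - p["used"]) / p["limit"])

chosen = expiring_limit(policies)
print("RateLimit-Limit:", chosen["limit"])                       # 100
print("RateLimit-Remaining:", chosen["limit"] - chosen["used"])  # 10
```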
The \"RateLimit-Policy\" field (see ratelimit-policy-field), might contain information on the associated \"time-window\". This field MUST NOT occur multiple times and can be sent in a trailer section. 5.2. The \"RateLimit-Policy\" response field indicates the \"quota\" associated to the client and its value is informative. The field is a non-empty List of quota policies (see quota-policy). A \"time-window\" associated to \"expiring-limit\" can be communicated via \"RateLimit-Policy\", like shown in the following example. Policies using multiple quota limits MAY be returned using multiple \"quota-policy\" items, like shown in the following two examples: This field MUST NOT occur multiple times and can be sent in a trailer section. 5.3. The \"RateLimit-Remaining\" response field indicates the remaining \"quota-units\" defined in service-limit associated to the client. The field is a non-negative Integer expressed in \"quota-units\". This field MUST NOT occur multiple times and can be sent in a trailer section."} {"id": "q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405", "old_text": "One example of \"RateLimit-Remaining\" use is below. 5.3. The \"RateLimit-Reset\" response field indicates either the number of seconds until the quota resets. The field is an Integer Structured Field and its value is The delay-seconds format is used because:", "comments": "moves quota-policies from RateLimit-Limit -> RateLimit-Policyrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is an tricky point. I think an implementation: MUST define internal threeshold of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to defines precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this but we initially had some requests of conflating as much as possible in fewer fields. Probably it's a more interoperable choice. I am open in re-considering the idea, provided we get in touch with the current implementers including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME\nThere's no inherent problem in using a specific header for policies, and no issue in moving to use them from the 3scale side. 
The only downside is we're adding yet another header for rate limiting, currently 4 of them, which might raise a few eyebrows.\nNAME thanks :) do you think that splitting is a more interoperable choice wrt to the parsing implementation / usability? Do you see other practical drawbacks? if we don't get feedback on this issue, can you ping the other implementers you know?\nYes, it's easier for people to pick up a new header for this to apply a policy, since as it stands they might break when seeing extra stuff in the limit one by expecting a single value rather than multiple, as NAME mentions above. No, just dealing with people (maybe) opposing too-many-headers - I don't know if this is something that we can predictably foresee happening in the HTTPAPI WG or in the HTTP WG, but my intuition says \"likely\" given how we were asked to \"condense\" this information, so I think we might want to spend some time making a good case out of it. NAME other than potential breakages in pre-existing implementations, is there any additional point we could make to justify the additional header? I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field.", "new_text": "One example of \"RateLimit-Remaining\" use is below. 5.4. The \"RateLimit-Reset\" response field indicates the number of seconds until the quota resets. The field is a non-negative Integer. The delay-seconds format is used because:"} {"id": "q-en-resource-directory-c955419a38115de312f9189507679ebc200d8f7bbe61a0e7f71f03ddaf277fe4", "old_text": "operations, and \"core.rd-group\" is used to discover the URI path for RD Group operations. Upon success, the response will contain a payload with a link format entry for each RD function discovered, indicating the URI path of the RD function returned and the corresponding Resource Type. 
When performing multicast discovery, the multicast IP address used will depend on the scope required and the multicast capabilities of the network. A Resource Directory MAY provide hints about the content-formats it supports in the links it exposes or registers, using the \"ct\" link", "comments": "This contains what I think of the easy fixes to the requirements of replicated RDs. See URL for more discussion, please comment on whether you think those changes are acceptable.\nThe minimum for interoperability IMO is that client's don't always pick the first choice -- because if they do, no fault tolerance is possible. Do you think that that even this is too much specification of behavior, or did I write more in the PR than what would be needed to satisfy that minimum requirement?\nchrysn schreef op 2018-02-01 17:03: Is that so? The first choice of one client is not necessarily the first choice of another when we are confronted with RDs which have contact problems. Several protocols have handled this problem: like dhcp for example. Why not copy that behavior?\nI think we're talking about different \"first\"s here. I'm not concerned with what happens when multiple servers answer the multicast CoAP request. (Led to an interesting chat with PIM people but is not a matter that should concern the RD). What I do think we should give guidance is what happens if the RDAO announces multiple addresses (something explicitly allowed in the current draft), or the URI discovery response contains more than one rt=\"URL\" link Both are within what I think we describe as acceptable server behavior. If clients are unaware and just always pick the first line (because it looks like a zero-or-one situation to the implementers), interoperability issues will arise when two are announced an the first one fails.\nOk, got it, see below chrysn schreef op 2018-02-02 12:11: Yeah, that's a nuisance. Someone seems to know what they want that for, annd probably have some intelligent algorithm to break ties.......... One might say that client MAY choose their RD randomly. I assume that is a multicast response? Then the first response is probably the closest and the best choice. And when one MC response contains multiple RD links, We are not responsible, and something clever is being thought out somewhere, we hope. I think DHCP specifies clients to wait so many seconds before choosing a server. When the first one fails, the client will try the second one, und so weiter ... What am I missing still?\nfor interoperability, the client behavior when multiple URIs are discovered, or the RD does not answer, need not be specified. The text starts to look more and more like Bonjour", "new_text": "operations, and \"core.rd-group\" is used to discover the URI path for RD Group operations. Upon success, the response will contain a payload with a link format entry for each RD function discovered, indicating the URI of the RD function returned and the corresponding Resource Type. When performing multicast discovery, the multicast IP address used will depend on the scope required and the multicast capabilities of the network. A Resource Directory MAY provide hints about the content-formats it supports in the links it exposes or registers, using the \"ct\" link"} {"id": "q-en-resource-directory-c955419a38115de312f9189507679ebc200d8f7bbe61a0e7f71f03ddaf277fe4", "old_text": "An RD implementation of this specification MUST support query filtering for the rt parameter as defined in RFC6690. 
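A sketch of the fault-tolerance point raised in the discussion above: a client that discovered several candidate RD URIs should not always pick the first one, and should fall back when one is unreachable. The URIs and the try_register callback are placeholders, not part of the quoted draft text.

```python
# Sketch: register with any reachable Resource Directory out of several candidates.

import random

def register_with_any(rd_uris, try_register):
    candidates = list(rd_uris)
    random.shuffle(candidates)           # avoid every client preferring the same RD
    for uri in candidates:
        try:
            return try_register(uri)     # first RD that answers wins
        except OSError:                  # e.g. timeout or network error
            continue
    raise RuntimeError("no resource directory reachable")

def fake_register(uri):                  # stand-in for a real CoAP registration request
    if "db8::1" in uri:
        raise OSError("timeout")
    return f"registered at {uri}"

print(register_with_any(["coap://[2001:db8::1]/rd", "coap://[2001:db8::2]/rd"], fake_register))
```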
The URI Discovery operation can yield multiple URIs of a particular resource type. The client may use any of the discovered addresses initially.", "comments": "This contains what I think of the easy fixes to the requirements of replicated RDs. See URL for more discussion, please comment on whether you think those changes are acceptable.\nThe minimum for interoperability IMO is that client's don't always pick the first choice -- because if they do, no fault tolerance is possible. Do you think that that even this is too much specification of behavior, or did I write more in the PR than what would be needed to satisfy that minimum requirement?\nchrysn schreef op 2018-02-01 17:03: Is that so? The first choice of one client is not necessarily the first choice of another when we are confronted with RDs which have contact problems. Several protocols have handled this problem: like dhcp for example. Why not copy that behavior?\nI think we're talking about different \"first\"s here. I'm not concerned with what happens when multiple servers answer the multicast CoAP request. (Led to an interesting chat with PIM people but is not a matter that should concern the RD). What I do think we should give guidance is what happens if the RDAO announces multiple addresses (something explicitly allowed in the current draft), or the URI discovery response contains more than one rt=\"URL\" link Both are within what I think we describe as acceptable server behavior. If clients are unaware and just always pick the first line (because it looks like a zero-or-one situation to the implementers), interoperability issues will arise when two are announced an the first one fails.\nOk, got it, see below chrysn schreef op 2018-02-02 12:11: Yeah, that's a nuisance. Someone seems to know what they want that for, annd probably have some intelligent algorithm to break ties.......... One might say that client MAY choose their RD randomly. I assume that is a multicast response? Then the first response is probably the closest and the best choice. And when one MC response contains multiple RD links, We are not responsible, and something clever is being thought out somewhere, we hope. I think DHCP specifies clients to wait so many seconds before choosing a server. When the first one fails, the client will try the second one, und so weiter ... What am I missing still?\nfor interoperability, the client behavior when multiple URIs are discovered, or the RD does not answer, need not be specified. The text starts to look more and more like Bonjour", "new_text": "An RD implementation of this specification MUST support query filtering for the rt parameter as defined in RFC6690. While the link targets in this discovery step are often expressed in path-absolute form, this is not a requirement. Clients SHOULD therefore accept URIs of all schemes they support, both in absolute and relative forms, and not limit the set of discovered URIs to those hosted at the address used for URI discovery. The URI Discovery operation can yield multiple URIs of a particular resource type. The client may use any of the discovered addresses initially."} {"id": "q-en-resource-directory-c955419a38115de312f9189507679ebc200d8f7bbe61a0e7f71f03ddaf277fe4", "old_text": "on its configuration. The registration message is a list of links to registration resources of the endpoints that belong to that group. The endpoints MAY be hosted by a different RD than the the group hosting RD. In that case the endpoint link points to the registration resource on the other RD. 
The commissioning tool SHOULD not send any target attributes with the links to the registration resources, and the resource directory", "comments": "This contains what I think of the easy fixes to the requirements of replicated RDs. See URL for more discussion, please comment on whether you think those changes are acceptable.\nThe minimum for interoperability IMO is that client's don't always pick the first choice -- because if they do, no fault tolerance is possible. Do you think that that even this is too much specification of behavior, or did I write more in the PR than what would be needed to satisfy that minimum requirement?\nchrysn schreef op 2018-02-01 17:03: Is that so? The first choice of one client is not necessarily the first choice of another when we are confronted with RDs which have contact problems. Several protocols have handled this problem: like dhcp for example. Why not copy that behavior?\nI think we're talking about different \"first\"s here. I'm not concerned with what happens when multiple servers answer the multicast CoAP request. (Led to an interesting chat with PIM people but is not a matter that should concern the RD). What I do think we should give guidance is what happens if the RDAO announces multiple addresses (something explicitly allowed in the current draft), or the URI discovery response contains more than one rt=\"URL\" link Both are within what I think we describe as acceptable server behavior. If clients are unaware and just always pick the first line (because it looks like a zero-or-one situation to the implementers), interoperability issues will arise when two are announced an the first one fails.\nOk, got it, see below chrysn schreef op 2018-02-02 12:11: Yeah, that's a nuisance. Someone seems to know what they want that for, annd probably have some intelligent algorithm to break ties.......... One might say that client MAY choose their RD randomly. I assume that is a multicast response? Then the first response is probably the closest and the best choice. And when one MC response contains multiple RD links, We are not responsible, and something clever is being thought out somewhere, we hope. I think DHCP specifies clients to wait so many seconds before choosing a server. When the first one fails, the client will try the second one, und so weiter ... What am I missing still?\nfor interoperability, the client behavior when multiple URIs are discovered, or the RD does not answer, need not be specified. The text starts to look more and more like Bonjour", "new_text": "on its configuration. The registration message is a list of links to registration resources of the endpoints that belong to that group. The registration resources MAY be located on different hosts than the group hosting RD. In that case the endpoint link points to the registration resource on the other RD. The commissioning tool SHOULD NOT attempt to enter a foreign registration in a group unless it found it in the group RD's lookup results, or has other reasons to assume that the foreign registration will be accepted. The commissioning tool SHOULD not send any target attributes with the links to the registration resources, and the resource directory"} {"id": "q-en-resource-directory-c955419a38115de312f9189507679ebc200d8f7bbe61a0e7f71f03ddaf277fe4", "old_text": "does not need to make them accessible to clients. Clients SHOULD NOT attempt to dereference or manipulate them. 7.3. 
Using the Accept Option, the requester can control whether this list", "comments": "This contains what I think of the easy fixes to the requirements of replicated RDs. See URL for more discussion, please comment on whether you think those changes are acceptable.\nThe minimum for interoperability IMO is that client's don't always pick the first choice -- because if they do, no fault tolerance is possible. Do you think that that even this is too much specification of behavior, or did I write more in the PR than what would be needed to satisfy that minimum requirement?\nchrysn schreef op 2018-02-01 17:03: Is that so? The first choice of one client is not necessarily the first choice of another when we are confronted with RDs which have contact problems. Several protocols have handled this problem: like dhcp for example. Why not copy that behavior?\nI think we're talking about different \"first\"s here. I'm not concerned with what happens when multiple servers answer the multicast CoAP request. (Led to an interesting chat with PIM people but is not a matter that should concern the RD). What I do think we should give guidance is what happens if the RDAO announces multiple addresses (something explicitly allowed in the current draft), or the URI discovery response contains more than one rt=\"URL\" link Both are within what I think we describe as acceptable server behavior. If clients are unaware and just always pick the first line (because it looks like a zero-or-one situation to the implementers), interoperability issues will arise when two are announced an the first one fails.\nOk, got it, see below chrysn schreef op 2018-02-02 12:11: Yeah, that's a nuisance. Someone seems to know what they want that for, annd probably have some intelligent algorithm to break ties.......... One might say that client MAY choose their RD randomly. I assume that is a multicast response? Then the first response is probably the closest and the best choice. And when one MC response contains multiple RD links, We are not responsible, and something clever is being thought out somewhere, we hope. I think DHCP specifies clients to wait so many seconds before choosing a server. When the first one fails, the client will try the second one, und so weiter ... What am I missing still?\nfor interoperability, the client behavior when multiple URIs are discovered, or the RD does not answer, need not be specified. The text starts to look more and more like Bonjour", "new_text": "does not need to make them accessible to clients. Clients SHOULD NOT attempt to dereference or manipulate them. A Resource Directory can report endpoints or groups in lookup that are not hosted at the same address. While the setup and management of such a distributed system is out of scope for this document, lookup clients MUST be prepared to see arbitrary URIs as registration or group resources in the results. 7.3. Using the Accept Option, the requester can control whether this list"} {"id": "q-en-resource-directory-4f8d4b6e13be3fc2f8a633f44d8a5c37b00e3c680f62ed71f556f6c3ce4332df", "old_text": "associated domain of the registration. A Context is a base URL that gives scheme and (typically) authority information about an Endpoint. The Context of an Endpoint is provided at registration time, and is used by the Resource Directory to resolve relative references inside the registration into absolute URIs. This is a distinct concept from the \"link context\" defined in RFC8288. 
The target of a link is the destination address (URI) of the link.", "comments": "This allows for gateway devices that register their mapped legacy devices individually. Closes: URL The delta is smaller than I expected because at some places, paths were already allowed (eg. in the IANA table for \"con\"). One place where I'm unsure if we should change it because I don't know what it actually means is the \"scheme, IP, port\" annotation in fig-hierarchy -- what does it mean there?\nwhat is a path no scheme form? An example might help here.\npath-noscheme form is one of the possible ABNF forms for a relative reference from ; it basically means \"relative references that does not even start with a slash\". I'm conflicted about having an example in the draft (our examples already feel like a test suite), but I'll give one here and we can still consider moving it in: ~\nMerging this with a clarification for the ABNF item, so we have a single version to talk about later today.\nI'm aware this is a discussion we've already had, but back then we dropped it for its complexity and because we had no use cases. Matthias has brought up use cases for allowing a path in the con as part of the -13 reviews. He argues that a gateway bridging into a non-CoAP world would want to register its proxees (is that a word?) individually, and if it doesn't use name-based virtual hosting (which is kind of rare in CoAP and needs wildcard DNS records), could go for path-based hosting. As I see it, this would be a trivial addition from the RD point of view (\"just allow it\"). The onus is on the endpoint / gateway to do it right, ie. to never use path-absolute URI references unless they mean it. What does make this a bit more complicated is that none of the content formats we currently describe allows using that meaningfully -- but we are not limiting the RD to those content formats, and they may (and hopefully will) still change while we're finishing RD. On the editing track, I would keep those cases out of the examples (I'm game for building a test suite, but the examples shouldn't be that). Then, the editing becomes a matter of replacing \"specified URI MUST NOT have a path component\" with \"typically does not have a path component\", adding the one or the other \"typically\", and being done with it. (This would also make URL obsolete, as those components would be allowed). NAME (and possibly NAME I'd assume you won't be too happy with (based on previous discussions of that topic), but do you see Matthias' point, or could point out concrete issues you see from allowing this?\nI would prefer that not be made obsolete on this. You should be able to have a URI w/ a path and a relative URI which are then combined. If we need to combine things like query parameters then it is going to get really messy really fast because there are not good rules about that. I don't have any problems w/ allowing for the con to have a path and resolving with it. I just think that we need to establish what the rules are in the RD document rather than using the horrible link format rules.\nThere are very good rules about that query parts and fragments in the Base URI: Anything that is not an empty string or starts with a \"#\" removes the fragment identifier and the query parameters. The Base URI's fragment identifier is never kept. (Derived from RFC3986 Section 5.2). 
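To illustrate the RFC 3986 resolution rules cited just above with a context ("con") that carries a path, a sketch using Python's standard library; an http base is used only because urllib's joining is scheme-aware, and the host and paths are invented. Note how the base's query and fragment do not survive resolution of a path reference.

```python
# Sketch: resolving relative registration links against a base URI that has a path.

from urllib.parse import urljoin

base = "http://gw.example.com/proxy/dev1?token=abc#frag"

print(urljoin(base, "temp"))        # http://gw.example.com/proxy/temp
print(urljoin(base, "./temp"))      # http://gw.example.com/proxy/temp
print(urljoin(base, "/sensors/t"))  # http://gw.example.com/sensors/t
print(urljoin(base, "#other"))      # http://gw.example.com/proxy/dev1?token=abc#other
```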
I'll try to write text on this issue on the weekend (NAME a \"no we can't have that because\" would be more appreciated today than later), let's talk about again when we have that. Dear list RD allows the \"con\" parameter to provide a base URI for relative resource URIs, when the registration is done from a different source address then the registered endpoint itself. \"con\" is restricted to the scheme and authority part of a URI, that is, no path segments are allowed. This came from the assumption that all CoRE devices will naturally correlate with socket-address endpoints. However, a registered device(=endpoint, \"ep\") might not be a natural endpoint. When using gateways or device proxies, the base URI will have path segments and multiple logically different endpoints will share the same socket-address (e.g., URL, URL, URL). This makes it expensive to move such logical endpoints, as updating \"con\" will not help. All links have to be removed, changed to the new prefix, and registered anew. This is a problem for some applications (including a bigger alliance using CoAP), in particular because the patch mechanism is still to be defined - yet rewriting URI prefixes still will not be really efficient even when having patch registration updates. I thus propose to generalize \"con\" to be any base URI that will be applied to all relative links an endpoint registered. If there is a conflict with a deployed mechanism, we could look into adopting \"anchor\" and let RD use it similarly to \"con\", but allowing full base URIs (including paths). Or we align with RFC 8288 altogether and replace \"con\" with \"anchor\": URL Ciao, Matthias", "new_text": "associated domain of the registration. A Context is a base URL that typically gives scheme and authority information about an Endpoint. The Context of an Endpoint is provided at registration time, and is used by the Resource Directory to resolve relative references inside the registration into absolute URIs. This is a distinct concept from the \"link context\" defined in RFC8288. The target of a link is the destination address (URI) of the link."} {"id": "q-en-resource-directory-4f8d4b6e13be3fc2f8a633f44d8a5c37b00e3c680f62ed71f556f6c3ce4332df", "old_text": "one ep (endpoint with a unique name) one con (a string describing the scheme://authority part) one lt (lifetime),", "comments": "This allows for gateway devices that register their mapped legacy devices individually. Closes: URL The delta is smaller than I expected because at some places, paths were already allowed (eg. in the IANA table for \"con\"). One place where I'm unsure if we should change it because I don't know what it actually means is the \"scheme, IP, port\" annotation in fig-hierarchy -- what does it mean there?\nwhat is a path no scheme form? An example might help here.\npath-noscheme form is one of the possible ABNF forms for a relative reference from ; it basically means \"relative references that does not even start with a slash\". I'm conflicted about having an example in the draft (our examples already feel like a test suite), but I'll give one here and we can still consider moving it in: ~\nMerging this with a clarification for the ABNF item, so we have a single version to talk about later today.\nI'm aware this is a discussion we've already had, but back then we dropped it for its complexity and because we had no use cases. Matthias has brought up use cases for allowing a path in the con as part of the -13 reviews. 
He argues that a gateway bridging into a non-CoAP world would want to register its proxees (is that a word?) individually, and if it doesn't use name-based virtual hosting (which is kind of rare in CoAP and needs wildcard DNS records), could go for path-based hosting. As I see it, this would be a trivial addition from the RD point of view (\"just allow it\"). The onus is on the endpoint / gateway to do it right, ie. to never use path-absolute URI references unless they mean it. What does make this a bit more complicated is that none of the content formats we currently describe allows using that meaningfully -- but we are not limiting the RD to those content formats, and they may (and hopefully will) still change while we're finishing RD. On the editing track, I would keep those cases out of the examples (I'm game for building a test suite, but the examples shouldn't be that). Then, the editing becomes a matter of replacing \"specified URI MUST NOT have a path component\" with \"typically does not have a path component\", adding the one or the other \"typically\", and being done with it. (This would also make URL obsolete, as those components would be allowed). NAME (and possibly NAME I'd assume you won't be too happy with (based on previous discussions of that topic), but do you see Matthias' point, or could point out concrete issues you see from allowing this?\nI would prefer that not be made obsolete on this. You should be able to have a URI w/ a path and a relative URI which are then combined. If we need to combine things like query parameters then it is going to get really messy really fast because there are not good rules about that. I don't have any problems w/ allowing for the con to have a path and resolving with it. I just think that we need to establish what the rules are in the RD document rather than using the horrible link format rules.\nThere are very good rules about that query parts and fragments in the Base URI: Anything that is not an empty string or starts with a \"#\" removes the fragment identifier and the query parameters. The Base URI's fragment identifier is never kept. (Derived from RFC3986 Section 5.2). I'll try to write text on this issue on the weekend (NAME a \"no we can't have that because\" would be more appreciated today than later), let's talk about again when we have that. Dear list RD allows the \"con\" parameter to provide a base URI for relative resource URIs, when the registration is done from a different source address then the registered endpoint itself. \"con\" is restricted to the scheme and authority part of a URI, that is, no path segments are allowed. This came from the assumption that all CoRE devices will naturally correlate with socket-address endpoints. However, a registered device(=endpoint, \"ep\") might not be a natural endpoint. When using gateways or device proxies, the base URI will have path segments and multiple logically different endpoints will share the same socket-address (e.g., URL, URL, URL). This makes it expensive to move such logical endpoints, as updating \"con\" will not help. All links have to be removed, changed to the new prefix, and registered anew. This is a problem for some applications (including a bigger alliance using CoAP), in particular because the patch mechanism is still to be defined - yet rewriting URI prefixes still will not be really efficient even when having patch registration updates. 
I thus propose to generalize \"con\" to be any base URI that will be applied to all relative links an endpoint registered. If there is a conflict with a deployed mechanism, we could look into adopting \"anchor\" and let RD use it similarly to \"con\", but allowing full base URIs (including paths). Or we align with RFC 8288 altogether and replace \"con\" with \"anchor\": URL Ciao, Matthias", "new_text": "one ep (endpoint with a unique name) one con (a string typically describing the scheme://authority part) one lt (lifetime),"} {"id": "q-en-resource-directory-4f8d4b6e13be3fc2f8a633f44d8a5c37b00e3c680f62ed71f556f6c3ce4332df", "old_text": "Context (optional). This parameter sets the Default Base URI under which the request's links are to be interpreted. The specified URI MUST NOT have a path component of its own, but MUST be suitable as a base URI to resolve any relative references given in the registration. The parameter is therefore of the shape \"scheme://authority\" for HTTP and CoAP URIs. In the absence of this parameter the scheme of the protocol, source address and source port of the registration request are assumed. This parameter is mandatory when the directory is filled by a third party such as an commissioning tool. If the endpoint uses an ephemeral port to register with, it MUST include the con parameter in the registration to provide a valid network path. If the endpoint which is located behind a NAT gateway is registering with a Resource Directory which is on the network service side of the NAT gateway, the endpoint MUST use a persistent port for the outgoing registration in order to provide the NAT gateway with a valid network address for replies and incoming requests. Additional registration attributes (optional). The endpoint can pass any parameter registered at iana-registry to the", "comments": "This allows for gateway devices that register their mapped legacy devices individually. Closes: URL The delta is smaller than I expected because at some places, paths were already allowed (eg. in the IANA table for \"con\"). One place where I'm unsure if we should change it because I don't know what it actually means is the \"scheme, IP, port\" annotation in fig-hierarchy -- what does it mean there?\nwhat is a path no scheme form? An example might help here.\npath-noscheme form is one of the possible ABNF forms for a relative reference from ; it basically means \"relative references that does not even start with a slash\". I'm conflicted about having an example in the draft (our examples already feel like a test suite), but I'll give one here and we can still consider moving it in: ~\nMerging this with a clarification for the ABNF item, so we have a single version to talk about later today.\nI'm aware this is a discussion we've already had, but back then we dropped it for its complexity and because we had no use cases. Matthias has brought up use cases for allowing a path in the con as part of the -13 reviews. He argues that a gateway bridging into a non-CoAP world would want to register its proxees (is that a word?) individually, and if it doesn't use name-based virtual hosting (which is kind of rare in CoAP and needs wildcard DNS records), could go for path-based hosting. As I see it, this would be a trivial addition from the RD point of view (\"just allow it\"). The onus is on the endpoint / gateway to do it right, ie. to never use path-absolute URI references unless they mean it. 
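As a hedged illustration of why path-absolute references are the risky case mentioned above, the following sketch resolves both reference forms against a hypothetical gateway base URI that carries a path; the host name and paths are invented for the example.

# A path-noscheme reference stays under the gateway's per-device
# prefix, while a path-absolute reference escapes it.
from urllib.parse import urljoin, uses_relative, uses_netloc

uses_relative.append("coap")
uses_netloc.append("coap")

con = "coap://gw.example.com/dev1/"     # hypothetical con with a path

print(urljoin(con, "sensors/temp"))     # coap://gw.example.com/dev1/sensors/temp
print(urljoin(con, "/sensors/temp"))    # coap://gw.example.com/sensors/temp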
What does make this a bit more complicated is that none of the content formats we currently describe allows using that meaningfully -- but we are not limiting the RD to those content formats, and they may (and hopefully will) still change while we're finishing RD. On the editing track, I would keep those cases out of the examples (I'm game for building a test suite, but the examples shouldn't be that). Then, the editing becomes a matter of replacing \"specified URI MUST NOT have a path component\" with \"typically does not have a path component\", adding the one or the other \"typically\", and being done with it. (This would also make URL obsolete, as those components would be allowed). NAME (and possibly NAME I'd assume you won't be too happy with (based on previous discussions of that topic), but do you see Matthias' point, or could point out concrete issues you see from allowing this?\nI would prefer that not be made obsolete on this. You should be able to have a URI w/ a path and a relative URI which are then combined. If we need to combine things like query parameters then it is going to get really messy really fast because there are not good rules about that. I don't have any problems w/ allowing for the con to have a path and resolving with it. I just think that we need to establish what the rules are in the RD document rather than using the horrible link format rules.\nThere are very good rules about that query parts and fragments in the Base URI: Anything that is not an empty string or starts with a \"#\" removes the fragment identifier and the query parameters. The Base URI's fragment identifier is never kept. (Derived from RFC3986 Section 5.2). I'll try to write text on this issue on the weekend (NAME a \"no we can't have that because\" would be more appreciated today than later), let's talk about again when we have that. Dear list RD allows the \"con\" parameter to provide a base URI for relative resource URIs, when the registration is done from a different source address then the registered endpoint itself. \"con\" is restricted to the scheme and authority part of a URI, that is, no path segments are allowed. This came from the assumption that all CoRE devices will naturally correlate with socket-address endpoints. However, a registered device(=endpoint, \"ep\") might not be a natural endpoint. When using gateways or device proxies, the base URI will have path segments and multiple logically different endpoints will share the same socket-address (e.g., URL, URL, URL). This makes it expensive to move such logical endpoints, as updating \"con\" will not help. All links have to be removed, changed to the new prefix, and registered anew. This is a problem for some applications (including a bigger alliance using CoAP), in particular because the patch mechanism is still to be defined - yet rewriting URI prefixes still will not be really efficient even when having patch registration updates. I thus propose to generalize \"con\" to be any base URI that will be applied to all relative links an endpoint registered. If there is a conflict with a deployed mechanism, we could look into adopting \"anchor\" and let RD use it similarly to \"con\", but allowing full base URIs (including paths). Or we align with RFC 8288 altogether and replace \"con\" with \"anchor\": URL Ciao, Matthias", "new_text": "Context (optional). This parameter sets the Default Base URI under which the request's links are to be interpreted. 
The specified URI typically does not have a path component of its own, and MUST be suitable as a base URI to resolve any relative references given in the registration. The parameter is therefore usually of the shape \"scheme://authority\" for HTTP and CoAP URIs. In the absence of this parameter the scheme of the protocol, source address and source port of the registration request are assumed. This parameter is mandatory when the directory is filled by a third party such as an commissioning tool. If the endpoint uses an ephemeral port to register with, it MUST include the con parameter in the registration to provide a valid network path. If the endpoint which is located behind a NAT gateway is registering with a Resource Directory which is on the network service side of the NAT gateway, the endpoint MUST use a persistent port for the outgoing registration in order to provide the NAT gateway with a valid network address for replies and incoming requests. Endpoints that register with a con that contains a path component need to carefully consider the rules of relative URI resolution. Typically, links submitted by such an endpoint are of the \"path-noscheme\" form. Additional registration attributes (optional). The endpoint can pass any parameter registered at iana-registry to the"} {"id": "q-en-resource-directory-4096d948873e1d3fb961fce391fa6b92c4af1b1cecdfe0ee8210e7aed629c4dc", "old_text": "[ The RFC editor is asked to replace MCD1 and MCD2 with the assigned addresses throughout the document. ] 10. Two examples are presented: a Lighting Installation example in lt-ex", "comments": "Closes: URL\nAlexeys' review mentions IANA considerations for .well-known/, and while we don't have any from the sentence he referred to, we are adding to behavior of .well-known/core in the simple registration -- grabbing its POST behavior. Do we need to update the registration entry for that (or update 6690 for its registration)?", "new_text": "[ The RFC editor is asked to replace MCD1 and MCD2 with the assigned addresses throughout the document. ] 9.6. IANA is asked to extend the reference for the \"core\" URI suffix in the \"Well-Known URIs\" registry to reference this document next to RFC6690, as this defines the resource's behavior for POST requests. 10. Two examples are presented: a Lighting Installation example in lt-ex"} {"id": "q-en-resource-directory-32bef8a51464c6c5a02632ed16d500cedf54eb860fa443e8a60db272c6dde287", "old_text": "The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in RFC2119. The term \"byte\" is used in its now customary sense as a synonym for \"octet\". This specification requires readers to be familiar with all the terms and concepts that are discussed in RFC3986, RFC8288 and RFC6690.", "comments": "All expected to be uncontroversial; to be merged soon.\nCommit 3809be46013e9708a6584c4fb730e7cfecab5f22 should have been part of this as well, and is already applied to master.", "new_text": "The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. The term \"byte\" is used in its now customary sense as a synonym for \"octet\". 
This specification requires readers to be familiar with all the terms and concepts that are discussed in RFC3986, RFC8288 and RFC6690."} {"id": "q-en-resource-directory-32bef8a51464c6c5a02632ed16d500cedf54eb860fa443e8a60db272c6dde287", "old_text": "they do not limit what a server may respond under atypical circumstances. REST clients (registrant-EPs / CTs, lookup clients, RD servers during simple registrations) MUST be prepared to receive any unsuccessful code and act upon it according to its definition, options and/or payload to the best of their capabilities, falling back to failing the operation if recovery is not possible. In particular, they should retry the request upon 5.03 (Service Unavailable; 503 in HTTP) according to the Max-Age (Retry-After in HTTP) option, and fall back to link-format when receiving 4.15 (Unsupported Content-Format; 415 in HTTP). A resource directory MAY make the information submitted to it available to further directories, if it can ensure that a loop does", "comments": "All expected to be uncontroversial; to be merged soon.\nCommit 3809be46013e9708a6584c4fb730e7cfecab5f22 should have been part of this as well, and is already applied to master.", "new_text": "they do not limit what a server may respond under atypical circumstances. REST clients (registrant-EPs and CTs during registration and maintenance, lookup clients, RD servers during simple registrations) MUST be prepared to receive any unsuccessful code and act upon it according to its definition, options and/or payload to the best of their capabilities, falling back to failing the operation if recovery is not possible. In particular, they should retry the request upon 5.03 (Service Unavailable; 503 in HTTP) according to the Max-Age (Retry-After in HTTP) option, and fall back to link-format when receiving 4.15 (Unsupported Content-Format; 415 in HTTP). A resource directory MAY make the information submitted to it available to further directories, if it can ensure that a loop does"} {"id": "q-en-resource-directory-32bef8a51464c6c5a02632ed16d500cedf54eb860fa443e8a60db272c6dde287", "old_text": "8.1. An Endpoint (name, sector) pair is unique within the et of endpoints registered by the RD. An Endpoint MUST NOT be identified by its protocol, port or IP address as these may change over the lifetime of an Endpoint.", "comments": "All expected to be uncontroversial; to be merged soon.\nCommit 3809be46013e9708a6584c4fb730e7cfecab5f22 should have been part of this as well, and is already applied to master.", "new_text": "8.1. An Endpoint (name, sector) pair is unique within the set of endpoints registered by the RD. An Endpoint MUST NOT be identified by its protocol, port or IP address as these may change over the lifetime of an Endpoint."} {"id": "q-en-resource-directory-32bef8a51464c6c5a02632ed16d500cedf54eb860fa443e8a60db272c6dde287", "old_text": "IPv4 - \"all CoRE resource directories\" address MCD2 (suggestion: 224.0.1.189), from the \"IPv4 Multicast Address Space Registry\". As the address is used for discovery that may span beyond a single network, it has come from the Internetwork Control Block (224.0.1.x, RFC 5771). 
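The error-handling guidance quoted above (retry on 5.03 according to Max-Age, fall back to link-format on 4.15) can be sketched as follows; the send callable and the response attributes are assumptions made for illustration only, not the API of any particular CoAP library.

# Retry on 5.03 after the indicated Max-Age, and re-ask for
# application/link-format when the preferred format is unsupported.
import time

LINK_FORMAT = 40          # CoAP Content-Format id for application/link-format
PREFERRED = 65000         # stand-in for a preferred, possibly unsupported format

def lookup_with_fallback(send, uri, preferred=PREFERRED, max_retries=3):
    accept = preferred
    for _ in range(max_retries):
        response = send(uri, accept=accept)
        if response.code == "5.03":              # Service Unavailable
            time.sleep(response.max_age or 1)    # wait as instructed, then retry
            continue
        if response.code == "4.15" and accept != LINK_FORMAT:
            accept = LINK_FORMAT                 # fall back to link-format
            continue
        return response
    raise RuntimeError("lookup did not succeed within the retry budget")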
IPv6 - \"all CoRE resource directories\" address MCD1 (suggestions FF0X::FE), from the \"IPv6 Multicast Address Space Registry\", in the", "comments": "All expected to be uncontroversial; to be merged soon.\nCommit 3809be46013e9708a6584c4fb730e7cfecab5f22 should have been part of this as well, and is already applied to master.", "new_text": "IPv4 - \"all CoRE resource directories\" address MCD2 (suggestion: 224.0.1.189), from the \"IPv4 Multicast Address Space Registry\". As the address is used for discovery that may span beyond a single network, it has come from the Internetwork Control Block (224.0.1.x) RFC5771. IPv6 - \"all CoRE resource directories\" address MCD1 (suggestions FF0X::FE), from the \"IPv6 Multicast Address Space Registry\", in the"} {"id": "q-en-resource-directory-958cbbcfd5d5e27cc91cdc47736839b3390a9d2736a4da82eece6f3c02eb5bb8", "old_text": "depending on their current availability and capabilities as well as application requirements, thus avoiding silo like solutions. One of the crucial enablers of such design is the ability to discover resources (machines -- endpoints) capable of providing required information at a given time or acting on instructions from the end users. Imagine a scenario where endpoints installed on vehicles enable tracking of the position of these vehicles for fleet management", "comments": "as suggested by Russ in the genart review: NAME did I get the intention of the expression right?\nLGTM", "new_text": "depending on their current availability and capabilities as well as application requirements, thus avoiding silo like solutions. One of the crucial enablers of such design is the ability to discover resources (and thus the endpoints they are hosted on) capable of providing required information at a given time or acting on instructions from the end users. Imagine a scenario where endpoints installed on vehicles enable tracking of the position of these vehicles for fleet management"} {"id": "q-en-resource-directory-e3348fd9e0f63e09d108538f25c276a0170649827f5f727ac6908f899cc8e5bd", "old_text": "For cases where the device is not specifically configured with a way to find an RD, the network may want to provide a suitable default. If the address configuration of the network is performed via SLAAC, this is provided by the RDAO option rdao. If the address configuration of the network is performed via DHCP, this could be provided via a DHCP option (no such option is defined at the time of writing). Finally, if neither the device nor the network offers any specific configuration, the device may want to employ heuristics to find a", "comments": "See-Also: URL This is quite literally from my notes on URL A bit of uncertainty remains whether this change should be two-sided. Should the line above say \"When SLAAC is in use,\" (or \"When ND is in use,\"?) as well? Can RA options be used in presence of DHCP or static address configuration?\nUpdated to a more plain and not SLAAC related statement about the RDAO as per last interim. Thank you for the work put into this document. As you can noticed, I have cleared my 2 DISCUSS points (kept below for archive) thank you also for replying to my comments over email. BTW, I appreciated the use of ASCII art to represent an entity-relationship diagram ! I hope that this helps to improve the document, -- Section 4.1 -- It will be trivial to fix, in IPv6 address configuration (SLAAC vs. DHCP) is orthogonal to DHCP 'other-information'. 
E.g., even if address is configured via SLAAC, DHCPv6 other-information can be used to configure the Recursive DNS Server (or possibly the RD). -- Section 4.1.1 -- Another trivial DISCUSS to fix: in which message is this RDAO sent ? I guess unicast Router Advertisement but this MUST be specified. In general, I wonder how much interactions and exchanges of ideas have happened in the long history of this document with the DNSSD (DNS Service Discovery) Working Group that has very similar constraints (sleeping nodes) and same objectives. -- Section 2 -- To be honest: I am not too much an APP person; therefore, I was surprised to see \"source address (URI)\" used to identify the \"anchor=\"... I do not mind too much the use of \"destination address (URI)\" as it is really a destination but the anchor does not appear to me as a \"source address\". Is it common terminology ? If so, then ignore my COMMENT, else I suggest to change to \"destination URI\" and simply \"anchor\" ? -- Section 3.3 -- Should the lifetime be specified in seconds at first use in the text? -- Section 3.6 -- Is the use of \"M2M\" still current? I would suggest to use the word \"IoT\" esp when 6LBR (assuming it is 6LO Border Router) is cited later. Please expand and add reference for 6LBR. Using 'modern' technologies (cfr LP-WAN WG) could also add justification to section 3.5. -- Section 4.1 -- About \"coap://[MCD1]/.well-known/core?rt=core.rd*\", what is the value of MCD1 ? The IANA section discuss about it but it may help the reader to give a hint before (or simply use TBDx that is common in I-D). Any reason to use \"address\" rather than \"group\" in \"When answering a multicast request directed at a link-local address\" ? Later \"to use one of their own routable addresses for registration.\" but there can be multiple configured prefixes... Which one should the RD select ? Should this be specified ? As a co-author of RFC 8801, I would have appreciated to read PvD option mentionned to discover the RD. Any reason why PvD Option cannot be used ? -- Section 4.1.1 -- I suggest to swap the reserved and lifetime fields in order to be able to use a lifetime in units of seconds (to be consistent with other NDP options). -- Section 5 -- May be I missed it, but, can an end-point register multiple base URI ? E.g., multiple IPv6 addresses. -- Section 9.2 -- For information, value 38 is already assigned to RFC 8781. -- Section 2 -- The extra new lines when defining \"Sector\" are slighly confusing. Same applies to \"Target\" and \"Context\". This is cosmetic only.", "new_text": "For cases where the device is not specifically configured with a way to find an RD, the network may want to provide a suitable default. The IPv6 Neighbor Discovery option RDAO rdao can do that. When DHCP is in use, this could be provided via a DHCP option (no such option is defined at the time of writing). Finally, if neither the device nor the network offers any specific configuration, the device may want to employ heuristics to find a"} {"id": "q-en-resource-directory-e3348fd9e0f63e09d108538f25c276a0170649827f5f727ac6908f899cc8e5bd", "old_text": "The First-Come-First-Remembered policy is added as an example and a potential default behavior. changes from -24 to -25 Large rework of section 7 (Security policies)", "comments": "See-Also: URL This is quite literally from my notes on URL A bit of uncertainty remains whether this change should be two-sided. Should the line above say \"When SLAAC is in use,\" (or \"When ND is in use,\"?) as well? 
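For the RDAO discussion above, this is a rough sketch of how such a Router Advertisement option could be serialized, assuming a layout of type, length in units of 8 octets, a 16-bit reserved field, a 32-bit lifetime and the RD's IPv6 address; the type value is a placeholder, since the actual code point is assigned by IANA, and the field order shown is exactly the point the review comments debate.

import struct
from ipaddress import IPv6Address

ND_OPT_TYPE_RDAO = 0x99          # placeholder, NOT the registered value

def pack_rdao(rd_address: str, lifetime: int) -> bytes:
    length = 3                   # 24 octets total, in units of 8 octets
    return struct.pack("!BBHI", ND_OPT_TYPE_RDAO, length, 0, lifetime) \
        + IPv6Address(rd_address).packed

option = pack_rdao("2001:db8::1", 3600)
assert len(option) == 24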
Can RA options be used in presence of DHCP or static address configuration?\nUpdated to a more plain and not SLAAC related statement about the RDAO as per last interim. Thank you for the work put into this document. As you can noticed, I have cleared my 2 DISCUSS points (kept below for archive) thank you also for replying to my comments over email. BTW, I appreciated the use of ASCII art to represent an entity-relationship diagram ! I hope that this helps to improve the document, -- Section 4.1 -- It will be trivial to fix, in IPv6 address configuration (SLAAC vs. DHCP) is orthogonal to DHCP 'other-information'. E.g., even if address is configured via SLAAC, DHCPv6 other-information can be used to configure the Recursive DNS Server (or possibly the RD). -- Section 4.1.1 -- Another trivial DISCUSS to fix: in which message is this RDAO sent ? I guess unicast Router Advertisement but this MUST be specified. In general, I wonder how much interactions and exchanges of ideas have happened in the long history of this document with the DNSSD (DNS Service Discovery) Working Group that has very similar constraints (sleeping nodes) and same objectives. -- Section 2 -- To be honest: I am not too much an APP person; therefore, I was surprised to see \"source address (URI)\" used to identify the \"anchor=\"... I do not mind too much the use of \"destination address (URI)\" as it is really a destination but the anchor does not appear to me as a \"source address\". Is it common terminology ? If so, then ignore my COMMENT, else I suggest to change to \"destination URI\" and simply \"anchor\" ? -- Section 3.3 -- Should the lifetime be specified in seconds at first use in the text? -- Section 3.6 -- Is the use of \"M2M\" still current? I would suggest to use the word \"IoT\" esp when 6LBR (assuming it is 6LO Border Router) is cited later. Please expand and add reference for 6LBR. Using 'modern' technologies (cfr LP-WAN WG) could also add justification to section 3.5. -- Section 4.1 -- About \"coap://[MCD1]/.well-known/core?rt=core.rd*\", what is the value of MCD1 ? The IANA section discuss about it but it may help the reader to give a hint before (or simply use TBDx that is common in I-D). Any reason to use \"address\" rather than \"group\" in \"When answering a multicast request directed at a link-local address\" ? Later \"to use one of their own routable addresses for registration.\" but there can be multiple configured prefixes... Which one should the RD select ? Should this be specified ? As a co-author of RFC 8801, I would have appreciated to read PvD option mentionned to discover the RD. Any reason why PvD Option cannot be used ? -- Section 4.1.1 -- I suggest to swap the reserved and lifetime fields in order to be able to use a lifetime in units of seconds (to be consistent with other NDP options). -- Section 5 -- May be I missed it, but, can an end-point register multiple base URI ? E.g., multiple IPv6 addresses. -- Section 9.2 -- For information, value 38 is already assigned to RFC 8781. -- Section 2 -- The extra new lines when defining \"Sector\" are slighly confusing. Same applies to \"Target\" and \"Context\". This is cosmetic only.", "new_text": "The First-Come-First-Remembered policy is added as an example and a potential default behavior. RD discovery: Drop the previously stated assumption that RDAO and any DHCP options would only be used together with SLAAC and DHCP for address configuration, respectivly. 
changes from -24 to -25 Large rework of section 7 (Security policies)"} {"id": "q-en-resource-directory-b6f7edcdf98a1c1df9b2d4c92deebd62e85cc65118699f3535188be4400a1f25", "old_text": "name and the same value, allowing for a trailing \"*\" wildcard operator as in Section 4.1 of RFC6690. Attributes that are defined as \"link-type\" match if the search value matches any of their values (see Section 4.1 of RFC6690; e.g. \"?if=core.s\" matches \";if=\"abc core.s\";\"). A resource link also matches a search criterion if its endpoint would match the criterion, and vice versa, an endpoint link matches a search criterion if any of its resource links matches it. Note that \"href\" is a valid search criterion and matches target references. Like all search criteria, on a resource lookup it can", "comments": "... and a few other cases of example errors and use of unregistered names.", "new_text": "name and the same value, allowing for a trailing \"*\" wildcard operator as in Section 4.1 of RFC6690. Attributes that are defined as \"link-type\" match if the search value matches any of their values (see Section 4.1 of RFC6690; e.g. \"?if=tag:example.net,2020:sensor\" matches \";if=\"example.regname tag:example.net,2020:sensor\";\"). A resource link also matches a search criterion if its endpoint would match the criterion, and vice versa, an endpoint link matches a search criterion if any of its resource links matches it. Note that \"href\" is a valid search criterion and matches target references. Like all search criteria, on a resource lookup it can"} {"id": "q-en-resource-directory-b6f7edcdf98a1c1df9b2d4c92deebd62e85cc65118699f3535188be4400a1f25", "old_text": "identifiers; the precise semantics of such links are left to future specifications. The following example shows a client performing an endpoint type (et) lookup with the value oic.d.sensor (which is currently a registered rt value): 7.", "comments": "... and a few other cases of example errors and use of unregistered names.", "new_text": "identifiers; the precise semantics of such links are left to future specifications. The following example shows a client performing an endpoint lookup limited to endpoints of endpoint type \"tag:example.com,2020:platform\": 7."} {"id": "q-en-resource-directory-b6f7edcdf98a1c1df9b2d4c92deebd62e85cc65118699f3535188be4400a1f25", "old_text": "is specified in the coap-group resource. The presence sensor can learn the presence of groups that support resources with rt=light in its own sector by sending the same request, as used by the luminary. The presence sensor learns the multicast address to use for sending messages to the luminaries. 10.2.", "comments": "... and a few other cases of example errors and use of unregistered names.", "new_text": "is specified in the coap-group resource. The presence sensor can learn the presence of groups that support resources with rt=tag:example.com,2020:light in its own sector by sending the same request, as used by the luminary. The presence sensor learns the multicast address to use for sending messages to the luminaries. 10.2."} {"id": "q-en-resource-directory-b6f7edcdf98a1c1df9b2d4c92deebd62e85cc65118699f3535188be4400a1f25", "old_text": "RDAO: Clarify that it is an option for RAs and not other ND messages. changes from -24 to -25 Large rework of section 7 (Security policies)", "comments": "... and a few other cases of example errors and use of unregistered names.", "new_text": "RDAO: Clarify that it is an option for RAs and not other ND messages. 
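A small sketch of the update behaviour described above, building the request path with only the changed parameters carried in the query; the registration resource path and the parameter values are invented for illustration.

from urllib.parse import urlencode

def update_query(registration_path, changed_params):
    """Return the request path for a POST that updates a registration."""
    if not changed_params:
        return registration_path          # plain refresh, no parameters
    return registration_path + "?" + urlencode(changed_params)

print(update_query("/rd/4521", {}))                      # /rd/4521
print(update_query("/rd/4521", {"lt": 43200}))           # /rd/4521?lt=43200
print(update_query("/rd/4521", {"con": "coap://[2001:db8::2]"}))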
Examples: Use example URIs rather than unclear reg names (unless it's RFC6690 examples, which were kept for continuity) changes from -24 to -25 Large rework of section 7 (Security policies)"} {"id": "q-en-resource-directory-1df8d512d22b612934f94e99a1dbd86e986265b7017189d3468a2b33cade1660", "old_text": "at the CoAP layer then it may be inclined to use the endpoint name for looking up what information to provision to the malicious device. Endpoint authentication needs to be checked independently of whether there are configured requirements on the credentials for a given endpoint name (secure-ep) or whether arbitrary names are accepted (arbitrary-ep). Simple registration could be used to circumvent address-based access control: An attacker would send a simple registration request with", "comments": "Authentication alone is never sufficient; with the First-Come-Fist-Remembered policy of , it should be easier to see how this is still authorization. See-Also: URL Thanks for addressing my DISCUSS and COMMENT feedback", "new_text": "at the CoAP layer then it may be inclined to use the endpoint name for looking up what information to provision to the malicious device. Endpoint authorization needs to be checked on registration and registration resource operations independently of whether there are configured requirements on the credentials for a given endpoint name (secure-ep) or whether arbitrary names are accepted (arbitrary-ep). Simple registration could be used to circumvent address-based access control: An attacker would send a simple registration request with"} {"id": "q-en-resource-directory-814e81dff78dd4f5bb53b33227fa893e677132faa817af05a450a97e97ef462b", "old_text": "and its links. Commissioning Tool (CT) is a device that assists during the installation of the network by assigning values to parameters, naming endpoints and groups, or adapting the installation to the needs of the applications. Registrant-ep is the endpoint that is registered into the RD. The", "comments": "Closes: URL See-Also: URL\nFrom Benjamin Kaduk's comments: Is it? Paging NAME NAME and NAME as I honestly don't know. Trouble is: The way CTs come across in the examples, the CT goes in, does its job and leaves. But the RD's lifetimes are finite, and \"Registrations in the RD are soft state and need to be periodically refreshed\". So what happens when the time runs out?\nA CT can indeed come in later to assist in a reconfiguration of the network/applications, e.g. adding a device, removing, or changing members of a group, or changing parameters of devices. When the CT performed a registration on behalf of an IoT device the expectation is that a very long lifetime will be picked, such that the CT doesn't have to come back to refresh. This looks problematic in case the RD state is easily lost - e.g. suppose it reboots and loses all such entries? Then there may be no CT around to send the entries to the RD again. (I would assume in this scenario that the CT's registration are more \"permanent\" so they would survive a reboot for example. Can be restored back from a database. In another project I work in the CT can perform such persistent registrations that survive reboots, using a special 'max' value of lifetime that signals persistent registration.)\nThanks, with that I can at least phrase an answer to the immediate question. Would it, in the deployments you describe, be conceivable that the CT is recalled on site on extraordinary events, like when the RD host is migrated to a different system or address? 
On the reboots, I hope that we are OK with the current situation that an RD would persist its state when it can, and administrators would not use the more volatile variety when they intend to configure long lifetimes. (If we aren't, we'd need to start thinking about negotiation of lifetimes, and that'd bring a new aspect of complexity I hope to avoid).\nchrysn wrote: Hi. I thought I had commented on this somewhere already. A BRSKI-based \"CT\" would have a registrar which does not really leave. If we are talking about how I think that brski-async-enroll will work, then the CT will enter the no-internet-zone, do some connections, and then leave. (Think new home construction in a new suburb) The RD lifetimes could be finite, but a few years in duration. The home owner, upon getting the keys (physically and cryptographically), would of course, rekey the entire house, and now there would be a management system to update RD lifetimes. Thread has a CT. I have no idea how it works, but perhaps NAME could comment. Zigbee CHIP project is doing something, but it's all NDA for now. I think that, when the time runs out, if there is nothing else to manage the system, then it should fail. I think that's okay. I think that the timeouts won't be minutes, but months... yes, there are issues. I don't think that the RD document needs to cover them though. {Feel free to invite me to collaborate on a script for the sequel to Sneakers/Leverage/MissionImpossible/EnemyOfTheState. I suggest it star Tom Greene rather than WillSmith or TomCruise.}\nI think that there is probably a followup document that provides a way to backup/restore RD host data. OPC UA told me that they have created a spec for device independent backup/restore of configuration data in the case of device failure, but I don't have the details yet.\nThis might fit into -- any exchange format between the RDs could also be re-used for persistence. (With the caveat that the receiving server would need to inhabit the same address as the original server, or otherwise it could not service registration updates.)\nWell, could it be an anycast address so that the RD could be highly available?\nI don't see this as a viable design for a deployment. A change of system/address/network in the backend should never require the CT to come in and redo commissioning operations on IoT devices. Feeding the 'new RD' with stored information from a database is what would be most useful here. This database could be all those entries of devices that the CT had scanned initially, not necessarily the full RD state. Or alternatively all RD entries with lifetime e.g. > 48 hours could be restored; other devices need to re-register. Those entries that were registered with the CT could have a 10-year lifetime for example.\nHI all, There are many installation firms, all with their own procedures and tools. It is possible that an installation company deploys a CT during installation and takes it home afterwards. The contract might stipulate that he leaves behind a program to restart, update or refresh the network. It might be that an another company does the maintenance with another set of tools. Concerning duration, we may talk about years of operation without interruption. On other sites weekly upgrades may be necessary. At the end of the day, There are no general rules that may fix the CT behavior. I am afraid this does not help a lot and emphasizes that the CT example is just a likely example. 
Cheerio, Peter Esko Dijk schreef op 2020-10-21 09:02: Links: [1] URL [2] URL Thanks for addressing my DISCUSS and COMMENT feedback", "new_text": "and its links. Commissioning Tool (CT) is a device that assists during installation events by assigning values to parameters, naming endpoints and groups, or adapting the installation to the needs of the applications. Registrant-ep is the endpoint that is registered into the RD. The"} {"id": "q-en-resource-directory-814e81dff78dd4f5bb53b33227fa893e677132faa817af05a450a97e97ef462b", "old_text": "any DHCP options would only be used together with SLAAC and DHCP for address configuration, respectivly. changes from -24 to -25 Large rework of section 7 (Security policies)", "comments": "Closes: URL See-Also: URL\nFrom Benjamin Kaduk's comments: Is it? Paging NAME NAME and NAME as I honestly don't know. Trouble is: The way CTs come across in the examples, the CT goes in, does its job and leaves. But the RD's lifetimes are finite, and \"Registrations in the RD are soft state and need to be periodically refreshed\". So what happens when the time runs out?\nA CT can indeed come in later to assist in a reconfiguration of the network/applications, e.g. adding a device, removing, or changing members of a group, or changing parameters of devices. When the CT performed a registration on behalf of an IoT device the expectation is that a very long lifetime will be picked, such that the CT doesn't have to come back to refresh. This looks problematic in case the RD state is easily lost - e.g. suppose it reboots and loses all such entries? Then there may be no CT around to send the entries to the RD again. (I would assume in this scenario that the CT's registration are more \"permanent\" so they would survive a reboot for example. Can be restored back from a database. In another project I work in the CT can perform such persistent registrations that survive reboots, using a special 'max' value of lifetime that signals persistent registration.)\nThanks, with that I can at least phrase an answer to the immediate question. Would it, in the deployments you describe, be conceivable that the CT is recalled on site on extraordinary events, like when the RD host is migrated to a different system or address? On the reboots, I hope that we are OK with the current situation that an RD would persist its state when it can, and administrators would not use the more volatile variety when they intend to configure long lifetimes. (If we aren't, we'd need to start thinking about negotiation of lifetimes, and that'd bring a new aspect of complexity I hope to avoid).\nchrysn wrote: Hi. I thought I had commented on this somewhere already. A BRSKI-based \"CT\" would have a registrar which does not really leave. If we are talking about how I think that brski-async-enroll will work, then the CT will enter the no-internet-zone, do some connections, and then leave. (Think new home construction in a new suburb) The RD lifetimes could be finite, but a few years in duration. The home owner, upon getting the keys (physically and cryptographically), would of course, rekey the entire house, and now there would be a management system to update RD lifetimes. Thread has a CT. I have no idea how it works, but perhaps NAME could comment. Zigbee CHIP project is doing something, but it's all NDA for now. I think that, when the time runs out, if there is nothing else to manage the system, then it should fail. I think that's okay. I think that the timeouts won't be minutes, but months... 
yes, there are issues. I don't think that the RD document needs to cover them though. {Feel free to invite me to collaborate on a script for the sequel to Sneakers/Leverage/MissionImpossible/EnemyOfTheState. I suggest it star Tom Greene rather than WillSmith or TomCruise.}\nI think that there is probably a followup document that provides a way to backup/restore RD host data. OPC UA told me that they have created a spec for device independent backup/restore of configuration data in the case of device failure, but I don't have the details yet.\nThis might fit into -- any exchange format between the RDs could also be re-used for persistence. (With the caveat that the receiving server would need to inhabit the same address as the original server, or otherwise it could not service registration updates.)\nWell, could it be an anycast address so that the RD could be highly available?\nI don't see this as a viable design for a deployment. A change of system/address/network in the backend should never require the CT to come in and redo commissioning operations on IoT devices. Feeding the 'new RD' with stored information from a database is what would be most useful here. This database could be all those entries of devices that the CT had scanned initially, not necessarily the full RD state. Or alternatively all RD entries with lifetime e.g. > 48 hours could be restored; other devices need to re-register. Those entries that were registered with the CT could have a 10-year lifetime for example.\nHI all, There are many installation firms, all with their own procedures and tools. It is possible that an installation company deploys a CT during installation and takes it home afterwards. The contract might stipulate that he leaves behind a program to restart, update or refresh the network. It might be that an another company does the maintenance with another set of tools. Concerning duration, we may talk about years of operation without interruption. On other sites weekly upgrades may be necessary. At the end of the day, There are no general rules that may fix the CT behavior. I am afraid this does not help a lot and emphasizes that the CT example is just a likely example. Cheerio, Peter Esko Dijk schreef op 2020-10-21 09:02: Links: [1] URL [2] URL Thanks for addressing my DISCUSS and COMMENT feedback", "new_text": "any DHCP options would only be used together with SLAAC and DHCP for address configuration, respectivly. Terminology: Clarify that the CTs' installation events can occur multiple times. changes from -24 to -25 Large rework of section 7 (Security policies)"} {"id": "q-en-resource-directory-fb4f6110ea9ae3a5c021810487c5f0e30e2ebc2951691eefda7b8d8b13c6ec08", "old_text": "resource returned by the initial registration operation. An update MAY update registration parameters like lifetime, base URI or others. Parameters that are not being changed SHOULD NOT be included in an update. Adding parameters that have not changed increases the size of the message but does not have any other implications. Parameters MUST be included as query parameters in an update operation as in registration. A registration update resets the timeout of the registration to the", "comments": "The SHOULD is not an interoperability requirement but a quality-of-implementation level recommendation; the MUST could have been read as a requirement to have parameters there and is more describing mechanism than compatibility. See-Also: URL Hi, Thank you for this document. 
I'm glad that the shepherd's writeup indicates that there are implementations since that indicates that there does appear to be value in standardizing this work, despite its long journey. My main comment (which I was considering raising as a discuss) is: Since this document is defining a new service, I think that this document would benefit on having a short later section on \"Operations, Administration and Management\". This document does seem to cover some management/administration considerations in various sections in a somewhat ad hoc way, but having a section highlighting those would potentially make it easier deploy. Defining a common management model (or API) would potentially also make administration of the service easier, I don't know if that had been considered, and if useful could be done as a separate document. A few other comments: What happens if an endpoint is managing the registration and is upgraded to new hardware with a different certificate? Would the updated endpoint expect to be able to update the registration? Or would it have to wait for the existing registration to timeout (which could be a long time)? Nit: Perhaps reword the second sentence. Otherwise it seems to conflict with the last sentence of the prior paragraph. The \"SHOULD NOT\" feels a bit strong to me, and I would prefer to see this as \"MAY NOT\". In many cases, if the configuration is not too big then providing the full configuration makes it easy to guarantee that the receiver has exactly the correct configuration. I appreciate that there are many cases where from an endpoint perspective it may want to keep the update small, but if I was doing this from a CT, I think that I would rather just resend the entire configuration, if it is not large. Just to check, is it correct that the anchor in the http link is also to coap://? If this is wrong then there is a second example in the same section that also needs to be fixed.", "new_text": "resource returned by the initial registration operation. An update MAY update registration parameters like lifetime, base URI or others. Parameters that are not being changed should not be included in an update. Adding parameters that have not changed increases the size of the message but does not have any other implications. Parameters are included as query parameters in an update operation as in registration. A registration update resets the timeout of the registration to the"} {"id": "q-en-resource-directory-27f98938ee9f6721f145416ee13d07e7b183226c5a93d3940633b32230ca2f17", "old_text": "URIs and relative references, and not limit the set of discovered URIs to those hosted at the address used for URI discovery. The URI Discovery operation can yield multiple URIs of a given resource type. The client of the RD can use any of the discovered addresses initially.", "comments": "This provides two band-aids to URL, and may also close it far enough that RD can progress without the underlying topic being solved in full.\nThis is a tracking issue to cover the topics of URL -- see there for discussion, the issue is primarily kept for cross-linking PRs.\nSince this is covered well enough that it need not stop RD. Further exploration should happen in ACE and CoRE. Hello CoRE, one of the outstanding issues in RD is server authorization with respect to operations on particular paths. This continues an ACE thread[1] I regrettably missed to follow up on, but simplifies the examples based on the recent interims. 
While originally phrased as the attacker tricking the server into using the client's good credentials for something the client did not intend, here they are phrased now in terms of server authorization as relevant for RD discovery and RD security policies. For two examples, a client needs to trust a server to provide a service, but has received misleading hints that cause disagreement between the client and the server about the performed operation. Example 1 An RD with a \"links are confidential\" policy also operates a bulletin board at . During the unprotected discovery phase, the endpoint is fed a link . The client connects to URL, verifies that its server credentials are good for an RD that will reveal links posted to it only to authorized lookup clients. It then posts all its links to /bb, where anyone who knows to use a bulletin board can read them. Example 2 A time server also exposes its current core temperature as . A malicious RD operator modifies the time server's registration to read . A starting device in the network asks the RD for any rt=unixtime, verifies that URL is indeed authorized to give the right time, and wakes up on the morning of January 1st, 1970. Problem summary The client has an abstract intention with an operation, which it checks against the server's authorization. Then it forms that intention, with additional unverified knowledge about the server, into a request, which it protects. The server then reconstructs that intention with its authoritative knowledge of itself, and uses any client provided credentials to act on it. If the client's knowledge about the server is erroneous, the intentions are misaligned. Neither is the intention protected, nor does the client point out the claims it wants to hold the server to in its request (or, in the original examples from the ACE thread, express the precise claims about itself it intends the server to assess). Solution approaches In the ACE thread back then, Jim said that he'd expect the client to use different keys (originally in talking to the AS, and thus different keys in talking to the RS). Doing that to the end would probably result in the devices having to handle a lot more keys, something like one for any operation that could be mixed up with any other? Michael in the last interim mentioned that in OAuth it's common to only run one service on one host for the very same reason. Kind of boils down to the same thing, as virtual-hosting these services would create multiple keys. My approach in the -25 RD Section 7.2 is to (losely, there; would need stronger and more precise wording) say that any information that any information using which intention is encoded in the request in a lossy way is to be rechecked from the origin server. That'd be more or less straightforward for example 1, but incur an additional costly recheck with the origin server about many things that could be learned efficiently from the RD. Does any of that solve the issues? Can we do any better? How do we put that into RD to satisfy the small comment about Section that opens this can of worms? Is there anything quotable on OAuth's best practices on multiple services on the same host? Best regards Christian [1]: URL To use raw power is to make yourself infinitely vulnerable to greater powers. -- Bene Gesserit axiom", "new_text": "URIs and relative references, and not limit the set of discovered URIs to those hosted at the address used for URI discovery. 
With security policies where the client requires the RD to be authorized to act as an RD, that authorization may be limited to resources on which the authorized RD advertises the adequate resource types. Clients that have obtained links they can not rely on yet can repeat the URI discovery step at the /.well-known/core resource of the indicated host to obtain the resource type information from an authorized source. The URI Discovery operation can yield multiple URIs of a given resource type. The client of the RD can use any of the discovered addresses initially."} {"id": "q-en-resource-directory-27f98938ee9f6721f145416ee13d07e7b183226c5a93d3940633b32230ca2f17", "old_text": "which the lookup client relies on the RD. To avoid the limitations, RD applications should consider prescribing that lookup clients only use the discovered information as hints, and describe which pieces of information need to be verified with the server because they impact the application's security. 7.3.", "comments": "This provides two band-aids to URL, and may also close it far enough that RD can progress without the underlying topic being solved in full.\nThis is a tracking issue to cover the topics of URL -- see there for discussion, the issue is primarily kept for cross-linking PRs.\nSince this is covered well enough that it need not stop RD. Further exploration should happen in ACE and CoRE. Hello CoRE, one of the outstanding issues in RD is server authorization with respect to operations on particular paths. This continues an ACE thread[1] I regrettably missed to follow up on, but simplifies the examples based on the recent interims. While originally phrased as the attacker tricking the server into using the client's good credentials for something the client did not intend, here they are phrased now in terms of server authorization as relevant for RD discovery and RD security policies. For two examples, a client needs to trust a server to provide a service, but has received misleading hints that cause disagreement between the client and the server about the performed operation. Example 1 An RD with a \"links are confidential\" policy also operates a bulletin board at . During the unprotected discovery phase, the endpoint is fed a link . The client connects to URL, verifies that its server credentials are good for an RD that will reveal links posted to it only to authorized lookup clients. It then posts all its links to /bb, where anyone who knows to use a bulletin board can read them. Example 2 A time server also exposes its current core temperature as . A malicious RD operator modifies the time server's registration to read . A starting device in the network asks the RD for any rt=unixtime, verifies that URL is indeed authorized to give the right time, and wakes up on the morning of January 1st, 1970. Problem summary The client has an abstract intention with an operation, which it checks against the server's authorization. Then it forms that intention, with additional unverified knowledge about the server, into a request, which it protects. The server then reconstructs that intention with its authoritative knowledge of itself, and uses any client provided credentials to act on it. If the client's knowledge about the server is erroneous, the intentions are misaligned. Neither is the intention protected, nor does the client point out the claims it wants to hold the server to in its request (or, in the original examples from the ACE thread, express the precise claims about itself it intends the server to assess). 
Solution approaches In the ACE thread back then, Jim said that he'd expect the client to use different keys (originally in talking to the AS, and thus different keys in talking to the RS). Doing that to the end would probably result in the devices having to handle a lot more keys, something like one for any operation that could be mixed up with any other? Michael in the last interim mentioned that in OAuth it's common to only run one service on one host for the very same reason. Kind of boils down to the same thing, as virtual-hosting these services would create multiple keys. My approach in the -25 RD Section 7.2 is to (losely, there; would need stronger and more precise wording) say that any information that any information using which intention is encoded in the request in a lossy way is to be rechecked from the origin server. That'd be more or less straightforward for example 1, but incur an additional costly recheck with the origin server about many things that could be learned efficiently from the RD. Does any of that solve the issues? Can we do any better? How do we put that into RD to satisfy the small comment about Section that opens this can of worms? Is there anything quotable on OAuth's best practices on multiple services on the same host? Best regards Christian [1]: URL To use raw power is to make yourself infinitely vulnerable to greater powers. -- Bene Gesserit axiom", "new_text": "which the lookup client relies on the RD. To avoid the limitations, RD applications should consider prescribing that lookup clients only use the discovered information as hints, and describe which pieces of information need to be verified because they impact the application's security. A straightforward way to verify such information is to request it again from an authorized server, typically the one that hosts the target resource. That similar to what happens in discovery when the URI discovery step is repeated. 7.3."} {"id": "q-en-resource-directory-91ba2272401eaacc60627b904522db8d45253d3e2e9011e06e753537b8af8b53", "old_text": "A web entity that stores information about web resources and implements the REST interfaces defined in this specification for discovery, for the creation, the maintenance and the removal of registrations, and for lookup of the registered resources.", "comments": "Some small mainly editorial fixes from Ben's review, to be merged soon.\nFrom Ben's updated ballot: Good point, queued up.\nFrom Ben's updated ballot:: \u2192 \"The RD (always, but especially here) then needs to verify any lookup client's authorization before reveling this information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion)\"\nFrom follow-up on Ben's initial DISCUSS: \u2192 \"regular registrations (i.e., per {{registration}})\"\nFrom follow-up on Ben's initial DISCUSS: It'd sound like abuse of semicolons to me; keeping it open to ask RFC editor for their technical writing expertise.", "new_text": "A web entity that stores information about web resources and implements the REST interfaces defined in this specification for discovery, for the creation, maintenance and removal of registrations, and for lookup of the registered resources."} {"id": "q-en-resource-directory-91ba2272401eaacc60627b904522db8d45253d3e2e9011e06e753537b8af8b53", "old_text": "format payload to register. The registrant-ep includes the same registration parameters in the POST request as it would per registration. 
The registration base URI of the registration is taken from the registrant-ep's network address (as is default with regular registrations). Example request from registrant-EP to RD (unanswered until the next step):", "comments": "Some small mainly editorial fixes from Ben's review, to be merged soon.\nFrom Ben's updated ballot: Good point, queued up.\nFrom Ben's updated ballot:: \u2192 \"The RD (always, but especially here) then needs to verify any lookup client's authorization before reveling this information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion)\"\nFrom follow-up on Ben's initial DISCUSS: \u2192 \"regular registrations (i.e., per {{registration}})\"\nFrom follow-up on Ben's initial DISCUSS: It'd sound like abuse of semicolons to me; keeping it open to ask RFC editor for their technical writing expertise.", "new_text": "format payload to register. The registrant-ep includes the same registration parameters in the POST request as it would with a regular registration per registration. The registration base URI of the registration is taken from the registrant-ep's network address (as is default with regular registrations). Example request from registrant-EP to RD (unanswered until the next step):"} {"id": "q-en-resource-directory-91ba2272401eaacc60627b904522db8d45253d3e2e9011e06e753537b8af8b53", "old_text": "lookup clients may access the information. In this case, the endpoint (and not the lookup clients) needs to be careful to check the RD's authorization. 7.4.", "comments": "Some small mainly editorial fixes from Ben's review, to be merged soon.\nFrom Ben's updated ballot: Good point, queued up.\nFrom Ben's updated ballot:: \u2192 \"The RD (always, but especially here) then needs to verify any lookup client's authorization before reveling this information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion)\"\nFrom follow-up on Ben's initial DISCUSS: \u2192 \"regular registrations (i.e., per {{registration}})\"\nFrom follow-up on Ben's initial DISCUSS: It'd sound like abuse of semicolons to me; keeping it open to ask RFC editor for their technical writing expertise.", "new_text": "lookup clients may access the information. In this case, the endpoint (and not the lookup clients) needs to be careful to check the RD's authorization. The RD needs to check any lookup client's authorization before revealing information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion). 7.4."} {"id": "q-en-resource-directory-91ba2272401eaacc60627b904522db8d45253d3e2e9011e06e753537b8af8b53", "old_text": "If the client's credentials indicate any subject name that is certified by any authority which the RD recognizes (which may be the system's trust anchor store), all those subject names are stored. With CWT or JWT based credentials (as common with ACE), the Subject (sub) claim is stored as a single name, if it exists. 
With X.509 certificates, the Common Name (CN) and the complete", "comments": "Some small mainly editorial fixes from Ben's review, to be merged soon.\nFrom Ben's updated ballot: Good point, queued up.\nFrom Ben's updated ballot:: \u2192 \"The RD (always, but especially here) then needs to verify any lookup client's authorization before reveling this information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion)\"\nFrom follow-up on Ben's initial DISCUSS: \u2192 \"regular registrations (i.e., per {{registration}})\"\nFrom follow-up on Ben's initial DISCUSS: It'd sound like abuse of semicolons to me; keeping it open to ask RFC editor for their technical writing expertise.", "new_text": "If the client's credentials indicate any subject name that is certified by any authority which the RD recognizes (which may be the system's trust anchor store), all such subject names are stored. With CWT or JWT based credentials (as common with ACE), the Subject (sub) claim is stored as a single name, if it exists. With X.509 certificates, the Common Name (CN) and the complete"} {"id": "q-en-resource-directory-91ba2272401eaacc60627b904522db8d45253d3e2e9011e06e753537b8af8b53", "old_text": "12. changes from -26 to -27 In general, this addresses the points that were pointed out in", "comments": "Some small mainly editorial fixes from Ben's review, to be merged soon.\nFrom Ben's updated ballot: Good point, queued up.\nFrom Ben's updated ballot:: \u2192 \"The RD (always, but especially here) then needs to verify any lookup client's authorization before reveling this information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion)\"\nFrom follow-up on Ben's initial DISCUSS: \u2192 \"regular registrations (i.e., per {{registration}})\"\nFrom follow-up on Ben's initial DISCUSS: It'd sound like abuse of semicolons to me; keeping it open to ask RFC editor for their technical writing expertise.", "new_text": "12. changes from -27 to -28 Security policies / link confidentiality: Point out the RD's obligations that follow from such a policy. Simple registration: clarify term \"regular registration\" by introducing it along with the reference to Wording fix in first-come-first-remembered Wording fixes in RD definition changes from -26 to -27 In general, this addresses the points that were pointed out in"} {"id": "q-en-resource-directory-716f025c175d49e0acccc9ec19d47ebf6718d2620229814ec51720fd997bf2ef", "old_text": "operations on a registration. After the initial registration, the registering endpoint retains the returned location of the Registration Resource for further operations, including refreshing the registration in order to extend the lifetime and \"keep-alive\" the registration. When the lifetime of the registration has expired, the RD SHOULD NOT respond to discovery queries concerning this endpoint. The RD SHOULD continue to provide access to the Registration Resource after a registration time-out occurs in order to enable the registering endpoint to eventually refresh the registration. The RD MAY eventually remove the registration resource for the purpose of garbage collection. If the Registration Resource is removed, the corresponding endpoint will need to be re-registered. The Registration Resource may also be used cancel the registration using DELETE, and to perform further operations beyond the scope of this specification. 
Operations on the Registration Resource are sensitive to reordering; freshness describes how order is restored. The operations on the Registration Resource are described below. 5.3.1.", "comments": "Closes: URL For immediate merging\n... and other terms defined in terminology. Best addressed after and the anchor changes are through.", "new_text": "operations on a registration. After the initial registration, the registering endpoint retains the returned location of the registration resource for further operations, including refreshing the registration in order to extend the lifetime and \"keep-alive\" the registration. When the lifetime of the registration has expired, the RD SHOULD NOT respond to discovery queries concerning this endpoint. The RD SHOULD continue to provide access to the registration resource after a registration time-out occurs in order to enable the registering endpoint to eventually refresh the registration. The RD MAY eventually remove the registration resource for the purpose of garbage collection. If the registration resource is removed, the corresponding endpoint will need to be re-registered. The registration resource may also be used cancel the registration using DELETE, and to perform further operations beyond the scope of this specification. Operations on the registration resource are sensitive to reordering; freshness describes how order is restored. The operations on the registration resource are described below. 5.3.1."} {"id": "q-en-resource-directory-716f025c175d49e0acccc9ec19d47ebf6718d2620229814ec51720fd997bf2ef", "old_text": "victim's /.well-known/core content in the RD. Mitigation for this is recommended in simple. The Registration Resource path is visible to any client that is allowed endpoint lookup, and can be extracted by resource lookup clients as well. The same goes for registration attributes that are shown as target attributes or lookup attributes. The RD needs to consider this in the choice of Registration Resource paths, and administrators or endpoint in their choice of attributes. 8.3.", "comments": "Closes: URL For immediate merging\n... and other terms defined in terminology. Best addressed after and the anchor changes are through.", "new_text": "victim's /.well-known/core content in the RD. Mitigation for this is recommended in simple. The registration resource path is visible to any client that is allowed endpoint lookup, and can be extracted by resource lookup clients as well. The same goes for registration attributes that are shown as target attributes or lookup attributes. The RD needs to consider this in the choice of registration resource paths, and administrators or endpoint in their choice of attributes. 8.3."} {"id": "q-en-resource-directory-716f025c175d49e0acccc9ec19d47ebf6718d2620229814ec51720fd997bf2ef", "old_text": "Wording fixes in RD definition changes from -26 to -27 In general, this addresses the points that were pointed out in", "comments": "Closes: URL For immediate merging\n... and other terms defined in terminology. Best addressed after and the anchor changes are through.", "new_text": "Wording fixes in RD definition Capitalization: Consistently using \"registration resource\" changes from -26 to -27 In general, this addresses the points that were pointed out in"} {"id": "q-en-rfc-censorship-tech-9f2e8ccafa57621911359d6b73e43d83e8999dfaf5a5764f3b342f268c539711", "old_text": "the connection, the session is terminated. 
Trade-offs: RST Packet Injection has a few advantages that make it extremely popular is a censorship technique. RST Packet Injection is an out-of-band prevention mechanism, allowing the avoidance of the the QoS bottleneck one can encounter with inline techniques such as Packet Dropping. This out-of-band property allows a censor to", "comments": "I think there is a trivial typo in the first sentence of \"Trade-offs\".\nthanks!", "new_text": "the connection, the session is terminated. Trade-offs: RST Packet Injection has a few advantages that make it extremely popular as a censorship technique. RST Packet Injection is an out-of-band prevention mechanism, allowing the avoidance of the the QoS bottleneck one can encounter with inline techniques such as Packet Dropping. This out-of-band property allows a censor to"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "1. Censorship is where an entity in a position of power - such as a government, organization, or individual - supresses communication that it considers objectionable, harmful, sensitive, politically incorrect or inconvenient. (While censors that engage in censorship or establish censorship regimes must do so through legal, military,", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "1. Censorship is where an entity in a position of power - such as a government, organization, or individual - suppresses communication that it considers objectionable, harmful, sensitive, politically incorrect or inconvenient. (While censors that engage in censorship or establish censorship regimes must do so through legal, military,"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "interference. Prescription is the process by which censors determine what types of material they should block, i.e. they decide to block a list of pornographic websites. Identification is the process by which censors classify specific traffic to be blocked or impaird, i.e. the censor blocks or impairs all webpages containing \"sex\" in the title or traffic to sex.com. Interference is the process by which the censor intercedes in communication and prevents access to", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "interference. Prescription is the process by which censors determine what types of material they should block, i.e. they decide to block a list of pornographic websites. 
Identification is the process by which censors classify specific traffic to be blocked or impaired, i.e. the censor blocks or impairs all webpages containing \"sex\" in the title or traffic to sex.com. Interference is the process by which the censor intercedes in communication and prevents access to"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "4.1. Packet dropping is a simple mechanism to prevent undesirable traffic. The censor identifies undesirable traffic and chooses to not properly forward any packets it sees associated with the traversing", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "4.1. While other interference techniques outlined in this section mostly focus on blocking or preventing access to content, it can be an effective censorship strategy in some cases to not entirely block access to a given destination or service, but instead degrade the performance of the relevant network connection. The resulting user experience for a site or service under performance degradation can be so bad that users opt to use a different site, service, or method of communication, or may not engage in communication at all if there are no alternatives. Traffic shaping techniques that rate-limit the bandwidth available to certain types of traffic are one example of a performance degradation. Trade-offs: While implementing a performance degradation will not always eliminate the ability of people to access a desired resource, it may force them to use other means of communication where censorship (or surveillance) is more easily accomplished. Empirical examples: Iran is known to shape the bandwidth available to HTTPS traffic to encourage unencrypted HTTP traffic Aryan-2012. 4.2. Packet dropping is a simple mechanism to prevent undesirable traffic. The censor identifies undesirable traffic and chooses to not properly forward any packets it sees associated with the traversing"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "uses Packet Dropping as the mechanisms for throttling SSH Aryan-2012. These are but two examples of a ubiquitous censorship practice. 4.2. Packet injection, generally, refers to a man-in-the-middle (MITM) network interference technique that spoofs packets in an established", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). 
The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "uses Packet Dropping as the mechanisms for throttling SSH Aryan-2012. These are but two examples of a ubiquitous censorship practice. 4.3. Packet injection, generally, refers to a man-in-the-middle (MITM) network interference technique that spoofs packets in an established"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "interference is especially evident in the interruption of encrypted/ obfuscated protocols, such as those used by Tor Winter-2012. 4.3. There are a variety of mechanisms that censors can use to block or filter access to content by altering responses from the DNS", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "interference is especially evident in the interruption of encrypted/ obfuscated protocols, such as those used by Tor Winter-2012. 4.4. There are a variety of mechanisms that censors can use to block or filter access to content by altering responses from the DNS"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "TLDs as well, but only Iran has acted by blocking all Israeli (.il) domains Albert-2011. 4.4. Distributed Denial of Service attacks are a common attack mechanism used by \"hacktivists\" and black-hat hackers, but censors have used", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "TLDs as well, but only Iran has acted by blocking all Israeli (.il) domains Albert-2011. 4.5. Distributed Denial of Service attacks are a common attack mechanism used by \"hacktivists\" and black-hat hackers, but censors have used"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "comandeered those user agents to send DDoS traffic to various sites Marczak-2015. 4.5. While it is perhaps the crudest of all censorship techniques, there is no more effective way of making sure undesirable information isn't", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. 
Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "comandeered those user agents to send DDoS traffic to various sites Marczak-2015. 4.6. While it is perhaps the crudest of all censorship techniques, there is no more effective way of making sure undesirable information isn't"} {"id": "q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d", "old_text": "can be physically seized or the hosting provider can be required to prevent access Anderson-2011. 7. This document benefited from discussions with Stephane Bortzmeyer,", "comments": "s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC is .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:", "new_text": "can be physically seized or the hosting provider can be required to prevent access Anderson-2011. 6.4. In some countries, legal mechanisms exist where an individual can issue a legal request to a content host that requires the host to take down content. Examples include the voluntary systems employed by companies like Google to comply with \"Right to be Forgotten\" policies in the European Union Google-RTBF and the copyright-oriented notice and takedown regime of the United States Digital Millennium Copyright Act (DMCA) Section 512 DMLP-512. 7. This document benefited from discussions with Stephane Bortzmeyer,"} {"id": "q-en-rfc5245bis-b5cd4409765c0a452650b7c974d9b27157a8fc8b4f344164ace01f4c1805e5d4", "old_text": "13. During the gathering phase of ICE (sec-gathering) and while ICE is performing connectivity checks (sec-connectivity_check), an agent sends STUN and TURN transactions. These transactions are paced at a rate of one every Ta milliseconds, and utilize a specific RTO. This section describes how the values of Ta and RTO are computed. This computation depends on whether ICE is being used with a real-time media stream (such as RTP) or something else. When ICE is used for a stream with a known maximum bandwidth, the computation in sec-rtp- based MAY be followed to rate-control the ICE exchanges. For all other streams, the computation in sec-non-rtp MUST be followed. 13.1. The values of RTO and Ta change during the lifetime of ICE processing. One set of values applies during the gathering phase, and the other, for connectivity checks. The value of Ta SHOULD be configurable, and SHOULD have a default of: where k is the number of media streams. During the gathering phase, Ta is computed based on the number of media streams the agent has indicated in the candidate information, and the RTP packet size and RTP ptime are those of the most preferred codec for each media stream. Once the candidate exchange is completed, the agent recomputes Ta to pace the connectivity checks. 
In that case, the value of Ta is based on the number of media streams that will actually be used in the session, and the RTP packet size and RTP ptime are those of the most preferred codec with which the agent will send. In addition, the retransmission timer for the STUN transactions, RTO, defined in RFC5389, SHOULD be configurable and during the gathering phase, SHOULD have a default of: where the number of pairs refers to the number of pairs of candidates with STUN or TURN servers. For connectivity checks, RTO SHOULD be configurable and SHOULD have a default of: where Num-Waiting is the number of checks in the check list in the Waiting state, and Num-In-Progress is the number of checks in the In- Progress state. Note that the RTO will be different for each transaction as the number of checks in the Waiting and In-Progress states change. These formulas are aimed at causing STUN transactions to be paced at the same rate as media. This ensures that ICE will work properly under the same network conditions needed to support the media as well. See sec-pacing for additional discussion and motivations. Because of this pacing, it will take a certain amount of time to obtain all of the server reflexive and relayed candidates. Implementations should be aware of the time required to do this, and if the application requires a time budget, limit the number of candidates that are gathered. The formulas result in a behavior whereby an agent will send its first packet for every single connectivity check before performing a retransmit. This can be seen in the formulas for the RTO (which represents the retransmit interval). Those formulas scale with N, the number of checks to be performed. As a result of this, ICE maintains a nicely constant rate, but becomes more sensitive to packet loss. The loss of the first single packet for any connectivity check is likely to cause that pair to take a long time to be validated, and instead, a lower-priority check (but one for", "comments": "Ta and RTO settings are now the same for real-time media and non-real-time media. An appendix has been added, showing bandwidth consumption based on different values, and different ufrag value sizes.", "new_text": "13. 13.1. During the ICE gathering phase (sec-gathering) and while ICE is performing connectivity checks (sec-connectivity_check), an agent triggers STUN and TURN transactions. These transactions are paced at a rate indicated by Ta, and the retransmission interval for each transaction is calculated based on the retransmission timer for the STUN transactions (RTO) RFC5389. This section describes how the Ta and RTO values are computed during the ICE gathering phase and while ICE is performing connectivity checks. NOTE: Previously, in RFC 5245, different formulas were defined for computing Ta and RTO, depending on whether ICE was used for a real-time media stream (e.g. RTP) or not. The formulas below result in a behavior whereby an agent will send its first packet for every single connectivity check before performing a retransmit. This can be seen in the formulas for the RTO (which represents the retransmit interval). Those formulas scale with N, the number of checks to be performed. As a result of this, ICE maintains a nicely constant rate, but becomes more sensitive to packet loss. 
The loss of the first single packet for any connectivity check is likely to cause that pair to take a long time to be validated, and instead, a lower-priority check (but one for"} {"id": "q-en-rfc5245bis-b5cd4409765c0a452650b7c974d9b27157a8fc8b4f344164ace01f4c1805e5d4", "old_text": "13.2. In cases where ICE is used to establish some kind of session that is not real time, and has no fixed rate associated with it that is known to work on the network in which ICE is deployed, Ta and RTO revert to more conservative values. Ta SHOULD be configurable, SHOULD have a default of 500 ms, and MUST NOT be configurable to be less than 500 ms. If other Ta value than the default is used, the agent MUST indicate the value it prefers to use in the ICE exchange. Both agents MUST use the higher out of the two proposed values. In addition, the retransmission timer for the STUN transactions, RTO, SHOULD be configurable and during the gathering phase, SHOULD have a default of: where the number of pairs refers to the number of pairs of candidates with STUN or TURN servers. For connectivity checks, RTO SHOULD be configurable and SHOULD have a default of: 14.", "comments": "Ta and RTO settings are now the same for real-time media and non-real-time media. An appendix has been added, showing bandwidth consumption based on different values, and different ufrag value sizes.", "new_text": "13.2. ICE agents SHOULD use the default Ta value, 50 ms, but MAY use another value based on the characteristics of the associated media. ICE agents MUST NOT use a Ta value smaller than 5 ms. If an ICE agent wants to use another Ta value than the default value, the agent MUST indicate the proposed value to its peer during the ICE exchange. Both agents MUST use the higher value of the proposed values. If an agent does not propose a value, the default value is used for that agent when comparing which value is higher. NOTE: sec-concheckbw shows examples of required bandwidth, using different Ta values. 13.3. During the ICE gathering phase, ICE agents SHOULD calculate the RTO value using the following formula: For connectivity checks, ICE agents SHOULD calculate the RTO value using the following formula: ICE agents MAY calculate the RTO value using other mechanisms than those described above. ICE agents MUST NOT use a RTO value smaller than 500 ms. 14."} {"id": "q-en-rfc5245bis-18df5ec1119759300591667ddf2fe2b465b6fc8ea75944023f0749690f7da685", "old_text": "In order to limit the attacks described in sec-ice-hammer, an ICE agent MUST limit the total number of connectivity checks the agent performs across all check lists to a specific value, and this value MUST be configurable. A default of 100 is RECOMMENDED. This limit is enforced by discarding the lower-priority candidate pairs until there are less than 100. It is RECOMMENDED that a lower value be utilized when possible, set to the maximum number of plausible checks that might be seen in an actual deployment configuration. The requirement for configuration is meant to provide a tool for fixing this value in the field if, once deployed, it is found to be problematic. 6.1.2.6.", "comments": "The following issues are addressed in this commit: URL URL\nLGTM\nTo prevent one check list from starving.\nThis was discussed (and, as far as I am concerned, agreed) on the list 1st June. But, it's good to have a reminder :)\nThis one has not been addressed as far as I can tell. 
Now we get to decided if it's important enough or not to do something about :)\nI had a look at this, but for some reason it never got implemented. I think it should be an easy fix.\nThe changes section looks very small compared to how many changes we've made, so it definitely needs updating. Now we need to decide whether we have to update it before or after requesting publication.\nYes - we just have to remember everything :) However, I don't think we should go into details: the major changes (that I can think of when writing this) are removal of regular/aggressive nomination, and the changes related to \"Emil's table\".\nLGTM", "new_text": "In order to limit the attacks described in sec-ice-hammer, an ICE agent MUST limit the total number of connectivity checks the agent performs across all check lists in the check list set. This is done by limiting the total number of candidate pairs in the check list set. The default limit of candidate pairs for the check list set is 100, but the value MUST be configurable. The limit is enforced by, within in each check list, discarding lower-priority candidate pairs until the total number of candidate pairs in the check list set is smaller than the limit value. The discarding SHOULD be done evenly so that the number of candidate pairs in each check list is reduced the same amount. It is RECOMMENDED that a lower limit value than the default is picked when possible, and that the value is set to the maximum number of plausible candidate pairs that might be created in an actual deployment configuration. The requirement for configuration is meant to provide a tool for fixing this value in the field if, once deployed, it is found to be problematic. 6.1.2.6."} {"id": "q-en-rfc5245bis-18df5ec1119759300591667ddf2fe2b465b6fc8ea75944023f0749690f7da685", "old_text": "22. Following is the list of changes from RFC 5245 The specification was generalized to be more usable with any protocol and the parts that are specific to SIP and SDP were moved to a SIP/SDP usage document I-D.ietf-mmusic-ice-sip-sdp. Default candidates, multiple components, ICE mismatch detection, subsequent offer/answer, and role conflict resolution were made optional since they are not needed with every protocol using ICE. With IPv6, the precedence rules of RFC 6724 are used instead of the obsoleted RFC 3483 and using address preferences provided by the host operating system is recommended. Candidate gathering rules regarding loopback addresses and IPv6 addresses were clarified. ", "comments": "The following issues are addressed in this commit: URL URL\nLGTM\nTo prevent one check list from starving.\nThis was discussed (and, as far as I am concerned, agreed) on the list 1st June. But, it's good to have a reminder :)\nThis one has not been addressed as far as I can tell. Now we get to decided if it's important enough or not to do something about :)\nI had a look at this, but for some reason it never got implemented. I think it should be an easy fix.\nThe changes section looks very small compared to how many changes we've made, so it definitely needs updating. Now we need to decide whether we have to update it before or after requesting publication.\nYes - we just have to remember everything :) However, I don't think we should go into details: the major changes (that I can think of when writing this) are removal of regular/aggressive nomination, and the changes related to \"Emil's table\".\nLGTM", "new_text": "22. The purpose of this updated ICE specification is to: Clarify procedures in RFC 5245. 
Make technical changes, due to discovered flaws in RFC 5245 and based on feedback from the community that has implemented and deployed ICE applications based on RFC 5245. Make the procedures signaling protocol independent, by removing the SIP and SDP procedures. Procedures specific to a signaling protocol will be defined in separate usage documents. I-D.ietf- mmusic-ice-sip-sdp defines the ICE usage with SIP and SDP. The following technical changes have been done: Aggressive nomination removed. The procedures for calculating candidate pair states and scheduling connectivity checks modified. Procedures for calculation of Ta and RTO modified. Active check list and frozen check list definitions removed. 'ice2' ice option added. IPv6 considerations modified. Usage with no-op for keepalives, and keepalives with non-ICE peers, removed. "} {"id": "q-en-rtcweb-overview-c71b8494d792b0e174eccbccf56094298fd9a4b21d1735aeb9aa77e0e4d0d165", "old_text": "identification systems (such as is served by telephone numbers or email addresses in other communications systems). Development of The Universal Solution has proved hard, however, for all the usual reasons. The last few years have also seen a new platform rise for deployment of services: The browser-embedded application, or \"Web application\".", "comments": "Made changes to address Warren's first and third, but not 2nd nit. I'm unsure whether the changes in the spacing I introduced will fix the odd spacing in Olle's name.\nWarren noted the following nits in his ballot: 1: \"Development of The Universal Solution has proved hard, however, for all the usual reasons.\" -- this is cute, but leaves people wondering what \"all the usual reasons are\". Perhaps just \"Development of The Universal Solution has, however, proved hard.\" (or just cut after the \"however in the original\"). 2: I'm not sure why you have \"Protocol\" in the terminology section. It doesn't seem like it is useful for the document, and this document doesn't seem like the right place to (re) define it. 3: Acknowledgements: Funny spacing in \"Olle E. Johansson", "new_text": "identification systems (such as is served by telephone numbers or email addresses in other communications systems). Development of The Universal Solution has, however, proved hard. The last few years have also seen a new platform rise for deployment of services: The browser-embedded application, or \"Web application\"."} {"id": "q-en-rtcweb-overview-2c7568696a410e860319c25dabdd7a0b1480d697c1a3f6fe605ce753f131fd87", "old_text": "entities that handle the data, but do not modify it (such as TURN relays). It includes necessary functions for congestion control: When not to send data. WebRTC endpoints MUST implement the transport protocols described in I-D.ietf-rtcweb-transports.", "comments": "Spencer had the following comment: I'm not sure how tutorial you want section 4 to be, but I'd at least mention appropriate retransmission and in-order delivery, in addition to congestion control, since you get that with SCTP on the data channel.", "new_text": "entities that handle the data, but do not modify it (such as TURN relays). It includes necessary functions for congestion control, retransmission, and in-order delivery. WebRTC endpoints MUST implement the transport protocols described in I-D.ietf-rtcweb-transports."} {"id": "q-en-rtcweb-overview-d39635abf99490885bfc724e1820bfed430f5e5e194bd44ee9df30d642be01c3", "old_text": "multiple data channels between them. 
An implementation of the Interactive Connectivity Establishment (ICE) I-D.ietf-ice-rfc5245bis protocol. An ICE Agent may also be an SDP Agent, but there exist ICE Agents that do not use SDP (for instance those that use Jingle XEP-0166). Communication between multiple parties, where the expectation is that an action from one party can cause a reaction by another", "comments": "EKR entered the following DISCUSS: Your citation to ICE is to 5245-bis, but at least the JSEP editor consensus was that WebRTC depended on 5245, so this needs to be resolved one way or the other.\nSean asked me to leave this one open. Personally, I'd somewhat prefer to 5245bis because it separates out the SDP machinery, but JSEP has a specific section reference (section 15.4) that has to go to either 5245 or to mmusic-ice-sip-sdp, and one could argue that we need to pull in -sip-sdp here too. I'm OK with 5245 as a reference, but will let the chairs make the call.\nI no longer care about this distinction.", "new_text": "multiple data channels between them. An implementation of the Interactive Connectivity Establishment (ICE) RFC5245 protocol. An ICE Agent may also be an SDP Agent, but there exist ICE Agents that do not use SDP (for instance those that use Jingle XEP-0166). Communication between multiple parties, where the expectation is that an action from one party can cause a reaction by another"} {"id": "q-en-rtcweb-transport-59c0e12ecbf6363dec5e18c957a182b44e2e7a151c5b2f79f7fb74f3ca0e3d2a", "old_text": "browser MUST be able to communicate. When TURN is used, and the TURN server has IPv4 or IPv6 connectivity to the peer or its TURN server, candidates of the appropriate types MUST be supported. The \"Happy Eyeballs\" specification for ICE I- D.martinsen-mmusic-ice-dualstack-fairness SHOULD be supported. 3.3.", "comments": "reformulated the TCP point, addressed security with a reference to -qos.\nFrom Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to. Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead.\nFrom Magnus Westerlund in IETF LC: Section 8.1: [I-D.martinsen-mmusic-ice-dualstack-fairness] Martinsen, P., Reddy, T., and P. Patil, \"ICE IPv4/IPv6 Dual Stack Fairness\", draft-martinsen-mmusic-ice- dualstack-fairness-02 (work in progress), February 2015. Can be updated as this is now an WG item in ICE.\nUpdated in\nFixed by\nI have reviewed this document in preparation for IETF last call. It is in good shape and I have requested the last call. While the last call is ongoing, I would like to double-check that this document (particularly Section 4) raises no new security considerations that should be documented in Section 6. I'm wondering if there are scenarios where a malicious application could try to game the priority scheme to some ill effect? draft-ietf-tsvwg-rtcweb-qos talks a bit about this, so I'm wondering if anything needs to be said here. And I found a couple of nits to be resolved together with any LC comments: Sec 3.2 - Please update the citation to point to draft-ietf-mmusic-ice-dualstack-fairness-02. Sec 3.4 - \"Third, using TCP only between the endpoint and its relay may result in less issues with TCP in regards to real-time constraints, e.g. due to head of line blocking.\" This is awkwardly phrased. 
I would suggest something like \"Third, using TCP between the client's TURN server and its peer may result in more performance problems (due to head-of-line blocking) than using UDP.\" And one bit I left out \u2014 if text is to be added to reference draft-ietf-rtcweb-return, that should be done together with addressing nits and LC comments.\nThere seems no consensus to add RETURN, so I'm skipping this for now.", "new_text": "browser MUST be able to communicate. When TURN is used, and the TURN server has IPv4 or IPv6 connectivity to the peer or the peer's TURN server, candidates of the appropriate types MUST be supported. The \"Happy Eyeballs\" specification for ICE I-D.ietf-mmusic-ice-dualstack-fairness SHOULD be supported. 3.3."} {"id": "q-en-rtcweb-transport-59c0e12ecbf6363dec5e18c957a182b44e2e7a151c5b2f79f7fb74f3ca0e3d2a", "old_text": "establishing UDP relay candidates using TURN over TCP to connect to their respective relay servers. Third, using TCP only between the endpoint and its relay may result in less issues with TCP in regards to real-time constraints, e.g. due to head of line blocking. ICE-TCP candidates RFC6544 MUST be supported; this may allow applications to communicate to peers with public IP addresses across", "comments": "reformulated the TCP point, addressed security with a reference to -qos.\nFrom Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to. Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead.\nFrom Magnus Westerlund in IETF LC: Section 8.1: [I-D.martinsen-mmusic-ice-dualstack-fairness] Martinsen, P., Reddy, T., and P. Patil, \"ICE IPv4/IPv6 Dual Stack Fairness\", draft-martinsen-mmusic-ice- dualstack-fairness-02 (work in progress), February 2015. Can be updated as this is now an WG item in ICE.\nUpdated in\nFixed by\nI have reviewed this document in preparation for IETF last call. It is in good shape and I have requested the last call. While the last call is ongoing, I would like to double-check that this document (particularly Section 4) raises no new security considerations that should be documented in Section 6. I'm wondering if there are scenarios where a malicious application could try to game the priority scheme to some ill effect? draft-ietf-tsvwg-rtcweb-qos talks a bit about this, so I'm wondering if anything needs to be said here. And I found a couple of nits to be resolved together with any LC comments: Sec 3.2 - Please update the citation to point to draft-ietf-mmusic-ice-dualstack-fairness-02. Sec 3.4 - \"Third, using TCP only between the endpoint and its relay may result in less issues with TCP in regards to real-time constraints, e.g. due to head of line blocking.\" This is awkwardly phrased. I would suggest something like \"Third, using TCP between the client's TURN server and its peer may result in more performance problems (due to head-of-line blocking) than using UDP.\" And one bit I left out \u2014 if text is to be added to reference draft-ietf-rtcweb-return, that should be done together with addressing nits and LC comments.\nThere seems no consensus to add RETURN, so I'm skipping this for now.", "new_text": "establishing UDP relay candidates using TURN over TCP to connect to their respective relay servers. 
Third, using TCP between the client's TURN server and the peer may result in more performance problems than using UDP, e.g. due to head of line blocking. ICE-TCP candidates RFC6544 MUST be supported; this may allow applications to communicate to peers with public IP addresses across"} {"id": "q-en-rtcweb-transport-59c0e12ecbf6363dec5e18c957a182b44e2e7a151c5b2f79f7fb74f3ca0e3d2a", "old_text": "6. Security considerations are enumerated in I-D.ietf-rtcweb-security. ", "comments": "reformulated the TCP point, addressed security with a reference to -qos.\nFrom Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to. Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead.\nFrom Magnus Westerlund in IETF LC: Section 8.1: [I-D.martinsen-mmusic-ice-dualstack-fairness] Martinsen, P., Reddy, T., and P. Patil, \"ICE IPv4/IPv6 Dual Stack Fairness\", draft-martinsen-mmusic-ice- dualstack-fairness-02 (work in progress), February 2015. Can be updated as this is now an WG item in ICE.\nUpdated in\nFixed by\nI have reviewed this document in preparation for IETF last call. It is in good shape and I have requested the last call. While the last call is ongoing, I would like to double-check that this document (particularly Section 4) raises no new security considerations that should be documented in Section 6. I'm wondering if there are scenarios where a malicious application could try to game the priority scheme to some ill effect? draft-ietf-tsvwg-rtcweb-qos talks a bit about this, so I'm wondering if anything needs to be said here. And I found a couple of nits to be resolved together with any LC comments: Sec 3.2 - Please update the citation to point to draft-ietf-mmusic-ice-dualstack-fairness-02. Sec 3.4 - \"Third, using TCP only between the endpoint and its relay may result in less issues with TCP in regards to real-time constraints, e.g. due to head of line blocking.\" This is awkwardly phrased. I would suggest something like \"Third, using TCP between the client's TURN server and its peer may result in more performance problems (due to head-of-line blocking) than using UDP.\" And one bit I left out \u2014 if text is to be added to reference draft-ietf-rtcweb-return, that should be done together with addressing nits and LC comments.\nThere seems no consensus to add RETURN, so I'm skipping this for now.", "new_text": "6. RTCWEB security considerations are enumerated in I-D.ietf-rtcweb- security. Security considerations pertaining to the use of DSCP are enumerated in I-D.ietf-tsvwg-rtcweb-qos. "} {"id": "q-en-rtcweb-transport-d4db569a2abc191074a19e240b157bc8fde757fcf47e554bd5899b2a3700b0e1", "old_text": "UDP-blocking firewalls without using a TURN server. If TCP connections are used, RTP framing according to RFC4571 MUST be used, both for the RTP packets and for the DTLS packets used to carry data channels. The ALTERNATE-SERVER mechanism specified in RFC5389 (STUN) section 11 (300 Try Alternate) MUST be supported.", "comments": "NAME please review\nFrom Magnus Westerlund in IETF LC review: Section 3.4: If TCP connections are used, RTP framing according to [RFC4571] MUST be used, both for the RTP packets and for the DTLS packets used to carry data channels. 
I think the last part of this sentence, unintentionally excludes the DTLS handshake packets. The ICE TCP spec also specifies that the STUN connectivity checks are using RFC4571 framing. Thus, I think some reformulation of this sentence is in order.", "new_text": "UDP-blocking firewalls without using a TURN server. If TCP connections are used, RTP framing according to RFC4571 MUST be used for all packets. This includes the RTP packets, DTLS packets used to carry data channels, and STUN connectivity check packets. The ALTERNATE-SERVER mechanism specified in RFC5389 (STUN) section 11 (300 Try Alternate) MUST be supported."} {"id": "q-en-security-arch-10bf1bca901b8f66f9272baa2ea8b8d623c270016151adbe4f2b9330cfcb7d9d", "old_text": "2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in RFC2119. 3.", "comments": "Just saving some typing because this section will eventually need to match what's in RFC8174.", "new_text": "2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here. 3."} {"id": "q-en-security-arch-8f9acac3c47187e956ae13ea556433c14443e9560c2f05b016f8f0ee0cb83f35", "old_text": "authoritative IdPs, thus allowing the user to instantly grasp that the call is being authenticated by Facebook, Google, etc. 6.4.5. A number of optional Web security features have the potential to", "comments": "NAME PTAL\nThe current text on user identity covers the domain portion well enough, but doesn't really say anything about how the part preceding the 'NAME is handled. There is text regarding the use of a local address book for the purposes of rendering, which might help avoid these issues entirely... where available. We may need to consider addressing the issue of confusable characters and whether we need to define the use of a particular normalization form. (See )", "new_text": "authoritative IdPs, thus allowing the user to instantly grasp that the call is being authenticated by Facebook, Google, etc. 6.4.4.1. Because a broad range of characters are permitted in identity strings, it may be possible for attackers to craft identities which are confusable with other identities (see RFC6943 for more on this topic). This is a problem with any identifier space of this type (e.g., e-mail addresses). Those minting identifiers should avoid mixed scripts and similar confusable characters. Those presenting these identifiers to a user should consider highlighting cases of mixed script usage (see RFC5890, section 4.4). Other best practices are still in development. 6.4.5. A number of optional Web security features have the potential to"} {"id": "q-en-senml-spec-1229d9115d7203b9d57179bfc0356dba1c3acdc583b0da56f50e6ad5f5f53041", "old_text": "11.3. The following registrations are done following the procedure specified in RFC6838 and RFC7303. Note to RFC Editor - please remove this paragraph. 
Note that a request for media type review for senml+json was sent to the media-", "comments": "See email from paul L Nov1 to media-types Hello Cullen, it seems to me that all the content-types you have been describing about senML are candidate content types to be exchanged over the clipboard of a normal mobile or desk computer; e.g. from a reading app to another, \u2026 Do you agree on this? If yes, then I\u2019d like to recommend to add: MacOS Uniform Type Identiers Windows Clipboard Name for each of the content-types. If there is earlier experience in this realm, probably this should be taken, otherwise, I can probably suggest some clipboard names. Standardising such at the Media-Type declaration level allows to set a reference frame so that other implementations can follow, just as the content-type name. For examples of including this in the media-type registration, see that of SVG or MathML. thanks in advance. Paul\nAt IETF 97 was \"probably should do this\" - see URL\nSee MathML at URL\nOn the UTI, currently only apple can declare ones in the public space ( see URL ). I do not see how to request one. There is a bunch of extra info apple would need to add to the OS to really make this work outlined in URL\nWindows clipboard info at URL(v=vs.85).aspx\nMakes sense; no need for a clipboard format for the streaming and compressed versions actually.", "new_text": "11.3. The following registrations are done following the procedure specified in RFC6838 and RFC7303. Clipboard formats are defined for the JSON and XML form of lists but do not make sense for streams or other formats. Note to RFC Editor - please remove this paragraph. Note that a request for media type review for senml+json was sent to the media-"} {"id": "q-en-senml-spec-1229d9115d7203b9d57179bfc0356dba1c3acdc583b0da56f50e6ad5f5f53041", "old_text": "File extension(s): senml and sensml Macintosh file type code(s): none Person & email address to contact for further information: Cullen Jennings ", "comments": "See email from paul L Nov1 to media-types Hello Cullen, it seems to me that all the content-types you have been describing about senML are candidate content types to be exchanged over the clipboard of a normal mobile or desk computer; e.g. from a reading app to another, \u2026 Do you agree on this? If yes, then I\u2019d like to recommend to add: MacOS Uniform Type Identiers Windows Clipboard Name for each of the content-types. If there is earlier experience in this realm, probably this should be taken, otherwise, I can probably suggest some clipboard names. Standardising such at the Media-Type declaration level allows to set a reference frame so that other implementations can follow, just as the content-type name. For examples of including this in the media-type registration, see that of SVG or MathML. thanks in advance. Paul\nAt IETF 97 was \"probably should do this\" - see URL\nSee MathML at URL\nOn the UTI, currently only apple can declare ones in the public space ( see URL ). I do not see how to request one. 
There is a bunch of extra info apple would need to add to the OS to really make this work outlined in URL\nWindows clipboard info at URL(v=vs.85).aspx\nMakes sense; no need for a clipboard format for the streaming and compressed versions actually.", "new_text": "File extension(s): senml and sensml Windows Clipboard Name: \"JSON Sensor Measurement List\" for senml Macintosh file type code(s): none Macintosh Universal Type Identifier code: org.ietf.senml-json conforms to public.text for senml Person & email address to contact for further information: Cullen Jennings "} {"id": "q-en-senml-spec-1229d9115d7203b9d57179bfc0356dba1c3acdc583b0da56f50e6ad5f5f53041", "old_text": "File extension(s): senmlx and sensmlx Macintosh file type code(s): none Person & email address to contact for further information: Cullen Jennings ", "comments": "See email from paul L Nov1 to media-types Hello Cullen, it seems to me that all the content-types you have been describing about senML are candidate content types to be exchanged over the clipboard of a normal mobile or desk computer; e.g. from a reading app to another, \u2026 Do you agree on this? If yes, then I\u2019d like to recommend to add: MacOS Uniform Type Identiers Windows Clipboard Name for each of the content-types. If there is earlier experience in this realm, probably this should be taken, otherwise, I can probably suggest some clipboard names. Standardising such at the Media-Type declaration level allows to set a reference frame so that other implementations can follow, just as the content-type name. For examples of including this in the media-type registration, see that of SVG or MathML. thanks in advance. Paul\nAt IETF 97 was \"probably should do this\" - see URL\nSee MathML at URL\nOn the UTI, currently only apple can declare ones in the public space ( see URL ). I do not see how to request one. There is a bunch of extra info apple would need to add to the OS to really make this work outlined in URL\nWindows clipboard info at URL(v=vs.85).aspx\nMakes sense; no need for a clipboard format for the streaming and compressed versions actually.", "new_text": "File extension(s): senmlx and sensmlx Windows Clipboard Name: \"XML Sensor Measurement List\" for senmlx Macintosh file type code(s): none Macintosh Universal Type Identifier code: org.ietf.senml-xml conforms to publc.xml for senmlx Person & email address to contact for further information: Cullen Jennings "} {"id": "q-en-senml-spec-bdc8464c3e6ae57492519655751378e4b3fc0582d5b535fa3c8260715379e561", "old_text": "IANA will create a registry of SenML unit symbols. The primary purpose of this registry is to make sure that symbols uniquely map to give type of measurement. Definitions for many of these units can be found in location such as NIST811 and BIPM. Note 1: A value of 0.0 indicates the switch is off while 1.0 indicates on and 0.5 would be half on. Note 2: Assumed to be in WGS84 unless another reference frame is known for the sensor.", "comments": "This should now consider and . There are no perfect solutions for these, but the solutions chosen should be close.\nI don't agree wth some of this but agree with a bunch. 
I took carsten's changes for all the parts I agree with and put them in URL\nI might suggest we pull that PR and then look at what is left (on a bunch of them it is simply if there should be an * suggesting don't use or not) then we can rebase this PR and see what is left to resolve.\nOops, I just changed my PR to react to your comments before seeing\nPushed another version that has both / and % for the same thing, where % is marked with a *\nNow, instead of delete, it's \"not recommend\" and mentions kg as exception. I think this is good to merge.\nThe following units are questionable: % -- this looks like a percentage (0..100), but it is not (0..1). count -- what? This is about a dimensionless quantity? %RH, %EL appear to be percentages, but imply a quantity. EL may be a quantity, but the unit is second. In SI, we don't do new units for new quantities (there is no \"millimeter radius\" or \"nanometer wavelength\"). Bspl -- this really should be in dB (see ), SPL is the quantity again. beat/min, beats -- these should be 1/min (but see ) and 1 (dimensionless); how is beats different from count? We don't have bit (but bit/s), byte. 1/min is there only for heartbeats, not for revolutions (\"rpm\"). Should really be 1/s (but see ). lat and lon are a bit weird, too. (Celsius is also questionable -- the rationale of should call for all measurements being expressed in Kelvin.)\non on kevin vs C, every thermostat manufacture I can find has said they want C over kelvin. bunch of implementation are using 0 to 1 for light levels but imagine we could change % to 0 to 100 count - yep count :-) Bunch of sensors simply count pulses of who knows what. Totally agree this is not a unit but this is the pragmatic sort of thing needed EL is a great example of something that many sensors in sports have to deal with in common way. They use this and it works well for them\nSo I changed % to mean percentage. Cel of course stays in (even though it is wrong wrong wrong :-). I have marked count and EL with the \"not so recommended for new producers\" range. But we don't really have a base unit for \"1\" (dimensionless) either. The problem remains that giving a unit doesn't tell you what quantity actually has been measured, and the trend to make more and more finer grained units for the same kinds quantities measured in different ways (as in psi vs. psig etc.) leads into the woods.\njust finding all the email now \u2026 I think the thing that people like about this is it\u2019s not a SI unit, it\u2019s something that tells you something useful about what you are measuring across a wide range of devices. I think we need to think of these from a point of view of broadly useful than are they a SI unit. Agree with your slippery slope psi, psig, etc argument but all of those exist for a reason.\nThe SI unit for mass is kg. says: That leads to being unable to use the SI unit for mass, instead, we are using g. But then we are using derived units such as l (dm3) and l/s (dm3/s). So: We need to decide the above TODO. We need to decide whether to stick with SI units or allow simple names for derived units (gram, minute, liter). Domain-specific units are often weird (1/min = \"rpm\", mmHg or mol/dl in medicine, ...).\nI think a key part of the registry vs existing SI unit tables was to be able to use things that were not proper units but agree things can be cleaned up. 
Clearly rpm and bpm have the same unit in the sense of what the dimensions are but what they are used to measure are pretty different and knowing that it is bpm often provides a level of information about how to process the information. As related to our whole conversation on why we needed the IANA table, the ability to have things in this that are not really a classic SI unit is the key value of SenML. The rules are just guidelines. If we want to change the unit of mass from g to kg, we can do that but the point on the prefixes was not to have both g and kg as that just reduces interoperability", "new_text": "IANA will create a registry of SenML unit symbols. The primary purpose of this registry is to make sure that symbols uniquely map to give type of measurement. Definitions for many of these units can be found in location such as NIST811 and BIPM. Units marked with an asterisk are NOT RECOMMENDED to be produced by new implementations, but are in active use and SHOULD be implemented by consumers that can use the related base units. Note 1: A value of 0.0 indicates the switch is off while 1.0 indicates on and 0.5 would be half on. The preferred name of this unit is \"/\". For historical reasons, the name \"%\" is also provided for the same unit - but note that while that name strongly suggests a percentage (0..100) -- it is however NOT a percentage, but the absolute ratio! Note 2: Assumed to be in WGS84 unless another reference frame is known for the sensor."} {"id": "q-en-senml-spec-bdc8464c3e6ae57492519655751378e4b3fc0582d5b535fa3c8260715379e561", "old_text": "single canonical representation outweighs the convenience of easy human representations or loss of precision in a conversion. Use of SI prefixes such as \"k\" before the unit is not allowed. Instead one can represent the value using scientific notation such a 1.2e3. TODO - Open Issue. Some people would like to have SI prefixes to improve human readability. For a given type of measurement, there will only be one unit type defined. So for length, meters are defined and other lengths such", "comments": "This should now consider and . There are no perfect solutions for these, but the solutions chosen should be close.\nI don't agree wth some of this but agree with a bunch. I took carsten's changes for all the parts I agree with and put them in URL\nI might suggest we pull that PR and then look at what is left (on a bunch of them it is simply if there should be an * suggesting don't use or not) then we can rebase this PR and see what is left to resolve.\nOops, I just changed my PR to react to your comments before seeing\nPushed another version that has both / and % for the same thing, where % is marked with a *\nNow, instead of delete, it's \"not recommend\" and mentions kg as exception. I think this is good to merge.\nThe following units are questionable: % -- this looks like a percentage (0..100), but it is not (0..1). count -- what? This is about a dimensionless quantity? %RH, %EL appear to be percentages, but imply a quantity. EL may be a quantity, but the unit is second. In SI, we don't do new units for new quantities (there is no \"millimeter radius\" or \"nanometer wavelength\"). Bspl -- this really should be in dB (see ), SPL is the quantity again. beat/min, beats -- these should be 1/min (but see ) and 1 (dimensionless); how is beats different from count? We don't have bit (but bit/s), byte. 1/min is there only for heartbeats, not for revolutions (\"rpm\"). Should really be 1/s (but see ). lat and lon are a bit weird, too. 
(Celsius is also questionable -- the rationale of should call for all measurements being expressed in Kelvin.)\non on kevin vs C, every thermostat manufacture I can find has said they want C over kelvin. bunch of implementation are using 0 to 1 for light levels but imagine we could change % to 0 to 100 count - yep count :-) Bunch of sensors simply count pulses of who knows what. Totally agree this is not a unit but this is the pragmatic sort of thing needed EL is a great example of something that many sensors in sports have to deal with in common way. They use this and it works well for them\nSo I changed % to mean percentage. Cel of course stays in (even though it is wrong wrong wrong :-). I have marked count and EL with the \"not so recommended for new producers\" range. But we don't really have a base unit for \"1\" (dimensionless) either. The problem remains that giving a unit doesn't tell you what quantity actually has been measured, and the trend to make more and more finer grained units for the same kinds quantities measured in different ways (as in psi vs. psig etc.) leads into the woods.\njust finding all the email now \u2026 I think the thing that people like about this is it\u2019s not a SI unit, it\u2019s something that tells you something useful about what you are measuring across a wide range of devices. I think we need to think of these from a point of view of broadly useful than are they a SI unit. Agree with your slippery slope psi, psig, etc argument but all of those exist for a reason.\nThe SI unit for mass is kg. says: That leads to being unable to use the SI unit for mass, instead, we are using g. But then we are using derived units such as l (dm3) and l/s (dm3/s). So: We need to decide the above TODO. We need to decide whether to stick with SI units or allow simple names for derived units (gram, minute, liter). Domain-specific units are often weird (1/min = \"rpm\", mmHg or mol/dl in medicine, ...).\nI think a key part of the registry vs existing SI unit tables was to be able to use things that were not proper units but agree things can be cleaned up. Clearly rpm and bpm have the same unit in the sense of what the dimensions are but what they are used to measure are pretty different and knowing that it is bpm often provides a level of information about how to process the information. As related to our whole conversation on why we needed the IANA table, the ability to have things in this that are not really a classic SI unit is the key value of SenML. The rules are just guidelines. If we want to change the unit of mass from g to kg, we can do that but the point on the prefixes was not to have both g and kg as that just reduces interoperability", "new_text": "single canonical representation outweighs the convenience of easy human representations or loss of precision in a conversion. Use of SI prefixes such as \"k\" before the unit is not recommended. Instead one can represent the value using scientific notation such a 1.2e3. The \"kg\" unit is exception to this rule since it is an SI base unit; the \"g\" unit is provided for legacy compatibility. For a given type of measurement, there will only be one unit type defined. So for length, meters are defined and other lengths such"} {"id": "q-en-sframe-f81072599758aba5a849e448f12f5c42012e6216a926b9f934381be7219f3a2c", "old_text": "extra cost. Adding a digital signature to each encrypted frame will be an overkill, instead we propose adding signature over N frames. 
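To illustrate the SenML unit rules discussed above (fold SI prefixes into the value rather than into the unit symbol, keep "kg" as the one prefixed base unit, and treat the registry's "%" name as the 0..1 ratio unit "/"), here is a hedged sketch. The prefix table and function names are assumptions made for this example, not anything defined by the draft.

```python
SI_PREFIXES = {"G": 1e9, "M": 1e6, "k": 1e3, "m": 1e-3, "u": 1e-6, "n": 1e-9}

def to_senml_unit(value: float, unit: str) -> tuple[float, str]:
    """Fold an SI prefix into the value so only the base unit symbol is sent,
    e.g. (5.0, 'kPa') -> (5000.0, 'Pa').  'kg' stays as-is (SI base unit).
    The prefix detection is deliberately naive; it is only a sketch."""
    if unit == "kg":
        return value, unit
    if len(unit) > 1 and unit[0] in SI_PREFIXES:
        return value * SI_PREFIXES[unit[0]], unit[1:]
    return value, unit

def percent_to_ratio(percent: float) -> float:
    # The registry's "%" name denotes the same unit as "/": an absolute
    # ratio in 0..1, so a reading of 45 percent is carried as 0.45.
    return percent / 100.0

assert to_senml_unit(5.0, "kPa") == (5000.0, "Pa")
assert percent_to_ratio(45.0) == 0.45
```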
Because some frames could be lost and never delivered, when the signature is sent, it will also send all the hashes it used to calculate the signature, and the recipient client will only use these hashes if they didn't receive the matching frame. For example Client A sends a signature every 5 frames, so it sends the signature and Hash(Frame1), ...,Hash(Frame5), client B received only frames 1,2,4 and 5. When B receives the signature and the hashes, it will compute the hashes of frames 1,2,4 and 5 locally and use the received Hash(Frame3) to verify the signature. It is up to the application to decide what to do when signature verification fails. The signature keys are exchanged out of band along the secret keys. 4.", "comments": "I think that the hash/auth tag list should be better moved at the end of the frame after the auth tag to make it easier to apply the encryption and generating the auth tag without having to reconstruct the frame to insert the signature in the middle of it. So, my new proposal would be first to generate the frame header with the s bit = 1, append the payload, encrypt the frame and add the auth tag as normal: Then append the tags before that frame at the end and the number of auth tags without the current frame one Then calculate the signature of the auth tags in reverse order: And then append the Signature at the end of the frame: When the receiver gets the frame, it will also be able to reuse the buffer to calculate the Signature without having to move momory around. In case it is not secure to use the auth tags, it would be still be similar: The receiver would have to generate the Hash of the frame up to the Auth tag, and then calculate the signature with the rest of the hashes on the list.\nAdding Signature at the end will mean adding a new offset field to point to where the Signature block start as it will be a variable length block (depends how many hashes it has), where if it was put at the beginning, we won't need to add an offset field.I think doing signature over the auth tag is fineYou are right about the startFrameIndex, it won't be enough. What about having startFrameIndex 4 bytes or whatever, then having one byte frameOffset for every other hash to get the frameindex by using this offset from the start index, assuming 255 is enough range. I just don't want to add full length frameId for every hash\ndidn't you said that: Why is it fixed if it is at the beginning, but variable if it is in the end? why do we need the at all? The receiver would just need to have an ordered map of the pending to be verified authentication tags and remove them when they signature for each is received. The receiver should have also a verification timeout and if a signature is not received for that tag in time, it would be considered as failed.\nOk, I see, ECDSA signatures are ASN1 encoded objects, so the length can be derived by parsing it from the start bite. Is that correct?\nI have made a new tentative approach with my comments above. 
What do you think?\nSorry I should have been more clear The signature itself is fixed length (ex 64 bytes for ed25519) but The entire signature block will be variable depends on the number of signature, unless you want to make this number fixed (ex always 50 hashes)We need frameIndex so the recipient know which hashes to use to verify the signature, we can't just relay on an ordered list of all previous hashes because the signature block could have hashes of missing frames, or even the signature packet could be lost itself\nWhat is a signature block? What is the difference between the signature and the signature block? That should be covered by my proposal.\nsignature = sign(data) Signature block is what I referred to on my drawing as signature header+ signature + hash list\nIf I understood your proposal correctly, you are sending the whole list of hashes so the receiver can recreate the if any of them is missing. So you don't really don't need the index of each frame, as you there should be (almost) a 1-to-1 mapping between the hashes/auth tag and the frame index (how probable is to have a hash or auth tag collision?). So, given the hash/auth tag list, the receiver could derive the frame index of it (by keeping an internal map)\nI think that's good for now. My initial thought was to have the sender chose the packet to sign, they can be different number each time, also they might chose not ti sign every packet, for example random X packets every Y seconds. Anyway, we can always improve later, your proposal is good to start with\nThis should be also possible with current approach, right? Have you seen any reason that it couldn't be done?\nIn the draft it says that: >when the signature is sent, it will also send all the hashes it used to calculate the signature How is the bitstream format of the signature+hashes, also how is the signature added to the frame (after the auth tag?) and how is the presence of the signature in a frame signaled?\nHow about something like this for the signature format? The signature is appended after the block if the bit of the SFrame header is 1. The first byte (SLEN) is the length of the signature data . After the signature there will be 1 or more blocks of HMACs. A they can be off different size, each HMAC block will have 1 byte header with HNUM (4 bits) indicating the number of HMACs present and HLEN(4 bits) to indicate the size in bytes of each HMAC. To indicate the end of the HMACS blocks an HMAC header block with HNUM=0 and HLEN=0 will be inserted.\nOne question, what are the sizes of the HMACs? I have considered the ones for AES which are 4 and 8, but in GCM are bigger, right? Are they required to be even or a power of 2? we could do either length=2^(HLEN+1) or LEN=2(HLEN+2) in that case.\nI don't think they have to be 2*n. SHA-256 is, of course, but SHA-384 isn't\nCurrently it says that: However, if using SVC, the hash will be calculated over all the frames of the different spatial layers within the same superframe/picture. However the SFU will be able to drop frames within the same stream (either spatial or temporal) to match target bitrate. In that case, the receiver will never get a full frame and be able to check the signature. Which will render it a kind of useless feature. 
An easy way of solving the issue would be to perform signature only on the base layer or take into consideration the dependency graph and send multiple signatures in parallel (each for a branch of the dependency graph).\nThis is a good point and suggests 'frame' really refers to each independently decodable unit.\nI think I mixed two things on this issue and my original comment is not totally accurate. The signature will contain the HMAC of all the frames it contains, so if any of them is dropped, the receiver can still compute the signature of the received ones. However, if the signature is not sent on the base layer, then it can be dropped by the SFU. I think it still makes sense to send different signatures over different layers to avoid having to send data on the wire that will be dropped by the SFU.", "new_text": "extra cost. Adding a digital signature to each encrypted frame would be overkill; instead we propose adding a signature over multiple frames. The signature is calculated by concatenating the authentication tags of the frames that the sender wants to authenticate (in reverse sent order) and signing it with the signature key. Signature keys are exchanged out of band along with the secret keys. The authentication tags for the previous frames covered by the signature and the signature itself will be appended at the end of the frame, after the current frame authentication tag, in the same order that the signature was calculated, and the SFrame header metadata signature bit (S) will be set to 1. Note that the authentication tag for the current frame will only authenticate the SFrame header and the encrypted payload, and not the signature nor the previous frames' authentication tags (N-1 to N-M) used to calculate the signature. The last byte (NUM) after the authentication tag list and before the signature indicates the number of the authentication tags from previous frames present in the current frame. All the authentication tags MUST have the same size, which MUST be equal to the authentication tag size of the current frame. The signature has a fixed size depending on the signature algorithm used (for example, 64 bytes for Ed25519). The receiver has to keep track of all the frames received but not yet verified, by storing the authentication tags of each received frame. When a signature is received, the receiver will verify it with the signature key associated with the key id of the frame the signature was sent in. If the verification is successful, the receiver will mark the frames as authenticated and remove them from the list of not yet verified frames. It is up to the application to decide what to do when signature verification fails. When using SVC, the hash will be calculated over all the frames of the different spatial layers within the same superframe/picture. However the SFU will be able to drop frames within the same stream (either spatial or temporal) to match target bitrate. If the signature is sent on a frame whose layer is dropped by the SFU, the receiver will not receive it and will not be able to verify the signature over the other received layers. An easy way of solving the issue would be to perform signature only on the base layer or take into consideration the frame dependency graph and send multiple signatures in parallel (each for a branch of the dependency graph). In case of simulcast or K-SVC, each spatial layer should be authenticated with different signatures to prevent the SFU from discarding frames with the signature info. 
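As a rough illustration of the receiver behaviour described above (store authentication tags for frames that are not yet verified, then check a signature computed over the carried tags), the following sketch uses Ed25519 from the Python cryptography package. It is not the draft's normative processing; the class and parameter names are invented, and details such as the verification timeout are omitted.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

class SignatureTracker:
    """Tracks not-yet-verified frames and checks periodic signatures."""

    def __init__(self, public_key_bytes: bytes):
        self._pub = Ed25519PublicKey.from_public_bytes(public_key_bytes)
        self._pending = {}  # frame counter -> authentication tag

    def on_frame(self, frame_counter: int, auth_tag: bytes) -> None:
        # Remember the tag until a covering signature arrives (or times out).
        self._pending[frame_counter] = auth_tag

    def on_signature(self, carried_tags: list[bytes], signature: bytes) -> bool:
        # The signed data is the concatenation of the covered frames'
        # authentication tags, in the order they are carried in the frame.
        signed_data = b"".join(carried_tags)
        try:
            self._pub.verify(signature, signed_data)
        except InvalidSignature:
            return False  # the application decides what to do on failure
        # Mark matching pending frames as authenticated.
        verified = {c for c, tag in self._pending.items() if tag in carried_tags}
        for c in verified:
            del self._pending[c]
        return True
```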
In any case, it is possible that the frame with the signature is lost or the SFU drops it, so the receiver MUST be prepared to not receive a signature for a frame and remove it from the pending to be verified list after a timeout. 4."} {"id": "q-en-smime-444895715fa5b3e6169fa4469f0d490f9d5f70604e9e10ee7d229f440db18d89", "old_text": "Modification of the ciphertext can go undetected if authentication is not also used, which is the case when sending EnvelopedData without wrapping it in SignedData or enclosing SignedData within it. If an implementation is concerned about compliance with National Institute of Standards and Technology (NIST) key size", "comments": "This adds a security consideration for padding. THis addresses issue\nWith the move from CBC to GCM as a padding mode, there is now no padding for a S/MIME message at all. We should probably add a security consideration dealing with the fact that in some cases traffic analysis might be able to get at the content of an encrypted message based on its size.", "new_text": "Modification of the ciphertext can go undetected if authentication is not also used, which is the case when sending EnvelopedData without wrapping it in SignedData or enclosing SignedData within it. This is one of the reasons for moving from EnvelopedData to AuthEnvelopedData as the authenticated encryption algorithms provide the authentication without needing the SignedData layer. If an implementation is concerned about compliance with National Institute of Standards and Technology (NIST) key size"} {"id": "q-en-smime-444895715fa5b3e6169fa4469f0d490f9d5f70604e9e10ee7d229f440db18d89", "old_text": "the fact that the CEK is going to be known to both parties. Thus the origination is always built on a presumption that \"I did not send this message to myself.\" ", "comments": "This adds a security consideration for padding. THis addresses issue\nWith the move from CBC to GCM as a padding mode, there is now no padding for a S/MIME message at all. We should probably add a security consideration dealing with the fact that in some cases traffic analysis might be able to get at the content of an encrypted message based on its size.", "new_text": "the fact that the CEK is going to be known to both parties. Thus the origination is always built on a presumption that \"I did not send this message to myself.\" All of the authenticated encryption algorithms in this document use counter mode for the encryption portion of the algorithm. This means that the length of the plain text will always be known as the cipher text length and the plain text length are always the same. This information can enable passive observers to infer information based solely on the length of the message. Applications for which this is a significant problem need to provide some type of padding algorithm so that the length of the message does not provide this information. "} {"id": "q-en-sniencryption-914239e60c23c93d6c2bd889a54ce30985eab73f57e510afa0bbeea46131cada", "old_text": "4- The multiplexed server establishes the connection to the protected service, thus revealing the identity of the service. SNI encryption designs MUST mitigate this attack. 2.2.", "comments": "\u2026fix the various typos flagged by DKG, submit as WG draft.", "new_text": "4- The multiplexed server establishes the connection to the protected service, thus revealing the identity of the service. One of the goals of SNI encryption is to prevent adversaries from knowing which Hidden Service the client is using. 
Successful replay attacks breaks that goal by allowing adversaries to discover that service. SNI encryption designs MUST mitigate this attack. 2.2."} {"id": "q-en-sniencryption-914239e60c23c93d6c2bd889a54ce30985eab73f57e510afa0bbeea46131cada", "old_text": "HTTPS, which effectively treats the Fronting Server as an HTTP Proxy. In this solution, the client establishes a TLS connection to the Fronting Server, and then issues an HTTP Connect request to the Hidden Server. This will effectively establish an end-to-end HTTPS over TLS connection between the client and the Hidden Server, mitigating the issues described in nocontextsharing. The HTTPS in HTTPS solution requires double encryption of every packet. It also requires that the fronting server decrypts and relay", "comments": "\u2026fix the various typos flagged by DKG, submit as WG draft.", "new_text": "HTTPS, which effectively treats the Fronting Server as an HTTP Proxy. In this solution, the client establishes a TLS connection to the Fronting Server, and then issues an HTTP Connect request to the Hidden Server. This will establish an end-to-end HTTPS over TLS connection between the client and the Hidden Server, mitigating the issues described in nocontextsharing. The HTTPS in HTTPS solution requires double encryption of every packet. It also requires that the fronting server decrypts and relay"} {"id": "q-en-sniencryption-914239e60c23c93d6c2bd889a54ce30985eab73f57e510afa0bbeea46131cada", "old_text": "Antoine Delignaut-Lavaud. The delegation token design comes from many people, including Ben Schwartz and Rich Salz. 9. References", "comments": "\u2026fix the various typos flagged by DKG, submit as WG draft.", "new_text": "Antoine Delignaut-Lavaud. The delegation token design comes from many people, including Ben Schwartz, Brian Sniffen and Rich Salz. 9. References"} {"id": "q-en-stateless-a5dc70a9e28eda22dc0cd1846836b00c446dcc6263f83ddd8bbcb529421ef650", "old_text": "Transporting the state needed by a client to process a response as serialized state information in the token has several significant and non-obvious security and privacy implications that need to be mitigated; see serialized-state for recommendations. In addition to the format requirements outlined there, implementations need to ensure that they are not vulnerable to maliciously crafted, delayed, or replayed tokens. It is generally expected that the use of encryption, integrity protection, and replay protection for serialized state is appropriate. In the absence of integrity and reply protection, an on-path attacker or rogue server/intermediary could return a state (either one modified in a reply, or an unsolicited one) that could alter the internal state of the client. It is this reason that at least the use of integrity protection on the token is always recommended. It maybe that in some very specific case, as a result of a careful and detailed analysis of any potential attacks, that there may be cases where such cryptographic protections do not add value. The authors of this document have not found such a use case as yet. RFC3610 with a 64-bit tag is RECOMMENDED, combined with a sequence number and a replay window. Where encryption is not needed, RFC6234, combined with a sequence number and a replay window, may be used. When using an encryption mode that depends on a nonce, such as AES- CCM, repeated use of the same nonce under the same key causes the cipher to fail catastrophically. 
If a nonce is ever used for more than one encryption operation with the same key, then the same key stream gets used to encrypt both plaintexts and the confidentiality guarantees are voided. Devices with low-quality entropy sources -- as is typical with constrained devices, which incidentally happen to be a natural candidate for the stateless mechanism described in this document -- need to carefully pick a nonce generation mechanism that provides the above uniqueness guarantee. Additionally, since it can be difficult to use AES-CCM securely when using statically configured keys, implementations should use RFC4107. 6.", "comments": "the automated key management reference was made in error. RFC4107 is about systems like IKEv2, TLS, etc. where a key has to be agreed upon between two (or more nodes). The key needed for the encrypted state token never leaves the sender, so it can be a randomly generated key, provided that the nonce is never repeated. This text fixed the considerations for this section.", "new_text": "Transporting the state needed by a client to process a response as serialized state information in the token has several significant and non-obvious security and privacy implications that need to be mitigated; see serialized-state for recommendations. In addition to the format requirements outlined there, implementations need to ensure that they are not vulnerable to maliciously crafted, delayed, or replayed tokens. It is generally expected that the use of encryption, integrity protection, and replay protection for serialized state is appropriate. In the absence of integrity and replay protection, an on-path attacker or rogue server/intermediary could return a state (either one modified in a reply, or an unsolicited one) that could alter the internal state of the client. It is for this reason that at least the use of integrity protection on the token is always recommended. It may be that in some very specific cases, as a result of a careful and detailed analysis of any potential attacks, such cryptographic protections do not add value. The authors of this document have not found such a use case as yet, but this is a local decision. It should further be emphasized that the encrypted state is created by the sending node, and decrypted by the same node when receiving a response. The key is not shared with any other system. Therefore the choice of encryption scheme and the generation of the key for this system is purely a local matter. When encryption is used, the use of RFC3610 with a 64-bit tag is recommended, combined with a sequence number and a replay window. This choice is informed by available hardware acceleration on many constrained systems. If a different algorithm is available and accelerated on the sender, with similar strength, then it SHOULD be preferred. Where privacy of the state is not required, and encryption is not needed, RFC6234, combined with a sequence number and a replay window, may be used. The size of the replay window depends upon the number of requests that need to be outstanding at a time. This can be determined from the rate at which new ones are made, and the expected duration in which responses are expected. For instance, given a CoAP ACK_TIMEOUT of 2s, and a request rate of 10 requests/second, any request that is not answered within 2s will be considered to have failed. Thus at most 20 requests can be outstanding at a time, and any convenient replay window larger than 20 will work. 
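The sizing argument above (request rate times the response timeout bounds the number of outstanding requests) maps directly onto the sliding-window check mentioned in the next sentence. The sketch below is a minimal illustration under those assumptions; the function and class names are made up, and a deployment would size the window from its own traffic.

```python
import math

def replay_window_bits(request_rate_per_s: float, ack_timeout_s: float) -> int:
    """Smallest convenient window (rounded up to 32-bit words) covering the
    maximum number of requests that can be outstanding at once."""
    outstanding = math.ceil(request_rate_per_s * ack_timeout_s)
    return max(32, 32 * math.ceil(outstanding / 32))

class ReplayWindow:
    """Sliding window over sequence numbers, one bit per seen value."""

    def __init__(self, bits: int):
        self.bits = bits
        self.highest = -1
        self.mask = 0

    def accept(self, seq: int) -> bool:
        if seq > self.highest:                     # new highest: slide the window
            self.mask = (self.mask << (seq - self.highest)) | 1
            self.mask &= (1 << self.bits) - 1
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.bits or (self.mask >> offset) & 1:
            return False                           # too old, or already seen
        self.mask |= 1 << offset
        return True

# Example from the text: 10 requests/second and a 2 s ACK_TIMEOUT give at
# most 20 outstanding requests, so a 32-bit window is sufficient.
assert replay_window_bits(10, 2.0) == 32
```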
As replay windows are often implemented with a sliding window and a bit, the use of a 32-bit window would be sufficient. For use cases where requests are being relayed from another node, the request rate may be estimated by the total link capacity allocated for that kind of traffic. An alternate view would consider how many IPv6 Neighbor Cache Entries (NCEs) the system can afford to allocate for this use. When using an encryption mode that depends on a nonce, such as AES- CCM, repeated use of the same nonce under the same key causes the cipher to fail catastrophically. If a nonce is ever used for more than one encryption operation with the same key, then the same key stream gets used to encrypt both plaintexts and the confidentiality guarantees are voided. Devices with low-quality entropy sources -- as is typical with constrained devices, which incidentally happen to be a natural candidate for the stateless mechanism described in this document -- need to carefully pick a nonce generation mechanism that provides the above uniqueness guarantee. RFC8613 appendix B.1.1 (\"Sender Sequence Number\") provides a model for how to maintain non-repeating nonces without causing excessive wear of flash. 6."} {"id": "q-en-suit-firmware-encryption-68a4fb9ea0dd59ce34ddbc623537fa9b23f50464b2ceeaa732fc1dfdd44aeec1", "old_text": "8. The ability to restart an interrupted firmware update is often a requirement for low-end IoT devices. To fulfill this requirement it is necessary to chunk a larger firmware image into blocks and to encrypt each block individually using a cipher that does not increase the size of the resulting ciphertext (i.e., by not adding an authentication tag after each encrypted block). When the encrypted firmware image has been transferred to the device, it will typically be stored in a staging area. Then, the bootloader starts decrypting the downloaded image block-by-block and swaps it with the currently valid image. Note that the currently valid image is available in cleartext and hence it has to be re-encrypted before copying it to the staging area. This approach of swapping the newly downloaded image with the previously valid image is often referred as A/B approach. A/B refers to the two storage areas, sometimes called slots, involved. Two slots are used to allow the update to be reversed in case the newly obtained firmware image fails to boot. This approach adds robustness to the firmware update procedure. When an update gets aborted while the bootloader is decrypting the newly obtained image and swapping the blocks, the bootloader can restart where it left off. This technique again offers robustness. To accomplish this functionality, ciphers without integrity protection are used to encrypt the firmware image. Integrity protection for the firmware image is, however, important and therefore the image digest defined in I-D.ietf-suit-manifest MUST be used. I-D.housley-cose-aes-ctr-and-cbc registers several ciphers that do not offer integrity protection. 9.", "comments": "Thanks for your comment, Russ. Here is the issue: Only after the swap has been completed the plaintext firmware image is in the primary slot. Before the swap, the encrypted firmware image was in the secondary slot. 
It would be possible to do a \"dummy decrypt\" to compute the hash prior to doing the decrypt with swapping.\nYes, the AES-CTR and AES-CBC document in the COSE WG states that the signature provides the integrity protection, so this description needs to say how that happens.\nRuss, I have added text to address your comment.", "new_text": "8. Flash memory on microcontrollers is a type of non-volatile memory that erases data in units called blocks, pages or sectors and re- writes data at byte level (often 4-bytes). Flash memory is furthermore segmented into different memory regions, which store the bootloader, different versions of firmware images (in so-called slots), and configuration data. image-layout shows an example layout of a microcontroller flash area. The primary slot contains the firmware image to be executed by the bootloader, which is a common deployment on devices that do not offer the concept of position independent code. When the encrypted firmware image has been transferred to the device, it will typically be stored in a staging area, in the secondary slot in our example. At the next boot, the bootloader will recognize a new firmware image in the secondary slot and will start decrypting the downloaded image sector-by-sector and will swap it with the image found in the primary slot. The swap should only take place after the signature on the plaintext is verified. Note that the plaintext firmware image is available in the primary slot only after the swap has been completed, unless \"dummy decrypt\" is used to compute the hash over the plaintext prior to executing the decrypt operation during a swap. Dummy decryption here refers to the decryption of the firmware image found in the secondary slot sector-by-sector and computing a rolling hash over the resulting plaintext firmware image (also sector-by-sector) without performing the swap operation. While there are performance optimizations possible, such as conveying hashes for each sector in the manifest rather than a hash of the entire firmware image, such optimizations are not described in this specification. This approach of swapping the newly downloaded image with the previously valid image is often referred as A/B approach. A/B refers to the two slots involved. Two slots are used to allow the update to be reversed in case the newly obtained firmware image fails to boot. This approach adds robustness to the firmware update procedure. Since the image in primary slot is available in cleartext it may need to re-encrypted it before copying it to the secondary slot. This may be necessary when the secondary slot has different access permissions or when the staging area is located in an off-chip flash memory and therefore more vulnerable to physical attacks. Note that this description assumes that the processor does not execute encrypted memory (i.e. using on-the-fly decryption in hardware). The ability to restart an interrupted firmware update is often a requirement for low-end IoT devices. To fulfill this requirement it is necessary to chunk a firmware image into sectors and to encrypt each sector individually using a cipher that does not increase the size of the resulting ciphertext (i.e., by not adding an authentication tag after each encrypted block). When an update gets aborted while the bootloader is decrypting the newly obtained image and swapping the sectors, the bootloader can restart where it left off. This technique offers robustness and better performance. 
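A minimal sketch of the "dummy decrypt" idea described above: decrypt the staged image sector by sector, feed the plaintext into a rolling SHA-256 without storing it, compare the digest against the manifest, and only then perform the swap. The helper callbacks (read_sector, decrypt_sector, swap_sector) are assumptions for this example, not APIs defined by the draft.

```python
import hashlib
from typing import Callable

def dummy_decrypt_digest(read_sector: Callable[[int], bytes],
                         decrypt_sector: Callable[[int, bytes], bytes],
                         num_sectors: int) -> bytes:
    """Decrypt the staged image sector-by-sector, hashing the plaintext
    without writing it anywhere, and return the rolling SHA-256 digest."""
    h = hashlib.sha256()
    for sector in range(num_sectors):
        h.update(decrypt_sector(sector, read_sector(sector)))
    return h.digest()

def verify_then_swap(read_sector, decrypt_sector, swap_sector,
                     num_sectors: int, manifest_digest: bytes) -> bool:
    # First pass: "dummy decrypt" to check the image digest from the manifest.
    if dummy_decrypt_digest(read_sector, decrypt_sector, num_sectors) != manifest_digest:
        return False
    # Second pass: decrypt again and swap each sector into the primary slot.
    # A real bootloader would record progress so an interrupted swap can resume.
    for sector in range(num_sectors):
        swap_sector(sector, decrypt_sector(sector, read_sector(sector)))
    return True
```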
For this purpose ciphers without integrity protection are used to encrypt the firmware image. Integrity protection for the firmware image must, however, be provided and the suit-parameter-image-digest, defined in Section 8.4.8.6 of I-D.ietf-suit-manifest, MUST be used. I-D.housley-cose-aes-ctr-and-cbc registers AES Counter mode (AES-CTR) and AES Cipher Block Chaining (AES-CBC) ciphers that do not offer integrity protection. These ciphers are useful for the use cases that require firmware encryption on IoT devices. For many other use cases where software packages, configuration information or personalization data needs to be encrypted, the use of Authenticated Encryption with Additional Data (AEAD) ciphers is preferred. The following sub-sections provide further information about the initialization vector (IV) selection for use with AES-CBC and AES-CTR in the firmware encryption context. An IV MUST NOT be re-used when the same key is used. For this application, the IVs are not random but rather based on the slot/sector-combination in flash memory. The text below assumes that the block-size of AES is (much) smaller than sector size. The typical sector-size of flash memory is in the order of KiB. Hence, multiple AES blocks need to be decrypted until an entire sector is completed. 8.1. In AES-CBC a single IV is used for encryption of firmware belonging to a single sector since individual AES blocks are chained together, as shown in aes-cbc. The numbering of sectors in a slot MUST start with zero (0) and MUST increase by one with every sector till the end of the slot is reached. The IV follows this numbering. For example, let us assume the slot size of a specific flash controller on an IoT device is 64 KiB, the sector size 4096 bytes (4 KiB) and AES-128-CBC uses an AES-block size of 128 bit (16 bytes). Hence, sector 0 needs 4096/16=256 AES-128-CBC operations using IV 0. If the firmware image fills the entire slot then that slot contains 16 sectors, i.e. IVs ranging from 0 to 15. 8.2. Unlike AES-CBC, AES-CTR uses an IV per AES operation. Hence, when an image is encrypted using AES-CTR-128 or AES-CTR-256, the IV MUST start with zero (0) and MUST be incremented by one for each 16-byte plaintext block within the entire slot. Using the previous example with a slot size of 64 KiB, the sector size 4096 bytes and the AES plaintext block size of 16 bytes requires IVs from 0 to 255 in the first sector and 16 * 256 IVs for the remaining sectors in the slot. The last IV used to encrypt data in the slot is therefore 9."} {"id": "q-en-suit-firmware-encryption-e7b5a3925cd52967bece723211b21c5504782654a620e158c151aeb17e6b7ae0", "old_text": "a digital signature), multiple recipients, encryption of manifests (in comparison to firmware images).]] 10. The algorithms described in this document assume that the party", "comments": "Adding an example manifest using AES-KW in case where the Author knows the recipients (the Author and the Devices have the same pre-shared key).\nThis is a good example. I looked through it and it made sense to me.\nIn the firmware encryption draft we have defined an extension to the envelope that indicates what encryption parameters are applied to which component. Here is the CDDL: Currently the text provides no explanation or an example of how this linkage works.\nThis aspect is related to URL and to the mailing list discussion URL\nThis issue is also related to URL\nWith in PR , this issue is solved. Internet-Draft draft-ietf-suit-firmware-encryption-URL is now available. 
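The IV numbering rules described in the AES-CBC and AES-CTR text above reduce to simple arithmetic over the sector and block indices. The sketch below only computes the indices for the worked 64 KiB / 4 KiB example; the 16-byte big-endian encoding of the counter value is an assumption of this example, not something stated in the quoted text.

```python
AES_BLOCK = 16

def cbc_iv_index(sector: int) -> int:
    # AES-CBC: a single IV per sector, numbered from 0 upwards.
    return sector

def ctr_iv_index(sector: int, byte_offset_in_sector: int,
                 sector_size: int = 4096) -> int:
    # AES-CTR: one IV per 16-byte block, counted across the entire slot.
    return (sector * sector_size + byte_offset_in_sector) // AES_BLOCK

def iv_bytes(index: int) -> bytes:
    # Assumed encoding for the example: 16-byte big-endian counter value.
    return index.to_bytes(AES_BLOCK, "big")

# Worked example from the text: 64 KiB slot, 4 KiB sectors, 16-byte AES blocks.
assert cbc_iv_index(15) == 15            # last sector uses CBC IV 15
assert ctr_iv_index(0, 4080) == 255      # last block of sector 0
assert ctr_iv_index(15, 4080) == 4095    # last block of the whole slot
```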
It is a work item of the Software Updates for Internet of Things (SUIT) WG of the IETF. Title: Encrypted Payloads in SUIT Manifests Authors: Hannes Tschofenig Russ Housley Brendan Moran David Brown Ken Takayama Name: draft-ietf-suit-firmware-encryption-URL Pages: 52 Dates: 2024-03-03 Abstract: This document specifies techniques for encrypting software, firmware, machine learning models, and personalization data by utilizing the IETF SUIT manifest. Key agreement is provided by ephemeral-static (ES) Diffie-Hellman (DH) and AES Key Wrap (AES-KW). ES-DH uses public key cryptography while AES-KW uses a pre-shared key. Encryption of the plaintext is accomplished with conventional symmetric key cryptography. The IETF datatracker status page for this Internet-Draft is: URL There is also an HTMLized version available at: URL A diff from the previous version is available at: https://author-URL Internet-Drafts are also available by rsync at: URL", "new_text": "a digital signature), multiple recipients, encryption of manifests (in comparison to firmware images).]] The following manifests examplify how to deliver the encrypted firmware and its encryption info to the Devices. The examples are signed using the following ECDSA secp256r1 key: The corresponding public key can be used to verify these examples: Each example uses SHA256 as the digest function. 9.1. Diagnostic notation of the SUIT Manifest: In hex: 10. The algorithms described in this document assume that the party"} {"id": "q-en-tls-exported-authenticator-80d776118aa174a7644b543bfe55fd4dfd303b9299f3eafd31e5fe460d2f2e4c", "old_text": "If an authenticator request is present, the extensions used to guide the construction of these messages are taken from the authenticator request. If the certificate_request_context from the authenticator request has already been used in the connection, then no authenticator should be constructed. If there is no authenticator request, the extensions are chosen from the TLS handshake. Only servers can provide an authenticator without a corresponding request. In such cases, ClientHello extensions are used to determine permissible extensions in the Certificate message. 4.2.1.", "comments": "In Section 3 of the RFC, it states that the authenticator request includes those extensions allowed for the CertificateRequest message. However, this does not mean it could only includes them and other extensions could also be sent. In the TLS spec CertificateRequest explicitly states \"Clients MUST ignore unrecognized extensions.\", we should have the same here.", "new_text": "If an authenticator request is present, the extensions used to guide the construction of these messages are taken from the authenticator request. Unrecognized extensions MUST be ignored. If the certificate_request_context from the authenticator request has already been used in the connection, then no authenticator should be constructed. If there is no authenticator request, the extensions are chosen from the TLS handshake. Only servers can provide an authenticator without a corresponding request. In such cases, ClientHello extensions are used to determine permissible extensions in the Certificate message. 
4.2.1."} {"id": "q-en-tls-flags-b2f5a6d596ac8f7d264c92881b549ba236d03b883580c46875e31a5cb16ec743", "old_text": "A Flags Extension for TLS 1.3 draft-ietf-tls-tlsflags-09 Abstract", "comments": "This proposed text disallows specifying a flag extension where the response is a regular extension with content rather than a mere acknowledgment.\nWell, I think of this as being the request extension. I can live with this if it's the WG consensus, but it seems like it would be useful to be able to say \"I support largeish extension X\" (we already have some such extensions) and it would be kind of sad if that took 4 bytes.\nSeems strange to have to state this, but that's OK. Do you want to explain why? That is, because this results in a \"response\" extension appearing without a \"request\" extension, which is not permitted.", "new_text": "A Flags Extension for TLS 1.3 draft-ietf-tls-tlsflags-10 Abstract"} {"id": "q-en-tls-flags-b2f5a6d596ac8f7d264c92881b549ba236d03b883580c46875e31a5cb16ec743", "old_text": "understands and supports the extension, but some extensions do not require this acknowledgement. A flag proposed by the client in ClientHello (CH) that requires acknowledgement SHOULD be acknowledged in either ServerHello (SH), in EncryptedExtensions (EE), in Certificate (CT), or in", "comments": "This proposed text disallows specifying a flag extension where the response is a regular extension with content rather than a mere acknowledgment.\nWell, I think of this as being the request extension. I can live with this if it's the WG consensus, but it seems like it would be useful to be able to say \"I support largeish extension X\" (we already have some such extensions) and it would be kind of sad if that took 4 bytes.\nSeems strange to have to state this, but that's OK. Do you want to explain why? That is, because this results in a \"response\" extension appearing without a \"request\" extension, which is not permitted.", "new_text": "understands and supports the extension, but some extensions do not require this acknowledgement. For a flag that does require a response, the only proper response is the same flag in a flags extension. This extension MUST NOT be used to specify extensions where the response is a proper extension with content. A flag proposed by the client in ClientHello (CH) that requires acknowledgement SHOULD be acknowledged in either ServerHello (SH), in EncryptedExtensions (EE), in Certificate (CT), or in"} {"id": "q-en-tls-subcerts-af3afd6542a20048a6c8153279d54c8b53bfd744e128c13948d7a4b31f69975d", "old_text": "entire certificate and the delegated credential, cannot. Each delegated credential is bound to a specific signature algorithm for use use in the TLS handshake (RFC8446 section 4.2.3). This prevents them from being used with other, perhaps unintended signature algorithms. 3.2.", "comments": "Addresses .\n.Add some additional text to highlight that sig on server cert and DC can be different. from . Hi Sean, Thanks for the follow-up. I refreshed my memory as much as I could. Please see my responses in line. Yours, Daniel Sean Turner wrote: I think I would rather take the structure: 1 - Use case 2 - Problem description 3 - Solution I am fine with whatever fits best to you. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. 
However, short-lived certificates need to be renewed more frequently than long-lived certificates. If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients. With short-lived certificates, there is a smaller window of time to renew a certificates and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a \"Certification Authority\" or CA) [RFC8446 ] [RFC5280 ]. This organizational separation makes the TLS server operator dependent on the CA for some aspects of its operations, for example: Whenever the server operator wants to deploy a new certificate, it has to interact with the CA. The server operator can only use TLS signature schemes for which the CA will issue credentials. To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external CA. These credentials only enable the recipient of the delegation to speak for names that the CA has authorized. Furthermore, this mechanism allows the server to use modern signature algorithms such as Ed25519 [RFC8032 ] even if their CA does not support them. We will refer to the certificate issued by the CA as a \"certificate\", or \"delegation certificate\", and the one issued by the operator as a \"delegated credential\" or \"DC\". I think my concern has been clarified by Ilari. It was resolved by your b). I am fine with both text. works for me. The text does not address my concern and will recap my perspective of the discussion. My initial comment was that the reference [KEYLESS] points to a solution that does not work with TLS 1.3 and currently the only effort compatible with TLS 1.3 is draft-mglt-lurk-tls13. I expect this reference to be mentioned and I do not see it in the text proposed text. If the intention with [KEYLESS] is to mention TLS Handshake Proxying techniques, then I argued that other mechanisms be mentioned and among others draft-mglt-lurk-tls12 and draft-mglt-lurk-tls13 that address different versions of TLS. This was my second concern and I do not see any of these references being mentioned. The way I understand your response is that the paper pointed out by [KEYLESS] is not Cloudflare's commercial product but instead a generic TLS Handshake proxying and that you propose to add a reference to an academic paper that discusses the LURK extension for TLS 1.2. I appreciate that other solutions - and in this case LURK, are being considered. I am wondering however why draft-mglt-lurk-tls12 is being replaced by an academic reference [REF2] and why draft-mglt-lurk-tls13 has been omitted. It seems that we are trying to avoid any reference of the work that is happening at the IETF. Is there anything I am missing ? My suggestion for the text OLD: (e.g., a PKCS interface or a remote signing mechanism [KEYLESS]) NEW: (e.g., a PKCS interface or a remote signing mechanism such as [draft-mglt-lurk-tls13] or [draft-mglt-lurk-tls12] ([LURK-TLS12]) or [KEYLESS]) with [KEYLESS] Sullivan, N. and D. Stebila, \"An Analysis of TLS Handshake Proxying\", IEEE TrustCom/BigDataSE/ISPA 2015, 2015. [LURK-TLS12] Boureanu, I., Migault, D., Preda, S., Alamedine, H.A., Mishra, S. Fieau, F., and M. 
Mannan, \"LURK: Server-Controlled TLS Delegation\u201d, IEEE TrustCom 2020, URL, 2020. [draft-mglt-lurk-tls12] IETF draft [draft-mglt-lurk-tls13] IETF draft It is unclear to me what is generic about the paper pointed by [KEYLESS] or [REF1]. In any case, for clarification, the paper clearly refers to Cloudflare commercial products as opposed to a generic mechanism, and this will remain whatever label will be used as a reference. Typically, Cloudflare co-authors the paper appears 12 times in the 8 page paper. The contribution section mentions: \"\"\" Our work focuses specifically on the TLS handshake proxying system implemented by CloudFlare in their Keyless SSL product [6], [7] \"\"\" The methodology mentions says: \"\"\" We tested TLS key proxying using CloudFlare\u2019s implementation, which was implemented with the following three parts: \"\"\" \"\"\" The different scenarios were set up through CloudFlare\u2019s control panel. In the direct handshake handshake scenario, the site is set up with no reverse proxy. In the scenario where the key is held by the edge server, the same certificate that was used on the origin is uploaded to CloudFlare and the reverse proxy is enabled for the site. \"\"\" Cloudflare appears in at least references URL CloudFlare Inc., \u201cCloudFlare Keyless SSL,\u201d Sep. 2014, URL N. Sullivan, \u201cKeyless SSL: The nitty gritty technical details,\u201d Sep. 2014, URL works for me. works for me seems clearer. I suspect earlier clarifications addressed this. thanks that it way clearer. Daniel Migault Ericsson", "new_text": "entire certificate and the delegated credential, cannot. Each delegated credential is bound to a specific signature algorithm for use in the TLS handshake (RFC8446 section 4.2.3). This prevents them from being used with other, perhaps unintended signature algorithms. The signature algorithm bound to the delegated credential can be chosen independantly of the set of signature algorithms supported by the end-entity certificate. 3.2."} {"id": "q-en-tls-subcerts-751d08c61aa4871f9a7de48d9e41348fb02eec4b35deb95a5ff81b4ccfb6d360", "old_text": "The initiator uses information from the peer's certificate to verify the delegated credential and that the peer is asserting an expected identity, determining an authentication result for the peer.. Peers accepting the delegated credential use it as the certificate key for the (D)TLS handshake.", "comments": "Need to address and directorate reviews.\nCorrection ARTART editorial are addressed via and .\nand .\nAlso noted in ARTART review. Reviewer: Christian Ams\u00fcss Review result: Ready with Nits Thanks for this well-written document ART topics: The document does not touch on any of the typical ART review issues; times are relative in well understood units, and versioning, formal language (ASN.1, which is outside of my experience to check) and encoding infrastructure (struct) follows TLS practices. General comments: The introduction of this mechanism gives the impression of a band-aid applied to a PKI ecosystem that has accumulated many limitations as outlined in section The present solution appears good, but if there is ongoing work on the underlying issues (even experimentally), I'd appreciate a careful reference to it. Section 7.6 hints at the front end querying the back-end for creation of new DCs -- other than that, DC distribution (neither push- nor pull-based) is discussed. If there are any mechanisms brewing, I'd appreciate a reference as well. 
Please check: The IANA considerations list \"delegated_credential\" for CH, CR and CT messages. I did not find a reference in the text for Ct, only for CH and CR. Editorial comments: (p5) \"result for the peer..\" -- extraneous period. (p9, p15, p16) The \"7 days\" are introduced as the default for a profilable prarameter, but later used without further comment.\nReviewer: Elwyn Davies Review result: Ready with Nits I am the assigned Gen-ART reviewer for this draft. The General Area Review Team (Gen-ART) reviews all IETF documents being processed by the IESG for the IETF Chair. Please treat these comments just like any other last call comments. For more information, please see the FAQ at . Document: draft-ietf-tls-subcerts-?? Reviewer: Elwyn Davies Review Date: 2022-04-08 IETF LC End Date: 2022-04-08 IESG Telechat date: Not scheduled for a telechat Summary: Ready with nits. Just a few editrial level nits. Major issues: None Minor issues: None. Nits/editorial comments: Abstract: The exact form of the abbreviation (D)TLS is not in the set of well-known abbreviations. I assume it is supposed to mean DTLS or TLS - This ought to be expanded on first use. Abstract: s/mechanism to to/mechanism to/ s1, para 2: CA is used before its expansion in para 3. s1, next to last para: \"this document proposes\" Hopefully when it becomes an RFC it will do more than propose. Suggest \"this document introduces\". s1, next to last para: \"to speak for names\" sounds a bit anthropomorphic to me, but I can't think of a simple alternative word. s1, last para: s/We will refer/This document refers/ [Not an academic paper!] s3.1, 2nd bullet: s/provide are not necessary/provide is not necessary/ s4, definition of expectedcertverify_algorithm: \" Only signature algorithms allowed for use in CertificateVerify message are allowed.\" Does this need a reference to the place where the list of such algorithms is recorded? s4.1.1 and s4.1.2: In s4.1.1: \"the client SHOULD ignore delegated credentials sent as extensions to any other certificate.\" I would have though this ought to be a MUST. There is an equivalent in s4.1.2. I am not sure what the client/server might do if it doesn't ignore the DC. s4.1.3, para 1: s/same way that is done/same way that it is done/ s4.2, para 1: s/We define/This docuent defines/ sS/s5.1: RFC conventions prefer not to have sections empty of text: Add something like: \"The following operational consideration should be taken into consideration when using Delegated Certificates:\"", "new_text": "The initiator uses information from the peer's certificate to verify the delegated credential and that the peer is asserting an expected identity, determining an authentication result for the peer. Peers accepting the delegated credential use it as the certificate key for the (D)TLS handshake."} {"id": "q-en-tls13-spec-ad42bcb6cbfe59958e0b09f5856b1e7e8a7e10730ae8b56d5d17ba62fe9eea24", "old_text": "the following additional information MUST be provisioned to both parties: The cipher suite for use with this PSK The Application-Layer Protocol Negotiation (ALPN) protocol, if any", "comments": "The 0-RTT key might differ between TLS versions (as demonstrated with the draft -20 changes). 
Be explicit about storing this version number since section 4.2.9 requires this information too.", "new_text": "the following additional information MUST be provisioned to both parties: The TLS version number for use with this PSK The cipher suite for use with this PSK The Application-Layer Protocol Negotiation (ALPN) protocol, if any"} {"id": "q-en-tls13-spec-f8cb8c7145d2274dd6f0bf7518d016f5f84b51c60645349ba1256e91fedf56a1", "old_text": "draft-04 Remove renegotiation. draft-03", "comments": "It has a proper definition in appendix A.4. Note though that RFC 4507 section 7 has allocated this value for the session_ticket/NewSessionTicket handshake type. This point may be moot if RFC 4507 has been integrated with TLS 1.3, in which case RFC 4507 should be added to the list of obsoleted RFCs.\nThe handshake packets in TLS 1.2 and earlier are numbered so that the HandshakeType for each of client and server is strictly increasing when packets arrive in correct order, and HandshakeType/10 indicates the flight number. This allows for using a simpler state-machine. The new handshake packets {server,client}keyshare break this convention by using numbers in the flight 2 range (17 & 18), while belonging in flight 1. This suggestion is to renumber them to eg clientkeyshare(5) and serverkeyshare(6)", "new_text": "draft-04 Renumber the new handshake messages to be somewhat more consistent with existing convention and to remove a duplicate registration. Remove renegotiation. draft-03"} {"id": "q-en-tls13-spec-0feb1dcbb5e7a4b2fdaf19836a86d210717ff0408404aa442737959fb604a610", "old_text": "\"pre_shared_key\" is REQUIRED for PSK key agreement. A client is considered to be attempting to negotiate using this specification if the ClientHello contains a \"supported_versions\" extension with 0x0304 as the highest version number contained in its", "comments": "\"A client MUST provide a 'pskkeyexchangemodes' extension if it offers a 'preshared_key' extension.\"", "new_text": "\"pre_shared_key\" is REQUIRED for PSK key agreement. \"psk_key_exchange_modes\" is REQUIRED for PSK key agreement. A client is considered to be attempting to negotiate using this specification if the ClientHello contains a \"supported_versions\" extension with 0x0304 as the highest version number contained in its"} {"id": "q-en-tls13-spec-4f312a1bf92d88293c9b8119dc6fdb1ed2f006fbd29c58ecc5a46655be3c3a1e", "old_text": "rsa_pss_rsae_sha256, rsa_pss_rsae_sha384, rsa_pss_rsae_sha512, rsa_pss_pss_sha256, rsa_pss_pss_sha384, rsa_pss_pss_sha512, and ed25519. ", "comments": "It appears we should have included rules for populating a registry for the pskkeyexchange_modes extension.\nYes, an obvious oversight in our oversite. :) I approve.\n+1\nFWIW, +1\n+1 to a new registry.\nYes, we should create a registry. But: We need to update the \"this document defines a new registry\" to match the new number of registries (two) the space of allowed values is only one byte, so we don't need to talk about \"first byte\"", "new_text": "rsa_pss_rsae_sha256, rsa_pss_rsae_sha384, rsa_pss_rsae_sha512, rsa_pss_pss_sha256, rsa_pss_pss_sha384, rsa_pss_pss_sha512, and ed25519. TLS PskKeyExchangeMode Registry: Values with the first byte in the range 0-253 (decimal) are assigned via Specification Required RFC8126. Values with the first byte 254 or 255 (decimal) are reserved for Private Use RFC8126. This registry SHALL have a \"Recommended\" column. The registry [shall be/ has been] initially populated psk_ke (0) and psk_dhe_ke (1). Both SHALL be marked as \"Recommended\". 
"} {"id": "q-en-tls13-spec-f0a2eef6f6a3ba7192d80421ec09fe991ff4f3289dc0997fa6b4ebb45dff721d", "old_text": "over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery. 1. RFC EDITOR: PLEASE REMOVE THE FOLLOWING PARAGRAPH The source for this", "comments": "The I-D checklist requires that updates and obsoletes be included in the abstract; thought RFC. I personally feel this is [redacted] stupid, but I do not think that we should risk a process appeal to fight this battle.", "new_text": "over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery. This document updates RFCs 4492, 5705, and 6066 and it obsoletes RFCs 5077, 5246, and 6961. 1. RFC EDITOR: PLEASE REMOVE THE FOLLOWING PARAGRAPH The source for this"} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "an editorial update. It contains updated text in areas which were found to be unclear as well as other editorial improvements. In addition, it removes the use of the term \"master\" as applied to secrets in favor of the term \"main\". 1.3.", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "an editorial update. It contains updated text in areas which were found to be unclear as well as other editorial improvements. In addition, it removes the use of the term \"master\" as applied to secrets in favor of the term \"main\" or shorter names where no term was neccessary. 1.3."} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "depend on the ServerHello and therefore has weaker guarantees. This is especially relevant if the data is authenticated either with TLS client authentication or inside the application protocol. The same warnings apply to any use of the early_exporter_main_secret. 0-RTT data cannot be duplicated within a connection (i.e., the server will not process the same data twice for the same connection), and an", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. 
For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "depend on the ServerHello and therefore has weaker guarantees. This is especially relevant if the data is authenticated either with TLS client authentication or inside the application protocol. The same warnings apply to any use of the early_exporter_secret. 0-RTT data cannot be duplicated within a connection (i.e., the server will not process the same data twice for the same connection), and an"} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "At any time after the server has received the client Finished message, it MAY send a NewSessionTicket message. This message creates a unique association between the ticket value and a secret PSK derived from the resumption main secret (see cryptographic- computations). The client MAY use this PSK for future handshakes by including the", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "At any time after the server has received the client Finished message, it MAY send a NewSessionTicket message. This message creates a unique association between the ticket value and a secret PSK derived from the resumption secret (see cryptographic- computations). The client MAY use this PSK for future handshakes by including the"} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "server implementation declines all PSK identities with different SNI values, these two values are always the same. Note: Although the resumption main secret depends on the client's second flight, a server which does not request certificate-based client authentication MAY compute the remainder of the transcript independently and then send a NewSessionTicket immediately upon sending its Finished rather than waiting for the client Finished. This might be appropriate in cases where the client is expected to", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? 
For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "server implementation declines all PSK identities with different SNI values, these two values are always the same. Note: Although the resumption ecret depends on the client's second flight, a server which does not request certificate-based client authentication MAY compute the remainder of the transcript independently and then send a NewSessionTicket immediately upon sending its Finished rather than waiting for the client Finished. This might be appropriate in cases where the client is expected to"} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "In this version of TLS 1.3, the two input secrets are: PSK (a pre-shared key established externally or derived from the resumption_main_secret value from a previous connection) (EC)DHE shared secret (ecdhe-shared-secret-calculation)", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "In this version of TLS 1.3, the two input secrets are: PSK (a pre-shared key established externally or derived from the resumption_secret value from a previous connection) (EC)DHE shared secret (ecdhe-shared-secret-calculation)"} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "will still be HKDF-Extract(0, 0). For the computation of the binder_key, the label is \"ext binder\" for external PSKs (those provisioned outside of TLS) and \"res binder\" for resumption PSKs (those provisioned as the resumption main secret of a previous handshake). The different labels prevent the substitution of one type of PSK for the other. There are multiple potential Early Secret values, depending on which PSK the server ultimately selects. The client will need to compute", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? 
For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "will still be HKDF-Extract(0, 0). For the computation of the binder_key, the label is \"ext binder\" for external PSKs (those provisioned outside of TLS) and \"res binder\" for resumption PSKs (those provisioned as the resumption secret of a previous handshake). The different labels prevent the substitution of one type of PSK for the other. There are multiple potential Early Secret values, depending on which PSK the server ultimately selects. The client will need to compute"} {"id": "q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223", "old_text": "The exporter value is computed as: Where Secret is either the early_exporter_main_secret or the exporter_main_secret. Implementations MUST use the exporter_main_secret unless explicitly specified by the application. The early_exporter_main_secret is defined for use in settings where an exporter is needed for 0-RTT data. A separate interface for the early exporter is RECOMMENDED; this avoids the exporter user accidentally using an early exporter when a regular one is desired or vice versa. If no context is provided, the context_value is zero length. Consequently, providing no context computes the same value as", "comments": "While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)", "new_text": "The exporter value is computed as: Where Secret is either the early_exporter_secret or the exporter_secret. Implementations MUST use the exporter_secret unless explicitly specified by the application. The early_exporter_secret is defined for use in settings where an exporter is needed for 0-RTT data. A separate interface for the early exporter is RECOMMENDED; this avoids the exporter user accidentally using an early exporter when a regular one is desired or vice versa. If no context is provided, the context_value is zero length. 
Consequently, providing no context computes the same value as"} {"id": "q-en-tls13-spec-95a19637583515c674662c0d499e929e20b345c450321c771dc62d62de2d8537", "old_text": "an analysis of these limits under the assumption that the underlying primitive (AES or ChaCha20) has no weaknesses. Implementations SHOULD do a key update as described in key-update prior to reaching these limits. For AES-GCM, up to 2^24.5 full-size records (about 24 million) may be encrypted on a given connection while keeping a safety margin of", "comments": "The reason for the SHOULD is that the current text says: do a key update as described in {{key-update}} prior to reaching these limits. Note that it is not possible to perform a KeyUpdate for early data and therefore implementations SHOULD not exceed the limits when sending early data. We could of course change both.\nDTLS 1.3 introduces integrity limits for the AEAD algorithms, which is good. TLS 1.3 does not have any AEAD integrity limits, probably because the assumption that only a single AEAD forgery can be attempted. My understanding is that this is not true for 0-RTT data which works more like DTLS and where an attacker can attempt any number of forgery attempts on a single AEAD key. RFC 8446 has relatively strong anti-replay protection: \"The server MUST ensure that any instance of it (be it a machine, a thread, or any other entity within the relevant serving infrastructure) would accept 0-RTT for the same 0-RTT handshake at most once.\" but this does not stop an attacker from attempting an unlimited number of forgeries. I think consideration / mitigation of this is missing from RFC8446. Some alternative suggestions: Should the 0-RTT requirement above be strengthen to: \"The server MUST ensure that any instance of it (be it a machine, a thread, or any other entity within the relevant serving infrastructure) would process a 0-RTT handshake with the same PSK, URL pair at most once.\" should RFC8446bis continue to ignore AEAD limits for 0-RTT? 0-RTT data already has lower security (weaker replay protection)? In this case I think some considerations should be added describing that the integrity protection of is weaker than for normal application data. should RFC8446bis introduce DTLS 1.3 type integrity limits for clientearlytraffic_secret? This aligns the integrity limits of 0-RTT data in TLS with the normal application data in DTLS.\nCould you say a bit more about how this would work in practice? I did not consult the spec just now, but I had the impression that 0-RTT still had only a single attempt at forgery (at least, per replay cache instance/backend server).\nAh, the linked DTLS issue mentions long-lived external PSKs specifically. I agree that our story for those is less well specified and probably not great. Off the top of my head, I would suggest \"don't use 0-RTT with long-lived PSKs\".\nI think that the original is fine. I don't think that there is any meaningful distinction between 0-RTT handshake and the (PSK, URL) tuple, at least as far as the derived keys go. Change the PSK, PSK identity, or random and you get a new key derivation. I had expected that early data was subject to the same constraints as later epochs. The only caveat being that it is impossible to update early data keys using KeyUpdate. The answer is to finish the handshake of course, so a hard error is fine. We might advise servers to cap maxearlydata_size to help avoid any issues of overrun, though with certain perverse padding arrangements, that might not provide any real guarantee. 
As above integrity limits should also apply.\nURL fixes (1) The case for (3) is actually more subtle, as the problem is not clientearlytraffic_secret but rather the trial decryption skipping phase. We might need to apply DTLS 1.3 like limits there.", "new_text": "an analysis of these limits under the assumption that the underlying primitive (AES or ChaCha20) has no weaknesses. Implementations SHOULD do a key update as described in key-update prior to reaching these limits. Note that it is not possible to perform a KeyUpdate for early data and therefore implementations SHOULD not exceed the limits when sending early data. For AES-GCM, up to 2^24.5 full-size records (about 24 million) may be encrypted on a given connection while keeping a safety margin of"} {"id": "q-en-tls13-spec-db48cce3a9d880a60914093875d798c306f2a5d810bc7d3e47190a89cdda99a5", "old_text": "signature is represented as a DER-encoded X690 ECDSA-Sig-Value structure as defined in RFC4492. Indicates a signature algorithm using RSASSA-PSS RFC8017 with mask generation function 1. The digest used in the mask generation function and the digest being signed are both the corresponding hash algorithm as defined in SHS. The length of the Salt MUST be equal to the length of the output of the digest algorithm. If the public key is carried in an X.509 certificate, it MUST use the rsaEncryption OID RFC5280. Indicates a signature algorithm using EdDSA as defined in RFC8032 or its successors. Note that these correspond to the \"PureEdDSA\" algorithms and not the \"prehash\" variants. Indicates a signature algorithm using RSASSA-PSS RFC8017 with mask generation function 1. The digest used in the mask generation function and the digest being signed are both the corresponding hash algorithm as defined in SHS. The length of the Salt MUST be equal to the length of the digest algorithm. If the public key is carried in an X.509 certificate, it MUST use the RSASSA-PSS OID RFC5756. When used in certificate signatures, the algorithm parameters MUST be DER encoded. If the corresponding public key's parameters are present, then the parameters in the signature MUST be identical to those in the public key. Indicates algorithms which are being deprecated because they use algorithms with known weaknesses, specifically SHA-1 which is used", "comments": "MGF1 got unhelpfully spelled out as \"mask generation function 1\" in RFC 8446. I'm guessing this is the result of an acronym expansion editting pass, but the function is simply called \"MGF1\". There is no such thing as \"mask generation function 1\". Correct this back to MGF1. In hopes this doesn't get hit the same editing pass later, I've tweaked the wording and moved the citation so that \"mask generation function\" is still uttered before MGF1, and it's clearer that both primitives are defined in RFC 8017.", "new_text": "signature is represented as a DER-encoded X690 ECDSA-Sig-Value structure as defined in RFC4492. Indicates a signature algorithm using RSASSA-PSS with a mask generation function of MGF1, as defined in RFC8017. The digest used in MGF1 and the digest being signed are both the corresponding hash algorithm as defined in SHS. The length of the Salt MUST be equal to the length of the output of the digest algorithm. If the public key is carried in an X.509 certificate, it MUST use the rsaEncryption OID RFC5280. Indicates a signature algorithm using EdDSA as defined in RFC8032 or its successors. Note that these correspond to the \"PureEdDSA\" algorithms and not the \"prehash\" variants. 
Indicates a signature algorithm using RSASSA-PSS with a mask generation function of MGF1, as defined in RFC8017. The digest used in MGF1 and the digest being signed are both the corresponding hash algorithm as defined in SHS. The length of the Salt MUST be equal to the length of the digest algorithm. If the public key is carried in an X.509 certificate, it MUST use the RSASSA-PSS OID RFC5756. When used in certificate signatures, the algorithm parameters MUST be DER encoded. If the corresponding public key's parameters are present, then the parameters in the signature MUST be identical to those in the public key. Indicates algorithms which are being deprecated because they use algorithms with known weaknesses, specifically SHA-1 which is used"} {"id": "q-en-tls13-spec-f9157afb6e7c93789082a8cba0743e50ee96a3e3308333a51ea251e12cf0e504", "old_text": "shown below. The derivation process is as follows, where L denotes the length of the underlying hash function for HKDF RFC5869. The traffic keys are computed from xSS, xES, and the master_secret as described in traffic-key-calculation below.", "comments": "Just a few things I noticed that I think make it a little bit easier to follow. Name resumption_secret consistently when used Show full legend for each handshake flow Simplify/clarify secret sources table a bit and make text cases consistent The resumption secret naming was also apparently noticed by NAME in issue . (Edited to note: this PR has been pruned down to a smaller list of changes; see commit message)\nOk, to respond to comments I've pushed a new commit. RE: capitalization: Upper or lower is fine, as long as it's consistent. I've switched it to all upper (title case) instead of all lower. RE: legend repetition: If you're just reading one section, e.g. got to it via TOC, you might not have seen the legend (recently). I've made it less wordy now and also added a line to indicate \"+\" is for extensions, as I realized that wasn't necessarily clear. (and added a \"+\" where the notation was inconsistent) We do have an extension that was previously a message, now. The basic point is consistency; make each handshake flow make sense on its own. I think repeating the legend is helpful, but if you really don't want to repeat it that I can cut this down to just the little stuff and table. Showing \"N/A\" instead of repeating things in the table is the most helpful part here, I think. It states that SS=ES in some circumstances elsewhere, but the table isn't clear that these are the exact same values, especially because the extracted ones are not.\nEdited again and repushed for some more naming consistency.\nRevised & rebased. Of note, I think these changes to the table make it far easier to see what's going on here.\nNAME If you're going to be rewriting this, I'll just set those parts aside. The commit here is now just the little stuff: fix the spacing, add a line stating what SS/ES are in the list of derivations, and fix \"shared_secret\" (there's no such value named that; just drop the underscore).\nNAME commit updated with fixes for your comments above\nThe key derivation process uses the \"master secret\" as the basis for application traffic keys and both the resumption and exporter secrets. There are lots of other secrets though. I don't have a better name right now, so close this if none better can be found. Also s/resumption master secret/resumption secret/ throughout, it's a little inconsistent.\nNow master of more", "new_text": "shown below. 
The derivation process is as follows, where L denotes the length of the underlying hash function for HKDF RFC5869. SS and ES denote the sources from the table above. Whilst SS and ES may be the same in some cases, the extracted xSS and xES will not. The traffic keys are computed from xSS, xES, and the master_secret as described in traffic-key-calculation below."} {"id": "q-en-tls13-spec-f9157afb6e7c93789082a8cba0743e50ee96a3e3308333a51ea251e12cf0e504", "old_text": "7.2.2. A conventional Diffie-Hellman computation is performed. The negotiated key (Z) is used as the shared_secret, and is used in the key schedule as specified above. Leading bytes of Z that contain all zero bits are stripped before it is used as the input to HKDF.", "comments": "Just a few things I noticed that I think make it a little bit easier to follow. Name resumption_secret consistently when used Show full legend for each handshake flow Simplify/clarify secret sources table a bit and make text cases consistent The resumption secret naming was also apparently noticed by NAME in issue . (Edited to note: this PR has been pruned down to a smaller list of changes; see commit message)\nOk, to respond to comments I've pushed a new commit. RE: capitalization: Upper or lower is fine, as long as it's consistent. I've switched it to all upper (title case) instead of all lower. RE: legend repetition: If you're just reading one section, e.g. got to it via TOC, you might not have seen the legend (recently). I've made it less wordy now and also added a line to indicate \"+\" is for extensions, as I realized that wasn't necessarily clear. (and added a \"+\" where the notation was inconsistent) We do have an extension that was previously a message, now. The basic point is consistency; make each handshake flow make sense on its own. I think repeating the legend is helpful, but if you really don't want to repeat it that I can cut this down to just the little stuff and table. Showing \"N/A\" instead of repeating things in the table is the most helpful part here, I think. It states that SS=ES in some circumstances elsewhere, but the table isn't clear that these are the exact same values, especially because the extracted ones are not.\nEdited again and repushed for some more naming consistency.\nRevised & rebased. Of note, I think these changes to the table make it far easier to see what's going on here.\nNAME If you're going to be rewriting this, I'll just set those parts aside. The commit here is now just the little stuff: fix the spacing, add a line stating what SS/ES are in the list of derivations, and fix \"shared_secret\" (there's no such value named that; just drop the underscore).\nNAME commit updated with fixes for your comments above\nThe key derivation process uses the \"master secret\" as the basis for application traffic keys and both the resumption and exporter secrets. There are lots of other secrets though. I don't have a better name right now, so close this if none better can be found. Also s/resumption master secret/resumption secret/ throughout, it's a little inconsistent.\nNow master of more", "new_text": "7.2.2. A conventional Diffie-Hellman computation is performed. The negotiated key (Z) is used as the shared secret, and is used in the key schedule as specified above. 
Leading bytes of Z that contain all zero bits are stripped before it is used as the input to HKDF."} {"id": "q-en-tls13-spec-f59833485b57c11f408e034826dc7644741d95081f93a94aca5a295e14f30a6b", "old_text": "tls-resumption-psk shows a pair of handshakes in which the first establishes a PSK and the second uses it: Note that the client supplies a ClientKeyShare to the server as well, which allows the server to decline resumption and fall back to a full handshake. However, because the server is authenticating via a PSK, it does not send a Certificate or a CertificateVerify. PSK-based resumption cannot be used to provide a new ServerConfiguration. The contents and significance of each message will be presented in detail in the following sections.", "comments": "The current wording at the end of the section on resumption and PSK should be changed to indicate that if a server declines resumption then a full handshake, including the server Certificate and CertificateVerify messages, should take place. In its current state, the comment on omission of these messages is not disjoint from the statement regarding the server's rejection of resumption.", "new_text": "tls-resumption-psk shows a pair of handshakes in which the first establishes a PSK and the second uses it: As the server is authenticating via a PSK, it does not send a Certificate or a CertificateVerify. PSK-based resumption cannot be used to provide a new ServerConfiguration. Note that the client supplies a ClientKeyShare to the server as well, which allows the server to decline resumption and fall back to a full handshake. The contents and significance of each message will be presented in detail in the following sections."} {"id": "q-en-tls13-spec-f5fb7e5ce24d1198f54331a2fe37fa29b9ff08ca1df4e5c30019f5770290185f", "old_text": "send a Certificate message and a CertificateVerify message, even if the \"known_configuration\" extension was used for this handshake, thus requiring a signature over the configuration before it can be used by the client. 6.3.8.", "comments": "LGTM\nHubert Karlo writes: And how does the client know that the algorithms came from the server. We should have a \"client MUST wait for the full handshake to finish before recording this information\" or we will have a very nice cipher downgrade. Just having it signed is likely not a good idea, as they may depend on ciphersuites advertised by client.\nSince the ServerConfiguration and the suites are covered by the signature the server provides, isn't it safe to use the configuration add soon as you have verified the server identity?\nn.b., I have no problem with mandating that clients wait for finished, because it is immediately after the signature, but I wasn't the properties to be clear.", "new_text": "send a Certificate message and a CertificateVerify message, even if the \"known_configuration\" extension was used for this handshake, thus requiring a signature over the configuration before it can be used by the client. Clients MUST not rely on the ServerConfiguration message until successfully receiving and processing the server's Certificate, CertificateVerify, and Finished. If there is a failure in processing those messages, the client MUST discard the ServerConfiguration. 6.3.8."} {"id": "q-en-tls13-spec-02dfe179a4b9929ef3b92ab6ca7bfe1c534cc8229ae00f2933bc51f4096cc74a", "old_text": "close a connection does not prohibit a session from being resumed. This message notifies the recipient that the sender will not send any more messages on this connection. 
Any data received after a closure MUST be ignored. This message is sent by the client to indicate that all 0-RTT application_data messages have been transmitted and that the next message will be handshake message protected with the 1-RTT handshake keys. This alert MUST be at the warning level. Servers MUST NOT send this message and clients receiving this message MUST terminate the connection with an \"unexpected_message\" alert. This message notifies the recipient that the sender is canceling the handshake for some reason unrelated to a protocol failure. If a user cancels an operation after the handshake is complete, just closing the connection by sending a \"close_notify\" is more appropriate. This alert SHOULD be followed by a \"close_notify\". This alert is generally a warning.", "comments": "NAME did a massive message->alert change a while back and I did yet more when we were rebasing his patch. Looks like there's more up here in Closure Alerts. We're referring to alerts, not the messages that carry them. Switch to \"alerts\" instead to remain consistent with the rest of the document, notably the following section.", "new_text": "close a connection does not prohibit a session from being resumed. This alert notifies the recipient that the sender will not send any more messages on this connection. Any data received after a closure MUST be ignored. This alert is sent by the client to indicate that all 0-RTT application_data messages have been transmitted and that the next message will be handshake message protected with the 1-RTT handshake keys. This alert MUST be at the warning level. Servers MUST NOT send this alert and clients receiving this alert MUST terminate the connection with an \"unexpected_message\" alert. This alert notifies the recipient that the sender is canceling the handshake for some reason unrelated to a protocol failure. If a user cancels an operation after the handshake is complete, just closing the connection by sending a \"close_notify\" is more appropriate. This alert SHOULD be followed by a \"close_notify\". This alert is generally a warning."} {"id": "q-en-tls13-spec-0d1ecfd253663e2b7c32cc4d9351735475d67e8b167e3441c175afdb867ce63a", "old_text": "ServerConfiguration format. The semantics of this message are to establish a shared state between the client and server for use with the \"known_configuration\" extension with the key specified in key and with the handshake parameters negotiated by this handshake. When the ServerConfiguration message is sent, the server MUST also send a Certificate message and a CertificateVerify message, even if the \"known_configuration\" extension was used for this handshake, thus requiring a signature over the configuration before it can be used by the client. Clients MUST NOT rely on the ServerConfiguration message until successfully receiving and processing the server's Certificate,", "comments": "Cleaned up references to knownconfiguration extension (merged into earlydata in draft-08)", "new_text": "ServerConfiguration format. The semantics of this message are to establish a shared state between the client and server for use with the \"early_data\" extension with the key specified in \"static_key_share\" and with the handshake parameters negotiated by this handshake. 
When the ServerConfiguration message is sent, the server MUST also send a Certificate message and a CertificateVerify message, even if the \"early_data\" extension was used for this handshake, thus requiring a signature over the configuration before it can be used by the client. Clients MUST NOT rely on the ServerConfiguration message until successfully receiving and processing the server's Certificate,"} {"id": "q-en-tls13-spec-ad2c22035251401aafa73efa89531db44a8ca2a1347629b85b5bac2dc73063c6", "old_text": "\"supported_signature_algorithms\" value: %%% Signature Algorithm Extension enum { // RSASSA-PKCS-v1_5 algorithms. rsa_pkcs1_sha1 (0x0201), rsa_pkcs1_sha256 (0x0401), rsa_pkcs1_sha384 (0x0501), rsa_pkcs1_sha512 (0x0601), Note: This production is named \"SignatureScheme\" because there is", "comments": "Minor editorial tweaks. I just noticed the extra line that gets left in there when shown up top (two lines between first two shown blocks, instead of one). This fixes that. Also drops the periods after the comments, as they're not sentences.", "new_text": "\"supported_signature_algorithms\" value: %%% Signature Algorithm Extension enum { // RSASSA-PKCS-v1_5 algorithms rsa_pkcs1_sha1 (0x0201), rsa_pkcs1_sha256 (0x0401), rsa_pkcs1_sha384 (0x0501), rsa_pkcs1_sha512 (0x0601), Note: This production is named \"SignatureScheme\" because there is"} {"id": "q-en-tls13-spec-9925d690b4ea6ead3bb041a6db8a7dbc081093beecbf815e9fad3fe759bcd5c4", "old_text": "described in Section 2.1 of RFC5116. The key is either the client_write_key or the server_write_key. The type field is identical to TLSPlaintext.type. The version field is identical to TLSPlaintext.version. The length (in bytes) of the following TLSCiphertext.fragment. The length MUST NOT exceed 2^14 + 2048. The AEAD encrypted form of TLSPlaintext.fragment. Each AEAD cipher suite MUST specify how the nonce supplied to the AEAD operation is constructed, and what is the length of the GenericAEADCipher.nonce_explicit part. In many cases, it is appropriate to use the partially implicit nonce technique described in Section 3.2.1 of RFC5116; with record_iv_length being the length of the explicit part. In this case, the implicit part SHOULD be derived from key_block as client_write_iv and server_write_iv (as described in key-calculation), and the explicit part is included in GenericAEAEDCipher.nonce_explicit. The plaintext is the TLSPlaintext.fragment.", "comments": "The draft was in a sort of halfway state between whether TLSCiphertext has a fragment that is a selection or just embedding the AEAD elements directly. This should normalize it to the embedded view without the extra GenericAEADCipher struct definition. (i don't think this changes the logic of the draft at all)", "new_text": "described in Section 2.1 of RFC5116. The key is either the client_write_key or the server_write_key. The type field is identical to TLSPlaintext.type. The version field is identical to TLSPlaintext.version. The length (in bytes) of the following TLSCiphertext.fragment. The length MUST NOT exceed 2^14 + 2048. The AEAD encrypted form of TLSPlaintext.fragment. Each AEAD cipher suite MUST specify how the nonce supplied to the AEAD operation is constructed, and what is the length of the TLSCiphertext.nonce_explicit part. In many cases, it is appropriate to use the partially implicit nonce technique described in Section 3.2.1 of RFC5116; with record_iv_length being the length of the explicit part. 
In this case, the implicit part SHOULD be derived from key_block as client_write_iv and server_write_iv (as described in key-calculation), and the explicit part is included in GenericAEAEDCipher.nonce_explicit. The plaintext is the TLSPlaintext.fragment."} {"id": "q-en-tls13-spec-3a830cb07d7819cdd967eb6311d6298ae1da33a4a39b46c139f2a9beb34f0508", "old_text": "The server's key exchange parameters. For non-anonymous key exchanges, a signature over the server's key exchange parameters. If the client has offered the \"signature_algorithms\" extension, the signature algorithm and hash algorithm MUST be a pair listed in that", "comments": "This change clarifies that the signature over ServerDHParams data is a signature over the hash of the client + server hello random random data and the ServerDHParams data. I believe this small regression was introduced in TLS1.2 with the result that the TLS1.2 RFC (and current TLS1.3 draft) does not contain sufficient detail to implement TLS + DHE. All TLS1.2 clients and servers that I have tested do include the hello random data (and omitting it would leave implementations open to replay attacks).\nThanks for the PR. Note that the formal PDU definitions here appear to be correct, so there should be enough information to implement things correctly: I agree it wouldn't hurt to have the explanatory text be clearer. However, in TLS 1.2 we stopped explicitly saying that things were hashed before signing (because we are treating signing as a primitive), so we probably don't want to say \"hash\". Maybe just say a signature over the random values and the params?\nAh, this explains my confusion. After reading section 7.4.1.4.1, I formed the opposite impression about signing; that hashing and signing are explicitly different primitive operations and may be composed in a variety of ways. But I see how that \"digitally-signed\" seems to be a reserved term that implies both operations.\nI'm not trying to discourage you here. If you were confused, others will probably be as well, so if you have some text that you think would clear this up, it would be welcome.\nThanks! I've updated the PR with an even simpler diff that I think would have made me less confused when I got to it.", "new_text": "The server's key exchange parameters. For non-anonymous key exchanges, a signature over the client and server hello random data and the server's key exchange parameters. If the client has offered the \"signature_algorithms\" extension, the signature algorithm and hash algorithm MUST be a pair listed in that"} {"id": "q-en-tls13-spec-ab60678e9dfba6b293e78551c17c4190e87e03fd3cd5880bded7497cc96d3f3e", "old_text": "1.2. draft-14 - Allow cookies to be longer. draft-13", "comments": "Lack of a blank line after \"draft-14\" but before the bullet point causes it to render all on one line: URL This is just a quick patch to make the spacing consistent because apparently Markdown (or, at least whatever is interpreting it there) can be picky. ;) This is just a whitespace fix.", "new_text": "1.2. draft-14 Allow cookies to be longer. draft-13"} {"id": "q-en-tls13-spec-a2240ec46c85735d8cd15e17a8979efeafaa72d68c0533fca7b5bd04c6354985", "old_text": "6. One of the content types supported by the TLS record layer is the alert type. Alert messages convey the severity of the message (warning or fatal) and a description of the alert. Alert messages with a level of fatal result in the immediate termination of the connection. 
Like other messages, alert messages are encrypted as specified by the current connection state. %%% Alert Messages 6.1.", "comments": "NAME NAME PTAL\nLGTM\nlgtm\nDon't forget no_certificate.\nNot relevant: nocertificateRESERVED(41), / fatal / , Martin Thomson EMAIL wrote:\nRight. I just noticed that we send an empty certificate instead.", "new_text": "6. One of the content types supported by the TLS record layer is the alert type. Like other messages, alert messages are encrypted as specified by the current connection state. Alert messages convey the severity of the message (warning or fatal) and a description of the alert. Warning-level messages are used to indicate orderly closure of the connection (see closure-alerts). Upon receiving a warning-level alert, the TLS implementation SHOULD indicate end-of-data to the application and, if appropriate for the alert type, send a closure alert in response. Fatal-level messages are used to indicate abortive closure of the connection (See error-alerts). Upon receiving a fatal-level alert, the TLS implementation SHOULD indicate an error to the application and MUST NOT allow any further data to be sent or received on the connection. Servers and clients MUST forget keys and secrets associated with a failed connection. Stateful implementations of session tickets (as in many clients) SHOULD discard tickets associated with failed connections. All the alerts listed in error-alerts MUST be sent as fatal and MUST be treated as fatal regardless of the AlertLevel in the message. Unknown alert types MUST be treated as fatal. %%% Alert Messages 6.1."} {"id": "q-en-tls13-spec-a2240ec46c85735d8cd15e17a8979efeafaa72d68c0533fca7b5bd04c6354985", "old_text": "6.2. Error handling in TLS is very simple. When an error is detected, the detecting party sends a message to its peer. Upon transmission or receipt of a fatal alert message, both parties immediately close the connection. Servers and clients MUST forget keys, and secrets associated with a failed connection. Stateful implementations of session tickets (as in many clients) SHOULD discard tickets associated with failed connections. Whenever an implementation encounters a condition which is defined as a fatal alert, it MUST send the appropriate alert prior to closing the connection. For all errors where an alert level is not explicitly specified, the sending party MAY determine at its discretion whether to treat this as a fatal error or not. If the implementation chooses to send an alert but intends to close the connection immediately afterwards, it MUST send that alert at the fatal alert level. If an alert with a level of warning is sent and received, generally the connection can continue normally. If the receiving party decides not to proceed with the connection (e.g., after having received a \"user_canceled\" alert that it is not willing to accept), it SHOULD send a fatal alert to terminate the connection. Given this, the sending peer cannot, in general, know how the receiving party will behave. Therefore, warning alerts are not very useful when the sending party wants to continue the connection, and thus are sometimes omitted. For example, if a party decides to accept an expired certificate (perhaps after confirming this with the user) and wants to continue the connection, it would not generally send a \"certificate_expired\" alert. 
The following error alerts are defined:", "comments": "NAME NAME PTAL\nLGTM\nlgtm\nDon't forget no_certificate.\nNot relevant: nocertificateRESERVED(41), / fatal / , Martin Thomson EMAIL wrote:\nRight. I just noticed that we send an empty certificate instead.", "new_text": "6.2. Error handling in the TLS Handshake Protocol is very simple. When an error is detected, the detecting party sends a message to its peer. Upon transmission or receipt of a fatal alert message, both parties immediately close the connection. Whenever an implementation encounters a condition which is defined as a fatal alert, it MUST send the appropriate alert prior to closing the connection. The following error alerts are defined:"} {"id": "q-en-tls13-spec-f79a09e09ad644ff353717f930a2734942cc1f35a914e9daf1a347d00445c0d4", "old_text": "Upon transmission or receipt of a fatal alert message, both parties immediately close the connection. Whenever an implementation encounters a condition which is defined as a fatal alert, it MUST send the appropriate alert prior to closing the connection. The following error alerts are defined: An inappropriate message was received. This alert is always fatal and should never be observed in communication between proper implementations. This alert is returned if a record is received which cannot be deprotected. Because AEAD algorithms combine decryption and verification, this alert is used for all deprotection failures. This alert is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network). A TLSCiphertext record was received that had a length more than 2^14 + 256 bytes, or a record decrypted to a TLSPlaintext record with more than 2^14 bytes. This alert is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network). Reception of a \"handshake_failure\" alert message indicates that the sender was unable to negotiate an acceptable set of security parameters given the options available. This alert is always fatal. A certificate was corrupt, contained signatures that did not verify correctly, etc.", "comments": "As all error alerts are now considered fatal (PR ), we no longer need \"This alert is always fatal\" explicitly stated everywhere in that section.", "new_text": "Upon transmission or receipt of a fatal alert message, both parties immediately close the connection. Whenever an implementation encounters a condition which is defined as a fatal alert, it MUST send the appropriate alert prior to closing the connection. All alerts defined in this section below, as well as all unknown alerts are universally considered fatal as of TLS 1.3 (see alert-protocol). The following error alerts are defined: An inappropriate message was received. This alert should never be observed in communication between proper implementations. This alert is returned if a record is received which cannot be deprotected. Because AEAD algorithms combine decryption and verification, this alert is used for all deprotection failures. This alert should never be observed in communication between proper implementations, except when messages were corrupted in the network. A TLSCiphertext record was received that had a length more than 2^14 + 256 bytes, or a record decrypted to a TLSPlaintext record with more than 2^14 bytes. This alert should never be observed in communication between proper implementations, except when messages were corrupted in the network. 
Reception of a \"handshake_failure\" alert message indicates that the sender was unable to negotiate an acceptable set of security parameters given the options available. A certificate was corrupt, contained signatures that did not verify correctly, etc."} {"id": "q-en-tls13-spec-f79a09e09ad644ff353717f930a2734942cc1f35a914e9daf1a347d00445c0d4", "old_text": "certificate, rendering it unacceptable. A field in the handshake was out of range or inconsistent with other fields. This alert is always fatal. A valid certificate chain or partial chain was received, but the certificate was not accepted because the CA certificate could not be located or couldn't be matched with a known, trusted CA. This alert is always fatal. A valid certificate or PSK was received, but when access control was applied, the sender decided not to proceed with negotiation. This alert is always fatal. A message could not be decoded because some field was out of the specified range or the length of the message was incorrect. This alert is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network). A handshake cryptographic operation failed, including being unable to correctly verify a signature or validate a Finished message. This alert is always fatal. The protocol version the peer has attempted to negotiate is recognized but not supported. (For example, old protocol versions might be avoided for security reasons.) This alert is always fatal. Returned instead of \"handshake_failure\" when a negotiation has failed specifically because the server requires ciphers more secure than those supported by the client. This alert is always fatal. An internal error unrelated to the peer or the correctness of the protocol (such as a memory allocation failure) makes it impossible to continue. This alert is always fatal. Sent by a server in response to an invalid connection retry attempt from a client. (see [RFC7507]) This alert is always fatal. Sent by endpoints that receive a hello message not containing an extension that is mandatory to send for the offered TLS version. This message is always fatal. [[TODO: IANA Considerations.]] Sent by endpoints receiving any hello message containing an extension known to be prohibited for inclusion in the given hello message, including any extensions in a ServerHello not first offered in the corresponding ClientHello. This alert is always fatal. Sent by servers when unable to obtain a certificate from a URL provided by the client via the \"client_certificate_url\" extension", "comments": "As all error alerts are now considered fatal (PR ), we no longer need \"This alert is always fatal\" explicitly stated everywhere in that section.", "new_text": "certificate, rendering it unacceptable. A field in the handshake was out of range or inconsistent with other fields. A valid certificate chain or partial chain was received, but the certificate was not accepted because the CA certificate could not be located or couldn't be matched with a known, trusted CA. A valid certificate or PSK was received, but when access control was applied, the sender decided not to proceed with negotiation. A message could not be decoded because some field was out of the specified range or the length of the message was incorrect. This alert should never be observed in communication between proper implementations, except when messages were corrupted in the network. 
A handshake cryptographic operation failed, including being unable to correctly verify a signature or validate a Finished message. The protocol version the peer has attempted to negotiate is recognized but not supported. (see backward-compatibility) Returned instead of \"handshake_failure\" when a negotiation has failed specifically because the server requires ciphers more secure than those supported by the client. An internal error unrelated to the peer or the correctness of the protocol (such as a memory allocation failure) makes it impossible to continue. Sent by a server in response to an invalid connection retry attempt from a client. (see [RFC7507]) Sent by endpoints that receive a hello message not containing an extension that is mandatory to send for the offered TLS version. [[TODO: IANA Considerations.]] Sent by endpoints receiving any hello message containing an extension known to be prohibited for inclusion in the given hello message, including any extensions in a ServerHello not first offered in the corresponding ClientHello. Sent by servers when unable to obtain a certificate from a URL provided by the client via the \"client_certificate_url\" extension"} {"id": "q-en-tls13-spec-f79a09e09ad644ff353717f930a2734942cc1f35a914e9daf1a347d00445c0d4", "old_text": "Sent by servers when a retrieved object does not have the correct hash provided by the client via the \"client_certificate_url\" extension RFC6066. This alert is always fatal. Sent by servers when a PSK cipher suite is selected but no acceptable PSK identity is provided by the client. Sending this", "comments": "As all error alerts are now considered fatal (PR ), we no longer need \"This alert is always fatal\" explicitly stated everywhere in that section.", "new_text": "Sent by servers when a retrieved object does not have the correct hash provided by the client via the \"client_certificate_url\" extension RFC6066. Sent by servers when a PSK cipher suite is selected but no acceptable PSK identity is provided by the client. Sending this"} {"id": "q-en-tls13-spec-bb0b17608a0013fe1bdfae21a13ad08210946c78a22c2180819a5717961776ad", "old_text": "RFC5705 defines keying material exporters for TLS in terms of the TLS PRF. This document replaces the PRF with HKDF, thus requiring a new construction. The exporter interface remains the same, however the value is computed as: 8.", "comments": "No context and empty context are now declared to be the same. Add some ALL CAPS WORDS for future specifications to deprecate no context. I used SHOULD to forbid no context because, if some protocol currently uses no context, switching it to empty context is a breaking change and probably not worth it. We just want new ones to always provide a context. This affects the rules for using exporters, so RFC 5705 is added to the list of updated documents. Closes issue .", "new_text": "RFC5705 defines keying material exporters for TLS in terms of the TLS PRF. This document replaces the PRF with HKDF, thus requiring a new construction. The exporter interface remains the same. If context is provided, the value is computed as: If no context is provided, the value is computed as: Note that providing no context computes the same value as providing an empty context. As of this document's publication, no allocated exporter label is used with both modes. Future specifications MUST NOT provide an empty context and no context with the same label and SHOULD provide a context, possibly empty, in all exporter computations. 
8."} {"id": "q-en-tls13-spec-48607ee16cca07331ad3213a4b70c7c0ccb4908311571338b856b2f53335d731", "old_text": "The key exchange and authentication modes this PSK is allowed to be used with The Application-Layer Protocol Negotiation (ALPN) label(s) The Server Name Indication (SNI), if any is to be used", "comments": "1) Switches uses of ALPN label/value to \"ALPN protocol\" (to clarify this is referring to a single protocol, not the entire extension) 2) Remove plural from external PSK case (if there's not way for the client to select which protocol the 0-RTT data uses, it doesn't make sense for there to be multiple options) 3) Switch reference from RFC7443 to RFC7301 (I assume this is what was intended) I think this makes the intended behavior of the spec a bit more clear.", "new_text": "The key exchange and authentication modes this PSK is allowed to be used with The Application-Layer Protocol Negotiation (ALPN) protocol, if any is to be used The Server Name Indication (SNI), if any is to be used"} {"id": "q-en-tls13-spec-48607ee16cca07331ad3213a4b70c7c0ccb4908311571338b856b2f53335d731", "old_text": "handshake but reject 0-RTT, and SHOULD NOT take any other action that assumes that this ClientHello is fresh. The parameters for the 0-RTT data (symmetric cipher suite, ALPN, etc.) are the same as those which were negotiated in the connection which established the PSK. The PSK used to encrypt the early data MUST be the first PSK listed in the client's \"pre_shared_key\" extension. 0-RTT messages sent in the first flight have the same content types as their corresponding messages sent in other flights (handshake,", "comments": "1) Switches uses of ALPN label/value to \"ALPN protocol\" (to clarify this is referring to a single protocol, not the entire extension) 2) Remove plural from external PSK case (if there's not way for the client to select which protocol the 0-RTT data uses, it doesn't make sense for there to be multiple options) 3) Switch reference from RFC7443 to RFC7301 (I assume this is what was intended) I think this makes the intended behavior of the spec a bit more clear.", "new_text": "handshake but reject 0-RTT, and SHOULD NOT take any other action that assumes that this ClientHello is fresh. The parameters for the 0-RTT data (symmetric cipher suite, ALPN protocol, etc.) are the same as those which were negotiated in the connection which established the PSK. The PSK used to encrypt the early data MUST be the first PSK listed in the client's \"pre_shared_key\" extension. 0-RTT messages sent in the first flight have the same content types as their corresponding messages sent in other flights (handshake,"} {"id": "q-en-tls13-spec-48607ee16cca07331ad3213a4b70c7c0ccb4908311571338b856b2f53335d731", "old_text": "with a HelloRetryRequest. A client MUST NOT include the \"early_data\" extension in its followup ClientHello. In order to accept early data, the server server MUST have accepted a PSK cipher suite and selected the the first key offered in the client's \"pre_shared_key\" extension. In addition, it MUST verify that the following values are consistent with those negotiated in the connection during which the ticket was established. The TLS version number, AEAD algorithm, and the hash for HKDF. The selected ALPN RFC7443 value, if any. 
Future extensions MUST define their interaction with 0-RTT.", "comments": "1) Switches uses of ALPN label/value to \"ALPN protocol\" (to clarify this is referring to a single protocol, not the entire extension) 2) Remove plural from external PSK case (if there's not way for the client to select which protocol the 0-RTT data uses, it doesn't make sense for there to be multiple options) 3) Switch reference from RFC7443 to RFC7301 (I assume this is what was intended) I think this makes the intended behavior of the spec a bit more clear.", "new_text": "with a HelloRetryRequest. A client MUST NOT include the \"early_data\" extension in its followup ClientHello. In order to accept early data, the server MUST have accepted a PSK cipher suite and selected the first key offered in the client's \"pre_shared_key\" extension. In addition, it MUST verify that the following values are consistent with those negotiated in the connection during which the ticket was established. The TLS version number, AEAD algorithm, and the hash for HKDF. The selected ALPN RFC7301 protocol, if any. Future extensions MUST define their interaction with 0-RTT."} {"id": "q-en-tls13-spec-48607ee16cca07331ad3213a4b70c7c0ccb4908311571338b856b2f53335d731", "old_text": "application MAY opt to retransmit the data once the handshake has been completed. TLS stacks SHOULD not do this automatically and client applications MUST take care that the negotiated parameters are consistent with those it expected. For example, if the ALPN value has changed, it is likely unsafe to retransmit the original application layer data. 4.2.9.1.", "comments": "1) Switches uses of ALPN label/value to \"ALPN protocol\" (to clarify this is referring to a single protocol, not the entire extension) 2) Remove plural from external PSK case (if there's not way for the client to select which protocol the 0-RTT data uses, it doesn't make sense for there to be multiple options) 3) Switch reference from RFC7443 to RFC7301 (I assume this is what was intended) I think this makes the intended behavior of the spec a bit more clear.", "new_text": "application MAY opt to retransmit the data once the handshake has been completed. TLS stacks SHOULD not do this automatically and client applications MUST take care that the negotiated parameters are consistent with those it expected. For example, if the selected ALPN protocol has changed, it is likely unsafe to retransmit the original application layer data. 4.2.9.1."} {"id": "q-en-tls13-spec-4ebd38559cbf47c0f0c77b8ee44c9d874a9a85f41b834a6afe28e142ecfd4982", "old_text": "%%% Key Exchange Messages This field contains the version of TLS negotiated for this session. Servers MUST select the lower of the highest supported server version and the version offered by the client in the ClientHello. In particular, servers MUST accept ClientHello messages with versions higher than those supported and negotiate the highest mutually supported version. For this version of the specification, the version is 0x0304. (See backward-compatibility for details about backward compatibility.) This structure is generated by the server and MUST be generated independently of the ClientHello.random.", "comments": "It seems section 4.1.3 Server Hello has not been updated to reflect the new version negotiation mechanism. says \"servers says \"servers MUST [...] 
negotiate the highest mutually supported version\".\nThanks, I caught a few of these on my reread yesterday too, but I probably missed some too, so keep them coming!", "new_text": "%%% Key Exchange Messages This field contains the version of TLS negotiated for this session. Servers MUST select a version from the list in ClientHello.supported_versions extension. A client which receives a version that was not offered MUST abort the handshake. For this version of the specification, the version is 0x0304. (See backward-compatibility for details about backward compatibility.) This structure is generated by the server and MUST be generated independently of the ClientHello.random."} {"id": "q-en-tls13-spec-cd9967d9c2fa0564ee26948f668f619d06c37890f75968d87dec69ffba04fa7a", "old_text": "otherwise, a TLS-compliant application MUST implement the TLS_AES_128_GCM_SHA256 cipher suite and SHOULD implement the TLS_AES_256_GCM_SHA384 and TLS_CHACHA20_POLY1305_SHA256 cipher suites. A TLS-compliant application MUST support digital signatures with rsa_pkcs1_sha256 (for certificates), rsa_pss_sha256 (for", "comments": "Just noticed that the MTI suites section didn't link to the suites, so here's a minor PR with some reference fiddling for this section: Add a link in MTI suites to the suites section, now that they're defined in this document. Nitpick ordering of extensions in MTI extensions to have them in same order as sections (so numbers in links are consistient). Change plaintext reference to appendices in Security Considerations to actual references.", "new_text": "otherwise, a TLS-compliant application MUST implement the TLS_AES_128_GCM_SHA256 cipher suite and SHOULD implement the TLS_AES_256_GCM_SHA384 and TLS_CHACHA20_POLY1305_SHA256 cipher suites. (see cipher-suites) A TLS-compliant application MUST support digital signatures with rsa_pkcs1_sha256 (for certificates), rsa_pss_sha256 (for"} {"id": "q-en-tls13-spec-cd9967d9c2fa0564ee26948f668f619d06c37890f75968d87dec69ffba04fa7a", "old_text": "Supported Versions (\"supported_versions\"; supported-versions) Signature Algorithms (\"signature_algorithms\"; signature- algorithms)", "comments": "Just noticed that the MTI suites section didn't link to the suites, so here's a minor PR with some reference fiddling for this section: Add a link in MTI suites to the suites section, now that they're defined in this document. Nitpick ordering of extensions in MTI extensions to have them in same order as sections (so numbers in links are consistient). Change plaintext reference to appendices in Security Considerations to actual references.", "new_text": "Supported Versions (\"supported_versions\"; supported-versions) Cookie (\"cookie\"; cookie) Signature Algorithms (\"signature_algorithms\"; signature- algorithms)"} {"id": "q-en-tls13-spec-cd9967d9c2fa0564ee26948f668f619d06c37890f75968d87dec69ffba04fa7a", "old_text": "Pre-Shared Key (\"pre_shared_key\"; pre-shared-key-extension) Cookie (\"cookie\"; cookie) Server Name Indication (\"server_name\"; Section 3 of RFC6066) All implementations MUST send and use these extensions when offering", "comments": "Just noticed that the MTI suites section didn't link to the suites, so here's a minor PR with some reference fiddling for this section: Add a link in MTI suites to the suites section, now that they're defined in this document. Nitpick ordering of extensions in MTI extensions to have them in same order as sections (so numbers in links are consistient). 
Change plaintext reference to appendices in Security Considerations to actual references.", "new_text": "Pre-Shared Key (\"pre_shared_key\"; pre-shared-key-extension) Server Name Indication (\"server_name\"; Section 3 of RFC6066) All implementations MUST send and use these extensions when offering"} {"id": "q-en-tls13-spec-cd9967d9c2fa0564ee26948f668f619d06c37890f75968d87dec69ffba04fa7a", "old_text": "9. Security issues are discussed throughout this memo, especially in Appendices B, C, and D. 10.", "comments": "Just noticed that the MTI suites section didn't link to the suites, so here's a minor PR with some reference fiddling for this section: Add a link in MTI suites to the suites section, now that they're defined in this document. Nitpick ordering of extensions in MTI extensions to have them in same order as sections (so numbers in links are consistient). Change plaintext reference to appendices in Security Considerations to actual references.", "new_text": "9. Security issues are discussed throughout this memo, especially in implementation-notes, backward-compatibility, and security-analysis. 10."} {"id": "q-en-tls13-spec-9b5d93c634c595bd554bc8c7b63c6dd777a62175ca5c3a59db8c9705689cc99d", "old_text": "The client uses the \"signature_algorithms\" extension to indicate to the server which signature algorithms may be used in digital signatures. Clients which desire the server to authenticate via a certificate MUST send this extension. If a server is authenticating via a certificate and the client has not sent a \"signature_algorithms\" extension then the server MUST abort the handshake with a \"missing_extension\" alert (see mti-extensions).", "comments": "Adding \"itself\" makes the sentence much easier to understand. Actually, two Japanese, who are implementing TLS 1.3 independently, took a long time to understand this.", "new_text": "The client uses the \"signature_algorithms\" extension to indicate to the server which signature algorithms may be used in digital signatures. Clients which desire the server to authenticate itself via a certificate MUST send this extension. If a server is authenticating via a certificate and the client has not sent a \"signature_algorithms\" extension then the server MUST abort the handshake with a \"missing_extension\" alert (see mti-extensions)."} {"id": "q-en-tls13-spec-fe9f29eb2c085fd634bea607be196fc3b0c387ea56aee87026a0479acdec9312", "old_text": "prevents passive observers from correlating sessions unless tickets are reused. Note: because ticket lifetimes are restricted to a week, 32 bits is enough to represent any plausible age, even in milliseconds. External tickets SHOULD use an obfuscated_ticket_age of 0; servers MUST ignore this value for external tickets. A list of the identities that the client is willing to negotiate with the server. If sent alongside the \"early_data\" extension", "comments": "Ticket is a term that refers to a PSK identity established using an handshake. For out-of-band PSK, we do not use the term.", "new_text": "prevents passive observers from correlating sessions unless tickets are reused. Note: because ticket lifetimes are restricted to a week, 32 bits is enough to represent any plausible age, even in milliseconds. For identities established externally an obfuscated_ticket_age of 0 SHOULD be used, and servers MUST ignore the value. A list of the identities that the client is willing to negotiate with the server. 
If sent alongside the \"early_data\" extension"} {"id": "q-en-tls13-spec-222548c2d3397d936f57abfea11ec2078104f8dc2f5a914d8690fdfc7b607185", "old_text": "length of the digest output. This codepoint is also defined for use with TLS 1.2. Indicates a signature algorithm using EdDSA as defined in I- D.irtf-cfrg-eddsa or its successors. Note that these correspond to the \"PureEdDSA\" algorithms and not the \"prehash\" variants. rsa_pkcs1_sha1, dsa_sha1, and ecdsa_sha1 SHOULD NOT be offered. Clients offering these values for backwards compatibility MUST list", "comments": "draft_eddsa is now . sandj-tls-iana-registry-updates is now ietf-tls-iana-registry-updates.", "new_text": "length of the digest output. This codepoint is also defined for use with TLS 1.2. Indicates a signature algorithm using EdDSA as defined in RFC8032 or its successors. Note that these correspond to the \"PureEdDSA\" algorithms and not the \"prehash\" variants. rsa_pkcs1_sha1, dsa_sha1, and ecdsa_sha1 SHOULD NOT be offered. Clients offering these values for backwards compatibility MUST list"} {"id": "q-en-tls13-spec-222548c2d3397d936f57abfea11ec2078104f8dc2f5a914d8690fdfc7b607185", "old_text": "suites to the registry. The \"Value\" and \"Description\" columns are taken from the table. The \"DTLS-OK\" and \"Recommended\" columns are both marked as \"Yes\" for each new cipher suite. [[This assumes I- D.sandj-tls-iana-registry-updates has been applied.]] TLS ContentType Registry: Future values are allocated via Standards Action RFC5226.", "comments": "draft_eddsa is now . sandj-tls-iana-registry-updates is now ietf-tls-iana-registry-updates.", "new_text": "suites to the registry. The \"Value\" and \"Description\" columns are taken from the table. The \"DTLS-OK\" and \"Recommended\" columns are both marked as \"Yes\" for each new cipher suite. [[This assumes I- D.ietf-tls-iana-registry-updates has been applied.]] TLS ContentType Registry: Future values are allocated via Standards Action RFC5226."} {"id": "q-en-tls13-spec-ac45606a08ed48aa4c2cb709ea8cd06d0183b0640b363ab3e9e133c2bd66e243", "old_text": "\"encrypted_extensions\", \"end_of_early_data\", \"key_update\", and \"handshake_hash\" values. This document also uses a registry originally created in RFC4366. IANA has updated it to reference this document. The registry and its allocation policy is listed below: IANA [SHALL update/has updated] this registry to include the \"key_share\", \"pre_shared_key\", \"psk_key_exchange_modes\",", "comments": "Need to be explicit say that it's the ExtensionType registry from RFC 4366.\nI literally was making the same change in my editor when the notification came in! (Should it be the \"TLS Extensions ExtensionType registry\"?)", "new_text": "\"encrypted_extensions\", \"end_of_early_data\", \"key_update\", and \"handshake_hash\" values. This document also uses the TLS ExtensionType Registry originally created in RFC4366. IANA has updated it to reference this document. The registry and its allocation policy is listed below: IANA [SHALL update/has updated] this registry to include the \"key_share\", \"pre_shared_key\", \"psk_key_exchange_modes\","} {"id": "q-en-tls13-spec-ee73af626d4cec286f6294b7f35b786ab175a61afee38df50e54f404378cca7b", "old_text": "authorities). Servers which are authenticating with a PSK MUST NOT send the CertificateRequest message. 4.3.2.1.", "comments": "When we banned client auth and PSK, we only meant to do it for the main handshake, not the post-handshake phase. 
This reverts that change, as well as clarifies the prophibition on PSK plus cert-based auth.", "new_text": "authorities). Servers which are authenticating with a PSK MUST NOT send the CertificateRequest message in the main handshake, though they MAY send it in post-handshake authentication (see post-handshake- authentication) provided that the client has sent the \"post_handshake_auth\" extension (see post_handshake_auth). 4.3.2.1."} {"id": "q-en-trickle-ca5ccf38a10bac7f822d24248bcd3a8bd86a8bfe83d908deeca47a6e4ca85235", "old_text": "candidates, such as STUN and TURN. All of the candidates sent within an ICE session; these are the candidates that are associated with a local/remote ufrag pair (which will change on ICE restart, if any). Any session-related (as opposed to candidate-related) attributes required to configure an ICE agent. These include but are not", "comments": "Changed \"media stream\" to \"media stream and component\" in one place, and made some grammatical changes to the definition of an ICE generation.", "new_text": "candidates, such as STUN and TURN. All of the candidates sent within an ICE session; these are the candidates that are associated with a specific local/remote ufrag pair (which will change on ICE restart, if any occurs). Any session-related (as opposed to candidate-related) attributes required to configure an ICE agent. These include but are not"} {"id": "q-en-trickle-ca5ccf38a10bac7f822d24248bcd3a8bd86a8bfe83d908deeca47a6e4ca85235", "old_text": "Once the candidate has been sent to the remote party, the agent checks if any remote candidates are currently known for this same stream. If not, the new candidate will simply be added to the list of local candidates. Otherwise, if the agent has already learned of one or more remote candidates for this stream and component, it will begin pairing the", "comments": "Changed \"media stream\" to \"media stream and component\" in one place, and made some grammatical changes to the definition of an ICE generation.", "new_text": "Once the candidate has been sent to the remote party, the agent checks if any remote candidates are currently known for this same stream and component. If not, the new candidate will simply be added to the list of local candidates. Otherwise, if the agent has already learned of one or more remote candidates for this stream and component, it will begin pairing the"} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "The use of source control improves traceability and visibility of changes. Issue tracking can be used to manage open issues and provide a record of their resolution. Pull requests allow for better engagement on technical and edditorial changes, and encourage contributions from a larger set of contributors. Using GitHub can also broaden the community of contributors for a specification.", "comments": "A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "The use of source control improves traceability and visibility of changes. Issue tracking can be used to manage open issues and provide a record of their resolution. Pull requests allow for better engagement on technical and editorial changes, and encourage contributions from a larger set of contributors. 
Using GitHub can also broaden the community of contributors for a specification."} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "Working Groups might integrate the capabilities provided by GitHub into their processes for developing Internet-Drafts. This document is meant as an supplement to existing Working Group practices. It provides guidance to Working Group chairs and participants on how they can best use GitHub within the framework established RFC 2418 RFC2418. The small number of rules in this document are there to ensure common usage patterns between working groups and to avoid issues that have been encountered in the past. A companion document, GH-CONFIG, describes administrative processes that supports the practices described in this document. 1.1.", "comments": "A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "Working Groups might integrate the capabilities provided by GitHub into their processes for developing Internet-Drafts. This document is meant as a supplement to existing Working Group practices. It provides guidance to Working Group chairs and participants on how they can best use GitHub within the framework established by RFC 2418 RFC2418. The small number of rules in this document are there to ensure common usage patterns between working groups and to avoid issues that have been encountered in the past. A companion document, GH-CONFIG, describes administrative processes that support the practices described in this document. 1.1."} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "the only possible choice for hosting. There are other services that host revision control repositories and provide similar additional features to GitHub. For instance, BitBucket [4], or GitLab [5] provide a similar feature set. In additional to a hosted service, software for custom installations exists. This document concentrates primarily on GitHub as it has a large and", "comments": "A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "the only possible choice for hosting. There are other services that host revision control repositories and provide similar additional features to GitHub. For instance, BitBucket [4], or GitLab [5] provide a similar feature set. In addition to a hosted service, software for custom installations exists. This document concentrates primarily on GitHub as it has a large and"} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "3. A Working Group Chairs are responsible for determining how to best accomplish the Charter in an open and transparent fashion. The Working Group Chairs are responsible for determining if there is interest in using GitHub and making a consensus call to determine if a the proposed policy and use is acceptable. Chairs MUST involve Area Directors any decision to use GitHub for anything more than managing of drafts. While a document editor can still use GitHub independently for documents that they edit, even if the working group does not", "comments": "A good start. 
Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "3. Working Group Chairs are responsible for determining how to best accomplish the Charter in an open and transparent fashion. The Working Group Chairs are responsible for determining if there is interest in using GitHub and making a consensus call to determine if the proposed policy and use is acceptable. Chairs MUST involve Area Directors in any decision to use GitHub for anything more than managing drafts. While a document editor can still use GitHub independently for documents that they edit, even if the working group does not"} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "multiple documents. Maintaining multiple documents in the same repository can add overheads that negatively affect individual documents. For instance, issues might require additional markings to identify the document that they affect. Also, because editors all have write access to the repository, managing the set of people with write access to a larger", "comments": "A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "multiple documents. Maintaining multiple documents in the same repository can add overhead that negatively affects individual documents. For instance, issues might require additional markings to identify the document that they affect. Also, because editors all have write access to the repository, managing the set of people with write access to a larger"} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "One common practice is to use these continuous integration services to build a text or HTML version of a document. This is then published to GitHub Pages, which allows users to view a version of the most recent revision of a document. Including prominent link to this version of the document (such as in the README) makes it easier for new contributors to find a readable copy of the most recent version of a draft. Continuous integration can also validate pull requests and other changes for errors. The most basic check is whether the source file can be transformed successful into a valid Internet-Draft. For example, this might include checking that XML source is syntactically correct. For documents that use formal languages a part of specifications, such as schema or source code, a continuous integration system might also be used to validate any formal language that the document contains. Tests for any source code that the document contains might be run, or examples might be checked for correctness. 8.", "comments": "A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "One common practice is to use these continuous integration services to build a text or HTML version of a document. This is then published to GitHub Pages, which allows users to view a version of the most recent revision of a document. 
Including a prominent link to this version of the document (such as in the README) makes it easier for new contributors to find a readable copy of the most recent version of a draft. Continuous integration can also validate pull requests and other changes for errors. The most basic check is whether the source file can be transformed successfully into a valid Internet-Draft. For example, this might include checking that XML source is syntactically correct. For a document that use formal languages as part of the specification, such as schema or source code, a continuous integration system might also be used to validate any formal language that the document contains. Tests for any source code that the document contains might be run, or examples might be checked for correctness. 8."} {"id": "q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726", "old_text": "If a contributor makes a comment that raises what you believe to be a new issue, create an issue for them. If the issue has an obvious solution, consider creating a pull request. It doesn't matter what venue the issue was raised in, email, issue discussion, a pull request review, capturing issues quickly ensures that problems become visible and can be tracked. This takes a little more effort, but these simple steps can help encourage contributions, which will ultimately improve the quality of", "comments": "A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. Ultra light description is next. Barbara", "new_text": "If a contributor makes a comment that raises what you believe to be a new issue, create an issue for them. If the issue has an obvious solution, consider creating a pull request. It doesn't matter what venue the issue was raised in (e.g., email, issue discussion, a pull request review); capturing issues quickly ensures that problems become visible and can be tracked. This takes a little more effort, but these simple steps can help encourage contributions, which will ultimately improve the quality of"} {"id": "q-en-using-github-71e1019235713fb9b45ac9228ae27287d076e1f32404c878513fab3a2ebfb08f", "old_text": "Group will rely upon. features contains a more thorough discussion on the different features that can be used. Once a document is published in a repository on GitHub, many features like pull requests, issue tracking or the wiki can be individually disabled. If specific features are not used by the Working Group in the development of the document, disabling those features avoids creating confusion in the wider community about what can be used. 3.2. Working Group Chairs that decide to use GitHub MUST inform their", "comments": "Two changes, two issues, but really this is the one idea. The previous draft said that features not used could be disabled. This is bad advice because that might be denying editors access to tools that would make their task easier. The new advice says that editors can use features that the working group doesn't rely on. It also says that chairs should consult editors to ensure that policies aren't adversely affecting them.\nMost of the operating modes assume that editors are given considerable latitude in how repositories are managed. This is good up until the point that the Working Group starts to formally rely on things that are in the respository. 
The document sort of prevaricates about this point, including strong admonitions that suggest rules be formalized for every aspect of usage. Instead, I think that it would be better if the document said - up-front - that editors are to be given control unless the Working Group starts to formally depend on a particular function (like issue opening/closing, labels, pull requests, whatever...). At that point, it is the chair's responsibility to identify what the WG depends on and to clearly articulate the policies around that usage.\nSection 3.1 talks about turning off GitHub features not being used by the WG, but it doesn't say how to do that in the UI. I ask this because I cannot see a way to prevent issues or PRs just by wandering through the GUI.\nThis isn't something that I'd want to publish. The way that features are configured is too tightly coupled to UX decisions that GitHub might remake and any point. In fact, I might prefer not to include the offending text at all. Disabling features can be a useful way to slim down the options presented to users (which can help new users), but it can also be fairly hostile to new contributors. For instance, if the working group disables issues because they aren't using them, it sounds reasonable, but new contributors might then be dissuaded from opening a genuine issue. Working groups might be better served by informally letting contributors know how to best provide their input.\nI'm happy with either the current text plus \"At the time this is written, to turn of y\" or removing the suggestion.", "new_text": "Group will rely upon. features contains a more thorough discussion on the different features that can be used. 3.2. Working Group Chairs that decide to use GitHub MUST inform their"} {"id": "q-en-using-github-71e1019235713fb9b45ac9228ae27287d076e1f32404c878513fab3a2ebfb08f", "old_text": "that new contributors need. A link to the CONTRIBUTING file from the README is advised. 3.3. New repositories can be created within the Working Group organization", "comments": "Two changes, two issues, but really this is the one idea. The previous draft said that features not used could be disabled. This is bad advice because that might be denying editors access to tools that would make their task easier. The new advice says that editors can use features that the working group doesn't rely on. It also says that chairs should consult editors to ensure that policies aren't adversely affecting them.\nMost of the operating modes assume that editors are given considerable latitude in how repositories are managed. This is good up until the point that the Working Group starts to formally rely on things that are in the respository. The document sort of prevaricates about this point, including strong admonitions that suggest rules be formalized for every aspect of usage. Instead, I think that it would be better if the document said - up-front - that editors are to be given control unless the Working Group starts to formally depend on a particular function (like issue opening/closing, labels, pull requests, whatever...). At that point, it is the chair's responsibility to identify what the WG depends on and to clearly articulate the policies around that usage.\nSection 3.1 talks about turning off GitHub features not being used by the WG, but it doesn't say how to do that in the UI. I ask this because I cannot see a way to prevent issues or PRs just by wandering through the GUI.\nThis isn't something that I'd want to publish. 
The way that features are configured is too tightly coupled to UX decisions that GitHub might remake and any point. In fact, I might prefer not to include the offending text at all. Disabling features can be a useful way to slim down the options presented to users (which can help new users), but it can also be fairly hostile to new contributors. For instance, if the working group disables issues because they aren't using them, it sounds reasonable, but new contributors might then be dissuaded from opening a genuine issue. Working groups might be better served by informally letting contributors know how to best provide their input.\nI'm happy with either the current text plus \"At the time this is written, to turn of y\" or removing the suggestion.", "new_text": "that new contributors need. A link to the CONTRIBUTING file from the README is advised. The set of GitHub features (features) that the Working Group relies upon need to be clearly documented in policies. This document provides some guidance on potential policies and how those might be applied. Features that the Working Group does not rely upon SHOULD be made available to document editors. Editors are then able to use these features for their own purposes. For example, though the Working Group might not formally use issues to track items that require further discussion in order to reach consensus, keeping the issue tracker available to editors can be valuable. Working Group policies need to be set with the goal of improving transparency, participation, and ultimately the quality of the consensus behind documents. At times, it might be appropriate to impose some limitations on what document editors are able to do in order to serve these goals. Chairs SHOULD periodically consult with document editors to ensure that policies are effective and not unjustifiably constraining progress. 3.3. New repositories can be created within the Working Group organization"} {"id": "q-en-using-github-dce8a2ed7c37e92d6329ea35055f4d79b62e9fb72807ea63fc3c103e04f5db10", "old_text": "sufficient, though it might be helpful to occasionally remind new contributors of these guidelines. A choice to use GitHub is similar to the formation of a design team (see Section 6.5 of RFC2418) provided that the work uses a public repository. That is, the output of any activity using GitHub needs to be taken to the Working Group mailing list and subject to approval, rejection, or modification by the Working Group as with any other input. Working Group Chairs are responsible for ensuring that any policy they adopt is enforced and maintained.", "comments": "This is the way we've been treating this, so let's acknowledge that.\nAs I noted on the list, I don't think this is correct. I proposed alternate text.\nStill LGTM. :-)", "new_text": "sufficient, though it might be helpful to occasionally remind new contributors of these guidelines. Working Group Chairs are responsible for ensuring that any policy they adopt is enforced and maintained."} {"id": "q-en-using-github-dce8a2ed7c37e92d6329ea35055f4d79b62e9fb72807ea63fc3c103e04f5db10", "old_text": "documents they edit but preserves the need for contributors to understand their obligations with respect to IETF processes. 3.2. New repositories can be created within the Working Group organization", "comments": "This is the way we've been treating this, so let's acknowledge that.\nAs I noted on the list, I don't think this is correct. I proposed alternate text.\nStill LGTM. 
:-)", "new_text": "documents they edit but preserves the need for contributors to understand their obligations with respect to IETF processes. Work done in GitHub has no special status. The output of any activity using GitHub needs to be taken to the Working Group and is subject to approval, rejection, or modification by the Working Group as with any other input. 3.2. New repositories can be created within the Working Group organization"} {"id": "q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9", "old_text": "discovered. Such elements may include classes of data, major roles, and role interactions. This scenario also informs protocol and data model development in support of vulnerability assessment, as part of overall posture assessment. Vulnerability discovery, disclosure, and publication is out of scope. 2.", "comments": "We should review the appendices and include the appropriate references to them through the main body of the document.", "new_text": "discovered. Such elements may include classes of data, major roles, and role interactions. This scenario also informs protocol and data model development in support of vulnerability assessment, as part of overall posture assessment (see implementation-examples for examples of solutions that support this scenario). Vulnerability discovery, disclosure, publication, and prioritization is out of scope. However, given the importance of prioritization in an enterprise's vulnerability assessment process, it is discussed in priority. Information on how the scenario aligns with SACM and other existing work is discussed in sacm-usage-scenarios through alignment-with- other-existing-works. 2."} {"id": "q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9", "old_text": "data, and vulnerability assessment results. The enterprise has a procedure for reassessment of endpoints at some point after initial assessment. 4.", "comments": "We should review the appendices and include the appropriate references to them through the main body of the document.", "new_text": "data, and vulnerability assessment results. The enterprise has a procedure for reassessment of endpoints at some point after initial assessment (see continuous-vulnerability- assessment for more information). 4."} {"id": "q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9", "old_text": "performed on an ongoing basis, resulting in routine, or even event- driven, collection of basic endpoint information. See \"Data Attribute Tables and Definitions\" for information-specific details. 4.2.", "comments": "We should review the appendices and include the appropriate references to them through the main body of the document.", "new_text": "performed on an ongoing basis, resulting in routine, or even event- driven, collection of basic endpoint information. See data-attribute-table for information-specific details. 4.2."} {"id": "q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9", "old_text": "processing of vulnerability description data is expected to trigger the vulnerability assessment. See \"Data Attribute Tables and Definitions\" for information-specific details. 5.", "comments": "We should review the appendices and include the appropriate references to them through the main body of the document.", "new_text": "processing of vulnerability description data is expected to trigger the vulnerability assessment. 
See data-attribute-table for information-specific details. 5."} {"id": "q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9", "old_text": "capability can be pushed to the vulnerability assessment capability by the endpoint whenever that information changes. See \"Data Attribute Tables and Definitions\" for information-specific details. 6.", "comments": "We should review the appendices and include the appropriate references to them through the main body of the document.", "new_text": "capability can be pushed to the vulnerability assessment capability by the endpoint whenever that information changes. See data-attribute-table for information-specific details. 6."} {"id": "q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9", "old_text": "with sufficient context, so that appropriate action can be taken. Vulnerability assessment results are ideally stored for later use. See \"Data Attribute Tables and Definitions\" for information-specific details. 7.", "comments": "We should review the appendices and include the appropriate references to them through the main body of the document.", "new_text": "with sufficient context, so that appropriate action can be taken. Vulnerability assessment results are ideally stored for later use. See data-attribute-table for information-specific details. 7."} {"id": "q-en-webpush-protocol-427f51d785e8147289d3d0a855e76cf079ba29e4982860ca42e81afd7207c60c", "old_text": "value MUST be less than or equal to the value provided by the application server. 7. A user agent requests the delivery of new push messages by making a", "comments": "I've specified a Topic header field that we can use as a \"collapse key\".\nRegarding max topics, I don't plan on limiting the max topics. I'm using a sharded db with a compound hash+range key for indexing, and plan on storing the topic in the range key for direct access to delete it for later messages with a topic field. There is the side-effect that the UA will get messages out-of-order as a result. ie. all messages with no topic will be delivered in order, then messages with a topic will be delivered in the order of their topic names.\nWe currently impose a limit of four topics (n\u00e9e collapse keys) +NAME I do like the change. GCM has a separate feature called Topics, i.e. subscriptions to a category, which may cause some confusion. Thinking about the name without taking this into account, I still think it makes sense in this context. URL\nNAME yup, the common use case of sending \"You have N messages in your mailbox\", makes sense to just tell a developer to add the topic \"Inbox\", and the user will always get the last one sent in. I am curious about how systems having topic limits plan on communicating this to the app-server. Is there some minimum the spec should mandate just so that an app developer isn't surprised later when their app needing 5 topics doesn't work right on a browser that has topic limits?\nWould 429 work for systems that run out of topics (or collapse keys)? That seems right to me.\n429 seems to specific to be used for this scenario: The 429 status code indicates that the user has sent too many requests in a given amount of time (\"rate limiting\"). Could be a 500?\nIs there a length limitation on the Topic? This could also be implementation-specific and signaled by: 431 Request Header Fields Too Large (RFC6585)\nI think that NAME and I concluded that we could accept any size of topic. 
Up to the limits on header field sizes, that is, which would trigger 431. Maybe, but it is the best match I have. And I think that 4xx is better than 5xx.\nIf you continue with 429, how does an application server discern between 429 Rate Limiting and 429 Too Many Topics? By the presence of Retry-After?\nRetry-After might be present in all cases. However, you raise a point that the definition of 429 leaves open: Is this too many requests globally? By the same client? To the same resource? To the same resource by the same client? Basically, the \"scope\" of applicability for 429 is hard - if not impossible - to determine. The scope is defined in some server specific terms. The question I'd ask is whether we want to provide some webpush-specific mechanism for identifying the scope of applicability for the rejection.\nWhat about using a Scope header field in the response? In this case it will be 'Topic', maybe for too many messages for a subscription it could be 'Subscription', etc.\nAll these things are possible. The question is how important it is. Also, if we wanted to address the scope problem, I would want to address it more generally and then maybe add any push-specific extensions as necessary. This is a problem with 429 that - if it is worth fixing here - is probably worth fixing generally.\nMaybe this was answered elsewhere, but why not just reusing the push message URI? To replace a previous message, the application would create a new message and include the previous message URI in the creation request, inside a \"Replace\" header. Otherwise, it would work in the same way as this proposal. This would be easy to handle on the Push Server side. On the application server side, this would mean storing the previous message URI. But the application server needs to store the topic of the previous message anyway.\nNAME the idea here was that remembering the push message URI was a burden we didn't want to impose on applications. Those that aren't using acknowledgments will be able to use this version without holding any special state. Thanks for the grammatical review, I'll fix those up.\nNAME an application wanting to be able to update a message will need to keep the topic of this message, so it will need to keep this state. I don't think keeping a topic or the push message URI is a large difference. However, it's true that simple applications could use only 1 topic, which would be predefined (or a few predefined topics).\nNAME an application can hard-code a topic or small set of topics. They can also derive a topic from context. A URL needs to be first learned, then memorized.\nNAME do you have any opinions on this change?\nUpdated based on feedback from NAME Note that I haven't changed the name. Not sure that we have consensus on that yet.\nConversation starts : How does the PS know that AS can accept more receipts, and how does it know a receipt has been accepted by the AS ?\nTwo-Pipe model a la mode de TIP? URL\nNAME started on HTTPbis\nCostin : we discussed my concerns on flow control for push promises, and I think it's reasonable to have them addressed either as part of http2 or as a separate document. I'm closing this issue for webpush since it's no longer actionable - NAME - re-open if you want to track for \"futures\". I'll tag appropriately. +1 - we discussed my concerns on flow control for push promises, and I think it's reasonable to have them addressed either as part of http2 or as a separate document. 
Other than that I think it's reasonable and stable base - extensions and features can be added on top of it. Costin Richard Maher wrote: >\n+1 But I'm not sure we can support the mode that Martin described, so I think we may take advantage of \"PS MAY return a new receipt subscription if it is unable to reuse\". Buffering and flow control are complicated - the choice is between dropping receipts or sending them to a different DC. I haven't checked, but I think there is no guarantee on saving receipts if AS doesn't have enough ingress capacity. One question for HTTP/2 experts: I assumed the receipt promises are subject to normal 'max concurrent streams' ( and this will be the main mechanism to do flow control/balancing from PS to AS). However I see rfc7540: ... \" Note: The client never sends a frame with the END_STREAM flag for a server push.\" How does the PS know that AS can accept more receipts, and how does it know a receipt has been accepted by the AS ? Costin Brian Raymor wrote: >", "new_text": "value MUST be less than or equal to the value provided by the application server. 6.3. A push message that has been stored by the push service can be replaced with new content. If the user agent is offline during the time that the push messages are sent, updating a push message avoids the situation where obsolete or redundant messages are sent to the user agent. Only push messages that have been assigned a topic can be updated. A push message with a topic replaces any outstanding push message with an identical topic. A push message topic is a string. A topic is carried in a Topic header field. A topic is used to correlate push messages sent to the same subscription, it does not convey any other semantics. The grammar for the Topic header field uses the \"token\" and \"quoted- string\" rules defined in RFC7230. Any double quotes from the \"quoted-string\" form are removed before comparing topics for equality. For use with this protocol, the Topic header field MUST be restricted to no more than 32 characters from the URL- and filename-safe base64 set [RFC4648]. A push service that receives a request with a Token header field that does not meet these constraints SHOULD return an HTTP 400 (Bad Request) status code to an application server. A push message request creates a new push message resource, but simultaneously deletes any existing message resource that has a matching topic. In effect, the information that is stored for the push message is updated, but a new resource is created to avoid problems with in flight acknowledgments for the old message. The push service MAY suppress acknowledgement receipts for the replaced message. This mechanism doesn't allow for stored push messages to be deleted. An application server can remove a push message using a DELETE request to the push message URL. A push message with a topic that has a zero or absent time-to-live will cause a stored message with the same topic to be removed as well as not being stored itself, but the push service will attempt to forward the updated message to the user agent. A push message with a topic that is not shared by an outstanding message to the same subscription is stored or delivered as normal. The value of the Topic header field MUST NOT be forwarded to user agents. Its value is not encrypted or authenticated. 7. 
A user agent requests the delivery of new push messages by making a"} {"id": "q-en-webpush-protocol-427f51d785e8147289d3d0a855e76cf079ba29e4982860ca42e81afd7207c60c", "old_text": "entities that are authorized to send messages on the channel. The push service does not require access to this public key. 9.2. Push message confidentiality does not ensure that the identity of who", "comments": "I've specified a Topic header field that we can use as a \"collapse key\".\nRegarding max topics, I don't plan on limiting the max topics. I'm using a sharded db with a compound hash+range key for indexing, and plan on storing the topic in the range key for direct access to delete it for later messages with a topic field. There is the side-effect that the UA will get messages out-of-order as a result. ie. all messages with no topic will be delivered in order, then messages with a topic will be delivered in the order of their topic names.\nWe currently impose a limit of four topics (n\u00e9e collapse keys) +NAME I do like the change. GCM has a separate feature called Topics, i.e. subscriptions to a category, which may cause some confusion. Thinking about the name without taking this into account, I still think it makes sense in this context. URL\nNAME yup, the common use case of sending \"You have N messages in your mailbox\", makes sense to just tell a developer to add the topic \"Inbox\", and the user will always get the last one sent in. I am curious about how systems having topic limits plan on communicating this to the app-server. Is there some minimum the spec should mandate just so that an app developer isn't surprised later when their app needing 5 topics doesn't work right on a browser that has topic limits?\nWould 429 work for systems that run out of topics (or collapse keys)? That seems right to me.\n429 seems to specific to be used for this scenario: The 429 status code indicates that the user has sent too many requests in a given amount of time (\"rate limiting\"). Could be a 500?\nIs there a length limitation on the Topic? This could also be implementation-specific and signaled by: 431 Request Header Fields Too Large (RFC6585)\nI think that NAME and I concluded that we could accept any size of topic. Up to the limits on header field sizes, that is, which would trigger 431. Maybe, but it is the best match I have. And I think that 4xx is better than 5xx.\nIf you continue with 429, how does an application server discern between 429 Rate Limiting and 429 Too Many Topics? By the presence of Retry-After?\nRetry-After might be present in all cases. However, you raise a point that the definition of 429 leaves open: Is this too many requests globally? By the same client? To the same resource? To the same resource by the same client? Basically, the \"scope\" of applicability for 429 is hard - if not impossible - to determine. The scope is defined in some server specific terms. The question I'd ask is whether we want to provide some webpush-specific mechanism for identifying the scope of applicability for the rejection.\nWhat about using a Scope header field in the response? In this case it will be 'Topic', maybe for too many messages for a subscription it could be 'Subscription', etc.\nAll these things are possible. The question is how important it is. Also, if we wanted to address the scope problem, I would want to address it more generally and then maybe add any push-specific extensions as necessary. 
This is a problem with 429 that - if it is worth fixing here - is probably worth fixing generally.\nMaybe this was answered elsewhere, but why not just reusing the push message URI? To replace a previous message, the application would create a new message and include the previous message URI in the creation request, inside a \"Replace\" header. Otherwise, it would work in the same way as this proposal. This would be easy to handle on the Push Server side. On the application server side, this would mean storing the previous message URI. But the application server needs to store the topic of the previous message anyway.\nNAME the idea here was that remembering the push message URI was a burden we didn't want to impose on applications. Those that aren't using acknowledgments will be able to use this version without holding any special state. Thanks for the grammatical review, I'll fix those up.\nNAME an application wanting to be able to update a message will need to keep the topic of this message, so it will need to keep this state. I don't think keeping a topic or the push message URI is a large difference. However, it's true that simple applications could use only 1 topic, which would be predefined (or a few predefined topics).\nNAME an application can hard-code a topic or small set of topics. They can also derive a topic from context. A URL needs to be first learned, then memorized.\nNAME do you have any opinions on this change?\nUpdated based on feedback from NAME Note that I haven't changed the name. Not sure that we have consensus on that yet.\nConversation starts : How does the PS know that AS can accept more receipts, and how does it know a receipt has been accepted by the AS ?\nTwo-Pipe model a la mode de TIP? URL\nNAME started on HTTPbis\nCostin : we discussed my concerns on flow control for push promises, and I think it's reasonable to have them addressed either as part of http2 or as a separate document. I'm closing this issue for webpush since it's no longer actionable - NAME - re-open if you want to track for \"futures\". I'll tag appropriately. +1 - we discussed my concerns on flow control for push promises, and I think it's reasonable to have them addressed either as part of http2 or as a separate document. Other than that I think it's reasonable and stable base - extensions and features can be added on top of it. Costin Richard Maher wrote: >\n+1 But I'm not sure we can support the mode that Martin described, so I think we may take advantage of \"PS MAY return a new receipt subscription if it is unable to reuse\". Buffering and flow control are complicated - the choice is between dropping receipts or sending them to a different DC. I haven't checked, but I think there is no guarantee on saving receipts if AS doesn't have enough ingress capacity. One question for HTTP/2 experts: I assumed the receipt promises are subject to normal 'max concurrent streams' ( and this will be the main mechanism to do flow control/balancing from PS to AS). However I see rfc7540: ... \" Note: The client never sends a frame with the END_STREAM flag for a server push.\" How does the PS know that AS can accept more receipts, and how does it know a receipt has been accepted by the AS ? Costin Brian Raymor wrote: >", "new_text": "entities that are authorized to send messages on the channel. The push service does not require access to this public key. The Topic header field exposes information that allows more granular correlation of push messages on the same subject. 
This might be used to aid traffic analysis of push messages by the push service. 9.2. Push message confidentiality does not ensure that the identity of who"} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "push messages to the user agent. Flow control SHOULD be used to limit the state commitment for delivery of large messages. 6. A user agent requests the delivery of new push messages by making a", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "push messages to the user agent. Flow control SHOULD be used to limit the state commitment for delivery of large messages. 5.1. A push service can improve the reliability of push message delivery considerably by storing push messages for a period. User agents are often only intermittendly connected, and so benefit from having short term message storage at the push service. Delaying delivery might also be used to batch communication with the user agent, thereby conserving radio resources. Some push messages are not useful once a certain period of time elapses. Delivery of messages after they have ceased to be relevant is wasteful. For example, if the push message contains a call notification, receiving a message after the caller has abandoned the call is of no value; the application at the user agent is forced to suppress the message so that it does not generate a useless alert. An application server can use the TTL header field to limit the time that a push message is retained by a push service. 
The TTL header field contains a value in seconds that describes how long a push message is retained by the push service. Once the TTL period elapses, the push service MUST remove the push message and cease any attempt to deliver it to the user agent. A push service might retain values for a short duration after the TTL period to account for time accounting errors in processing. For instance, distributing a push message within a server cluster might accrue errors due to clock variation, or processing and transit delays. A push service is not obligated to account for time spent by the application server in sending a push message to the push service, or delays incurred while sending a push message to the user agent. An application server needs to account for transit delays in selecting a TTL header field value. Absence of the TTL header field is interpreted as equivalent to a zero value. Push messages with a zero TTL indicate that storage is not needed and that the message can be dropped if the user agent isn't immediately available to receive the message. Push messages with a zero TTL can be delivered very efficiently. A push service MAY choose to retain a push message for a shorter duration than that requested. It indicates this by including a TTL header field in the response that includes the actual TTL. This TTL value MUST be less than or equal to the value provided by the application server. 6. A user agent requests the delivery of new push messages by making a"} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "7.2. Push services typically store messages for some time to allow for limited recovery from transient faults. If a push message is stored, but not delivered, the push service can indicate the probable duration of storage by including expiration information in the response to the push request. A push service is not obligated to store messages indefinitely. If a user agent is not actively monitoring for push messages, those messages can be lost or overridden by newer messages on the same subscription. Push messages that were stored and not delivered to a user agent are delivered when the user agent recommences monitoring. Stored push messages SHOULD include a Last-Modified header field (see Section 2.2 of RFC7232) indicating when delivery was requested by an application", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. 
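A small sketch (not from the draft) of how an application server might set the TTL header field and detect that the push service granted a shorter retention period; the push message URL is assumed to have been obtained when the subscription was shared.

```typescript
// Illustrative sketch only: request delivery with a TTL header field and check
// whether the push service shortened the retention period in its response.
async function pushWithTtl(pushUrl: string, body: Uint8Array, ttlSeconds: number) {
  const res = await fetch(pushUrl, {
    method: "POST",
    headers: { TTL: String(ttlSeconds) },
    body,
  });
  // A push service that retains the message for a shorter period than
  // requested indicates the actual value in a TTL header field of the response.
  const granted = res.headers.get("TTL");
  if (granted !== null && Number(granted) < ttlSeconds) {
    console.warn(`Push service reduced retention to ${granted} seconds`);
  }
  return res;
}
```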
Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "7.2. Storage of push messages based on the TTL header field comprises a potentially significant amount of storage for a push service. A push service is not obligated to store messages indefinitely. A push service is able to indicate how long it intends to retain a message to an application server using the TTL header field (see ttl). A user agent that does not actively monitor for push messages will not receive messages that expire during that interval. Push messages that are stored and not delivered to a user agent are delivered when the user agent recommences monitoring. Stored push messages SHOULD include a Last-Modified header field (see Section 2.2 of RFC7232) indicating when delivery was requested by an application"} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "Push services might need to limit the size and number of stored push messages to avoid overloading. In addition to using the 413 (Payload Too Large) status code for too large push messages, a push service MAY expire push messages prior to any advertised expiration time. 7.3.", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. 
Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "Push services might need to limit the size and number of stored push messages to avoid overloading. In addition to using the 413 (Payload Too Large) status code for too large push messages, a push service MAY expire push messages prior to any advertised expiration time. A push service can reduce the impact push message retention by reducing the time-to-live of push messages. 7.3."} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "9. This document registers three URNs for use in identifying link relation types. These are added to a new \"Web Push Identifiers\" registry according to the procedures in Section 4 of RFC3553; the", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. 
Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "9. This protocol defines new HTTP header fields in iana.header.fields. New link relation types are identified using the URNs defined in iana.urns. 9.1. HTTP header fields are registered within the \"Message Headers\" registry maintained at . This document defines the following HTTP header fields, so their associated registry entries shall be added according to the permanent registrations below (see RFC3864): The change controller is: \"IETF (iesg@ietf.org) - Internet Engineering Task Force\". 9.2. This document registers three URNs for use in identifying link relation types. These are added to a new \"Web Push Identifiers\" registry according to the procedures in Section 4 of RFC3553; the"} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "(this document) Martin Thomson (martin.thomson@gmail) or the Web Push WG (webpush@ietf.org) urn:ietf:params:push:reg", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "(this document) The Web Push WG (webpush@ietf.org) urn:ietf:params:push:reg"} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "(this document) Martin Thomson (martin.thomson@gmail) or the Web Push WG (webpush@ietf.org) urn:ietf:params:push:sub", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. 
I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "(this document) The Web Push WG (webpush@ietf.org) urn:ietf:params:push:sub"} {"id": "q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf", "old_text": "(this document) Martin Thomson (martin.thomson@gmail) or the Web Push WG (webpush@ietf.org) ", "comments": "This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here. 
Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service.", "new_text": "(this document) The Web Push WG (webpush@ietf.org) "} {"id": "q-en-webpush-protocol-c778fad8586884b2cb602db41e115822e4ba41329f31bc64e83f7435f47bed8f", "old_text": "monitor for new push messages. Requesting the delivery of events is particularly important for the Web Push API. The subscription, management and monitoring functions are currently fulfilled by proprietary protocols; these are adequate, but do not offer any of the advantages that standardization affords.", "comments": "Pretty good. I think that we should at least try to put some MUST-strength language in regarding confidentiality, etc... That makes writing the rest of the security considerations that much easier (you don't have to worry that someone might have ignored your advice and account for it).\nLooks good. Further refinements can be done later :)\nThere is an adequate explanation of the related W3C Push API in the Introduction. Additional references about specific API behavior in Section 9.3 and 9.4 can be removed.\nBesides removing extraneous API references, informative references should be added for the separate drafts for optional content encoding and authentication.\nMoving to -05. Waiting for a decision on the VAPID call for adoption.", "new_text": "monitor for new push messages. Requesting the delivery of events is particularly important for the W3C Push API. The subscription, management and monitoring functions are currently fulfilled by proprietary protocols; these are adequate, but do not offer any of the advantages that standardization affords."} {"id": "q-en-webpush-protocol-c778fad8586884b2cb602db41e115822e4ba41329f31bc64e83f7435f47bed8f", "old_text": "scheme. This provides confidentiality and integrity protection for subscriptions and push messages from external parties. 8.1. The protection afforded by TLS does not protect content from the push service. Without additional safeguards, a push service is able to see and modify the content of the messages. Applications are able to provide additional confidentiality, integrity or authentication mechanisms within the push message itself. The application server sending the push message and the application on the user agent that receives it are frequently just different instances of the same application, so no standardized protocol is needed to establish a proper security context. The process of providing the application server with subscription information provides a convenient medium for key agreement. 
The Web Push API codifies this practice by requiring that each push subscription created by the browser be bound to a browser generated encryption key. Pushed messages are authenticated and decrypted by the browser before delivery to applications. This scheme ensures that the push service is unable to examine the contents of push messages. The public key for a subscription ensures that applications using that subscription can identify messages from unknown sources and discard them. This depends on the public key only being disclosed to entities that are authorized to send messages on the channel. The push service does not require access to this public key. The Topic header field exposes information that allows more granular correlation of push messages on the same subject. This might be used", "comments": "Pretty good. I think that we should at least try to put some MUST-strength language in regarding confidentiality, etc... That makes writing the rest of the security considerations that much easier (you don't have to worry that someone might have ignored your advice and account for it).\nLooks good. Further refinements can be done later :)\nThere is an adequate explanation of the related W3C Push API in the Introduction. Additional references about specific API behavior in Section 9.3 and 9.4 can be removed.\nBesides removing extraneous API references, informative references should be added for the separate drafts for optional content encoding and authentication.\nMoving to -05. Waiting for a decision on the VAPID call for adoption.", "new_text": "scheme. This provides confidentiality and integrity protection for subscriptions and push messages from external parties. Applications using this protocol MUST use mechanisms that provide confidentiality, integrity and data origin authentication. The application server sending the push message and the application on the user agent that receives it are frequently just different instances of the same application, so no standardized protocol is needed to establish a proper security context. The distribution of subscription information from the user agent to its application server also offers a convenient medium for key agreement. 8.1. The protection afforded by TLS does not protect content from the push service. Without additional safeguards, a push service can inspect and modify the message content. For its requirements, the API has adopted I-D.ietf-webpush-encryption to secure the content of messages from the push service. Other scenarios can be addressed by similar policies. The Topic header field exposes information that allows more granular correlation of push messages on the same subject. This might be used"} {"id": "q-en-webpush-protocol-c778fad8586884b2cb602db41e115822e4ba41329f31bc64e83f7435f47bed8f", "old_text": "8.4. Discarding unwanted messages at the user agent based on message authentication doesn't protect against a denial of service attack on the user agent. Even a relatively small volume of push messages can cause battery-powered devices to exhaust power reserves. An application can limit where valid push messages can originate by limiting the distribution of push URIs to authorized entities. Ensuring that push URIs are hard to guess ensures that only application servers that have been given a push URI can use it. A malicious application with a valid push URI could use the greater resources of a push service to mount a denial of service attack on a", "comments": "Pretty good. 
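A rough sketch (not from the draft) of the kind of per-subscription P-256 key pair referred to above, using Web Crypto; in browsers this is handled by the Push API itself rather than by page script.

```typescript
// Illustrative sketch only: a per-subscription ECDH key pair whose public half
// is shared with the application server for end-to-end message encryption.
async function createSubscriptionKeys() {
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" },
    false,               // private key is not extractable
    ["deriveBits"]
  );
  // The uncompressed public key (65 bytes) is what an application server
  // needs in order to encrypt push message content for this subscription.
  const rawPublic = await crypto.subtle.exportKey("raw", keyPair.publicKey);
  return { keyPair, publicKey: new Uint8Array(rawPublic) };
}
```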
I think that we should at least try to put some MUST-strength language in regarding confidentiality, etc... That makes writing the rest of the security considerations that much easier (you don't have to worry that someone might have ignored your advice and account for it).\nLooks good. Further refinements can be done later :)\nThere is an adequate explanation of the related W3C Push API in the Introduction. Additional references about specific API behavior in Section 9.3 and 9.4 can be removed.\nBesides removing extraneous API references, informative references should be added for the separate drafts for optional content encoding and authentication.\nMoving to -05. Waiting for a decision on the VAPID call for adoption.", "new_text": "8.4. A user agent can control where valid push messages originate by limiting the distribution of push URIs to authorized application servers. Ensuring that push URIs are hard to guess ensures that only application servers that have received a push URI can use it. Push messages that are not successfully authenticated by the user agent will not be delivered, but this can present a denial of service risk. Even a relatively small volume of push messages can cause battery-powered devices to exhaust power reserves. To address this case, the API has adopted I-D.ietf-webpush-vapid, which allows a user agent to restrict a subscription to a specific application server. The push service can then identity and reject unwanted messages without contacting the user agent. A malicious application with a valid push URI could use the greater resources of a push service to mount a denial of service attack on a"} {"id": "q-en-webpush-protocol-c778fad8586884b2cb602db41e115822e4ba41329f31bc64e83f7435f47bed8f", "old_text": "A push service or user agent MAY also delete that receive too many push messages. End-to-end confidentiality mechanisms, such as those in API, prevent an entity with a valid push message subscription URI from learning the contents of push messages. Push messages that are not successfully authenticated will not be delivered by the API, but this can present a denial of service risk. Conversely, a push service is also able to deny service to user agents. Intentional failure to deliver messages is difficult to distinguish from faults, which might occur due to transient network errors, interruptions in user agent availability, or genuine service outages. 8.5. Server request logs can reveal subscription-related URIs. Acquiring a push message subscription URI enables the receipt of messages or deletion of the subscription. Acquiring a push URI permits the sending of push messages. Logging could also reveal relationships between different subscription-related URIs for the same user agent. Encrypted message contents are not revealed to the push service. Limitations on log retention and strong access control mechanisms can ensure that URIs are not learned by unauthorized entities. 9.", "comments": "Pretty good. I think that we should at least try to put some MUST-strength language in regarding confidentiality, etc... That makes writing the rest of the security considerations that much easier (you don't have to worry that someone might have ignored your advice and account for it).\nLooks good. Further refinements can be done later :)\nThere is an adequate explanation of the related W3C Push API in the Introduction. 
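For illustration (not part of the draft text), a restricted subscription as exposed by the W3C Push API looks roughly like this; the service worker registration and the application server's public key are assumed to be available to the page.

```typescript
// Illustrative sketch only: restricting a subscription to one application
// server by supplying that server's public key at subscription time.
async function subscribeRestricted(
  registration: ServiceWorkerRegistration,
  applicationServerKey: Uint8Array // the server's P-256 public key (65 bytes, uncompressed)
): Promise<PushSubscription> {
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    // The push service can use this key to reject push messages that are not
    // signed by the matching private key.
    applicationServerKey,
  });
}
```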
Additional references about specific API behavior in Section 9.3 and 9.4 can be removed.\nBesides removing extraneous API references, informative references should be added for the separate drafts for optional content encoding and authentication.\nMoving to -05. Waiting for a decision on the VAPID call for adoption.", "new_text": "A push service or user agent MAY also delete that receive too many push messages. A push service is also able to deny service to user agents. Intentional failure to deliver messages is difficult to distinguish from faults, which might occur due to transient network errors, interruptions in user agent availability, or genuine service outages. 8.5. Server request logs can reveal subscription-related URIs or relationships between subscription-related URIs for the same user agent. Limitations on log retention and strong access control mechanisms can ensure that URIs are not revealed to unauthorized entities. 9."} {"id": "q-en-webpush-vapid-fac156bc960b05507a1c207f1f0da8cb4c422283a281a124a637be3e88b6996e", "old_text": "2. Application servers that wish to self-identity generate and maintain a signing key pair. This key pair MUST be usable with elliptic curve digital signature (ECDSA) over the P-256 curve FIPS186. Use of this key when sending push messages establishes an identity for the", "comments": "Found some minor stuff in the draft, and had two questions (below). If I should send those to the list instead, let me know :) Does an application server have to have a unique keypair? E.g lets say I have multiple application servers sitting behind a load balancer, could I just share the keys used for vapid between them? In 5.2: Why is a MAY? If I, as a UA, create a restricted subscription my assumption would be that that is actually checked, right? If the Push Service is not planning to check it anyway, wouldn't it make sense to tell the UA directly by responding w/ an error at restricted subscription creation time?\nThe concept of an application server does not define the quantity\u2014 it's fine for multiple servers to share a single key. Martin added text allowing additional claims () which further clarifies that it's fine to include more specific information, for example which server created the message, which is useful if the push service has means for dealing with this. (E.g. QoS or analytics.) I don't recall why that's a MAY. We certainly shouldn't forward unauthorized messages for restricted subscriptions to the user. /cc NAME The changes themselves look good, thanks! :)\nThanks for the explanation NAME Should I open an issue regarding the ?", "new_text": "2. Application servers that wish to self-identify generate and maintain a signing key pair. This key pair MUST be usable with elliptic curve digital signature (ECDSA) over the P-256 curve FIPS186. Use of this key when sending push messages establishes an identity for the"} {"id": "q-en-webpush-vapid-fac156bc960b05507a1c207f1f0da8cb4c422283a281a124a637be3e88b6996e", "old_text": "2.1. If the application server wishes to provide contact details it MAY include an \"sub\" (Subject) claim in the JWT. The \"sub\" claim SHOULD include a contact URI for the application server as either a \"mailto:\" (email) RFC6068 or an \"https:\" RFC2818 URI.", "comments": "Found some minor stuff in the draft, and had two questions (below). If I should send those to the list instead, let me know :) Does an application server have to have a unique keypair? 
E.g lets say I have multiple application servers sitting behind a load balancer, could I just share the keys used for vapid between them? In 5.2: Why is a MAY? If I, as a UA, create a restricted subscription my assumption would be that that is actually checked, right? If the Push Service is not planning to check it anyway, wouldn't it make sense to tell the UA directly by responding w/ an error at restricted subscription creation time?\nThe concept of an application server does not define the quantity\u2014 it's fine for multiple servers to share a single key. Martin added text allowing additional claims () which further clarifies that it's fine to include more specific information, for example which server created the message, which is useful if the push service has means for dealing with this. (E.g. QoS or analytics.) I don't recall why that's a MAY. We certainly shouldn't forward unauthorized messages for restricted subscriptions to the user. /cc NAME The changes themselves look good, thanks! :)\nThanks for the explanation NAME Should I open an issue regarding the ?", "new_text": "2.1. If the application server wishes to provide contact details it MAY include a \"sub\" (Subject) claim in the JWT. The \"sub\" claim SHOULD include a contact URI for the application server as either a \"mailto:\" (email) RFC6068 or an \"https:\" RFC2818 URI."} {"id": "q-en-webpush-vapid-fac156bc960b05507a1c207f1f0da8cb4c422283a281a124a637be3e88b6996e", "old_text": "An application server requests the delivery of a push message as described in I-D.ietf-webpush-protocol. If the application server wishes to self-identify, it includes an Authorization header field with credentials that use the \"WebPush\" authentication scheme auth and a Crypto-Key header field that includes its public key key. Note that the header fields shown in ex-push don't include line wrapping. Extra whitespace is added to meet formatting constraints.", "comments": "Found some minor stuff in the draft, and had two questions (below). If I should send those to the list instead, let me know :) Does an application server have to have a unique keypair? E.g lets say I have multiple application servers sitting behind a load balancer, could I just share the keys used for vapid between them? In 5.2: Why is a MAY? If I, as a UA, create a restricted subscription my assumption would be that that is actually checked, right? If the Push Service is not planning to check it anyway, wouldn't it make sense to tell the UA directly by responding w/ an error at restricted subscription creation time?\nThe concept of an application server does not define the quantity\u2014 it's fine for multiple servers to share a single key. Martin added text allowing additional claims () which further clarifies that it's fine to include more specific information, for example which server created the message, which is useful if the push service has means for dealing with this. (E.g. QoS or analytics.) I don't recall why that's a MAY. We certainly shouldn't forward unauthorized messages for restricted subscriptions to the user. /cc NAME The changes themselves look good, thanks! :)\nThanks for the explanation NAME Should I open an issue regarding the ?", "new_text": "An application server requests the delivery of a push message as described in I-D.ietf-webpush-protocol. 
If the application server wishes to self-identify, it includes an Authorization header field with credentials that use the \"WebPush\" authentication scheme (auth) and a Crypto-Key header field that includes its public key (key). Note that the header fields shown in ex-push don't include line wrapping. Extra whitespace is added to meet formatting constraints."} {"id": "q-en-webrtc-http-ingest-protocol-c0945eedc2b8539e9c9cabeede23184a19c1c8eb4b2e7da43035ebd0601ae761", "old_text": "clients and media servers, WHIP imposes the following restrictions regarding WebRTC usage: Bothe the WHIP client and the media server SHALL use SDP bundle RFC8843. The SDP offer created by the WHIP client must include the bundle-only attribute in all m-lines as per RFC8843. Also, RTCP muxing SHALL be supported by both the WHIP client and the media server. When a WHIP client sends an SDP offer, it SHOULD insert an SDP \"setup\" attribute with an \"actpass\" attribute value, as defined in", "comments": "I have changed the by as it is the terminology that rfc 8443 uses. Also quoted the sdp attribute values. NAME wdyt?\nadded tentative text\nThe text says: \"Unlike [RFC5763] a WHIP client MAY use a setup attribute value of setup:active in the SDP offer, in which case the WHIP endpoint MUST use a setup attribute value of setup:passive in the SDP answer.\" First, check whether it would be useful to also reference RFC 8842. Second, I think it is a BAD idea to go against the \"MUST use setup:actpass\" text in RFC 5763. Since you specify that the answerer MUST use setup:passive, there is no reason why the offer couldn't use setup:actpass, following the standard.\nThe idea is to prevent the WHIP client to have to implement active and passive setup. The MUST for the server is only in case that the client uses the setup attribute, maybe some rewording helps\nYou can still use the MUST for the server, even if the client uses 'actpass'.\nYes, we could, but would it be worthy? If the client sends , server MUST send . If the client sends , then the client MUST support to receive either an or response from the server, so I don't think we should put any extra requirement on the server. If the media server wants to only implement they are allowed in any case.\nThe document specifies the behavior of the media server, so even if the standard allows both 'active' and 'passive' you can still specify that a WHIP compliant server MUST send 'passive'. It is better than having the client do something which is not standard compliant.\nok, I get your point. The idea would be then to ask the client to send on the offer, despite only supporting the part, and force the media server to reply , right? Not sure if I like it or not, will ping the mailing list for consensus.\nYes. Specify that a WHIP-compliant media server MUST reply 'passive'.\nIf a MUST has to be added on servers, I think it should be 'active', not passive. It's an unwritten convention in basically all WebRTC applications that who receives the offer will send 'active', and while a convention is not a rule, forcing a change in this pattern would break a lot of implementations out there that may want to support WHIP on the server side.\nWhich is exactly why I wanted the original language that the client could say setup:active to indicate that it only implements DTLS client. Which isn't as unlikely as you might think, especially on devices with a secure element providing the DTLS certificate. It makes no sense to have an offer of actpass and then not actually allow both replies. 
Either you believe in the O/A model or you don't. By the way all my 'server' devices send answers with setup:passive because they are servers so running a DTLS server makes perfect sense. Seems to work fine...\nAn implementation can of course send whatever it wants, including setup:active. There is nothing anyone can do about that :) But I don't think we should standardize such behavior. As far as the WebRTC and SDP O/A standards are concerned, the offerer can act as both DTLS client and server. If that is the wrong assumption, then I think it shall be discussed in the affected WGs, and based on those discussions potentially update RFC 5763, 8839 etc. I don't think a standards document shall discard MUSTs just because we don't want to support required functionality. But then it should not matter if the offer contains setup:actpass, should it? :)\nI fully agree with Tim. WHIP complicane with WebRTC means that WebRTC endpoints (or whatever name we give to them) can act like a WHIP client, but not all WHIP client needs to be fully compliant WebRTC endpoints. For the server side we are already reducing the requirements of a JSEP client as explained in the gateway draft: So, I would challenge that reducing the requirements on the WHIP client makes WHIP non-webrtc compliant. The idea behing this change is that there is no need for a whip client talking to a whip server to have to implement both active and passive setup. Negotiating actpass when only one mode is supported seems completelly wrong to me.\nWe've already found out that Tim always sends passive from servers, while Janus always sends active when it receives an offer: this means that whatever you force on the WHIP client will break one of the two servers. I think clients should send actpass as the RFC says.\nMandating clients to implement both active and passive is a no go for me. The goal has always been to reduce the burden on the clients because there was none available. Increasing the complexity of implementing them is not going to help.\nbtw, I think that Tim is referring that his servers respond as passive, while their clients sends active. So his (whip) clients would be able to talk with janus server, which is what we intended.\nNo, they wouldn't, because Janus expects actpass when it receives an offer, and will set the role of active for itself. Two actives won't get anywhere.\nWhen you say 'clients', what exactly are you referring to? The WebRTC library that you are using in the application?\nNo, I am referring to the WHIP clients\nImagine I have a smart camera acting as a WHIP source which can only do Active DTLS (because of a hardware limitation). Let's assume when it tries to talk to Janus it will fail because Janus will only do active. If the camera signals this in it's WHIP offer with setup:active then the O/A fails early with a sensible error message http status:402 If it sends a (mendacious) setup:actpass then the connection will silently fail after ice connects and a 16 second DTLS timeout. So sending setup:active in the offer is the right thing for the camera to do. The other option is to say that such hardware shouldn't be doing WHIP, which seems like a needless limitation on hardware encoder design.\nIncidentally I checked my (open source) Whipi client and it actually sends ! Which is why it works with Janus. 
(The target hardware (raspberry pi) does not have a secure element, so it is just a code size simplification not a hardware limitation - It still halves the complexity of the DTLS layer by only having to implement dtls server)\nBack to the High level point. The WebRTC spec was designed to be Peer-to-Peer - so it makes sense that both sides need equal capabilities since you never know who will initiate an offer - so mandating actpass in the offer is a good idea. WISH/WHIP is explicitly client-server so asymmetry is to be expected. I would argue that the original wording is wrong however - \"WHIP endpoint MUST use a setup attribute value of setup:passive in the SDP answer.\" Should read \"WHIP endpoint SHOULD use a setup attribute value of setup:passive in the SDP answer. Or if it does not support passive it MUST reply with an http 402 error status and an explanation error message.\"\nI think it shall be MUST. If the server cannot do that for whatever reason it will obviously have to return an error. We can for sure specify which error code to use. Also, I think it is better to say that the server MUST follow the procedures in RFC 4573. Regarding the WHIP client, a compromise would be to say that it SHOULD use actpass, according to 4573, unless it only implements the DTLS client role in which case it can use active. Because, we should not forbid a WHIP client from using the standard 4573 behavior if it is able to do so. In addition, there could be a note saying a few words about why a WHIP client might only implement the DTLS client role.\nHere is some new text suggestion (I have not created a PR yet) for Section 4.2. Note that I also modify the BUNDLE text. The SDP BUNDLE mechanism [RFC8843] MUST be used by both WHIP clients and media servers. Each m- line MUST be part of a single BUNDLE group. Hence, when a WHIP client sends an SDP offer, it MUST include a bundle-only attribute in each bundled m- line. In addition, per [8843] the WHIP client and media server will use RTP/RTCP multiplexing for all bundled media. The WHIP client and media server SHOULD include the rtcp-mux-only attribute in each bundled m- line. When a WHIP client sends an SDP offer, it SHOULD use the setup:actpass attribute, as defined in [RFC5763]. However, if the WHIP client only implements the DTLS client role, it MAY use the setup:active attribute. NOTE: [RFC5763] defines that the offerer must use the setup:actpass attribute. However, the WHIP client will always communicate with a media server that is expected to support the DTLS server role, in which case the client might choose to only implement support for the DTLS client role.\nI like that and I think it is a good compromise, creating the PR now\nI have split it in two PRs so we can review/merge them independently: BUNDLE: URL SETUP: URL The include the proposed changes by NAME with a bit of rewording to match the notation of the referenced RFCs, and include that the server SHOULD response with a 422 error if the setup:active is not supported.\nWhen reading this again, I think there still need to be some changes: both the WHIP endpoint and the media server need to support BUNDLE/multiplexing: the WHIP endpoint needs to support the BUNDLE SDP extension, while the media server needs to support the multiplexed media associated with the BUNDLE group.", "new_text": "clients and media servers, WHIP imposes the following restrictions regarding WebRTC usage: Both the the WHIP client and the WHIP endpoint SHALL use SDP bundle RFC8843. 
Each \"m=\" section MUST be part of a single BUNDLE group. Hence, when a WHIP client sends an SDP offer, it MUST include a \"bundle-only\" attribute in each bundled \"m=\" section. The WHIP client and the Media Server MUST support multiplexed media associated with the BUNDLE group as per RFC8843 section 9. In addition, per RFC8843 the WHIP client and Media Server will use RTP/RTCP multiplexing for all bundled media. The WHIP client and media server SHOULD include the \"rtcp-mux-only\" attribute in each bundled \"m=\" sections. When a WHIP client sends an SDP offer, it SHOULD insert an SDP \"setup\" attribute with an \"actpass\" attribute value, as defined in"} {"id": "q-en-webrtc-http-ingest-protocol-9b007690b830600c0559cfef6a4be4d7dda30792abcb38ca4f3c8e4104a6ebf9", "old_text": "server configuration on the responses to OPTIONS request sent to the WHIP endpoint URL before the POST request is sent. The generation of the TURN server credentials may require performing a request to an external provider, which can both add latency to the OPTION request processing and increase the processing required to", "comments": "Consider a client that does Since the ICE gathering process has started at the time of , it is too late at this point to call . Thus, a way is needed to find the set of ICE servers before the offer is POSTed to the server. In his mail of 12 July 2022, Sergio suggests a different procedure: This alternate procedure should work, but it is somewhat unusual; thus, the spec should describe this procedure in detail, and indicate that this is the procedure that the WHIP client must use if it wishes to use the server-provided ICE servers. What is more, with this procedure the posted offer does not contain any local candidates, which might imply that client-side Trickle ICE is a requirement, at least in some environments. Again, this should be discussed in the draft.\nThat is not true. If server is on a public IP address or behind a port forwarding NAT. The candidates sent by the server are enough to establish the ICE connection. Gathering all the ICE candidates on the local offer are only required if the client does not support trickle and the server is behind of a NAT that requires hole punching. I don't think we should explain how to use webrtc apis in the spec.\nSo WHIP is restricted to the case where the server is on a public IP, with no firewall? That would be fine with me, but if that's the case then IMHO it needs to be stated explicitly in the draft. What happens if the client is on a restrictive network that only lets outgoing TCP through, which is unfortunately still the case in many university networks? Are you assuming that the server supports passive TCP ICE? If so, should that be mentioned in the draft somewhere?\nThe turn server configuration via OPTIONS is not going to be removed. I can add a note like:", "new_text": "server configuration on the responses to OPTIONS request sent to the WHIP endpoint URL before the POST request is sent. NOTE: Depending on the ICE Agent implementation, the WHIP client may need to call the setConfiguration method before calling the setLocalDespcription method with the local SDP offer in order to avoid having to perform an ICE restart for applying the updated STUN/ TURN server configuration on the next ICE gathering phase. The generation of the TURN server credentials may require performing a request to an external provider, which can both add latency to the OPTION request processing and increase the processing required to"}