This is a never-finished Internet-Draft with my thoughts about the HTTP/2.0 protocol.
Network Working Group P. Kamp
Internet-Draft Den Andensidste Viking
Intended status: Informational April 2, 2012
Expires: October 4, 2012
An architectural vision for HTTP/2.0
draft-phk-http-architecture-httpbis-01
Abstract
The HTTP protocol is undoubtedly one of the biggest successes of the
Internet family of protocols, and the world has changed a lot under
the architectural assumptions it was built on. Before rushing into
standardization of HTTP/2.0 based on past experience and grievances
with HTTP/1.1, we should examine what the architecture underlying the
next 10-20 years of web-browsing must do for us.
Status of this Memo
This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC 2026.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on October 4, 2012.
Copyright Notice
Copyright (C) The Internet Society (2012). All Rights Reserved.
Kamp Expires October 4, 2012 [Page 1]
Internet-Draft An architectural vision for HTTP/2.0 April 2012
Table of Contents
1. Introduction
2. When I'm sixty-four: What are we looking at
3. New Kid on the Block: HTTP routers
3.1. Taking care of business: What a HTTP router does
3.2. I got 99 problems: Trouble for HTTP routers
3.3. When I wish upon a star: What we can do for HTTP routers
4. Everybody's Got Something to Hide Except Me and My Monkey
5. Return to sender: The client-server model
6. Route sixty-six: Getting there
7. Little Deuce Coupe: Smarter rides than TCP
8. Counting the Cattle: A more efficient protocol
9. Under Wraps: What to protect
9.1. The envelope
9.2. The metadata
9.3. The body
10. Headbanging The Piano: Bringing it all together
10.1. Container level
10.2. Message Level
10.3. Examples
11. Security Considerations
Author's Address
Intellectual Property and Copyright Statements
1. Introduction
HTTP is not what it used to be, and although the protocol has scaled
remarkably well, the current level of performance seems insufficient
for even the very short term growth of the internet.
Efforts have been made to find optimizations and improvements, but
usually in a frame of mind of incremental improvement, rather than
architectural review.
This document is a loose architectural sketch of how HTTP/2.0 could
look, more or less based on a single person's random sketchings and
ideas.
The intention is not that the bits and pieces of this document should
be polished and ratified as a protocol, but to present some ideas and
observations for discussion.
2. When I'm sixty-four: What are we looking at
Jim Gettys long ago warned the X11 project that "The only thing
worse than generalizing from one example is generalizing from no
examples at all."  Far too many standardization efforts neglect this
sage wisdom.
But in one area we will have to extrapolate from just one example:
What is the probable lifetime of HTTP/2.0 going to be?
HTTP/1.1 was first standardized in 1997 and there is no credible
reason to believe we will get rid of it for at least another five
years, so assuming and aiming for a 20 year lifetime of HTTP/2.0 is
reasonable.
What computing will look like in 20 years time is anyone's guess, but
we do have some clues to the near term developments: Computers won't
run faster.  The current solid-state and process technologies have
hit a clock speed barrier around 3-5 GHz which shows no signs of
being radically breached in the near term.
What we see instead of higher clock speeds is increasingly imperfect
parallelism (NUMA), increasingly complex instruction sets and
attempts to offload processing to dedicated hardware.  For instance,
most network cards already have support for calculating TCP/IP
checksums.
With respect to bandwidth, there does not seem to be any similar
restrictions, although the technological problems of extending beyond
10 and 40 Gbit/s are not trivial. I think it is fully reasonable to
expect transmission media with 1 Tbit/s capacity in the next 20
years.
I think that, given HTTP's central role in mass communication, being
able to deal with HTTP/2.0 at that speed, either in hardware or
software, is a goal in its own right.
3. New Kid on the Block: HTTP routers
Probably the most radical change in the HTTP ecosystem is the
appearance of a new role in addition to the client, proxy & server
triad.
For reasons of scalability and redundancy, most larger websites today
deploy light-weight proxies with load-balancing and load-directing
facilities.
In terms of RFC standardization, these boxes are clearly proxies, and
they have been treated as such until now.
But these boxes occupy a very special role, because they sit where
the traffic is most densely concentrated, and they usually take
almost no notice of the semantic meaning of the HTTP messages they
handle.
I propose we introduce a new role for these boxes, and name them
"HTTP routers", since their role is much closer to that of a packet
router, than of a webserver.
3.1. Taking care of business: What a HTTP router does
In a typical large-ish web-site, the HTTP router is responsible for
directing incoming HTTP requests to HTTP servers based on simple
criteria, and for directing the response from the HTTP server back to
the client.
Because of this central position in the HTTP message path, the HTTP
router is where the traffic concentration is highest, and where both
legitimate and malicious traffic must be handled somehow.
The HTTP router seldom terminates the HTTP path in any meaningful
way; it simply picks a message off one TCP connection and places it
on another, pretty much like a packet router will shuffle packets
between its interfaces.
A very common routing criterion is "does the HTTP server work?"
With two or more equally capable HTTP servers to choose from, the
HTTP router will route requests only to those HTTP servers which
seem to actually work.
This simple routing policy allows HTTP servers to crash, to be taken
down for maintenance or to be moved, without taking the entire
website offline as a result.
Other more involved criteria might be to send requests for images and
other static elements to one set of servers, and requests relating to
dynamic content to another set of servers, based on pattern-matching
the URI.
3.2. I got 99 problems: Trouble for HTTP routers
HTTP routers run into some deficiencies of the HTTP/1.1 definition
which seriously cramp their style.  Some of these problems are at a
functional level and some are purely about performance.
An example of a functional problem is the lack of a session concept
in HTTP.
Consider that a very common routing criterion is session-stability.
The client's first request is routed to a web-server using some
random criterion, but subsequent requests from that same client
should be routed to the same backend, even if that criterion changes,
because the client and/or backend maintains state, for instance the
contents of a shopping basket, of relevance to the "session".
Most often this session concept is simulated with Cookies, but
amongst other side effects, this makes objects which would otherwise
be perfectly cacheable impossible to cache in shared proxies.
In the performance class of problems, the most prominent is the
indeterminacy of HTTP/1.1 message length: Only once we encounter the
end do we know where to find it.  Implementation-wise, things would
be so much easier if the length were sent up front, so that memory
allocation and socket-API parameters can be chosen intelligently.
3.3. When I wish upon a star: What we can do for HTTP routers
If a HTTP/2.0 router should be able to deal with 1 Tbit/s of traffic,
we need to avoid complicated encodings or transformations for the
fields it must examine.
We should probably adopt the clear distinction other routable
protocols make between the "envelope" and "the contents", designate a
small subset of the HTTP message attributes as part of the envelope,
and make them convenient for HTTP routers to work with.
Given how nearly universal the "session" concept is on the Internet,
we should add it to the HTTP/2.0 standard, and make it available for
HTTP routers to use as a "flow-label" for routing.
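Such a flow-label routing policy can be sketched in a few lines of C.
The function name, the modulo policy, and the treatment of session
zero are illustrative assumptions of this sketch, not anything the
draft proposes:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of session-stable routing: a non-zero session
 * number taken from the envelope acts as a flow-label, pinning all
 * of a session's requests to the same backend.
 */
static unsigned
pick_backend(uint64_t session, unsigned nbackend)
{

	if (session == 0)
		return (0);	/* no session yet: any policy goes here */
	return ((unsigned)(session % nbackend));
}
```

The point is merely that the router can make this decision from the
envelope alone, without parsing any HTTP headers.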
At the byte-serialization level, we should also try, to the extent
possible, to use prefix coding to make available information about
what will happen next.  For instance, HTTP messages should state
clearly, with one single bit, whether they have a body/entity coming
or not, so that the HTTP router does not have to investigate three
different HTTP headers to make this determination.
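The single-bit test above could look like this in C; the flag name
and the bit position are assumptions for illustration, since the
draft assigns no actual values:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical single-bit body indication in an envelope flags
 * field; one mask test replaces inspecting Content-Length,
 * Transfer-Encoding and the method/status semantics of HTTP/1.1.
 */
#define HTTP20_F_HAS_BODY	0x01	/* assumed bit position */

static int
has_body(uint8_t flags)
{

	return ((flags & HTTP20_F_HAS_BODY) != 0);
}
```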
4. Everybody's Got Something to Hide Except Me and My Monkey
Cryptographic services are a minefield, but we have to navigate it.
On all sides of this issue are strong desires to try to advance
political and rights agendas by technical means, and as we have seen
countless times before, getting tangled up in OSI layer 8 can doom
any standard.
In the FreeBSD project the admonition "We deliver tools, not
policies" has been used to capture this attitude to political and
emotional subjects, and I would recommend the HTTPbis WG adopts it as
well.
One of the most controversial subjects is proxies which interfere
with free speech, either by preventing it, modifying it, or by
deanonymizing it.
We cannot escape, however, the fact that there are legitimate
situations for proxies to interfere with traffic, from preventing
malware infection and leakage of sensitive documents, over parental
controls on minors' web-browsing, to legal requirements to record all
communications.
Similarly, while most users have a right to privacy of communication,
some users specifically do not, for instance inmates in jails.
Implementing cryptographic policies is hard: the inmate with privacy
restrictions can trivially implement a covert channel by surfing a
webpage in a particular pattern, and it has been shown that the
interval between packets can leak information about a password typed
over a protected connection.
Neither HTTP/2.0 nor any other protocol can do anything about
situations like that, such is the nature of secrecy.
What we can do with HTTP/2.0 is support legitimate cryptographic
functionalities and be fair about it, by providing precise and
correct information about the actual cryptographic situation to all
parties in a HTTP exchange, so that both clients and servers know if
they are subject to meddling proxies or if they have end-to-end
privacy and integrity.
In HTTP/1.1 the choice to do cryptographic protection is almost
universally for the server to make, typically in the form of a
mandatory redirect from unprotected to protected pages.
It might be worth considering if clients should have a way of
requesting cryptographic protection in cases where the server does
not demand it by default, subject, obviously, to the server's
willingness to engage.
One particular aspect of cryptography is efficiency, and it would be
very desirable if the upgrade from unprotected to crypto-protected
communications did not require a new TCP connection to be
established.
It would also be a performance advantage if protected and unprotected
messages could share the same TCP connection, for instance on the
multiplexed path between a proxy and a server.
5. Return to sender: The client-server model
Very fundamental to the HTTP protocol is the strict client-server
model of interaction, but increasingly interactive web-applications
have indicated a need for "server-push" or "reverse transactions".
Breaking the conceptually simple client-server model should not be
done lightly; doing so raises a lot of hard questions about command
and control of the client computer.
At the same time it would be foolish to rule out extensions to the
client-server model in the next 20 years, so HTTP/2.0 should not be
designed to rely on the "ping-pong" aspect of current HTTP traffic,
even if that is all it is going to allow initially.
Sticking to the client-server model does not preclude supporting
multiplexing and pipelining of requests; in fact HTTP/2.0 should most
certainly do so, simply for reasons of speed and resource
conservation.
6. Route sixty-six: Getting there
Today the almost universal carrier of HTTP is the TCP protocol, and
the entrenched infrastructure of firewalls and gateways on the
internet makes it foolish to even imagine that HTTP/2.0 could be
deployed rapidly if it required new TCP ports to be opened.
This means that coexistence with HTTP/1.1 and possibly HTTPS on a
single TCP port is an inescapable requirement for HTTP/2.0.
Initially it is to be expected that most servers will only offer
HTTP/1.1 and therefore clients will send a HTTP/1.1 request with some
kind of hint that they are willing to do HTTP/2.0 also. If the
server bites, the protocol is upgraded and HTTP/2.0 happiness
spreads.
At some point, hopefully, a critical mass of HTTP/2.0 servers is
reached, and it makes sense for clients to attempt to go directly on
HTTP/2.0 right away, and suffer the cost of a retry if the server
does not support HTTP/2.0.
This situation can be significantly optimized, if the HTTP/2.0
protocol allows a client to send an optional "magic string" on a
newly opened connection to detect non HTTP/2.0 servers.
This string should be designed to be as short as possible, while
still producing an error message from HTTP/1.1 servers which does not
result in the TCP connection being closed, and while being clearly
distinct from any legal HTTP/2.0 message.
One possible way would be to "wrap" the HTTP/2.0 request in an almost
legitimate HTTP/1.1 operation:
"X / HTTP/1.1" CRNL
"Content-Length: 48" CRNL
CRNL
[48 bytes of HTTP/2.0 request]
A HTTP/1.1 server will return an error, which can be recognized
because it starts with "HTTP...", whereas a HTTP/2.0 server would
send a HTTP/2.0 response to the request, ignoring the HTTP/1.1
"bogo-header".
For something like this to work, it is imperative that the HTTP/2.0
serialization of a response can never start with the byte value 'H',
and it may be advisable to also "blacklist" certain other characters,
CR and NL for instance.
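The client-side check after sending the probe then reduces to looking
at the first byte of the reply; a minimal sketch, assuming the 'H'
restriction above holds:

```c
#include <assert.h>

/*
 * A reply beginning with 'H' is an HTTP/1.1 error response
 * ("HTTP/1.1 ..."); anything else must be an HTTP/2.0 response,
 * since a legal HTTP/2.0 serialization may never start with 'H'.
 */
static int
reply_is_http11(const unsigned char *reply)
{

	return (reply[0] == 'H');
}
```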
7. Little Deuce Coupe: Smarter rides than TCP
A surprisingly large fraction of today's webtraffic could fit into
UDP packets, one for the request and one for the response.
This idea is not without problems, however: there is a significant
potential for DoS-amplification in any UDP protocol where the
response is larger than the request.
But there are many closed domains where such risks may have no
relevance, and the benefit of using UDP might be very high in terms
of time and cost.
I will not advocate that HTTP/2.0 standardize HTTP over UDP, but
neither should it be prevented.
One of the biggest factors in the IP protocol's success is that the
packets were defined independently of the underlying transmission
media, allowing IP packets to spread from 56 kbit/s leased lines to
Norwegian carrier pigeons without any need to reopen RFC 791.
When finalizing the serialization of HTTP/2.0 messages onto
bytestreams, we should focus on such "portability" rather than assume
that HTTP/2.0 is a TCP-only protocol.
8. Counting the Cattle: A more efficient protocol
HTTP/1.1 was designed in an era where things were very different,
and as much help as it was to be able to TELNET to pretty much any
server and drive the protocol by hand, the number of bytes and CPU
cycles wasted in HTTP/1.1 is simply monumental.
There are many ways to reduce the size of HTTP messages and they
should all be considered in turn.
At the most fundamental level is not sending stuff to begin with.
For instance, a credible case can be made that the "Date:" header is
surplus to requirements in a majority of circumstances.
The next level is to send less; for instance, sending the allowable
cache time of an object as a number of seconds ("300") takes up a lot
less space than sending the time it expires ("Mon, 02 Apr 2012
19:10:42 GMT").
Next up is sending things less often, for instance if HTTP/2.0 has a
functional session-mechanism, it would be enough for the client to
send "User-Agent" and most cookies to the server only on the first
HTTP message on the connection, for all subsequent messages a "ditto"
would suffice.
And finally we get to sending data more efficiently.  Many cookies
have cryptographic content which is ASCII-encoded using base64 or
hex encoding.  HTTP never has and never will run over transmission
paths which are not 8-bit clean, and allowing cookies to have binary
content would reduce their size by 25 to 50%.
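The 25%/50% figures follow directly from the encodings' expansion
factors, as this small C illustration shows (padding is included in
the base64 size):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Encoded sizes of n raw bytes: base64 emits 4 characters per 3
 * bytes (rounded up, with padding), hex emits 2 per byte.  Carrying
 * the raw bytes saves 25% over base64 and 50% over hex.
 */
static size_t
base64_len(size_t n)
{

	return ((n + 2) / 3 * 4);
}

static size_t
hex_len(size_t n)
{

	return (n * 2);
}
```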
If properly designed, a move from ASCII to a binary protocol may save
further bytes.
Finally, it is possible to resort to general purpose compression, but
a well designed protocol encoding may give the good old LZW a run for
its money, both in terms of simplicity and memory requirements.
But bytes are not the only cost we care about, and all bytes are not
created equal.  The byte that causes a HTTP message to spill into the
next packet is much more expensive than one which merely adds a byte
to the current packet.  Any attempt to design a serialization needs
to seriously study the tradeoff between bytes scavenged and bytes
well spent to improve protocol implementation.
One example would be to prefix a HTTP message with the number of
bytes in it.  In HTTP/1.1 one reads until a [CR]NL[CR]NL sequence
appears, but unless the client is a keyboard and TELNET, it knows
full well up front, or could easily find out up front, how many bytes
it is going to send to the server.
Announcing what is to follow in the serialization is a significant
optimization for the receiver, which can select and prepare
sufficient storage, and minimize the number of context-switches,
while still getting optimal memory layout.
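A minimal sketch of the receiver's side of such a length-prefixed
serialization; the 32-bit big-endian prefix is an assumption of this
sketch, not a format the draft proposes:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/*
 * With the length up front, the receiver makes one exact-sized
 * allocation and one copy; no scanning for [CR]NL[CR]NL, no
 * reallocation as the message grows.
 */
static uint8_t *
read_message(const uint8_t *wire, uint32_t *lenp)
{
	uint32_t len;
	uint8_t *msg;

	len = (uint32_t)wire[0] << 24 | (uint32_t)wire[1] << 16 |
	    (uint32_t)wire[2] << 8 | wire[3];
	msg = malloc(len);		/* storage selected intelligently */
	if (msg != NULL)
		memcpy(msg, wire + 4, len);
	*lenp = len;
	return (msg);
}
```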
9. Under Wraps: What to protect
The new role of HTTP routers provides a compelling reason to
subdivide a HTTP/2.0 message further than the two-and-a-bit parts we
have operated with in HTTP/1.1.
I would split it into three parts:
9.1. The envelope
The envelope is the information necessary to route the HTTP message
to the right place, and I would propose that it consist of the
following parts of a HTTP message:
Request:
URI, less the query part.
Host: header
A server assigned session number (or zero)
The length of the metadata.
The length of the body (or zero)
Response:
Status (200/305/502 etc)
Session number (or zero)
The length of the metadata.
The length of the body (or zero)
I will go a step further and claim that we can live without
cryptographic protection of this envelope.
There are trivial workarounds to obfuscate all of these fields for
purposes of privacy, and their integrity can be ensured by signing
them in the protected part of the message.
Taking the envelope out of the cryptographically protected part of
messages, means that HTTP routers can route protected traffic,
without terminating the TLS sessions.
The envelope should be byteserialized with an eye to maximum
processing efficiency in HTTP routers, possibly at the expense of
some bytes which could theoretically be saved.  All fields should
have length prefixes, and the fixed-size fields should be clustered
at the front.
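A hypothetical in-memory form of a request envelope following that
rule: fixed-size fields clustered at the front, variable fields
carried behind them as length-prefixed strings.  The field names and
widths are illustrative assumptions only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative request envelope; widths are assumptions. */
struct http20_req_envelope {
	uint64_t	session;	/* server assigned, or zero */
	uint32_t	meta_len;	/* length of the metadata */
	uint32_t	body_len;	/* length of the body, or zero */
	uint16_t	host_len;	/* prefix for the Host field */
	uint16_t	uri_len;	/* prefix: URI, less query part */
	/* host_len bytes of Host, then uri_len bytes of URI, follow */
};
```

A router can read the fixed front of this structure and route on it
without touching the variable tail at all.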
9.2. The metadata
Metadata are the bits of the HTTP message which are necessary for
correct semantic interpretation of the message.  This is pretty much
everything else we have in the HTTP/1.1 message which is not related
to transport or connection management.
HTTP routers will not need to examine this part of the message; if
they do, they are not HTTP routers but HTTP proxies.
The metadata should be byteserialized to minimize both size and
processing overhead.
9.3. The body
The body is an opaque sequence of bytes, but it may be relevant to do
transport-level compression on it, even if it already has close to
optimal entropy.
One example where this could make sense is a non-caching proxy
repeatedly requesting an image file on behalf of different clients.
10. Headbanging The Piano: Bringing it all together.
Bringing all these strands together, here is my strawman proposal for
what a HTTP/2.0 protocol could look like.
10.1. Container level
At the bottommost level we have a layer which is responsible for
efficient transmission, multiplexing, and negotiation of connection
parameters.
struct container {
        uint8_t         type;
        uint8_t         flags;
        uint16_t        channel;
        uint32_t        length;
        uint8_t         message[];      /* "length" bytes follow */
};
The type field tells what kind of content is being sent.  Being the
first field, type can trivially be subject to the 'H'/CR/NL
restriction.
Two types should be allocated for tunneling HTTP/1.1 requests and
responses transparently.  If container-level compression is
negotiated/available, the HTTP/1.1 messages can be compressed,
cookies and all.
The exact allocation policy for further types is up for grabs. It
may make sense to encode the request (GET/PUT/POST) at this level, to
get better utilization of this field.
The flags can indicate if this particular container is subject to TLS
and/or if it has been compressed at the container level. The exact
parameters of the TLS or compression are subject to negotiation.
It may make sense to define a "final" flag bit, to indicate the last
chunk of a progressively delivered HTTP object body, similar to the
"0 length chunk" used for chunked encoding in HTTP/1.1.
Channel indicates which logical channel the container belongs to. A
default maximum number of available channels is subject to
negotiation. A client can use the available channels to send
multiple requests at one time, a proxy could dedicate a channel to
each client to maximize compression efficiency of metadata blocks.
Channel zero is "magic" and is used for negotiation of parameters and
for the TLS handshake if protection is called for.
The length field is big enough to transfer well sized chunks of data,
but still of a fixed size, which will make it easier to implement
HTTP routing at high speeds, in hardware or software.
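Parsing the fixed 8-byte container header described above could look
like this in C.  Network byte order for the multi-byte fields is an
assumption of this sketch; the draft does not fix an endianness:

```c
#include <assert.h>
#include <stdint.h>

/* Parsed form of the fixed 8-byte container header. */
struct container_hdr {
	uint8_t		type;
	uint8_t		flags;
	uint16_t	channel;
	uint32_t	length;
};

static struct container_hdr
parse_hdr(const uint8_t *buf)
{
	struct container_hdr h;

	h.type = buf[0];
	h.flags = buf[1];
	h.channel = (uint16_t)(buf[2] << 8 | buf[3]);
	h.length = (uint32_t)buf[4] << 24 | (uint32_t)buf[5] << 16 |
	    (uint32_t)buf[6] << 8 | buf[7];
	return (h);
}
```

Because the header is fixed-size and prefix-coded, this is exactly
the kind of parsing that is cheap to do in hardware as well.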
10.2. Message Level
The message level is for transferring the envelope and metadata of a
HTTP message in a container.
If the metadata is cryptographically protected it may make sense to
use one container for the envelope and one for the metadata, but in
the case where neither envelope nor metadata is protected, a single
container should be used.
The envelope fields should be serialized in a length-prefixed
efficient format; the metadata should follow it, in whatever highly
efficient serialization we find best.
10.3. Examples
The above structure is deliberately very flexible and therefore
hopefully powerful enough to absorb the next two decades of wacky
ideas.
Such general constructs can be very hard to fully grasp without some
examples, so here are some of the ones I thought about along the way.
"//" marks the start of comments.
The canonical HTTP/1.1 to HTTP/2.0 upgrade
C->S GET /foo HTTP/1.1 CRNL
Host: example.com CRNL
Upgrade: http20-gzip-tls CRNL // Client can do HTTP/2.0
// + gzip container compr.
// + TLS support
CRNL
S->C 101 Switching Protocols CRNL
CRNL
{type=HTTP20-Response,chan=1,len=A} [{envelope}{metadata}]
{type=HTTP20-Body,chan=1,len=B} [<html>Hello World...]
{type=CAN-DO,chan=0,len=D} [maxchan=6,gzip=0,tls=0]
// Server will service channels 1...6
// Server does not offer gzip c-compr.
// Server does not offer TLS service.
{type=HTTP20-Body,chan=1,flag=final,len=C} [...</html>]
C->S {type=HTTP20-Request,chan=2,len=E} [{envelope}{metadata}]
C->S {type=HTTP20-Request,chan=3,len=F} [{envelope}{metadata}]
S->C {type=HTTP20-Response,chan=2,len=G} [{envelope}{metadata}]
C->S {type=HTTP20-Request,chan=4,len=H} [{envelope}{metadata}]
S->C {type=HTTP20-Body,chan=2,flag=final,len=I} [JPEG image]
S->C {type=HTTP20-Response,chan=3,len=J} [{envelope}{metadata}]
...
The canonical HTTP/2.0 to HTTP/1.1 downgrade
C->S X / HTTP/1.1 CRNL
Content-Length: XX CRNL
CRNL
{type=HTTP20-Request,chan=1,len=A} [{envelope}{metadata}]
S->C HTTP/1.1 501 Not Implemented CRNL
Content-Length: XX CRNL
(grumble grumble grumble) CRNL
CRNL
<HTML> NL
<H2>Grumble grumble grumble</H2> NL
C->S GET /foobar.html CRNL
Host: fogey.example.com CRNL
Yadda: yadda, yadda, yadda CRNL
CRNL
S->C HTTP/1.1 200 Ok
Content-length: 12323
...
The HTTP/2.0 response to HTTP/1.1 Hedging
C->S X / HTTP/1.1 CRNL
Content-Length: XX CRNL
CRNL
{type=HTTP20-Request,chan=1,len=A} [{envelope}{metadata}]
S->C {type=HTTP20-Response,chan=1,len=B} [{envelope}{metadata}]
{type=HTTP20-Body,chan=1,flags=final,len=B} [{rickroll.wav}]
{type=HTTP20-Body,chan=1,flags=final,len=B} [{rickroll.wav}]
{type=CAN-DO,chan=0,len=D} [maxchan=10,gzip=1,tls=0]
...
11. Security Considerations
Several, read the text.
Author's Address
Poul-Henning Kamp
Den Andensidste Viking
Herluf Trollesvej 3
Slagelse DK-4200
Denmark
Full Copyright Statement
Copyright (C) The Internet Society (2012). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph are
included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.
The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any
intellectual property or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; neither does it represent that it
has made any effort to identify any such rights. Information on the
IETF's procedures with respect to rights in standards-track and
standards-related documentation can be found in BCP 11. Copies of
claims of rights made available for publication and any assurances of
licenses to be made available, or the result of an attempt made to
obtain a general license or permission for the use of such
proprietary rights by implementors or users of this specification can
be obtained from the IETF Secretariat.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights which may cover technology that may be required to practice
this standard. Please address the information to the IETF Executive
Director.
Acknowledgment
Funding for the RFC Editor function is currently provided by the
Internet Society.