
Dealing with the latency and chattiness of REST

Writing responsive web pages (games and other interactive experiences) should be easy.
Anyone who has tried to do this realizes that it's actually hard, because each layer in the stack gets in your way.
At work, we've built out all the different library support and back-end infrastructure needed to make really responsive web services, working around various caching and scalability problems.

http://engineering.imvu.com/2014/12/27/the-real-time-web-in-rest-services-at-imvu/

What the article doesn't say is that we also use the underlying message queue for non-HTTP data, for various games and experiences. But the article was already long enough :-)
enum Bool { True, False, FileNotFound };

Seriously, we are long overdue for a binary HTTP-like protocol with encryption and compression built in. Nobody actually uses a terminal to access web servers anymore.

When I think of all the things we have built on HTTP, it kind of reminds me of a YouTube video I saw where a guy is riding a donkey with turn signals and a car stereo attached.

HTTP is old and well suited for the web as of 1996. We have duct-taped new functionality onto HTTP to make it do new things, but it's still the same old and smelly donkey.

Seriously, we are long overdue [...] HTTP is old and well suited for the web as of 1996
That is true for pretty much all application-layer protocols in use, such as FTP, SMTP, POP, IMAP, SIP... even DNS is mind-bogglingly explicit and human-readable, although you will most likely never look at it in your life.

Then again, there have been some attempts at protocols that are more efficient and less human-readable (think SPDY), or that add security (like DNSCurve), but none has been truly successful in adoption as far as I can tell. For most people, most of the time, it probably doesn't make too much of a difference.

Stacking plain normal HTTP on top of TLS using readily available libraries for either one seems "good enough" for most people, so don't expect miracles.
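For what it's worth, that stacking really is only a few lines with stock libraries. A minimal sketch using Python's standard library (the host name and request text are placeholders):

import socket
import ssl

# Plain HTTP stacked on top of TLS: the HTTP layer is unchanged; the
# ssl module supplies the encryption underneath it.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(
            b"GET / HTTP/1.1\r\n"
            b"Host: example.com\r\n"
            b"Connection: close\r\n\r\n"
        )
        response = b""
        while True:
            chunk = tls_sock.recv(4096)
            if not chunk:
                break
            response += chunk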

Seriously, we are long overdue for a binary HTTP-like protocol with encryption and compression built in. Nobody actually uses a terminal to access web servers anymore.


HTTPS fulfills all of that.

HTTPS is encrypted.
HTTPS supports gzip (and other) encodings.
HTTPS supports binary payloads (if you want.)
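All three points in one request, sketched with Python's standard library (the host and path are made up for illustration):

import gzip
import http.client

# Encrypted transport (TLS), compressed encoding (gzip), binary body.
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/some/binary/resource",
             headers={"Accept-Encoding": "gzip"})
resp = conn.getresponse()
body = resp.read()                           # raw bytes, not text
if resp.getheader("Content-Encoding") == "gzip":
    body = gzip.decompress(body)             # undo the compression
conn.close()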

And, with HTTP2, we may start getting the ability to support multiple separate streams on top of the same connection (for out-of-order, prioritized, and overlapped requests). SPDY showed the way, and once HTTP2 is finalized, browsers and servers will swap out SPDY for HTTP2.
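To make the multiplexing idea concrete, here is a sketch in Python. This is not the actual SPDY or HTTP2 wire format, just the underlying concept: tag each frame with a stream ID so independent requests can be interleaved on one connection and reassembled.

import struct

def pack_frame(stream_id, payload):
    # 4-byte stream ID, 4-byte length, then the payload itself.
    return struct.pack(">II", stream_id, len(payload)) + payload

def unpack_frames(data):
    # Reassemble interleaved frames into per-stream byte strings.
    frames, offset = {}, 0
    while offset < len(data):
        stream_id, length = struct.unpack_from(">II", data, offset)
        offset += 8
        frames[stream_id] = frames.get(stream_id, b"") + data[offset:offset + length]
        offset += length
    return frames

# Two logical streams interleaved on the same connection:
wire = pack_frame(1, b"GET /a") + pack_frame(3, b"GET /b") + pack_frame(1, b" ...")
assert unpack_frames(wire) == {1: b"GET /a ...", 3: b"GET /b"}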
enum Bool { True, False, FileNotFound };


even DNS is mindboggingly explicit and human-readable

Not true, actually. DNS is a raw binary protocol.

However, the others were designed so that you could fire up a terminal, connect directly to the server, and type commands.


Seriously, we are long overdue for a binary HTTP-like protocol with encryption and compression built in. Nobody actually uses a terminal to access web servers anymore.

HTTPS fulfills all of that.

HTTPS is encrypted.
HTTPS supports gzip (and other) encodings.
HTTPS supports binary payloads (if you want.)

HTTPS is just HTTP with encryption.

You still end up with like 100-300 bytes of headers in ASCII text that mostly serve no purpose.

You can send binary data in HTTP content but you have to either base64 encode it or use MIME.

I would bet that if we took a sample of HTTP traffic, at least 5-10% of the throughput would be wasted on headers.
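As a rough sanity check on that estimate, here's a made-up but plausible browser request, measured in Python (the header values are assumptions, not captured traffic):

request = (
    "GET /api/player/42 HTTP/1.1\r\n"
    "Host: game.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:34.0) Gecko/20100101\r\n"
    "Accept: application/json\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Connection: keep-alive\r\n"
    "Cookie: session=abcdef0123456789\r\n"
    "\r\n"
)
print(len(request.encode("ascii")))  # roughly 240 bytes before any payload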


You can send binary data in HTTP content but you have to either base64 encode it or use MIME.

The data that you send is bytes. You don't have to encode anything or set up MIME multipart if the other end is expecting bytes. If you send bytes on the wire, you'll get bytes on the other end.

As for the header information, if you want reliable communication you must incur some overhead.
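A minimal sketch of that point using Python's standard library: the server below writes raw bytes straight into the response, with no base64 step anywhere (the port and payload are arbitrary):

from http.server import BaseHTTPRequestHandler, HTTPServer

class BinaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = bytes(range(256))  # arbitrary binary data, all 256 byte values
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)    # bytes out, exactly as given

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BinaryHandler).serve_forever()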

I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming

My Twitter: https://twitter.com/indieprogram

My Book: http://amzn.com/1305076532

HTTPS is just HTTP with encryption.


Agreed! Thus, it solves the "encryption" part.

You still end up with like 100-300 bytes of headers in ASCII text that mostly serve no purpose.


I can't think of a single header that's not used by our web stack. Except the Accept: header sent by Internet Explorer when you have Microsoft Office installed -- that's insanely obese.
With HTTP2, you will likely be able to not repeat headers that haven't changed.

You can send binary data in HTTP content but you have to either base64 encode it or use MIME.


If you say something like that in a public forum as if it were fact, it's usually a good idea to verify it against the specification first. It makes you look better when you catch yourself before posting falsehoods.

How do you think Flash movies, JPG images, or application/octet-stream data are served by an HTTP server?
enum Bool { True, False, FileNotFound };


You can send binary data in HTTP content but you have to either base64 encode it or use MIME.

If you say something like that in a public forum as if it were fact, it's usually a good idea to verify it against the specification first. It makes you look better when you catch yourself before posting falsehoods.

How do you think Flash movies, JPG images, or application/octet-stream data are served by an HTTP server?

For some reason I was thinking of email and the lame 7-bit character limitation...

HTTP ends the header block with a double CRLF, and then frames the body with either a Content-Length header or chunked transfer encoding.
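Sketched as a toy parser in Python (assumes well-formed input; not a production HTTP parser):

def split_message(raw):
    # Headers end at the blank line (double CRLF).
    head, _, rest = raw.partition(b"\r\n\r\n")
    headers = dict(line.split(b": ", 1)
                   for line in head.split(b"\r\n")[1:])  # skip request/status line
    if headers.get(b"Transfer-Encoding") == b"chunked":
        # Chunked body: each chunk is a hex size line, the data, then CRLF.
        body, pos = b"", 0
        while True:
            size_end = rest.index(b"\r\n", pos)
            size = int(rest[pos:size_end], 16)
            if size == 0:
                break
            body += rest[size_end + 2:size_end + 2 + size]
            pos = size_end + 2 + size + 2
        return headers, body
    # Otherwise the body is exactly Content-Length bytes.
    length = int(headers.get(b"Content-Length", b"0"))
    return headers, rest[:length]

msg = (b"HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n"
       b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
assert split_message(msg)[1] == b"Wikipedia"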

I still stand firm in my belief that HTTP headers are wasteful.

Especially if you were making a networked game on top of it, you'd have more header bytes than payload bytes being transmitted.

I still stand firm in my belief that HTTP headers are wasteful.
Especially if you were making a networked game on top of it, you'd have more header bytes than payload bytes being transmitted.

At the risk of triggering someone saying "Yeah right, and 640k ought to be enough for everybody", I'll chime in.

RIFF and AIFF have been around for how long, 25 or 30 years? They use 4-byte tag identifiers for chunks, and 4-byte length info. And yes, 4 bytes are enough for everybody. You don't need more than two dozen different chunk types for something like serving web pages, so 4 billion is way enough. Being limited to transmitting 4GB in one chunk is no limitation either (seeing how you can always do multipart, something HTTP supports as well). Using something like RIFF, you could trivially implement several "streams" over the same connection and at the same time cut down on header size (though it isn't really about size!), but more importantly also make parsing headers/requests a lot easier.
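A sketch of that chunk scheme in Python (RIFF proper is little-endian and pads odd-length chunks to even offsets; the tags and payloads here are made up):

import struct

def write_chunk(tag, payload):
    # 4-byte ASCII tag, 4-byte little-endian length, payload, pad to even.
    assert len(tag) == 4
    pad = b"\x00" if len(payload) & 1 else b""
    return tag + struct.pack("<I", len(payload)) + payload + pad

def read_chunks(data):
    # Parsing is comparing fixed-size binary fields, not scanning text.
    offset = 0
    while offset < len(data):
        tag = data[offset:offset + 4]
        (length,) = struct.unpack_from("<I", data, offset + 4)
        yield tag, data[offset + 8:offset + 8 + length]
        offset += 8 + length + (length & 1)

stream = write_chunk(b"HDRS", b"\x01\x02") + write_chunk(b"BODY", b"hello")
assert list(read_chunks(stream)) == [(b"HDRS", b"\x01\x02"), (b"BODY", b"hello")]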

Parsing human readable text is awkward and error prone. Comparing a binary number to another binary number is fast and easy (if you agree on endianness).

Some RIFF-like formats (e.g. QuickTime's) have explicit support for an expanded 64-bit length.
