
Serving large files over TCP

27 comments, last by hplus0603 9 years, 5 months ago

I think we might be missing the important point here: he simply can't change how the client devices work. They can't talk HTTP for some odd reason, and he doesn't have the ability to change them. Perhaps they aren't updatable and are already in the field.

Based on that, and as usually happens with real-world specifications, he has no choice but to support them as they are.

If it were me, I would try to make some kind of crappy-TCP-protocol-to-HTTP reverse proxy or some such.

Either way, it seems he's stuck with non-HTTP, so no amount of discussing how much easier it would be with HTTP is likely to help, unfortunately :(
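For what it's worth, a minimal sketch of that reverse-proxy idea in Python, assuming a made-up device protocol where the client sends one line containing a file name and expects the raw bytes back; the upstream HTTP origin is hypothetical:

# Sketch of a custom-TCP-to-HTTP reverse proxy. Assumptions: the device sends
# one line with a file name and expects the raw file bytes in return; the
# upstream HTTP origin below is a placeholder.
import socket
import urllib.request

UPSTREAM = "http://files.example.com/"   # hypothetical HTTP origin
LISTEN_ADDR = ("0.0.0.0", 9000)

def handle(conn):
    with conn:
        name = conn.makefile("rb").readline().strip().decode("ascii")
        with urllib.request.urlopen(UPSTREAM + name) as resp:
            while True:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                conn.sendall(chunk)        # relay the HTTP body over raw TCP

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(LISTEN_ADDR)
srv.listen()
while True:
    client, _ = srv.accept()
    handle(client)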


he doesn't have the ability to change them


Maybe. But he said this:

Even through TCP, it cannot handle large packets. I have to split them up and put in some metadata to identify the chunks


Maybe that's a description of an existing protocol, or maybe that's a description of what he plans to do for the devices. We don't know!
If he has to re-implement an existing protocol, then providing some reference to that protocol specification (if available) would be helpful :-)
And, all of the additional advice still applies. Why can't the files just live on the server hosts, for example? Disk space is very cheap.
enum Bool { True, False, FileNotFound };

I do not have the power to change the client devices. I have merely been informed of their limitation. Why such a limitation exists, I don't know.


Maybe that's a description of an existing protocol, or maybe that's a description of what he plans to do for the devices. We don't know!
If he has to re-implement an existing protocol, then providing some reference to that protocol specification (if available) would be helpful :-)
And, all of the additional advice still applies. Why can't the files just live on the server hosts, for example? Disk space is very cheap.

That's not the description of an existing protocol. I would assume that if I have to split a file into several packets because of the client requirements, which I have no control over, these packets can arrive out of order on the receiving end. It would be impossible for the client devices to know which packet belongs to which part of the file. So I would need to include some sort of metadata, like a packet ID, so the receiving end can piece them back together.

Unless my understanding of TCP is completely wrong, if I send a huge chunk, TCP will split it up and piece it back together on the other end; that's TCP's job. But in this case, I am purposely splitting the packets myself, so as far as I understand, TCP sees the packets independently. Am I correct or wrong about this?


Why can't the files just live on the server hosts, for example? Disk space is very cheap.

Can you give me a compelling reason why I should do this? They are on S3 because, well, that's what S3 is for, and it's backed up.

Wouldn't I need to worry about backing them up if they were to live on the server hosts? Don't get me wrong, I would still store them on the server hosts, but as a cache, not the primary storage.


Unless my understanding of TCP is completely wrong, if I send a huge chunk, TCP will split it up and piece it back together on the other end; that's TCP's job. But in this case, I am purposely splitting the packets myself, so as far as I understand, TCP sees the packets independently. Am I correct or wrong about this?

If you mean that a single call to send() [or an equivalent function] may split your buffer and transport it in multiple IP packets, and that the receiver is required to assemble it back in order, then you are correct. If you also mean that two buffers sent by two different calls to send() may arrive out of order at the receiving end, then you are mistaken.

Your data may arrive out of order at the IP level, since individual IP packets are independent in that sense, but the two buffers will ultimately be delivered to you in order at the TCP layer. Your mistake is that you see TCP as a message- or packet-based protocol. It is not; it is a stream-based protocol: you send a stream of data and you receive a stream of data. Any splitting into packets or individual messages happens only at the IP level, when the TCP stream is transported by IP packets. This is all managed by the TCP layer for you: stream in, stream out.

Forget about packets when talking about TCP. There is only a stream you put data into and/or get data from. A single call to send() may be split into multiple IP packets, or multiple calls to send() may be combined into a single IP packet, when the data is actually transmitted. Likewise, there is no requirement to match a call to send() with a call to recv(). You can receive your data into any buffers you like, independent of what and how much you send. This is an entirely different concept from UDP.
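To make that concrete, here is a small self-contained sketch in Python (used just for illustration): three separate calls to send() on one side come out of recv() on the other side as one ordered byte stream, with chunk boundaries that are entirely up to the stack.

# Sketch: TCP is a byte stream, not messages. Three send() calls may come out
# of recv() as one, two, or many chunks; only the byte order is guaranteed.
import socket
import threading

def server(listener):
    conn, _ = listener.accept()
    with conn:
        received = b""
        while True:
            chunk = conn.recv(4096)        # chunk boundaries are arbitrary
            if not chunk:
                break
            received += chunk
        print("server got:", received)     # b"onetwothree", in order

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.create_connection(listener.getsockname())
for piece in (b"one", b"two", b"three"):
    client.sendall(piece)                  # three sends, one stream
client.close()
t.join()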

You may, for other reasons, have to implement a separate messaging protocol on top of TCP, but that's a different issue. Your concern was effectively that different send() calls could arrive out of order from the TCP layer, and that does not happen.
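If the devices do end up needing message boundaries (for example, the chunk header described earlier), the usual approach is a simple length prefix on top of the stream. A minimal sketch, assuming a made-up 4-byte big-endian length header rather than any existing spec:

# Sketch of length-prefixed framing on top of a TCP stream. The 4-byte
# big-endian length header is an assumption, not an existing protocol.
import struct

def send_message(sock, payload):
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, count):
    buf = b""
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_message(sock):
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)

The receive side loops because, as above, a single recv() call may return fewer bytes than requested; the framing only adds boundaries, it does not change the stream semantics underneath.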

Thank you, Bob, for clearing that up. So TCP guarantees that data from multiple send() calls will be received in the same order by recv().

Thank you, Bob, for clearing that up. So TCP guarantees that data from multiple send() calls will be received in the same order by recv().

That is correct: the bytes you put into the TCP stream will be received in the same order no matter how you call send() and recv(). It is all guaranteed and managed by the TCP layer.

I do not have the power to change the client devices. I have merely been informed of their limitation. Why such a limitation exists, I don't know.


That still doesn't answer the question!
Yes, the devices have little memory. And, yes, you say that you cannot change the devices.

Who writes the software that runs on the devices, and downloads the data? Is that software already written?

Separately, just because you call send() multiple times doesn't mean that TCP will deliver multiple packets. Instead, the way TCP works is that the receiving device opens the TCP window no bigger than what it's prepared to receive at any one time. This is automatic in the TCP protocol, and should already be implemented on the device end.

Thus, we'd need to understand more about the application, and what the real limitation is, as well as whether the receiving software already exists, or whether you write it, or whether someone else writes it.
Also, how is the received data consumed? Is it "played out" or consumed in some real-time way? If so, you just need to send whatever you have, and the device receiving code and TCP stack will do the job of pacing, unless there are some horrible bugs in the device software. Or is the data stored locally, such as written to flash memory or something? If so, again, the device will do the pacing by controlling the TCP window size, and will receive as much as it can as fast as it can, but no faster.

The reason to put the files on your servers instead of S3 is that getting data out of S3 costs money. If cost is not a problem, then spin up your own machines as simple file proxies and call it good. If cost matters, storing files locally is usually cheaper. For backups, you could use S3, or you could use something like Glacier.

There's also the question of deployment -- how does the server software and processes get onto the server hardware in the first place? How do files get injected into the system for serving? There are many different ways of solving these problems, and without knowing what your constraints are, recommending something good is almost impossible.

Will your servers run on the Amazon cloud? In private data centers? In a co-location facility? In just one data center? In multiple places around the world? Will there be only one server? Or hundreds? How many clients will connect at any one time?

So, it sounds like you're assuming some things that are not actually true, but without knowing what the real project is, it's very hard to debug the overall problem.
enum Bool { True, False, FileNotFound };
Who writes the software that runs on the devices, and downloads the data? Is that software already written?

Thus, we'd need to understand more about the application, and what the real limitation is, as well as whether the receiving software already exists, or whether you write it, or whether someone else writes it.

Another company. The software is still being written. I wouldn't call it software, actually; more like firmware. There is no OS layer, and I don't know much about the hardware specs. So I don't know if the TCP limitation I mentioned is due to a network hardware limitation, a memory constraint, or just pure programming incompetence.

I don't have the source code, and I have not seen it. Either way, from the server side it seems like a limitation that I should take into consideration.

Also, how is the received data consumed? Is it "played out" or consumed in some real-time way? If so, you just need to send whatever you have, and the device receiving code and TCP stack will do the job of pacing, unless there are some horrible bugs in the device software. Or is the data stored locally, such as written to flash memory or something? If so, again, the device will do the pacing by controlling the TCP window size, and will receive as much as it can as fast as it can, but no faster.

Data will be stored locally on the device's flash storage, and there are no streaming capabilities. The device is capable of storing a large file, so local flash memory capacity is not the issue here. But it does need to receive the entire file before it can process it.

The reason to put the files on your servers instead of S3 is that getting data out of S3 costs money. If cost is not a problem, then spin up your own machines as simple file proxies and call it good. If cost matters, storing files locally is usually cheaper. For backups, you could use S3, or you could use something like Glacier.

There's also the question of deployment -- how does the server software and processes get onto the server hardware in the first place? How do files get injected into the system for serving? There are many different ways of solving these problems, and without knowing what your constraints are, recommending something good is almost impossible.

Will your servers run on the Amazon cloud? In private data centers? In a co-location facility? In just one data center? In multiple places around the world? Will there be only one server? Or hundreds? How many clients will connect at any one time?

All of these servers are inside AWS. So, if I use this server as a file proxy/cache, it will only be transfers between S3 and an EC2 instance. According to this Amazon pricing http://aws.amazon.com/s3/pricing/, it seems that if I host this server in the Northern Virginia region, I could get that transfer for free; otherwise it's $0.02/GB, which shouldn't be too bad.

Region-wise, it should eventually be accessible from around the world. Yes, if someone uploads his file while he's in Europe, then travels to Asia and hits the Asian server, the file should still be accessible.

We are so early in development that a lot of this has yet to be set in stone. We have been using S3 for other things unrelated to this, so it's a natural progression to also use S3 for this particular case, unless you can convince me otherwise. I am not sure about using Glacier yet because, well, according to Amazon it's for backup purposes, while the files we are serving need frequent access.
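If I do end up using the server as a cache, the read-through logic I have in mind would be roughly this (bucket name and cache directory are placeholders, and boto3 is the standard AWS SDK for Python):

# Sketch of a read-through file cache: serve from local disk if present,
# otherwise pull the object down from S3 first. Names are placeholders.
import os
import boto3

BUCKET = "my-device-files"             # placeholder bucket name
CACHE_DIR = "/var/cache/device-files"  # placeholder cache directory

s3 = boto3.client("s3")

def get_file(key):
    """Return a local path for `key`, downloading from S3 on a cache miss."""
    local_path = os.path.join(CACHE_DIR, key.replace("/", "_"))
    if not os.path.exists(local_path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        s3.download_file(BUCKET, key, local_path)   # one S3 GET per miss
    return local_path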

I hope the extra information clears up a lot of the confusion.

The extra information helps!

from the server side it seems like a limitation that I should take into consideration


I don't think you need to worry about this. The TCP stack in the firmware of the device will take care of that.

If you REALLY need to limit the maximum segment size sent, then you can set the send buffer of the socket to a small size (256 bytes or whatever you want), but realize that the send buffer size is "advisory" at the socket layer, so the remote end will have to send proper TCP window sizes anyway. Unless they re-implement a complete TCP/IP stack from scratch, this is very likely to already work correctly.
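In socket-API terms, that would be something like the following (Python just for illustration; the 256-byte figure is the example above, and the OS may round the value up, which is part of why it is only advisory):

# Sketch: request a small send buffer on the serving socket. The kernel treats
# this as a hint and may round it up; the receiver's advertised TCP window is
# what actually limits how much data is in flight.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256)
print("effective send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))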

The software is still being written.


That means they can implement HTTP instead of a custom TCP protocol, using the mechanism I suggested above. It's not the full HTTP stack, but it's good enough to use existing data delivery networks, which is a significant help in the long run.

Separately, do you need re-startable transfers? If so, HTTP already supports this, with byte-range serving. You'd have to implement that separately on top of a custom TCP protocol if you rolled your own. Which can be done, but why bother? HTTP does this for you already. HTTP is supported by CDNs. HTTP has good proxy, reverse proxy, byte range serving, and hosting infrastructure. If there's any opportunity at all to use HTTP instead of custom TCP, you should take it!
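To show how little of HTTP that actually requires on a constrained device: a restartable download is one GET with a Range header over a plain TCP socket. A hedged sketch against a hypothetical host and path, ignoring chunked encoding, redirects, and error handling:

# Sketch: resume a download from byte offset START with an HTTP Range request
# over a plain TCP socket. Host and path are hypothetical; chunked transfer
# encoding, redirects, and robust error handling are all skipped.
import socket

HOST, PATH, START = "files.example.com", "/firmware/update.bin", 1000000

request = (
    "GET {} HTTP/1.1\r\n"
    "Host: {}\r\n"
    "Range: bytes={}-\r\n"       # ask the server to resume at this offset
    "Connection: close\r\n\r\n"
).format(PATH, HOST, START).encode("ascii")

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request)
    response = b""
    while True:
        chunk = sock.recv(64 * 1024)
        if not chunk:
            break
        response += chunk

header, _, body = response.partition(b"\r\n\r\n")
print(header.decode("ascii", "replace").splitlines()[0])   # expect: HTTP/1.1 206 Partial Content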

All of these servers are inside AWS


That does change things a bit -- it's less costly to use S3 as the "file system," as you indicate. Also, if these are files that are custom to each user, that's a different situation from files that are some subset determined by you. For example, a customer uploading a library of PDF files for an e-reader is a different use case from a set of GPS navigators that need to download satellite ephemeris data every seven days. In the latter, all the devices get the same set of files; in the former, there's very little sharing, and this difference is important to the overall system architecture.

Finally -- it sounds like there's no architect for the overall system who has done large-scale deployment of embedded connected devices before. I have no idea what your business situation is, but if I were running this project, I'd probably try to hire to fill that position :-)

Good luck, and please keep us posted on the technical challenges and successes of your project!
enum Bool { True, False, FileNotFound };


Separately, do you need re-startable transfers? If so, HTTP already supports this, with byte-range serving. You'd have to implement that separately on top of a custom TCP protocol if you rolled your own. Which can be done, but why bother? HTTP does this for you already. HTTP is supported by CDNs. HTTP has good proxy, reverse proxy, byte range serving, and hosting infrastructure. If there's any opportunity at all to use HTTP instead of custom TCP, you should take it!

There is no spec for this yet. Once we get some working implementations, run through some use cases, and decide that restartable transfers are necessary, then I think it's a good idea to bring HTTP support back up. We will see how development, and the relationship with the company we contracted, turn out later.



Finally -- it sounds like there's no architect for the overall system who has done large-scale deployment of embedded connected devices before. I have no idea what your business situation is, but if I were running this project, I'd probably try to hire to fill that position :-)

Yes, you are right in this case. I have done large-scale RESTful services, but not embedded connected devices, and I have been tasked to be that architect. The next thing that's bothering me is load-balancing this server. We won't have millions of devices yet, but at some point it will punch us in the stomach if this does become successful.

