18

Is it possible to build a file from scratch such that when it is downloaded via HTTP, the download never actually completes? I am not talking about a ZIP bomb here.

Some download software allows you to download streaming events, so the final size of the file is not known in advance. Is it possible to craft a file where the download software is not able to determine the actual file size and keeps on downloading?

Steffen Ullrich
mahen23
    The download software never "guesses" the file size. When a file transfer is initiated the server sends some header information and in that header info the download software is told how big the file will be before it starts downloading. It is quite easy to misconfigure a server to not announce the file's size and simply send a file to be downloaded so your download software will just keep downloading until the server stops sending. – MonkeyZeus Mar 07 '17 at 13:37
  • Could you draw a line between the definition of download and stream? A file download is, to me, finite; while a stream runs until I tell it to stop. I can't tell if you're mincing terms on accident. – zero298 Mar 07 '17 at 13:44
  • 6
    @MonkeyZeus: A server is not necessarily misconfigured if it can't tell how large the response is. It might literally not know, that's why there is Transfer-Encoding: chunked. (You could argue that the server could just buffer the output from CGIs, but let's not.) – Oskar Skog Mar 07 '17 at 13:44
  • yes: https://dev.twitter.com/streaming/overview ;) – Olle Kelderman Mar 07 '17 at 14:03
  • 1
    @OskarSkog True, but my main point still stands. The download software doesn't guess, it gets told. – MonkeyZeus Mar 07 '17 at 15:05
  • 3
    @MonkeyZeus At least firefox seems to only treat the announced size as hint. When the server gracefully closes the connection and less than the announced size has been downloaded it considers the download successful. (That's very annoying when using SOCKS proxies, since those turn connection loss into gracefully closed connections, resulting in silently truncated downloads) – CodesInChaos Mar 07 '17 at 15:45
  • 7
    This question is incredibly vague. You're probably not talking about "creating a file", but it's hard to tell what you actually mean. – pipe Mar 07 '17 at 17:33
  • 2
    So many different ways to do this, the question is actually poor. A file is simply an arbitrary chunk of data which could easily be concurrently modified to produce your desired result. – Tim Hallman Mar 07 '17 at 19:37
  • 1
    I'm missing something. What does this have to do with information security? – Paul Draper Mar 07 '17 at 19:52
  • 2
    @PaulDraper the concept can be used as a denial-of-service: keeping a connection open and forcing a client to fill its memory/disk with junk. – adelphus Mar 07 '17 at 21:43

3 Answers

29

Yes, it is possible. You just need to use chunked transfer encoding: https://en.wikipedia.org/wiki/Chunked_transfer_encoding

Depending on your server's configuration, you might be able to simply create a CGI script that writes and flushes stdout in an infinite loop.

It does not seem to work on lighttpd, which I believe buffers the entire output from the CGI script before sending it to the client. It might work on other web servers, though.
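Such a CGI script can be sketched in Python. This is a sketch under the assumption that the server passes output through unbuffered; the real script would loop forever, so the demonstration at the bottom only takes the first few chunks:

```python
import itertools

def endless_body(line=b"Uh-oh, this will never stop.\n"):
    """Yield the same payload forever; a CGI script would write each
    piece to stdout and flush immediately so nothing gets buffered."""
    while True:
        yield line

# In an actual CGI script you would do something like:
#   import sys
#   sys.stdout.buffer.write(b"Content-Type: text/plain\r\n\r\n")
#   for piece in endless_body():
#       sys.stdout.buffer.write(piece)
#       sys.stdout.buffer.flush()

# Bounded demonstration: take only the first three chunks.
sample = list(itertools.islice(endless_body(), 3))
```

Whether this actually produces an endless download still depends on the server's buffering behavior, as noted above.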

Example:

HTTP/1.1 200 OK\r\n
Transfer-Encoding: chunked\r\n
Content-Type: text/plain\r\n
\r\n
1d\r\n
Uh-oh, this will never stop.\n\r\n
1d\r\n
Uh-oh, this will never stop.\n\r\n

followed by an infinite repetition of "1d\r\nUh-oh, this will never stop.\n\r\n". (The chunk data is 29 bytes, which is 0x1d; each chunk size is given in hexadecimal, and each chunk's data is terminated by its own \r\n.)
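The chunk framing can be generated mechanically. A minimal sketch in Python (the `chunk` helper is illustrative, not part of any library):

```python
def chunk(data: bytes) -> bytes:
    # One chunk of chunked transfer encoding:
    # hex-encoded size, CRLF, the data itself, CRLF.
    return b"%x\r\n%s\r\n" % (len(data), data)

headers = (b"HTTP/1.1 200 OK\r\n"
           b"Transfer-Encoding: chunked\r\n"
           b"Content-Type: text/plain\r\n"
           b"\r\n")

piece = chunk(b"Uh-oh, this will never stop.\n")
# A server would send `headers` once, then `piece` forever;
# it never sends the terminating zero-length chunk ("0\r\n\r\n"),
# so the client never sees the end of the response.
```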

Oskar Skog
15

Yes, just stream /dev/urandom to the client. First you may need to fake the file header so that the client thinks it is downloading the content it requested; after that, just stream random junk.

An idea on how to do this in Python:

with open("/dev/urandom", "rb") as f:
    while True:
        print(repr(f.read(10)))
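A minimal sketch of the same idea as an actual HTTP endpoint, using Python's standard http.server (the handler class and port are illustrative assumptions; the serve line is left commented out because it would stream forever):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EndlessHandler(BaseHTTPRequestHandler):
    """Responds with no Content-Length and never stops sending,
    so the client has no way to know when the download is done."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()  # no Content-Length announced
        with open("/dev/urandom", "rb") as f:
            while True:
                self.wfile.write(f.read(4096))

# To run it (blocks forever, streaming junk to every client):
# HTTPServer(("", 8000), EndlessHandler).serve_forever()
```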
roman
5

If the output is dynamically generated by the server software, it is possible to create a stream that keeps going until you break the connection. If you literally want a file, however, it cannot be infinite in size.

However, if your file system supports sparse files, you can create a file that is larger than the storage medium, and thereby produce a file that would take so long to download that downloading it all is not feasible.

The maximum file size differs between file systems. On ext4 the limit is 16 TiB; on tmpfs it is 8 EiB. Here are a couple of examples of how such files can be created:

dd if=/dev/null of=/dev/shm/sparse bs=1 seek=7E
dd if=/dev/null of=/tmp/sparse bs=1 seek=15T
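The trick works because the file's apparent size and its actually allocated storage are tracked separately. A small sketch in Python that demonstrates this (the path and the 1 GiB size are arbitrary demo values, kept small deliberately):

```python
import os

path = "/tmp/sparse_demo"  # hypothetical demo path
with open(path, "wb") as f:
    f.truncate(1 << 30)    # set length to 1 GiB without writing any data

st = os.stat(path)
apparent = st.st_size            # what a downloader would have to fetch
allocated = st.st_blocks * 512   # what the disk actually stores (~0 here)
os.remove(path)
```

On a file system with sparse-file support, `allocated` stays near zero no matter how large `apparent` is, which is why a 15 TiB file fits on a much smaller disk.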

Beware when putting such files on a web server. If the server software does not throttle bandwidth, a malicious client can use the file to saturate your network.

kasperd
  • 4
    A way to create sparse files that is easier to remember: truncate -s15T /tmp/sparse – b0fh Mar 07 '17 at 22:37