
View Issue Details
ID:               0005296
Project:          libmicrohttpd
Category:         libmicrohttpd internal select
View Status:      public
Last Update:      2018-08-05 22:13
Reporter:         ghaderer
Assigned To:      Christian Grothoff
Priority:         normal
Severity:         feature
Reproducibility:  always
Status:           closed
Resolution:       won't fix
Platform:         amd64
OS:               Linux Ubuntu
OS Version:       16.04.4 LTS
Product Version:  0.9.59
Target Version:   0.9.60
Fixed in Version:
Summary:          0005296: client disconnection not detected
Description: I'm using the daemon with the internal polling mode and one thread per connection. I read data from an external source, and data may not always be available when the response reader is called. In that case the response callback returns 0 (and relinquishes the CPU for a few milliseconds to avoid busy loops). When this happens, the connection automaton changes to the BLOCK state.

When using the select() function, in this state MHD only monitors the exception set. But when the client disconnects, the exception bit is not set.

I tried with the poll function, and the disconnection is not detected either.

Linux provides a specific flag, POLLRDHUP, which is set in this case. Adding this flag to MHD_POLL_EVENTS_ERR_DISC and MHD_POLL_REVENTS_ERR_DISC solves this issue for poll().

Unfortunately, the problem is still present with the select() method, and may also exist on other OSes with the poll() method.

So we need additional measures to detect the disconnection. For example, in the BLOCK state we could probe the socket, e.g. with a recv() using the MSG_PEEK flag where it is supported, to test for error conditions without actually consuming or sending data.

I tried this approach on Linux, and the better option seems to be the write call: the read call won't report any error as long as data is available in the incoming queue, whereas the write call fails immediately with EPIPE.

More generally, I think disconnection detection should be performed on each loop iteration to avoid useless work on the server side, even if the server does not need to read from or write to the socket.
Steps To Reproduce: Compile the attached example program and run it.

Then make a request:

$ curl -v http://127.0.0.1:8081

You should get the following output:

<<<
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
>>>

Interrupt the curl command as soon as you see the hello messages.

The example program should display the following messages:

<<<
first burst
second burst
completed: code=0
>>>

Between the first and second bursts, the server pauses for 2 seconds in the response callback. As the client closed the connection during this pause, I would expect the server to detect the disconnection when the callback returns: it should not call the response callback again for this request, and should report an abnormal termination instead of success.

Here is what the server displays if I add POLLRDHUP to the disconnection event masks.

<<<
first burst
completed: code=1
>>>

Now the server detects the disconnection and does not process the request anymore.
Tags: No tags attached.
Attached Files: example.c


Notes

~0012894

Christian Grothoff (manager)

What you propose would seem to be incorrect behavior for HTTP, as the client is _allowed_ to half-close the socket (man 2 shutdown) and continue to read the response. If the HTTP client closes the socket for writing, the HTTP server may still continue to serve the full response and the HTTP client may still read it. By (half) closing the socket, the client merely indicates that it will not send further requests.

Given that your response is tiny (fits in 1 TCP segment), the write/send syscall of MHD will succeed even after the HTTP client has closed the socket on its end, so the syscalls MHD makes simply do not allow it to detect that the client did not receive the full response. Hence it cannot signal an error either.

In conclusion: fixing this the way you imagine would create a bug with respect to my reading of the specification, not fix one ;-).

~0012919

ghaderer (reporter)

I understand that detection on read, using select/poll with POLLRDHUP or read()/recv() calls, would break half-closed connections. So I agree this is not a good option.

But from what I read, if the client fully closes its connection, and especially its read end, the server's TCP will receive an RST segment the next time it tries to send. This should result in a "connection reset by peer" error at the socket level. Is this correct?

Of course, this requires that the server's TCP actually sends a segment; otherwise the closed connection won't be detected. In my case, the server has no data to send and the response reader callback returns 0. But I guess I can enable TCP keep-alive to force segment transmission. Nonetheless, this will only work if MHD regularly calls send() to pick up errors, and presently it won't in the BLOCK state.

~0013175

Christian Grothoff (manager)

Right, because why should we waste time on syscalls like send() if we have nothing to send!? Also, TCP keep-alives are usually just unnecessarily inefficient. With epoll() MHD scales nicely to millions of open connections (as long as you don't make gratuitous send() calls), so I'd really suggest simply using a timeout and otherwise leaving the connection up.

Issue History
Date Modified Username Field Change
2018-03-12 10:35 ghaderer New Issue
2018-03-12 10:35 ghaderer File Added: example.c
2018-03-23 22:24 Christian Grothoff Severity major => minor
2018-03-23 22:24 Christian Grothoff Status new => acknowledged
2018-03-23 22:31 Christian Grothoff Note Added: 0012894
2018-03-23 22:31 Christian Grothoff Status acknowledged => feedback
2018-04-19 14:39 ghaderer Note Added: 0012919
2018-04-19 14:39 ghaderer Status feedback => new
2018-08-05 22:12 Christian Grothoff Note Added: 0013175
2018-08-05 22:13 Christian Grothoff Assigned To => Christian Grothoff
2018-08-05 22:13 Christian Grothoff Severity minor => feature
2018-08-05 22:13 Christian Grothoff Status new => closed
2018-08-05 22:13 Christian Grothoff Resolution open => won't fix
2018-08-05 22:13 Christian Grothoff Target Version => 0.9.60