Also fix up a few small issues:
- Lookup with "badip:port" now sets the port to 0
- Don't allow assert to have side-effects
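
The assert change is the classic side-effect pitfall. A minimal sketch, with a hypothetical `ParsePort` helper standing in for the real parsing code:
```
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical stand-in for the real parsing helper.
bool ParsePort(const std::string& s, uint16_t* port)
{
    if (s.empty()) return false;
    *port = 0; // actual decimal parsing elided in this sketch
    return true;
}

void Example(const std::string& s)
{
    uint16_t port;
    // Bad: the whole call vanishes in NDEBUG builds, so port stays unset:
    //   assert(ParsePort(s, &port));
    // Good: the side-effect runs unconditionally; only the check is debug-only.
    bool ok = ParsePort(s, &port);
    assert(ok);
    (void)ok; // silence unused-variable warnings when asserts are compiled out
}
```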
along with mutex/condvar/bind/etc.
httpserver handles its own interruption, so there's no reason not to use std
threading.
While we're at it, we may as well kill the BOOST_FOREACHs too.
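
A hedged before/after sketch of the substitutions involved (none of this is the actual httpserver code):
```
#include <mutex>
#include <thread>
#include <vector>

void Example(std::vector<int>& values)
{
    std::mutex m; // boost::mutex -> std::mutex

    // boost::thread -> std::thread
    std::thread worker([&] {
        std::lock_guard<std::mutex> lock(m);
        // BOOST_FOREACH(int v, values) -> C++11 range-for
        for (int v : values) {
            (void)v; // process v
        }
    });
    worker.join();
}
```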
When using std::thread in place of boost::thread, letting the threads destruct
results in a std::terminate. According to the docs, the same thing should
be happening in later boost versions:
http://www.boost.org/doc/libs/1_55_0/doc/html/thread/thread_management.html#thread.thread_management.thread.destructor
I'm unsure why this hasn't blown up already, but explicitly detaching can't
hurt.
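
A minimal sketch of the failure mode and the fix:
```
#include <thread>

int main()
{
    std::thread t([] { /* worker loop */ });
    // Without this, t's destructor runs while t is still joinable and the
    // process dies via std::terminate(). Detaching makes the destructor safe.
    t.detach();
}
```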
Thanks to Cory Fields for the idea.
No need for boost here.
More useful error reporting.
Use std::unique_ptr for handling work items.
This makes the code more RAII-style and, as mentioned in the comment, is what
I planned when I wrote the code in the first place.
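
A minimal sketch of the ownership scheme, with illustrative names and the locking omitted:
```
#include <deque>
#include <memory>

struct WorkItem
{
    virtual ~WorkItem() {}
    virtual void Run() = 0;
};

std::deque<std::unique_ptr<WorkItem>> queue; // the queue owns pending items

void Worker()
{
    while (!queue.empty()) {
        // Ownership moves from the queue to the worker for one item...
        std::unique_ptr<WorkItem> item = std::move(queue.front());
        queue.pop_front();
        item->Run();
    } // ...and is freed at the end of each iteration, even if Run() throws.
}
```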
Change the few occurrences of the deprecated `auto_ptr` to C++11 `unique_ptr`.
Silences the deprecation warnings.
Also add a missing `std::` for consistency.
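
The mechanical change, sketched with an illustrative type:
```
#include <memory>

struct Event {};

void Example()
{
    // Before: std::auto_ptr<Event> ev(new Event());
    //         std::auto_ptr<Event> ev2 = ev; // silently moves; deprecated
    std::unique_ptr<Event> ev(new Event());
    std::unique_ptr<Event> ev2 = std::move(ev); // the transfer is explicit
}
```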
`try_join_for` was introduced in Boost 1.50:
http://www.boost.org/doc/libs/1_50_0/doc/html/thread/thread_management.html#thread.thread_management.thread.try_join_for
1.49 has `timed_join`; one can accomplish the same with it:
http://www.boost.org/doc/libs/1_49_0/doc/html/thread/thread_management.html#thread.thread_management.thread.timed_join
However, `timed_join` was deprecated in 1.50. So a conditional is
necessary.
This solution was tested in #7031.
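
A sketch of the resulting conditional (the 2-second grace period is illustrative):
```
#include <boost/thread.hpp>
#include <boost/version.hpp>

bool JoinWithTimeout(boost::thread& t)
{
#if BOOST_VERSION >= 105000
    return t.try_join_for(boost::chrono::milliseconds(2000));
#else
    // Deprecated from 1.50 on, but the only timed join 1.49 offers.
    return t.timed_join(boost::posix_time::milliseconds(2000));
#endif
}
```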
This uses _EVENT_LOG_WARN instead, which appears to be defined in the
old versions of libevent that I have on some systems.
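
A sketch of the compatibility shim; the non-underscore name only appeared in libevent 2.0.19:
```
#include <event2/event.h>

// EVENT_LOG_WARN was added in libevent 2.0.19; older headers only ship the
// underscore-prefixed spelling, so fall back to that.
#ifndef EVENT_LOG_WARN
# define EVENT_LOG_WARN _EVENT_LOG_WARN
#endif
```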
This continues/fixes #6719.
`event_base_loopbreak` was not doing what I expected it to, at least in
libevent 2.0.21.
What I expected was that it would set a timeout: given no other pending
events, the loop would exit within N seconds. What it actually did was
delay the event loop exit by 10 seconds, even if nothing was pending.
Solve it in a different way: give the event loop thread time to exit by
itself, and if it doesn't, send a loopbreak.
This speeds up the RPC tests a lot. Each exit used to incur a 10 second
overhead; with this change there should be no shutdown overhead in the
common case, and up to two seconds if the event loop is blocking.
As a bonus this removes the dependency on boost::thread_group, as the
HTTP server minds its own offspring.
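
An illustrative sketch of the shutdown sequence (names assumed): give the event loop thread a grace period to run out of events and return on its own; only break the loop if it is still alive afterwards.
```
#include <event2/event.h>
#include <boost/thread.hpp>

void StopHTTPEventLoop(struct event_base* base, boost::thread& thread)
{
    // Common case: the loop has no pending events and exits immediately.
    if (!thread.try_join_for(boost::chrono::milliseconds(2000))) {
        event_base_loopbreak(base); // force the loop to return now
        thread.join();
    }
}
```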
Prevent memory exhaustion by sending lots of data.
Also add a test to `httpbasics.py`.
Closes #6425
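
A sketch of the guard using libevent's size caps; the limits shown are illustrative, not necessarily the values chosen:
```
#include <event2/http.h>

// Cap header and body sizes so a client cannot exhaust memory by
// streaming an arbitrarily large request.
void LimitRequestSizes(struct evhttp* http)
{
    evhttp_set_max_headers_size(http, 8192);
    evhttp_set_max_body_size(http, 0x02000000); // 32 MiB
}
```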
This makes sure that the event loop eventually terminates, even if an
event (like an open timeout, or a hanging connection) happens to be
holding it up.
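
A sketch of the mechanism, assuming the libevent call involved: `event_base_loopexit()` with a timeval guarantees the loop returns after the delay, even if some event is still holding it up.
```
#include <event2/event.h>

void ForceLoopExit(struct event_base* base)
{
    struct timeval delay = {10, 0}; // {seconds, microseconds}
    event_base_loopexit(base, &delay);
}
```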
Add a WaitExit() call to http's WorkQueue so that the work queue is
deleted only after all worker threads have stopped.
This fixes a problem that was reproducible by pressing Ctrl-C during
AppInit2:
```
/usr/include/boost/thread/pthread/condition_variable_fwd.hpp:81: boost::condition_variable::~condition_variable(): Assertion `!ret' failed.
/usr/include/boost/thread/pthread/mutex.hpp:108: boost::mutex::~mutex(): Assertion `!posix::pthread_mutex_destroy(&m)' failed.
```
I was assuming that `threadGroup->join_all();` would always have been
called when entering Shutdown(). However, this is not the case in
bitcoind's AppInit2 non-zero-exit path, where it "was left out
intentionally here".
Shutting down the HTTP server currently breaks off all current requests.
This can create a race condition with the RPC `stop` command, where the
calling process never receives confirmation.
This change removes the listening sockets on shutdown so that no new
requests can come in, but no longer breaks off requests in progress.
Meant to fix #6717.
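
A sketch of the socket removal (variable names assumed): the handles come from `evhttp_bind_socket_with_handle()` at startup, and removing them stops new connections without disturbing requests already in flight.
```
#include <event2/http.h>
#include <vector>

std::vector<evhttp_bound_socket*> boundSockets; // saved at bind time

void StopAcceptingConnections(struct evhttp* http)
{
    for (evhttp_bound_socket* sock : boundSockets)
        evhttp_del_accept_socket(http, sock);
    boundSockets.clear();
}
```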
The two timeouts, for the server and the client, are essentially different:
- In the case of the server it should be a lower value to avoid clients
clogging up connection slots
- In the case of the client it should be a high value to accommodate slow
responses from the server, for example for slow queries or when the
lock is contended
Split the options into `-rpcservertimeout` and `-rpcclienttimeout` with
respective defaults of 30 and 900.
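
A sketch of where the two values land (the option plumbing is omitted): the server drops idle connections quickly, while the client is far more patient.
```
#include <event2/http.h>

static const int DEFAULT_HTTP_SERVER_TIMEOUT = 30;  // seconds
static const int DEFAULT_HTTP_CLIENT_TIMEOUT = 900; // seconds

void ApplyTimeouts(struct evhttp* server, struct evhttp_connection* client)
{
    evhttp_set_timeout(server, DEFAULT_HTTP_SERVER_TIMEOUT);            // -rpcservertimeout
    evhttp_connection_set_timeout(client, DEFAULT_HTTP_CLIENT_TIMEOUT); // -rpcclienttimeout
}
```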
Add an option "-debug=libevent" to enable libevent debugging for troubleshooting.
Libevent logging is redirected to our own log.
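
A sketch of the redirection; the real code forwards into the application's own logger rather than stderr, and the `fDebugLibevent` flag is an assumed stand-in for the `-debug=libevent` check.
```
#include <event2/event.h>
#include <cstdio>

static bool fDebugLibevent = false; // assumed: set when -debug=libevent is given

// Matches libevent's event_log_cb signature.
static void libevent_log_cb(int severity, const char* msg)
{
    if (severity >= EVENT_LOG_WARN)       // warnings are always logged
        fprintf(stderr, "libevent warning: %s\n", msg);
    else if (fDebugLibevent)              // debug chatter only when enabled
        fprintf(stderr, "libevent: %s\n", msg);
}

void SetupLibeventLogging()
{
    event_set_log_callback(&libevent_log_cb);
}
```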
... but not yet in trivial tree
Split StartHTTPServer into InitHTTPServer and StartHTTPServer to give
clients a window to register their handlers without race conditions.
Thanks @ajweiss for figuring this out.
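
A sketch of the two-phase startup (bodies reduced to comments): handlers registered in the window between the two calls can never race with incoming requests.
```
bool InitHTTPServer()
{
    // create the event base and evhttp handle, bind the listening sockets,
    // set up the work queue -- everything except running the loop
    return true;
}

bool StartHTTPServer()
{
    // only now launch the event-loop thread and the worker threads
    return true;
}

int main()
{
    InitHTTPServer();
    // RegisterHTTPHandler(...)  // safe: no request can arrive yet
    StartHTTPServer();
}
```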
Implement RPCTimerHandler for Qt RPC console, so that `walletpassphrase`
works with GUI and `-server=0`.
Also simplify HTTPEvent-related code by using boost::function directly.
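
A sketch of the simplification mentioned in the last line, with assumed names: rather than a subclass per use, the event simply carries the callback to run when it fires.
```
#include <boost/function.hpp>

struct HTTPEvent
{
    explicit HTTPEvent(const boost::function<void(void)>& h) : handler(h) {}
    void Trigger() { handler(); } // invoked from the event loop
    boost::function<void(void)> handler;
};
```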
- *Replace usage of boost::asio with [libevent2](http://libevent.org/)*.
boost::asio is not part of C++11, so unlike other boost libraries there is no
forward-compatibility reason to stick with it. Together with #4738 (convert
json_spirit to UniValue), this rids Bitcoin Core of the worst offenders with
regard to compile-time slowness.
- *Replace spit-and-duct-tape http server with evhttp*. Front-end HTTP handling
is done by libevent; a work queue (with configurable depth and parallelism)
handles application requests.
- *Wrap HTTP request in a C++ class*; this makes the application code mostly
HTTP-server-neutral.
- *Refactor RPC to move all http-specific code to a separate file*.
Theoretically this can allow building without the HTTP server but with another RPC
backend, e.g. Qt's debug console (currently not implemented) or future RPC
mechanisms people may want to use.
- *HTTP dispatch mechanism* (see the sketch at the end of this entry);
services (e.g., RPC, REST) register which URL paths they want to handle.
By using a proven, high-performance asynchronous networking library (also used
by Tor) and HTTP server, problems such as #5674, #5655, #344 should be avoided.
What works? bitcoind, bitcoin-cli, bitcoin-qt. Unit tests and RPC/REST tests
pass. The aim for now is everything but SSL support.
Configuration options:
- `-rpcthreads`: repurposed as "number of work handler threads". Still
defaults to 4.
- `-rpcworkqueue`: maximum depth of work queue. When this is reached, new
requests will return a 500 Internal Server Error.
- `-rpctimeout`: inactivity time, in seconds, after which to disconnect a
client.
- `-debug=http`: low-level http activity logging
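
A sketch of the dispatch registration (signatures assumed, not verbatim): each service claims a URL prefix, and the server routes incoming requests to the first matching handler.
```
#include <functional>
#include <string>

class HTTPRequest; // wraps one in-flight HTTP request

// A handler returns true once it has handled the request.
typedef std::function<bool(HTTPRequest* req, const std::string& path)> HTTPRequestHandler;

// exactMatch distinguishes claiming "/" exactly from claiming everything.
void RegisterHTTPHandler(const std::string& prefix, bool exactMatch,
                         const HTTPRequestHandler& handler);

// e.g. RPC takes the root exactly, REST takes everything under /rest/:
//   RegisterHTTPHandler("/", true, HTTPReq_JSONRPC);
//   RegisterHTTPHandler("/rest/", false, HTTPReq_REST);
```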