Age | Commit message | Author |
|
3fc20632a3ad30809356a58d2cf0ea4a4ad4cec3 qt: Set BLOCK_CHAIN_SIZE = 220 (DrahtBot)
2b6a2f4a28792f2fe9dc1be843b1ff1ecae35e8a Regenerate manpages (DrahtBot)
eb7daf4d600eeb631427c018a984a77a34aca66e Update copyright headers to 2018 (DrahtBot)
Pull request description:
Some trivial maintenance to avoid having to do it again after the 0.17 branch off.
(The scripts to do this are in `./contrib/`)
Tree-SHA512: 16b2af45e0351b1c691c5311d48025dc6828079e98c2aa2e600dc5910ee8aa01858ca6c356538150dc46fe14c8819ed8ec8e4ec9a0f682b9950dd41bc50518fa
|
|
|
|
|
|
-BEGIN VERIFY SCRIPT-
sed -i 's/__APPLE__/MAC_OSX/g' src/compat/byteswap.h src/util.cpp
-END VERIFY SCRIPT-
|
|
This commit contains two refactors:
1. Mark ArgsManager::GetHelpMessage and IsArgKnown as "const".
2. Remove the unused "error" argument from ArgsManager::IsArgKnown.
First, "const" is added wherever possible; this has been suggested
before (e.g. https://github.com/bitcoin/bitcoin/pull/13190#pullrequestreview-118823133).
As for the second change, ArgsManager::IsArgKnown was added at commit #4f8704d, which was
merged in PR #13112, but its "error" argument has never been used since its introduction,
so it should be removed.
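A minimal before/after sketch of the two changes (signatures paraphrased for illustration, not copied from the actual header):
```
#include <string>

class ArgsManager
{
public:
    // 1. Query methods that do not modify state are marked const:
    std::string GetHelpMessage() const;

    // 2. The never-used out-parameter is dropped:
    //    before: bool IsArgKnown(const std::string& key, std::string& error) const;
    bool IsArgKnown(const std::string& key) const;
};
```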
|
|
|
|
Options that are not available (but known in the source code) will
cause an error if they are specified.
Make these options "available" by adding them to the hidden options
category to prevent conf files from failing when shared between binaries
that have different options available.
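A rough sketch of the idea (the registration call and category name follow an ArgsManager-style interface and should be treated as illustrative):
```
#include <string>
#include <vector>

// Register options that this binary does not implement, but that may appear
// in a bitcoin.conf shared with other binaries, under the hidden category so
// parsing does not fail. Hidden options never show up in -help output.
void SetupHiddenArgs(ArgsManager& args, const std::vector<std::string>& names)
{
    for (const std::string& name : names) {
        args.AddArg(name, "", /* debug_only= */ false, OptionsCategory::HIDDEN);
    }
}
```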
|
|
|
|
If an unknown option is given via either the command line or the conf file,
throw an error and exit.
Update tests for ArgsManager knowing args
Ignore unknown options in the config file for bitcoin-cli
Fix tests and bitcoin-cli to match actual options used
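The resulting behaviour, sketched with a hypothetical helper (the real check happens while parsing the command line and the config file):
```
#include <string>
#include <vector>

// Reject any parsed option that was never registered with the ArgsManager.
bool CheckAllArgsKnown(const ArgsManager& args,
                       const std::vector<std::string>& parsed_keys,
                       std::string& error)
{
    for (const std::string& key : parsed_keys) {
        if (!args.IsArgKnown(key)) {
            error = "Invalid parameter " + key;
            return false; // the caller reports the error and exits
        }
    }
    return true;
}
```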
|
|
Instead of a single map with the category and name as the key,
make m_available_args a map of maps. The outer key is the category and
the value is a map which actually contains the arguments for that
category. The nested map's key is the argument name, while the value
is a struct that contains the help text and whether the argument is
a debug-only argument.
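Approximately, the layout described above (field names paraphrased from the description, not copied from the source):
```
#include <map>
#include <string>

struct Arg
{
    std::string m_help_text; // shown in the -help output
    bool m_debug_only;       // only shown when -help-debug is given
};

// Outer key: the options category (e.g. OPTIONS, WALLET, RPC, HIDDEN).
// Inner map: argument name -> its help metadata.
std::map<OptionsCategory, std::map<std::string, Arg>> m_available_args;
```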
|
|
Many options are extremely technical and refer to internals, making it
difficult to translate them usefully. This came up in discussion of e.g.
#10949. If a message is not understood by translators (who are
typically end-users, not developers) they'll translate it
literally, making it harder to understand instead of easier, with the
added drawback that the user can no longer google it.
Also, the translation was only working for bitcoin-qt: the console
programs have no translation backend, so never-used translation
messages were being injected for bitcoin-cli and -tx.
For these reasons, stop translating options help completely. This should
not affect the output **in any way** except for bitcoin-qt when a
non-English language is configured in the locale.
This implements #10962.
|
|
Since -includeconf cannot be used recursively, the user would not see feedback that an -includeconf
in an -includeconf'd file was silently ignored.
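A sketch of the feedback being added (the plumbing that collects the leftover values is hypothetical here; LogPrintf is Core's logging macro):
```
#include <string>
#include <vector>

// "nested_includes" holds -includeconf values found inside a file that was
// itself included. They are only honoured one level deep, so report them
// instead of dropping them silently.
void WarnAboutNestedIncludes(const std::vector<std::string>& nested_includes)
{
    for (const std::string& path : nested_includes) {
        LogPrintf("warning: -includeconf cannot be used from included files, ignoring -includeconf=%s\n", path);
    }
}
```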
|
|
gArgs knows what the available arguments are and their help text. Generating
the help message is moved to gArgs and HelpMessage() is removed.
|
|
files
25b7ab9 doc: Add release notes for -includeconf (Karl-Johan Alm)
0f0badd test: Test includeconf parameter. (Karl-Johan Alm)
629ff8c -includeconf=<path> support in config handler, for including external configuration files (Karl-Johan Alm)
Pull request description:
Fixes: #10071.
Done:
- adds `-includeconf=<path>`, where `<path>` is relative to `datadir` or to the path of the file being read, if in a file
- protects against circular includes
- updates help docs
~~~Thoughts:~~~
- ~~~I am not sure how to test this in a neat manner. Feedback on this would be nice. Will dig/think though.~~~
Tree-SHA512: cb31f1b2f69fbc0890d264948eb2e501ac05cf12f5e06a5942f9c1539eb15ea8dc3cae817f4073aecb2fcc21d0386747f14f89d990772003a76e2a6d25642553
|
|
|
|
-includeconf=<path> support in config handler, for including external configuration files
|
|
Add logging and error handling inside and outside of FileCommit.
Functions such as fsync and fdatasync return an error in the case of hardware
I/O errors, and ignoring this means the node can silently continue through
data corruption (cf.
https://lwn.net/SubscriberLink/752063/12b232ab5039efbe/).
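A simplified sketch of the kind of check this adds (the real FileCommit in util.cpp also covers Windows and fdatasync-capable platforms; LogPrintf is Core's logging macro):
```
#include <cerrno>
#include <cstdio>
#include <unistd.h>

bool FileCommit(FILE* file)
{
    if (fflush(file) != 0) { // flush stdio buffers before syncing
        LogPrintf("%s: fflush failed: %d\n", __func__, errno);
        return false;
    }
    if (fsync(fileno(file)) != 0 && errno != EINVAL) { // surface hardware I/O errors
        LogPrintf("%s: fsync failed: %d\n", __func__, errno);
        return false;
    }
    return true;
}
```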
|
|
|
|
Printing to the debug log file can be disabled with -nodebuglogfile.
|
|
|
|
When a network-specific option such as -addnode or -connect is
specified in the default section of the config file, but is ignored
because testnet or regtest is in use, and it is not overridden
by either a command line option or a setting in the [regtest] or [test]
section of the config file, a warning is added to the log, e.g.:
Warning: Config setting for -connect only applied on regtest network when in [regtest] section.
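Sketched warning logic (the helper collecting the affected options is hypothetical; the message format follows the example above):
```
#include <string>
#include <vector>

// Hypothetical helper: network-restricted options (e.g. -addnode, -connect)
// that were only set in the default section of the config file.
std::vector<std::string> DefaultSectionOnlyNetworkArgs();

void WarnAboutDefaultSectionArgs(const std::string& chain_name) // e.g. "regtest"
{
    if (chain_name == "main") return; // the default section fully applies on mainnet
    for (const std::string& arg : DefaultSectionOnlyNetworkArgs()) {
        LogPrintf("Warning: Config setting for %s only applied on %s network when in [%s] section.\n",
                  arg, chain_name, chain_name);
    }
}
```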
|
|
When specified in bitcoin.conf without using the [regtest] or [test]
section header, or a "regtest." or "test." prefix, the "addnode",
"connect", "port", "bind", "rpcport", "rpcbind", and "wallet" settings
will only be applied when running on mainnet.
|
|
|
|
When a -nofoo option is seen, instead of adding it to a separate
set of negated args, set the arg as being an empty vector of strings.
This changes the behaviour in some ways:
- -nofoo=0 still sets foo=1 but no longer treats it as a negated arg
- -nofoo=1 -foo=2 has GetArgs() return [2] rather than [2,0]
- "foo=2 \n -nofoo=1" in a config file no longer returns [2,0], just [0]
- GetArgs returns an empty vector for negated args
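A simplified sketch of the interpretation rule behind the list above (the real logic lives in ArgsManager's option parsing):
```
#include <map>
#include <string>
#include <vector>

// Interpret "-nofoo" as "-foo" holding an empty vector of values, and
// "-nofoo=0" as the double negation "-foo=1".
void InterpretOption(std::string key, const std::string& val,
                     std::map<std::string, std::vector<std::string>>& args)
{
    if (key.compare(0, 3, "-no") == 0) {
        key = "-" + key.substr(3);
        if (val == "0") {
            args[key].push_back("1"); // -nofoo=0 sets foo=1, not a negation
        } else {
            args[key].clear();        // negated: GetArgs() returns an empty vector
        }
        return;
    }
    args[key].push_back(val);
}
```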
|
|
|
|
Although no compiler appears to complain about it, designated
initializers such as `{ .sched_priority = 0 }` are not valid C++11.
(http://en.cppreference.com/w/cpp/language/aggregate_initialization lists them as C++20.)
The structure is defined as:
struct sched_param {
int sched_priority;
};
So passing 0 for the first field has the same effect.
|
|
Nowhere in the man page of `pthread_setschedparam` is it mentioned that
`0` is a valid value for the thread argument. The example there uses `pthread_self()`, so should we.
(noticed by Anthony Towns)
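Putting this together with the sched_param note above, the resulting call looks roughly like this (Linux-oriented sketch; the real helper in util.cpp also logs failures):
```
#include <pthread.h>
#include <sched.h>

// Hint to the kernel that the calling thread runs a non-interactive,
// CPU-heavy workload. Returns 0 on success.
int ScheduleBatchPriority()
{
#ifdef SCHED_BATCH
    const static sched_param param{0}; // sched_priority must be 0 for SCHED_BATCH
    return pthread_setschedparam(pthread_self(), SCHED_BATCH, &param);
#else
    return 1; // not supported on this platform
#endif
}
```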
|
|
network-specific sections
77a733a99 [tests] Add additional unit tests for -nofoo edge cases (Anthony Towns)
af173c2be [tests] Check GetChainName works with config entries (Anthony Towns)
fa27f1c23 [tests] Add unit tests for ReadConfigStream (Anthony Towns)
087c5d204 ReadConfigStream: assume the stream is good (Anthony Towns)
6d5815aad Separate out ReadConfigStream from ReadConfigFile (Anthony Towns)
834d30341 [tests] Add unit tests for GetChainName (Anthony Towns)
11b6b5b86 Move ChainNameFromCommandLine into ArgsManager and rename to GetChainName (Anthony Towns)
Pull request description:
This does a bit of refactoring of the configuration handling code in order to add additional tests to make adding support for [test]/[regtest] sections in the config file in #11862 easier. Should not cause any behaviour changes.
Tree-SHA512: 8d2ce1449fc180de03414e7e569d1a21ba1e9f6564e13d3faf3961f710adc725fa0d4ab49b89ebd2baa11ea36ac5018377f693a84037d386a8b8697c9d6db3e9
|
|
d54874d Set SCHED_BATCH priority on the loadblk thread. (Evan Klitzke)
Pull request description:
Today I came across #10271, and while reading the discussion #6358 was linked to. Linux systems have a `SCHED_BATCH` scheduler priority that is useful for threads like loadblk. You can find the full details at [sched(7)](http://man7.org/linux/man-pages/man7/sched.7.html), but I'll quote the relevant part of the man page below:
> ...this policy will cause the scheduler to always assume that the thread is
CPU-intensive. Consequently, the scheduler will apply a small scheduling penalty
with respect to wakeup behavior, so that this thread is mildly disfavored in
scheduling decisions.
>
> This policy is useful for workloads that are noninteractive, but do not want to
lower their nice value, and for workloads that want a deterministic scheduling
policy without interactivity causing extra preemptions (between the workload's
tasks).
I think this change is useful independently of #10271 and irrespective of whether that change is merged. Under normal operation the loadblk thread will just import `mempool.dat`. However, if Bitcoin is started with `-reindex` or `-reindex-chainstate` this thread will use a great deal of CPU while it rebuilds the chainstate database (and the block database in the case of `-reindex`). By setting `SCHED_BATCH` this thread is less likely to interfere with interactive tasks (e.g. the user's web browser, text editor, etc.).
I'm leaving the nice value unchanged (which also affects scheduling decisions) because I think that's better set by the user. Likewise I'm not using [ioprio_set(2)](http://man7.org/linux/man-pages/man2/ioprio_set.2.html) because it can cause the thread to become completely I/O starved (and knowledgeable users can use `ionice(1)` anyway).
Tree-SHA512: ea8f7d3921ed5708948809da771345cdc33efd7ba3323e9dfec07a25bc21e8612e2676f9c178e2710c7bc437e8c9cafc5e0463613688fea5699b6e8e2fec6cff
|
|
|
|
|
|
|
|
This ensures consistency across interfaces and makes the version handling clearer.
|
|
This commit adds tracking for negated arguments. This change will be used in a
future commit that allows disabling the debug.log file using -nodebuglogfile.
|
|
a192636 -blocksdir: keep blockindex leveldb database in datadir (Jonas Schnelli)
f38e4fd QA: Add -blocksdir test (Jonas Schnelli)
386a6b6 Allow to optional specify the directory for the blocks storage (Jonas Schnelli)
Pull request description:
Since the actual block files are taking up more and more space, it may be desirable to have them stored in a different location than the data directory (use case: SSD for chainstate, etc., HDD for blocks).
This PR adds a `-blocksdir` option that allows one to keep the block files and the block index external to the data directory (instead of creating symlinks).
I first had an option to keep the blockindex within the datadir, but it seems to make no sense since accessing the index will (always) lead to accessing (r/w) the block files.
Tree-SHA512: f8b9e1a681679eac25076dc30e45e6e12d4b2d9ac4be907cbea928a75af081dbcb0f1dd3e97169ab975f73d0bd15824c00c2a34638f3b284b39017171fce2409
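Roughly how the lookup behaves with the new option (simplified sketch; fs, gArgs and GetDataDir are Core's existing wrappers, and the real implementation also handles network subdirectories and caching):
```
fs::path GetBlocksDir()
{
    if (gArgs.IsArgSet("-blocksdir")) {
        // Explicit override: keep block files outside the data directory.
        return fs::system_complete(gArgs.GetArg("-blocksdir", "")) / "blocks";
    }
    // Default behaviour is unchanged: <datadir>/blocks
    return GetDataDir(false) / "blocks";
}
```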
|
|
While reading another PR I saw a mention of #6358. The use case for
SCHED_BATCH is to hint to the kernel that the thread is running a
non-interactive workload that consumes a lot of CPU time. This is
helpful on desktop machines where the loadblk thread can interfere with
interactive applications. More details can be found in the sched(7) man
page.
|
|
8674e74 Provide relevant error message if datadir is not writable. (murrayn)
Pull request description:
If the --datadir exists, but is not writable, the current error message on startup is 'Cannot obtain a lock on data directory foo. Bitcoin Core is probably already running.' This is misleading.
I believe this PR addresses #11668, although the issue is not Windows-specific.
Tree-SHA512: 10cbbaea433072aee4fb3e8938a72073c7a5c841f7a7685c9e12549c322b2925c7d34bac254ac33021b23132bfc352c058712bc9542298cf86f8fd9757f528b2
|
|
|
|
7ef46d063a Remove redundant includes. Conform to header include guidelines. (practicalswift)
Pull request description:
From the header include guidelines ([developer-notes.md](https://github.com/bitcoin/bitcoin/blob/master/doc/developer-notes.md#source-code-organization)):
> "One exception is that a `.cpp` file does not need to re-include the includes already included in its corresponding `.h` file."
Covered in this PR:
* `rpc/util.h` includes `pubkey.h` + `utilstrencodings.h`. `rpc/util.cpp` includes `rpc/util.h`.
* `util.h` includes `fs.h`. `util.cpp` includes `util.h`.
Tree-SHA512: a38d9ecefd8165ad151c1ffde52cfbac968526c49db2080988bf6e6a3daa2ebeceb34d08f817e275edf7c650bf3155de01369bfb352522f8e0ae136b2289b194
|
|
|
|
|
|
* Z is the zone designator for the zero UTC offset.
* T is the delimiter used to separate date and time.
This makes it clear for the end-user that the date/time logged is
specified in UTC and not in the local time zone.
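For reference, a timestamp in that form can be produced like this (a sketch with a hypothetical helper name, not the exact logging code):
```
#include <cstdint>
#include <ctime>
#include <string>

// Format a unix timestamp as e.g. "2018-05-03T14:07:31Z":
// 'T' separates date and time, 'Z' marks the zero UTC offset.
std::string FormatLogTimestamp(int64_t unix_time)
{
    const std::time_t t = static_cast<std::time_t>(unix_time);
    std::tm utc{};
    gmtime_r(&t, &utc); // POSIX: convert to UTC, not local time
    char buf[sizeof("2000-01-01T01:01:01Z")];
    std::strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%SZ", &utc);
    return buf;
}
```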
|
|
Use std::thread::hardware_concurrency, instead of Boost, to determine available cores
937bf4335 Use std::thread::hardware_concurrency, instead of Boost, to determine available cores (fanquake)
Pull request description:
Following discussion on IRC about replacing Boost usage for detecting available system cores, I've opened this to collect some benchmarks + further discussion.
The current method for detecting available cores was introduced in #6361.
Recap of the IRC chat:
```
21:14:08 fanquake: Since we seem to be giving Boost removal a good shot for 0.15, does anyone have suggestions for replacing GetNumCores?
21:14:26 fanquake: There is std::thread::hardware_concurrency(), but that seems to count virtual cores, which I don't think we want.
21:14:51 BlueMatt: fanquake: I doubt we'll do boost removal for 0.15
21:14:58 BlueMatt: shit like BOOST_FOREACH, sure
21:15:07 BlueMatt: but all of boost? doubtful, there are still things we need
21:16:36 fanquake: Yea sorry, not the whole lot, but we can remove a decent chunk. Just looking into what else needs to be done to replace some of the less involved Boost usage.
21:16:43 BlueMatt: fair
21:17:14 wumpus: yes, it makes sense to plan ahead a bit, without immediately doing it
21:18:12 wumpus: right, don't count virtual cores, that used to be the case but it makes no sense for our usage
21:19:15 wumpus: it'd create a swarm of threads overwhelming any machine with hyperthreading (+accompanying thread stack overhead), for script validation, and there was no gain at all for that
21:20:03 sipa: BlueMatt: don't worry, there is no hurry
21:59:10 morcos: wumpus: i don't think that is correct
21:59:24 morcos: suppose you have 4 cores (8 virtual cores)
21:59:24 wumpus: fanquake: indeed seems that std has no equivalent to physical_concurrency, on any standard. That's annoying as it is non-trivial to implement
21:59:35 morcos: i think running par=8 (if it let you) would be notably faster
21:59:59 morcos: jeremyrubin and i discussed this at length a while back... i think i commented about it on irc at the time
22:00:21 wumpus: morcos: I think the conclusion at the time was that it made no difference, but sure would make sense to benchmark
22:00:39 morcos: perhaps historical testing on the virtual vs actual cores was polluted by concurrency issues that have now improved
22:00:47 wumpus: I think there are not more ALUs, so there is not really a point in having more threads
22:01:40 wumpus: hyperthreads are basically just a stored register state right?
22:02:23 sipa: wumpus: yes but it helps the scheduler
22:02:27 wumpus: in which case the only speedup using "number of cores" threads would give you is, possibly, excluding other software from running on the cores on the same time
22:02:37 morcos: well this is where i get out of my depth
22:02:50 sipa: if one of the threads is waiting on a read from ram, the other can use the arithmetic unit for example
22:02:54 morcos: wumpus: i'm pretty sure though that the speed up is considerably more than what you might expect from that
22:02:59 wumpus: sipa: ok, I back down, I didn't want to argue this at all
22:03:35 morcos: the reason i haven't tested it myself, is the machine i usually use has 16 cores... so not easy due to remaining concurrency issues to get much more speedup
22:03:36 wumpus: I'm fine with restoring it to number of virtual threads if that's faster
22:03:54 morcos: we should have somene with 4 cores (and  actually test it though, i agree
22:03:58 sipa: i would expect (but we should benchmark...) that if 8 scriot validation threads instead of 4 on a quadcore hyperthreading is not faster, it's due to lock contention
22:04:20 morcos: sipa: yeah thats my point, i think lock contention isn't that bad with 8 now
22:04:22 wumpus: on 64-bit systems the additional thread overhead wouldn't be important at least
22:04:23 gmaxwell: I previously benchmarked, a long time ago, it was faster.
22:04:33 gmaxwell: (to use the HT core count)
22:04:44 wumpus: why was this changed at all then?
22:04:47 wumpus: I'm confused
22:05:04 sipa: good question!
22:05:06 gmaxwell: I had no idea we changed it.
22:05:25 wumpus: sigh 
22:05:54 gmaxwell: What PR changed it?
22:06:51 gmaxwell: In any case, on 32-bit it's probably a good tradeoff... the extra ram overhead is worth avoiding.
22:07:22 wumpus: https://github.com/bitcoin/bitcoin/pull/6361
22:07:28 gmaxwell: PR 6461 btw.
22:07:37 gmaxwell: er lol at least you got it right.
22:07:45 wumpus: the complaint was that systems became unsuably slow when using that many thread
22:07:51 wumpus: so at least I got one thing right, woohoo
22:07:55 sipa: seems i even acked it!
22:07:57 BlueMatt: wumpus: there are more alus
22:08:38 BlueMatt: but we need to improve lock contention first
22:08:40 morcos: anywya, i think in the past the lock contention made 8 threads regardless of cores a bit dicey.. now that is much better (although more still to be done)
22:09:01 BlueMatt: or we can just merge #10192, thats fee
22:09:04 gribble: https://github.com/bitcoin/bitcoin/issues/10192 | Cache full script execution results in addition to signatures by TheBlueMatt · Pull Request #10192 · bitcoin/bitcoin · GitHub
22:09:11 BlueMatt: s/fee/free/
22:09:21 morcos: no, we do not need to improve lock contention first. but we should probably do that before we increase the max beyond 16
22:09:26 BlueMatt: then we can toss concurrency issues out the window and get more speedup anyway
22:09:35 gmaxwell: wumpus: yea, well in QT I thought we also diminished the count by 1 or something? but yes, if the motivation was to reduce how heavily the machine was used, thats fair.
22:09:56 sipa: the benefit of using HT cores is certainly not a factor 2
22:09:58 wumpus: gmaxwell: for the default I think this makes a lot of sense, yes
22:10:10 gmaxwell: morcos: right now on my 24/28 physical core hosts going beyond 16 still reduces performance.
22:10:11 wumpus: gmaxwell: do we also restrict the maximum par using this? that'd make less sense
22:10:51 wumpus: if someone *wants* to use the virtual cores they should be able to by setting -par=
22:10:51 sipa: *flies to US*
22:10:52 BlueMatt: sipa: sure, but the shared cache helps us get more out of it than some others, as morcos points out
22:11:30 BlueMatt: (because it means our thread contention issues are less)
22:12:05 morcos: gmaxwell: yeah i've been bogged down in fee estimation as well (and the rest of life) for a while now.. otherwise i would have put more effort into jeremy's checkqueue
22:12:36 BlueMatt: morcos: heh, well now you can do other stuff while the rest of us get bogged down in understanding fee estimation enough to review it 
22:12:37 wumpus: [to answer my own question: no, the limit for par is MAX_SCRIPTCHECK_THREADS, or 16]
22:12:54 morcos: but to me optimizing for more than 16 cores is pretty valuable as miners could use beefy machines and be less concerned by block validation time
22:14:38 BlueMatt: morcos: i think you may be surprised by the number of mining pools that are on VPSes that do not have 16 cores 
22:15:34 gmaxwell: I assume right now most of the time block validation is bogged in the parts that are not as concurrent. simple because caching makes the concurrent parts so fast. (and soon to hopefully increase with bluematt's patch)
22:17:55 gmaxwell: improving sha2 speed, or transaction malloc overhead are probably bigger wins now for connection at the tip than parallelism beyond 16 (though I'd like that too).
22:18:21 BlueMatt: sha2 speed is big
22:18:27 morcos: yeah lots of things to do actually...
22:18:57 gmaxwell: BlueMatt: might be a tiny bit less big if we didn't hash the block header 8 times for every block. 
22:21:27 BlueMatt: ehh, probably, but I'm less rushed there
22:21:43 BlueMatt: my new cache thing is about to add a bunch of hashing
22:21:50 BlueMatt: 1 sha round per tx
22:22:25 BlueMatt: and sigcache is obviously a ton
```
Tree-SHA512: a594430e2a77d8cc741ea8c664a2867b1e1693e5050a4bbc8511e8d66a2bffe241a9965f6dff1e7fbb99f21dd1fdeb95b826365da8bd8f9fab2d0ffd80d5059c
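The replacement itself is essentially a one-liner (sketch):
```
#include <thread>

// Unlike boost::thread::physical_concurrency(), this counts logical
// (hyperthreaded) cores and may return 0 if the value cannot be determined.
int GetNumCores()
{
    return std::thread::hardware_concurrency();
}
```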
|
|
From the header include guidelines (developer-notes.md):
"One exception is that a `.cpp` file does not need to re-include the
includes already included in its corresponding `.h` file."
* rpc/util.h includes pubkey.h + utilstrencodings.h. rpc/util.cpp includes rpc/util.h.
* util.h includes fs.h. util.cpp includes util.h.
|
|
Add a unit test for LockDirectory, introduced in #11281.
|
|
This commit fixes problems with calling LockDirectory multiple times on
the same directory, or from multiple threads. It also fixes the build on
OpenBSD.
- Wrap the boost::interprocess::file_lock in a std::unique_ptr inside
the map that keeps track of per-directory locks. This fixes a build
issue with the clang 4.0.0+boost-1.58.0p8 version combo on OpenBSD
6.2, and should have no observable effect otherwise.
- Protect the locks map using a mutex.
- Make sure that only locks that are successfully acquired are inserted
in the map.
- Open the lock file for appending only if we know we don't have the
  lock yet: the `FILE* file = fsbridge::fopen(pathLockFile, "a");` call
  wipes the 'we own this lock' administration, likely because it opens
  a new fd for the locked file and then closes it.
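The shape of the fix, approximately (fs and fsbridge are Core's filesystem wrappers; error handling around the boost lock is omitted):
```
#include <cstdio>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <boost/interprocess/sync/file_lock.hpp>

// One entry per directory lock we currently hold; the mutex makes
// LockDirectory safe to call from multiple threads.
static std::mutex cs_dir_locks;
static std::map<std::string, std::unique_ptr<boost::interprocess::file_lock>> dir_locks;

bool LockDirectory(const fs::path& directory, const std::string& lockfile_name)
{
    std::lock_guard<std::mutex> guard(cs_dir_locks);
    const fs::path pathLockFile = directory / lockfile_name;
    if (dir_locks.count(pathLockFile.string())) return true; // we already hold this lock

    // Only touch the lock file if we don't hold the lock yet, so the
    // 'we own this lock' state of an existing file_lock is never wiped.
    if (FILE* file = fsbridge::fopen(pathLockFile, "a")) fclose(file);

    std::unique_ptr<boost::interprocess::file_lock> lock(
        new boost::interprocess::file_lock(pathLockFile.string().c_str()));
    if (!lock->try_lock()) return false;

    // Only successfully acquired locks end up in the map.
    dir_locks.emplace(pathLockFile.string(), std::move(lock));
    return true;
}
```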
|
|
Most commandline/config args are interpreted as relative to datadir if
not passed absolute. Consolidate the logic for this normalization.
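A sketch of the consolidated helper (close to the idea described above; treat the exact name and default as illustrative):
```
// An absolute path is used as-is; a relative one is interpreted relative
// to the (optionally network-specific) data directory.
fs::path AbsPathForConfigVal(const fs::path& path, bool net_specific = true)
{
    return fs::absolute(path, GetDataDir(net_specific));
}
```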
|
|
|
|
|
|
Use std::thread::hardware_concurrency, instead of Boost, to determine available cores
|