This ensures consistency across interfaces and makes the version handling clearer.
* inline performance-critical code
* Average runtime is specified and used to calculate iterations.
* Console: show median of multiple runs
* plot: show box plot
* filter benchmarks
* specify scaling factor
* ignore src/test and src/bench in command line check script
* number of iterations instead of time
* Replaced runtime in the BENCHMARK macro with a number of iterations (see the sketch below).
* Added -? to bench_bitcoin
* Benchmark plotly.js URL, width, height can be customized
* Fixed incorrect precision warning
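For illustration, a benchmark under the iteration-based interface might look like the following sketch. The benchmark name and the exact BENCHMARK signature are illustrative, not taken verbatim from the commit:

#include "bench/bench.h"

// The loop body is what gets timed; KeepRunning() advances the
// framework's iteration counter instead of watching the clock.
static void TrivialAddition(benchmark::State& state)
{
    volatile int x = 0; // volatile so the loop is not optimized away
    while (state.KeepRunning()) {
        x += 1;
    }
}

// Second argument: iterations per run, replacing the old wall-clock
// runtime; the console then reports the median of multiple runs.
BENCHMARK(TrivialAddition, 100 * 1000);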
-BEGIN VERIFY SCRIPT-
for f in \
src/*.cpp \
src/*.h \
src/bench/*.cpp \
src/bench/*.h \
src/compat/*.cpp \
src/compat/*.h \
src/consensus/*.cpp \
src/consensus/*.h \
src/crypto/*.cpp \
src/crypto/*.h \
src/crypto/ctaes/*.h \
src/policy/*.cpp \
src/policy/*.h \
src/primitives/*.cpp \
src/primitives/*.h \
src/qt/*.cpp \
src/qt/*.h \
src/qt/test/*.cpp \
src/qt/test/*.h \
src/rpc/*.cpp \
src/rpc/*.h \
src/script/*.cpp \
src/script/*.h \
src/support/*.cpp \
src/support/*.h \
src/support/allocators/*.h \
src/test/*.cpp \
src/test/*.h \
src/wallet/*.cpp \
src/wallet/*.h \
src/wallet/test/*.cpp \
src/wallet/test/*.h \
src/zmq/*.cpp \
src/zmq/*.h
do
base=${f%/*}/ relbase=${base#src/} sed -i "s:#include \"\(.*\)\"\(.*\):if test -e \$base'\\1'; then echo \"#include <\"\$relbase\"\\1>\\2\"; else echo \"#include <\\1>\\2\"; fi:e" $f
done
-END VERIFY SCRIPT-
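To see what one pass of the sed program above does, here is a self-contained toy demo. It is not part of the commit; it assumes GNU sed, whose 'e' flag executes the substituted pattern space as a shell command, and uses demo/ in place of src/:

mkdir -p demo/bench
printf '#include "bench.h"\n#include "crypto/sha256.h"\n' > demo/bench/example.cpp
touch demo/bench/bench.h   # this header exists next to the source file

f=demo/bench/example.cpp
base=${f%/*}/ relbase=${base#demo/} sed -i "s:#include \"\(.*\)\"\(.*\):if test -e \$base'\\1'; then echo \"#include <\"\$relbase\"\\1>\\2\"; else echo \"#include <\\1>\\2\"; fi:e" $f

cat demo/bench/example.cpp
# prints:
#   #include <bench/bench.h>     (header found relative to the file, so the
#                                 path gets its src-relative directory prefix)
#   #include <crypto/sha256.h>   (header not found locally, path kept as-is)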
Call RandomInit() in bench_bitcoin to initialize the RNG so that it
does not cause an assertion error.
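A minimal sketch of the kind of change described, assuming the bench entry point in src/bench/bench_bitcoin.cpp; the RunAll() call is simplified here and its real signature may differ:

#include "bench/bench.h"
#include "random.h"

int main(int argc, char** argv)
{
    // Initialize RNG state (including hardware RNG detection) before
    // any benchmark code asks for random bytes, so the RNG does not
    // hit an assertion on uninitialized state.
    RandomInit();

    benchmark::BenchRunner::RunAll(); // simplified; the real call may take arguments
    return 0;
}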
Edited via:
$ contrib/devtools/copyright_header.py update .
Benchmarking framework, loosely based on Google's micro-benchmarking
library (https://github.com/google/benchmark).
Why not use the Google Benchmark framework? Because adding Even More Dependencies
isn't worth it. If we get a dozen or three benchmarks and need nanosecond-accurate
timings of threaded code, then switching to the full-blown Google Benchmark library
should be considered.
The benchmark framework is hard-coded to run each benchmark for one wall-clock second,
and then spits out .csv-format timing information to stdout. It is left as an
exercise for later (or maybe never) to add command-line arguments to specify which
benchmark(s) to run, how long to run them for, how to format results, etc etc etc.
Again, see the Google Benchmark framework for where that might end up.
See src/bench/MilliSleep.cpp for a sanity-test benchmark that just benchmarks
'sleep 100 milliseconds.'
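That sanity-test benchmark is roughly of this shape (a sketch following the description above, not a verbatim copy of src/bench/MilliSleep.cpp):

#include "bench.h"
#include "utiltime.h"

// Benchmarks 'sleep 100 milliseconds'; KeepRunning() keeps iterating
// until the framework's one-second wall-clock budget is used up.
static void Sleep100ms(benchmark::State& state)
{
    while (state.KeepRunning()) {
        MilliSleep(100);
    }
}

BENCHMARK(Sleep100ms);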
To compile and run benchmarks:
cd src; make bench
Sample output:
Benchmark,count,min,max,average
Sleep100ms,10,0.101854,0.105059,0.103881