Diffstat (limited to 'docs/development')
-rw-r--r--   docs/development/CONTRIBUTING.md        | 183
-rw-r--r--   docs/development/PROFILING.md           |  95
-rw-r--r--   docs/development/coverage.md            |  84
-rw-r--r--   docs/development/sytest.md              | 139
-rw-r--r--   docs/development/tracing/opentracing.md | 114
-rw-r--r--   docs/development/tracing/setup.md       |  57
6 files changed, 672 insertions, 0 deletions
diff --git a/docs/development/CONTRIBUTING.md b/docs/development/CONTRIBUTING.md
new file mode 100644
index 00000000..2aec4c36

---
title: Contributing
parent: Development
permalink: /development/contributing
---

# Contributing to Dendrite

Everyone is welcome to contribute to Dendrite! We aim to make it as easy as
possible to get started.

## Contribution types

We are a small team maintaining a large project. As a result, we cannot merge
every feature, even if it is bug-free and useful, because we then commit to
maintaining it indefinitely. We will always accept:

- bug fixes
- security fixes (please responsibly disclose via security@matrix.org *before*
  creating pull requests)

We will accept the following with caveats:

- documentation fixes, provided they do not add additional instructions which
  can end up going out-of-date, e.g. example configs or shell commands
- performance fixes, provided they do not add significantly more maintenance
  burden
- additional functionality on existing features, provided the functionality is
  small and maintainable
- additional functionality that, in its absence, would impact the ecosystem,
  e.g. spam and abuse mitigations
- test-only changes, provided they help improve coverage or test tricky code

The following items are at risk of not being accepted:

- configuration or CLI changes, particularly ones which increase the overall
  configuration surface

The following items are unlikely to be accepted into a main Dendrite release
for now:

- new MSC implementations
- new features which are not in the specification

## Sign off

We require that everyone who contributes to the project signs off their
contributions in accordance with the
[Developer Certificate of Origin](https://github.com/matrix-org/matrix-spec/blob/main/CONTRIBUTING.rst#sign-off).
In effect, this means adding a statement to your pull requests or commit
messages along the lines of:

```
Signed-off-by: Full Name <email address>
```

Unfortunately we can't accept contributions without a sign-off.

Please note that we can only accept contributions under a legally identifiable
name, such as your name as it appears on government-issued documentation or
common-law names (claimed by legitimate usage or repute). We cannot accept
sign-offs from a pseudonym or alias and cannot accept anonymous contributions.

If you would prefer to sign off privately instead (so as to not reveal your
full name on a public pull request), you can do so by emailing a sign-off
declaration and a link to your pull request directly to the
[Matrix.org Foundation](https://matrix.org/foundation/) at `dco@matrix.org`.
Once a private sign-off has been made, you will not be required to do so for
future contributions.

## Getting up and running

See the [Installation](../installation) section for information on how to
build an instance of Dendrite. You will likely need this in order to test your
changes.

## Code style

On the whole, the format as prescribed by `gofmt`, `goimports` etc. is exactly
what we use and expect. Please make sure that you run one of these formatters
before submitting your contribution.

## Comments

Please make sure that your comments adequately explain *why* your code does
what it does. If there are statements that are not obvious, please comment
what they do.

We also have some special tags which we use for searchability. These are:

* `// TODO:` for places where a future review, rewrite or refactor is likely
  required;
* `// FIXME:` for places where we know there is an outstanding bug that needs
  a fix;
* `// NOTSPEC:` for places where the behaviour specifically does not match what
  the [Matrix Specification](https://spec.matrix.org/) prescribes, along with a
  description of *why* that is the case.
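For example, hypothetical comments using these tags (purely illustrative, not
taken from the real codebase) might look like:

```go
// TODO: Flatten this loop once the upstream API returns a single list.
// FIXME: This check is racy if two requests arrive at the same time.
// NOTSPEC: The spec says to reject this event here, but several clients rely
// on it being accepted, so we accept it too.
func handleExampleEvent() {}
```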
## Linting

We use [golangci-lint](https://github.com/golangci/golangci-lint) to lint
Dendrite, which can be executed via:

```bash
golangci-lint run
```

If you are receiving linter warnings that you are certain are spurious and want
to silence them, you can annotate the relevant lines or methods with a
`// nolint:` comment. Please avoid doing this if you can.

## Unit tests

We also have unit tests which we run via:

```bash
DENDRITE_TEST_SKIP_NODB=1 go test --race ./...
```

This only runs the SQLite database tests. If you wish to execute the Postgres
tests as well, you'll either need to have Postgres installed locally
(`createdb` will be used) or have a remote/containerized Postgres instance
available.

To configure the connection to a remote Postgres, you can use the following
environment variables:

```bash
POSTGRES_USER=postgres
POSTGRES_PASSWORD=yourPostgresPassword
POSTGRES_HOST=localhost
POSTGRES_DB=postgres # the superuser database to use
```
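For example, assuming a local or remote Postgres instance reachable with those
credentials (the values above are placeholders for your own setup), the
Postgres-backed tests can then be run with something like:

```bash
POSTGRES_USER=postgres \
POSTGRES_PASSWORD=yourPostgresPassword \
POSTGRES_HOST=localhost \
POSTGRES_DB=postgres \
go test --race ./...
```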
In general, we like submissions that come with tests. Anything that proves that
the code is functioning as intended is great, and it ensures that we will find
out quickly in the future if any regressions happen.

We use the standard [Go testing package](https://gobyexample.com/testing) for
this, alongside some helper functions in our own
[`test` package](https://pkg.go.dev/github.com/matrix-org/dendrite/test).

## Continuous integration

When a Pull Request is submitted, continuous integration jobs are run
automatically by GitHub Actions to ensure that the code builds and works in a
number of configurations, such as different Go versions, using full HTTP APIs
and both database engines. CI will automatically run the unit tests (as above)
as well as both of our integration test suites
([Complement](https://github.com/matrix-org/complement) and
[SyTest](https://github.com/matrix-org/sytest)).

You can see the progress of any CI jobs at the bottom of the Pull Request page,
or by looking at the [Actions](https://github.com/matrix-org/dendrite/actions)
tab of the Dendrite repository.

We generally won't accept a submission unless all of the CI jobs are passing.
We do understand though that sometimes the tests get things wrong; if that's
the case, please also raise a pull request to fix the relevant tests!

### Running CI tests locally

To save waiting for CI to finish after every commit, it is best to run the
checks locally before pushing and fix any errors first. This also saves other
people time, as only so many PRs can be tested at a given time.

To run the same checks as CI, first run `./build/scripts/build-test-lint.sh`;
this script will build the code, lint it, and run `go test ./...` with race
condition checking enabled. If something needs to be changed, fix it and then
run the script again until it no longer complains. Be warned that the linting
can take a significant amount of CPU and RAM.

Once the code builds, run [SyTest](https://github.com/matrix-org/sytest)
according to the guide in
[docs/development/sytest.md](https://github.com/matrix-org/dendrite/blob/main/docs/development/sytest.md#using-a-sytest-docker-image)
so you can see whether something is being broken and whether there are newly
passing tests.

If these two steps report no problems, the code should be able to pass the CI
tests.

## Picking things to do

If you're new then feel free to pick up an issue labelled
[good first issue](https://github.com/matrix-org/dendrite/labels/good%20first%20issue).
These should be well-contained, small pieces of work that can be picked up to
help you get familiar with the code base.

Once you're comfortable with hacking on Dendrite there are issues labelled as
[help wanted](https://github.com/matrix-org/dendrite/labels/help-wanted);
these are often slightly larger or more complicated pieces of work but are
hopefully nonetheless fairly well-contained.

We ask people who are familiar with Dendrite to leave the
[good first issue](https://github.com/matrix-org/dendrite/labels/good%20first%20issue)
issues so that there is always a way for new people to come and get involved.

## Getting help

For questions related to developing on Dendrite we have a dedicated room on
Matrix [#dendrite-dev:matrix.org](https://matrix.to/#/#dendrite-dev:matrix.org)
where we're happy to help.

For more general questions please use
[#dendrite:matrix.org](https://matrix.to/#/#dendrite:matrix.org).

diff --git a/docs/development/PROFILING.md b/docs/development/PROFILING.md
new file mode 100644
index 00000000..f3b57347

---
title: Profiling
parent: Development
permalink: /development/profiling
---

# Profiling Dendrite

If you are running into problems with Dendrite using excessive resources
(e.g. CPU or RAM), then you can use the profiler to work out what is happening.

Dendrite contains an embedded profiler called `pprof`, which is part of the
standard Go toolchain.

## Enable the profiler

To enable the profiler, start Dendrite with the `PPROFLISTEN` environment
variable. This variable specifies which address and port to listen on, e.g.

```
PPROFLISTEN=localhost:65432 ./bin/dendrite-monolith-server ...
```

If pprof has been enabled successfully, a log line at startup will show that
pprof is listening:

```
WARN[2020-12-03T13:32:33.669405000Z] [/Users/neilalexander/Desktop/dendrite/internal/log.go:87] SetupPprof
 Starting pprof on localhost:65432
```

All examples from this point forward assume `PPROFLISTEN=localhost:65432`, but
you may need to adjust as necessary for your setup.

## Profiling CPU usage

To examine where CPU time is going, you can call the `profile` endpoint:

```
http://localhost:65432/debug/pprof/profile?seconds=30
```

The profile will run for the specified number of `seconds` and then will
produce a result.

### Examine a profile using the Go toolchain

If you have Go installed and want to explore the profile, you can invoke
`go tool pprof` to start the profile directly. The `-http=` parameter will
instruct `go tool pprof` to start a web server providing a view of the captured
profile:

```
go tool pprof -http=localhost:23456 http://localhost:65432/debug/pprof/profile?seconds=30
```

You can then visit `http://localhost:23456` in your web browser to see a visual
representation of the profile.
Particularly usefully, in the "View" menu, you can select "Flame Graph" to see
a proportional interactive graph of CPU usage.

### Download a profile to send to someone else

If you don't have the Go tools installed but just want to capture the profile
to send to someone else, you can instead use `curl` to download the profiler
results:

```
curl -O http://localhost:65432/debug/pprof/profile?seconds=30
```

This will block for the specified number of seconds, capturing information
about what Dendrite is doing, and then produce a `profile` file, which you can
send onward.

## Profiling memory usage

To examine where memory usage is going, you can call the `heap` endpoint:

```
http://localhost:65432/debug/pprof/heap
```

The profile will return almost instantly.

### Examine a profile using the Go toolchain

If you have Go installed and want to explore the profile, you can invoke
`go tool pprof` to start the profile directly. The `-http=` parameter will
instruct `go tool pprof` to start a web server providing a view of the captured
profile:

```
go tool pprof -http=localhost:23456 http://localhost:65432/debug/pprof/heap
```

You can then visit `http://localhost:23456` in your web browser to see a visual
representation of the profile. The "Sample" menu lets you select between four
different memory profiles:

* `inuse_space`: Shows how much actual heap memory is allocated per function
  (this is generally the most useful profile when diagnosing high memory usage)
* `inuse_objects`: Shows how many heap objects are allocated per function
* `alloc_space`: Shows how much memory has been allocated per function
  (although that memory may have since been deallocated)
* `alloc_objects`: Shows how many allocations have been made per function
  (although that memory may have since been deallocated)

Also, in the "View" menu, you can select "Flame Graph" to see a proportional
interactive graph of the memory usage.

### Download a profile to send to someone else

If you don't have the Go tools installed but just want to capture the profile
to send to someone else, you can instead use `curl` to download the profiler
results:

```
curl -O http://localhost:65432/debug/pprof/heap
```

This will almost instantly produce a `heap` file, which you can send onward.
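Whoever receives the file can then inspect it with the same Go tooling. For
example, assuming the downloaded `heap` (or CPU `profile`) file is in the
current directory:

```bash
go tool pprof -http=localhost:23456 ./heap
```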
diff --git a/docs/development/coverage.md b/docs/development/coverage.md
new file mode 100644
index 00000000..7a3b7cb9

---
title: Coverage
parent: Development
permalink: /development/coverage
---

To generate a test coverage report for SyTest, a small patch needs to be
applied to the SyTest repository to compile and use the instrumented binary:

```patch
diff --git a/lib/SyTest/Homeserver/Dendrite.pm b/lib/SyTest/Homeserver/Dendrite.pm
index 8f0e209c..ad057e52 100644
--- a/lib/SyTest/Homeserver/Dendrite.pm
+++ b/lib/SyTest/Homeserver/Dendrite.pm
@@ -337,7 +337,7 @@ sub _start_monolith

    $output->diag( "Starting monolith server" );
    my @command = (
-      $self->{bindir} . '/dendrite-monolith-server',
+      $self->{bindir} . '/dendrite-monolith-server', '--test.coverprofile=' . $self->{hs_dir} . '/integrationcover.log', "DEVEL",
       '--config', $self->{paths}{config},
       '--http-bind-address', $self->{bind_host} . ':' . $self->unsecure_port,
       '--https-bind-address', $self->{bind_host} . ':' . $self->secure_port,
diff --git a/scripts/dendrite_sytest.sh b/scripts/dendrite_sytest.sh
index f009332b..7ea79869 100755
--- a/scripts/dendrite_sytest.sh
+++ b/scripts/dendrite_sytest.sh
@@ -34,7 +34,8 @@ export GOBIN=/tmp/bin
 echo >&2 "--- Building dendrite from source"
 cd /src
 mkdir -p $GOBIN
-go install -v ./cmd/dendrite-monolith-server
+# go install -v ./cmd/dendrite-monolith-server
+go test -c -cover -covermode=atomic -o $GOBIN/dendrite-monolith-server -coverpkg "github.com/matrix-org/..." ./cmd/dendrite-monolith-server
 go install -v ./cmd/generate-keys
 cd -
```

Then run SyTest. This will generate a new file `integrationcover.log` in each
server's directory, e.g. `server-0/integrationcover.log`. To parse it, ensure
your working directory is under the Dendrite repository, then run:

```bash
go tool cover -func=/path/to/server-0/integrationcover.log
```

which will produce output like:

```
...
github.com/matrix-org/util/json.go:83:    NewJSONRequestHandler    100.0%
github.com/matrix-org/util/json.go:90:    Protect                  57.1%
github.com/matrix-org/util/json.go:110:   RequestWithLogging       100.0%
github.com/matrix-org/util/json.go:132:   MakeJSONAPI              70.0%
github.com/matrix-org/util/json.go:151:   respond                  61.5%
github.com/matrix-org/util/json.go:180:   WithCORSOptions          0.0%
github.com/matrix-org/util/json.go:191:   SetCORSHeaders           100.0%
github.com/matrix-org/util/json.go:202:   RandomString             100.0%
github.com/matrix-org/util/json.go:210:   init                     100.0%
github.com/matrix-org/util/unique.go:13:  Unique                   91.7%
github.com/matrix-org/util/unique.go:48:  SortAndUnique            100.0%
github.com/matrix-org/util/unique.go:55:  UniqueStrings            100.0%
total:                                    (statements)             53.7%
```

The total coverage for this run is the last line at the bottom. However, this
value is misleading, because Dendrite can run in many different configurations
which will never be tested in a single test run (e.g. SQLite or Postgres,
monolith or polylith). To get a more accurate value, additional processing is
required to remove packages which will never be tested and extension MSCs:

```bash
# These commands are all similar but change which package paths are _removed_ from the output.

# For Postgres (monolith)
go tool cover -func=/path/to/server-0/integrationcover.log | grep 'github.com/matrix-org/dendrite' | grep -Ev 'inthttp|sqlite|setup/mscs|api_trace' > coverage.txt

# For Postgres (polylith)
go tool cover -func=/path/to/server-0/integrationcover.log | grep 'github.com/matrix-org/dendrite' | grep -Ev 'sqlite|setup/mscs|api_trace' > coverage.txt

# For SQLite (monolith)
go tool cover -func=/path/to/server-0/integrationcover.log | grep 'github.com/matrix-org/dendrite' | grep -Ev 'inthttp|postgres|setup/mscs|api_trace' > coverage.txt

# For SQLite (polylith)
go tool cover -func=/path/to/server-0/integrationcover.log | grep 'github.com/matrix-org/dendrite' | grep -Ev 'postgres|setup/mscs|api_trace' > coverage.txt
```

A total value can then be calculated using:

```bash
cat coverage.txt | awk -F '\t+' '{x = x + $3} END {print x/NR}'
```
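If you want per-line detail for a single run rather than per-function
percentages, the same profile can also be rendered to an HTML report with the
standard Go tooling (the path below is the same placeholder as above):

```bash
go tool cover -html=/path/to/server-0/integrationcover.log -o coverage.html
```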
We currently do not have a way to combine SyTest/Complement/unit tests into a
single coverage report.

diff --git a/docs/development/sytest.md b/docs/development/sytest.md
new file mode 100644
index 00000000..3cfb99e6

---
title: SyTest
parent: Development
permalink: /development/sytest
---

# SyTest

Dendrite uses [SyTest](https://github.com/matrix-org/sytest) for its
integration testing. When creating a new PR, add the test IDs (see below) that
your PR should allow to pass to `sytest-whitelist` in Dendrite's root
directory. Not all PRs need to make new tests pass. If we find your PR should
be making a test pass, we may ask you to add to that file, as generally
Dendrite's progress can be tracked through the number of SyTest tests it
passes.

## Finding out which tests to add

We recommend you run the tests locally by using the SyTest Docker image.
After running the tests, a script will print the tests you need to add to
`sytest-whitelist`.

You should only proceed once you see no build problems for Dendrite after
running:

```sh
./build.sh
```

If you are fixing an issue marked with
[Are We Synapse Yet](https://github.com/matrix-org/dendrite/labels/are-we-synapse-yet),
then there will be a list of SyTests that you should add to the whitelist when
you have fixed that issue. This MUST be included in your PR to ensure that the
issue is fully resolved.

### Using the SyTest Docker image

**We strongly recommend using the Docker image to run SyTest.**

Use the following commands to pull the latest SyTest image and run the tests:

```sh
docker pull matrixdotorg/sytest-dendrite
docker run --rm -v /path/to/dendrite/:/src/ -v /path/to/log/output/:/logs/ matrixdotorg/sytest-dendrite
```

`/path/to/dendrite/` should be replaced with the actual path to your Dendrite
source code. The test results TAP file and homeserver logging output will go to
`/path/to/log/output`. The output of the command should tell you if you need to
add any tests to `sytest-whitelist`.

When debugging, the following Docker `run` options may also be useful:

* `-v /path/to/sytest/:/sytest/`: Use your local SyTest repository at
  `/path/to/sytest` instead of pulling from GitHub. This is useful when you
  want to speed things up or make modifications to SyTest.
* `-v "/path/to/gopath/:/gopath"`: Use your local `GOPATH` so you don't need to
  re-download packages on every run.
* `--entrypoint bash`: Prevent the container from automatically starting the
  tests. When used, you need to manually run `/bootstrap.sh dendrite` inside
  the container to start them (see the example at the end of this section).
* `-e "DENDRITE_TRACE_HTTP=1"`: Adds HTTP tracing to server logs.
* `-e "DENDRITE_TRACE_INTERNAL=1"`: Adds roomserver internal API tracing to
  server logs.
* `-e "DENDRITE_TRACE_SQL=1"`: Adds tracing of all SQL statements to server
  logs.

The docker command also supports a single positional argument for the test file
to run, so you can run a single `.pl` file rather than the whole test suite.
For example:

```sh
docker run --rm --name sytest -v "/Users/kegan/github/sytest:/sytest" \
  -v "/Users/kegan/github/dendrite:/src" -v "/Users/kegan/logs:/logs" \
  -v "/Users/kegan/go/:/gopath" -e "POSTGRES=1" -e "DENDRITE_TRACE_HTTP=1" \
  matrixdotorg/sytest-dendrite:latest tests/50federation/40devicelists.pl
```
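For example, a debugging session combining some of the options above might look
like the following (paths are placeholders), with the tests then started by
hand from inside the container:

```sh
docker run --rm -it --entrypoint bash \
  -v /path/to/dendrite/:/src/ -v /path/to/log/output/:/logs/ \
  matrixdotorg/sytest-dendrite

# then, inside the container:
/bootstrap.sh dendrite
```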
Make +sure you have Perl 5 or above, and get SyTest with: + +(Note that this guide assumes your SyTest checkout is next to your +`dendrite` checkout.) + +```sh +git clone -b develop https://github.com/matrix-org/sytest +cd sytest +./install-deps.pl +``` + +Set up the database: + +```sh +sudo -u postgres psql -c "CREATE USER dendrite PASSWORD 'itsasecret'" +sudo -u postgres psql -c "ALTER USER dendrite CREATEDB" +for i in dendrite0 dendrite1 sytest_template; do sudo -u postgres psql -c "CREATE DATABASE $i OWNER dendrite;"; done +mkdir -p "server-0" +cat > "server-0/database.yaml" << EOF +args: + user: dendrite + password: itsasecret + database: dendrite0 + host: 127.0.0.1 + sslmode: disable +type: pg +EOF +mkdir -p "server-1" +cat > "server-1/database.yaml" << EOF +args: + user: dendrite + password: itsasecret + database: dendrite1 + host: 127.0.0.1 + sslmode: disable +type: pg +EOF +``` + +Run the tests: + +```sh +POSTGRES=1 ./run-tests.pl -I Dendrite::Monolith -d ../dendrite/bin -W ../dendrite/sytest-whitelist -O tap --all | tee results.tap +``` + +where `tee` lets you see the results while they're being piped to the file, and +`POSTGRES=1` enables testing with PostgeSQL. If the `POSTGRES` environment +variable is not set or is set to 0, SyTest will fall back to SQLite 3. For more +flags and options, see <https://github.com/matrix-org/sytest#running>. + +Once the tests are complete, run the helper script to see if you need to add +any newly passing test names to `sytest-whitelist` in the project's root +directory: + +```sh +../dendrite/show-expected-fail-tests.sh results.tap ../dendrite/sytest-whitelist ../dendrite/sytest-blacklist +``` + +If the script prints nothing/exits with 0, then you're good to go. diff --git a/docs/development/tracing/opentracing.md b/docs/development/tracing/opentracing.md new file mode 100644 index 00000000..8528c2ba --- /dev/null +++ b/docs/development/tracing/opentracing.md @@ -0,0 +1,114 @@ +--- +title: OpenTracing +has_children: true +parent: Development +permalink: /development/opentracing +--- + +# OpenTracing + +Dendrite extensively uses the [opentracing.io](http://opentracing.io) framework +to trace work across the different logical components. + +At its most basic opentracing tracks "spans" of work; recording start and end +times as well as any parent span that caused the piece of work. + +A typical example would be a new span being created on an incoming request that +finishes when the response is sent. When the code needs to hit out to a +different component a new span is created with the initial span as its parent. +This would end up looking roughly like: + +``` +Received request Sent response + |<───────────────────────────────────────>| + |<────────────────────>| + RPC call RPC call returns +``` + +This is useful to see where the time is being spent processing a request on a +component. However, opentracing allows tracking of spans across components. This +makes it possible to see exactly what work goes into processing a request: + +``` +Component 1 |<─────────────────── HTTP ────────────────────>| + |<──────────────── RPC ─────────────────>| +Component 2 |<─ SQL ─>| |<── RPC ───>| +Component 3 |<─ SQL ─>| +``` + +This is achieved by serializing span information during all communication +between components. For HTTP requests, this is achieved by the sender +serializing the span into a HTTP header, and the receiver deserializing the span +on receipt. (Generally a new span is then immediately created with the +deserialized span as the parent). 
A collection of spans that are related is called a trace.

Spans are passed through the code via contexts, rather than manually. It is
therefore important that all spans that are created are immediately added to
the current context. Thankfully, the opentracing library provides helper
functions for doing this:

```golang
span, ctx := opentracing.StartSpanFromContext(ctx, spanName)
defer span.Finish()
```

This will create a new span, adding any span already in `ctx` as a parent to
the new span.

Adding Information
------------------

OpenTracing allows adding information to a trace via three mechanisms:

- "tags" ─ A span can be tagged with a key/value pair. This is typically
  information that relates to the span: for example, spans created for incoming
  HTTP requests could include the request path and response code as tags, and
  spans for SQL could include the query being executed.
- "logs" ─ Key/value pairs can be logged at a particular point in a trace.
  This can be useful to log, e.g., any errors that happen.
- "baggage" ─ Arbitrary key/value pairs can be added to a span to which all
  child spans have access. Baggage isn't saved and so isn't available when
  inspecting the traces, but can be used to add context to logs or tags in
  child spans.

See
[specification.md](https://github.com/opentracing/specification/blob/master/specification.md)
for some of the common tags and log fields used.

Span Relationships
------------------

Spans can be related to each other. The most common relation is `childOf`,
which indicates that the child span somehow depends on the parent span ─
typically the parent span cannot complete until all child spans are completed.

A second relation type is `followsFrom`, where the parent has no dependence on
the child span. This usually indicates some sort of fire-and-forget behaviour,
e.g. adding a message to a pipeline or inserting into a Kafka topic.

Jaeger
------

OpenTracing is just a framework. We use
[Jaeger](https://github.com/jaegertracing/jaeger) as the actual implementation.

Jaeger is responsible for recording, sending and saving traces, as well as
providing a UI for viewing and interacting with traces.

To enable Jaeger, a `Tracer` object must be instantiated from the config (as
well as having a Jaeger server running somewhere, usually locally). A `Tracer`
does several things:

- Decides which traces to save and send to the server. There are multiple
  schemes for doing this, with a simple example being to save a certain
  fraction of traces.
- Communicates with the Jaeger backend. If not explicitly specified, it uses
  the default port on localhost.
- Associates a service name with all spans created by the tracer. This service
  name equates to a logical component; e.g. spans created by the clientapi will
  have a different service name than ones created by the syncapi. Database
  access will also typically use a different service name.

This means that there is a tracer per service name/component.
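As a rough illustration of what instantiating such a tracer involves, here is a
minimal sketch using the jaeger-client-go configuration package. This is not
how Dendrite wires it up internally (Dendrite builds its tracers from its own
config file, as described in the setup guide below), and the service name and
settings are assumptions for the example:

```golang
package tracingexample

import (
	"io"

	opentracing "github.com/opentracing/opentracing-go"
	jaegercfg "github.com/uber/jaeger-client-go/config"
)

// newTracer creates a Jaeger-backed tracer for one logical component.
// The returned io.Closer should be closed on shutdown to flush spans.
func newTracer(serviceName string) (opentracing.Tracer, io.Closer, error) {
	cfg := jaegercfg.Configuration{
		ServiceName: serviceName, // e.g. "clientapi" or "syncapi"
		Sampler: &jaegercfg.SamplerConfig{
			Type:  "const", // save every trace...
			Param: 1,       // ...fine locally, too much in production
		},
		Reporter: &jaegercfg.ReporterConfig{
			LogSpans: true, // also log finished spans locally
			// LocalAgentHostPort defaults to the Jaeger agent on localhost.
		},
	}
	return cfg.NewTracer()
}
```

A tracer created like this would then typically be registered with
`opentracing.SetGlobalTracer(tracer)` so that `StartSpanFromContext` picks it
up.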
diff --git a/docs/development/tracing/setup.md b/docs/development/tracing/setup.md
new file mode 100644
index 00000000..06f89bf8

---
title: Setup
parent: OpenTracing
grand_parent: Development
permalink: /development/opentracing/setup
---

# OpenTracing Setup

Dendrite uses [Jaeger](https://www.jaegertracing.io/) for tracing between
microservices. Tracing shows the nesting of logical spans, which provides
visibility into how the microservices interact. This document explains how to
set up Jaeger locally on a single machine.

## Set up the Jaeger backend

The [easiest way](https://www.jaegertracing.io/docs/1.18/getting-started/) is
to use the all-in-one Docker image:

```
$ docker run -d --name jaeger \
    -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
    -p 5775:5775/udp \
    -p 6831:6831/udp \
    -p 6832:6832/udp \
    -p 5778:5778 \
    -p 16686:16686 \
    -p 14268:14268 \
    -p 14250:14250 \
    -p 9411:9411 \
    jaegertracing/all-in-one:1.18
```

## Configuring Dendrite to talk to Jaeger

Modify your config to look like the following. (This will send every single
span to Jaeger, which will be slow on large instances, but for local testing
it's fine.)

```
tracing:
  enabled: true
  jaeger:
    serviceName: "dendrite"
    disabled: false
    rpc_metrics: true
    tags: []
    sampler:
      type: const
      param: 1
```

Then run the monolith server with `--api true` to use polylith components,
which emit tracing spans:

```
./dendrite-monolith-server --tls-cert server.crt --tls-key server.key --config dendrite.yaml --api true
```

## Checking traces

Visit <http://localhost:16686> to see traces under `DendriteMonolith`.