|
add a `sni' option for the `proxy' block: the given name is used instead
of the one extracted from the `relay-to' rule.
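A hypothetical example of how this could look; the exact spelling of the
options inside the `proxy' block is an assumption based on the earlier
`proxy relay-to host:port' syntax:

    server "example.com" {
        proxy {
            relay-to backend.local:1965
            sni "backend.example.com"  # name sent in the TLS handshake
                                       # instead of the one from relay-to
        }
    }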
|
|
properly release everything during client_close if the request was
managed by a proxy.
|
|
refactor the code that calls validate_against_ca into a helper
function so it can be reused both in apply_require_ca and (optionally)
in apply_reverse_proxy.
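For reference, a rough sketch of the shape such a helper could take;
this is an OpenSSL-based illustration with an invented signature, not
gmid's actual code:

    #include <openssl/x509.h>
    #include <stddef.h>

    /*
     * Verify a DER-encoded client certificate against a CA store.
     * Returns 1 if the certificate validates, 0 otherwise.
     */
    static int
    validate_against_ca(X509_STORE *store, const unsigned char *der, size_t len)
    {
        X509            *cert;
        X509_STORE_CTX  *ctx = NULL;
        int              ret = 0;

        if ((cert = d2i_X509(NULL, &der, (long)len)) == NULL)
            return 0;
        if ((ctx = X509_STORE_CTX_new()) == NULL)
            goto done;
        if (X509_STORE_CTX_init(ctx, store, cert, NULL) != 1)
            goto done;
        ret = X509_verify_cert(ctx) == 1;
    done:
        if (ctx != NULL)
            X509_STORE_CTX_free(ctx);
        X509_free(cert);
        return ret;
    }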
|
|
as a side effect, the ordering of the contents of a server block is
relaxed: options, location and proxy blocks can now appear in any order.
|
|
it doesn't make any sense to keep the proxying info per-location:
there is only one proxy per vhost. It can't work differently, and it
wouldn't make sense anyway.
|
|
Add to gmid the ability to forward a request to another gemini server
and thus act as a reverse proxy. The current syntax for the config
file is

    server "example.com" {
        ...
        proxy relay-to host:port
    }

Further options (like the use of custom certificates) are planned.
cf. github issue #7
|
|
Currently dogfooding this patch at gemini.sgregoratto.me. To test,
run the following command and look for the "OCSP response" header:
    openssl s_client -connect "gemini.sgregoratto.me:1965" -status
|
|
This adds a barebones dump of the parsed configuration. It is not
complete, but I'm interested in dumping the full paths to `cert' and
`key' in order to write some scripts that can inspect the
configuration, extract the certificates and renew them automatically
when they expire.
It's not easy to parse the gmid configuration otherwise, because the
syntax is flexible and users can use macros. Instead, the idea is to
run gmid and let it dump the configuration, once parsed, in a static
and predictable format.
Now it is possible to parse the gmid configuration with, say, awk or perl.
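As a hypothetical example (both the doubled -n flag and the exact dump
format are assumptions here, not documented behaviour), extracting every
certificate path could become a one-liner:

    gmid -nn -c /etc/gmid.conf | awk '$1 == "cert" { print $2 }'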
|
|
From day one we've been using a static array of client structs to hold
the clients' data. This has various drawbacks, among which:
* reuse of the storage ("shades of heartbleed")
* a fixed maximum number of clients connected at the same time
* bugs are harder to debug
The last point in particular is important because if we mess up the
client ids, or try to execute some functions (e.g. the various fcgi_*)
after a client has been disconnected, it's harder to "see" this "use
after free"-tier kind of bug.
Now I'm using a splay tree to hold the data about the live connections.
Each client's data is managed by malloc. If we try to access a client's
data after the disconnection we'll probably crash with a SIGSEGV, which
makes the bug much easier to find.
Performance-wise the connection phase should be faster since we no
longer have to loop to find an empty spot in the clients array, but
some operations could be slightly slower (compare the O(1) access in an
array with a SPLAY_FIND operation -- still faster than O(n), though.)
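A minimal sketch of the approach using the SPLAY macros from
<sys/tree.h>; the field and function names here are illustrative, not
gmid's actual ones:

    #include <sys/tree.h>
    #include <stdlib.h>

    struct client {
        int                  id;     /* lookup key */
        int                  fd;
        SPLAY_ENTRY(client)  entry;
    };

    static int
    client_cmp(struct client *a, struct client *b)
    {
        if (a->id < b->id)
            return -1;
        if (a->id > b->id)
            return 1;
        return 0;
    }

    SPLAY_HEAD(client_tree, client);
    SPLAY_PROTOTYPE(client_tree, client, entry, client_cmp)
    SPLAY_GENERATE(client_tree, client, entry, client_cmp)

    static struct client_tree clients = SPLAY_INITIALIZER(&clients);

    struct client *
    client_new(int id, int fd)
    {
        struct client *c;

        if ((c = calloc(1, sizeof(*c))) == NULL)
            return NULL;
        c->id = id;
        c->fd = fd;
        SPLAY_INSERT(client_tree, &clients, c);
        return c;
    }

    struct client *
    client_by_id(int id)
    {
        struct client key = { .id = id };

        return SPLAY_FIND(client_tree, &clients, &key);
    }

    void
    client_free(struct client *c)
    {
        SPLAY_REMOVE(client_tree, &clients, c);
        free(c);  /* later accesses fault instead of reusing stale storage */
    }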
|
|
FastCGI is designed to multiplex requests over a single connection, so
ideally the server could open just one connection per worker to the
FastCGI application and be done with it.
However, doing this kind of multiplexing makes the code on the gmid
side harder to follow and easier to break or leak. OpenBSD's httpd
seems to open one connection per client, so why can't we do the same?
One connection per request is still way lighter than using CGI, and we
avoid all the pitfalls of multiplexing (keeping track of "live" ids,
shutting down properly, etc...)
|
|
* add configure check
* change the way the headers are required (copied from tmux)
|
|
This is a big change in how gmid handles I/O. Initially we used a
hand-written loop over poll(2), which then evolved into something
powered by libevent's basic API. This meant there were a lot of small
"asynchronous" functions, each doing one step and eventually scheduling
its own re-execution, calling one another in a chain.
The new implementation revolves completely around libevent's
bufferevents. It's clearer, as everything is implemented around the
client_read and client_write functions.
There is still room for improvement, like adding timeouts for one, but
it's solid enough to be committed as is and then further improved.
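A hedged sketch of the bufferevent-centric model (libevent 2 API; the
callback names mirror the ones mentioned above, everything else is
illustrative):

    #include <event2/bufferevent.h>
    #include <event2/buffer.h>
    #include <event2/event.h>

    static void
    client_read(struct bufferevent *bev, void *arg)
    {
        struct evbuffer *in = bufferevent_get_input(bev);

        /* parse and serve whatever has accumulated in the input buffer */
        (void)in;
        (void)arg;
    }

    static void
    client_write(struct bufferevent *bev, void *arg)
    {
        /* output buffer drained: queue more data or finish the response */
        (void)bev;
        (void)arg;
    }

    static void
    client_error(struct bufferevent *bev, short event, void *arg)
    {
        /* EOF or error: tear the client down */
        (void)event;
        (void)arg;
        bufferevent_free(bev);
    }

    struct bufferevent *
    client_setup(struct event_base *base, int fd)
    {
        struct bufferevent *bev;

        bev = bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
        if (bev == NULL)
            return NULL;
        bufferevent_setcb(bev, client_read, client_write, client_error, NULL);
        bufferevent_enable(bev, EV_READ | EV_WRITE);
        return bev;
    }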
|
|
This changes the fastcgi implementation from blocking I/O to an async
implementation on top of libevent's bufferevents.
It should improve the responsiveness of gmid, especially when using
remote fastcgi applications.
|
|
it's used only to parse the -p flag. While there, add check_port_num
to validate the range of the port.
|
|
we need to delete the events associated with the backends, otherwise
the server process won't ever quit.
Here, we add a pending counter to every backend and shut them down
immediately if they aren't handling any clients; otherwise we close
them as soon as possible (i.e. when the connection to the last
connected client is closed.)
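A rough sketch of the counting logic described above; the struct and
the function names are invented for illustration:

    #include <event2/bufferevent.h>

    struct fcgi_backend {
        struct bufferevent *bev;
        int                 pending;    /* clients currently being served */
        int                 want_quit;  /* the server process wants to exit */
    };

    static void
    backend_close(struct fcgi_backend *b)
    {
        bufferevent_free(b->bev);  /* also deletes the associated events */
        b->bev = NULL;
    }

    /* called when the server process is asked to shut down */
    void
    backend_shutdown(struct fcgi_backend *b)
    {
        b->want_quit = 1;
        if (b->pending == 0)
            backend_close(b);
    }

    /* called whenever a client served through this backend goes away */
    void
    backend_client_done(struct fcgi_backend *b)
    {
        if (--b->pending == 0 && b->want_quit)
            backend_close(b);
    }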
|
|
Macros can be defined at the top of the configuration file:

    dir = "/var/gemini"
    cert = "/etc/keys"

and re-used later, for example

    server "foo" {
        root "$dir/foo"       # -> /var/gemini/foo
        cert "$cert/foo.pem"  # -> /etc/keys/foo.pem
    }
|
|
GMID_VERSION follows the CGI/FastCGI style, i.e. project_name/version.
Define GMID_STRING with a more "human" variant, "project_name version",
and reuse that in the --help and --version codepaths.
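For illustration only (the version number is a placeholder, not the
real one):

    #define GMID_VERSION  "gmid/x.y"  /* CGI/FastCGI style: project_name/version */
    #define GMID_STRING   "gmid x.y"  /* human-readable, for --help and --version */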
|
|
The actual implementation is based on doas' parse.y. This gives us
various benefits, like cleaner code, \ to break long lines, better
handling of quotes, etc...
|
|
the logger process can now receive a file descriptor to write logs
to. At the moment the logic is simple: if it receives a file it logs
there, otherwise it logs to syslog. This will allow logging to custom
log files.
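A hedged sketch of how the descriptor could be handed over with imsg;
IMSG_LOG_FD and send_log_fd are invented names used only for the
sketch:

    #include <sys/types.h>
    #include <sys/queue.h>
    #include <sys/uio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <imsg.h>

    #define IMSG_LOG_FD 42  /* illustrative message type */

    /*
     * Open the log file and pass the descriptor to the logger process;
     * ownership of fd is transferred along with the imsg.
     */
    int
    send_log_fd(struct imsgbuf *ibuf, const char *path)
    {
        int fd;

        if ((fd = open(path, O_WRONLY|O_APPEND|O_CREAT, 0640)) == -1)
            return -1;
        if (imsg_compose(ibuf, IMSG_LOG_FD, 0, 0, fd, NULL, 0) == -1)
            return -1;
        return imsg_flush(ibuf);
    }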
|
|
pass the correct loc_off to the executor, so the various variables
that depend on the matched location (like DOCUMENT_ROOT) are computed
correctly.
|
|
it hasn't been needed since the switch to libevent.
|
|
Not production-ready yet, but it's a start.
This adds a third ``backend'' for gmid: until now it served local
files or CGI scripts, now FastCGI applications too.
FastCGI is meant to be an improvement over CGI: instead of exec'ing a
script for every request, it allows opening a single connection to an
``application'' and sending the requests/receiving the responses over
that socket using a simple binary protocol.
At the moment gmid supports three different methods of opening a
fastcgi connection:
- local unix sockets, with: fastcgi "/path/to/sock"
- network sockets, with: fastcgi tcp "host" [port]
  port defaults to 9000 and can be either a string or a number
- subprocess, with: fastcgi spawn "/path/to/program"
  the fastcgi protocol is spoken over the executed program's stdin
Of these, the last is only for testing and may be removed in the
future.
P.S.: the fastcgi rule is per-location of course :)
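Putting the three forms together, a sketch of what a configuration
could look like (the paths, hosts and location pattern are
placeholders):

    server "example.com" {
        location "/app/*" {
            # pick exactly one of the three forms above:
            fastcgi "/var/run/app.sock"
            # fastcgi tcp "127.0.0.1" 9000
            # fastcgi spawn "/usr/local/bin/app"
        }
    }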
|
|
saves some bytes of memory and removes the limit on the maximum number
of vhosts and location blocks.
|
|
the 1024-byte limit is for the META only, not for the whole response
header. That means the maximum size of the header line is 1029 bytes:
two digits of status, a space, up to 1024 bytes of META, and the
closing CRLF.
|
|
while there, add capsicum for the logger process
|
|
use imsg to handle ALL kinds of IPC in gmid. This simplifies and
shortens the code, and makes everything more uniform too.
|
|
this fixes a bug introduced with the prefork mechanics: every server
process shared the same socket, and this would cause a race condition
when multiple server processes asked for a CGI script to be executed.
This gives each server process its own socket to talk to the executor,
so the race cannot happen.
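A rough sketch of the idea, with invented names (one socketpair per
prefork'd server process instead of one shared socket):

    #include <sys/socket.h>
    #include <err.h>

    /*
     * Create one UNIX-domain socketpair per server process.
     * fds[i][0] stays with server process i, fds[i][1] with the
     * executor, so concurrent CGI requests cannot race on a shared
     * socket.
     */
    void
    make_executor_sockets(int prefork, int fds[][2])
    {
        int i;

        for (i = 0; i < prefork; ++i)
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds[i]) == -1)
                err(1, "socketpair");
    }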
|
|
this is to let the regression suite compile
|
|