author    | Markus Armbruster <armbru@redhat.com> | 2018-08-23 18:40:01 +0200
committer | Markus Armbruster <armbru@redhat.com> | 2018-08-24 20:26:37 +0200
commit    | 62815d85aed71eff7b6c3a524705180fb04f5d30
tree      | ef0327f940ea24f3ff8093292e17a4bd63259d6c /qobject/json-parser.c
parent    | 037f2440888a22bd00ea0d8e37a1b7ed7d2bba88
json: Redesign the callback to consume JSON values
The classical way to structure parser and lexer is to have the client
call the parser to get an abstract syntax tree, the parser call the
lexer to get the next token, and the lexer call some function to get
input characters.
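As a concrete sketch of that pull model (with made-up names such as next_char, next_token and parse_sum, nothing taken from QEMU): the client calls the parser, the parser pulls tokens from the lexer, and the lexer pulls characters from the input.

```c
/* Pull model sketch: parser pulls tokens, lexer pulls characters. */
#include <ctype.h>
#include <stdio.h>

static const char *input = "12+34+5";   /* character source */

static int next_char(void)               /* lexer pulls characters */
{
    return *input ? *input++ : EOF;
}

typedef struct { int kind; int value; } Token;  /* kind: 0 = number, '+', EOF */

static Token next_token(void)            /* parser pulls tokens */
{
    int c = next_char();
    if (c == EOF || c == '+') {
        return (Token){ .kind = c };
    }
    int v = c - '0';
    while (isdigit((unsigned char)*input)) {
        v = v * 10 + (next_char() - '0');
    }
    return (Token){ .kind = 0, .value = v };
}

static int parse_sum(void)               /* client calls the parser */
{
    int sum = 0;
    for (;;) {
        Token t = next_token();
        if (t.kind == 0) {
            sum += t.value;
        } else if (t.kind == EOF) {
            return sum;
        }
        /* '+' tokens are simply skipped */
    }
}

int main(void)
{
    printf("%d\n", parse_sum());         /* prints 51 */
    return 0;
}
```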
Another way to structure them would be to have the client feed
characters to the lexer, the lexer feed tokens to the parser, and the
parser feed abstract syntax trees to some callback provided by the
client. This way is more easily integrated into an event loop that
dispatches input characters as they arrive.
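The same toy problem restructured as a push model, again with hypothetical names: the client feeds characters into the lexer as they arrive, the lexer feeds tokens into the parser, and the finished result comes back through a callback the client supplies.

```c
/* Push model sketch: characters are fed in as they arrive, results
 * come out through a client-provided callback -- not QEMU code. */
#include <ctype.h>
#include <stdio.h>

typedef void (*emit_cb)(int value);      /* callback provided by the client */

static int cur = 0, have_digits = 0, sum = 0;
static emit_cb emit;

static void parser_push_token(int value) /* lexer feeds tokens to parser */
{
    sum += value;
}

static void lexer_push_char(int c)       /* client feeds characters to lexer */
{
    if (isdigit(c)) {
        cur = cur * 10 + (c - '0');
        have_digits = 1;
    } else if (have_digits) {
        parser_push_token(cur);
        cur = 0;
        have_digits = 0;
    }
    if (c == '\n') {                     /* end of one value: emit the result */
        emit(sum);
        sum = 0;
    }
}

static void on_result(int value)
{
    printf("parsed: %d\n", value);
}

int main(void)
{
    emit = on_result;
    const char *arriving = "12+34+5\n";  /* e.g. bytes handed over by an event loop */
    for (const char *p = arriving; *p; p++) {
        lexer_push_char(*p);
    }
    return 0;
}
```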
Our JSON parser is kind of between the two. The lexer feeds tokens to
a "streamer" instead of a real parser. The streamer accumulates
tokens until it got the sequence of tokens that comprise a single JSON
value (it counts curly braces and square brackets to decide). It
feeds those token sequences to a callback provided by the client. The
callback passes each token sequence to the parser, and gets back an
abstract syntax tree.
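A simplified sketch of such a streamer (it counts brackets on raw characters and ignores strings and escaping; the real streamer works on lexer tokens): it accumulates input until the bracket depth returns to zero, then hands the complete sequence to a callback.

```c
/* Streamer sketch: accumulate input for one JSON value by counting
 * curly braces and square brackets, then hand the complete sequence
 * to a callback.  Simplified: operates on raw characters rather than
 * lexer tokens, and does no bounds or error checking. */
#include <stdio.h>
#include <string.h>

typedef void (*value_cb)(const char *buf, size_t len);

typedef struct {
    char buf[1024];
    size_t len;
    int depth;          /* nesting of {} and [] */
    value_cb emit;
} Streamer;

static void streamer_feed(Streamer *s, char c)
{
    s->buf[s->len++] = c;
    if (c == '{' || c == '[') {
        s->depth++;
    } else if (c == '}' || c == ']') {
        s->depth--;
        if (s->depth == 0) {            /* one complete JSON value */
            s->emit(s->buf, s->len);
            s->len = 0;
        }
    }
}

static void print_value(const char *buf, size_t len)
{
    printf("complete value: %.*s\n", (int)len, buf);
}

int main(void)
{
    Streamer s = { .emit = print_value };
    const char *in = "{\"a\": [1, 2]}{\"b\": 3}";
    for (size_t i = 0; i < strlen(in); i++) {
        streamer_feed(&s, in[i]);
    }
    return 0;
}
```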
I figure it was done that way to make a straightforward recursive
descent parser possible. "Get next token" becomes "pop the first
token off the token sequence". Drawback: we need to store a complete
token sequence. Each token eats 13 bytes plus its input characters
plus malloc overhead.
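In miniature, the pattern looks like this (a toy sketch; QEMU's parser pops JSONToken objects off a GQueue in much the same way, which is why the whole sequence has to be stored up front):

```c
/* "Get next token" as "pop the first token off a stored sequence".
 * Build with:  gcc sketch.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

static gpointer pop_token(GQueue *tokens)   /* replaces "get next token" */
{
    return g_queue_pop_head(tokens);
}

int main(void)
{
    GQueue *tokens = g_queue_new();

    /* The whole token sequence must be stored up front ... */
    g_queue_push_tail(tokens, g_strdup("{"));
    g_queue_push_tail(tokens, g_strdup("\"key\""));
    g_queue_push_tail(tokens, g_strdup(":"));
    g_queue_push_tail(tokens, g_strdup("42"));
    g_queue_push_tail(tokens, g_strdup("}"));

    /* ... so the parser can consume it token by token. */
    char *tok;
    while ((tok = pop_token(tokens))) {
        printf("token: %s\n", tok);
        g_free(tok);
    }
    g_queue_free(tokens);
    return 0;
}
```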
Observations:
1. This is not the only way to use recursive descent. If we replaced
"get next token" by a coroutine yield, we could do without a
streamer.
2. The lexer reports errors by passing a JSON_ERROR token to the
streamer. This communicates the offending input characters and
their location, but no more.
3. The streamer reports errors by passing a null token sequence to the
callback. The (already poor) lexical error information is thrown
away.
4. Having the callback receive a token sequence duplicates the code to
convert token sequence to abstract syntax tree in every callback.
5. Known bug: the streamer silently drops incomplete token sequences.
This commit rectifies 4. by lifting the call of the parser from the
callbacks into the streamer. Later commits will address 3. and 5.
The lifting removes a bug from qjson.c's parse_json(): it passed a
pointer to a non-null Error * in certain cases, as demonstrated by
check-qjson.c.
json_parser_parse() is now unused. It's a stupid wrapper around
json_parser_parse_err(). Drop it, and rename json_parser_parse_err()
to json_parser_parse().
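The shape of the change, with stand-in types and names rather than QEMU's actual signatures: before, each client callback received the token sequence and had to run the parser itself; after, the streamer runs the parser and the callback receives the finished value.

```c
/* Lifting the parser call into the streamer -- stand-in types only. */
#include <stdio.h>

typedef struct { int dummy; } Value;              /* stand-in for QObject */

static Value *parse(const char *tokens)           /* stand-in parser */
{
    (void)tokens;
    static Value v;
    return &v;
}

typedef void (*emit_value_cb)(Value *val);        /* new-style callback */

/* Old shape, for comparison:
 *   typedef void (*emit_tokens_cb)(const char *tokens);
 *   ... and every callback duplicated:  Value *val = parse(tokens);  */

static void streamer_emit(const char *tokens, emit_value_cb emit)
{
    Value *val = parse(tokens);   /* parser call lifted into the streamer */
    emit(val);
}

static void client_cb(Value *val)
{
    printf("got parsed value %p\n", (void *)val);
}

int main(void)
{
    streamer_emit("{ } token sequence would go here", client_cb);
    return 0;
}
```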
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-35-armbru@redhat.com>
Diffstat (limited to 'qobject/json-parser.c')
-rw-r--r-- | qobject/json-parser.c | 7
1 file changed, 1 insertion, 6 deletions
```diff
diff --git a/qobject/json-parser.c b/qobject/json-parser.c
index 7bfa08200c..95fa348e21 100644
--- a/qobject/json-parser.c
+++ b/qobject/json-parser.c
@@ -541,12 +541,7 @@ static QObject *parse_value(JSONParserContext *ctxt, va_list *ap)
     }
 }
 
-QObject *json_parser_parse(GQueue *tokens, va_list *ap)
-{
-    return json_parser_parse_err(tokens, ap, NULL);
-}
-
-QObject *json_parser_parse_err(GQueue *tokens, va_list *ap, Error **errp)
+QObject *json_parser_parse(GQueue *tokens, va_list *ap, Error **errp)
 {
     JSONParserContext ctxt = { .buf = tokens };
     QObject *result;
```