Identifies the kind of a parser node.
Tests if a given type is an input range of JSONParserNode.
Tests if a given type is an input range of JSONToken.
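For illustration, a minimal sketch of how these traits can be checked at compile time; the trait names isJSONTokenInputRange and isJSONParserNodeInputRange are assumed from the std_data_json package and may differ.

---
// Sketch: assumed trait names; lexJSON yields JSONToken and parseJSONStream
// yields JSONParserNode, so both resulting range types should satisfy the traits.
auto tokens = lexJSON(`[1, 2]`);
auto nodes = parseJSONStream(`[1, 2]`);
static assert(isJSONTokenInputRange!(typeof(tokens)));
static assert(isJSONParserNodeInputRange!(typeof(nodes)));
---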
Parses a JSON document using a lazy parser node range.
Consumes a single JSON value from the input range and returns the result as a JSONValue.
Parses a stream of JSON tokens and returns the result as a JSONValue. All tokens belonging to the document will be consumed from the input range; any tokens after the end of the first JSON document will be left in the input token range for possible later consumption.
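A hedged sketch of that consumption behavior, assuming the parseJSONValue overload that takes the token range by reference: two documents are lexed into one token range, the first is parsed, and the remainder stays in the range.

---
// Sketch: assumes parseJSONValue(ref tokens) consumes only the first document.
auto tokens = lexJSON(`{"a": 1} true`);   // two consecutive JSON documents
JSONValue first = parseJSONValue(tokens); // consumes the object's tokens only
assert(!tokens.empty);                    // the trailing `true` token remains
---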
Reads an array and issues a callback for each entry.
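A hedged sketch of that callback style, assuming the helper names readArray and readDouble and a parameterless delegate invoked once per array entry.

---
// Sketch: assumed names readArray/readDouble; the delegate runs once per entry
// and is expected to consume exactly one value from the node range.
auto nodes = parseJSONStream(`[1, 2, 3]`);
double[] entries;
nodes.readArray({ entries ~= nodes.readDouble; });
assert(entries == [1.0, 2.0, 3.0]);
---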
Reads a single boolean value.
Reads a single double value.
Reads an object and issues a callback for each field.
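A hedged sketch of the per-field callback, assuming the names readObject and readDouble and a delegate that receives the field key as a string.

---
// Sketch: assumed names readObject/readDouble; the delegate receives each key
// and must consume the corresponding value from the node range.
auto nodes = parseJSONStream(`{"x": 1, "y": 2}`);
double sum = 0;
nodes.readObject((key) { sum += nodes.readDouble; });
assert(sum == 3);
---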
Reads a single string value.
Skips all entries in an object until a certain key is reached.
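A sketch of combining this with the single-value readers; the names skipToKey and readString, the boolean found/not-found return, and the range being left at the matching key's value are assumptions.

---
// Sketch: assumes skipToKey reports whether the key was found and leaves the
// range positioned at that key's value.
auto nodes = parseJSONStream(`{"a": 1, "b": "hello", "c": true}`);
if (nodes.skipToKey("b"))
    assert(nodes.readString == "hello");
---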
Skips a single JSON value in a parser stream.
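A sketch of skipping fields that are not of interest while reading an object; the names skipValue, readObject, and readDouble are assumed as above.

---
// Sketch: skipValue discards an entire (possibly nested) value in one call.
auto nodes = parseJSONStream(`{"keep": 42, "ignore": [1, [2, 3], {"deep": true}]}`);
double kept;
nodes.readObject((key) {
    if (key == "keep") kept = nodes.readDouble;
    else nodes.skipValue();
});
assert(kept == 42);
---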
Parses a JSON string or token range and returns the result as a JSONValue.
The default maximum nesting depth of the input allowed by toJSONValue and parseJSONValue.
Represents a single node of a JSON parse tree.
Lazy input range of JSON parser nodes.
---
import std.algorithm : equal, map;
import std.format : format;

// Parse a JSON string to a single value
JSONValue value = toJSONValue(`{"name": "D", "kind": "language"}`);

// Parse a JSON string to a node stream
auto nodes = parseJSONStream(`{"name": "D", "kind": "language"}`);
with (JSONParserNodeKind) {
    assert(nodes.map!(n => n.kind).equal(
        [objectStart, key, literal, key, literal, objectEnd]));
}

// Parse a list of tokens instead of a string
auto tokens = lexJSON(`{"name": "D", "kind": "language"}`);
JSONValue value2 = toJSONValue(tokens);
assert(value == value2, format!"%s != %s"(value, value2));
---
Copyright 2012 - 2015, Sönke Ludwig.
Provides various means for parsing JSON documents.
This module contains two different parser implementations: the first returns a single JSON document in the form of a JSONValue, while the second returns a lazy stream of parser nodes. The stream-based parser is particularly useful for deserializing with few allocations or for processing large documents.