nom, eating data byte by byte
nom is a parser combinator library with a focus on safe parsing, streaming patterns, and as much zero-copy parsing as possible.
Example

```rust
extern crate nom;
use nom::{
  IResult,
  bytes::complete::{tag, take_while_m_n},
  combinator::map_res,
  sequence::tuple
};

#[derive(Debug, PartialEq)]
pub struct Color {
  pub red: u8,
  pub green: u8,
  pub blue: u8,
}

fn from_hex(input: &str) -> Result<u8, std::num::ParseIntError> {
  u8::from_str_radix(input, 16)
}

fn is_hex_digit(c: char) -> bool {
  c.is_digit(16)
}

fn hex_primary(input: &str) -> IResult<&str, u8> {
  map_res(take_while_m_n(2, 2, is_hex_digit), from_hex)(input)
}

// parses a color like "#2F14DF" into its red, green and blue components
fn hex_color(input: &str) -> IResult<&str, Color> {
  let (input, _) = tag("#")(input)?;
  let (input, (red, green, blue)) = tuple((hex_primary, hex_primary, hex_primary))(input)?;

  Ok((input, Color { red, green, blue }))
}

fn main() {
  assert_eq!(hex_color("#2F14DF"), Ok(("", Color {
    red: 47,
    green: 20,
    blue: 223,
  })));
}
```
The code is available on GitHub.
There are a few guides with more details about the design of nom, macros, how to write parsers, or the error management system.
Looking for a specific combinator? Read the "choose a combinator" guide
If you are upgrading to nom 5.0, please read the migration document.
See also the FAQ.
Parser combinators
Parser combinators are an approach to parsers that is very different from software like lex and yacc. Instead of writing the grammar in a separate syntax and generating the corresponding code, you use very small functions with very specific purposes, like "take 5 bytes", or "recognize the word 'HTTP'", and assemble them in meaningful patterns like "recognize 'HTTP', then a space, then a version". The resulting code is small, and looks like the grammar you would have written with other parser approaches.
This gives us a few advantages:
- the parsers are small and easy to write
- the parser components are easy to reuse (if they're general enough, please add them to nom!)
- the parser components are easy to test separately (unit tests and property-based tests)
- the parser combination code looks close to the grammar you would have written
- you can build partial parsers, specific to the data you need at the moment, and ignore the rest
Here is an example of one such parser, to recognize text between parentheses:
```rust
use nom::{
  IResult,
  sequence::delimited,
  character::complete::char,
  bytes::complete::is_not
};

fn parens(input: &str) -> IResult<&str, &str> {
  delimited(char('('), is_not(")"), char(')'))(input)
}
```

It defines a function named `parens` which will recognize a sequence of the character `(`, the longest byte array not containing `)`, then the character `)`, and will return the byte array in the middle.
Here is another parser, written without using nom's combinators this time:
```rust
extern crate nom;
use nom::{IResult, Err, Needed};

fn take4(i: &[u8]) -> IResult<&[u8], &[u8]> {
  if i.len() < 4 {
    Err(Err::Incomplete(Needed::Size(4)))
  } else {
    Ok((&i[4..], &i[0..4]))
  }
}
```
This function takes a byte array as input, and tries to consume 4 bytes. Writing all the parsers manually, like this, is dangerous: despite Rust's safety features, there are still a lot of mistakes one can make. That's why nom provides a list of functions and macros to help in developing parsers.
With functions, you would write it like this:
```rust
use nom::{IResult, bytes::streaming::take};

fn take4(input: &str) -> IResult<&str, &str> {
  take(4u8)(input)
}
```
With macros, you would write it like this:
```rust
#[macro_use] extern crate nom;

named!(take4, take!(4));
```
nom has used macros for combinators from versions 1 to 4, and from version 5, it proposes new combinators as functions, but still allows the macros style (macros have been rewritten to use the functions under the hood). For new parsers, we recommend using the functions instead of macros, since rustc messages will be much easier to understand.
A parser in nom is a function which, for an input type `I`, an output type `O`, and an optional error type `E`, will have the following signature:

```rust
fn parser(input: I) -> IResult<I, O, E>;
```

Or like this, if you don't want to specify a custom error type (it will be `(I, ErrorKind)` by default):

```rust
fn parser(input: I) -> IResult<I, O>;
```
`IResult` is an alias for the `Result` type:

```rust
use nom::{error::ErrorKind, Err};

type IResult<I, O, E = (I, ErrorKind)> = Result<(I, O), Err<E>>;
```
It can have the following values:

- a correct result `Ok((I, O))`, with the first element being the remaining input (not parsed yet), and the second the output value;
- an error `Err(Err::Error(c))`, with `c` an error that can be built from the input position and a parser specific error;
- an error `Err(Err::Incomplete(Needed))`, indicating that more input is necessary. `Needed` can indicate how much data is needed;
- an error `Err(Err::Failure(c))`. It works like the `Error` case, except it indicates an unrecoverable error: we cannot backtrack and test another parser.
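To make these cases concrete, here is a minimal, self-contained sketch of the same result shape with a hand-rolled 4-byte parser. The `Err` enum and `take4` function below are simplified stand-ins written for this illustration, not nom's real definitions:

```rust
// Simplified stand-ins for nom's Err and Needed types (illustration only).
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Err {
    Error(&'static str),   // recoverable: a combinator like alt may try another branch
    Failure(&'static str), // unrecoverable: no backtracking
    Incomplete(usize),     // more input needed (here: how many bytes are missing)
}

// A "take 4 bytes" parser in this simplified model.
fn take4(i: &str) -> Result<(&str, &str), Err> {
    if i.len() < 4 {
        Err(Err::Incomplete(4 - i.len()))
    } else {
        Ok((&i[4..], &i[..4]))
    }
}

fn main() {
    // Ok((rest, output)): remaining input first, parsed value second
    assert_eq!(take4("abcdefg"), Ok(("efg", "abcd")));
    // Incomplete: 2 more bytes are needed
    assert_eq!(take4("ab"), Err(Err::Incomplete(2)));
}
```

The `Error`/`Failure` split is what lets combinators like `alt` decide whether trying another branch makes sense.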
Please refer to the "choose a combinator" guide for an exhaustive list of parsers. See also the rest of the documentation here.
Making new parsers with function combinators
nom is based on functions that generate parsers, with a signature like
this: (arguments) -> impl Fn(Input) -> IResult<Input, Output, Error>
.
The arguments of a combinator can be direct values (like `take`, which uses a number of bytes or characters as argument) or even other parsers (like `delimited`, which takes 3 parsers as arguments and returns the result of the second one if all are successful).

Here are some examples:

```rust
use nom::IResult;
use nom::bytes::complete::{tag, take};

fn abcd_parser(i: &str) -> IResult<&str, &str> {
  tag("abcd")(i) // will consume bytes if the input begins with "abcd"
}

fn take_10(i: &[u8]) -> IResult<&[u8], &[u8]> {
  take(10u8)(i) // will consume and return 10 bytes of input
}
```
Combining parsers
There are higher level patterns, like the `alt` combinator, which provides a choice between multiple parsers. If one branch fails, it tries the next, and returns the result of the first parser that succeeds:

```rust
use nom::IResult;
use nom::branch::alt;
use nom::bytes::complete::tag;

let alt_tags = alt((tag("abcd"), tag("efgh")));

assert_eq!(alt_tags(&b"abcdxxx"[..]), Ok((&b"xxx"[..], &b"abcd"[..])));
assert_eq!(alt_tags(&b"efghxxx"[..]), Ok((&b"xxx"[..], &b"efgh"[..])));
assert_eq!(alt_tags(&b"ijklxxx"[..]), Err(nom::Err::Error((&b"ijklxxx"[..], nom::error::ErrorKind::Tag))));
```
The `opt` combinator makes a parser optional. If the child parser returns an error, `opt` will still succeed and return `None`:

```rust
use nom::{IResult, combinator::opt, bytes::complete::tag};

fn abcd_opt(i: &[u8]) -> IResult<&[u8], Option<&[u8]>> {
  opt(tag("abcd"))(i)
}

assert_eq!(abcd_opt(&b"abcdxxx"[..]), Ok((&b"xxx"[..], Some(&b"abcd"[..]))));
assert_eq!(abcd_opt(&b"efghxxx"[..]), Ok((&b"efghxxx"[..], None)));
```
`many0` applies a parser 0 or more times, and returns a vector of the aggregated results:

```rust
use nom::{IResult, multi::many0, bytes::complete::tag};

fn multi(i: &str) -> IResult<&str, Vec<&str>> {
  many0(tag("abcd"))(i)
}

let a = "abcdef";
let b = "abcdabcdef";
let c = "azerty";
assert_eq!(multi(a), Ok(("ef", vec!["abcd"])));
assert_eq!(multi(b), Ok(("ef", vec!["abcd", "abcd"])));
assert_eq!(multi(c), Ok(("azerty", Vec::new())));
```
Here are some basic combinators available:

- `opt`: will make the parser optional (if it returns the `O` type, the new parser returns `Option<O>`)
- `many0`: will apply the parser 0 or more times (if it returns the `O` type, the new parser returns `Vec<O>`)
- `many1`: will apply the parser 1 or more times
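The `many1` pattern can be sketched in plain Rust, independently of nom's actual types. The signature below is a deliberate simplification written for this illustration, not nom's API (the real combinator also guards against child parsers that consume no input, which would loop forever here):

```rust
// A simplified model of many1: apply `parser` repeatedly, require at least one success.
fn many1<'a, O, P>(parser: P, mut input: &'a str) -> Result<(&'a str, Vec<O>), &'a str>
where
    P: Fn(&'a str) -> Result<(&'a str, O), &'a str>,
{
    let mut results = Vec::new();
    while let Ok((rest, output)) = parser(input) {
        input = rest;
        results.push(output);
    }
    if results.is_empty() {
        Err(input) // zero matches: many1 fails
    } else {
        Ok((input, results))
    }
}

// A toy parser that recognizes the literal "ab".
fn tag_ab(i: &str) -> Result<(&str, &str), &str> {
    if i.starts_with("ab") {
        Ok((&i[2..], &i[..2]))
    } else {
        Err(i)
    }
}

fn main() {
    assert_eq!(many1(tag_ab, "ababcd"), Ok(("cd", vec!["ab", "ab"])));
    assert!(many1(tag_ab, "xyz").is_err());
}
```

`many0` is the same loop without the final emptiness check.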
There are more complex (and more useful) parsers like `tuple`, which is used to apply a series of parsers then assemble their results.

Example with `tuple`:

```rust
use nom::{error::ErrorKind, Needed,
  number::streaming::be_u16,
  bytes::streaming::{tag, take},
  sequence::tuple};

let tpl = tuple((be_u16, take(3u8), tag("fg")));

assert_eq!(
  tpl(&b"abcdefgh"[..]),
  Ok((
    &b"h"[..],
    (0x6162u16, &b"cde"[..], &b"fg"[..])
  ))
);

// the streaming parsers return Incomplete when there is not enough data
assert_eq!(tpl(&b"abcde"[..]), Err(nom::Err::Incomplete(Needed::Size(2))));

// parsing errors indicate the position of the error and the error kind
let input = &b"abcdejk"[..];
assert_eq!(tpl(input), Err(nom::Err::Error((&input[5..], ErrorKind::Tag))));
```
But you can also use a sequence of combinators written in imperative style, thanks to the `?` operator:

```rust
// an illustrative sketch (the function and value names here are ours):
use nom::{IResult, bytes::complete::tag, character::complete::digit1};

fn key_value(input: &str) -> IResult<&str, (&str, &str)> {
  // each step returns the remaining input and the parsed value,
  // and `?` propagates any error to the caller
  let (input, key) = tag("len")(input)?;
  let (input, _) = tag("=")(input)?;
  let (input, value) = digit1(input)?;
  Ok((input, (key, value)))
}

assert_eq!(key_value("len=42;"), Ok((";", ("len", "42"))));
```
Streaming / Complete
Some of nom's modules have `streaming` or `complete` submodules. They hold different variants of the same combinators.

A streaming parser assumes that we might not have all of the input data. This can happen with network protocols or large file parsers, where the input buffer can be full and need to be resized or refilled.

A complete parser assumes that we already have all of the input data. This will be the common case with small files that can be read entirely into memory.
Here is how it works in practice:
```rust
use nom::{Err, IResult, Needed, error::ErrorKind};

fn streaming_take4(i: &[u8]) -> IResult<&[u8], &[u8]> {
  nom::bytes::streaming::take(4u8)(i)
}

fn complete_take4(i: &[u8]) -> IResult<&[u8], &[u8]> {
  nom::bytes::complete::take(4u8)(i)
}

// both parsers will take 4 bytes as expected
assert_eq!(streaming_take4(&b"abcde"[..]), Ok((&b"e"[..], &b"abcd"[..])));
assert_eq!(complete_take4(&b"abcde"[..]), Ok((&b"e"[..], &b"abcd"[..])));

// if the input is smaller than 4 bytes, the streaming parser
// will return `Incomplete` to indicate that we need more data
assert_eq!(streaming_take4(&b"abc"[..]), Err(Err::Incomplete(Needed::Size(4))));

// but the complete parser will return an error
assert_eq!(complete_take4(&b"abc"[..]), Err(Err::Error((&b"abc"[..], ErrorKind::Eof))));

fn streaming_alpha(i: &str) -> IResult<&str, &str> {
  nom::character::streaming::alpha0(i)
}

fn complete_alpha(i: &str) -> IResult<&str, &str> {
  nom::character::complete::alpha0(i)
}

// the alpha0 function recognizes 0 or more alphabetic characters
// if there's a clear limit to the recognized characters, both parsers work the same way
assert_eq!(streaming_alpha("abcd;"), Ok((";", "abcd")));
assert_eq!(complete_alpha("abcd;"), Ok((";", "abcd")));

// but when there's no limit, the streaming version returns `Incomplete`, because it cannot
// know if more input data should be recognized. The whole input could be "abcd;", or
// "abcde;"
assert_eq!(streaming_alpha("abcd"), Err(Err::Incomplete(Needed::Size(1))));

// while the complete version knows that all of the data is there
assert_eq!(complete_alpha("abcd"), Ok(("", "abcd")));
```
Going further: read the guides!