mirror of https://github.com/boostorg/spirit.git, synced 2026-01-19 04:42:11 +00:00

Fix some misprints in docs and code comments.
@@ -232,7 +232,7 @@
 <p>Here, the functors <tt>fa</tt>, <tt>fb</tt> and <tt>fc</tt> all expect the
 signature <tt>void operator()(unsigned n) const</tt>.</p>
 <h2>Directives and Actions</h2>
-<p>Directives inherit the the function/functor interface of the subject it is
+<p>Directives inherit the function/functor interface of the subject it is
 enclosing. Example:</p>
 <pre><code><font color="#000000"><span class=special> </span><span class=identifier>as_lower_d</span><span class=special>[</span><span class=identifier>ch_p</span><span class=special>(</span><span class=literal>'x'</span><span class=special>)][</span><span class=identifier>f</span><span class=special>]</span></font></code></pre>
 <p>Here, the functor <tt>f</tt> expects the signature <tt>void operator()(char
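
A note for readers of the hunk above: in classic Spirit, any functor with the matching call signature can serve as a semantic action, and a directive such as as_lower_d simply forwards the subject's signature. A minimal sketch assembled for illustration (not part of this commit):

    #include <boost/spirit/include/classic_core.hpp>
    #include <iostream>

    using namespace BOOST_SPIRIT_CLASSIC_NS;

    // A functor with the documented signature for character parsers.
    struct print_char
    {
        void operator()(char c) const { std::cout << c << '\n'; }
    };

    int main()
    {
        print_char f;
        // as_lower_d lower-cases the input before matching, and the
        // attached action f still receives a plain char.
        parse("X", as_lower_d[ch_p('x')][f]);
    }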
@@ -33,7 +33,7 @@ Extension is non-intrusive. And, whenever a component or module is extended, the
 <p>
 1) Write and deploy a new primitive:</p>
 <p>
-So far we have presented only a few primitives 1) arguments 2) values and 3) variables. For the sake of illustration, let us write a simple primitive extension. Let us call it static_int. It shall be parameterized by an integer value. It is like a static version of the the value<int> class, but since it is static, holds no data at all. The integer is encoded in its type. Here is the complete class (sample5.cpp):</p>
+So far we have presented only a few primitives 1) arguments 2) values and 3) variables. For the sake of illustration, let us write a simple primitive extension. Let us call it static_int. It shall be parameterized by an integer value. It is like a static version of the value<int> class, but since it is static, holds no data at all. The integer is encoded in its type. Here is the complete class (sample5.cpp):</p>
 <code><pre>
 <span class=keyword>template </span><span class=special><</span><span class=keyword>int </span><span class=identifier>N</span><span class=special>>
 </span><span class=keyword>struct </span><span class=identifier>static_int </span><span class=special>{
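
The idea the paragraph above describes, an integer carried entirely in the type so the object holds no data, can be sketched in plain C++ (illustrative only; the full Phoenix actor boilerplate is in sample5.cpp):

    // static_int<5> has no data members: the integer travels in the type.
    template <int N>
    struct static_int
    {
        // Evaluating the "primitive" returns the compile-time constant.
        constexpr int operator()() const { return N; }
    };

    static_assert(static_int<5>()() == 5, "value recovered from the type");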
@@ -1043,7 +1043,7 @@ functions:
 ]
 
 [tip The easiest way to implement a proper real number formatting policy is
-to derive a new type from the the type `real_policies<>` while overriding
+to derive a new type from the type `real_policies<>` while overriding
 the aspects of the formatting which need to be changed.]
 
 
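
What such a derived formatting policy can look like, following Karma's documented RealPolicies customization points (the overridden members and the generator typedef below are a sketch under that assumption, not this commit's content):

    #include <boost/spirit/include/karma.hpp>
    #include <iostream>
    #include <iterator>
    #include <string>

    namespace karma = boost::spirit::karma;

    // Override only what needs to change: always use fixed notation and
    // emit at most three fractional digits.
    template <typename Num>
    struct fixed3_policy : karma::real_policies<Num>
    {
        typedef karma::real_policies<Num> base_type;
        static int floatfield(Num) { return base_type::fmtflags::fixed; }
        static unsigned precision(Num) { return 3; }
    };

    int main()
    {
        typedef karma::real_generator<double, fixed3_policy<double> > fixed3_type;
        fixed3_type const fixed3 = fixed3_type();

        std::string out;
        karma::generate(std::back_inserter(out), fixed3, 0.125);
        std::cout << out << '\n';   // "0.125"
    }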
@@ -49,7 +49,7 @@ library. The following code snippet shows how this can be done using __lex__.
 [heading Doing the Useful Work]
 
 We will use a setup, where we want the __lex__ library to invoke a given
-function after any of of the generated tokens is recognized. For this reason
+function after any of the generated tokens is recognized. For this reason
 we need to implement a functor taking at least the generated token as an
 argument and returning a boolean value allowing to stop the tokenization
 process. The default token type used in this example carries a token value of
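
The functor shape this paragraph describes can be sketched as follows (the name and the counting logic are invented for illustration; the tutorial defines its own functor):

    #include <cstddef>

    // Invoked by the lexer once per recognized token; returning false
    // stops the tokenization process early.
    struct count_tokens
    {
        typedef bool result_type;

        explicit count_tokens(std::size_t& n) : count(n) {}

        template <typename Token>
        bool operator()(Token const&) const
        {
            ++count;
            return true;    // keep tokenizing
        }

        std::size_t& count;
    };

An instance would then be passed to lex::tokenize(first, last, lexer, count_tokens(n)), which calls it for every token it produces.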
@@ -113,7 +113,7 @@ usage.
 [wcp_grammar_definition]
 
 As already described (see: __sec_attributes__), the __qi__ parser
-library builds upon a set of of fully attributed parser components.
+library builds upon a set of fully attributed parser components.
 Consequently, all token definitions support this attribute model as well. The
 most natural way of implementing this was to use the token values as
 the attributes exposed by the parser component corresponding to the token
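
As an illustration of token values doubling as parser attributes (a sketch; the lexer name and the identifier pattern are assumptions):

    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <string>

    namespace lex = boost::spirit::lex;

    template <typename Lexer>
    struct toy_tokens : lex::lexer<Lexer>
    {
        toy_tokens()
          : identifier("[a-zA-Z_][a-zA-Z0-9_]*")
        {
            this->self = identifier;
        }

        // The token value is a std::string, so the parser component
        // generated for this token exposes std::string as its attribute.
        lex::token_def<std::string> identifier;
    };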
@@ -813,7 +813,7 @@ The optional sign will be automatically applied afterwards.
 [heading `RealPolicies` Specializations]
 
 The easiest way to implement a proper real parsing policy is to derive a
-new type from the the type `real_policies` while overriding the aspects
+new type from the type `real_policies` while overriding the aspects
 of the parsing which need to be changed. For example, here's the
 implementation of the predefined `strict_real_policies`:
 
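
For reference, the predefined strict_real_policies this hunk mentions amounts to the following (reproduced for convenience; the Qi header is authoritative):

    template <typename T>
    struct strict_real_policies : real_policies<T>
    {
        static bool const expect_dot = true;
    };

A parser using it is declared along the lines of qi::real_parser<double, qi::strict_real_policies<double> >, which is how Qi's strict_double is built.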
@@ -106,7 +106,7 @@ int main()
 iterator_type iter = lex.begin(it, str.end());
 iterator_type end = lex.end();
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 // Note, how we use the token_def defined above as the skip parser. It must
 // be explicitly wrapped inside a state directive, switching the lexer
@@ -143,7 +143,7 @@ int main()
 iterator_type iter = tokens.begin(it, str.end());
 iterator_type end = tokens.end();
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 bool r = qi::parse(iter, end, calc);
 
@@ -127,7 +127,7 @@ int main()
 iterator_type iter = tokens.begin(it, str.end());
 iterator_type end = tokens.end();
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 // Note how we use the lexer defined above as the skip parser.
 bool r = qi::phrase_parse(iter, end, calc, qi::in_state("WS")[tokens.self]);
@@ -202,7 +202,7 @@ int main()
 iterator_type iter = tokens.begin(it, str.end());
 iterator_type end = tokens.end();
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 // Note how we use the lexer defined above as the skip parser. It must
 // be explicitly wrapped inside a state directive, switching the lexer
@@ -247,7 +247,7 @@ int main()
 iterator_type iter = tokens.begin(it, str.end());
 iterator_type end = tokens.end();
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 // Note how we use the lexer defined above as the skip parser. It must
 // be explicitly wrapped inside a state directive, switching the lexer
@@ -223,7 +223,7 @@ int main()
 iterator_type iter = tokens.begin(it, str.end());
 iterator_type end = tokens.end();
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 // Note how we use the lexer defined above as the skip parser. It must
 // be explicitly wrapped inside a state directive, switching the lexer
@@ -84,7 +84,7 @@ int main(int argc, char* argv[])
 language_tokens<lexer_type> tokenizer; // Our lexer
 language_grammar<iterator_type> g (tokenizer); // Our parser
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 std::string str ("float f = 3.4\nint i = 6\n");
 base_iterator_type first = str.begin();
@@ -94,7 +94,7 @@ int main(int argc, char* argv[])
 print_numbers_tokenids<lexer_type> print_tokens; // Our lexer
 print_numbers_grammar<iterator_type> print; // Our parser
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
 base_iterator_type first = str.begin();
@@ -91,7 +91,7 @@ int main(int argc, char* argv[])
 print_numbers_tokens<lexer_type> print_tokens; // Our lexer
 print_numbers_grammar<iterator_type> print; // Our parser
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 std::string str (read_from_file(1 == argc ? "print_numbers.input" : argv[1]));
 base_iterator_type first = str.begin();
@@ -103,7 +103,7 @@ int main(int argc, char* argv[])
 char const* first = str.c_str();
 char const* last = &first[str.size()];
 
-// Parsing is done based on the the token stream, not the character stream.
+// Parsing is done based on the token stream, not the character stream.
 bool r = lex::tokenize_and_parse(first, last, word_count, g);
 
 if (r) { // success
@@ -135,7 +135,7 @@ int main(int argc, char* argv[])
 strip_comments_tokens<lexer_type> strip_comments; // Our lexer
 strip_comments_grammar<iterator_type> g (strip_comments); // Our parser
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
 base_iterator_type first = str.begin();
@@ -134,7 +134,7 @@ int main(int argc, char* argv[])
 strip_comments_tokens<lexer_type> strip_comments; // Our lexer
 strip_comments_grammar<iterator_type> g (strip_comments); // Our grammar
 
-// Parsing is done based on the the token stream, not the character
+// Parsing is done based on the token stream, not the character
 // stream read from the input.
 std::string str (read_from_file(1 == argc ? "strip_comments.input" : argv[1]));
 base_iterator_type first = str.begin();
@@ -84,7 +84,7 @@ struct word_count_tokens : lex::lexer<Lexer>
 this->self.add
 (word) // no token id is needed here
 ('\n') // characters are usable as tokens as well
-(".", IDANY) // string literals will not be esacped by the library
+(".", IDANY) // string literals will not be escaped by the library
 ;
 }
 
@@ -146,10 +146,10 @@ int main(int argc, char* argv[])
 char const* first = str.c_str();
 char const* last = &first[str.size()];
 
-/*< Parsing is done based on the the token stream, not the character
+/*< Parsing is done based on the token stream, not the character
 stream read from the input. The function `tokenize_and_parse()` wraps
 the passed iterator range `[first, last)` by the lexical analyzer and
-uses its exposed iterators to parse the toke stream.
+uses its exposed iterators to parse the token stream.
 >*/ bool r = lex::tokenize_and_parse(first, last, word_count, g);
 
 if (r) {
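
Condensed, the pattern this hunk documents looks like the following (read_input() is a hypothetical input source; word_count and g stand in for the example's lexer and grammar):

    std::string str = read_input();        // hypothetical input source
    char const* first = str.c_str();
    char const* last = &first[str.size()];

    // The lexer wraps [first, last) and the parser consumes the
    // resulting token stream rather than raw characters.
    bool r = lex::tokenize_and_parse(first, last, word_count, g);
    if (!r)
        std::cerr << "failed near: " << std::string(first, last) << '\n';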
@@ -15,7 +15,7 @@
 // Spirit predefined maximum number of simultaneously usable different
 // scanner types.
 //
-// This limit defines the maximum number of of possible different scanner
+// This limit defines the maximum number of possible different scanner
 // types for which a specific rule<> may be used. If this isn't defined, a
 // rule<> may be used with one scanner type only (multiple scanner support
 // is disabled).
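
Configuration limits like this one are set before including any Spirit header. A usage sketch, assuming this hunk comes from classic Spirit's rule<> header, whose limit macro is BOOST_SPIRIT_RULE_SCANNERTYPE_LIMIT:

    // Allow each rule<> to be used with up to two different scanner
    // types, e.g. a phrase-level and a character-level scanner.
    #define BOOST_SPIRIT_RULE_SCANNERTYPE_LIMIT 2
    #include <boost/spirit/include/classic_core.hpp>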
@@ -86,7 +86,7 @@ operator,(actor<BaseT0> const& _0, actor<BaseT1> const& _1)
 // is true, the true_statement (again an actor) is executed
 // otherwise, the false_statement (another actor) is executed. The
 // result type of this is void. Note the trailing underscore after
-// if_ and the the leading dot and the trailing underscore before
+// if_ and the leading dot and the trailing underscore before
 // and after .else_.
 //
 ///////////////////////////////////////////////////////////////////////////////
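
In use, the construct this comment describes looks like the following (classic Phoenix; the condition and the counters are invented):

    #include <boost/spirit/include/phoenix1.hpp>
    #include <algorithm>

    using namespace phoenix;

    int main()
    {
        int n_big = 0, n_small = 0;
        int xs[] = { 3, 7, 9 };
        // Note the trailing underscore after if_ and the leading dot and
        // trailing underscore around .else_.
        std::for_each(xs, xs + 3,
            if_(arg1 > 5)
            [
                var(n_big) += 1
            ]
            .else_
            [
                var(n_small) += 1
            ]);
        return n_big - n_small; // 2 - 1
    }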
@@ -26,7 +26,7 @@
 ///////////////////////////////////////////////////////////////////////////////
 //
 // Spirit predefined maximum grammar start parser limit. This limit defines
-// the maximum number of of possible different parsers exposed from a
+// the maximum number of possible different parsers exposed from a
 // particular grammar. This number defaults to 3.
 // The actual maximum is rounded up in multiples of 3. Thus, if this value
 // is 4, the actual limit is 6. The ultimate maximum limit in this
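
A usage sketch, assuming the macro guarded by this comment is BOOST_SPIRIT_GRAMMAR_STARTRULE_TYPE_LIMIT (defined before any Spirit include):

    // Expose up to six start parsers per grammar; per the comment above,
    // a value of 4 would be rounded up to 6 as well.
    #define BOOST_SPIRIT_GRAMMAR_STARTRULE_TYPE_LIMIT 6
    #include <boost/spirit/include/classic_core.hpp>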
@@ -42,7 +42,7 @@
 // 1. Setup
 //
 // Before the rule parser macro (the protagonist of the facility) can be used
-// the the user must define the macro BOOST_SPIRIT__NAMESPACE (note the double
+// the user must define the macro BOOST_SPIRIT__NAMESPACE (note the double
 // underscore characeter) and setup a registration group for Boost.Typeof.
 //
 // Examples:
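
The setup step described here looks roughly like this (the namespace names are placeholders; the exact sequence format follows the examples later in that header):

    // Working inside namespace my_project::my_module:
    #define BOOST_SPIRIT__NAMESPACE (2, (my_project, my_module))

    // Open a Boost.Typeof registration group for this translation unit.
    #include <boost/typeof/typeof.hpp>
    #include BOOST_TYPEOF_INCREMENT_REGISTRATION_GROUP()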
@@ -188,7 +188,7 @@ namespace boost { namespace spirit { namespace karma { namespace detail
 {};
 
 // If both, the containers value type and the exposed attribute type are
-// optionals we are allowed to pass through the the container only if the
+// optionals we are allowed to pass through the container only if the
 // embedded types of those optionals are not compatible.
 template <typename Container, typename ValueType, typename Attribute
 , typename Sequence>
@@ -533,7 +533,7 @@ namespace boost { namespace spirit { namespace lex { namespace lexertl
 base_type;
 
 protected:
-// If no additional token value types are given, the the token will
+// If no additional token value types are given, the token will
 // hold no token value at all as the base class already has the
 // iterator pair of the matched range in the underlying input sequence.
 // Otherwise the token value is stored as a variant and will
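
How the "additional token value types" this comment refers to are supplied in practice (the value types double and std::string are arbitrary examples):

    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <boost/mpl/vector.hpp>
    #include <string>

    namespace lex = boost::spirit::lex;

    typedef std::string::const_iterator base_iterator_type;

    // No extra value types: the token just holds the matched iterator pair.
    typedef lex::lexertl::token<base_iterator_type> plain_token;

    // Extra value types: the token value becomes a variant that can also
    // hold a double or a std::string.
    typedef lex::lexertl::token<
        base_iterator_type, boost::mpl::vector<double, std::string>
    > valued_token;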
@@ -333,7 +333,7 @@ namespace boost { namespace spirit { namespace lex { namespace lexertl
 typedef token<Iterator, lex::omit, HasState, Idtype> base_type;
 
 protected:
-// If no additional token value types are given, the the token will
+// If no additional token value types are given, the token will
 // hold the plain pair of iterators pointing to the matched range
 // in the underlying input sequence. Otherwise the token value is
 // stored as a variant and will again hold the pair of iterators but
@@ -183,7 +183,7 @@ namespace boost { namespace spirit { namespace qi { namespace detail
 {};
 
 // If both, the containers value type and the exposed attribute type are
-// optionals we are allowed to pass through the the container only if the
+// optionals we are allowed to pass through the container only if the
 // embedded types of those optionals are not compatible.
 template <typename Container, typename ValueType, typename Attribute
 , typename Sequence>
@@ -13,7 +13,7 @@
 
 The keyword list operator, `kwd("k1")[a] / kwd("k2")[b]`, works tightly with the kwd, ikwd, dkwd and idkwd directives
 to effeciently match keyword lists. As long as one of the keywords specified through the kwd, ikwd, dkwd or idkwd directive
-matches, the keyword will be immediatly followed by the the keyword's associated subject parser.
+matches, the keyword will be immediatly followed by the keyword's associated subject parser.
 The parser will continue parsing input as long as the one of the keywords and it's associated parser succeed.
 Writing :
 (kwd("k1")[a] / kwd("k2")[b] / ... )
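
A usage sketch of such a keyword list (the keywords, subjects, and surrounding function are invented; the kwd directive and the / operator live in the Spirit repository extensions):

    #include <boost/spirit/include/qi.hpp>
    #include <boost/spirit/repository/include/qi_kwd.hpp>
    #include <boost/spirit/repository/include/qi_keywords.hpp>
    #include <string>

    namespace qi = boost::spirit::qi;
    namespace repo = boost::spirit::repository::qi;

    // Accepts e.g. "name = foo age = 42" with the keywords in any order;
    // each keyword is immediately followed by its subject parser.
    bool parse_keywords(std::string const& input)
    {
        std::string::const_iterator first = input.begin(), last = input.end();
        bool r = qi::phrase_parse(first, last,
            repo::kwd("name")['=' >> +qi::alnum] / repo::kwd("age")['=' >> qi::int_],
            qi::space);
        return r && first == last;
    }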