mirror of https://github.com/boostorg/redis.git synced 2026-01-19 04:42:09 +00:00

Documentation improvements.

This commit is contained in:
Marcelo Zimbres
2022-10-02 14:13:35 +02:00
parent 61f9a29ebc
commit b2c19df113
24 changed files with 521 additions and 480 deletions

BUILD_STATUS.md Normal file

@@ -0,0 +1,4 @@
Branch | GH Actions | codecov.io |
:-------------: | ---------- | ---------- |
[`master`](https://github.com/mzimbres/aedis/tree/master) | [![CI](https://github.com/mzimbres/aedis/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/mzimbres/aedis/actions/workflows/ci.yml) | [![codecov](https://codecov.io/gh/mzimbres/aedis/branch/master/graph/badge.svg)](https://codecov.io/gh/mzimbres/aedis/branch/master)


@@ -1,157 +0,0 @@
# Changelog
## master
* Removes `coalesce_requests` from the `connection::config`; it is
  now a request property, see `request::config::coalesce`.
* Removes `max_read_size` from the `connection::config`. The maximum
read size can be specified now as a parameter of the
`aedis::adapt()` function.
* Removes the `aedis::sync` class, see intro_sync.cpp for how to perform
  synchronous and thread-safe calls. This is possible only in Boost
  1.80 as it requires `boost::asio::deferred`.
* Moves from `boost::optional` to `std::optional`. This is part of
moving to C++17.
* Changes the behaviour of the second `connection::async_run` overload
  so that it always returns an error when the connection is lost.
* Adds TLS support, see intro_tls.cpp for how to use it.
* Adds an example that shows how to resolve addresses over sentinels,
see subscriber_sentinel.cpp.
* Adds a `connection::config::resp3_handshake_timeout`. This is the
  timeout used to send the `HELLO` command.
* Adds `endpoint` where in addition to host and port, users can
optionally provide username, password and the expected server role
(see `error::unexpected_server_role`).
* `connection::async_run` checks whether the server role received in
  the hello command is equal to the expected server role specified in
  `aedis::endpoint`. To skip this check leave the role variable empty.
* Removes reconnect functionality from `aedis::connection`. It
  works for simple reconnection strategies but bloats the class
  in more complex scenarios, for example, with sentinel,
  authentication and TLS. It is trivial to implement in a separate
  coroutine. As a result the enum `event` and `async_receive_event`
  have been removed from the class too.
* Fixes a bug in `connection::async_receive_push` that prevented
  passing any response adapter other than `adapt(std::vector<node>)`.
* Changes the behaviour of `aedis::adapt()` that caused RESP3 errors
  to be ignored. One consequence of this was that `connection::async_run`
  would not exit with failure on servers that required authentication.
* Changes the behaviour of `connection::async_run` that would cause it
to complete with success when an error in the
`connection::async_exec` occurred.
* Ports the buildsystem from autotools to CMake.
## v1.0.0
* Adds experimental cmake support for windows users.
* Adds new class `aedis::sync` that wraps an `aedis::connection` in
a thread-safe and synchronous API. All free functions from the
`sync.hpp` are now member functions of `aedis::sync`.
* Split `aedis::connection::async_receive_event` in two functions, one
to receive events and another for server side pushes, see
`aedis::connection::async_receive_push`.
* Removes collision between `aedis::adapter::adapt` and
`aedis::adapt`.
* Adds `connection::operation` enum to replace `cancel_*` member
functions with a single cancel function that gets the operations
that should be cancelled as argument.
* Bugfix: a bug on reconnect from a state where the `connection` object
had unsent commands. It could cause `async_exec` to never
complete under certain conditions.
* Bugfix: Documentation of the `adapt()` functions was missing from
  Doxygen.
## v0.3.0
* Adds `experimental::exec` and `receive_event` functions to offer a
thread safe and synchronous way of executing requests across
threads. See `intro_sync.cpp` and `subscriber_sync.cpp` for
examples.
* `connection::async_read_push` was renamed to `async_receive_event`.
* `connection::async_receive_event` is now being used to communicate
internal events to the user, such as resolve, connect, push etc. For
examples see subscriber.cpp and `connection::event`.
* The `aedis` directory has been moved to `include` to look more
similar to Boost libraries. Users should now replace `-I/aedis-path`
with `-I/aedis-path/include` in the compiler flags.
* The `AUTH` and `HELLO` commands are now sent automatically. This change was
necessary to implement reconnection. The username and password
used in `AUTH` should be provided by the user on
`connection::config`.
* Adds support for reconnection. See `connection::enable_reconnect`.
* Fixes a bug in the `connection::async_run(host, port)` overload
that was causing crashes on reconnection.
* Fixes the executor usage in the connection class. Before these
  changes it imposed `any_io_executor` on users.
* `connection::async_receive_event` is not cancelled anymore when
  `connection::async_run` exits. This change makes user code simpler.
* `connection::async_exec` with host and port overload has been
removed. Use the other `connection::async_run` overload.
* The host and port parameters of `connection::async_run` have been
  moved to `connection::config` to better support authentication and
  failover.
* Many simplifications in the `chat_room` example.
* Fixes the build with clang compilers and makes some improvements in
  the documentation.
## v0.2.1
* Fixes a bug that happens on very high load.
## v0.2.0
* Major rewrite of the high-level API. There is no longer any need to use the low-level API.
* No more callbacks: Sending requests follows the ASIO asynchronous model.
* Support for reconnection: Pending requests are not canceled when a connection is lost and are re-sent when a new one is established.
* The library no longer sends HELLO-3 on the user's behalf. This is important to support AUTH properly.
## v0.1.2
* Adds reconnect coroutine in the `echo_server` example.
* Corrects `client::async_wait_for_data` to use `make_parallel_group` to launch the operation.
* Improvements in the documentation.
* Avoids dynamic memory allocation in the client class after reconnection.
## v0.1.1
* Improves the documentation and adds some features to the high-level client.
## v0.1.0
* Improvements in the design and documentation.
## v0.0.1
* First release to collect design feedback.


@@ -10,8 +10,8 @@ cmake_minimum_required(VERSION 3.14)
project(
Aedis
VERSION 1.0.0
DESCRIPTION "An async redis client designed for performance and scalability"
VERSION 1.1.0
DESCRIPTION "A redis client designed for performance and scalability"
HOMEPAGE_URL "https://mzimbres.github.io/aedis"
LANGUAGES CXX
)
@@ -63,11 +63,12 @@ add_executable(serialization examples/serialization.cpp)
add_executable(subscriber examples/subscriber.cpp)
add_executable(subscriber_sentinel examples/subscriber_sentinel.cpp)
add_executable(test_low_level tests/low_level.cpp)
add_executable(low_level_sync tests/low_level_sync.cpp)
add_executable(low_level_sync examples/low_level_sync.cpp)
add_executable(test_connection_other tests/connection_other.cpp)
add_executable(test_connection_connect tests/connection_connect.cpp)
add_executable(test_connection_push tests/connection_push.cpp)
add_executable(test_connection_quit tests/connection_quit.cpp)
add_executable(test_connection_quit_coalesce tests/connection_quit_coalesce.cpp)
add_executable(test_connection_reconnect tests/connection_reconnect.cpp)
add_executable(test_connection_tls tests/connection_tls.cpp)
@@ -77,7 +78,6 @@ target_compile_features(subscriber PUBLIC cxx_std_20)
target_compile_features(subscriber_sentinel PUBLIC cxx_std_20)
target_compile_features(test_connection_other PUBLIC cxx_std_20)
target_compile_features(test_connection_push PUBLIC cxx_std_20)
target_compile_features(test_connection_quit PUBLIC cxx_std_20)
target_link_libraries(intro_tls OpenSSL::Crypto OpenSSL::SSL)
target_link_libraries(test_connection_tls OpenSSL::Crypto OpenSSL::SSL)
@@ -96,6 +96,7 @@ add_test(test_connection_other test_connection_other)
add_test(test_connection_connect test_connection_connect)
add_test(test_connection_push test_connection_push)
add_test(test_connection_quit test_connection_quit)
add_test(test_connection_quit_coalesce test_connection_quit_coalesce)
add_test(test_connection_reconnect test_connection_reconnect)
add_test(test_connection_tls test_connection_tls)

README.md

@@ -16,34 +16,38 @@ Some of its distinctive features are
* Health checks, back pressure and low latency.
In addition to that, Aedis hides most of the low-level Asio code away
from the user, which in the majority of the use cases will only interact
with three entities
from the user, which in the majority of the use cases will interact
with only three library entities
* `aedis::connection`: A healthy connection to the Redis server.
* `aedis::connection`: A healthy long-lasting connection to the Redis server.
* `aedis::resp3::request`: A container of Redis commands.
* `aedis::adapt()`: Adapts user data structures like STL containers to
receive Redis responses.
For example, the code below establishes a connection to the Redis
server (see intro.cpp)
Let us see how this works in more detail.
### Connection
The code below will establish a connection with a Redis
server where users can send commands (see intro.cpp)
```cpp
int main()
{
net::io_context ioc;
connection db{ioc};
connection conn{ioc};
db.async_run({"127.0.0.1", "6379"}, [](auto ec) { ... });
conn.async_run({"127.0.0.1", "6379"}, {}, [](auto ec) { ... });
// Pass conn to other operations ...
ioc.run();
}
```
The `connection::async_run` function above completes only when a
connection is lost. Requests can be sent at any
time, whether before or after a connection was
established. For example, the code below sends a `PING` command,
waits for the response and exits
Requests, on the other hand, can be sent at any time, whether before or
after a connection was established. For example, the code below sends
the `PING` and `QUIT` commands, waits for the responses and exits
```cpp
net::awaitable<void> ping(std::shared_ptr<connection> conn)
@@ -51,9 +55,10 @@ net::awaitable<void> ping(std::shared_ptr<connection> conn)
// Request
request req;
req.push("PING", "some message");
req.push("QUIT");
// Response
std::tuple<std::string> resp;
std::tuple<std::string, aedis::ignore> resp;
// Execution
co_await conn->async_exec(req, adapt(resp));
@@ -68,12 +73,10 @@ The structure of how to send commands is evident from the code above
* Execute the request.
Multiple calls to `connection::async_exec` are synchronized
automatically so that different operations (or coroutines)
don't have to be aware of each other.
The implementation also supports server side pushes on the same
connection object that is being used to execute commands, for example,
the coroutine below reads pushes (see subscriber.cpp)
automatically so that different operations (or coroutines) don't have
to be aware of each other. Server side pushes can be received on the
same connection object that is being used to execute commands, for
example (see subscriber.cpp)
```cpp
net::awaitable<void> receive_pushes(connection& db)
@@ -102,7 +105,7 @@ net::awaitable<void> reconnect(std::shared_ptr<connection> db)
endpoint ep{"127.0.0.1", "6379"};
for (;;) {
boost::system::error_code ec;
co_await db->async_run(ep, req, adapt(), net::redirect_error(net::use_awaitable, ec));
co_await db->async_run(ep, req, adapt(), {}, net::redirect_error(net::use_awaitable, ec));
db->reset_stream();
timer.expires_after(std::chrono::seconds{1});
co_await timer.async_wait();
@@ -110,54 +113,88 @@ net::awaitable<void> reconnect(std::shared_ptr<connection> db)
}
```
More complex scenarios, like performing a failover with sentinel, can
be found in the examples. Notice that any calls to
`connection::async_exec` won't automatically fail as a result of a
lost connection; rather, they will remain suspended until a new
connection is established. Once that happens, all requests are sent
automatically. This behaviour can be changed per request by setting
`close_on_connection_lost` on the `aedis::resp3::request` constructor
or by calling `connection::cancel(operation::exec)`, which will cause
all pending requests to be canceled.
be found in the examples. Calls to `connection::async_exec` won't
automatically fail as a result of a lost connection; rather, they will
remain suspended until a new connection is established, after which
all requests are sent automatically. This behaviour can be
changed per request by setting
`aedis::resp3::request::config::close_on_connection_lost` or by calling
`connection::cancel()` with `connection::operation::exec`,
which will cause all pending requests to be canceled.
### Timeouts
The way Aedis deals with timeouts differs to some extent from other
Asio-based libraries the author is aware of. All timeouts that users
need are built into the `aedis::connection` class. The reasons for that
are manifold
Aedis high-level API provides built-in support for most timeouts users
might need. For example, the `aedis::connection::async_run` member
function performs the following operations on behalf of the user
#### Aedis is high-level
The member function `connection::async_run`,
for example, performs the following operations on behalf of the user
* Resolves addresses.
* Resolves the Redis address.
* Connects to the endpoint.
* Performs the TLS handshake (for TLS endpoints).
* Performs the RESP3 handshake.
* Keeps sending PING commands and checking for unresponsive servers.
* TLS handshake (for TLS endpoints).
* RESP3 handshake, authentication and role check.
* Keeps sending PING commands to check for unresponsive servers.
* Keeps reading from the socket to handle server pushes and command responses.
* Keeps writing requests as they come.
* Keeps writing requests as it becomes possible, e.g. after the last response has arrived.
By wrapping all these operations in a single function it becomes
necessary to have built-in support for timeouts.
To control the timeout behaviour of the operations above, users must
create an `aedis::connection::timeouts` object and pass it as an
argument to the `aedis::connection::async_run` member function (or use
the suggested defaults).
#### Pipelines
With the introduction of awaitable operators in Asio it is very simple
to implement timeouts either on individual operations or on groups of
operations. Users, for example, may be tempted to write code like
Another important topic regarding timeouts is the cancellation of
`aedis::connection::async_exec`. With the introduction of awaitable
operators in Asio it is very simple to implement timeouts either on
individual operations or on groups of operations. Users, for example,
may be tempted to write code like
```cpp
co_await (conn.async_exec(...) || timer.async_wait(...))
```
The problem with this approach in Aedis is that, to improve performance, Redis
encourages the use of pipelines, where many requests are sent in a single chunk
to the server. In this scenario it is harder to cancel
The problem with this approach in Aedis is that, to improve
performance, Redis encourages the use of pipelines, where many
requests are sent in a single chunk to the server. In this scenario it
is harder to cancel individual operations without causing all other
(independent) requests in the same pipeline to fail too.
### Installation
Download the latest Aedis release from github
```sh
$ wget https://github.com/mzimbres/aedis/releases/download/v1.1.0/aedis-1.1.0.tar.gz
```
and unpack it in your preferred location. Aedis is a header-only
library, so you can start using it right away. For that, include the
following header
```cpp
#include <aedis/src.hpp>
```
in no more than one source file of your application (see intro.cpp
for an example). To build the examples and run the tests, CMake is
also supported
```sh
$ BOOST_ROOT=/opt/boost_1_79_0/ cmake
$ make
$ make test
```
These are the requirements for using Aedis
- Boost 1.79 or greater.
- C++17. Some examples require C++20 with coroutine support.
- Redis 6 or higher. Optionally also redis-cli and Redis Sentinel.
The following compilers are supported
- Tested with gcc: 10, 11, 12.
- Tested with clang: 11, 13, 14.
### Examples
Users are encouraged to skim over the examples below before proceeding
@@ -166,20 +203,23 @@ to the next sections
* intro.cpp: The Aedis hello-world program. It sends one command to Redis and quits the connection.
* intro_tls.cpp: Same as intro.cpp but over TLS.
* intro_sync.cpp: Synchronous version of intro.cpp.
* intro_sync_tls.cpp: Same as intro_sync.cpp but over TLS.
* containers.cpp: Shows how to send and receive stl containers and how to use transactions.
* serialization.cpp: Shows how to serialize types using Boost.Json.
* subscriber.cpp: Shows how to implement pubsub that reconnects and resubscribes when the connection is lost.
* subscriber_sentinel.cpp: Same as subscriber.cpp but with failover with sentinels.
* subscriber_sync.cpp: Synchronous version of subscriber.cpp.
* echo_server.cpp: A simple TCP echo server.
* chat_room.cpp: A simple chat room.
<a name="requests"></a>
### API Reference
* [High-Level](#high-level-api): Recommended for all users
* [Low-Level](#low-level-api): For users with needs yet to be imagined by the author.
In the next sections we will see how to create requests and receive
responses with more detail
<a name="requests"></a>
## Requests
Redis requests are composed of one or more Redis commands (in
@@ -314,7 +354,7 @@ co_await db->async_exec(req, adapt());
```
Responses that contain nested aggregates or heterogeneous data
types will be given special treatment later in @ref the-general-case. As
types will be given special treatment later in [the-general-case](#the-general-case). As
of this writing, not all RESP3 types are used by the Redis server,
which means in practice users will be concerned with a reduced
subset of the RESP3 specification.
@@ -453,46 +493,6 @@ from Redis with `HGETALL`, some of the options are
In addition to the above users can also use unordered versions of the containers. The same reasoning also applies to sets e.g. `SMEMBERS`.
## Installation
Download the latest Aedis release from github
```sh
$ wget https://github.com/mzimbres/aedis/releases/download/v1.0.0/aedis-1.0.0.tar.gz
```
and unpack it in your preferred location. Aedis is a header-only
library, so you can start using it right away. For that, include the
following header
```cpp
#include <aedis/src.hpp>
```
in no more than one source file of your application (see
intro.cpp for an example). To build the examples, run the tests etc.,
CMake is also supported
```sh
$ BOOST_ROOT=/opt/boost_1_79_0/ cmake
$ make
$ make test
```
Notice you have to specify the compiler flags manually.
These are the requirements for using Aedis
- Boost 1.78 or greater.
- C++17. Some examples require C++20 with coroutine support.
- Redis 6 or higher. Optionally also redis-cli and Redis Sentinel.
The following compilers are supported
- Tested with gcc: 10, 11, 12.
- Tested with clang: 11, 13, 14.
## Why Aedis
At the time of this writing there are seventeen Redis clients
@@ -503,11 +503,6 @@ to compare Aedis with the most popular clients and why we need
Aedis. Notice however that this is ongoing work, as comparing
clients objectively is difficult and time-consuming.
The most popular client at the moment of this writing ranked by
github stars is
* https://github.com/sewenew/redis-plus-plus
Before we start it is worth mentioning some of the things it does
not support
@@ -519,7 +514,12 @@ not support
The remaining points will be addressed individually.
## redis-plus-plus
### Redis-plus-plus
The most popular client at the moment of this writing ranked by
github stars is
* https://github.com/sewenew/redis-plus-plus
Let us first have a look at what sending a command, a pipeline and a
transaction looks like
@@ -598,7 +598,6 @@ Future<string> ping_res = async_redis.ping();
cout << ping_res.get() << endl;
```
As the reader can see, the async interface is based on futures,
which are also known to have poor performance. The biggest
problem with this async design, however, is that it makes it
@@ -608,23 +607,257 @@ enqueueing a message and triggering a write when it can be sent.
It is also not clear how pipelines are realised with this design
(if at all).
## Build status
### Benchmark: Echo server
Branch | GH Actions | codecov.io |
:-------------: | ---------- | ---------- |
[`master`](https://github.com/mzimbres/aedis/tree/master) | [![CI](https://github.com/mzimbres/aedis/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/mzimbres/aedis/actions/workflows/ci.yml) | [![codecov](https://codecov.io/gh/mzimbres/aedis/branch/master/graph/badge.svg)](https://codecov.io/gh/mzimbres/aedis/branch/master)
This document benchmarks the performance of TCP echo servers I
implemented in different languages using different Redis clients. The
main motivations for choosing an echo server are
## Reference
* Simple to implement and does not require expert-level knowledge of most languages.
* I/O bound: Echo servers have very low CPU consumption in general
and are therefore excellent for measuring how a program handles concurrent requests.
* It simulates a typical backend well in regard to concurrency.
See [Reference](#any)
I also imposed some constraints on the implementations
* It should be simple enough and not require writing too much code.
* Favor the use of standard idioms and avoid optimizations that require expert-level knowledge.
* Avoid the use of complex things like connection and thread pools.
To reproduce these results run one of the echo-server programs in one
terminal and the
[echo-server-client](https://github.com/mzimbres/aedis/blob/42880e788bec6020dd018194075a211ad9f339e8/benchmarks/cpp/asio/echo_server_client.cpp)
in another.
#### Without Redis
First I tested a pure TCP echo server, i.e. one that sends the messages
directly to the client without interacting with Redis. The result can
be seen below
![](https://mzimbres.github.io/aedis/tcp-echo-direct.png)
The tests were performed with 1000 concurrent TCP connections on
localhost, where latency is 0.07ms on average on my machine. On higher
latency networks the difference among libraries is expected to
decrease.
* I expected Libuv to have similar performance to Asio and Tokio.
* I did expect nodejs to come a little behind given it is
  javascript code. Otherwise I did expect it to have similar
  performance to libuv since it is the framework behind it.
* Go did surprise me: faster than nodejs and libuv!
The code used in the benchmarks can be found at
* [Asio](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/cpp/asio/echo_server_direct.cpp): A variation of [this](https://github.com/chriskohlhoff/asio/blob/4915cfd8a1653c157a1480162ae5601318553eb8/asio/src/examples/cpp20/coroutines/echo_server.cpp) Asio example.
* [Libuv](https://github.com/mzimbres/aedis/tree/835a1decf477b09317f391eddd0727213cdbe12b/benchmarks/c/libuv): Taken from [this](https://github.com/libuv/libuv/blob/06948c6ee502862524f233af4e2c3e4ca876f5f6/docs/code/tcp-echo-server/main.c) Libuv example.
* [Tokio](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/rust/echo_server_direct): Taken from [here](https://docs.rs/tokio/latest/tokio/).
* [Nodejs](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/nodejs/echo_server_direct)
* [Go](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/go/echo_server_direct.go)
#### With Redis
This is similar to the echo server described above but messages are
echoed by Redis and not by the echo-server itself, which acts
as a proxy between the client and the Redis server. The results
can be seen below
![](https://mzimbres.github.io/aedis/tcp-echo-over-redis.png)
The tests were performed on a network where latency is 35ms on
average, otherwise it uses the same number of TCP connections
as the previous example.
As the reader can see, the Libuv and Rust tests are not depicted
in the graph; the reasons are
* [redis-rs](https://github.com/redis-rs/redis-rs): This client
  comes so far behind that it can't even be represented together
  with the other benchmarks without making them look insignificant.
  I don't know for sure why it is so slow; I suppose it has
  something to do with its lack of proper
  [pipelining](https://redis.io/docs/manual/pipelining/) support.
  In fact, the more TCP connections I launch the worse its
  performance gets.
* Libuv: I left it out because it would require too much work to
  write and to achieve good performance. More specifically,
  I would have to use hiredis and implement support for pipelines
  manually.
The code used in the benchmarks can be found at
* [Aedis](https://github.com/mzimbres/aedis): [code](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/examples/echo_server.cpp)
* [node-redis](https://github.com/redis/node-redis): [code](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/nodejs/echo_server_over_redis)
* [go-redis](https://github.com/go-redis/redis): [code](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/go/echo_server_over_redis.go)
## Changelog
### v1.1.0
* Removes `coalesce_requests` from the `aedis::connection::config`; it
  is now a request property, see `aedis::resp3::request::config::coalesce`.
* Removes `max_read_size` from the `aedis::connection::config`. The maximum
read size can be specified now as a parameter of the
`aedis::adapt()` function.
* Removes the `aedis::sync` class, see intro_sync.cpp for how to perform
  synchronous and thread-safe calls. This is possible only in Boost
  1.80 as it requires `boost::asio::deferred`.
* Moves from `boost::optional` to `std::optional`. This is part of
moving to C++17.
* Changes the behaviour of the second `aedis::connection::async_run` overload
so that it always returns an error when the connection is lost.
* Adds TLS support, see intro_tls.cpp.
* Adds an example that shows how to resolve addresses over sentinels,
see subscriber_sentinel.cpp.
* Adds an `aedis::connection::timeouts::resp3_handshake_timeout`. This is the
  timeout used to send the `HELLO` command.
* Adds `aedis::endpoint` where in addition to host and port, users can
optionally provide username, password and the expected server role
(see `aedis::error::unexpected_server_role`).
* `aedis::connection::async_run` checks whether the server role received in
  the hello command is equal to the expected server role specified in
  `aedis::endpoint`. To skip this check leave the role variable empty.
* Removes reconnect functionality from `aedis::connection`. It
  works for simple reconnection strategies but bloats the class
  in more complex scenarios, for example, with sentinel,
  authentication and TLS. It is trivial to implement in a separate
  coroutine. As a result the enum `event` and `async_receive_event`
  have been removed from the class too.
* Fixes a bug in `connection::async_receive_push` that prevented
  passing any response adapter other than `adapt(std::vector<node>)`.
* Changes the behaviour of `aedis::adapt()` that caused RESP3 errors
  to be ignored. One consequence of this was that `connection::async_run`
  would not exit with failure on servers that required authentication.
* Changes the behaviour of `connection::async_run` that would cause it
to complete with success when an error in the
`connection::async_exec` occurred.
* Ports the buildsystem from autotools to CMake.
### v1.0.0
* Adds experimental cmake support for windows users.
* Adds new class `aedis::sync` that wraps an `aedis::connection` in
a thread-safe and synchronous API. All free functions from the
`sync.hpp` are now member functions of `aedis::sync`.
* Split `aedis::connection::async_receive_event` in two functions, one
to receive events and another for server side pushes, see
`aedis::connection::async_receive_push`.
* Removes collision between `aedis::adapter::adapt` and
`aedis::adapt`.
* Adds `connection::operation` enum to replace `cancel_*` member
functions with a single cancel function that gets the operations
that should be cancelled as argument.
* Bugfix: a bug on reconnect from a state where the `connection` object
had unsent commands. It could cause `async_exec` to never
complete under certain conditions.
* Bugfix: Documentation of the `adapt()` functions was missing from
  Doxygen.
### v0.3.0
* Adds `experimental::exec` and `receive_event` functions to offer a
thread safe and synchronous way of executing requests across
threads. See `intro_sync.cpp` and `subscriber_sync.cpp` for
examples.
* `connection::async_read_push` was renamed to `async_receive_event`.
* `connection::async_receive_event` is now being used to communicate
internal events to the user, such as resolve, connect, push etc. For
examples see subscriber.cpp and `connection::event`.
* The `aedis` directory has been moved to `include` to look more
similar to Boost libraries. Users should now replace `-I/aedis-path`
with `-I/aedis-path/include` in the compiler flags.
* The `AUTH` and `HELLO` commands are now sent automatically. This change was
necessary to implement reconnection. The username and password
used in `AUTH` should be provided by the user on
`connection::config`.
* Adds support for reconnection. See `connection::enable_reconnect`.
* Fixes a bug in the `connection::async_run(host, port)` overload
that was causing crashes on reconnection.
* Fixes the executor usage in the connection class. Before these
  changes it imposed `any_io_executor` on users.
* `connection::async_receive_event` is not cancelled anymore when
  `connection::async_run` exits. This change makes user code simpler.
* `connection::async_exec` with host and port overload has been
removed. Use the other `connection::async_run` overload.
* The host and port parameters of `connection::async_run` have been
  moved to `connection::config` to better support authentication and
  failover.
* Many simplifications in the `chat_room` example.
* Fixes the build with clang compilers and makes some improvements in
  the documentation.
### v0.2.1
* Fixes a bug that happens on very high load.
### v0.2.0
* Major rewrite of the high-level API. There is no longer any need to use the low-level API.
* No more callbacks: Sending requests follows the ASIO asynchronous model.
* Support for reconnection: Pending requests are not canceled when a connection is lost and are re-sent when a new one is established.
* The library no longer sends HELLO-3 on the user's behalf. This is important to support AUTH properly.
### v0.1.2
* Adds reconnect coroutine in the `echo_server` example.
* Corrects `client::async_wait_for_data` to use `make_parallel_group` to launch the operation.
* Improvements in the documentation.
* Avoids dynamic memory allocation in the client class after reconnection.
### v0.1.1
* Improves the documentation and adds some features to the high-level client.
### v0.1.0
* Improvements in the design and documentation.
### v0.0.1
* First release to collect design feedback.
## Acknowledgement
Some people that were helpful in the development of Aedis
Acknowledgement to people that helped shape Aedis in one way or
another.
* Richard Hodges ([madmongo1](https://github.com/madmongo1)): For very helpful support with Asio and the design of asynchronous programs in general.
* Vinícius dos Santos Oliveira ([vinipsmaker](https://github.com/vinipsmaker)): For useful discussion about how Aedis consumes buffers in the read operation (among other things).
* Richard Hodges ([madmongo1](https://github.com/madmongo1)): For very helpful support with Asio, the design of asynchronous programs, etc.
* Vinícius dos Santos Oliveira ([vinipsmaker](https://github.com/vinipsmaker)): For useful discussion about how Aedis consumes buffers in the read operation.
* Petr Dannhofer ([Eddie-cz](https://github.com/Eddie-cz)): For helping me understand how the `AUTH` and `HELLO` command can influence each other.
* Mohammad Nejati ([ashtum](https://github.com/ashtum)): For pointing scenarios where calls to `async_exec` should fail when the connection is lost.
* Klemens Morgenstern ([klemens-morgenstern](https://github.com/klemens-morgenstern)): For useful discussion about timeouts, the synchronous interface and general help with Asio.

View File

@@ -1,87 +0,0 @@
# TCP echo server performance
This document benchmarks the performance of TCP echo servers I
implemented in different languages using different Redis clients. The
main motivations for choosing an echo server are
* Simple to implement and does not require expert-level knowledge in most languages.
* I/O bound: Echo servers have very low CPU consumption in general
and are therefore excellent for measuring how a program handles concurrent requests.
* It simulates a typical backend well with regard to concurrency.
I also imposed some constraints on the implementations
* It should be simple enough and not require writing too much code.
* Favor the use of standard idioms and avoid optimizations that require expert-level knowledge.
* Avoid the use of complex machinery such as connection and thread pools.
## No Redis
First I tested a pure TCP echo server, i.e. one that sends messages
directly back to the client without interacting with Redis. The results
can be seen below
![](https://mzimbres.github.io/aedis/tcp-echo-direct.png)
The tests were performed with 1000 concurrent TCP connections on
localhost, where latency is 0.07ms on average on my machine. On
higher-latency networks the difference among libraries is expected to
decrease.
### Remarks
* I expected Libuv to have performance similar to Asio and Tokio.
* I did expect nodejs to come a little behind given that it is
javascript code. Otherwise I expected it to have performance similar
to libuv, since that is the framework behind it.
* Go did surprise me: faster than nodejs and libuv!
The code used in the benchmarks can be found at
* [Asio](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/cpp/asio/echo_server_direct.cpp): A variation of [this](https://github.com/chriskohlhoff/asio/blob/4915cfd8a1653c157a1480162ae5601318553eb8/asio/src/examples/cpp20/coroutines/echo_server.cpp) Asio example.
* [Libuv](https://github.com/mzimbres/aedis/tree/835a1decf477b09317f391eddd0727213cdbe12b/benchmarks/c/libuv): Adapted from [this](https://github.com/libuv/libuv/blob/06948c6ee502862524f233af4e2c3e4ca876f5f6/docs/code/tcp-echo-server/main.c) Libuv example.
* [Tokio](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/rust/echo_server_direct): Taken from [here](https://docs.rs/tokio/latest/tokio/).
* [Nodejs](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/nodejs/echo_server_direct)
* [Go](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/go/echo_server_direct.go)
## Echo over Redis
This is similar to the echo server described above, but messages are
echoed by Redis and not by the echo server itself, which acts
as a proxy between the client and the Redis server. The results
can be seen below
![](https://mzimbres.github.io/aedis/tcp-echo-over-redis.png)
The tests were performed on a network where latency is 35ms on
average; otherwise the setup uses the same number of TCP connections
as the previous example.
### Remarks
As the reader can see, the Libuv and Rust tests are not depicted
in the graph. The reasons are
* [redis-rs](https://github.com/redis-rs/redis-rs): This client
comes so far behind that it can't even be represented together
with the other benchmarks without making them look insignificant.
I don't know for sure why it is so slow; I suppose it has
something to do with its lack of proper
[pipelining](https://redis.io/docs/manual/pipelining/) support.
In fact, the more TCP connections I launch, the worse its
performance gets.
* Libuv: I left it out because it would require too much work to
achieve good performance. More specifically, I would have to use
hiredis and implement support for pipelines manually.
The code used in the benchmarks can be found at
* [Aedis](https://github.com/mzimbres/aedis): [code](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/examples/echo_server.cpp)
* [node-redis](https://github.com/redis/node-redis): [code](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/nodejs/echo_server_over_redis)
* [go-redis](https://github.com/go-redis/redis): [code](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/go/echo_server_over_redis.go)
## Running the benchmarks
Run one of the echo-server programs in one terminal and the [echo-server-client](https://github.com/mzimbres/aedis/blob/42880e788bec6020dd018194075a211ad9f339e8/benchmarks/cpp/asio/echo_server_client.cpp) in another.

View File

@@ -12,9 +12,14 @@
#include <aedis/connection.hpp>
#include <aedis/resp3/request.hpp>
/** @defgroup any Reference
/** @defgroup high-level-api Reference
*
* This page contains the documentation of all user facing code.
* This page contains the documentation of the Aedis high-level API.
*/
/** @defgroup low-level-api Reference
*
* This page contains the documentation of the Aedis low-level API.
*/
#endif // AEDIS_HPP

View File

@@ -22,7 +22,7 @@
namespace aedis {
/** @brief Tag used to ignore responses.
* @ingroup any
* @ingroup high-level-api
*
* For example
*
@@ -188,7 +188,7 @@ auto make_adapter_wrapper(Adapter adapter)
} // detail
/** @brief Creates an adapter that ignores responses.
* @ingroup any
* @ingroup high-level-api
*
* This function can be used to create adapters that ignore
* responses.
@@ -202,7 +202,7 @@ inline auto adapt(std::size_t max_read_size = (std::numeric_limits<std::size_t>:
}
/** @brief Adapts a type to be used as a response.
* @ingroup any
* @ingroup high-level-api
*
* The type T can be any STL container, any integer type and
* \c std::string

View File

@@ -14,9 +14,8 @@ namespace aedis::adapter {
template <class T>
using adapter_t = typename detail::adapter_t<T>;
/** \internal
\brief Creates a dummy response adapter.
\ingroup any
/** \brief Creates a dummy response adapter.
\ingroup low-level-api
The adapter returned by this function ignores responses. It is
useful to avoid wasting time with responses which are not needed.
@@ -41,9 +40,8 @@ inline
auto adapt2() noexcept
{ return detail::response_traits<void>::adapt(); }
/** \internal
* \brief Adapts user data to read operations.
* \ingroup any
/** \brief Adapts user data to read operations.
* \ingroup low-level-api
*
* STL containers, \c std::tuple and built-in types are supported and
* can be used in conjunction with \c std::optional<T>.

View File

@@ -11,12 +11,12 @@
#include <memory>
#include <boost/asio/io_context.hpp>
#include <aedis/connection_base.hpp>
#include <aedis/detail/connection_base.hpp>
namespace aedis {
/** @brief A connection to the Redis server.
* @ingroup any
* @ingroup high-level-api
*
* This class keeps a healthy connection to the Redis instance where
* commands can be sent at any time. For more details, please see the
@@ -31,7 +31,7 @@ namespace aedis {
*/
template <class AsyncReadWriteStream = boost::asio::ip::tcp::socket>
class connection :
private connection_base<
private detail::connection_base<
typename AsyncReadWriteStream::executor_type,
connection<AsyncReadWriteStream>> {
public:
@@ -40,12 +40,9 @@ public:
/// Type of the next layer
using next_layer_type = AsyncReadWriteStream;
using base_type = connection_base<executor_type, connection<AsyncReadWriteStream>>;
using base_type = detail::connection_base<executor_type, connection<AsyncReadWriteStream>>;
/// List of operations that can be canceled.
using operation = typename base_type::operation;
/** @brief Connection configuration parameters.
/** \brief Connection configuration parameters.
*/
struct timeouts {
/// Timeout of the resolve operation.
@@ -268,7 +265,7 @@ public:
private:
using this_type = connection<next_layer_type>;
template <class, class> friend class connection_base;
template <class, class> friend class detail::connection_base;
template <class, class> friend struct detail::exec_read_op;
template <class, class> friend struct detail::exec_op;
template <class, class> friend struct detail::receive_push_op;

View File

@@ -21,11 +21,12 @@
#include <boost/asio/experimental/channel.hpp>
#include <aedis/adapt.hpp>
#include <aedis/operation.hpp>
#include <aedis/endpoint.hpp>
#include <aedis/resp3/request.hpp>
#include <aedis/detail/connection_ops.hpp>
namespace aedis {
namespace aedis::detail {
/** Base class for high level Redis asynchronous connections.
*
@@ -42,20 +43,6 @@ public:
using executor_type = Executor;
using this_type = connection_base<Executor, Derived>;
/** @brief List of async operations exposed by this class.
*
* The operations listed below can be cancelled with the `cancel`
* member function.
*/
enum class operation {
/// Refers to `connection::async_exec` operations.
exec,
/// Refers to `connection::async_run` operations.
run,
/// Refers to `connection::async_receive_push` operations.
receive_push,
};
explicit connection_base(executor_type ex)
: resv_{ex}
, ping_timer_{ex}

View File

@@ -106,12 +106,12 @@ struct receive_push_op {
conn->make_dynamic_buffer(adapter.get_max_read_size(0)),
adapter, std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
// Needed to cancel the channel, otherwise the read
// operation will be blocked forever see
// test_push_adapter.
conn->cancel(Conn::operation::receive_push);
conn->cancel(operation::receive_push);
self.complete(ec, 0);
return;
}
@@ -159,7 +159,7 @@ struct exec_read_op {
conn->make_dynamic_buffer(),
"\r\n", std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec, 0);
return;
}
@@ -191,7 +191,7 @@ struct exec_read_op {
++index;
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec, 0);
return;
}
@@ -311,7 +311,7 @@ struct ping_op {
yield
conn->ping_timer_.async_wait(std::move(self));
if (ec || !conn->is_open()) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -321,7 +321,7 @@ struct ping_op {
yield
conn->async_exec(conn->req_, adapt(), std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete({});
return;
}
@@ -344,7 +344,7 @@ struct check_idle_op {
yield
conn->check_idle_timer_.async_wait(std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete({});
return;
}
@@ -357,7 +357,7 @@ struct check_idle_op {
auto const now = std::chrono::steady_clock::now();
if (conn->last_data_ + (2 * ping_interval) < now) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(error::idle_timeout);
return;
}
@@ -421,7 +421,7 @@ struct run_op {
yield
conn->async_resolve_with_timeout(ts.resolve_timeout, std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -429,7 +429,7 @@ struct run_op {
yield
conn->derived().async_connect(conn->endpoints_, ts, conn->ping_timer_, std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -448,7 +448,7 @@ struct run_op {
);
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -456,7 +456,7 @@ struct run_op {
conn->ep_.password.clear();
if (!conn->expect_role(conn->ep_.role)) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(error::unexpected_server_role);
return;
}
@@ -507,7 +507,7 @@ struct writer_op {
yield
conn->writer_timer_.async_wait(std::move(self));
if (ec != boost::asio::error::operation_aborted) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -548,7 +548,7 @@ struct reader_op {
conn->make_dynamic_buffer(),
"\r\n", std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -579,7 +579,7 @@ struct reader_op {
yield
async_send_receive(conn->push_channel_, std::move(self));
if (ec) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}
@@ -592,7 +592,7 @@ struct reader_op {
conn->read_timer_.async_wait(std::move(self));
if (ec != boost::asio::error::operation_aborted ||
!conn->is_open()) {
conn->cancel(Conn::operation::run);
conn->cancel(operation::run);
self.complete(ec);
return;
}

View File

@@ -11,7 +11,8 @@
namespace aedis {
/** @brief A Redis endpoint.
/** \brief A Redis endpoint.
* \ingroup high-level-api
*/
struct endpoint {
/// Redis server address.

View File

@@ -12,7 +12,7 @@
namespace aedis {
/** \brief Generic errors.
* \ingroup any
* \ingroup high-level-api
*/
enum class error
{

View File

@@ -0,0 +1,29 @@
/* Copyright (c) 2018-2022 Marcelo Zimbres Silva (mzimbres@gmail.com)
*
* Distributed under the Boost Software License, Version 1.0. (See
* accompanying file LICENSE.txt)
*/
#ifndef AEDIS_OPERATION_HPP
#define AEDIS_OPERATION_HPP
namespace aedis {
/** \brief Connection operations that can be cancelled.
* \ingroup high-level-api
*
* The operations listed below can be passed to the
* `aedis::connection::cancel` member function.
*/
enum class operation {
/// Refers to `connection::async_exec` operations.
exec,
/// Refers to `connection::async_run` operations.
run,
/// Refers to `connection::async_receive_push` operations.
receive_push,
};
} // aedis
#endif // AEDIS_OPERATION_HPP

View File

@@ -15,7 +15,7 @@
namespace aedis::resp3 {
/** \brief A node in the response tree.
* \ingroup any
* \ingroup high-level-api
*
* Redis responses are the pre-order view of the response tree (see
* https://en.wikipedia.org/wiki/Tree_traversal#Pre-order,_NLR).

View File

@@ -17,9 +17,8 @@
namespace aedis::resp3 {
/** \internal
* \brief Reads a complete response to a command synchronously.
* \ingroup any
/** \brief Reads a complete response to a command synchronously.
* \ingroup low-level-api
*
* This function reads a complete response to a command or a
* server push synchronously. For example
@@ -40,7 +39,7 @@ namespace aedis::resp3 {
*
* \param stream The stream from which to read e.g. a tcp socket.
* \param buf Dynamic buffer (version 2).
* \param adapter The response adapter, see more on \ref low-level-responses.
* \param adapter The response adapter.
* \param ec If an error occurs, it will be assigned to this parameter.
* \returns The number of bytes that have been consumed from the dynamic buffer.
*
@@ -93,9 +92,8 @@ read(
return consumed;
}
/** \internal
* \brief Reads a complete response to a command synchronously.
* \ingroup any
/** \brief Reads a complete response to a command synchronously.
* \ingroup low-level-api
*
* Same as the error_code overload but throws on error.
*/
@@ -118,9 +116,8 @@ read(
return n;
}
/** \internal
* \brief Reads a complete response to a Redis command asynchronously.
* \ingroup any
/** \brief Reads a complete response to a Redis command asynchronously.
* \ingroup low-level-api
*
* This function reads a complete response to a command or a
* server push asynchronously. For example
@@ -142,7 +139,7 @@ read(
*
* \param stream The stream from which to read e.g. a tcp socket.
* \param buffer Dynamic buffer (version 2).
* \param adapter The response adapter, see more on \ref low-level-responses.
* \param adapter The response adapter.
* \param token The completion token.
*
* The completion handler will receive as a parameter the total

View File

@@ -149,7 +149,7 @@ void add_separator(Request& to)
} // detail
/** @brief Creates Redis requests.
* @ingroup any
* @ingroup high-level-api
*
* A request is composed of one or more Redis commands and is
* referred to in the redis documentation as a pipeline, see

View File

@@ -14,7 +14,7 @@
namespace aedis::resp3 {
/** \brief RESP3 data types.
\ingroup any
\ingroup high-level-api
The RESP3 specification can be found at https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md.
*/
@@ -56,13 +56,13 @@ enum class type
};
/** \brief Converts the data type to a string.
* \ingroup any
* \ingroup high-level-api
* \param t RESP3 type.
*/
auto to_string(type t) -> char const*;
/** \brief Writes the type to the output stream.
* \ingroup any
* \ingroup high-level-api
* \param os Output stream.
* \param t RESP3 type.
*/

View File

@@ -11,6 +11,9 @@
namespace aedis::resp3 {
/** \brief Writes a request synchronously.
* \ingroup low-level-api
*/
template<
class SyncWriteStream,
class Request
@@ -32,6 +35,9 @@ auto write(
return boost::asio::write(stream, boost::asio::buffer(req.payload()), ec);
}
/** \brief Writes a request asynchronously.
* \ingroup low-level-api
*/
template<
class AsyncWriteStream,
class Request,

View File

@@ -11,7 +11,7 @@
#include <memory>
#include <boost/asio/io_context.hpp>
#include <aedis/connection_base.hpp>
#include <aedis/detail/connection_base.hpp>
#include <aedis/ssl/detail/connection_ops.hpp>
namespace aedis::ssl {
@@ -19,8 +19,8 @@ namespace aedis::ssl {
template <class>
class connection;
/** @brief A SSL connection to the Redis server.
* @ingroup any
/** \brief A SSL connection to the Redis server.
* \ingroup high-level-api
*
* This class keeps a healthy connection to the Redis instance where
* commands can be sent at any time. For more details, please see the
@@ -35,7 +35,7 @@ class connection;
*/
template <class AsyncReadWriteStream>
class connection<boost::asio::ssl::stream<AsyncReadWriteStream>> :
private connection_base<
private aedis::detail::connection_base<
typename boost::asio::ssl::stream<AsyncReadWriteStream>::executor_type,
connection<boost::asio::ssl::stream<AsyncReadWriteStream>>> {
public:
@@ -44,12 +44,9 @@ public:
/// Executor type.
using executor_type = typename next_layer_type::executor_type;
using base_type = connection_base<executor_type, connection<boost::asio::ssl::stream<AsyncReadWriteStream>>>;
using base_type = aedis::detail::connection_base<executor_type, connection<boost::asio::ssl::stream<AsyncReadWriteStream>>>;
/// List of operations that can be canceled.
using operation = typename base_type::operation;
/** @brief Connection configuration parameters.
/** \brief Connection configuration parameters.
*/
struct timeouts {
/// Timeout of the resolve operation.
@@ -165,7 +162,7 @@ public:
private:
using this_type = connection<next_layer_type>;
template <class, class> friend class aedis::connection_base;
template <class, class> friend class aedis::detail::connection_base;
template <class, class> friend struct aedis::detail::exec_op;
template <class, class> friend struct detail::ssl_connect_with_timeout_op;
template <class> friend struct aedis::detail::run_op;

View File

@@ -20,6 +20,7 @@ namespace net = boost::asio;
using aedis::resp3::request;
using aedis::adapt;
using aedis::endpoint;
using aedis::operation;
using connection = aedis::connection<>;
using error_code = boost::system::error_code;
using net::experimental::as_tuple;
@@ -144,7 +145,7 @@ void test_push_is_received1(bool coalesce)
endpoint ep{"127.0.0.1", "6379"};
db->async_run(ep, req, adapt(), {}, [db](auto ec, auto){
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
db->cancel(connection::operation::receive_push);
db->cancel(operation::receive_push);
});
bool push_received = false;
@@ -186,7 +187,7 @@ void test_push_is_received2(bool coalesce)
endpoint ep{"127.0.0.1", "6379"};
db->async_run(ep, {}, [db](auto ec, auto...) {
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
db->cancel(connection::operation::receive_push);
db->cancel(operation::receive_push);
});
bool push_received = false;
@@ -244,7 +245,7 @@ void test_push_many_subscribes(bool coalesce)
endpoint ep{"127.0.0.1", "6379"};
db->async_run(ep, {}, [db](auto ec, auto...) {
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
db->cancel(connection::operation::receive_push);
db->cancel(operation::receive_push);
});
net::co_spawn(ioc.get_executor(), push_consumer3(db), net::detached);

View File

@@ -21,6 +21,7 @@ using aedis::endpoint;
using aedis::resp3::request;
using connection = aedis::connection<>;
using error_code = boost::system::error_code;
using operation = aedis::operation;
// Test if quit causes async_run to exit.
BOOST_AUTO_TEST_CASE(test_quit_no_coalesce)
@@ -34,38 +35,6 @@ BOOST_AUTO_TEST_CASE(test_quit_no_coalesce)
request req2{{false, false}};
req2.push("QUIT");
db->async_exec(req1, adapt(), [](auto ec, auto){ BOOST_TEST(!ec); });
db->async_exec(req2, adapt(), [](auto ec, auto){ BOOST_TEST(!ec); });
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
});
endpoint ep{"127.0.0.1", "6379"};
db->async_run(ep, {}, [db](auto ec){
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
db->cancel(connection::operation::exec);
});
ioc.run();
}
BOOST_AUTO_TEST_CASE(test_quit_coalesce)
{
net::io_context ioc;
auto db = std::make_shared<connection>(ioc);
request req1;
req1.push("PING");
request req2;
req2.push("QUIT");
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_TEST(!ec);
});
@@ -73,7 +42,10 @@ BOOST_AUTO_TEST_CASE(test_quit_coalesce)
BOOST_TEST(!ec);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
@@ -81,8 +53,8 @@ BOOST_AUTO_TEST_CASE(test_quit_coalesce)
endpoint ep{"127.0.0.1", "6379"};
db->async_run(ep, {}, [db](auto ec){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
db->cancel(connection::operation::exec);
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
db->cancel(operation::exec);
});
ioc.run();

View File

@@ -0,0 +1,57 @@
/* Copyright (c) 2018-2022 Marcelo Zimbres Silva (mzimbres@gmail.com)
*
* Distributed under the Boost Software License, Version 1.0. (See
* accompanying file LICENSE.txt)
*/
#include <iostream>
#include <boost/asio.hpp>
#include <boost/system/errc.hpp>
#define BOOST_TEST_MODULE low level
#include <boost/test/included/unit_test.hpp>
#include <aedis.hpp>
#include <aedis/src.hpp>
namespace net = boost::asio;
using aedis::adapt;
using aedis::endpoint;
using aedis::resp3::request;
using connection = aedis::connection<>;
using error_code = boost::system::error_code;
using operation = aedis::operation;
BOOST_AUTO_TEST_CASE(test_quit_coalesce)
{
net::io_context ioc;
auto db = std::make_shared<connection>(ioc);
request req1{{false, true}};
req1.push("PING");
request req2{{false, true}};
req2.push("QUIT");
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_TEST(!ec);
});
db->async_exec(req2, adapt(), [](auto ec, auto){
BOOST_TEST(!ec);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, net::error::misc_errors::eof);
});
db->async_exec(req1, adapt(), [](auto ec, auto){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
});
endpoint ep{"127.0.0.1", "6379"};
db->async_run(ep, {}, [db](auto ec){
BOOST_CHECK_EQUAL(ec, boost::system::errc::errc_t::operation_canceled);
db->cancel(operation::exec);
});
ioc.run();
}