Documentation
Overview
Aedis is a Redis client library built on top of Asio that implements the latest version of the Redis communication protocol, RESP3. It makes communication with a Redis server easy by hiding most of the low-level Asio-related code from the user, who in the majority of cases will be concerned with only three library entities:
- aedis::connection: A connection to the Redis server.
- aedis::resp3::request: A container of Redis commands.
- aedis::adapt(): Adapts data structures to receive responses.
For example, the coroutine below uses a short-lived connection to read a Redis
hash into a std::map (see intro.cpp and containers.cpp)
auto async_main() -> net::awaitable<void>
{
auto conn = std::make_shared<connection>(co_await net::this_coro::executor);
// From examples/common.hpp to avoid verbosity
co_await connect(conn, "127.0.0.1", "6379");
// A request contains multiple commands.
resp3::request req;
req.push("HELLO", 3);
req.push("HGETALL", "hset-key");
req.push("QUIT");
// Responses as tuple elements.
std::tuple<aedis::ignore, std::map<std::string, std::string>, aedis::ignore> resp;
// Executes the request and reads the response.
co_await (conn->async_run() || conn->async_exec(req, adapt(resp)));
// Use the map from std::get<1>(resp) ...
}
The execution of connection::async_exec above is composed with
connection::async_run with the aid of the Asio awaitable operator ||,
which ensures that one operation is cancelled as soon as the other
completes. These functions play the following roles:
- connection::async_exec: Executes commands (i.e. writes the request and reads the response).
- connection::async_run: Starts read and write operations and remains suspended until the connection is lost.
Let us dig in.
Connection
In general we will want to reuse the same connection for multiple
requests. We can do this with the example above by decoupling the
HELLO command and the call to async_run into a separate coroutine
auto run(std::shared_ptr<connection> conn) -> net::awaitable<void>
{
co_await connect(conn, "127.0.0.1", "6379");
resp3::request req;
req.push("HELLO", 3); // Upgrade to RESP3
// Notice we use && instead of || so async_run is not cancelled
// when the response to HELLO comes.
co_await (conn->async_run() && conn->async_exec(req));
}
We can now let run execute detached in the background while other
coroutines perform requests on the connection
auto async_main() -> net::awaitable<void>
{
auto ex = co_await net::this_coro::executor;
auto conn = std::make_shared<connection>(ex);
// Calls async_run detached.
net::co_spawn(ex, run(conn), net::detached);
// Here we can pass conn around to other coroutines so they can make requests.
...
}
With this separation, it is now easy to incorporate other operations
into our application. For example, to cancel the connection on SIGINT
and SIGTERM we can extend run as follows
auto run(std::shared_ptr<connection> conn) -> net::awaitable<void>
{
co_await connect(conn, "127.0.0.1", "6379");
auto ex = co_await net::this_coro::executor;
signal_set sig{ex, SIGINT, SIGTERM};
resp3::request req;
req.push("HELLO", 3);
co_await ((conn->async_run() || sig.async_wait()) && conn->async_exec(req));
}
Likewise, we can incorporate support for server pushes, health checks and pubsub
auto run(std::shared_ptr<connection> conn) -> net::awaitable<void>
{
co_await connect(conn, "127.0.0.1", "6379");
auto ex = co_await net::this_coro::executor;
signal_set sig{ex, SIGINT, SIGTERM};
resp3::request req;
req.push("HELLO", 3);
req.push("SUBSCRIBE", "channel1", "channel2");
co_await ((conn->async_run() || sig.async_wait() || receiver(conn) || healthy_checker(conn))
&& conn->async_exec(req));
}
The definitions of receiver and healthy_checker above can be found
in subscriber.cpp. Adding a loop around async_run provides a simple
way to support reconnection while there are pending operations on the connection.
For example, to reconnect to the same address
auto run(std::shared_ptr<connection> conn) -> net::awaitable<void>
{
auto ex = co_await net::this_coro::executor;
steady_timer timer{ex};
resp3::request req;
req.push("HELLO", 3);
req.push("SUBSCRIBE", "channel1", "channel2");
for (;;) {
co_await connect(conn, "127.0.0.1", "6379");
co_await ((conn->async_run() || healthy_checker(conn) || receiver(conn))
&& conn->async_exec(req));
conn->reset_stream();
timer.expires_after(std::chrono::seconds{1});
co_await timer.async_wait();
}
}
For failover with sentinels see resolve_with_sentinel.cpp. At
this point the reasons why async_run was introduced in Aedis
may have become apparent to the reader
- Provide a quick reaction to disconnections and hence faster failovers.
- Support server pushes and requests in the same connection object, concurrently.
- Separate requests, handling of server pushes and reconnection operations.
Cancellation
Aedis supports both implicit and explicit cancellation of connection
operations. Explicit cancellation is supported by means of the
aedis::connection::cancel member function. Implicit cancellation,
such as that performed by the Asio awaitable operators && and
||, is discussed in more detail below.
co_await (conn.async_run(...) && conn.async_exec(...))
- Useful when implementing reconnection.
- async_exec is responsible for sending the HELLO command and optionally for subscribing to channels.
co_await (conn.async_run(...) || conn.async_exec(...))
- Useful for short-lived connections that are meant to be closed after a command has been executed.
co_await (conn.async_exec(...) || timer.async_wait(...))
- Provides a way to limit how long the execution of a single request should last.
- The cancellation will be ignored if the request has already been written to the socket.
- It is usually a better idea to have a health checker than per-request timeouts, see subscriber.cpp for an example.
co_await (conn.async_run(...) || timer.async_wait(...))
- Sets a limit on how long the connection should live.
co_await (conn.async_exec(...) || conn.async_exec(...) || ... || conn.async_exec(...))
- This works but is considered an anti-pattern. Unless the user has set aedis::resp3::request::config::coalesce to false, which is not recommended, the connection will automatically merge the individual requests into a single payload anyway.
Requests
Redis requests are composed of one or more commands (in the Redis documentation they are called pipelines). For example
// Some example containers.
std::list<std::string> list {...};
std::map<std::string, mystruct> map { ...};
request req;
// Command with variable length of arguments.
req.push("SET", "key", "some value", "EX", "2");
// Pushes a list.
req.push_range("SUBSCRIBE", list);
// Same as above but as an iterator range.
req.push_range("SUBSCRIBE", std::cbegin(list), std::cend(list));
// Pushes a map.
req.push_range("HSET", "key", map);
Sending a request to Redis is performed with aedis::connection::async_exec, as already stated.
Serialization
The resp3::request::push and resp3::request::push_range member functions work
with integer data types (e.g. int) and std::string out of the box.
To send your own data type, define a to_bulk function like this
// Example struct.
struct mystruct {...};
// Serialize your data structure here.
void to_bulk(std::pmr::string& to, mystruct const& obj)
{
std::string dummy = "Dummy serialization string.";
aedis::resp3::to_bulk(to, dummy);
}
Once to_bulk is defined and visible via ADL, mystruct can
be passed to the request
request req;
std::map<std::string, mystruct> map {...};
req.push_range("HSET", "key", map);
The serialization.cpp example shows how to store JSON strings in Redis.
Config flags
The aedis::resp3::request::config object inside the request dictates how the
aedis::connection should handle the request in some important situations. The
reader is advised to read it carefully.
Responses
Aedis uses the following strategy to support Redis responses
- Static: For an aedis::resp3::request whose size is known at compile time, std::tuple is supported.
- Dynamic: Otherwise, use std::vector<aedis::resp3::node<std::string>>.
For example, below is a request with a compile time size
request req;
req.push("PING");
req.push("INCR", "key");
req.push("QUIT");
To read the response to this request users can use the following tuple
std::tuple<std::string, int, std::string>
The pattern may have become apparent to the reader: the tuple must
have as many elements as the request has commands (see the exceptions below).
It is also necessary that each tuple element is capable of storing the
response to the command it refers to, otherwise an error will occur.
To ignore responses to individual commands in the request use the tag
aedis::ignore
// Ignore the second and last responses.
std::tuple<std::string, aedis::ignore, std::string, aedis::ignore>
The following table provides the RESP3 types returned by some Redis commands
| Command | RESP3 type | Documentation |
|---|---|---|
| lpush | Number | https://redis.io/commands/lpush |
| lrange | Array | https://redis.io/commands/lrange |
| set | Simple-string, null or blob-string | https://redis.io/commands/set |
| get | Blob-string | https://redis.io/commands/get |
| smembers | Set | https://redis.io/commands/smembers |
| hgetall | Map | https://redis.io/commands/hgetall |
To map these RESP3 types into a C++ data structure use the table below
| RESP3 type | Possible C++ type | Type |
|---|---|---|
| Simple-string | std::string | Simple |
| Simple-error | std::string | Simple |
| Blob-string | std::string, std::vector | Simple |
| Blob-error | std::string, std::vector | Simple |
| Number | long long, int, std::size_t, std::string | Simple |
| Double | double, std::string | Simple |
| Null | std::optional<T> | Simple |
| Array | std::vector, std::list, std::array, std::deque | Aggregate |
| Map | std::vector, std::map, std::unordered_map | Aggregate |
| Set | std::vector, std::set, std::unordered_set | Aggregate |
| Push | std::vector, std::map, std::unordered_map | Aggregate |
For example, the response to the request
request req;
req.push("HELLO", 3);
req.push_range("RPUSH", "key1", vec);
req.push_range("HSET", "key2", map);
req.push("LRANGE", "key3", 0, -1);
req.push("HGETALL", "key4");
req.push("QUIT");
can be read in the tuple below
std::tuple<
aedis::ignore, // hello
int, // rpush
int, // hset
std::vector<T>, // lrange
std::map<U, V>, // hgetall
std::string // quit
> resp;
Both are passed to async_exec as shown elsewhere
co_await conn->async_exec(req, adapt(resp));
If the intention is to ignore the response to all commands altogether
use adapt() without arguments instead
// Uses the ignore adapter explicitly.
co_await conn->async_exec(req, adapt());
// Ignore adapter is also the default argument.
co_await conn->async_exec(req);
Responses that contain nested aggregates or heterogeneous data types will be given special treatment later in The general case. As of this writing, not all RESP3 types are used by the Redis server, which means in practice users will be concerned with a reduced subset of the RESP3 specification.
Pushes
Commands that have a push response, like SUBSCRIBE, PSUBSCRIBE and UNSUBSCRIBE,
must NOT be included in the tuple. For example, the request below
request req;
req.push("PING");
req.push("SUBSCRIBE", "channel");
req.push("QUIT");
must be read in this tuple: std::tuple<std::string, std::string>,
which has size two.
Null
It is not uncommon for apps to access keys that do not exist or
that have already expired in the Redis server. To deal with these
cases Aedis provides support for std::optional. To use it,
wrap your type in std::optional like this
std::tuple<
std::optional<A>,
std::optional<B>,
...
> resp;
co_await conn->async_exec(req, adapt(resp));
Everything else stays pretty much the same.
Transactions
To read responses to transactions we must first observe that Redis will
queue the transaction commands and send their individual responses as elements
of an array; the array itself is the response to the EXEC command.
For example, to read the response to this request
req.push("MULTI");
req.push("GET", "key1");
req.push("LRANGE", "key2", 0, -1);
req.push("HGETALL", "key3");
req.push("EXEC");
use the following response type
using aedis::ignore;
using exec_resp_type =
std::tuple<
std::optional<std::string>, // get
std::optional<std::vector<std::string>>, // lrange
std::optional<std::map<std::string, std::string>> // hgetall
>;
std::tuple<
aedis::ignore, // multi
aedis::ignore, // get
aedis::ignore, // lrange
aedis::ignore, // hgetall
exec_resp_type // exec
> resp;
co_await conn->async_exec(req, adapt(resp));
For a complete example see containers.cpp.
Deserialization
As mentioned in the serialization section, it is common practice to
serialize data before sending it to Redis e.g. as json strings. For
performance and convenience reasons, we may also want to deserialize
responses directly in their final data structure. Aedis supports this
use case by calling a user provided from_bulk function while parsing
the response. For example
void from_bulk(mystruct& obj, char const* p, std::size_t size, boost::system::error_code& ec)
{
// Deserializes p into obj.
}
After that, you can start receiving data efficiently in the desired
types e.g. mystruct, std::map<std::string, mystruct> etc.
The general case
There are cases where responses to Redis commands won't fit in the model presented above. Some examples are
- Commands (like set) whose responses don't have a fixed RESP3 type. Expecting an int and receiving a blob-string will result in an error.
- RESP3 aggregates that contain nested aggregates can't be read in STL containers.
- Transactions with a dynamic number of commands can't be read in a std::tuple.
To deal with these cases Aedis provides the aedis::resp3::node type,
which is the most general form of an element in a
response, be it a simple RESP3 type or the element of an aggregate. It
is defined like this
template <class String>
struct node {
// The RESP3 type of the data in this node.
type data_type;
// The number of elements of an aggregate (or 1 for simple data).
std::size_t aggregate_size;
// The depth of this node in the response tree.
std::size_t depth;
// The actual data. For aggregate types this is always empty.
String value;
};
Any response to a Redis command can be received in a
std::vector<node<std::string>>. The vector can be seen as a
pre-order view of the response tree. Using it is no different
from using other types
// Receives any RESP3 simple or aggregate data type.
std::vector<node<std::string>> resp;
co_await conn->async_exec(req, adapt(resp));
For example, suppose we want to retrieve a hash data structure
from Redis with HGETALL. Some of the options are
- std::vector<node<std::string>>: Always works.
- std::vector<std::string>: Efficient and flat, all elements as strings.
- std::map<std::string, std::string>: Efficient if you need the data as a std::map.
- std::map<U, V>: Efficient if you are storing serialized data. Avoids temporaries and requires from_bulk for U and V.
In addition to the above, users can also use the unordered versions of these
containers. The same reasoning applies to sets (e.g. SMEMBERS)
and other data structures in general.
Examples
These examples demonstrate what has been discussed so far.
- intro.cpp: The Aedis hello-world program. Sends one command and quits the connection.
- intro_tls.cpp: Same as intro.cpp but over TLS.
- intro_sync.cpp: Shows how to use the connection class synchronously.
- containers.cpp: Shows how to send and receive STL containers and how to use transactions.
- serialization.cpp: Shows how to serialize types using Boost.Json.
- resolve_with_sentinel.cpp: Shows how to resolve a master address using sentinels.
- subscriber.cpp: Shows how to implement pubsub with reconnection and re-subscription.
- echo_server.cpp: A simple TCP echo server.
- chat_room.cpp: A command line chat built on Redis pubsub.
- low_level_sync.cpp: Sends a ping synchronously using the low-level API.
- low_level_async.cpp: Sends a ping asynchronously using the low-level API.
To avoid repetition, code that is common to all examples has been grouped in common.hpp. The main function used in some async examples has been factored out into main.cpp.
Echo server benchmark
This document benchmarks the performance of TCP echo servers I implemented in different languages using different Redis clients. The main motivations for choosing an echo server are
- Simple to implement and does not require expert-level knowledge of most languages.
- I/O bound: Echo servers have very low CPU consumption in general and are therefore excellent for measuring how a program handles concurrent requests.
- It simulates very well a typical backend in regard to concurrency.
I also imposed some constraints on the implementations
- It should be simple enough and not require writing too much code.
- Favor the use of standard idioms and avoid optimizations that require expert-level knowledge.
- Avoid the use of complex features like connection and thread pools.
To reproduce these results run one of the echo-server programs in one terminal and the echo-server-client in another.
Without Redis
First I tested a pure TCP echo server, i.e. one that sends the messages directly to the client without interacting with Redis. The result can be seen below
The tests were performed with 1000 concurrent TCP connections on localhost, where latency is 0.07ms on average on my machine. On higher-latency networks the difference among libraries is expected to decrease.
- I expected Libuv to have similar performance to Asio and Tokio.
- I did expect Node.js to come in a little behind, given that it is JavaScript code. Otherwise I expected it to perform similarly to libuv, since libuv is the framework behind it.
- Go did surprise me: faster than Node.js and libuv!
The code used in the benchmarks can be found at
- Asio: A variation of this Asio example.
- Libuv: Taken from the Libuv example.
- Tokio: Taken from here.
- Nodejs
- Go
With Redis
This is similar to the echo server described above but messages are echoed by Redis and not by the echo-server itself, which acts as a proxy between the client and the Redis server. The results can be seen below
The tests were performed on a network where latency is 35ms on average, otherwise it uses the same number of TCP connections as the previous example.
As the reader can see, the Libuv and the Rust tests are not depicted in the graph. The reasons are
- redis-rs: This client comes so far behind that it can't even be represented together with the other benchmarks without making them look insignificant. I don't know for sure why it is so slow; I suppose it has something to do with its lack of automatic pipelining support. In fact, the more TCP connections I launch, the worse its performance gets.
- Libuv: I left it out because it would require writing too much C code. More specifically, I would have to use hiredis and implement support for pipelines manually.
The code used in the benchmarks can be found at
Conclusion
Redis clients have to support automatic pipelining to have competitive performance. For updates to this document follow https://github.com/mzimbres/aedis.
Comparison
The main reason why I started writing Aedis was to have a client compatible with the Asio asynchronous model. As I made progress I could also address what I considered weaknesses in other libraries. Due to time constraints I won't be able to give a detailed comparison with each client listed in the official list; instead I will focus on the most popular C++ client on GitHub by number of stars, namely
Aedis vs Redis-plus-plus
Before we start it is important to mention some of the things redis-plus-plus does not support
- The latest version of the communication protocol, RESP3. Without it, it is impossible to support some important Redis features like client-side caching, among other things.
- Coroutines.
- Reading responses directly in user data structures to avoid creating temporaries.
- Error handling with support for error-code.
- Cancellation.
The remaining points will be addressed individually. Let us first have a look at what sending a command, a pipeline and a transaction looks like
auto redis = Redis("tcp://127.0.0.1:6379");
// Send commands
redis.set("key", "val");
auto val = redis.get("key"); // val is of type OptionalString.
if (val)
std::cout << *val << std::endl;
// Sending pipelines
auto pipe = redis.pipeline();
auto pipe_replies = pipe.set("key", "value")
.get("key")
.rename("key", "new-key")
.rpush("list", {"a", "b", "c"})
.lrange("list", 0, -1)
.exec();
// Parse reply with reply type and index.
auto set_cmd_result = pipe_replies.get<bool>(0);
// ...
// Sending a transaction
auto tx = redis.transaction();
auto tx_replies = tx.incr("num0")
.incr("num1")
.mget({"num0", "num1"})
.exec();
auto incr_result0 = tx_replies.get<long long>(0);
// ...
Some of the problems with this API are
- Heterogeneous treatment of commands, pipelines and transaction. This makes auto-pipelining impossible.
- Any API that sends individual commands has a very restricted scope of usability and should be avoided for performance reasons.
- The API imposes exceptions on users, no error-code overload is provided.
- No way to reuse the buffer for new calls to e.g. redis.get in order to avoid further dynamic memory allocations.
- Error handling of resolve and connect operations is not clear.
According to the documentation, pipelines in redis-plus-plus have the following characteristics
NOTE: By default, creating a Pipeline object is NOT cheap, since it creates a new connection.
This is clearly a downside of the API, as pipelines should be the default way of communicating and not an exception; paying such a high price for each pipeline imposes a severe cost in performance. Transactions suffer from the very same problem.
NOTE: Creating a Transaction object is NOT cheap, since it creates a new connection.
In Aedis there is no difference between sending one command, a pipeline or a transaction because requests are decoupled from the IO objects.
redis-plus-plus also supports an async interface; however, async support for Transaction and Subscriber is still on the way.
The async interface depends on third-party event library, and so far, only libuv is supported.
Async code in redis-plus-plus looks like the following
auto async_redis = AsyncRedis(opts, pool_opts);
Future<string> ping_res = async_redis.ping();
cout << ping_res.get() << endl;
As the reader can see, the async interface is based on futures, which are known to perform poorly. The biggest problem with this async design, however, is that it makes it impossible to write asynchronous programs correctly, since it starts an async operation on every command sent instead of enqueueing a message and triggering a write when it can be sent. It is also not clear how pipelines are realised with this design (if at all).
Reference
- High-Level: Covers the topics discussed in this document.
- Low-Level: Covers low-level building blocks. Provided mostly for developers, most users won't need any information provided here.
Installation
Download the latest release on
https://github.com/mzimbres/aedis/releases. Aedis is a header only
library, so you can start using it right away by adding the
include subdirectory to your project and including
#include <aedis/src.hpp>
in no more than one source file in your application. To build the examples and tests, CMake is supported, for example
BOOST_ROOT=/opt/boost_1_80_0 cmake --preset dev
The requirements for using Aedis are
- Boost 1.80 or greater.
- C++17 minimum.
- Redis 6 or higher (must support RESP3).
- Optionally also redis-cli and Redis Sentinel.
The following compilers are supported
- GCC: 10, 11, 12.
- Clang: 11, 13, 14.
- Visual Studio 17 2022, Visual Studio 16 2019.
Acknowledgement
Acknowledgements to the people who helped shape Aedis
- Richard Hodges (madmongo1): For very helpful support with Asio, the design of asynchronous programs, etc.
- Vinícius dos Santos Oliveira (vinipsmaker): For useful discussion about how Aedis consumes buffers in the read operation.
- Petr Dannhofer (Eddie-cz): For helping me understand how the AUTH and HELLO commands can influence each other.
- Mohammad Nejati (ashtum): For pointing out scenarios where calls to async_exec should fail when the connection is lost.
- Klemens Morgenstern (klemens-morgenstern): For useful discussion about timeouts, cancellation, synchronous interfaces and general help with Asio.
Changelog
v1.4.0
- Removes dependency on Boost.Hana, boost::string_view, Boost.Variant2 and Boost.Spirit.
- Fixes build and setup CI on windows.
v1.3.0-1
- Upgrades to Boost 1.80.0.
- Removes automatic sending of the HELLO command. This can't be implemented properly without bloating the connection class. It is now a user responsibility to send HELLO. Requests that contain it have priority over other requests and will be moved to the front of the queue, see aedis::resp3::request::config.
- Automatic name resolving and connecting have been removed from aedis::connection::async_run. Users have to do this step manually now. The reason for this change is that having them built in doesn't offer the flexibility needed by users.
- Removes health checks and idle timeout. This functionality must now be implemented by users, see the examples. This is part of making Aedis useful to a larger audience and suitable for the Boost review process.
- The aedis::connection is now a typedef to a net::ip::tcp::socket and aedis::ssl::connection to net::ssl::stream<net::ip::tcp::socket>. Users that need another stream type must now specialize aedis::basic_connection.
- Adds a low-level example of async code.
v1.2.0
- aedis::adapt now supports tuples created with std::tie.
- aedis::ignore is now an alias to the type of std::ignore.
- Provides allocator support for the internal queue used in the aedis::connection class.
- Changes the behaviour of async_run to complete with success if asio::error::eof is received. This makes it easier to write composed operations with awaitable operators.
- Adds allocator support in the aedis::resp3::request (a contribution from Klemens Morgenstern).
- Renames aedis::resp3::request::push_range2 to push_range. The suffix 2 was used for disambiguation; Klemens fixed it with SFINAE.
- Renames fail_on_connection_lost to aedis::resp3::request::config::cancel_on_connection_lost. Now it will only cause connections to be canceled when async_run completes.
- Introduces aedis::resp3::request::config::cancel_if_not_connected, which will cause a request to be canceled if async_exec is called before a connection has been established.
- Introduces the new request flag aedis::resp3::request::config::retry that, if set to true, will cause the request not to be canceled when it was sent to Redis but remained unresponded after async_run completed. It provides a way to avoid executing commands twice.
- Removes the aedis::connection::async_run overload that takes request and adapter as parameters.
- Changes the way aedis::adapt() behaves with std::vector<aedis::resp3::node<T>>. Receiving RESP3 simple errors, blob errors or null won't cause an error but will be treated as a normal response. It is the user's responsibility to check the content of the vector.
- Fixes a bug in connection::cancel(operation::exec). Now this call will only cancel non-written requests.
- Implements per-operation implicit cancellation support for aedis::connection::async_exec. The call co_await (conn.async_exec(...) || timer.async_wait(...)) will cancel the request as long as it has not been written.
- Changes the aedis::connection::async_run completion signature to f(error_code). This is how it was in the past; the second parameter was not helpful.
- Renames operation::receive_push to aedis::operation::receive.
v1.1.0-1
- Removes coalesce_requests from the aedis::connection::config; it became a request property, see aedis::resp3::request::config::coalesce.
- Removes max_read_size from the aedis::connection::config. The maximum read size can now be specified as a parameter of the aedis::adapt() function.
- Removes the aedis::sync class, see intro_sync.cpp for how to perform synchronous and thread-safe calls. This is possible in Boost 1.80 only as it requires boost::asio::deferred.
- Moves from boost::optional to std::optional. This is part of moving to C++17.
- Changes the behaviour of the second aedis::connection::async_run overload so that it always returns an error when the connection is lost.
- Adds TLS support, see intro_tls.cpp.
- Adds an example that shows how to resolve addresses over sentinels, see subscriber_sentinel.cpp.
- Adds aedis::connection::timeouts::resp3_handshake_timeout. This is the timeout used to send the HELLO command.
- Adds aedis::endpoint where, in addition to host and port, users can optionally provide username, password and the expected server role (see aedis::error::unexpected_server_role).
- aedis::connection::async_run checks whether the server role received in the hello command is equal to the expected server role specified in aedis::endpoint. To skip this check leave the role variable empty.
- Removes reconnect functionality from aedis::connection. It is possible in simple reconnection strategies but bloats the class in more complex scenarios, for example with sentinel, authentication and TLS. This is trivial to implement in a separate coroutine. As a result the enum event and async_receive_event have been removed from the class too.
- Fixes a bug in connection::async_receive_push that prevented passing any response adapter other than adapt(std::vector<node>).
- Changes the behaviour of aedis::adapt() that caused RESP3 errors to be ignored. One consequence of this is that connection::async_run would not exit with failure on servers that required authentication.
- Changes the behaviour of connection::async_run that would cause it to complete with success when an error in connection::async_exec occurred.
- Ports the build system from autotools to CMake.
v1.0.0
- Adds experimental cmake support for windows users.
- Adds the new class aedis::sync that wraps an aedis::connection in a thread-safe and synchronous API. All free functions from sync.hpp are now member functions of aedis::sync.
- Splits aedis::connection::async_receive_event in two functions, one to receive events and another for server-side pushes, see aedis::connection::async_receive_push.
- Removes the collision between aedis::adapter::adapt and aedis::adapt.
- Adds the connection::operation enum to replace the cancel_* member functions with a single cancel function that takes the operations that should be cancelled as argument.
- Bugfix: fixes a bug on reconnect from a state where the connection object had unsent commands. It could cause async_exec to never complete under certain conditions.
- Bugfix: documentation of the adapt() functions was missing from Doxygen.
v0.3.0
- Adds experimental::exec and receive_event functions to offer a thread-safe and synchronous way of executing requests across threads. See intro_sync.cpp and subscriber_sync.cpp for examples.
- connection::async_read_push was renamed to async_receive_event.
- connection::async_receive_event is now used to communicate internal events to the user, such as resolve, connect, push etc. For examples see subscriber.cpp and connection::event.
- The aedis directory has been moved to include to look more similar to Boost libraries. Users should now replace -I/aedis-path with -I/aedis-path/include in the compiler flags.
- The AUTH and HELLO commands are now sent automatically. This change was necessary to implement reconnection. The username and password used in AUTH should be provided by the user on connection::config.
- Adds support for reconnection, see connection::enable_reconnect.
- Fixes a bug in the connection::async_run(host, port) overload that was causing crashes on reconnection.
- Fixes the executor usage in the connection class. Before these changes it was imposing any_io_executor on users.
- connection::async_receive_event is no longer cancelled when connection::async_run exits. This change makes user code simpler.
- The connection::async_exec overload with host and port has been removed. Use the other connection::async_run overload.
- The host and port parameters from connection::async_run have been moved to connection::config to better support authentication and failover.
- Many simplifications in the chat_room example.
- Fixes the build with clang compilers and makes some improvements in the documentation.
v0.2.0-1
- Fixes a bug that happens on very high load. (v0.2.1)
- Major rewrite of the high-level API. There is no need to use the low-level API anymore.
- No more callbacks: Sending requests follows the ASIO asynchronous model.
- Support for reconnection: Pending requests are not canceled when a connection is lost and are re-sent when a new one is established.
- The library no longer sends HELLO-3 on the user's behalf. This is important to support AUTH properly.
v0.1.0-2
- Adds a reconnect coroutine in the echo_server example. (v0.1.2)
- Corrects client::async_wait_for_data with make_parallel_group to launch the operation. (v0.1.2)
- Improvements in the documentation. (v0.1.2)
- Avoids dynamic memory allocation in the client class after reconnection. (v0.1.2)
- Improves the documentation and adds some features to the high-level client. (v0.1.1)
- Improvements in the design and documentation.
v0.0.1
- First release to collect design feedback.

