mirror of https://github.com/boostorg/redis.git synced 2026-01-19 04:42:09 +00:00

Small corrections.

This commit is contained in:
Marcelo Zimbres
2022-07-16 12:33:02 +02:00
parent 835a1decf4
commit 59b5d35672
3 changed files with 65 additions and 92 deletions


@@ -1,60 +1,60 @@
# TCP echo server performance.
This document describes how I benchmarked a TCP echo server
implemented in different languages and frameworks.

## Motivation

The main motivations for choosing a TCP echo server as a benchmark
program are
* Simple to implement and does not require expert-level knowledge in
  most languages.
* I/O bound: Echo servers have very low CPU consumption in general
  and are therefore an excellent measure of a program's ability to
  serve concurrent requests.
* It simulates a typical backend very well in regard to concurrency.
I also imposed some constraints on the implementations

* It should not require me to write too much code.
* Favor the use of standard idioms and avoid optimizations that
  require expert-level knowledge.
* Make no use of complex machinery like connection and thread pools.
## Without Redis
First I tested a pure TCP echo server, i.e. one that sends the
messages directly back to the client without interacting with Redis.
The result can be seen below
![](https://mzimbres.github.io/aedis/tcp-echo-direct.png)
The tests were performed with 1000 TCP connections on the localhost,
where latency is 0.07ms on average. On higher-latency networks the
difference among libraries is expected to decrease.
### Remarks

* I was not expecting Asio to perform so much better than Tokio and
  libuv.
* I did expect nodejs to come a little behind given that it is
  javascript code. Otherwise I expected it to have performance
  similar to libuv's, since that is the framework behind it.
* The Go performance was no surprise: decent and not that far behind
  nodejs.
The code used in the benchmarks can be found at

* [Asio](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/cpp/asio/echo_server_direct.cpp): A variation of [this](https://github.com/chriskohlhoff/asio/blob/4915cfd8a1653c157a1480162ae5601318553eb8/asio/src/examples/cpp20/coroutines/echo_server.cpp) Asio example.
* [Libuv](https://github.com/mzimbres/aedis/tree/835a1decf477b09317f391eddd0727213cdbe12b/benchmarks/c/libuv): A variation of [this](https://github.com/libuv/libuv/blob/06948c6ee502862524f233af4e2c3e4ca876f5f6/docs/code/tcp-echo-server/main.c) Libuv example.
* [Tokio](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/rust/echo_server_direct): Taken from [here](https://docs.rs/tokio/latest/tokio/).
* [Nodejs](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/nodejs/echo_server_direct).
* [Go](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/go/echo_server_direct.go).
## With Redis
This is similar to the echo server described above, but the message
is echoed by Redis. The echo server works as a proxy between the
client and the Redis server. The result can be seen below
![](https://mzimbres.github.io/aedis/tcp-echo-over-redis.png)
The tests were also performed with 1000 TCP connections, on a network
where latency is 35ms on average.
### Remarks
As the reader can see, the Libuv and Rust tests are not depicted
@@ -62,23 +62,25 @@ above, reasons are
* [redis-rs](https://github.com/redis-rs/redis-rs): This client
  comes so far behind that it can't even be represented together
  with the other benchmarks without making them insignificant. I
  don't know for sure why it is so slow; I suppose, however, that it
  has something to do with its lack of proper
  [pipelining](https://redis.io/docs/manual/pipelining/) support.
  In fact, the more TCP connections I launch, the worse its
  performance gets.
* Libuv: I left it out because it would require too much work to
  make it perform well. More specifically, I would have to use
  hiredis and implement support for pipelining manually.
The code used in the benchmarks can be found at

* [Aedis](https://github.com/mzimbres/aedis): [Code](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/examples/echo_server.cpp)
* [node-redis](https://github.com/redis/node-redis): [Code](https://github.com/mzimbres/aedis/tree/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/nodejs/echo_server_over_redis)
* [go-redis](https://github.com/go-redis/redis): [Code](https://github.com/mzimbres/aedis/blob/3fb018ccc6138d310ac8b73540391cdd8f2fdad6/benchmarks/go/echo_server_over_redis.go)
## Conclusion

The main conclusion is that pipelining is fundamental, much more so
than the raw performance of the language itself.

## Contributing

If you spot any performance improvement in any of the examples or
would like to include other clients, please open a PR and I will
gladly merge it.


@@ -1,37 +1,3 @@
% $ time ./echo_server_client 1000 5000
% ======================================================
% C++: Asio
%
% Seq single: 2m3.934s
% Over Redis: 5m2.083s
% Direct: 31.427s
%
% ======================================================
% C++: libuv
%
% real 0m43.568s
%
% ======================================================
% Rust: Tokio
%
% Seq: 186.000s = 1.3 * Async.
% Direct: 42.214s
% Over Redis: 142.775s
%
% ======================================================
% Go
%
% Direct: 78.238s
% Over Redis: 54.238s
%
% ======================================================
% Nodejs
%
% Seq: 360.000s = 2.6 * Async.
% Direct: 72.973s
% Over Redis: 141.881s
\documentclass{article}
\usepackage{pgfplots}
\pgfrealjobname{echo}
@@ -41,7 +7,7 @@
\beginpgfgraphicnamed{echo-f0}
% time ./echo_server_client 1000 5000
\begin{tikzpicture}[scale=1.0]
\begin{axis}[
y dir=reverse,
%xbar stacked,
@@ -49,7 +15,7 @@
%hide x axis,
bar shift=0pt,
width=15cm, height=6cm, enlarge y limits=0.5,
title={TCP Echo Server Performance},
xlabel={Seconds},
symbolic y coords={Asio,Tokio,Libuv,Nodejs,Go},
ytick=data,
@@ -72,7 +38,7 @@
%debian2[0]$ time ./echo_server_client 1000 1000
%Go (1): 1.000s
%C++ (1): 0.07s
\begin{tikzpicture}[scale=1.0]
\begin{axis}[
y dir=reverse,
%xbar stacked,
@@ -80,21 +46,22 @@
%hide x axis,
bar shift=0pt,
width=12cm, height=6cm, enlarge y limits=0.5,
title={TCP Echo Server Performance (over Redis)},
xlabel={Seconds},
symbolic y coords={Aedis,Rust-rs,Libuv,Node-redis,Go-redis},
ytick=data,
%bar width=1cm,
nodes near coords,
nodes near coords align={horizontal},
]
\addplot coordinates {
(12.6,Aedis)
(28.8,Node-redis)
(352.4,Go-redis)
};
%\addplot coordinates {
% (30.0,Asio)
% (90.6,Rust-rs)
% (0.0,Libuv)
% (68.9,Nodejs)
% (0.0,Go)


@@ -1,8 +1,12 @@
/* Copyright (c) 2018-2022 Marcelo Zimbres Silva (mzimbres@gmail.com)
*
* Distributed under the Boost Software License, Version 1.0. (See
* accompanying file LICENSE.txt)
*/
#include <boost/asio/co_spawn.hpp>
#include <boost/asio/detached.hpp>