# eRPC

Efficient RPCs for datacenter networks

File list:
.clang-format (61, 2023-08-26)
.ycm_extra_conf.py (3276, 2023-08-26)
CMakeLists.txt (18231, 2023-08-26)
Doxyfile (147, 2023-08-26)
DoxygenLayout.xml (2105, 2023-08-26)
LICENSE (605, 2023-08-26)
NOTES.md (8195, 2023-08-26)
apps (0, 2023-08-26)
apps\apps_common.h (7730, 2023-08-26)
apps\congestion (0, 2023-08-26)
apps\congestion\config (319, 2023-08-26)
apps\congestion\congestion.cc (2794, 2023-08-26)
apps\congestion\congestion.h (3156, 2023-08-26)
apps\congestion\incast_impl.h (7293, 2023-08-26)
apps\congestion\regular_impl.h (5908, 2023-08-26)
apps\large_rpc_tput (0, 2023-08-26)
apps\large_rpc_tput\config (238, 2023-08-26)
apps\large_rpc_tput\large_rpc_tput.cc (10648, 2023-08-26)
apps\large_rpc_tput\large_rpc_tput.h (3489, 2023-08-26)
apps\large_rpc_tput\profile_incast.h (1315, 2023-08-26)
apps\large_rpc_tput\profile_victim.h (1702, 2023-08-26)
apps\latency (0, 2023-08-26)
apps\latency\config (110, 2023-08-26)
apps\latency\latency.cc (8481, 2023-08-26)
apps\log (0, 2023-08-26)
apps\log\config (234, 2023-08-26)
apps\log\log.cc (9197, 2023-08-26)
apps\log\log.h (4704, 2023-08-26)
apps\masstree_analytics (0, 2023-08-26)
apps\masstree_analytics\config (250, 2023-08-26)
apps\masstree_analytics\masstree_analytics.cc (16103, 2023-08-26)
apps\masstree_analytics\masstree_analytics.h (4953, 2023-08-26)
apps\masstree_analytics\mt_index_api.cc (197, 2023-08-26)
apps\masstree_analytics\mt_index_api.h (3298, 2023-08-26)
apps\mica_test (0, 2023-08-26)
apps\mica_test\mica_test.cc (4890, 2023-08-26)
... ...

eRPC is a fast and general-purpose RPC library for datacenter networks. Our NSDI 2019 [paper](https://www.usenix.org/system/files/nsdi19-kalia.pdf) describes the system in detail. [Documentation](https://erpc.io/docs/index.html) is available online.

Some highlights:

* Multiple supported networks: Ethernet, InfiniBand, and RoCE
* Low latency: 2.3 microseconds round-trip RPC latency with UDP over Ethernet
* Performance for small 32-byte RPCs: ~10M RPCs/sec with one CPU core, 60--80M RPCs/sec with one NIC
* Bandwidth for large RPCs: 75 Gbps on one connection (one CPU core at server and client) for 8 MB RPCs
* Scalability: 20000 RPC sessions per server
* End-to-end congestion control that tolerates 100-way incasts
* Nested RPCs, and long-running background RPCs
* A port of [Raft](https://github.com/willemt/raft) as an example. Our 3-way replication latency is 5.3 microseconds with traditional UDP over Ethernet.

## System requirements

* NICs: Fast (10 GbE+) NICs are needed for good performance. eRPC works best with Mellanox Ethernet and InfiniBand NICs. Any DPDK-capable NICs also work well.
* System configuration:
  * At least 1024 huge pages on every NUMA node, and unlimited SHM limits
  * On a machine with `n` eRPC processes, eRPC uses kernel UDP ports `{31850, ..., 31850 + n - 1}`. These ports should be open on the management network. See `scripts/firewalld/erpc_firewall.sh` for systems running `firewalld`.

## eRPC quickstart

* Build and run the test suite: `cmake . -DPERF=OFF -DTRANSPORT=dpdk; make -j; sudo ctest`.
  * `-DPERF=OFF` enables debugging, which greatly reduces performance. Set `-DPERF=ON` for good performance.
  * Here, `dpdk` should be replaced with `infiniband` for InfiniBand NICs.
  * If DPDK is chosen, a machine with two ports is needed to run the unit tests. Run `scripts/run-tests-dpdk.sh` instead of `ctest`.
* Run the `hello_world` application:
  * `cd hello_world`
  * Edit the server and client hostnames in `common.h`
  * Based on the transport that eRPC was compiled for, compile `hello_world` using `make dpdk` or `make infiniband`.
  * Run `./server` at the server, and `./client` at the client
* Generate the documentation: `doxygen`
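To give a feel for the API before diving into `hello_world`, the server side boils down to registering a request handler on an `erpc::Nexus`, creating an `erpc::Rpc` endpoint, and running the event loop. The following is a minimal sketch of that pattern, not a copy of `hello_world/server.cc`: the constants (`kServerHostname`, `kUDPPort`, `kReqType`, `kMsgSize`) are hypothetical stand-ins for values that `common.h` would define, the API follows the online documentation, and exact field names (e.g., `pre_resp_msgbuf_`, `buf_`) vary between eRPC versions.

```cpp
// Minimal eRPC server sketch in the style of hello_world/server.cc.
// Constant names below are hypothetical stand-ins for those set in common.h.
#include <cstdio>
#include <string>
#include "rpc.h"

static constexpr uint16_t kUDPPort = 31850;  // eRPC management UDP port
static constexpr uint8_t kReqType = 2;       // application-chosen request type
static constexpr size_t kMsgSize = 16;       // response payload size
static const std::string kServerHostname = "server.example.com";

erpc::Rpc<erpc::CTransport> *rpc;

// Request handler: fill the preallocated response buffer and enqueue it.
void req_handler(erpc::ReqHandle *req_handle, void * /*context*/) {
  auto &resp = req_handle->pre_resp_msgbuf_;
  rpc->resize_msg_buffer(&resp, kMsgSize);
  std::snprintf(reinterpret_cast<char *>(resp.buf_), kMsgSize, "hello");
  rpc->enqueue_response(req_handle, &resp);
}

int main() {
  // One Nexus per process; it owns the management UDP port.
  erpc::Nexus nexus(kServerHostname + ":" + std::to_string(kUDPPort));
  nexus.register_req_func(kReqType, req_handler);

  // One Rpc object per thread; ID 0, no session-management callback needed here.
  rpc = new erpc::Rpc<erpc::CTransport>(&nexus, nullptr, 0, nullptr);
  rpc->run_event_loop(100000);  // serve requests for 100 seconds
  delete rpc;
}
```

The response enqueued from the handler is sent by the same event loop; the matching client side is sketched after the benchmark instructions below.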
## Supported bare-metal NICs:

* Ethernet/UDP mode:
  * DPDK-enabled bare-metal NICs: Use `-DTRANSPORT=dpdk`. We have primarily tested Mellanox CX3--CX5 NICs.
  * DPDK-enabled NICs on Microsoft Azure: Use `-DTRANSPORT=dpdk -DAZURE=on`
* RDMA (InfiniBand/RoCE) NICs: Use `-DTRANSPORT=infiniband`. Add `-DROCE=on` if using RoCE.

## Running eRPC over DPDK on Microsoft Azure VMs

* eRPC works well on Azure VMs with accelerated networking.
* Configure two Ubuntu 18.04 VMs as below. Use the same resource group and availability zone for both VMs.
  * Uncheck "Accelerated Networking" when launching each VM from the Azure portal (e.g., F32s-v2). For now, this VM should have just the control network (i.e., `eth0`) and `lo` interfaces.
  * Add a NIC to Azure via the Azure CLI: `az network nic create --resource-group <resource group> --name <NIC name> --vnet-name <VNet name> --subnet default --accelerated-networking true --subscription <subscription> --location <location>`
  * Stop the VM launched earlier, and attach the NIC created in the previous step to the VM (i.e., in "Networking" -> "Attach network interface").
  * Restart the VM. It should have a new interface called `eth1`, which eRPC will use for DPDK traffic.
* Prepare DPDK 21.11:
  * [rdma-core](https://github.com/linux-rdma/rdma-core) must be installed from source. We recommend the tag `stable-v40`. First, install its dependencies listed in rdma-core's README. Then, in the `rdma-core` directory:
    * `cmake .`
    * `sudo make install`
  * Install upstream pre-requisite libraries and modules:
    * `sudo apt install make cmake g++ gcc libnuma-dev libgflags-dev numactl`
    * `sudo modprobe ib_uverbs`
    * `sudo modprobe mlx4_ib`
  * Download the [DPDK tarball](https://core.dpdk.org/download/) and extract it. Other DPDK versions are not supported.
  * Edit `config/common_base` by changing `CONFIG_RTE_LIBRTE_MLX5_PMD` and `CONFIG_RTE_LIBRTE_MLX4_PMD` to `y` instead of `n`.
  * Build and locally install DPDK:
    ```bash
    export RTE_SDK=<directory to install DPDK into>
    git clone --depth 1 --branch 'v21.11' https://github.com/DPDK/dpdk.git "${RTE_SDK}"
    cd "${RTE_SDK}"
    meson build -Dexamples='' -Denable_kmods=false -Dtests=false -Ddisable_drivers='raw/*,crypto/*,baseband/*,dma/*'
    cd build/
    DESTDIR="${RTE_SDK}/build/install" ninja install
    ```
* Create hugepages:
  ```bash
  sudo bash -c "echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages"
  sudo mkdir /mnt/huge
  sudo mount -t hugetlbfs nodev /mnt/huge
  ```
* Build eRPC's library and latency benchmark:
  ```bash
  cmake . -DTRANSPORT=dpdk -DAZURE=on
  make
  make latency
  ```
* Create the file `scripts/autorun_process_file` like below. Here, do not use the IP addresses of the accelerated NIC (i.e., not of `eth1`).
  ```
  <VM #1's eth0 IP address> 31850 0
  <VM #2's eth0 IP address> 31850 0
  ```
* Run the eRPC application (the latency benchmark by default):
  * At VM #1: `./scripts/do.sh 0 0`
  * At VM #2: `./scripts/do.sh 1 0`

## Configuring and running the provided benchmarks

* The `apps` directory contains a suite of benchmarks and examples. The instructions below are for this suite of applications. eRPC can also be simply linked as a library instead (see `hello_world/` for an example).
* To build an application, create `scripts/autorun_app_file` and change its contents to one of the available directory names in `apps/`. See `scripts/example_autorun_app_file` for an example. Then generate a Makefile using `cmake . -DTRANSPORT=dpdk/infiniband`.
* Each application directory in `apps/` contains a config file that must specify all flags defined in `apps/apps_common.h`. For example, `num_processes` specifies the total number of eRPC processes in the cluster.
* The URIs of eRPC processes in the cluster are specified in `scripts/autorun_process_file`. Each line in this file must be `<hostname or IP> <management UDP port> <NUMA node>`.
* Run `scripts/do.sh` for each process:
  * With single-CPU machines: `num_processes` machines are needed. Run `scripts/do.sh <i> 0` on machine `i` in `{0, ..., num_processes - 1}`.
  * With dual-CPU machines: `num_machines = ceil(num_processes / 2)` machines are needed. Run `scripts/do.sh <process index> <NUMA node>` on machine `i` in `{0, ..., num_machines - 1}`.
* To automatically run an application at all processes in `scripts/autorun_process_file`, run `scripts/run-all.sh`. For some applications, statistics generated in a run can be collected and processed using `scripts/proc-out.sh`.
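For orientation, the benchmark apps and `hello_world/client.cc` share the same asynchronous client-side pattern: connect a session to a server URI, enqueue a request together with a continuation callback, and drive the event loop until the continuation fires. Below is a hedged sketch of that pattern, with the same caveats as the server sketch above: the constants are hypothetical stand-ins, and exact callback signatures and field names vary between eRPC versions.

```cpp
// Minimal eRPC client sketch in the style of hello_world/client.cc.
// Constant names are hypothetical stand-ins for values normally set in common.h.
#include <cstdio>
#include <string>
#include "rpc.h"

static constexpr uint16_t kUDPPort = 31850;
static constexpr uint8_t kReqType = 2;
static constexpr size_t kMsgSize = 16;
static const std::string kClientHostname = "client.example.com";
static const std::string kServerHostname = "server.example.com";

erpc::Rpc<erpc::CTransport> *rpc;
erpc::MsgBuffer req, resp;

// Continuation: invoked by the event loop when the response arrives.
void cont_func(void * /*context*/, void * /*tag*/) {
  std::printf("Response: %s\n", reinterpret_cast<char *>(resp.buf_));
}

// Session-management handler: connection events land here; nothing to do in this sketch.
void sm_handler(int /*session_num*/, erpc::SmEventType, erpc::SmErrType, void * /*context*/) {}

int main() {
  erpc::Nexus nexus(kClientHostname + ":" + std::to_string(kUDPPort));
  rpc = new erpc::Rpc<erpc::CTransport>(&nexus, nullptr, 0, sm_handler);

  // Connect a session to the server's Rpc object 0 and wait for the handshake.
  const std::string server_uri = kServerHostname + ":" + std::to_string(kUDPPort);
  const int session_num = rpc->create_session(server_uri, 0);
  while (!rpc->is_connected(session_num)) rpc->run_event_loop_once();

  // Allocate request/response buffers, enqueue one request, and run the event
  // loop long enough for cont_func to fire.
  req = rpc->alloc_msg_buffer_or_die(kMsgSize);
  resp = rpc->alloc_msg_buffer_or_die(kMsgSize);
  rpc->enqueue_request(session_num, kReqType, &req, &resp, cont_func, nullptr);
  rpc->run_event_loop(100);

  delete rpc;
}
```

Note that eRPC is cooperative: continuations and request handlers only run inside `run_event_loop()` / `run_event_loop_once()` calls on the thread that owns the `Rpc` object.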
## Getting help

* GitHub issues are preferred over email. Please include the following information in the issue:
  * NIC model
  * `rdma-core` version and DPDK version
  * Operating system

## Contact

Anuj Kalia

## License

Copyright 2018, Carnegie Mellon University

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
