zipkin-storage-kafka: Kafka-based Zipkin storage

zipkin-storage: Kafka [EXPERIMENTAL]

Kafka-based Zipkin storage. Partial architecture diagram:

```
+----------------------------*zipkin*----------------------------------------------
| [ dependency-storage ]--->( dependencies )
|                    ^      +-->( autocomplete-tags )
( collected-spans )-|->[ partitioning ]
```
zipkin-storage-kafka-master.zip
Description
# Test and Deploy scripts

This is a Maven+Docker project, which uses standard conventions for test and deploy. The Docker image for zipkin-storage-kafka is a layer over zipkin, including only the [../module] and configuration settings. As zipkin-storage-kafka is a contrib project, Docker images only push to `ghcr.io`.

[test] uses a non-standard [docker-compose-zipkin-storage-kafka] to ensure Kafka is available before running the image.

[//]: # (Below here should be standard for all projects)

## Build Overview

`build-bin` holds portable scripts used in CI to test and deploy the project.

The scripts here are portable: they do not include any CI provider-specific logic or ENV variables. This helps `.travis.yml` and `test.yml` (GitHub Actions) contain nearly the same contents, even if certain OpenZipkin projects need slight adjustments here. Portability has proven necessary, as OpenZipkin has had to transition CI providers many times due to feature and quota constraints.

These scripts serve a second purpose: facilitating manual releases, which has also happened many times, usually due to service outages of CI providers. While it is tempting to use CI provider-specific tools, doing so can easily create a dependency where no one knows how to release anymore. Do not use provider-specific mechanisms to implement release flow. Instead, automate triggering of the scripts here.

The only scripts that should be modified per project are in the base directory. Those in sub directories, such as [docker], should not vary project to project except by accident of version drift. Intentional changes in sub directories should be relevant and tested on multiple projects to ensure they can be blindly copy/pasted. Conversely, the files in the base directory are project-specific entry points for test and deploy actions and are entirely appropriate to vary per project. Here's an overview:

## Test

Test builds and runs any tests of the project, including integration tests.
CI providers should be configured to run tests on pull requests or pushes to the master branch, notably when the tag is blank. Tests should not run on documentation-only commits. Tests must not depend on authenticated resources, as running tests can leak credentials. Git checkouts should include the full history so that license headers or other git analysis can take place.

* [configure_test] - Sets up the build environment for tests.
* [test] - Builds and runs tests for this project.

### Example GitHub Actions setup

The simplest GitHub Actions `test.yml` runs tests after configuring them, but only on relevant event conditions. The name `test.yml` and job `test` allow easy references to status badges and parity with the scripts it uses.

The `on:` section obviates job creation and resource usage for irrelevant events. Notably, GitHub Actions includes the ability to skip documentation-only jobs.

Combine [configure_test] and [test] into the same `run:` when `configure_test` primes the file system cache.

Here's a partial `test.yml` including only the aspects mentioned above.

```yaml
on:
  push:
    tags: ''
    branches: master
    paths-ignore: '**/*.md'
  pull_request:
    branches: master
    paths-ignore: '**/*.md'

jobs:
  test:
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2
        with:
          fetch-depth: 0  # full git history
      - name: Test
        run: build-bin/configure_test && build-bin/test
```

### Example Travis setup

`.travis.yml` is a monolithic configuration file broken into stages, of which the default is "test". The simplest Travis `test` job configures tests in `install` and runs them as `script`, but only on relevant event conditions.

The `if:` section obviates job creation and resource usage for irrelevant events. Travis does not support file conditions. A `before_install` step to skip documentation-only commits will likely complete in less than a minute (10 credit cost).

Here's a partial `.travis.yml` including only the aspects mentioned above.
```yaml
git:
  depth: false  # TRAVIS_COMMIT_RANGE requires full commit history.

jobs:
  include:
    - stage: test
      if: branch = master AND tag IS blank AND type IN (push, pull_request)
      name: Run unit and integration tests
      before_install: |
        # Prevent test build of a documentation-only change.
        if [ -n "${TRAVIS_COMMIT_RANGE}" ] && ! git diff --name-only "${TRAVIS_COMMIT_RANGE}" -- | grep -qv '\.md$'; then
          echo "Stopping job as changes only affect documentation (ex. README.md)"
          travis_terminate 0
        fi
      install: ./build-bin/configure_test
      script: ./build-bin/test
```

When Travis only runs tests (something else does deploy), there's no need to use stages:

```yaml
git:
  depth: false  # TRAVIS_COMMIT_RANGE requires full commit history.

if: branch = master AND tag IS blank AND type IN (push, pull_request)

before_install: |
  # Prevent test build of a documentation-only change.
  if [ -n "${TRAVIS_COMMIT_RANGE}" ] && ! git diff --name-only "${TRAVIS_COMMIT_RANGE}" -- | grep -qv '\.md$'; then
    echo "Stopping job as changes only affect documentation (ex. README.md)"
    travis_terminate 0
  fi

install: ./build-bin/configure_test
script: ./build-bin/test
```

## Deploy

Deploy builds and pushes artifacts to a remote repository for master and release commits on it. CI providers deploy pushes to master when the tag is blank, but not on documentation-only commits. Releases should deploy on version tags (ex. `/^[0-9]+\.[0-9]+\.[0-9]+/`), regardless of whether the commit is documentation-only.

* [configure_deploy] - Sets up the environment and logs in, assuming [configure_test] was not called.
* [deploy] - Deploys the project, with arg0 being "master" or a release commit like "1.2.3"

### Example GitHub Actions setup

The simplest GitHub Actions `deploy.yml` deploys after logging in, but only on relevant event conditions. The name `deploy.yml` and job `deploy` allow easy references to status badges and parity with the scripts it uses.
The `on:` section obviates job creation and resource usage for irrelevant events. GitHub Actions cannot implement "master, except documentation-only commits" in the same file; hence, deployments of master will happen even on a README change.

Combine [configure_deploy] and [deploy] into the same `run:` when `configure_deploy` primes the file system cache.

Here's a partial `deploy.yml` including only the aspects mentioned above. Notice env variables are explicitly defined and `on.tags` is a [glob pattern](https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet).

```yaml
on:
  push:
    tags: '[0-9]+.[0-9]+.[0-9]+**'  # Ex. 8.272.10 or 15.0.1_p9
    branches: master

jobs:
  deploy:
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2
        with:
          fetch-depth: 1  # only needed to get the sha label
      - name: Deploy
        env:
          GH_USER: ${{ secrets.GH_USER }}
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
        run: |
          # GITHUB_REF will be refs/heads/master or refs/tags/MAJOR.MINOR.PATCH
          build-bin/configure_deploy && build-bin/deploy $(echo ${GITHUB_REF} | cut -d/ -f 3)
```

### Example Travis setup

`.travis.yml` is a monolithic configuration file broken into stages. This means `test` and `deploy` are in the same file. The simplest Travis `deploy` stage has two jobs: one for master pushes and another for version tags. These jobs are controlled by event conditions.

The `if:` section obviates job creation and resource usage for irrelevant events. Travis does not support file conditions. A `before_install` step to skip documentation-only commits will likely complete in less than a minute (10 credit cost). As billing is by the minute, it is most cost-effective to combine test and deploy on master push.

Here's a partial `.travis.yml` including only the aspects mentioned above.
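The two shell fragments embedded in the configs above, the documentation-only check in Travis `before_install` and the `GITHUB_REF` version extraction in the deploy `run:` step, can be exercised on their own. A minimal sketch, where the changed-file list and ref values are illustrative rather than taken from a real commit range:

```shell
# Documentation-only check: grep -qv '\.md$' succeeds only if some
# changed file does NOT end in .md (i.e. a code change exists).
changed="README.md
docs/intro.md"
if printf '%s\n' "$changed" | grep -qv '\.md$'; then
  echo "run tests"
else
  echo "documentation-only: skip"   # prints this for the list above
fi

# Version extraction: field 3 of the '/'-separated ref is the branch
# name (refs/heads/master) or the release version (refs/tags/1.2.3).
GITHUB_REF=refs/tags/1.2.3
version=$(echo "${GITHUB_REF}" | cut -d/ -f 3)
echo "${version}"   # 1.2.3

# Release tags match the version pattern mentioned in the Deploy section.
echo "${version}" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+' && echo "release"
```

Passing the extracted field straight to [deploy] is what lets the same `run:` step handle both master pushes and release tags.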