# kappa-architecture
An experiment using immutable event logs, stream processors, and materialized views to achieve distributed functional reactive programming
## Installation
Download the Confluent Platform from `https://www.confluent.io/download` and extract it into this directory as `runtime`:
```
tar -xvf ~/Downloads/confluent-oss-4.0.0-2.11.tar.gz
mv confluent-4.0.0 ~/workspace/kappa-architecture/runtime
```
Use the workstation setup script to install the required dependencies (including the kappa command line interface):
```
./setup/install
```
Create a Postgres database into which we can materialize a view:
```
createdb consumer_development
```
Install the private key used to decrypt secrets:
```
sudo mkdir -p /opt/ejson/keys
sudo vim /opt/ejson/keys/3563cb1ccbbc1c5adc6f81684e7b85d9d40b0e8cfece2320e04e31af641b624c
```
## Development
Use the command line interface to operate the application. For example, to run all tests:
```
kappa test
```
## Startup
First:

- `kappa runtime start` to start ZooKeeper, Kafka, the Schema Registry, and Kafka Connect
- `kappa topics create` to create the required topics in the local Kafka cluster
- `kappa bindings generate` to generate serializers and deserializers from the schema files

Then, in separate terminals:

- `kappa start producer` to launch the producer application
- `kappa start stream-processor` to launch the stream-processor application
- `kappa start consumer` to launch the consumer application
- `kappa connectors load` to load the jdbc-sink Kafka Connect connector
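The topics above imply two record shapes. As a rough illustration only, they might look like the data classes below; the project's actual classes are produced by `kappa bindings generate` from its schema files and may differ.

```kotlin
// Hypothetical record shapes for the two topics; the real bindings are
// generated from the project's schema files and may differ.
data class SentenceCreated(val words: String)          // produced to sentence_created
data class WordCount(val word: String, val count: Int) // produced to word_counts

fun main() {
    val event = SentenceCreated("alpine rainbows")
    println(event) // SentenceCreated(words=alpine rainbows)
}
```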
## Usage
Notice that the word_counts table doesn't exist yet:
```
$ psql consumer_development
consumer_development=# SELECT * FROM word_counts;
ERROR: relation "word_counts" does not exist
```
Use the producer to write a few sentences to the sentence_created Kafka topic:
```
curl -X POST localhost:8080/sentences -H "Content-Type: application/json" --data '{"words": "alpine rainbows"}'
curl -X POST localhost:8080/sentences -H "Content-Type: application/json" --data '{"words": "rainbows and sheep"}'
```
The stream-processor is listening to that topic and will write an updated set of word counts to the word_counts topic.
The jdbc-sink connector is listening to word_counts and will materialize that into a table in the consumer_development database.
```
$ psql consumer_development
consumer_development=# SELECT * FROM word_counts;
count | word
-------+---------
1 | alpine
2 | rainbows
1 | and
1 | sheep
(4 rows)
```
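The counting step itself can be sketched in plain Kotlin. This is a simplified, batch-style stand-in for the actual stream-processor topology, which computes the counts incrementally as sentences arrive; the whitespace tokenization is an assumption.

```kotlin
// Simplified sketch of the word-count logic; the real stream processor
// updates these counts incrementally from the sentence_created topic.
fun wordCounts(sentences: List<String>): Map<String, Int> =
    sentences
        .flatMap { it.lowercase().split(Regex("\\s+")) } // naive whitespace tokenization (an assumption)
        .filter { it.isNotBlank() }
        .groupingBy { it }
        .eachCount()

fun main() {
    // The two sentences posted above yield the same counts as the table above.
    println(wordCounts(listOf("alpine rainbows", "rainbows and sheep")))
    // {alpine=1, rainbows=2, and=1, sheep=1}
}
```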
Independently, the consumer is listening to word_counts and materializing its own view using RocksDB:
```
curl localhost:8181/word_counts/alpine
{"word":"alpine","count":1}
```
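The consumer's view can be pictured as a key-value store keyed by word, where each word_counts record overwrites the previous count. Here is a minimal in-memory sketch, with a mutable map standing in for RocksDB; the names are illustrative, not the project's actual code.

```kotlin
// Minimal sketch of the consumer's materialized view. A mutable map stands in
// for RocksDB; each word_counts record upserts the latest count for its word.
class WordCountView {
    private val store = mutableMapOf<String, Int>()

    // Apply one record from the word_counts topic: last write wins.
    fun update(word: String, count: Int) {
        store[word] = count
    }

    // Serve a lookup, as the HTTP endpoint above does.
    fun get(word: String): Int? = store[word]
}

fun main() {
    val view = WordCountView()
    view.update("rainbows", 1)
    view.update("rainbows", 2) // a later record overwrites the earlier count
    println(view.get("rainbows")) // 2
}
```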