hipchat-docker-infra

Category: Redis
Development tool: Shell
File size: 82KB
Downloads: 0
Upload date: 2023-05-14 05:51:03
Uploader: sh-1993
Description: hipchat-docker-infra, a collection of scripts and docker builds to create the infrastructure required for HipChat Datacenter (Redis, Postgres, NFS, HAProxy)

File list:
LICENSE (11357, 2018-03-22)
del_stack.sh (342, 2018-03-22)
docker-compose-ha.yml (5142, 2018-03-22)
docker-compose-multistack.yml (1426, 2018-03-22)
docker-compose.yml (982, 2018-03-22)
down_stack.sh (56, 2018-03-22)
haproxy (0, 2018-03-22)
haproxy\Dockerfile_HAPROXY (240, 2018-03-22)
haproxy\haproxy.cfg (954, 2018-03-22)
init_stack.sh (2487, 2018-03-22)
net_mon.sh (3509, 2018-03-22)
nfs-export-cfg (69, 2018-03-22)
nuke.sh (110, 2018-03-22)
pgpool (0, 2018-03-22)
pgpool\pgpool.conf (31710, 2018-03-22)
pgsql (0, 2018-03-22)
pgsql\Pgpool-3.6.Dockerfile (2543, 2018-03-22)
pgsql\Pgpool-latest.Dockerfile (23, 2018-03-22)
pgsql\Postgres-9.5.Dockerfile (4589, 2018-03-22)
pgsql\Postgres-9.6.Dockerfile (4588, 2018-03-22)
pgsql\includes.Dockerfile (0, 2018-03-22)
pgsql\includes.Dockerfile\Barman-2.3.part.Dockerfile (2480, 2018-03-22)
pgsql\includes.Dockerfile\Pgpool-3.3-3.6.part.Dockerfile (2157, 2018-03-22)
pgsql\includes.Dockerfile\Postgres-9.5-9.6.part.Dockerfile (4381, 2018-03-22)
pgsql\includes.Dockerfile\Postgres-extended-9.5-9.6.part.Dockerfile (186, 2018-03-22)
pgsql\pgpool (0, 2018-03-22)
pgsql\pgpool\bin (0, 2018-03-22)
pgsql\pgpool\bin\entrypoint.sh (578, 2018-03-22)
pgsql\pgpool\bin\has_enough_backends.sh (805, 2018-03-22)
pgsql\pgpool\bin\has_write_node.sh (265, 2018-03-22)
pgsql\pgpool\bin\pgpool_setup.sh (3682, 2018-03-22)
pgsql\pgpool\bin\pgpool_start.sh (79, 2018-03-22)
pgsql\pgpool\configs (0, 2018-03-22)
pgsql\pgpool\configs\pgpool.conf (30804, 2018-03-22)
pgsql\pgsql (0, 2018-03-22)
pgsql\pgsql\bin (0, 2018-03-22)
pgsql\pgsql\bin\entrypoint.sh (468, 2018-03-22)
pgsql\pgsql\bin\functions (0, 2018-03-22)
... ...

# Overview

A collection of scripts and docker builds to create the infrastructure required for HipChat Datacenter (Redis, Postgres, NFS, HAProxy). Note this is not for production...

Use this for:

* Testing HA
* Fast POCs
* Testing features/new versions without several VMs

# Upcoming HA Features

* HAProxy Clustering

### Not for production use ###

## Getting Started:

* Clone this repo
* Edit init_stack.sh line 40 with your choice of docker-compose.yml or docker-compose-ha.yml
* ./init_stack.sh
* Get a coffee
* ***
* Profit

## init_stack.sh

Creates the directories needed for docker-compose persistent storage and tests port connectivity.

* Creates $HOME/dockerdata/blah
* Copies the NFS exports cfg to the above nfs_hipchat folder
* Tests local ports for availability (you can remove this and let docker handle your failures)
* Checks if hipc.pem is present; if not, creates a dummy .pem for you
* Builds the HAProxy container and tags the image locally
* Runs the docker compose and prints output to your shell
* Runs docker-compose.yml by default

## del_stack.sh

Deletes the composed stack but keeps container images + data on volumes.

## down_stack.sh

Performs a docker-compose down to shut down all containers.

## nuke.sh

Deletes all containers and images! Only use this to clean up images and containers when they are no longer needed. DELETES all containers!

* Uses the -f flag to force the deletion of images

# Standalone Services:

### NFS

Shared storage used by all HCDC nodes

* NFSv4 only
* Not secure in any way
* HCDC nodes seem to need a reboot after the config.json is applied before selfcheck passes the NFS IO tests

### Redis

Shared NoSQL instance used by HCDC nodes

* Simple Redis instance for caching
* All defaults

### Postgres

Shared SQL database used by HCDC nodes

* Simple Postgres DB for HipChat
* Schema creation handled by the application

### HAProxy

Load balancer frontend for HipChat + SSL termination

* BYO self-generated certificate as hipc.pem
* The shipped config is the best known and tested option
* Update the frontend port to whatever you like
* Configure your HCDC instance to the FQDN pointing at the load balancer (hack your /etc/hosts if you don't have DNS)
* Configure backend IP addresses in haproxy.cfg, or use 192.168.122.100 etc.

### Next Steps

* Deploy the HipChat 3.1.3+ .OVA
* Configure your HCDC nodes with IPs from haproxy.cfg
* Using the Web UI setup wizard, configure the Postgres service, then Redis and NFS, using the IP address of your workstation after a successful compose (a quick reachability check is sketched below)
* Using the hipchat datacenter cli, configure your instance using a config.json, restart the instance and reboot. On reboot, perform the hipchat datacenter selfcheck
* Official Atlassian doco for these steps is here: https://confluence.atlassian.com/hipchatdc3/configure-hipchat-data-center-nodes-909770912.html
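The setup wizard step above needs the standalone services reachable from the HCDC nodes. As a minimal sketch, assuming the stack is composed on your workstation, that `DOCKER_HOST_IP` below is that workstation's address (192.168.122.100 is just the example address used elsewhere in this README), and that Redis, Postgres and NFS listen on their default ports (6379, 5432, 2049), you can sanity-check reachability before running the wizard:

```sh
#!/usr/bin/env bash
# Reachability checks for the standalone stack.
# DOCKER_HOST_IP is an assumption -- set it to the IP of the machine
# running docker-compose; the ports are the stack's defaults.
DOCKER_HOST_IP="${DOCKER_HOST_IP:-192.168.122.100}"

# Redis: expect "PONG"
redis-cli -h "$DOCKER_HOST_IP" -p 6379 ping

# Postgres: expect "accepting connections"
pg_isready -h "$DOCKER_HOST_IP" -p 5432

# NFS: the export is v4 only, so just confirm the port is open
nc -z "$DOCKER_HOST_IP" 2049 && echo "NFS port 2049 reachable"
```

If any of these fail, re-check the port tests that init_stack.sh ran and the compose output before continuing.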
# High Availability Services:

### NFS

* Singular instance for now...

### Redis HA

* 3x Redis 3.2 instances (1 master, 2 slaves)
* 1x Sentinel monitor (QUORUM = 1)

### Postgres + PGPool

* 3x Postgres 9.5 instances (1 master, 2 slaves)
* 2x PGPool with Watchdog

### HAProxy + Keepalived

* Singular instance for now

# Next Steps

* Deploy the HipChat 3.1.1+ .OVA
* Configure your HCDC nodes with IPs from haproxy.cfg
* Using the Web UI setup wizard, configure the Postgres service, then Redis and NFS, using the IP address of your workstation after a successful compose
* To benefit from automagic failovers, connect to Postgres via PGPool (port 5430) and to Redis via Sentinel (port 9000)
* Using the hipchat datacenter cli, configure your instance using a config.json, restart the instance and reboot. On reboot, perform the hipchat datacenter selfcheck
* Official Atlassian doco for these steps is here: https://confluence.atlassian.com/hipchatdc3/configure-hipchat-data-center-nodes-909770912.html

# I want to use this in Production?

By all means take the concept, but this isn't a production deployment bible. You should look at the following as the next steps:

* Use a CI/CD pipeline to manage container builds + publish them to a registry
* Use an orchestration engine such as Kubernetes instead, as docker-compose is not a production-ready service
* Scale/replicas of "services"
* Apply best practices like 3+ Sentinel nodes, multiple PGPool nodes, etc.
* Use AWS EFS + NFS or a SAN NFS volume and let the hardware vendors manage your disk/failover
* Understand how docker volumes, data persistence and docker in general work

# What does this really provide?

I wanted a lean way of deploying components without the use of VMs; they're too static, slow and kinda old school. This really helps in building up and tearing down instances quickly, just about anywhere. However, it evolved into high availability of services, mostly around Redis and Postgres, as that is a concept that's quite complex.

## Redis HA with Sentinel

Redis native high availability is provided by the Sentinel service, which ships as part of Redis. Essentially, it is deployed and run alongside your Redis servers and monitors the master and slaves. A QUORUM setting is defined in the config, which is essentially "how many sentinels must agree that a master is down before a slave is promoted to master". Sentinel also works as a connection broker for your application. In this deployment, you should connect your application to the Redis Sentinel server (port 9000), which brokers the connection and directs it to the current master. To test this, run `docker kill rdsmaster`: does the application still work? You have just simulated someone accidentally unplugging your server in the datacenter.

## Postgres with PGPool

There are a few ways to skin this cat, but I think PGPool2 is the best method; in the days of automation and automagic stuff happening, we shouldn't need to worry about manually switching masters/slaves, updating configs and endpoints, etc. PGPool operates much like Sentinel, as a cache and connection broker, but has a lot more brains included, handling replication, load balancing and some other cool stuff. The automagic failover is done by the watchdog process: we allocate a master Postgres instance and PGPool handles the rest. Killing that node, with replication working, should result in a transparent failover to a slave and a promotion to MASTER so that WRITES to the database keep working. A sketch of how to exercise both failover paths is below.
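As a minimal sketch of those failover tests, and only as an illustration: `rdsmaster` is the Redis master container mentioned above, but the Sentinel master-set name (`mymaster`), the Postgres master container name (`pgmaster`) and the database user (`hipchat`) are assumptions you would need to replace with whatever your compose files and pgpool.conf actually use.

```sh
#!/usr/bin/env bash
# Failover smoke tests for the HA stack. mymaster, pgmaster and the hipchat
# DB user are assumptions -- check `docker ps`, the compose file and
# pgsql/pgpool/configs/pgpool.conf for the real names before running.
DOCKER_HOST_IP="${DOCKER_HOST_IP:-127.0.0.1}"

# Redis: ask Sentinel (port 9000) who the master is, kill it, then ask again
redis-cli -h "$DOCKER_HOST_IP" -p 9000 sentinel get-master-addr-by-name mymaster
docker kill rdsmaster
sleep 15   # give Sentinel time to reach quorum and promote a slave
redis-cli -h "$DOCKER_HOST_IP" -p 9000 sentinel get-master-addr-by-name mymaster

# Postgres: check backend status through PGPool (port 5430), kill the master, check again
psql -h "$DOCKER_HOST_IP" -p 5430 -U hipchat -c "show pool_nodes;"
docker kill pgmaster
sleep 15   # give the failover script time to promote a slave
psql -h "$DOCKER_HOST_IP" -p 5430 -U hipchat -c "show pool_nodes;"
```

If the second `show pool_nodes;` reports one of the former slaves as the primary and the application still accepts writes, the failover worked.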
