system-design-sonnets: a repo of system design concepts and principles that helps you progress toward an architect role and helps with interviews at product-based companies

# Foreword

system-design-sonnets is a collection of system design concepts and principles that help both in progressing towards an architect role in your organization and in cracking product-based companies, which stress not only coding ability but also the architectural side.

## Some good resources on which this content is researched:

- Overall Topics: https://github.com/donnemartin/system-design-primer
- Multiple Load Balancers: https://www.flexera.com/blog/cloud/dns-load-balancing-and-using-multiple-load-balancers-in-the-cloud/
- Load Balancing Techniques: https://kemptechnologies.com/load-balancer/load-balancing-algorithms-techniques/
- CAP Theorem: https://www.youtube.com/watch?v=k-Yaq8AHlFA
- Diagrams authored via https://app.diagrams.net/
- Content Delivery Network: https://www.youtube.com/watch?v=Bsq5cKkS33I
- Forward Proxy vs Reverse Proxy vs Load Balancers: https://www.youtube.com/watch?v=MiqrArNSxSM
- Heartbeat Systems: https://medium.com/@adhorn/patterns-for-resilient-architecture-part-3-16e8601c488e
- API Gateways: https://www.nginx.com/blog/building-microservices-using-an-api-gateway/
- Zookeeper: https://lucidworks.com/post/how-to-use-apache-zookeeper-to-build-distributed-apps-and-why/
- Bloom Filters: https://www.youtube.com/watch?v=bgzUdBVr5tE and https://www.youtube.com/watch?v=heEDL9usFgs
- Cassandra: https://blog.discord.com/how-discord-stores-billions-of-messages-7fa6ec7ee4c7
- UUID: https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake.html

# Centralized vs Distributed Systems

- Centralized systems have everything on the same machine. They present a single point of failure.
- Distributed systems have building blocks which run on different nodes (machines). This provides separation and helps extend the architecture as needed in the future by adding redundancy/replication.

# System Design - Why is it needed, a short intro

As you build an application and the application becomes a success, the first thing you would notice is that the number of users visiting or using your application increases. This increase can then expose faulty areas in your architecture that were previously dormant - in most cases, the web servers, databases, etc. Furthermore, if a database that you are heavily reliant on goes down, your application is rendered useless.

[comment]: <> (Add a picture for easier recollection)

That is why reducing bottlenecks and having standby extras (databases, servers, etc.) comes in handy.

## Horizontal vs Vertical Scaling

If the number of requests that a server is serving keeps increasing, we have two options.

### Vertical scaling

- Simply add more hardware to the system: increase its RAM, buy better hard drives or move to SSDs, add more L1/L2 cache.
  - The L1 cache is directly on the microprocessor chip itself and so is the fastest cache.
  - The L2 cache is typically on a separate chip on the motherboard, but very close to the processor. L2 is only consulted if the L1 lookup is a **miss**.
  - ***L1 is about 14x faster than L2 and about 100x faster than a RAM query.***
- Sooner or later, though, you would hit a limit on how far a single server can be upgraded.
- ***Not to mention, having a single server serve all user requests results in a single point of failure: the server itself. If it goes down, the application goes down.***

### Horizontal scaling

- In this mode, we add multiple servers of similar configuration and put a load balancer in front of them.
- This means that the load balancer is now hit for any request to these servers.
- ***If the load on the application is bound to increase, simply increase the number of servers behind the load balancer.***
- ***Only the load balancer needs a public IP; the servers behind it need not have public IPs => they need not be accessible from the internet.***
- ***For the application's domain name, the DNS record now maps to the load balancer's IP. Furthermore, we can have more than one load balancer for an application; in that case, all the load balancers are listed in DNS as multiple records.***
- ***Take google for example (see the lookup sketch below):***

[comment]: <> (Add a picture for easier recollection)
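To make the multiple-DNS-records point concrete, here is a minimal lookup sketch in Python; `google.com` is just an illustrative hostname, and the actual addresses returned depend on your resolver:

```python
import socket

# Resolve a hostname the way `nslookup` would; a domain fronted by
# several load balancers can return several A records.
def resolve_all(hostname: str) -> list[str]:
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for ip in resolve_all("google.com"):
        print(ip)
```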
## Load Balancers

When a request reaches a load balancer from the internet, there is a variety of ways in which the request can be forwarded to one of the available application servers. (The first four strategies are sketched in code after this list of algorithms.)

![Load Balancer & Distribution Algorithms](/images/load-balancer-and-balancing-algorithms.png)

#### Algorithm: Random distribution

- Requests are distributed to a random web server.
- ***The same server could end up receiving most of the load, which yields improper usage of the other resources.***
- ***Applications that need to connect to the same server for all requests would lose their session state.***
  - Example: a shopping cart application where the checked-out items are held in the session.

#### Algorithm: Least Busy Server

- Requests are distributed to the server that has the least load / fewest open connections.
- ***Judicious usage of servers.***
- ***Applications that need to connect to the same server for all requests would lose their session state.***
  - Example: a shopping cart application where the checked-out items are held in the session.

#### Algorithm: Round Robin

- Requests are distributed to each of the servers in turn, one after the other.
- Judicious distribution of tasks.
- A heavily loaded server would still keep receiving requests.
- ***Applications that need to connect to the same server for all requests would lose their session state.***
  - Example: a shopping cart application where the checked-out items are held in the session.

#### Algorithm: Sticky Sessions / Source IP Hash

- The load balancer analyzes each request and directs it to the web server given by the HASH of, say, the user id or the source IP.
- All requests from the same client machine of a given user go to the same web server.
- This HASH could even be stored as a cookie on the client machine by the browser.
- ***This approach preserves the session caches we have discussed above.***
- ***The drawback of this approach is the unequal distribution of requests across the servers.***

#### Algorithm: Round Robin + Session Cache

- Requests go to the load balancer, which checks whether the user is new to the site.
  - If so, the server that falls next in the round robin is assigned to the user, and an entry is made in a central cache holding the session information.
  - If the user is not new, a lookup is made to see which server they were previously allocated to, and the request is redirected to that application server.
- ***This approach preserves the user's historic session information while still making judicious usage of the servers.***
- ***An important point to note here is the storage for the user sessions - it should be a persistent, simple storage like Redis.*** (A sketch of this scheme follows the strategy sketches below.)
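As promised above, here is a minimal sketch of the first four strategies, assuming Python; the private server IPs and the in-memory connection counts are illustrative stand-ins, not part of the original text:

```python
import itertools
import random
from hashlib import sha256

# Illustrative private IPs of the servers behind the load balancer.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Random distribution: any server may be picked; load can skew badly.
def pick_random() -> str:
    return random.choice(servers)

# Round robin: hand requests to each server in turn.
_rotation = itertools.cycle(servers)
def pick_round_robin() -> str:
    return next(_rotation)

# Least busy: route to the server with the fewest open connections.
open_connections = {s: 0 for s in servers}
def pick_least_busy() -> str:
    server = min(open_connections, key=open_connections.get)
    open_connections[server] += 1  # caller should decrement on close
    return server

# Sticky sessions via source IP hash: the same client IP always lands
# on the same server, so in-memory session state survives.
def pick_by_source_ip(client_ip: str) -> str:
    digest = int(sha256(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Note how the first three strategies are oblivious to who the client is, which is exactly why they break session-bound applications, while the hash-based one trades even distribution for stickiness.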
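And a sketch of the round robin + session cache scheme; the plain dict below stands in for the persistent shared store (e.g. Redis) that the text recommends:

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
_rotation = itertools.cycle(servers)

# Stand-in for a persistent central session store such as Redis.
session_store: dict[str, str] = {}

def route(user_id: str) -> str:
    """New users get the next server in the rotation; returning users
    are sent back to the server recorded in the session store."""
    if user_id not in session_store:
        session_store[user_id] = next(_rotation)
    return session_store[user_id]
```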
## Content Delivery Network

When an application is deployed in a certain geography, say the Netherlands, people from the Netherlands notice that the site is fast to access. But people from other parts of the world, say the US, notice that it is slower: US to Netherlands has a delay of about 140ms. Customers residing in Australia could experience the worst latency, with a ping delay of around 200ms. The greater this delay, the worse the application's performance.

The delay we just referred to here is the ***ping delay***: the amount of time it takes for a network packet to travel from the US/AU to the Netherlands and back.

For most applications, the parts that always remain the same can be cached - the images, the HTML pages, etc. If we are able to cache them closer to, say, the US/AU in this case, the delay comes down radically and gives the application very good performance. Note that the dynamic parts that have to go to the central server in the Netherlands for execution would still exist, but the application would still offer far better performance than without the caches. These caches are called Content Delivery Networks (CDNs).
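A toy sketch of why edge caching helps, assuming Python; the simulated latencies mirror the ping delays quoted above, and the asset contents are placeholders:

```python
import time

ORIGIN_RTT = 0.140  # ~140 ms round trip, e.g. US -> Netherlands
EDGE_RTT = 0.005    # a nearby CDN edge node answers in a few ms

edge_cache: dict[str, bytes] = {}

def fetch_from_origin(path: str) -> bytes:
    time.sleep(ORIGIN_RTT)            # pay the long-haul delay
    return b"<static asset bytes>"    # placeholder content

def fetch(path: str) -> bytes:
    if path in edge_cache:            # hit: served close to the user
        time.sleep(EDGE_RTT)
        return edge_cache[path]
    body = fetch_from_origin(path)    # miss: fetch once from the origin
    edge_cache[path] = body           # then keep a copy at the edge
    return body
```

Only the first request for a static asset pays the full trip to the origin; every subsequent request is served at edge latency.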