Saturday 15 June 2019

Scalability | Proxies


Proxy server
A proxy server is a dedicated computer or a software system running on a computer that acts as an intermediary between the client and the back-end server. The proxy server may run on the same machine as a firewall server, or it may be a separate server that forwards requests through the firewall.

Typically, proxies are used to filter requests, log requests, or sometimes transform requests (by adding/removing headers, encrypting/decrypting, or compressing a resource). Another advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are frequently requested, these are likely to be in the proxy's cache, and the proxy can serve them to all clients without going to the remote server, which improves response time. A proxy can also log its interactions, which can be helpful for troubleshooting.
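
As a tiny illustration of the "transform requests" idea, here is a minimal Python sketch of a hypothetical helper that strips an internal header and adds a forwarding header before a request goes upstream; the header names are made up for the example.

    def transform_headers(headers: dict[str, str]) -> dict[str, str]:
        # Drop a header that should never leave the internal network.
        out = {k: v for k, v in headers.items() if k.lower() != "x-internal-token"}
        # Mark the request as having passed through the proxy.
        out["X-Forwarded-By"] = "example-proxy"
        return out

    print(transform_headers({"Host": "example.com", "X-Internal-Token": "secret"}))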

Proxy Server Types

Proxies can reside on the client's local machine or anywhere between the client and the remote servers. Here are a few well-known types of proxy servers:

Open Proxy
An open proxy is a proxy server that is accessible by any Internet user. Generally, a proxy server only allows users within a network group (i.e. a closed proxy) to store and forward Internet services such as DNS or web pages, in order to reduce and control the bandwidth used by the group. With an open proxy, however, any user on the Internet is able to use this forwarding service. There are two well-known open proxy types:

1. Anonymous Proxy
This proxy reveals its identity as a proxy server but does not disclose the client's original IP address. Though this proxy server can be discovered easily, it can be beneficial for some users as it hides their IP address.

2. Transparent Proxy
This proxy server again identifies itself, and with the support of HTTP headers, the original IP address can be viewed. The main benefit of using this sort of server is its ability to cache websites.

Reverse Proxy

A reverse proxy retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the proxy server itself. Reverse proxies are typically implemented to help increase security, performance, and reliability.
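
Here is a minimal Python sketch of the idea, assuming a single hypothetical origin at 127.0.0.1:9000: the client talks only to the proxy, which fetches the resource from the origin on its behalf and returns it as its own response.

    import http.server
    import urllib.request

    ORIGIN = "http://127.0.0.1:9000"  # hypothetical back-end origin server

    class ReverseProxyHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Fetch the resource from the origin on behalf of the client.
            with urllib.request.urlopen(ORIGIN + self.path) as upstream:
                body = upstream.read()
                status = upstream.status
            # Return it as if it originated from the proxy itself.
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8080), ReverseProxyHandler).serve_forever()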

Benefits of a reverse proxy

Load balancing
A reverse proxy can provide a load balancing solution which will distribute the incoming traffic evenly among the different servers to prevent any single server from becoming overloaded. In the event that a server fails completely, other servers can step up to handle the traffic.
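
A sketch of the simplest balancing policy, round robin, with hypothetical backend addresses: each incoming request is handed to the next server in the list, spreading traffic evenly.

    import itertools

    BACKENDS = ["http://10.0.0.1:9000", "http://10.0.0.2:9000", "http://10.0.0.3:9000"]
    _rotation = itertools.cycle(BACKENDS)

    def pick_backend() -> str:
        # Each call returns the next backend, wrapping around at the end of the list.
        return next(_rotation)

    for _ in range(4):
        print(pick_backend())  # .1, .2, .3, then back to .1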

Protection from attacks
With a reverse proxy in place, a web site or service never needs to reveal the IP address of its origin server(s). This makes it much harder for attackers to mount a targeted attack against them, such as a DDoS attack. Instead, the attackers will only be able to target the reverse proxy, such as Cloudflare's CDN, which will have tighter security and more resources to fend off a cyber attack.

Global Server Load Balancing (GSLB)
In this form of load balancing, a website can be distributed on several servers around the globe and the reverse proxy will send clients to the server that’s geographically closest to them. This decreases the distances that requests and responses need to travel, minimizing load times.

Caching
A reverse proxy can also cache content, resulting in faster performance. For example, if a user in Mumbai visits a reverse-proxied website with web servers in Los Angeles, the user might actually connect to a local reverse proxy server in Mumbai, which will then have to communicate with an origin server in Los Angeles. The proxy server can then cache (or temporarily save) the response data. Subsequent users in Mumbai who browse the site will then get the locally cached version from the Mumbai reverse proxy server, resulting in much faster performance.
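
The sketch below shows the caching idea in Python, assuming a hypothetical fetch_from_origin callable that stands in for the slow round trip to the origin; responses are kept locally for a fixed time-to-live.

    import time

    CACHE_TTL = 60.0  # seconds to keep a response before refetching (illustrative)
    _cache: dict[str, tuple[float, bytes]] = {}

    def get(path: str, fetch_from_origin) -> bytes:
        now = time.monotonic()
        hit = _cache.get(path)
        if hit is not None and now - hit[0] < CACHE_TTL:
            return hit[1]                  # served from the local cache, no origin trip
        body = fetch_from_origin(path)     # slow round trip to the origin server
        _cache[path] = (now, body)
        return body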

SSL encryption
Encrypting and decrypting SSL (or TLS) communications for each client can be computationally expensive for an origin server. A reverse proxy can be configured to decrypt all incoming requests and encrypt all outgoing responses, freeing up valuable resources on the origin server.
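
A minimal sketch of TLS termination in Python, assuming hypothetical certificate files and using the standard-library SimpleHTTPRequestHandler as a stand-in for the reverse-proxy handler sketched earlier: the proxy decrypts client traffic, and whatever sits behind it can stay plain HTTP.

    import http.server
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # hypothetical cert/key files

    # Stand-in handler; a real deployment would use the reverse-proxy handler instead.
    httpd = http.server.HTTPServer(("", 8443), http.server.SimpleHTTPRequestHandler)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)  # decryption happens here
    httpd.serve_forever()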

Friday 14 June 2019

Scalability | Principles Of High Performance Programs

This article is an attempt to sum up a small number of generic rules that appear to be useful rules of thumb when creating high-performing programs. It first establishes two fundamental causes of performance problems and then presents the rules that follow from them.

Two fundamental causes of performance problems

Memory Latency: A big performance problem on modern computers is the latency of SDRAM. The CPU waits idly for a read from memory to come back.

Context Switching: When a CPU switches context "the memory it will access is most likely unrelated to the memory the previous context was accessing. This often results in significant eviction of the previous cache, and requires the switched-to context to load much of its data from RAM, which is slow."

Rules to help balance the forces of evil

Batch work: To avoid the cost of context switches, it makes sense to invoke them as rarely as possible. You may not have much control over the operating system's system calls, but you can reduce context switching by batching work into fewer calls. For example, there are vector versions of system calls, like writev() and readv(), that operate on more than one buffer per call. The implication is that you want to merge as many writes as possible.
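
A small Python sketch of the batching idea, assuming a POSIX system where os.writev is available; three buffers go to the kernel in one system call instead of three separate write() calls. The file path is illustrative.

    import os

    buffers = [b"header\n", b"row 1\n", b"row 2\n"]
    fd = os.open("/tmp/batched.log", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        # One writev() call submits all three buffers at once.
        written = os.writev(fd, buffers)
        print(f"wrote {written} bytes in a single system call")
    finally:
        os.close(fd)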

Avoid Magic Numbers: They don't scale. Waking a thread up every 100ms or when 100 jobs are queued, or using fixed size buffers, doesn't adapt to changing circumstances.

Allocate memory buffers up front: Avoid extra copying and maintain predictable memory usage.
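
A sketch of the preallocation idea in Python: one buffer is allocated up front and reused for every read with recv_into, so memory usage stays predictable and no new buffer is allocated per receive. The buffer size is an illustrative value.

    import socket

    BUF_SIZE = 64 * 1024
    buf = bytearray(BUF_SIZE)      # allocated once, up front
    view = memoryview(buf)

    def read_chunk(sock: socket.socket) -> bytes:
        n = sock.recv_into(view)   # fills the existing buffer in place
        return bytes(view[:n])     # copy out only what was actually received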

Organically adapt your job-batching sizes to the granularity of the scheduler and the time it takes for your thread to wake up.

Adapt receive buffer sizes for sockets, while at the same time avoiding copying memory out of the kernel.
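
For the socket rule, a sketch of asking the kernel for a larger receive buffer via SO_RCVBUF; the 256 KiB value is illustrative, and the kernel may adjust it.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
    # The kernel may round or cap the requested size; read it back to check.
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))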

Always complete all work queued up for a thread before going back to sleep.

Only signal a worker thread to wake up when the number of jobs on its queue goes from 0 to > 0. Any other signal is redundant and a waste of time.
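
The last two rules fit together in one Python sketch: the worker drains everything on its queue before it sleeps, and the producer only notifies the worker when the queue goes from empty to non-empty. The job type is illustrative (any callable).

    import collections
    import threading

    jobs = collections.deque()
    wakeup = threading.Condition()

    def submit(job) -> None:
        with wakeup:
            was_empty = not jobs
            jobs.append(job)
            if was_empty:            # signal only on the 0 -> >0 transition
                wakeup.notify()

    def worker() -> None:
        while True:
            with wakeup:
                while not jobs:      # go back to sleep only when the queue is empty
                    wakeup.wait()
                batch = list(jobs)   # take every queued job
                jobs.clear()
            for job in batch:        # complete all queued work before sleeping again
                job()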

Scalability | The 7 Stages Of Scaling Web Apps


Good presentation of the stages a typical successful website goes through:

Stage 1 - The Beginning
Simple architecture
  • Firewall and load balancer
  • A pair of web servers
  • Database server
  • Internal storage.

Low complexity and overhead means quick development and lots of features, fast.
No redundancy, low operational cost.

Stage 2 - More of the same, just bigger.

Stage 3 - The Pain Begins
Publicity hits. Use a reverse proxy, cache static content, add load balancers and more databases, start re-coding.

Stage 4 - The Pain Intensifies
Caching with Memcached, writes overload and replication takes too long, start database partitioning, shared storage makes sense for content, significant re-architecting for the DB.

Stage 5 - This Really Hurts!
Rethink the entire application, partition on geography, user ID, etc., creating user clusters, and use a hashing scheme to locate which cluster a user belongs to.
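
A sketch of one such hashing scheme in Python, with hypothetical cluster names: the user ID is hashed and mapped onto one of N clusters, so every lookup for the same user lands on the same cluster.

    import hashlib

    CLUSTERS = ["cluster-us-east", "cluster-eu-west", "cluster-ap-south"]  # hypothetical

    def cluster_for(user_id: str) -> str:
        digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
        return CLUSTERS[int(digest, 16) % len(CLUSTERS)]

    print(cluster_for("user-42"))  # the same user always maps to the same cluster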

Stage 6 - Getting a little less painful
Scalable application and database architecture, acceptable performance, starting to add new features again, optimizing some code, still growing but manageable.

Stage 7 - Entering the unknown
Where are the remaining bottlenecks (power, space, bandwidth, CDN, firewall, load balancer, storage, people, process, database)? All eggs in one basket (a single data center, a single instance of the data).
