Invention for Systems and Methods for Server Surge Protection in a Multi-Core System

Invented by Roy Rajan, Saravanakumar Annamalaisami, Citrix Systems Inc

As the world becomes increasingly reliant on technology, the demand for reliable and secure server systems has grown exponentially. In a multi-core system, where multiple processors work together to perform complex tasks, the risk of server surge is a constant concern. This has led to the development of systems and methods for server surge protection, which have become a crucial component of any modern server infrastructure.

The market for systems and methods for server surge protection has grown rapidly in recent years, as businesses and organizations seek to protect their critical data and applications from power surges and other electrical disturbances. These systems and methods are designed to detect and respond to power surges in real-time, preventing damage to servers and other critical equipment.

One of the key drivers of the market for server surge protection is the increasing complexity of modern server systems. As servers become more powerful and capable, they require more advanced protection systems to ensure their continued operation. This has led to the development of sophisticated surge protection systems that can detect and respond to even the most complex electrical disturbances.

Another factor driving the market for server surge protection is the increasing importance of data security. With the rise of cyber threats and data breaches, businesses and organizations are more focused than ever on protecting their sensitive data. Server surge protection systems play a critical role in this effort, as they help to prevent data loss and downtime caused by power surges and other electrical disturbances.

The market for server surge protection systems is highly competitive, with a wide range of vendors offering a variety of products and services. Some of the key players in this market include APC by Schneider Electric, Eaton, Tripp Lite, and CyberPower. These companies offer a range of surge protection systems, including uninterruptible power supplies (UPS), surge protectors, and power distribution units (PDU).

In addition to traditional surge protection systems, there are also a number of newer technologies emerging in the market. For example, some vendors are now offering cloud-based surge protection services, which can provide real-time monitoring and response to electrical disturbances. Other vendors are developing advanced analytics and machine learning tools to help predict and prevent power surges before they occur.

Overall, the market for systems and methods for server surge protection is expected to continue growing in the coming years. As businesses and organizations become increasingly reliant on technology, the need for reliable and secure server systems will only continue to grow. With the right surge protection systems in place, businesses can ensure that their critical data and applications remain safe and secure, even in the face of unexpected electrical disturbances.

The Citrix Systems Inc invention works as follows

The present application is directed to systems and methods for providing connection surge protection through a multi-core intermediary device. Each packet processing engine of a multi-core device deployed between a plurality of clients and one or more servers determines an estimated total number of pending requests based on the value of a local counter of received requests, the total number of pending requests received by the other packet engines during a previous predetermined time interval, and the rate of change of that total. The packet processing engine then applies a surge-protection policy to received pending requests in response to the estimated total number of pending requests.

Background for Systems and Methods for Server Surge Protection in a Multi-Core System

In many cases, an influx of data requests from a large clientele can overload a server or servers, resulting in unacceptable response times and leading users to look for content elsewhere; this can be particularly harmful to electronic commerce. In some cases, the most resource-intensive part of responding to a data request is setting up the connection. For example, in addition to handshaking procedures such as the TCP three-way handshake, a server may need to allocate memory, transmission buffers, and packet control buffers, establish cryptographic keys, or perform other computationally intensive tasks. It may therefore be possible for an intermediary device to delay requests slightly, so that the server establishes fewer connections at once while still maintaining high throughput.

In a multi-core intermediary, surge protection is more complex. Each core may not have enough information about a surge of requests arriving at the other cores to apply a surge-protection policy without a large amount of core-to-core communication, which can result in long computation delays and significant bandwidth consumption.

The present application is aimed at systems and methods that provide connection surge protection for one or more servers using a multi-core intermediary device. Each packet processing engine of a multi-core device acting as an intermediary between a plurality of clients and one or more servers maintains a local counter of pending requests. In some embodiments, each packet processing engine, at predetermined intervals, adds its local counter value to a global counter representing the total number of pending requests to establish a connection with a server. In one embodiment, a packet processing engine subtracts its local counter value from the global counter value to calculate the total number of pending requests received by the other packet processing engines. The packet processing engine also determines the rate of change of that total. The packet processing engine then estimates the total number of pending requests by summing its local counter value, the total pending requests received by the other packet engines at the previous predetermined interval, and the rate of change of that total multiplied by the time elapsed since that interval. The packet processing engine applies a surge-protection policy to received pending requests in response to the estimated total number of pending requests.
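The estimate described above reduces to a single formula. The sketch below is illustrative only; the function and variable names are not taken from the patent.

```python
def estimate_total_pending(local_count, others_prev, others_rate, elapsed):
    """Estimated total pending requests as seen by one packet engine:
    its own local counter, plus the other engines' pending total at the
    last predetermined interval, plus that total's rate of change
    multiplied by the time elapsed since that interval."""
    return local_count + others_prev + others_rate * elapsed

# Example: 10 local pending requests, 40 pending on the other engines at
# the last interval, that total growing by 5 requests/second, 2 s ago.
print(estimate_total_pending(10, 40, 5.0, 2.0))  # → 60.0
```

The point of the formula is that a core only needs one shared-memory read per interval; between intervals it extrapolates linearly instead of communicating with the other cores.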

In one aspect, the present invention features a method of providing connection surge protection for a server using a plurality of packet processing engines executing on a corresponding plurality of cores of an intermediary device deployed between a plurality of clients and one or more servers. Each packet processing engine maintains a local counter of received requests to connect to a server at a memory location accessible to that engine. In response to expiration of a timer of a predetermined duration, each packet processing engine stores the value of its local counter to a global counter of received requests held in a shared memory location accessible to all of the packet processing engines. A first packet processing engine retrieves the value of the global counter from the shared memory location and determines the difference between the retrieved value and a previously retrieved value of the global counter. The first packet processing engine divides the difference by the predetermined duration to determine the rate of change of the global number of received requests. The first packet processing engine also receives a first request to connect to the server, and determines whether the global number of requests to connect to the server has reached a connection rate limit, based on the rate at which the global number is changing.
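The per-interval bookkeeping this method describes might be sketched as follows, assuming the global counter value has already been read from shared memory by the caller. All names here are illustrative, not from the specification.

```python
class EngineView:
    """One packet engine's view of the shared global request counter."""

    def __init__(self, interval):
        self.interval = interval   # predetermined timer duration, in seconds
        self.prev_global = 0       # global counter value at the last retrieval
        self.rate = 0.0            # rate of change of global received requests

    def on_timer(self, global_value):
        # The difference between the retrieved and previously retrieved
        # values, divided by the predetermined duration, gives the rate.
        self.rate = (global_value - self.prev_global) / self.interval
        self.prev_global = global_value

    def limit_reached(self, rate_limit):
        return self.rate >= rate_limit

view = EngineView(interval=1.0)
view.on_timer(100)   # first interval: 100 requests seen globally
view.on_timer(250)   # next interval: 150 new requests -> rate 150/s
print(view.limit_reached(120))  # → True
```

In a real multi-core appliance the `on_timer` read would come from a shared memory segment rather than a function argument, but the rate computation is the same.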

In one embodiment of the method, each packet processing engine maintains its local counter by incrementing it when a request to connect to a server is received and decrementing it once the request has been processed. In another embodiment, each packet processing engine replaces the value it previously stored in the global counter with its current local counter value. In a further embodiment, each packet processing engine maintains a record of an earlier value of the local counter, subtracts that previous value from the global counter, and adds the current value. In a further embodiment, the method includes initializing the global counter and having each packet processing engine add its local counter value to the global counter.
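One way to realize the counter maintenance in these embodiments is a delta update: each engine remembers the value it last published and replaces it in the global counter rather than re-adding its whole count. The sketch below is a minimal illustration, with a plain integer standing in for the shared-memory global counter; the class and method names are hypothetical.

```python
class LocalCounter:
    """Per-engine pending-request counter with delta publication."""

    def __init__(self):
        self.value = 0      # current pending requests on this engine
        self.published = 0  # value last stored into the global counter

    def on_request(self):
        self.value += 1     # increment when a connection request arrives

    def on_done(self):
        self.value -= 1     # decrement once the request has been processed

    def publish(self, global_counter):
        # Subtract the previously stored value and add the current one, so
        # the global counter always reflects this engine's latest count.
        global_counter += self.value - self.published
        self.published = self.value
        return global_counter

g = 0
c = LocalCounter()
for _ in range(3):
    c.on_request()
g = c.publish(g)   # g == 3
c.on_done()
g = c.publish(g)   # g == 2
```

The delta form keeps each timer-driven update to a single read-modify-write on the shared counter, independent of how many requests arrived during the interval.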

In another embodiment, the first packet processing engine subtracts its current local counter value from the retrieved global counter value to determine the total number of requests received by the other packet processing engines in the plurality. In a further embodiment, the first packet processing engine determines whether the global request rate limit has been reached by comparing the change in the global number of requests with the amount of time that has passed since the previous timer expiration.

In another embodiment, the first packet processing engine delays, in response to the determination and a surge-protection policy, processing of the first request to connect to the server for a predetermined period. In another embodiment, the first packet processing engine receives a plurality of requests to connect to the server and, in response to the determination, delays processing of each request so that the requests are processed at regular intervals within the predetermined period. In another embodiment, the predetermined period is multiplied by the number of packet processing engines in the plurality.
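The pacing behavior described above, spreading delayed requests at regular intervals within a predetermined period, might be sketched as follows (scheduling offsets only, no real I/O; the function name is illustrative):

```python
def pacing_offsets(num_requests, period):
    """Spread num_requests evenly across the predetermined period, so the
    server sees a steady trickle of connection setups instead of a surge."""
    if num_requests == 0:
        return []
    step = period / num_requests
    return [i * step for i in range(num_requests)]

# With 4 cores, the per-engine period can be the predetermined time
# multiplied by the number of packet engines, as the embodiment suggests.
print(pacing_offsets(4, period=0.5 * 4))  # → [0.0, 0.5, 1.0, 1.5]
```

Scaling the period by the core count means that when every engine paces its own requests independently, the aggregate arrival rate at the server still matches the intended limit.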

In another aspect, the present invention features a system that provides connection surge protection for a server through a multi-core device deployed between a plurality of clients and one or more servers. The intermediary device comprises a timer with a predetermined duration, a shared memory location, and a plurality of packet processing engines executing on a corresponding plurality of cores. Each packet processing engine includes (i) means for maintaining a local counter of received requests to connect to a server, and (ii) means for storing, in response to expiration of the timer, the value of the local counter to a global counter held in the shared memory location. A first packet processing engine of the plurality includes means for retrieving the value of the global counter from the shared memory location and determining the difference between the retrieved value and a previously retrieved value of the global counter. The first packet processing engine also includes means for dividing that difference by the predetermined duration to determine the rate of change of the global number of received requests, means for receiving a first request to connect to the server, and means for determining whether the global number of requests to connect to the server has reached a connection rate limit, based on the rate of change.

In one embodiment, each packet processing engine of the system includes means for incrementing its local counter upon receiving a request to connect to a server and decrementing it once the request has been processed. In another embodiment, each packet processing engine includes means for replacing the value it previously stored in the global counter with its current local counter value. In a further embodiment, each packet processing engine includes means for maintaining a record of an earlier value of the local counter, subtracting that value from the global counter, and adding the current value. In a further embodiment, the system includes means for initializing the value of the global counter, and each packet processing engine includes means for adding the value of its local counter to the global counter.

In another embodiment, the first packet processing engine of the system includes means for subtracting its current local counter value from the retrieved global counter value, which yields the total number of requests received by the other packet processing engines in the plurality. In a further embodiment, the first packet processing engine includes means for determining whether the number of connections has reached the connection rate limit based on the change in the global number of requests divided by the amount of time that has passed since the previous timer expiration.

In a further embodiment, the first packet processing engine of the system includes means for delaying, in response to the determination, processing of the first request to connect to the server for a predetermined period selected according to a surge-protection policy. In another embodiment, the first packet processing engine includes means for receiving a plurality of requests to connect to the server and means for delaying, in response to the determination, processing of each request so that the requests are processed at regular intervals within the predetermined period. In a further embodiment, the predetermined period is multiplied by the number of packet processing engines in the plurality.

The description and accompanying drawings show the details of different embodiments of the invention.

The following sections of the specification, with their respective contents, may be useful in reading the descriptions of the various embodiments.

Before discussing the details of particular embodiments of systems and methods for clients and appliances, it may be useful to describe the computing and network environments in which such embodiments could be deployed. Referring to FIG. 1A, an embodiment of a network environment is shown. The network environment comprises one or more clients 102a-102n (also generally referred to as client(s) 102) in communication with one or more servers 106a-106n (also generally referred to as remote machine(s) 106) via one or more networks 104, 104' (generally referred to as network 104). In some embodiments, a client 102 communicates with a server 106 via an appliance 200.

Although FIG. 1A shows a network 104 and a network 104' between the clients 102 and the servers 106, the clients 102 and the servers 106 may be on the same network 104. The networks 104 and 104' can be the same type of network or different types of networks. The network 104 and/or the network 104' can be a local-area network (LAN), such as a company intranet, a metropolitan area network (MAN), or a wide-area network (WAN), such as the Internet or the World Wide Web. In one embodiment, network 104' may be a private network and network 104 may be a public network. In some embodiments, network 104 may be a private network and network 104' a public network. In another embodiment, both networks 104 and 104' may be private networks. In some embodiments, clients 102 may be located at a branch office of a corporate enterprise and communicate via a WAN connection over the network 104 to the servers 106 located at a corporate data center.

The network 104 and/or 104' may be any type and/or form of network, and may include any of the following: a point-to-point network, a broadcast network, a wide-area network, a local-area network, a telecommunications network, a data communication network, or a computer network. The network 104 may comprise a wireless link, such as an infrared channel or satellite band, or a wireline network. The topology of the network 104 and/or 104' may be a bus, star, or ring topology. The network 104 and/or 104' may be of any such network or network topology known to those ordinarily skilled in the art and capable of supporting the operations described herein.

As shown in FIG. 1A, the appliance 200 is deployed between the networks 104 and 104'. In some embodiments, the appliance 200 may be located on network 104. For example, a branch office of a corporate enterprise may deploy an appliance 200 at the branch office. In other embodiments, the appliance 200 may be located on network 104'. For example, an appliance 200 may be located at a corporate data center. In yet another embodiment, a plurality of appliances 200 may be deployed on network 104. In some embodiments, a plurality of appliances 200 may be deployed on network 104'. In one embodiment, a first appliance 200 communicates with a second appliance 200. In other embodiments, the appliance 200 could be a part of any client 102 or server 106 on the same or a different network 104, 104' as the client 102. One or more appliances 200 may be located at any point in the network or network communications path between a client 102 and a server 106.

In some embodiments, the appliance 200 includes any of the network devices manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla., also referred to as Citrix NetScaler devices. In other embodiments, the appliance 200 includes any of the product embodiments referred to as WebAccelerator or BigIP manufactured by F5 Networks, Inc. of Seattle, Wash. In another embodiment, the appliance 200 includes any of the DX acceleration device platforms and/or the SSL VPN series of devices, such as the SA 700, SA 2000, SA 4000, and SA 6000 devices manufactured by Juniper Networks, Inc. of Sunnyvale, Calif. In yet another embodiment, the appliance 200 includes any application acceleration and/or security-related appliances and/or software manufactured or distributed by Cisco Systems, Inc. of San Jose, Calif., including the Cisco AVS Series Application Velocity Systems and the Cisco ACE Application Control Engine Module service software.

In one embodiment, multiple servers 106 may be logically grouped together. In these embodiments, the group of servers may be referred to as a server farm 38. In some of these embodiments, the servers 106 may be geographically dispersed. In some cases, a farm 38 may be administered as a single entity. In other embodiments, the server farm 38 comprises a plurality of server farms 38. In one embodiment, the server farm executes one or more applications on behalf of one or more clients 102.

The servers 106 within each farm 38 may be heterogeneous. One or more of the servers 106 can operate according to one type of operating system platform (e.g., WINDOWS, manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other servers 106 can operate according to another type of operating system platform (e.g., Unix or Linux). The servers 106 of each farm 38 do not need to be physically proximate to another server 106 in the same farm 38. Thus, the group of servers 106 logically grouped as a farm 38 may be interconnected using a wide-area network (WAN) connection or a metropolitan-area network (MAN) connection. For example, a farm 38 may include servers 106 physically located on different continents or in different regions of a continent, country, state, city, or campus. Data transmission speeds between servers 106 in the farm 38 can be increased if the servers 106 are connected using a local-area network (LAN) connection or some form of direct connection.

A server 106 may be referred to as a file server, application server, web server, proxy server, or gateway server. In some embodiments, a server 106 may have the capacity to function as either an application server or as a master application server. In one embodiment, a server 106 may include an Active Directory. The clients 102 may also be referred to as client nodes or endpoints. In some embodiments, a client 102 has the capacity to function both as a client node seeking access to applications on a server and as an application server providing other clients with access to hosted applications.


