Invention for Low latency connections in cloud computing environments

Invented by Deepak Suryanarayanan, Sheshadri Supreeth Koushik, Nicholas Patrick Wilt, Kalyanaraman Prasad, Amazon Technologies Inc

The market for low latency connections in cloud computing environments has been growing rapidly in recent years. As businesses increasingly rely on cloud-based services for their operations, the need for fast and reliable connections has become paramount. Low latency connections are crucial for ensuring smooth and seamless access to cloud resources, enabling real-time data processing, and supporting mission-critical applications.

Latency refers to the time it takes for data to travel from one point to another in a network; low latency means that delay is minimal. In cloud computing environments, low latency connections are essential for minimizing delays and ensuring quick response times. This is particularly important in industries such as finance, gaming, healthcare, and e-commerce, where even a fraction of a second can make a significant difference.

One of the main drivers behind the increasing demand for low latency connections is the rise of real-time applications. With the advent of technologies like Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML), businesses are generating massive amounts of data that need to be processed and analyzed in real-time. Low latency connections enable these applications to function effectively by reducing the time it takes for data to be transmitted and processed.

Another factor contributing to the growth of the low latency market is the increasing adoption of cloud-based services. Cloud computing offers numerous benefits, including scalability, cost-efficiency, and flexibility. However, the performance of cloud services heavily relies on the quality of the network connection. Low latency connections ensure that businesses can fully leverage the advantages of cloud computing without experiencing delays or disruptions.

Furthermore, the proliferation of edge computing has also fueled the demand for low latency connections. Edge computing involves processing data closer to the source, at the network edge, rather than relying solely on centralized cloud servers. This approach reduces latency by minimizing the distance data needs to travel. As edge computing becomes more prevalent, the need for low latency connections will continue to grow.

In response to this increasing demand, telecommunication companies, internet service providers, and cloud service providers are investing heavily in infrastructure upgrades to support low latency connections. They are deploying fiber-optic networks, improving data center capabilities, and leveraging technologies like 5G to ensure fast and reliable connections. Additionally, advancements in networking protocols and hardware are also contributing to the availability of low latency solutions.

The market for low latency connections in cloud computing environments is highly competitive, with numerous players vying for market share. Companies offering dedicated low latency solutions, such as direct connections to cloud service providers or specialized networking equipment, are gaining traction. Additionally, cloud service providers themselves are offering low latency options as part of their service offerings, further driving the market growth.

In conclusion, the market for low latency connections in cloud computing environments is experiencing significant growth due to the increasing demand for real-time applications, the adoption of cloud-based services, and the rise of edge computing. As businesses continue to rely on cloud resources for their operations, the need for fast and reliable connections will only intensify. This presents a lucrative opportunity for telecommunication companies, internet service providers, and cloud service providers to invest in infrastructure upgrades and offer innovative solutions to meet the growing demand for low latency connections.

The Amazon Technologies Inc. invention works as follows

A computing system that provides virtual computing services can generate and manage remote computing sessions between client devices and virtual desktop instances hosted on the service provider's network. The system can implement a virtual private cloud through which geographically distributed workspaces services are provided. In response to a request from a client, the service configures a virtual computing resource instance as a virtual desktop instance for the session and establishes a secure, reliable, low-latency communication channel between that resource instance and a gateway component located at a point of presence (POP) near the client. This communication channel is established over a virtual private network. The POP location can be in a different availability zone than the one that hosts the resource instance. Client devices connect to the gateway component over a public network.

Background for Low latency connections in cloud computing environments

Many companies and other organizations operate computer networks that interconnect numerous computing systems to support their operations. These computing systems may be co-located, e.g., as part of a local network, or located in multiple distinct geographical locations, e.g., connected via one or more public or private intermediate networks. Data centers housing large numbers of interconnected computing systems have become commonplace. These include private data centers operated by a single organization as well as public data centers operated by businesses that provide computing resources to customers or clients. Some operators of public data centers provide network access, power, and secure installation facilities for hardware owned by various clients, while other operators provide "full service" facilities that also include the hardware resources made available for use by their clients. As the size and scope of data centers have increased, the tasks of provisioning, managing, and administering the physical computing resources have become more complex.

Virtualization technologies for commodity hardware have provided many benefits for managing large-scale computing for clients with diverse needs, allowing computing resources to be shared efficiently and securely by multiple clients. For example, virtualization technologies may allow a single physical computing device to be shared by multiple users, with each user provided one or more virtual machines hosted on that physical machine. Each such virtual machine is a software simulation of a distinct logical computing system: it gives users the illusion that they are the sole operators and administrators of a given hardware resource, while also providing application isolation and security among the various virtual machines. Virtualization technologies can also provide virtual resources that span multiple physical resources, for example a single virtual machine with multiple virtual processors that spans several distinct physical computing systems. Virtualization allows a single physical computing device to create, maintain, or delete virtual machines dynamically. In turn, users can request computer resources from a data center and be provided with varying numbers of virtual machine resources on an "as needed" basis. In some systems, virtual desktops may be implemented using these virtualized computing resources.

The present invention describes various embodiments of systems and methods for providing low-latency connections (or communication channels) to workspaces, e.g., virtual desktop instances, in a cloud computing environment. A computing system that provides virtual computing services can generate and manage remote computing sessions between client computing devices and virtual desktop instances hosted on the service provider's network. The system can implement a workspaces service through which end users receive an interactive video stream on their computing devices. The performance of the workspaces service (e.g., the latency or quality of the delivered video stream) can depend heavily on the network connection over which the video stream is delivered to end users. The delivery of an interactive video stream is time-sensitive in the sense that adverse network effects can degrade the quality of the connection as experienced by the user. In some embodiments, the systems and methods described herein may minimize the portion of the path that crosses a public network, such as the public Internet, a network of uncertain quality over which the workspaces service has little control, thereby limiting the exposure of the interactive video stream to such adverse effects.

In some embodiments, the systems described herein may implement a virtual private cloud (VPC) for the workspaces service that extends out to security gateway or access gateway components (referred to herein as "gateway components") at multiple, geographically dispersed point of presence (POP) locations. These systems can include gateway components hosted on nodes that are physically close to client devices while still operating within the VPC (and under the control) of the workspaces service. This approach can provide a high-quality connection and end-user experience by carrying the interactive video stream of a virtual desktop over a reliable, secure, low-latency, high-bandwidth, low-loss, low-jitter communication channel as close to the client device as possible before switching over to a less reliable, higher-latency public network. Note that the terms "connection" and "communication channel" are used interchangeably in the descriptions that follow. In some embodiments, however, the connection or communication channel between a virtual desktop instance and a client through a gateway component may or may not implement (or rely upon) a high-level handshaking protocol, and it is not limited to any particular type of networking component or to any specific virtual or physical connections between networking components.

In some embodiments, the networked environment of the service provider may include multiple virtual desktop instances, each hosted on one of several computing nodes that collectively implement a virtual desktop (workspaces) service. These computing nodes may be located in data centers spread across multiple availability zones, e.g., in different cities, towns, or countries. In some embodiments, the networked environment may also include multiple gateway components, each hosted on a computing node at a POP location within one of the availability zones. As described in greater detail herein, these gateway components may interact with one another within a virtual private cloud of the virtual desktop service and may communicate over a virtual private network. In some embodiments, the gateway components may be implemented using virtualized resource instances hosted on computing nodes at the POP locations.
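As a rough illustration of the topology just described, the following sketch models availability zones, POP-hosted gateway components, and virtual desktop instances as simple Python data structures. All class names, field names, and values here are hypothetical, chosen only to make the relationships concrete; they are not part of the patent or of any actual service API.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical data model of the topology described above: desktop
    # instances live in data-center availability zones, while gateway
    # components run at POP locations that may sit in other zones but
    # still belong to the workspaces service's virtual private cloud.

    @dataclass
    class GatewayComponent:
        gateway_id: str
        pop_location: str          # e.g., city or region of the POP
        availability_zone: str     # zone that contains the POP

    @dataclass
    class VirtualDesktopInstance:
        instance_id: str
        availability_zone: str     # zone of the data center hosting it
        private_ip: str            # address on the service's private network

    @dataclass
    class WorkspacesVPC:
        """Virtual private cloud of the workspaces service, extended to POPs."""
        desktops: List[VirtualDesktopInstance] = field(default_factory=list)
        gateways: List[GatewayComponent] = field(default_factory=list)

    vpc = WorkspacesVPC(
        desktops=[VirtualDesktopInstance("wd-1", "zone-a", "10.0.1.5")],
        gateways=[GatewayComponent("gw-1", "pop-frankfurt", "zone-b"),
                  GatewayComponent("gw-2", "pop-paris", "zone-c")],
    )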

In some embodiments, in response to a client request for a virtual desktop session, the service configures a virtual computing resource instance as a virtual desktop instance to implement the session. This virtual computing resource instance may be one of several virtual resource instances operating (or participating) in a virtual private network of the client (e.g., a client on whose behalf the request was received from a client device). One or more other virtual computing resource instances may be configured to implement a management component of the service.
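A minimal sketch of that provisioning step, under the assumption that the client's virtual private network already contains some unconfigured capacity, might look like the following. The function name, dictionary fields, and instance identifiers are all hypothetical, not the patent's or any provider's actual interface.

    # Hypothetical sketch: in response to a client request, pick an unused
    # resource instance in the client's virtual private network and
    # configure it as the virtual desktop instance for the session.

    def configure_desktop_instance(client_vpn_instances, request):
        """client_vpn_instances: dicts describing instances that already
        participate in the client's virtual private network."""
        for instance in client_vpn_instances:
            if instance["role"] is None:          # unconfigured capacity
                instance["role"] = "virtual-desktop"
                instance["session_id"] = request["session_id"]
                return instance
        raise RuntimeError("no free resource instance in the client's VPN")

    client_vpn_instances = [
        {"id": "i-100", "role": "management"},    # management component
        {"id": "i-101", "role": None},            # available capacity
    ]
    desktop = configure_desktop_instance(client_vpn_instances,
                                         {"session_id": "sess-42"})
    print(desktop["id"])                          # -> i-101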

In response to a client request for a virtual desktop session, the service can also establish a secure, low-latency connection (e.g., over a virtual private network) between the virtual desktop instance and a gateway component located at a POP near the client device, over which a two-way interactive video stream is communicated during the session. The interactive video stream can include, for example, pixel data sent from the virtual desktop instance to the client and inputs sent from the client to the virtual desktop instance that represent user interaction with the virtual desktop. In some embodiments, the interactive video stream can also (or instead) include commands communicated from the virtual desktop instance to the client, representing instructions for the client to generate or render the pixels to be displayed. The gateway component, as described herein, may be one of several gateway components located at different POP locations, and it may be chosen for the session based on its proximity to the client; for example, it may be hosted at a POP in the same country, city, or region as the client device that made the request. The availability zone in which the gateway component is located may differ from the availability zone of the resource instance used in the session. In some embodiments the gateway component is selected automatically by a management component of the service (e.g., one operating within the VPC), while in others it may be selected by the client. Once the service has established the connections between the virtual computing resource instance and the gateway component, and between the gateway component and the client device, it can initiate a virtual desktop session on the virtual desktop instance and begin communicating the interactive video stream.
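To make the proximity-based selection concrete, here is a small sketch that picks the gateway nearest to the client and notes whether the chosen gateway sits in a different availability zone than the desktop instance. The coordinates and the straight-line distance metric are illustrative assumptions; a real service could instead use network measurements or an operator-defined mapping.

    import math

    # Hypothetical POP-hosted gateways; identifiers and coordinates are
    # made up for illustration only.
    GATEWAYS = [
        {"id": "gw-frankfurt", "lat": 50.1, "lon": 8.7,  "zone": "zone-b"},
        {"id": "gw-paris",     "lat": 48.9, "lon": 2.4,  "zone": "zone-c"},
        {"id": "gw-dublin",    "lat": 53.3, "lon": -6.3, "zone": "zone-d"},
    ]

    def nearest_gateway(client_lat, client_lon, gateways=GATEWAYS):
        def dist(gw):
            # Straight-line approximation; good enough for a sketch.
            return math.hypot(gw["lat"] - client_lat, gw["lon"] - client_lon)
        return min(gateways, key=dist)

    def establish_session(desktop_zone, client_lat, client_lon):
        gw = nearest_gateway(client_lat, client_lon)
        # The gateway's zone may differ from the zone hosting the desktop
        # instance; the desktop-to-gateway leg stays on the private network,
        # and only the gateway-to-client leg crosses the public Internet.
        return {"gateway": gw["id"],
                "cross_zone": gw["zone"] != desktop_zone}

    print(establish_session("zone-a", 48.8, 2.3))   # client near Paris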

The systems and methods described herein may be implemented on or by one or more computing systems within a network environment, according to different embodiments. An example computer system on which embodiments of the techniques described herein for providing low-latency connections to workspaces in a cloud computing environment may be implemented is illustrated in FIG. 11. Embodiments of the various systems and methods for implementing these techniques are generally described in the context of a service provider that provides virtualized resources, such as virtualized computing resources and virtualized storage resources, implemented on a provider network. FIGS. 1-7 and 11, along with their accompanying descriptions, illustrate and describe example environments in which the embodiments described herein may be implemented; they are not intended to be limiting. In some embodiments, at least some of the resources provided to clients of the service provider via the provider network may be virtualized computing resources implemented on hardware shared with other clients or on hardware dedicated to the particular client. Each virtualized computing resource may be referred to as a resource instance. Resource instances may, for example, be rented or leased to clients of the service provider. Clients of the service provider may, for instance, access services on the provider network via APIs in order to obtain and configure resource instances, or to create and manage virtual networks that include resource instances.

In certain embodiments, resource instances may be implemented using hardware virtualization technologies that allow multiple operating systems to run simultaneously on a host computer, i.e., as virtual machines (VMs) on the host. A hypervisor, or virtual machine monitor (VMM), may run on the host to provide the VMs with a virtual platform and to monitor their execution. The VMM on a host may be aware of the private IP addresses of each VM on that host. An example system that uses this type of hardware virtualization is shown in FIG. 4 and described in more detail below.

In various embodiments, the systems described herein for providing virtual computing services may be deployed across multiple "availability zones," each of which may have its own physically separate, independent infrastructure on which a collection of computing nodes is implemented. In some embodiments, each availability zone may reside in a different geographic location or region, while in other embodiments multiple availability zones may reside within a single geographic area or region.

Example Provider Network Environments

This section describes example provider network environments in which the embodiments described herein may be implemented. These example provider networks are not intended to be limiting. In various embodiments, these provider network environments may allow a service provider to host virtualized resource instances on behalf of a customer that can be accessed by end users. For example, end users associated with the customer on whose behalf the virtualized resource instances are hosted (e.g., members of the same organization or enterprise) may be able to access the virtualized resource instances using client applications on client devices. In some embodiments, the virtualized resource instances may be configured as virtual desktop instances.

FIG. 1 illustrates an example provider network environment, according to some embodiments. A provider network 100 may provide resource virtualization services 110 to clients, enabling clients to obtain virtualized resources, such as computation and storage resources, implemented on devices within the provider network. The resource instances 112 may be associated with private IP addresses 116, which are the internal network addresses of the resource instances 112 on the provider network 100. In certain embodiments, the provider network 100 may also provide clients with public IP addresses 114 (e.g., Internet Protocol version 4 (IPv4) and/or Internet Protocol version 6 (IPv6) addresses) that clients may obtain from the provider.

Conventionally, the provider network 100, via the virtualization services 110, may allow a client of the service provider (e.g., a client that operates client network 150A, 150B, or 150C, each of which may include one or more client devices 152) to dynamically associate at least some of the public IP addresses 114 assigned or allocated to the client with particular resource instances 112 assigned to the client. The provider network 100 may also allow the client to remap a public IP address 114, previously mapped to one virtualized computing resource instance 112 allocated to the client, to another virtualized computing resource instance 112 that is also allocated to the client. Clients of the service provider, such as the operator of client network 150A, can use the virtualized computing resource instances 112 and the public IP addresses 114 provided by the service provider to implement client-specific applications and present those applications on an intermediate network 140, such as the Internet. Other network entities on the intermediate network 140 may then generate traffic to a destination public IP address 114 published by the client network 150A; the traffic is routed to the service provider's data center, where it is routed via a network substrate to the private IP address 116 of the virtualized computing resource instance 112 currently mapped to that destination public IP address 114. Similarly, response traffic from the virtualized computing resource instance 112 may be routed via the network substrate back onto the intermediate network 140 to the source entity.

Private IP addresses, as used herein, refer to the internal network addresses of resource instances in a provider network. Private IP addresses are only routable within the provider network; traffic originating outside the provider network is not routed directly to private IP addresses, but instead uses public IP addresses that are mapped to the resource instances. The provider network may include networking devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to private IP addresses and vice versa.

Public IP addresses, as used herein, are Internet-routable network addresses assigned to resource instances either by the service provider or by the client. Traffic routed to a public IP address is translated, for example via 1:1 network address translation (NAT), and forwarded to the respective private IP address of the resource instance.
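As a toy illustration of that 1:1 mapping, the sketch below keeps a table from public IP addresses to private IP addresses and rewrites the destination of inbound packets accordingly. It is only a conceptual picture, not a description of the provider's actual NAT implementation; the addresses come from documentation-reserved ranges.

    # Toy 1:1 NAT table: each public IP maps to exactly one private IP.
    nat_table = {
        "203.0.113.10": "10.0.1.5",
        "203.0.113.11": "10.0.1.6",
    }

    def translate_inbound(packet):
        """Rewrite the destination of a packet arriving from the public
        network to the private IP of the mapped resource instance."""
        private_ip = nat_table.get(packet["dst"])
        if private_ip is None:
            raise KeyError(f"no resource instance mapped to {packet['dst']}")
        return {**packet, "dst": private_ip}

    print(translate_inbound({"src": "198.51.100.7", "dst": "203.0.113.10"}))
    # -> {'src': '198.51.100.7', 'dst': '10.0.1.5'}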

Some public IP addresses may be assigned to resource instances by the provider network infrastructure; these may be referred to as standard public IP addresses, or simply standard IP addresses. In some embodiments, the mapping of a standard IP address to the private IP address of a resource instance is the default configuration for all resource instances.

At least some public IP addresses may be allocated to or obtained by clients of the provider network 100; a client may then assign its allocated public IP addresses to particular resource instances allocated to the client. These public IP addresses may be referred to as client public IP addresses, or simply client IP addresses. Instead of being assigned to resource instances by the provider network 100, as in the case of standard IP addresses, client IP addresses are assigned to resource instances by the clients themselves, for example via an API provided by the service provider. Unlike standard IP addresses, client IP addresses are allocated to client accounts and can be remapped by the respective clients to other resource instances as necessary or desired. In some embodiments, a client IP address is associated with the client's account rather than with any particular resource instance, and the client retains control of that IP address until it chooses to release it. Client IP addresses allow a client to mask failures of resource instances or availability zones by remapping the client's public IP addresses to any resource instance associated with the client's account. Client IP addresses enable a client, for example, to engineer around problems with its resource instances or software by remapping client IP addresses to replacement resource instances.
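The failover behavior just described can be pictured with a short sketch: because a client IP address is tied to the account rather than to an instance, remapping it to a healthy instance is a single table update. The function, field names, and instance identifiers below are hypothetical, used only to illustrate the idea.

    # Hypothetical sketch: a client IP address belongs to the account and
    # can be remapped to any resource instance in that account, e.g. to
    # mask the failure of an instance or of an availability zone.

    account = {
        "client_ips": {"198.51.100.20": "i-primary"},   # public IP -> instance
        "instances": {
            "i-primary": {"zone": "zone-a", "healthy": False},
            "i-standby": {"zone": "zone-b", "healthy": True},
        },
    }

    def remap_on_failure(account, client_ip):
        current = account["client_ips"][client_ip]
        if account["instances"][current]["healthy"]:
            return current                               # nothing to do
        # Pick any healthy instance associated with the same account.
        for instance_id, info in account["instances"].items():
            if info["healthy"]:
                account["client_ips"][client_ip] = instance_id
                return instance_id
        raise RuntimeError("no healthy instance available for remapping")

    print(remap_on_failure(account, "198.51.100.20"))    # -> i-standby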

Note that, in some embodiments, the resource instances 112 made available to clients (e.g., client devices 152) via the virtualization services 110 may include multiple network interfaces. For example, at least some of the virtualized computing resource instances (including those configured as virtual desktop instances) may include one network interface for communication with components of the client network 150 and another network interface for communication with resources or other network entities external to the provider network (not shown).

FIG. 2 is a block diagram of another example provider network, one that provides a storage virtualization service and a hardware virtualization service to clients, according to some embodiments. In this example, hardware virtualization service 220 provides multiple computation resources 224 (e.g., VMs) to clients. The computation resources 224 may, for example, be rented or leased to clients of the provider network 200 (e.g., to a client that implements client network 250). Each computation resource 224 may be provided with one or more private IP addresses. The provider network 200 may be configured to route packets from the private IP addresses of the computation resources 224 to public Internet destinations, and from public Internet sources to the computation resources 224.

Provider network 200 may allow a client network 250 (which, for example, is coupled to an intermediate network via a local network) to implement virtual computing systems 292 via the hardware virtualization service 220, which is coupled to both the intermediate network and the provider network 200. In some embodiments, the hardware virtualization service 220 may provide an API 202 (for example, a web services interface) via which the client network 250 can access functionality provided by the hardware virtualization service 220, for example via a console 294. In at least some embodiments, each virtual computing system 292 at the client network 250 may correspond to a computation resource 224 that is leased, rented, or otherwise provided to the client network 250.
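In practice, access to a web-services interface like the API 202 described above usually amounts to an authenticated HTTP request. The sketch below shows the general shape of such a call using only the Python standard library; the endpoint URL, action name, parameters, and credentials placeholder are all assumptions for illustration and do not describe the actual interface of any provider.

    import json
    import urllib.request

    # Placeholder endpoint; a real hardware virtualization service would
    # define its own URL scheme, parameters, and authentication.
    ENDPOINT = "https://hardware-virtualization.example.com/api"

    def launch_computation_resource(instance_type, count=1):
        payload = json.dumps({
            "Action": "LaunchComputationResource",   # hypothetical action name
            "InstanceType": instance_type,
            "Count": count,
        }).encode("utf-8")
        request = urllib.request.Request(
            ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer <credentials>"},  # placeholder
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    # Example (would require a real, reachable endpoint):
    # result = launch_computation_resource("standard.medium", count=2)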


