Invented by Satish Chandra Jha, S M Iftekharul Alam, Ned M. Smith, Intel Corp

The market for Distributed Machine Learning in an Information Centric Network

In recent years, the field of machine learning has seen significant advancements, with applications ranging from natural language processing to image recognition. However, as the amount of data being generated continues to grow exponentially, traditional centralized machine learning approaches are facing challenges in terms of scalability and efficiency. This has led to the emergence of distributed machine learning, a paradigm that leverages the power of multiple machines to process and analyze data in a decentralized manner.

One particular area where distributed machine learning is gaining traction is in information-centric networks (ICNs). ICNs are a new type of network architecture that focuses on content rather than the location of data. In an ICN, data is stored and retrieved based on its content, making it an ideal environment for distributed machine learning algorithms to operate.

The market for distributed machine learning in an ICN is expected to grow significantly in the coming years. One of the key drivers of this growth is the increasing demand for real-time analytics and decision-making. With the ability to process and analyze data in parallel across multiple machines, distributed machine learning algorithms can provide faster and more accurate insights, enabling businesses to make more informed decisions in real-time.

Another factor contributing to the market growth is the need for privacy and security in data processing. In traditional centralized machine learning approaches, data is often sent to a central server for processing, raising concerns about data privacy and security. In a distributed machine learning system, data remains on the edge devices or local servers, reducing the risk of data breaches and ensuring better privacy protection.

Furthermore, the market for distributed machine learning in an ICN is also driven by the increasing availability of edge computing resources. Edge computing refers to the practice of processing data closer to the source, reducing latency and bandwidth requirements. With the proliferation of edge devices such as smartphones, IoT devices, and autonomous vehicles, there is a growing need for distributed machine learning algorithms that can leverage these edge resources for faster and more efficient data processing.

Several companies and research institutions are already exploring the potential of distributed machine learning in an ICN. For example, Cisco Systems has developed a distributed machine learning framework called Distributed Machine Learning for ICN (DML-ICN), which enables efficient and scalable machine learning in ICNs. Similarly, academic institutions such as Stanford University and the University of California, Los Angeles, are conducting research on distributed machine learning algorithms specifically designed for ICNs.

In conclusion, the market for distributed machine learning in an information-centric network is poised for significant growth in the coming years. The ability to process and analyze data in a decentralized manner, coupled with the increasing demand for real-time analytics and privacy protection, makes distributed machine learning a promising solution for businesses and organizations. As more companies and research institutions invest in this field, we can expect to see innovative applications and advancements that will further drive the market growth.

The Intel Corp. invention works as follows

Herein, “Systems and Techniques for Distributed Machine Learning (DML) in an Information Centric Network (ICN)” are described. To reduce overall network overhead, the many message exchanges used in a DML exercise can be made more efficient. Local coordinating nodes can manage the devices in a distributed machine-learning exercise, improving network efficiency. Modifying a DML training round to accommodate the available devices (e.g., by selecting devices using a group quality-of-service metric, or by extending round execution parameters to admit additional devices) may also improve DML performance.

Background for Distributed Machine Learning in an Information Centric Network

The proliferation of mobile edge devices (e.g., smartphones, Internet of Things (IoT) devices, and so on) is a major factor in the spread of data across large geographic areas. These nodes, equipped with computing, communication, and storage capabilities, generate data that is dispersed over large geographic areas. Typical centralized machine-learning techniques require data to be uploaded to a central cloud. Due to the high communication overhead, it is difficult to move large volumes of geographically dispersed data from the edge into the cloud, and users may also have privacy concerns about sharing data over such distances. Distributed Machine Learning (DML) overcomes these problems by training a model on the device using local data. The model parameters or gradients are then shared with centralized aggregators. The user’s data stays with the user, and the transmissions used to update the model carry only the changes or adjustments made to it. Updates can be aggregated across many submissions, allowing user devices to download the updated model. Federated Learning is an example of a DML implementation.

ICN refers to a networking paradigm in which information is named and requested from the network itself rather than from specific hosts (e.g., machines that provide the data). A device requests content by name from the network. Content requests are often called interests and are sent via interest packets. The interest packet is recorded as it travels through network devices, such as routers. A device holding content that matches the name in the interest packet may respond with a data packet. The data packet is typically routed back to the requester by following the traces the interest left on the network devices.
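The interest/data round trip described above can be sketched in a few lines. The router class, interface labels, and content names here are hypothetical illustrations of the general ICN mechanism, not part of the patent:

```python
class IcnRouter:
    """Minimal ICN router sketch: interests leave breadcrumbs (PIT entries),
    and data packets follow them back toward the requester."""

    def __init__(self, name):
        self.name = name
        self.pit = {}            # content name -> interface the interest arrived on
        self.content_store = {}  # content name -> data (opportunistic local cache)

    def receive_interest(self, content_name, from_iface):
        if content_name in self.content_store:
            # Content is cached locally: answer directly.
            return self.content_store[content_name]
        # Record the breadcrumb so a later data packet can be routed back.
        self.pit[content_name] = from_iface
        return None

    def receive_data(self, content_name, data):
        # Follow the PIT trace back toward the requester, then clear the entry.
        back_iface = self.pit.pop(content_name, None)
        self.content_store[content_name] = data  # cache for future interests
        return back_iface

router = IcnRouter("r1")
assert router.receive_interest("/dml/model/v1", from_iface="consumer") is None
assert router.receive_data("/dml/model/v1", b"params") == "consumer"
# A repeat interest for the same name is now satisfied from the cache.
assert router.receive_interest("/dml/model/v1", from_iface="consumer") == b"params"
```

Note that a conventional PIT entry is consumed by the first matching data packet; the patent's modifications to this behavior are discussed later.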

DML involves substantial message exchanges between a global aggregator (GA) and devices. These messages include, among others, discovering, recruiting, or selecting device participants; downloading the model or parameters to be trained with local data; and uploading changes to the model resulting from training. Message interactions between nodes generally involve sending messages (e.g., data, requests, responses, parameters, etc.) across multiple network elements. In existing connection-based networks, such as transmission control protocol over internet protocol (TCP/IP), each of these messages is exchanged in a separate packet, which can burden the network with a large amount of communication. Simply replacing TCP/IP with an ICN layer may not reduce network overhead compared to a connection-based system; ICN must be optimized to make it more efficient.

Assume many participants are geographically near each other (e.g., in the same area), yet unicast messages (e.g., to deliver the training parameters, model, etc.) are generally sent to each. These participants, reached via multi-hop routes, probably share data as well as a common network path. This presents an opportunity to optimize the networking layer underlying the DML workload. The optimization begins with an efficient message-forwarding layer and message-content optimization to avoid redundant messages, or redundant content in messages, over the shared path. The optimization can also include techniques that ensure training takes place in a secure environment, guaranteeing the privacy and security of training data.

These optimization goals can be achieved by sending ICN messages to the GA that enable it to find or select edge participants capable of meeting model-training requirements such as data quality, computing capabilities, security hardware and software, etc. The ICN pending interest table (PIT), traditionally used to track messages, can be modified by adding entries for these messages, preventing redundant content from being sent between the GA and edge members. A local trusted node, such as a roadside unit (RSU) that may be part of a vehicular communications network, may also act as a Local Coordinator (LC) to help the GA with DML operations. LCs can, for example, use a geo-area-aware selection technique, LC-assisted discovery of edge participants, or a procedure to address mobility if a participant moves between LCs during a training round. A group quality-of-service (QoS) participant-selection technique can be used to guarantee training-update reporting within a specified latency. All of this can be complemented with a secure procedure for downloading a model, an algorithm, software, or training data from a trusted entity to ensure security or privacy. The systems and techniques described here enable DML on a multi-hop ICN network while avoiding redundant message transmissions. Additional details and examples follow.

FIG. 1 shows an example of a distributed machine-learning system and ICN, according to one embodiment. As shown, a GA 105 can be connected to one or several participant devices (e.g., downstream nodes, represented here as device 125 and device 135) using ICN routers (such as ICN router 110 and ICN router 130). This forms an ICN network.

The GA 105 is responsible for directing DML activities in relation to an artificial intelligence (AI) model, referred to as the “model” herein. The GA 105 also aggregates model updates from participant devices 120, 125, and 130 to create an updated model by the end of a DML training round.

An LC 115 can be used by the GA 105 to coordinate participant devices near the LC 115. As shown here, the LC 115 acts as an RSU serving vehicles that act as participant devices. The LC 115 may, for example, be involved in selecting participant devices during a DML session, or in routing ICN messages between the GA 105 and the participant devices 120 and 125. The LC 115 may, for example, aggregate model updates from the participant devices 120 and 125 before sending them to the GA 105. The LC 115 can also act as a store of model-training algorithms from which participant devices 120 and 125 can obtain a DML model-training algorithm. The LC 115 thus serves as a central coordination point for the participant devices 120 and 125 to reduce network congestion.
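As a rough illustration of the LC's aggregation role, a weighted average in the style of federated averaging might be used before forwarding a single combined update to the GA. The function and sample-count weighting below are an assumption for illustration, not the patent's specified algorithm:

```python
def aggregate_updates(updates):
    """Weighted average of model updates, weighted by local sample count.

    `updates` is a list of (weights_vector, num_samples) pairs, one per
    participant device; returns the single vector the LC would forward
    upstream to the GA in place of the individual updates.
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Two participants: device 120 contributes 3 samples, device 125 contributes 1.
agg = aggregate_updates([([1.0, 2.0], 3), ([5.0, 6.0], 1)])
assert agg == [2.0, 3.0]
```

Forwarding one aggregated vector instead of one per device is what lets the LC reduce traffic on the shared path toward the GA.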

The following figures illustrate additional aspects of the GA 105 and LC 115 roles, as well as messages exchanged between them and participant devices (e.g., participant devices 120, 125, and 130). These examples are based on FIG. 1, with the focus on the operation of the ICN router 110 in DML. This focus helps to clarify the modifications made to ICN to implement DML efficiently in an ICN network. The ICN router includes upstream and downstream network interfaces, machine-readable media for a local cache and a PIT, and processing circuitry that implements the operations performed by the ICN router.

The ICN router 110 is configured, for example by hardwired circuitry or by executing instructions, to receive a participant solicitation interest packet. This interest packet is generated by the GA 105 and received at the ICN router's upstream interface. The solicitation packet contains a name that identifies it as a solicitation for DML participants, so the ICN router 110 or LC 115 can tell the purpose of the interest simply by looking at the name. The ICN router 110 can, for example, consult its forwarding information base (FIB) to determine the best downstream network interface to use to send the interest to potential participant devices, such as participant device 120 or 125.

In one example, the name of the interest packet contains a DML round identifier (ID) for the DML round. The name may thus identify the DML round to which the packet pertains, and this information can be used by caches or tables that store interest and data packets. The participant solicitation interest packet name may, for example, also include a DML task ID, which again may be useful for caching and record-keeping on the ICN router 110.
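A hierarchical name carrying the task and round IDs might look like the following sketch. The name components, component order, and helper functions are hypothetical, chosen only to show how such names let routers group related packets by prefix:

```python
def make_solicitation_name(task_id, round_id, aggregator_id):
    # Hypothetical hierarchical name: most-significant components first,
    # so routers can relate packets by comparing name prefixes.
    return f"/dml/solicit/task-{task_id}/round-{round_id}/agg-{aggregator_id}"

def same_task(name_a, name_b, depth=4):
    # Names sharing the first `depth` components belong to the same DML task.
    return name_a.split("/")[:depth] == name_b.split("/")[:depth]

n1 = make_solicitation_name(7, 3, "LC115")
n2 = make_solicitation_name(7, 4, "LC115")
assert n1 == "/dml/solicit/task-7/round-3/agg-LC115"
assert same_task(n1, n2)  # same task, different rounds
```

Placing the task ID before the round ID means a single prefix match can select every packet of a task across rounds, which is useful for the caching and record-keeping the text describes.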

In an example, an aggregator’s ID is included in the name of the interest packet, for instance that of LC 115. The aggregator ID can be used to create filters based, for instance, on which aggregators have responded with updated models during a DML round. The content of the name can be altered to suit different needs, such as looking up forwarding routes in the FIB or making local caching decisions. ICN routers such as router 110 often have simple mechanisms for separating differently named data. These mechanisms assume that the most important information is at the beginning of a name, while less important information is located at the end. Name prefixes, therefore, are the most important factor in determining whether data is related when the data, for instance, is not named identically.

In one example, the participant solicitation interest packet contains a set of parameters for participating in the DML round. These parameters can carry, for example, the DML round ID or the aggregator ID if they are not included in the name. The parameters inform the LC 115, or the participant node 120, of the operating criteria or requirements for participating in the DML round. The parameters may include computing requirements, such as a minimum accuracy or a type of hardware. The parameters may, for example, specify the maximum age of information (AoI) of the data the participant node 120 can use in training during the DML round.

In a simple example, the parameters specify the model-training algorithm that will be used for the DML round. This specification can name a specific model-training algorithm, or specify attributes so that the participant node may use one of several model-training algorithms. The specification of the model-training algorithm may, as an example, include a reputation score. The specification can also include, for example, the data stores from which the algorithms may be obtained.

In an example, the parameters specify reporting requirements, such as a model-update format, metrics, etc. In an example, the parameters include DML round timing, such as when the participation solicitation closes, when participant selection takes place, when model updates are expected, etc.

In an example, security requirements are included in the set of parameters. The security requirements may specify hardware, such as a trusted execution environment (TEE), in which model-training algorithms must be executed or stored. The security requirements may also specify the type of encryption to be applied to the data used for DML training, to model-training algorithms, or to model updates.
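Taken together, the parameter set described in the preceding examples might be bundled as follows. Every field name here is a hypothetical encoding of a category the text mentions; the patent describes the categories, not a concrete format:

```python
from dataclasses import dataclass

@dataclass
class DmlRoundParams:
    """Illustrative bundle of the solicitation parameters described above."""
    round_id: int
    aggregator_id: str
    min_accuracy: float          # computing requirement
    max_age_of_info_s: int       # maximum AoI of training data, in seconds
    training_algorithm: str      # named algorithm (or attribute spec)
    report_format: str           # reporting requirement
    solicitation_close_s: int    # round timing: solicitation window
    update_deadline_s: int       # round timing: model-update deadline
    requires_tee: bool = True    # security: trusted execution environment
    encryption: str = "aes-256-gcm"  # security: encryption for data/updates

    def accepts(self, device):
        """Would a device with these attributes qualify for the round?"""
        return (device["accuracy"] >= self.min_accuracy
                and device["data_age_s"] <= self.max_age_of_info_s
                and (device["has_tee"] or not self.requires_tee))

params = DmlRoundParams(3, "LC115", 0.9, 600, "sgd-v1", "sparse-delta", 30, 300)
assert params.accepts({"accuracy": 0.95, "data_age_s": 120, "has_tee": True})
assert not params.accepts({"accuracy": 0.80, "data_age_s": 120, "has_tee": True})
```

A participant node (or an LC acting on its behalf) could evaluate such a bundle locally to decide whether to respond to the solicitation at all.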

The ICN router 110 is also configured to create a PIT entry for the participant solicitation interest packet. In an ICN network, the PIT entry provides a route back to the GA 105 for responses to the solicitation. Because there may be several responses, this PIT entry is not removed upon the first response; instead, it remains until a timer expires. The duration of the timer can be specified as part of the name, or as a parameter in the solicitation packet.
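The multi-response PIT behavior can be sketched as follows; the class and its timer handling are an illustrative assumption, showing only the difference from a conventional consume-on-first-match PIT:

```python
import time

class MultiResponsePit:
    """PIT variant where an entry persists until its timer expires,
    so several participants can answer one solicitation interest."""

    def __init__(self):
        self.entries = {}  # name -> (upstream_iface, expiry_time)

    def add(self, name, upstream_iface, lifetime_s):
        self.entries[name] = (upstream_iface, time.monotonic() + lifetime_s)

    def route_response(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None
        iface, expiry = entry
        if time.monotonic() >= expiry:
            del self.entries[name]  # timer expired: drop the entry
            return None
        return iface                # entry kept: more responses may follow

pit = MultiResponsePit()
pit.add("/dml/solicit/task-7/round-3", "toward-GA", lifetime_s=60)
# Two participants respond; both are routed upstream and the entry survives.
assert pit.route_response("/dml/solicit/task-7/round-3") == "toward-GA"
assert pit.route_response("/dml/solicit/task-7/round-3") == "toward-GA"
```

A conventional PIT would have deleted the entry after the first response, silently dropping every later participant's reply to the GA.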

The ICN router 110 is configured to receive a first data packet in response to the solicitation interest packet. This packet can be received from the participant device 120 or the LC 115. The response indicates that the downstream device 120 will take part in the DML round specified by the solicitation packet. In one example, the data packet adds the participant ID to the interest packet name. This name differs slightly from the interest packet's name, but only in a portion that is not used to match the data packet back to the GA 105.

The ICN router 110 is configured to create a second PIT entry for the first data packet in response to receiving it. The first data packet is thus treated as both a data packet and an interest packet. Such an arrangement may reduce the number of interest packets needed for effective ICN communication in a protocol with a known request-response-response- . . . -response pattern between nodes. In this example, the second PIT entry corresponds to the first PIT entry. The correspondence could be as simple as a shared name prefix, or it can include meta-data linking the PIT tables or PIT entries.
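Treating a data packet as an implicit interest might look like this sketch. The `/reply` naming convention used to key the second entry is a hypothetical stand-in (the text suggests a shared prefix or linking meta-data), and the interface labels are illustrative:

```python
class DualRolePit:
    """Sketch of a data packet playing two roles: it satisfies one PIT entry
    (data role) and plants a second entry for the next response (interest role)."""

    def __init__(self):
        self.pit = {}  # name -> interface to forward matching data toward

    def on_interest(self, name, from_iface):
        # First PIT entry: route responses back toward the interest's sender.
        self.pit[name] = from_iface

    def on_data(self, name, from_iface):
        # Data role: look up the existing PIT entry to forward this packet.
        back = self.pit.get(name)
        # Interest role: plant a second, linked entry so the response to
        # this response can be routed back to its sender without a new interest.
        self.pit[name + "/reply"] = from_iface  # hypothetical linked key
        return back

pit = DualRolePit()
pit.on_interest("/dml/solicit/r3", from_iface="toward-GA")
# Participant's data packet: forwarded toward the GA, return route planted.
assert pit.on_data("/dml/solicit/r3", from_iface="toward-dev120") == "toward-GA"
# The GA's second data packet (e.g., the model) can follow the second entry.
assert pit.pit["/dml/solicit/r3/reply"] == "toward-dev120"
```

This is what allows the request-response-response pattern below to proceed without the participant issuing a fresh interest for each round trip.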

The ICN router 110 transmits the first data packet according to the first PIT entry. This transmission routes the data packet toward the GA 105 using the first PIT entry.

The ICN router 110 is configured to receive a second data packet as a response to the first data packet. This exchange differs from most ICN exchanges because the second packet responds to a data packet rather than to an interest packet. Here, the second data packet identifies the DML model to be used by the downstream node. In one example, the DML model is identified by the model-training algorithm the downstream node will implement in the DML round. This exchange may not take place if the participant node has already received the model-training algorithm, or will receive it from a secure store of model algorithms as described below. This exchange assumes that the GA 105 provides the model-training algorithm (or a hyperlink, access, etc.) to the participant node 120.
