Invention for Integration of learning models into software development systems

Invented by Alexander B. Brown, Michael R. Siracusa, Gaurav Kapoor, Elizabeth OTTENS, Christopher M. Hanson, Zachary A. Nation, Vrushali MUNDHE, Srikrishna Sridhar, Apple Inc

The market for integrating learning models into software development systems has grown steadily in recent years. As technology continues to advance, businesses increasingly rely on software development to create innovative solutions and improve their operations. Integrated learning models have emerged as a valuable tool in this process, allowing developers to streamline their workflows and improve the efficiency of their software development systems.

Integrating learning models means embedding machine learning algorithms and techniques into software development systems. These models enable developers to automate tasks such as code generation, bug detection, and software testing. By leveraging machine learning capabilities, developers can significantly reduce the time and effort these tasks require, leading to faster development cycles and improved software quality.

One of the key drivers behind this growing market is the increasing complexity of software development. As software systems become more intricate, developers face challenges in managing large amounts of code and ensuring its correctness. Integrated learning models offer a solution by automating code generation, allowing developers to quickly create code snippets or even entire modules based on predefined patterns or examples. This saves time and reduces the risk of introducing errors into the codebase.

Another factor contributing to market growth is the need for efficient bug detection and software testing. Traditional methods often rely on manual inspection, which is time-consuming and prone to human error. Learning models integrated into the development system can automate these processes by analyzing large volumes of code and identifying potential bugs or areas for improvement. This improves the accuracy of bug detection and frees developers' time for more critical tasks.

Integrated learning models can also assist with software maintenance and updates. As software systems evolve, developers need to ensure that changes to the codebase do not introduce new bugs or break existing functionality. Learning models can analyze the codebase and identify potential conflicts or issues before they occur, allowing developers to make informed decisions and minimize the risk of introducing errors during updates.

The market is not limited to large enterprises or software development companies. Small and medium-sized businesses are also recognizing the benefits of integrating machine learning into their software development systems. By doing so, these businesses can improve their development processes, reduce costs, and deliver high-quality software products to their customers.

However, the integration of machine learning into software development systems is still an emerging field, and there are challenges to address. For example, developers need to ensure that the learning models are trained on relevant and representative data to avoid biased or inaccurate results. The models also need to be continuously updated and refined to adapt to changing software requirements and industry standards.

In conclusion, the market for integrating learning models into software development systems is growing significantly. As businesses strive to stay competitive in the digital age, adding machine learning capabilities to their software development processes offers a promising path forward. By automating tasks, improving bug detection, and enhancing software testing, integrated learning models enable developers to streamline their workflows and deliver high-quality software products more efficiently. As demand for software development continues to rise, this market is expected to expand further, driving innovation and efficiency in the software development industry.

The Apple Inc. invention works as follows:

The subject technology transforms an existing machine learning model into a transformed model when the existing model does not comply with a model specification; the transformed model is compatible with the integrated development environment (IDE). The subject technology generates a code interface and code for the transformed model. The code interface includes code statements in an object-oriented programming language that correspond to an object representing the transformed model. The subject technology also provides the generated code interface and code for display in the IDE, which allows the generated code interface and code to be modified.

Background for Integration of learning models into software development systems

Software development environments are used to develop software programs in a particular programming language for different computing platforms.

The detailed description below describes various configurations in which the subject technology can be used and is not intended to represent the only configurations. The drawings attached to this document form part of the detailed description. The detailed description includes specific details in order to provide a comprehensive understanding of the technology; however, the subject technology can be implemented in a variety of ways and is not limited to the details provided here. In some instances, structures and components in one or more implementations are shown as block diagrams to avoid obscuring the concepts of the subject technology.

Existing approaches that enable software developers to use machine learning models within software development environments can require significant configuration. Because developing machine learning models can require additional hardware configurations and software libraries, some software developers face a high barrier to entry. Many software developers are experienced with the object-oriented paradigms built into existing software development tools, but recent developments in machine learning have produced software libraries that are used in separate or stand-alone development environments, which requires software developers to adopt a different approach when developing machine learning models.

In one or more implementations described in this document, machine learning models can be represented internally within an integrated development environment (IDE) as first-class objects, such as functions and classes, rather than as merely opaque resources. The developer can then use the productivity features of the IDE, such as auto-completion and detection of type or name errors in parameter lists. This can be achieved by having a standard description for ML models and by using data derived from the model to index the keywords and names within the model.
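As a rough illustration of this idea, the following Swift sketch shows what such derived index data might look like. The type and property names (ModelSymbolIndex, inputNames, and so on) are hypothetical assumptions rather than anything taken from the patent; the sketch only illustrates how a standard model description could feed auto-completion and name checking.

```swift
// A minimal, hypothetical sketch: an index of names derived from a model's
// standard description, which an IDE could consult for auto-completion and
// name checking. All identifiers here are illustrative assumptions.

/// Symbols derived from a model's standard description.
struct ModelSymbolIndex {
    let modelName: String
    let inputNames: [String: String]   // input name -> type name
    let outputNames: [String: String]  // output name -> type name

    /// Auto-completion: input names matching a typed prefix.
    func completions(forPrefix prefix: String) -> [String] {
        inputNames.keys.filter { $0.hasPrefix(prefix) }.sorted()
    }

    /// Name checking: flag parameter names that the model does not define.
    func unknownParameters(in usedNames: [String]) -> [String] {
        usedNames.filter { inputNames[$0] == nil }
    }
}
```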

FIG. 1 illustrates an example network environment in accordance with one or more implementations. Not all of the depicted components may be used in all implementations, however, and one or more implementations may include additional or different components than those shown in the figure. Variations in the arrangement and type of the components may be made without departing from the spirit or scope of the claims set forth herein; additional components, different components, or fewer components may be provided.

The network environment 100 includes an electronic device 110, an electronic device 115, and a server 120. The network 106 can communicatively couple (directly or indirectly) the electronic devices with the server 120. In some implementations, the network 106 can be an interconnected network of devices that may include, or be communicatively coupled to, the Internet. The network environment 100 is illustrated in FIG. 1 for explanatory purposes.

The electronic device 110 can be, for instance, a desktop computer, a portable computing device such as a laptop computer, a smartphone, a peripheral device (e.g., a digital camera or headphones), a tablet device, a wearable device such as a watch or band, or any other suitable device that includes one or more wireless interfaces, such as WLAN radios, cellular radios, Bluetooth radios, Zigbee radios, Near Field Communication (NFC) radios, or other wireless radios. The electronic device 110 can be, and/or can include all or part of, the electronic system discussed in relation to FIG. 7.

The electronic device 115 can include a touchscreen and can be, for example, a mobile computing device with a built-in touchscreen, such as a smartphone, a tablet, or a wearable device such as a wristband or watch. In some implementations, the electronic device 115 does not have a touchscreen but can support touchscreen-like gestures, such as in a virtual reality or augmented reality environment. In some implementations, the electronic device 115 can be, or include, all or part of the electronic system discussed below with reference to FIG. 7.

In one or more implementations, the electronic device 110 may provide a software development environment, such as an application, that a developer can use to develop compiled code (e.g., executables) and to debug, support, and maintain computer programs and applications. The software development environment can, for example, create a software package from the compiled code with the help of the server 120.

In one or more implementations, the server 120 deploys the compiled code to a target device for execution. In one example, the electronic device 115 may be used as a target device that receives the compiled code and executes it in a runtime environment on the device. In another example, the server 120 and/or another server may provide a web service that can perform complex processing operations associated with the compiled code.

FIG. 2 illustrates an example software architecture of an integrated development environment (IDE) 200 that integrates object-oriented code with machine learning models, in accordance with one or more implementations. For explanatory purposes, the IDE 200 is described as being provided by the electronic device 110 of FIG. 1, such as by its processor and/or memory; however, the IDE 200 may be implemented by any other electronic device. Not all of the depicted components may be used in all implementations, and some implementations may include additional or different components. Variations in the arrangement and type of the components may be made without departing from the spirit or scope set forth herein; additional components, different components, or fewer components may be provided.

In one or more implementations, the IDE 200 can provide a user interface 280 that can be displayed on a display of the electronic device 110. The user interface 280 can display a code listing 282 in an editor, for example, the source code of a program being developed. The source code can include computer code instructions written in an object-oriented programming language such as Swift or Objective-C. A project navigator 284 may also be displayed in the user interface 280 when a new software development project is created; it allows a user to manage files within the project and to select a specific file to be viewed and/or edited in the editor that provides the code listing 282. The user interface 280 also provides a search interface 286, which allows users to search for terms within the code listing 282, in files of the project, and/or in other assets.

The IDE 200 also includes storage 210 that stores machine learning (ML) models 212 and associated ML data, such as datasets for the ML models (e.g., a dataset corresponding to a given ML model), source code files 214, and/or data relating to software development projects. The ML models can include data in a format and syntax compatible with machine learning libraries such as TensorFlow or Keras, formats that the IDE 200 may not support. Examples of machine learning models include automated processes for extracting patterns from data, algorithms that automatically identify a relationship between descriptive features and a target feature within a dataset (e.g., a predictive model), (deep) neural networks, classifiers that specify which category an input belongs to, and so on. These examples are not intended to limit the subject technology, and other types of machine learning models are also contemplated. Machine learning models can be associated with source code usage examples or tutorials, or with reference implementations in a language such as Java or .NET, or with an API, that may not be supported by the IDE 200 or may only support certain deployment scenarios.

To integrate machine learning models (e.g., the ML models 212), the IDE 200 can include a machine learning (ML) model converter 215, which includes a specification converter 220 and an object-oriented programming (OOP) code generator 240; each of these components is described in more detail below. Although the ML model converter 215 is shown in FIG. 2 as part of the IDE 200, in certain implementations it may run as a separate program from the IDE 200. Further, although the OOP code generator 240 is discussed in relation to an object-oriented programming language for the purposes of this document, the subject technology is not limited to object-oriented programming or to any particular programming language.
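To make the architecture concrete, here is a minimal Swift sketch of the converter pipeline described above. All type and function names (MLModelConverter, SpecificationConverting, OOPCodeGenerating, StoredMLModel, TransformedMLModel) are illustrative assumptions rather than the patent's actual classes.

```swift
// A minimal, hypothetical sketch of the converter pipeline. These types are
// illustrative assumptions, not the patent's actual classes.

import Foundation

/// An ML model as stored in the storage 210, in a third-party format.
struct StoredMLModel {
    let name: String
    let format: String   // e.g. "keras", "tensorflow", "caffe"
    let payload: Data    // serialized model data
}

/// A model converted to the IDE-supported model specification.
struct TransformedMLModel {
    let name: String
    let specVersion: Int
    let payload: Data    // e.g. a ProtoBuf-serialized description
}

protocol SpecificationConverting {
    /// Validates the stored model against the model specification and,
    /// if necessary, converts it into a specification-compliant model.
    func convert(_ model: StoredMLModel) throws -> TransformedMLModel
}

protocol OOPCodeGenerating {
    /// Generates a source-code interface (e.g. a Swift class) so the
    /// transformed model can be used like any first-class object.
    func generateInterface(for model: TransformedMLModel) -> String
}

/// The ML model converter ties the two stages together.
struct MLModelConverter {
    let specConverter: any SpecificationConverting
    let codeGenerator: any OOPCodeGenerating

    func integrate(_ model: StoredMLModel) throws -> (model: TransformedMLModel, interface: String) {
        let transformed = try specConverter.convert(model)
        let interface = codeGenerator.generateInterface(for: transformed)
        return (transformed, interface)
    }
}
```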

As shown, the IDE 200 provides the specification converter 220, which, for example, may receive an ML model 212 from the storage 210. The specification converter 220 checks and/or validates whether an existing ML model in a first format (e.g., Keras, TensorFlow, Caffe, etc.) is compatible with the IDE 200; that is, whether the model data is sufficient to comply with a particular model specification, such as one provided by a third-party vendor or another entity. A model specification may, for example, include parameters, data formats and data types, and processing or hardware requirements for converting a machine learning model into a format that can be used by the IDE 200, including, for instance, for syntax checking, auto-completion, or detecting name or type mistakes in parameter lists. When the existing ML model 212 does not comply with the specification, the specification converter 220 transforms it into a transformed ML model 235 that is compatible with the model specification supported by the IDE 200. The specification converter 220 may store the transformed ML model 235 using a serialization format such as ProtoBuf (protocol buffers).
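A hedged sketch of this specification-check step, reusing the illustrative types from the previous sketch, might look like the following. The supported formats and validation rules shown here are assumptions for illustration; a real converter would validate against the full model specification and emit, for example, a ProtoBuf-serialized description.

```swift
// A hypothetical specification-check step, reusing the illustrative types
// from the previous sketch. Formats and checks are assumptions.

struct ModelSpecificationError: Error {
    let reason: String
}

struct SpecificationConverter: SpecificationConverting {
    /// Source formats this (hypothetical) converter can handle.
    private let convertibleFormats: Set<String> = ["keras", "tensorflow", "caffe"]

    func convert(_ model: StoredMLModel) throws -> TransformedMLModel {
        // 1. Validate that the source format is one the specification covers.
        guard convertibleFormats.contains(model.format.lowercased()) else {
            throw ModelSpecificationError(reason: "unsupported format: \(model.format)")
        }
        // 2. Validate that the model data is sufficient (placeholder check).
        guard !model.payload.isEmpty else {
            throw ModelSpecificationError(reason: "model data is empty")
        }
        // 3. Re-serialize into the IDE-supported specification. A real
        //    implementation might emit a ProtoBuf-encoded description here;
        //    this sketch simply passes the payload through.
        return TransformedMLModel(name: model.name, specVersion: 1, payload: model.payload)
    }
}
```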

The OOP code generator 240 generates a code interface for the transformed ML model. A code interface includes code statements in a programming language that describe the functions and/or data types required to use the transformed ML model. In an example, a function that uses the transformed ML model can accept one or multiple data types as input variables, and the code interface provides functions and data types compatible with the programming language used in the project. In one example, the OOP code generator 240 determines which data types and APIs are available in the programming language for accessing the existing ML data of the transformed model. The OOP code generator 240, for example, creates a thunk, i.e., a subroutine, that allows access to values of the existing ML data using data types supported by the programming language. A process that provides more detail on generating the code interface and other code for the transformed model is described further below. When the programming language and its APIs support multiple types that correspond to an existing ML type, the type selected may depend on which type is most familiar to programmers or most appropriate for the situation.
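For illustration, the generated code interface for a transformed image-classification model might resemble the following Swift sketch. The class name (FlowerClassifier), the input and output wrapper types, and the prediction method are hypothetical examples of the kind of code such a generator could emit, not the actual generated code.

```swift
// A hypothetical example of generated interface code for a transformed
// image-classification model. Names and types are illustrative only.

import Foundation

/// Generated input wrapper: maps language-native types onto the model's inputs.
struct FlowerClassifierInput {
    /// Raw RGB image data, e.g. 224 x 224 pixels.
    let imageData: Data
}

/// Generated output wrapper: exposes the model's outputs as native types.
struct FlowerClassifierOutput {
    /// Most likely class label.
    let label: String
    /// Probability for each class label.
    let probabilities: [String: Double]
}

/// Generated class representing the transformed ML model as a first-class object.
final class FlowerClassifier {
    /// Loads the compiled model payload bundled with the app (hypothetical).
    init() throws {
        // e.g. locate and load the compiled model resource from the app bundle
    }

    /// Runs a prediction; a real generated body would convert the input via a
    /// thunk, invoke the underlying model, and convert the outputs back.
    func prediction(input: FlowerClassifierInput) throws -> FlowerClassifierOutput {
        return FlowerClassifierOutput(label: "unknown", probabilities: [:])
    }
}
```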

In an implementation, the OOP code generator 240 can perform the following functions. For each type Tm used by the model, select a corresponding type Tp in the programming language or its APIs; if multiple candidate types are available, choose the "best" one, for example the type that is most commonly used, preferring a type that is available in both the programming language and its APIs. For each function Fm used by the model that takes as inputs values of types Tmi1, Tmi2, etc. and returns as outputs values of types Tmv1, Tmv2, etc., generate a function that takes inputs of types Tpi1, Tpi2, etc. and returns outputs of types Tpv1, Tpv2, etc. Within that function, generate code to transform each model type into or out of the corresponding language type; this can be simple or require multiple steps. This generated transformation code can be called a "thunk."
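The following sketch illustrates those steps: a type-mapping table from model types (Tm) to language types (Tp) and a function that emits the text of a thunk for a model function (Fm). The model types, the mapping choices, and the shape of the emitted code are assumptions made for illustration.

```swift
// A minimal sketch of the type-selection and thunk-generation steps described
// above. Model types, mapping choices, and emitted code are assumptions.

/// Types described in the model specification (the Tm types).
enum ModelType: String {
    case float32Tensor, int64, utf8String, imageBuffer
}

/// Select a programming-language type (Tp) for each model type (Tm); when
/// several candidates exist, prefer the most commonly used and familiar type.
func languageType(for modelType: ModelType) -> String {
    switch modelType {
    case .float32Tensor: return "[Float]"
    case .int64:         return "Int"
    case .utf8String:    return "String"
    case .imageBuffer:   return "CVPixelBuffer"  // assumed familiar to developers
    }
}

/// Describes a model function Fm: its name, input types, and output types.
struct ModelFunction {
    let name: String
    let inputs: [(label: String, type: ModelType)]
    let outputs: [(label: String, type: ModelType)]
}

/// Emit the source text of a thunk that converts language-typed inputs into
/// model types, invokes the model function, and converts the results back.
func generateThunk(for fn: ModelFunction) -> String {
    let params = fn.inputs
        .map { "\($0.label): \(languageType(for: $0.type))" }
        .joined(separator: ", ")
    let results = fn.outputs
        .map { languageType(for: $0.type) }
        .joined(separator: ", ")
    return """
    func \(fn.name)(\(params)) throws -> (\(results)) {
        // 1. Convert each language-typed input into the model's type.
        // 2. Invoke the underlying model function '\(fn.name)'.
        // 3. Convert each model-typed output back into a language type.
        fatalError("thunk body to be generated")
    }
    """
}
```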

In one or more implementations, an additional non-code payload (e.g., a "compiled ML model") is also generated from the ML model and sent in a package to the target device. This package can include a compiled version of the generated code interface and code together with the compiled ML model (i.e., the non-code payload), which is distinct from the compiled version of the generated code. The compiled ML model in this example includes trained weights and other information useful to the target device. This information is not included in the generated code because: 1) source code is not well suited for holding data (impedance, space, and speed); and 2) although source code is easily readable by users, it is not as easily inspected by other software as data is, and other components wishing to reason about the model (e.g., to determine a desired size for an input image) may prefer to inspect data.
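As a hypothetical illustration of why the compiled model travels as data, the sketch below shows how a target app might locate the non-code payload in its package and read information such as the expected input image size. The resource name, file extension, and metadata used here are assumptions, not part of the patent.

```swift
// A hypothetical sketch of how a target app might locate the non-code payload
// (the compiled ML model) shipped in its package and inspect it. The resource
// name, file extension, and metadata shown here are assumptions.

import Foundation

struct CompiledModelInfo {
    /// Example of data other components may want to read, such as the
    /// image size the model expects.
    let expectedImageSize: (width: Int, height: Int)
}

func loadCompiledModelInfo(named name: String) throws -> CompiledModelInfo {
    // The compiled model travels as data rather than source code, so tools
    // and runtime components can inspect it directly.
    guard let url = Bundle.main.url(forResource: name, withExtension: "compiledmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let data = try Data(contentsOf: url)
    _ = data  // a real implementation would parse the serialized description
    return CompiledModelInfo(expectedImageSize: (width: 224, height: 224))  // placeholder
}
```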

The OOP code generator 240 can also determine whether other software libraries are required to compile and/or execute the transformed ML model, such as a library that provides support for executing instructions on a graphics processing unit (GPU). FIG. 6 describes in detail an example data structure associated with the transformed ML model.

Also, in an example, the generated code can be viewed using the project navigator 284 in the user interface 280. FIG. 4 shows a detailed example of viewing the generated code interface.


