Invention for Systems and Methods for Virtual Reality-Based Grouping Evaluation

Invented by David Strong, Scott Hellman, Johann Larusson, Jake Noble, Timothy J. Stewart, Alex Nickel, Luis Oros, Quinn Lathrop, Daniel Tonks, Peter Foltz, Pearson Education Inc

The market for systems and methods for virtual reality-based grouping evaluation has been rapidly growing in recent years. With the increasing popularity of virtual reality (VR) technology, businesses and organizations are recognizing the potential of using VR for evaluating group dynamics and improving teamwork.

Grouping evaluation refers to the process of assessing the effectiveness of group interactions, communication, and collaboration. Traditionally, this has been done through observation, surveys, and interviews. However, these methods have limitations, such as subjective biases and the inability to recreate realistic scenarios.

Virtual reality offers a unique solution to these challenges by providing a realistic and immersive environment for group evaluation. By simulating real-life scenarios, VR allows researchers and evaluators to observe and analyze group dynamics in a controlled and repeatable manner.

One of the key advantages of using VR for grouping evaluation is the ability to recreate complex and dynamic situations. For example, in team-based tasks or emergency response training, VR can simulate scenarios that are difficult or dangerous to replicate in real life. This allows evaluators to assess how groups respond to various challenges and make informed decisions based on their performance.

Furthermore, VR-based grouping evaluation systems can provide real-time feedback and data analysis. By tracking participants’ movements, gestures, and interactions within the virtual environment, evaluators can gather objective data on individual and group performance. This data can then be analyzed to identify strengths, weaknesses, and areas for improvement.

The market for systems and methods for virtual reality-based grouping evaluation is diverse and rapidly expanding. There are numerous companies and startups that specialize in developing VR platforms and software specifically designed for group evaluation purposes. These systems often include features such as customizable scenarios, data visualization tools, and integration with other evaluation methods.

In addition to businesses and organizations, educational institutions and research centers are also adopting VR-based grouping evaluation systems. These institutions recognize the value of using VR as a tool for teaching and assessing teamwork skills. By incorporating VR into their curricula, educators can provide students with realistic and engaging group evaluation experiences.

Despite the growing market, there are still challenges to overcome in the adoption of VR-based grouping evaluation systems. Cost and accessibility remain significant barriers for many organizations, as VR technology can be expensive and requires specialized equipment. Additionally, there is a need for further research and standardization in the field to ensure the validity and reliability of VR-based evaluation methods.

In conclusion, the market for systems and methods for virtual reality-based grouping evaluation is expanding rapidly. The use of VR technology offers unique advantages in assessing group dynamics and improving teamwork. As the technology becomes more accessible and affordable, we can expect to see increased adoption of VR-based grouping evaluation systems in various industries and sectors.

The Pearson Education, Inc. invention works as follows:

Disclosed herein are systems and methods for evaluating virtual reality interactions. The system may include a memory comprising: an interaction database containing information about user interactions with virtual assets in a virtual environment; and a content library containing virtual assets as well as information about them. The system can include a server that determines user engagement with one or more of the virtual assets. The server can also receive data indicative of an interaction with one or more of the virtual assets and determine the type of interaction associated with that data. The server can perform speech capture and analysis, manipulate data, generate an evaluation of the user's interactions with at least one virtual asset, and deliver that evaluation.

Background for Systems and Methods for Virtual Reality-Based Grouping Evaluation

A computer network, or data network, is a telecommunications system that allows computers to exchange data. Computer networks allow networked devices to exchange data via network links (data connections). Connections between nodes are established over cable media or wireless media. The Internet is the most well-known computer network.

Network nodes are devices on a network that originate, route, and terminate data. Nodes include personal computers, phones, and servers, as well as network hardware. Two such devices are networked when they can exchange information, regardless of whether they are directly connected to each other.

Computer networks differ in the transmission medium used to carry their signals, the communication protocols used to organize network traffic, network size, topology, and organizational intent. Most communication protocols are layered on top of (i.e., work using) other, more general or more specific communication protocols, except for the physical layer, which deals directly with the transmission media.

Notifications may be sent via a computer network. These notifications are electronic and can be sent via e-mail, phone, text, or fax. Businesses, governments, schools, and individuals can all benefit from notifications.

One aspect of this disclosure is a virtual reality-based system for assessment. The system may include a memory that contains an interaction database with information about one or more user interactions in a virtual environment, and a content library database that stores information on a number of virtual assets. The system also includes at least one server.
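As a rough illustration of the two stores described above, the sketch below models an interaction database and a content library as plain Python containers. All class names and fields are hypothetical, chosen for illustration; the patent does not specify a schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContentLibrary:
    """Holds virtual assets and information relating to them (hypothetical schema)."""
    assets: dict = field(default_factory=dict)  # asset_id -> metadata

@dataclass
class InteractionDatabase:
    """Holds records of user interactions within the virtual environment."""
    records: list = field(default_factory=list)

    def log(self, user_id, asset_id, kind):
        """Append one interaction record (user, asset, interaction type)."""
        self.records.append({"user": user_id, "asset": asset_id, "type": kind})

library = ContentLibrary(assets={"mannequin": {"category": "patient", "interactive": True}})
db = InteractionDatabase()
db.log("u42", "mannequin", "vocal")
print(len(db.records))
```

A server component would query the content library to populate the environment and write to the interaction database as events arrive.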

In some embodiments, the system can include either a virtual reality or an augmented reality headset. In some embodiments, the user experience can be launched from the virtual reality or augmented reality headset. In some embodiments, the assets are generated within the virtual reality or augmented reality headset.

In some embodiments, the at least one server is able to: receive user data, where the user data identifies at least one attribute of the user; and customize the virtual environment based on that user data. In some embodiments, the user data identifies a previously supplied virtual asset and a skill level of the user. In some embodiments, the user data identifies a configuration profile containing information about a preferred placement of assets in the virtual environment. In some embodiments, the configuration profile is unique to an individual user.
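A minimal sketch of how a per-user configuration profile might drive asset placement, assuming a simple fallback-to-defaults scheme; the field names and coordinate format are assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationProfile:
    """Per-user profile: skill level and preferred asset placements (hypothetical)."""
    user_id: str
    skill_level: str = "novice"
    preferred_positions: dict = field(default_factory=dict)  # asset -> (x, y, z)

def customize_environment(profile, default_positions):
    """Place each asset at the user's preferred position, falling back to defaults."""
    return {asset: profile.preferred_positions.get(asset, default)
            for asset, default in default_positions.items()}

profile = ConfigurationProfile("u42", skill_level="expert",
                               preferred_positions={"whiteboard": (0.0, 1.5, -2.0)})
defaults = {"whiteboard": (1.0, 1.0, -3.0), "patient": (0.0, 0.0, -1.0)}
placements = customize_environment(profile, defaults)
print(placements)
```

Keeping the profile separate from the scene defaults lets the same environment be customized per user without editing shared content.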

In some embodiments, user interactions within the virtual environment can include selectively viewing one or more of the virtual assets or selectively asking at least one question of one or more of the virtual assets. In some embodiments, at least one processor is capable of generating an evaluation of user interactions in the virtual environment. In some embodiments, generating the evaluation can include analyzing the at least one question. In some embodiments, an asset response can be delivered in response to the at least one question posed to one or more of the virtual assets. In some embodiments, recording user interactions can include: identifying user gaze; determining object engagement based on the identified gaze; recording user speech; and analyzing the recorded speech.
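The gaze-based engagement step can be approximated geometrically: treat the user as engaged with an object when it falls within a small angular cone around the gaze direction. This is a hypothetical sketch; the threshold and the vector math are assumptions, not details from the disclosure.

```python
import math

def gaze_engagement(eye_pos, gaze_dir, object_pos, max_angle_deg=10.0):
    """Return True if the object lies within max_angle_deg of the gaze ray."""
    to_obj = [o - e for o, e in zip(object_pos, eye_pos)]
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    if norm(to_obj) == 0:
        return True  # object is at the eye position
    cos_angle = sum(g * t for g, t in zip(gaze_dir, to_obj)) / (norm(gaze_dir) * norm(to_obj))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg

# Looking straight down -z at an object directly ahead:
print(gaze_engagement((0, 1.6, 0), (0, 0, -1), (0, 1.6, -2)))  # True
```

Objects that pass this test could then be logged to the interaction database as engagement events, with speech recording triggered alongside.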

The present disclosure includes a method for virtual reality-based assessment. The method may include: launching an experience for a user in a virtual environment generated by a computer device; generating a plurality of assets in the virtual environment; recording user interactions in the virtual environment, where the user interactions include an interaction with at least one of the assets; delivering an asset response to the interaction; and generating an evaluation of the user interactions within the environment.

In some embodiments, the user experience can be launched in a virtual reality or augmented reality headset. In some embodiments, the plurality of assets is generated in the virtual reality or augmented reality headset.

In some embodiments, the method comprises: receiving user data, where the user data identifies at least one attribute of the user; and customizing the virtual environment using the user data. In some embodiments, the user data identifies a previously supplied virtual asset and a skill level of the user. In some embodiments, the user data identifies a configuration profile that contains information on a preferred placement of assets within the virtual environment. In some embodiments, the configuration profile is unique to an individual user.

In some embodiments, user interactions within the virtual environment may include selectively viewing one or more of the virtual assets and selectively asking at least one question of one of those virtual assets. In some embodiments, generating the evaluation of user interactions within the virtual environment includes analyzing the at least one question. In some embodiments, an asset response is provided in response to the at least one question posed to a virtual asset. In some embodiments, recording user interactions can include: identifying user gaze; determining object engagement based on the identified gaze; recording user speech; and analyzing the recorded speech.

An aspect of the disclosure is a system for evaluating virtual reality interactions. The system may include a memory containing: an interaction database with information about one or more user interactions with a virtual asset within a virtual environment; and a content library database that contains a number of virtual assets as well as information relating to them. The system can include at least one server. The at least one server can: determine user engagement with one or more of the virtual assets; receive data indicative of an interaction with one or more of the virtual assets; determine the type of interaction associated with the received data, where the interaction type includes at least a vocal interaction and a manipulation interaction; perform a speech capture and analysis process when a vocal interaction is identified; perform a manipulation process when a manipulation interaction is identified; and generate and deliver an evaluation of the user interactions with the one or more virtual assets.
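The branch between vocal and manipulation interactions amounts to a type dispatch. The sketch below assumes hypothetical event dictionaries and handler callables; none of these names come from the patent.

```python
def classify_interaction(event):
    """Map a raw interaction event to a type: 'vocal' or 'manipulation' (illustrative)."""
    if "audio" in event:
        return "vocal"
    if "transform" in event:
        return "manipulation"
    return "unknown"

def handle_interaction(event, speech_handler, manipulation_handler):
    """Dispatch the event to the process matching its interaction type."""
    kind = classify_interaction(event)
    if kind == "vocal":
        return speech_handler(event["audio"])
    if kind == "manipulation":
        return manipulation_handler(event["transform"])
    return None

result = handle_interaction({"audio": "what is the dosage?"},
                            speech_handler=lambda a: ("speech", a),
                            manipulation_handler=lambda t: ("manip", t))
print(result)  # ('speech', 'what is the dosage?')
```

In a real system, the handlers would be the speech capture/analysis process and the manipulation process described in the surrounding paragraphs.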

In some embodiments, the at least one client can detect the user's gaze. In some embodiments, user engagement is determined based on this detected gaze. In some embodiments, the speech capture and analysis process can include: recording at least one question asked by the user; converting that question to text using a speech recognition program; extracting one or more key words from the question; inputting those key words into a machine-learning speech algorithm, which can predict the content of a response based on the key words inputted; and outputting a topic prediction derived from the machine-learning speech algorithm.
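A toy version of that pipeline, with the speech-to-text step assumed to have already produced a transcript; the stopword list, keyword table, and voting "model" are stand-ins for a real speech recognizer and a trained machine-learning speech algorithm.

```python
STOPWORDS = {"the", "is", "a", "an", "of", "what", "how", "do", "i"}

def extract_keywords(text):
    """Drop stopwords and punctuation; keep the remaining tokens as key words."""
    tokens = [w.strip("?.,!").lower() for w in text.split()]
    return [t for t in tokens if t and t not in STOPWORDS]

# Stand-in for a trained machine-learning speech algorithm: keyword -> topic.
TOPIC_KEYWORDS = {"dosage": "medication", "dose": "medication",
                  "pulse": "vitals", "breathing": "vitals"}

def predict_topic(keywords):
    """Vote over known keywords to predict the topic of the expected response."""
    votes = [TOPIC_KEYWORDS[k] for k in keywords if k in TOPIC_KEYWORDS]
    return max(set(votes), key=votes.count) if votes else "general"

keywords = extract_keywords("What is the dosage?")  # transcript from speech-to-text
print(keywords, "->", predict_topic(keywords))
```

The topic prediction is what the next stage consumes when selecting response data for the virtual asset.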

In some embodiments, the speech capture and analysis process includes selecting a machine-learning speech algorithm. In some embodiments, the machine-learning speech algorithm is specific to the at least one virtual asset for which data indicative of an interaction was received.

In some embodiments, the at least one server is able to deliver at least one asset response in response to a user interaction with at least one of the virtual assets. In some embodiments, delivering the at least one asset response to the user interaction with the plurality of virtual assets comprises: identifying response data based upon the output topic prediction; generating natural-language text from the response data; generating a response file; and playing the generated response file to the user. In some embodiments, delivering the at least one asset response to user interactions with the plurality of virtual assets includes: receiving feedback on whether the response answered the at least one question; and updating the machine-learning speech algorithm based upon the received feedback.
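Following the topic prediction, response delivery and the feedback loop could look like the sketch below. The canned response table and feedback log are hypothetical stand-ins for the response database and the model-update step; text-to-speech file generation is omitted.

```python
# Hypothetical response data keyed by predicted topic.
RESPONSES = {"medication": "The standard adult dose is listed on the chart.",
             "general": "Could you rephrase the question?"}

def deliver_response(topic):
    """Identify response data for the topic and return natural-language text."""
    return RESPONSES.get(topic, RESPONSES["general"])

feedback_log = []

def record_feedback(topic, answered):
    """Record whether the response answered the question, for later model updates."""
    feedback_log.append({"topic": topic, "answered": answered})

text = deliver_response("medication")
record_feedback("medication", answered=True)
print(text)
```

A production system would batch the feedback log and retrain or fine-tune the topic model rather than keeping it in memory.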

In some embodiments, the manipulation process may include: receiving a manipulation request; retrieving manipulation information; and creating a manipulated asset using the retrieved manipulation information. In some embodiments, the manipulation process also includes delivering the manipulated asset to the user.
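That request-retrieve-create flow maps naturally onto a lookup of per-action rules. The actions, fields, and rules below are illustrative assumptions, not details from the patent.

```python
# Hypothetical manipulation rules: action name -> function producing a new asset.
MANIPULATION_RULES = {
    "rotate": lambda asset, amount: {**asset, "rotation": asset["rotation"] + amount},
    "scale": lambda asset, amount: {**asset, "size": asset["size"] * amount},
}

def perform_manipulation(asset, request):
    """Retrieve the rule for the requested action and build the manipulated asset."""
    rule = MANIPULATION_RULES[request["action"]]
    return rule(asset, request["amount"])

cube = {"name": "cube", "rotation": 0, "size": 1.0}
rotated = perform_manipulation(cube, {"action": "rotate", "amount": 90})
print(rotated)
```

Returning a new dictionary rather than mutating the original keeps the base asset in the content library unchanged, so the manipulated copy can be delivered to one user without affecting others.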

The present disclosure includes a method of evaluating virtual reality interactions. The method comprises: determining a user's engagement with one or more virtual assets; receiving data indicative of an interaction with one or more of the virtual assets; determining the type of interaction associated with this data, where the interaction type includes at least a verbal interaction and a manipulation interaction; performing a speech capture and analysis process when the verbal interaction is identified; performing a manipulation process when the manipulation interaction is identified; and generating and delivering an evaluation of the user interactions with the one or more virtual assets.

In some embodiments, the method detects a user's gaze, and user engagement is determined based on the detected gaze. In some embodiments, the speech capture and analysis includes: recording at least one question from the user; generating a voice file from that question; converting it to text using a speech recognition program; extracting one or more key words from the question; and inputting those key words into a machine-learning speech algorithm. This machine-learning speech algorithm is able to predict the content of a response based on the key words inputted.

In some embodiments, the speech capture and analysis process includes selecting a machine-learning speech algorithm. In some embodiments, the machine-learning speech algorithm is specific to the at least one virtual asset for which data indicative of an interaction was received.

In some embodiments, the method can include delivering the at least one asset response when the user interacts with at least one of the virtual assets. In some embodiments, delivering the at least one response to user interactions with the plurality of virtual assets can include: identifying response data based upon the output topic prediction; generating natural-language text from the response data; creating a response file; and playing the generated response file to the user. In some embodiments, delivering the at least one asset response to user interactions with the plurality of virtual assets includes: receiving feedback on whether the response answered the at least one question; and updating the machine-learning speech algorithm based upon the received feedback. In some embodiments, the manipulation process comprises: receiving a manipulation request; retrieving manipulation information; generating a manipulated virtual asset using the retrieved manipulation information; and presenting the manipulated virtual asset to the end user.


