Invention for Virtual blaster

Invented by Michael P. GOSLIN, Jonathan R D Hsu, Joseph L. OLSON, Timothy M. PANEC, Katherine M. Bassett, Disney Enterprises Inc

Virtual blasters have become increasingly popular in recent years, with a growing market of gamers and enthusiasts seeking out the latest and greatest in virtual reality weaponry. These high-tech devices offer a unique and immersive gaming experience, allowing players to feel like they are truly in the game and engaging in intense battles with their opponents.

The market for virtual blasters is diverse and constantly evolving, with new products and innovations being introduced all the time. Some of the most popular platforms for virtual blaster experiences today include the Oculus Quest 2, the HTC Vive, and the PlayStation VR. These platforms offer a range of features and capabilities, from advanced tracking and motion-sensing technology to high-quality graphics and immersive sound.

One of the key factors driving the growth of the virtual blaster market is the increasing popularity of virtual reality gaming. As more and more gamers seek out new and exciting ways to experience their favorite games, virtual blasters have emerged as a top choice for those looking to take their gaming to the next level. These devices offer a level of immersion and realism that simply cannot be matched by traditional gaming controllers or keyboards.

Another factor driving the growth of the virtual blaster market is the increasing availability of affordable and high-quality virtual reality technology. As the cost of VR devices continues to come down, more and more gamers are able to afford these cutting-edge devices and experience the thrill of virtual reality gaming for themselves.

Despite the many benefits and advantages of virtual blasters, there are also some challenges and limitations to consider. For example, some gamers may find that the physical strain of using a virtual blaster for extended periods of time can be tiring or uncomfortable. Additionally, the cost of these devices can be prohibitive for some gamers, particularly those who are just starting out in the world of virtual reality gaming.

Overall, the market for virtual blasters is a dynamic and exciting space, with new products and innovations being introduced all the time. Whether you are a seasoned gamer or just starting out in the world of virtual reality, there is sure to be a virtual blaster out there that is perfect for you. So why not take the plunge and experience the thrill of virtual reality gaming for yourself? With the right virtual blaster, you can take your gaming to the next level and experience a whole new world of immersive, high-tech fun.

The Disney Enterprises, Inc. invention works as follows

Embodiments provide techniques for automating control of electronic devices. A first interactive game is detected being played within a physical environment that contains one or more electronic devices. Embodiments collect electromyogram data from one or more electromyography sensors in the physical environment and analyze that data to determine the user’s muscle activations. When the muscle activations match a predefined electromyography pattern, embodiments determine a corresponding gameplay action and transmit an instruction to at least one of the electronic devices to perform it.

Background for Virtual blaster

Field of Invention

The invention relates generally to entertainment systems and, more specifically, to techniques for controlling an enhanced-reality game using a user’s muscle movements.

Description of Related Art

Computer graphics technology has advanced significantly since the first video games were created. Relatively inexpensive 3D graphics engines now provide interactive gaming on handheld devices, home consoles, and personal computers. These systems typically include a hand-held controller, game controller, or integrated controller, which the user operates to send instructions or commands to the system and thereby control a video game or simulation. The controller may include buttons and a joystick operated by the user.

Video games let the user interact directly with the gaming system, but such interactions primarily influence the graphical depiction on the device or on a connected display and have little effect on objects outside the virtual world. For example, the user can indicate to the system that their avatar should jump, and in response the system displays the avatar jumping. These interactions are usually confined to the virtual realm; feedback in the physical world is typically limited to, for example, a handheld gaming device vibrating when certain actions occur.

Game developers can now create games using modern technologies such as augmented reality devices. With such technologies, virtual characters and other virtual objects can appear to be present in the real world. To maintain the illusion of augmented reality, it is preferable to render virtual characters with realistic dimensions and positions.

Embodiments provide a system, a method, and a non-transitory computer-readable medium for automated control of electronic devices. They include detecting a first interactive game being played within a physical environment using one or more electronic devices, collecting electromyogram data from one or more electromyography sensors within the physical environment, and analyzing the electromyogram data to determine the user’s muscle activations. Upon determining that the muscle activations match a predefined electromyography pattern, embodiments determine a gameplay action corresponding to that pattern and transmit an instruction to at least one electronic device in the physical environment, instructing it to perform the determined gameplay action.
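The patent does not specify an implementation, but the control loop it describes can be illustrated with a minimal sketch. The names below (GAME_ACTIONS, match_score, process_emg_window, the correlation threshold) are hypothetical stand-ins, not part of the invention.

```python
# Minimal sketch of the EMG-driven control loop described above; all names and
# thresholds are illustrative assumptions.
import numpy as np

# Predefined electromyography patterns mapped to gameplay actions.
GAME_ACTIONS = {
    "fist_clench": "fire_blaster",
    "wrist_flick": "reload",
}

MATCH_THRESHOLD = 0.85  # similarity required to accept a pattern match

def match_score(sample: np.ndarray, template: np.ndarray) -> float:
    """Normalized correlation between an EMG window and a stored template."""
    sample = (sample - sample.mean()) / (sample.std() + 1e-9)
    template = (template - template.mean()) / (template.std() + 1e-9)
    return float(np.dot(sample, template) / len(sample))

def process_emg_window(window: np.ndarray, templates: dict, send_instruction) -> None:
    """Compare one window of EMG samples against each predefined pattern and,
    on a match, send the corresponding gameplay action to an electronic device."""
    for pattern_name, template in templates.items():
        if match_score(window, template) >= MATCH_THRESHOLD:
            send_instruction({"action": GAME_ACTIONS[pattern_name]})
            break
```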

Generally speaking, the embodiments described herein offer techniques for controlling devices in a synchronized fashion. Embodiments can determine that one or more environmental devices exist in a given physical environment; such devices might include temperature control devices, window coverings, and illumination devices. A first instance of audiovisual content being played in the physical environment is detected. Audiovisual content includes, but is not limited to, video content, augmented reality content, and streaming movie content. Embodiments determine an environmental condition at a first playback position of the audiovisual content. For example, embodiments could analyze metadata for the video content instance to determine that the video frames depict a dark, snowy arctic scene. Based on the environmental condition, embodiments could control one or more of the environmental devices in the physical environment during playback. A temperature control device, for example, could be directed to lower the temperature in the environment, and illumination devices could be controlled to reduce the intensity of their illumination. This enhances the user experience while viewing the audiovisual content.
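A short sketch of that mapping from a detected scene condition to device commands is shown below. The device APIs (set_target, set_intensity) and the condition labels are hypothetical, since the patent leaves the device interfaces unspecified.

```python
# Sketch of mapping an environmental condition detected from content metadata
# to commands for environmental devices; APIs and values are assumptions.
def apply_environmental_condition(condition: str, thermostat, lights) -> None:
    """Adjust physical devices to match the environment at the current
    playback position (e.g. a dark, snowy arctic scene)."""
    if condition == "arctic_night":
        thermostat.set_target(celsius=17)      # lower the room temperature
        for light in lights:
            light.set_intensity(0.2)           # dim the illumination
    elif condition == "desert_day":
        thermostat.set_target(celsius=24)
        for light in lights:
            light.set_intensity(0.9)
```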

Particular embodiments described in this document provide an immersive storytelling environment in which a story is played back using multiple interactive devices. To create an immersive, interactive storytelling experience, embodiments can use a variety of storytelling devices capable of producing different visual and auditory effects. Such a system could include several storytelling devices and a controller connected through a network (e.g. an RF communication network). A storytelling device is any device that can enhance a storytelling experience by responding to user input or other stimuli. The controller device could configure the storytelling devices to respond to a given story context; for example, it could set up a specific storytelling device to produce one audiovisual effect in response to a stimulus event (e.g. the user performing a given action) and a different effect in response to another stimulus (e.g. the user not performing that action within a time limit). The controller can be either one of the plurality of storytelling devices or a standalone device (e.g. a computing device that executes a control program).
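The stimulus/timeout configuration described above could look roughly like the following sketch; the class name, fields, and polling loop are illustrative assumptions rather than anything specified by the patent.

```python
# Sketch of a controller configuring a storytelling device with a stimulus
# response and a timeout fallback; names and timing are assumptions.
import time

class StorytellingDeviceConfig:
    def __init__(self, on_stimulus: str, on_timeout: str, timeout_s: float):
        self.on_stimulus = on_stimulus  # effect to play if the user acts
        self.on_timeout = on_timeout    # effect to play if they do not act in time
        self.timeout_s = timeout_s

    def run(self, user_acted) -> str:
        """Poll for the configured stimulus; fall back to the timeout effect."""
        deadline = time.time() + self.timeout_s
        while time.time() < deadline:
            if user_acted():            # callable returning True once the stimulus occurs
                return self.on_stimulus
            time.sleep(0.05)
        return self.on_timeout
```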

Each storytelling device can have different audio output, audio processing, and storage capabilities. A first device might have multiple higher-quality speakers and greater audio processing and storage capacity, while another device might have only a single speaker and very limited audio processing. Because higher-quality speakers, processing, and storage resources are more costly, some storytelling devices can be manufactured more cheaply than others.

As such, audio effects may sound better when played on a storytelling device with more capable hardware than on a device with less capable hardware. Embodiments can tailor audio output to the needs of the story, so that the device best suited to delivering a particular sound effect is chosen. For example, a particular audio effect could represent the voice of a robotic assistant that gives instructions or updates to the player during the story. Because the story’s robotic assistant character is not physically represented by any storytelling device, it may be appropriate to output its voice through any of the storytelling devices. Embodiments could choose the storytelling device best suited to output the audio effect with the highest quality (e.g. the device with the best storage, processing, and speaker capabilities for that specific effect) and instruct that device to do so. These devices can also be chosen dynamically during the story, for example when a new device is brought into the area where the story is being told, or when a device loses battery power.
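One way to picture that selection step is the scoring sketch below. The capability fields and the weighting are assumptions made for illustration; the patent only requires that the best-suited device be chosen.

```python
# Sketch of choosing the best-suited storytelling device for an audio effect,
# scoring each device by speaker, processing, and storage capability.
def pick_output_device(devices, effect_size_kb: int):
    """Return the highest-scoring device that can store the effect, or None."""
    def score(dev):
        return 3 * dev["speaker_quality"] + 2 * dev["cpu"] + dev["free_storage_kb"] / 1024

    candidates = [d for d in devices if d["free_storage_kb"] >= effect_size_kb]
    return max(candidates, key=score, default=None)

# Example: the action figure wins over the single-speaker toy blaster.
devices = [
    {"name": "action_figure", "speaker_quality": 0.9, "cpu": 0.8, "free_storage_kb": 4096},
    {"name": "toy_blaster", "speaker_quality": 0.3, "cpu": 0.2, "free_storage_kb": 512},
]
best = pick_output_device(devices, effect_size_kb=800)  # -> the action_figure entry
```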

Additionally, in some cases the storytelling devices can work together to produce a specific sound effect. A single sound effect can be output on multiple devices at once so that the user experiences stereophonic, surround-sound playback, and a slight delay may be introduced in the playback on some of the devices to avoid phase cancellation between their outputs. Alternatively, the devices can be configured to emit the sound effect with a longer delay between them so that each output is heard distinctly. For example, in a story where the user is placed in a beehive and the different storytelling devices produce the sounds of bees buzzing, each device could output the buzzing effect with a delay between each device’s output, making the sound appear to move around the environment. Placing the user between the different storytelling devices in this way can create an immersive auditory experience and enhance the realism of the story.
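The bee-hive example amounts to staggering one effect across devices; a minimal sketch follows, with the delay values chosen arbitrarily for illustration.

```python
# Sketch of staggering one sound effect across several devices so it appears to
# move around the room (the bee-hive example); timing values are illustrative.
def schedule_buzzing(devices, base_delay_s: float = 0.25):
    """Return (device, start_offset_seconds) pairs with increasing playback delays."""
    return [(dev, i * base_delay_s) for i, dev in enumerate(devices)]

# e.g. schedule_buzzing(["blaster", "figure_a", "figure_b"])
# -> [("blaster", 0.0), ("figure_a", 0.25), ("figure_b", 0.5)]
```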

Additionally, embodiments can include augmented reality devices along with the various storytelling devices as part of an augmented reality gaming environment. An augmented reality device is any device that can display a real-time view of a real-world environment while altering elements within that view. Unlike a virtual reality device, an augmented reality device displays a view of the real world but augments its elements using computer graphics technology. An augmented reality device could include a camera (or multiple cameras) that captures a view of the real-world environment, along with hardware and software to augment the captured scene. For example, an augmented reality device might capture images of a coffee cup sitting on a table and modify them in real time so that the cup appears animated; looking at the augmented reality device, the user then sees an augmented view of their real-world surroundings.

Additionally, software could identify a first physical object in the visual scene captured by the camera devices of an augmented reality device. For example, embodiments could analyze the visual scene to determine the border edges of objects and use those border edges to identify the physical objects present within the scene. Notably, the visual scene captured by the augmented reality device represents a three-dimensional space, and embodiments may calculate the three-dimensional space occupied by each physical object within the captured scene. The software could also calculate the three-dimensional surfaces and dimensions of the physical objects in the captured scene.
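As a rough sketch of the border-edge step only, the snippet below finds candidate object outlines in a single camera frame with OpenCV; the thresholds and minimum area are assumptions, and a real AR pipeline would combine this with depth data to recover the three-dimensional extent the paragraph describes.

```python
# Sketch of finding object border edges in a captured frame (OpenCV 4.x);
# parameter values are illustrative assumptions.
import cv2

def find_object_borders(frame_bgr):
    """Return bounding boxes of candidate physical objects in a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # detect border edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```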

In response to detecting an object in the visual scene, the software can render one or more virtual characters or objects based upon the object’s appearance in the captured frames. The augmented reality software can create a three-dimensional representation of the physical environment and create a virtual character or object to insert into that representation. Based on how the physical object appears in the captured frames, the software could place the virtual character or object at a specific location within the three-dimensional scene. For example, using data about the size, shape, and appearance of the physical object within the captured frames, the software could determine that the physical object rests on a particular surface in the physical environment (e.g. a table surface or a floor). After identifying that surface, the software can position the virtual character or object within the scene so that it rests on the surface, creating a more realistic experience for the user.
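In scene-graph terms, the placement step reduces to anchoring the virtual node at the detected support plane, as in the sketch below; the character object and its set_position method are hypothetical stand-ins for an AR engine’s API.

```python
# Sketch of resting a virtual character on a detected support surface;
# the scene/node API is a hypothetical stand-in for an AR engine.
def place_on_surface(character, surface_height_y: float, position_xz: tuple) -> None:
    """Anchor the character so its base sits on the detected table/floor plane."""
    x, z = position_xz
    character.set_position(x, surface_height_y, z)  # assumes a y-up coordinate convention
```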

The software could also scale the virtual character or object based on how the physical object is depicted within the captured frames. For example, the software could store predefined geometric data about the physical object, including its shape and dimensions, and use this information to determine the size of the virtual object or character in the three-dimensional scene. As an example, suppose the physical object is a sphere 12 inches in diameter. The software could calculate a scale for the virtual object using the dimensions of the physical object as it appears in the captured frames together with the predefined geometric data. As another example, the software could create a virtual avatar and scale it to life-size dimensions (e.g. the size of an average person) by using the dimensions of the physical object within the frames and the predefined geometric data specifying the physical object’s known dimensions. This allows the augmented reality software to create a consistent and realistic depiction of the virtual object or character.
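The scaling calculation is essentially a pixels-per-inch ratio taken from the reference object, as in the simplified sketch below; the function name, the single-ratio simplification, and the example numbers are assumptions for illustration.

```python
# Sketch of deriving a scale for a virtual character from the known size of a
# recognized physical object (the 12-inch sphere) and its apparent size in the frame.
def compute_virtual_scale(known_diameter_in: float,
                          apparent_diameter_px: float,
                          desired_height_in: float) -> float:
    """Pixels-per-inch from the reference object gives the on-screen size that
    makes the virtual character appear life-size at the same distance."""
    pixels_per_inch = apparent_diameter_px / known_diameter_in
    return desired_height_in * pixels_per_inch  # character height in pixels

# e.g. a 12 in sphere spanning 240 px -> 20 px/in; a 66 in avatar renders at 1320 px.
```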

The augmented reality device can render frames of the three-dimensional scene interlaced with the frames captured by its camera sensors, in real time, as the device (and user) move through the physical environment. This gives the user a more immersive augmented reality experience. The user can also paint on objects in the world; even as the environment is viewed from different angles, the user’s painting remains fixed to the physical environment.

An example will now be discussed with regard to FIG. 1, which illustrates a playtime environment in which an interactive device is instructed by a controller to perform a specific effect, according to one embodiment. The system 100 includes an action figure 110, a toy blaster gun 115, an action figure 120, and a controller device 125. The toy devices 110, 115, and 120 are capable of producing audiovisual effects (e.g. audio output, light effects, or vibrations). In one embodiment, the toys 110, 115, and 120 are each equipped with an action disk device (e.g. the device 200 in FIG. 2, discussed below). While various examples are discussed with regard to the toy devices 110, 115, and 120, it is broadly contemplated that these techniques can be used with other devices and types of devices, consistent with the functionality described herein.

Although the toys 110, 115, and 120 are capable of producing audiovisual effects, they may not contain the logic to determine when a specific effect should be performed. This could be partly due to the complexity and cost of configuring each of the toys 110, 115, and 120 with the logic and hardware resources needed to detect stimulus events and perform an appropriate audiovisual effect in response. Instead, the toys 110, 115, and 120 can be configured to receive commands (e.g. from the controller 125) and to respond with a corresponding audiovisual effect. This allows the toys 110, 115, and 120, along with their audiovisual effect capabilities, to be manufactured more economically.

For example, a storyline could dictate that devices in the physical environment play a specific sound effect when the user uses the “force” (e.g. performs a predefined gesture). The controller 125 could track the user’s behavior and detect when the user has performed the predefined gesture. For example, the controller 125 could use one or more cameras (e.g. within the controller device 125, within one of the toys 110, 115, or 120, etc.) to monitor the user’s movements within the physical environment. As another example, the user could wear a bracelet equipped with an accelerometer that transmits movement data to the controller device 125.
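A minimal sketch of detecting such a gesture from bracelet accelerometer data follows; the spike threshold, window shape, and function name are illustrative assumptions, not anything the patent specifies.

```python
# Sketch of detecting a predefined "force push" gesture from wrist-bracelet
# accelerometer samples; threshold and window size are assumptions.
import numpy as np

def detect_push_gesture(accel_samples: np.ndarray, threshold_g: float = 2.5) -> bool:
    """Return True if a sharp acceleration spike appears in the sample window.
    accel_samples: shape (N, 3) array of (x, y, z) readings in units of g."""
    magnitudes = np.linalg.norm(accel_samples, axis=1)
    return bool(np.max(magnitudes) > threshold_g)
```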

The controller 125 can then broadcast a command to the toys 110, 115, and 120, instructing them to perform an audiovisual effect. This command could be transmitted, for example, by a radio-frequency transmitter or an infrared emitter. Generally, any communication protocol can be used for communication between the controller and the toy devices 110, 115, and 120, consistent with the functionality described herein.
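For concreteness, the sketch below shows one such broadcast over UDP; the port, payload format, and transport are assumptions, since the patent allows any communication protocol (RF, infrared, etc.).

```python
# Sketch of the controller broadcasting an effect command that every listening
# toy (110, 115, 120) can act on; transport and payload format are assumptions.
import json
import socket

def broadcast_effect(effect: str, port: int = 5005) -> None:
    """Send one play-effect command to all toys on the local network segment."""
    payload = json.dumps({"cmd": "play_effect", "effect": effect}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))

# broadcast_effect("force_push_rumble")
```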


