Invention for Augmented Reality platform and method of use

Invented by Lucas Kane Thoresen, Joshua Ian Cohen, SCRRD Inc

The market for augmented reality (AR) platforms is growing rapidly, with more businesses and consumers adopting the technology. AR overlays digital information onto the real world, enhancing the user's experience and providing a more immersive environment. It is used in various industries, including gaming, education, healthcare, retail, and marketing.

The AR market is expected to grow at a CAGR of 46.6% from 2020 to 2025, according to MarketsandMarkets. The increasing demand for AR technology in various industries is driving the growth of the market. The gaming industry is one of the major sectors that use AR technology. Games like Pokemon Go and Ingress have been successful in attracting a large number of users, and this has led to the development of more AR-based games.

The education sector is also adopting AR technology to enhance the learning experience. AR technology is used to create interactive textbooks, which provide students with a more engaging and immersive learning experience. AR technology is also used in museums and art galleries to provide visitors with a more interactive and informative experience.

The healthcare sector is also using AR technology to improve patient care. AR technology is used to create virtual simulations of surgeries, which help doctors to practice and improve their skills. AR technology is also used to create virtual reality therapy, which helps patients to overcome phobias and anxiety.

The retail industry is also adopting AR technology to enhance the shopping experience. AR technology is used to create virtual try-on rooms, which allow customers to try on clothes virtually. AR technology is also used to create virtual showrooms, which allow customers to see how furniture and other products will look in their homes.

The marketing industry is also using AR technology to create more engaging and interactive advertisements. AR technology is used to create interactive billboards, which allow customers to interact with the advertisement. AR technology is also used to create virtual product demonstrations, which provide customers with a more immersive experience.

In conclusion, the market for AR platforms is growing rapidly, and it is expected to continue to grow in the coming years. AR technology is being adopted in various industries, including gaming, education, healthcare, retail, and marketing. The increasing demand for AR technology is driving the growth of the market, and it is expected to create new opportunities for businesses and consumers alike.

The SCRRD Inc invention works as follows

The invention is an augmented reality platform and a method of using the same. In one embodiment, an array of positioning devices determines a perspective value for a physical object in a space using visual-inertial, radio wave, and acoustic positioning. A server derives a determined value for the object from the plurality of perspective values received from the array. The server's digital map and library maintain the locations and orientations of spatial experiential objects and physical objects, and this information is used to deliver the spatial experiential objects to an augmented reality device.
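
To make that data flow concrete, here is a minimal sketch in Python of how a server might fuse perspective values into a determined value. The patent does not specify the fusion math; the confidence-weighted average, the `PerspectiveValue` fields, and every name below are illustrative assumptions, not SCRRD's implementation.

```python
from dataclasses import dataclass

@dataclass
class PerspectiveValue:
    """One locationing device's estimate of a physical object's position."""
    device_id: str
    x: float
    y: float
    z: float
    confidence: float  # 0..1, the device's self-assessed estimate quality

def determined_value(perspectives: list[PerspectiveValue]) -> tuple[float, float, float]:
    """Server-side step: derive one determined value for the object from
    the plurality of perspective values received from the array."""
    total = sum(p.confidence for p in perspectives)
    # Confidence-weighted mean per axis; a per-axis median would be a
    # robust alternative for rejecting a faulty device.
    return (sum(p.x * p.confidence for p in perspectives) / total,
            sum(p.y * p.confidence for p in perspectives) / total,
            sum(p.z * p.confidence for p in perspectives) / total)
```

The resulting coordinates would then be written into the server's digital map, from which spatial experiential objects are served to AR devices.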

Background for Augmented Reality platform and method of use

The adoption of augmented reality and virtual reality has increased with the proliferation of smartphones, glasses, technology-enhanced lenses, and synchronized light matrices in society. Individual interactions with digital 2-dimensional (2D), 3-dimensional (3D), and/or 4-dimensional (4D) renderings have relied on personal interfaces with devices that capture their environment before injecting visual items rendered with code.

This individual relationship between users and digital renderings in the augmented reality environment exists because it is difficult to accurately place renderings within a digitally enhanced space. To create a sharable experience, complex equipment is needed to analyze the environment. Cameras alone cannot solve the problem of re-localizing digitally rendered objects, and the occlusion of objects is another barrier to a shared augmented reality experience between devices. Due to limitations in current augmented reality technology, there is a need to improve augmented realities with geospatial correlations of augmented objects.

It would be beneficial to develop systems and methods that provide augmented reality with geospatial correlations of augmented-reality objects, as this would enhance the functionality. It would be ideal to have a software and electrical engineering solution that provides enhanced geolocation in a platform-independent environment. An augmented reality platform, and a method of using the same, are provided to better address these concerns.

In one embodiment, a platform for augmented reality uses a location array to determine the perspective value of an object in a given space using visual-inertial, radio wave, and acoustic positioning. A server derives a determined value for the object from the plurality of perspective values received from the array. The server's digital map and library maintain the locations of spatial experiential objects and physical objects to be provided to an augmented-reality device.

The teachings herein, in one aspect, provide a mesh topography for network devices that synchronize geo-spatial correspondence(s) with at least one (1) mainframe digital graph using cells of a cubic meter or smaller. This mesh topography can be used through lenses and/or light matrix media which are technologically enhanced to enable AR and/or virtual reality. This implementation of unique 2D or 3D objects on shared networks allows connected devices to render visual media with topographic precision in real time for multiple people, privately or publicly. By sharing relative coordinates between devices or correlating servers, groups can monetize topographic networks for placing virtual items more precisely. Multi-dimensional renderings can include sub-graphs to process properties within a universal digital framework. The groups of devices connected to servers can form virtual objects that are placed within the original graph, subgraph, or sharable grids for topographic regions using a mobile application on iOS or Android.
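
As a rough illustration of the "cubic meters or smaller" grid idea, the sketch below keys virtual objects to integer voxel cells. The cell size, class names, and nested-subgraph layout are assumptions made for illustration only, not the patent's data structures.

```python
import math
from collections import defaultdict

CELL_METERS = 1.0  # "cubic meters or smaller": shrink for a finer mesh

def cell_key(x: float, y: float, z: float) -> tuple[int, int, int]:
    """Map a world coordinate to its grid cell."""
    return (math.floor(x / CELL_METERS),
            math.floor(y / CELL_METERS),
            math.floor(z / CELL_METERS))

class TopographicGraph:
    """A shared grid: each cell lists the virtual objects anchored in it."""
    def __init__(self) -> None:
        self.cells = defaultdict(list)
        self.subgraphs: dict = {}  # region name -> nested TopographicGraph

    def place(self, obj_id: str, x: float, y: float, z: float) -> None:
        """Anchor a virtual object at a world coordinate."""
        self.cells[cell_key(x, y, z)].append((obj_id, (x, y, z)))

    def nearby(self, x: float, y: float, z: float) -> list:
        """Objects in the cell containing (x, y, z): what a connected
        device would be asked to render for its current position."""
        return self.cells[cell_key(x, y, z)]
```

Because every device derives the same integer cell key from shared world coordinates, two devices looking at the same cubic-meter region agree on which virtual objects belong there.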

Hardware may synchronize network metrics in meshed topographies or an original graph by digitally placing virtual object coordinates that can be shared to chart accurate grid renderings. Multi-dimensional virtual objects in topographic regions can be correlated more precisely with each network-connected device through environmental analysis of renderings. The embodiments will reveal and clarify these and other aspects of the invention.

The present invention is a collection of inventive concepts that can be applied in many different contexts. The embodiments described herein are only illustrative and don’t limit the scope of this invention.

Referring to FIG. 1A, FIG. 1B, and FIG. 1C, a platform for augmented reality is schematically illustrated and designated 10. The space S is shown as a sitting room with an entrance E. The room is furnished with a bookcase B and seating F. Space S contains individuals I1, I2, and I3, who move into and out of the living room. A dog D and a virtual cat C are also within the room.

The augmented reality platform 10 has an array of locationing devices 12. These devices include camera nodes 14, 16, and 18, and a robotic node mounted on a collar. The locationing devices 12 each have a device identification that provides an accurate location within the space. Individuals I1, I2, I3, and I4 each use an augmented reality device 22: individual I1 uses a smart phone 24, individual I2 uses a smart phone 26, individual I3 uses smart glasses 28, and individual I4 uses a smart phone 30 with smart glasses 32. A server 40, as shown, is located remotely at a space of operations O. It is in communication with both the array of locationing devices 12 and the augmented reality devices 22. The augmented reality devices may include any wirelessly-enabled wearable or carryable device that is programmable, including but not limited to smart phones, tablets, laptops, smart watches, and smart pendants.

In one implementation, spatial locationing 50 is performed by each locationing device 12, including the augmented reality devices 22, based on signaling 52 that includes radio wave positioning, acoustic positioning, and visual-inertial measurement 54. Locationing 60 involves each locationing device 12, including the augmented reality devices 22, determining a perspective value relative to the geospatial position of the physical object.
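
The paragraph above names three positioning modalities without detailing how a device combines them. The sketch below shows one plausible combination: acoustic time of flight and a log-distance RSSI model each yield ranges, and a fixed-weight blend stands in for the unspecified fusion step. All constants, weights, and function names are hypothetical; a Kalman filter would be the more typical production choice.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def acoustic_range(time_of_flight_s: float) -> float:
    """Distance to an acoustic emitter from its time of flight."""
    return SPEED_OF_SOUND * time_of_flight_s

def radio_range(rssi_dbm: float, rssi_at_1m_dbm: float = -40.0,
                path_loss_exponent: float = 2.0) -> float:
    """Distance from received signal strength (log-distance model).
    rssi_at_1m_dbm and the exponent are assumed calibration values."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

def blend_estimates(visual_inertial, acoustic, radio,
                    weights=(0.6, 0.25, 0.15)):
    """Blend three (x, y, z) position estimates into one perspective
    value. Ranges from several anchors would first be trilaterated
    into the acoustic and radio estimates; the weights are illustrative."""
    w_v, w_a, w_r = weights
    return tuple(w_v * v + w_a * a + w_r * r
                 for v, a, r in zip(visual_inertial, acoustic, radio))
```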

As part of spatial locationing 50, the server 40 establishes a determined value 62 for the physical object using the perspective values received from the array of locationing devices and the augmented reality devices. The server 40 maintains a digital map 70 based on the values determined for the physical objects.

In one embodiment, a virtual object may include 2D and/or 3D shapes of various size(s), color(s), pattern(s), graph(s), sub-graph(s), grid(s), chart(s), dimension(s), multi-dimensional rendering(s), presence(s), atmospheric pressure(s), virtual walls, virtual ceilings, planar axis chart(s) formulation(s), bits, coordinated light spectrum(s), water droplet(s), cloud(s), cosmic ray(s), solar shock(s), fluid dynamics, iridescence(s), goniochromism(s), structural coloration(s), pearlescence(s), sound transmission(s), photon receptor(s), photonic crystal(s), wavelength vibration(s), dispersion(s), mirror(s), refractor(s), reflector(s), reflection(s), compiler(s), processor(s), converter(s), code(s), gamut(s), RGB coordinate(s) mixed approximation(s), distortion(s), Doppler shift(s), astronomical properties, spectroscopy, anisotropy, isotropy, bioluminescence(s), monitor(s), plane(s), emission line(s), absorption line(s), spectral line(s), plate(s), hue(s), coherencies, intensities, density, opacity, interference(s), interferometric microscopy, interactive interface(s), screen(s), touchable space(s), program(s), game(s), realistic item duplicate(s), vacuum vacancy, bendable energy source(s), synthetic-aperture imaging, off-axis-dark-field illumination technique(s), enhanced sensor(s), replication(s), encoded scene(s), superimposed wavefront(s), speckle(s), reciprocities, photographic plate(s), photoresist(s), photo-thermoplastic(s), photo-polymer(s), photorefractive(s), photorefractive crystal(s), nanocellulose(s), dichroic filter(s), semiconductor(s), semiconductor heterostructure(s), quantum well(s), plasma(s), embossing(s), electrodeposition(s), entangled particle(s), energy equipment(s), stereopsis, motion(s), gradient(s), color(s), laser(s), pointer(s), diode(s), coherence length(s), phase-conjugate mirror(s), phase shift(s), phase(s), scattering(s), render processing(s), prism(s), relay(s), modulate(s), amplitude(s), numerical aperture(s), visual depiction(s), augmented frequencies, radiation(s), parallax, joule(s), electron holography, transmission electron microscope(s), interference lithography, acoustic holography, source parameter(s), holographic interferometry, specular holography, dynamic holography, volume hologram(s), spatial resolution(s), pulse(s), reference plate(s), absolute value(s), angle(s), electronic field(s), selective higher resolution medium apparatus, emulsion(s), atomic mirror(s), atom optic(s), ridged mirror(s), atomic hologram(s), neutron expulsion(s), shadow(s), seismic activities, plate tectonic(s), quantum mechanical purpose(s), magnetic propulsion(s), parallel inception(s), nano-vibration(s), neural activities, anti-matter(s), anti-particle(s), anti-force(s), sub-atomic(s), atomic structuring(s), compound bond(s), organic chemical(s), polarity cycle(s), ionic spin(s), inter-dimensional fluctuation(s), covalent charge(s), jet stream(s), lenticular graphing(s), tomography, volumetric display(s), rear-projection(s), semi-transparency screen(s), illumination(s), force field(s), quantifiable hypothetical control barrier(s), meta-data(s), meta-molecular(s), meta-cellular, meta-conscious, meta-physic(s), meta-human characteristic(s), reverberation(s), radiation(s), optical microscopy, optical phase conjugation(s), optical computing(s), optical table(s), optical phenomenon(s), optic(s) beam(s) transgression(s), nonlinear optical material(s), lens trajectories, ambient diffusion(s), Fourier transform(s), diffraction grating(s), polarity current(s), magnetic data(s), photographic recording(s) of light magnetism(s) for sharable multi-dimensional vector(s), and/or various shapes with depth cue(s) calibrations of feature frames between devices formulating data analytics and/or physical characteristics interacting between network renderings of a topographic region's relatively connected device(s) and/or viewer(s) and/or mechanical and/or biological retina(s) association(s), adjacently synchronizing renderings to focal distance ratios when virtual objects interact with matter obstacles and/or renderings, framing characteristics of physical properties computed involving device-qualified retina(s), lens curvature(s), angle(s), direction(s), accelerometer(s), gyroscope(s), receiver(s), network topology, wavelength exposure(s), signal strength(s), size ratio(s), and/or continuous visual camera analysis for angle probabilities, alternate diagnostics, and/or captured surrounding(s), correlating precision accuracy meters scaling focal placing between dimensional processor volume limits distributed between adjacent parallel frames developing artificial intelligence (AI) improvements, without limiting the scope of this invention.

In one embodiment, the digital map 70 displays the same geospatial location graphed in two different realms, with the original grid acting as the virtual object(s) or obstacle(s). The system calculates predicted angles for the digital artificial device and the mathematically computed surroundings, which are stored on a server, in order to better analyze the scene. The analysis of the topographic grid is based on the real-world accurate angles captured by the physical devices. Virtual objects are placed relative to the physical obstacles and/or devices formulated in the scene. This formulation can use qualified retinas, lens curvatures and angle directions, accelerometers, network topologies, wavelength exposures, signal strengths, size ratios, and/or continuous camera analysis to determine where the topographic region of the real world and its surroundings may reside in the digitally-framed artificial environment. Geospatial collaborations are processed by an original server, accurately rendering objects relative to devices. The original server is used to replicate the results expected of the digital artificial devices, which are comparable with the actual results of physical devices harnessed by the connected device server. The same geospatial ratio diagnostic dissection and makeup can be placed on the original grid server. The server, or other calibrated servers, adjusts the spatial experiential objects in the topographical grids, which have been calibrated to ensure precision accuracy for more than one device. The servers display the shared rendering as it is placed by camera analysis in real time between devices. This shared experience lies along the surface-level positive subgraph X-axis and/or Y-axis charted regions capable of correlating virtual objects viewable by devices connected to networks that share multi-dimensional renderings. Multiple servers and parallel sub-graphs can compare grids between subgraphs or place subgraph renderings into further subgraphs, without limiting the scope of this invention.
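
One way to read the predicted-versus-captured angle comparison is as an angular residual between the server's mathematically computed scene and what a device's camera actually observes. The 2D sketch below illustrates that idea; the function names and the simple wrapped residual are assumptions for illustration, not the patent's method.

```python
import math

def predicted_bearing(device_xy, object_xy):
    """Angle (radians) the server expects the device's camera to see,
    computed from positions stored in the digital map."""
    dx = object_xy[0] - device_xy[0]
    dy = object_xy[1] - device_xy[1]
    return math.atan2(dy, dx)

def placement_correction(predicted: float, observed: float) -> float:
    """Angular residual between the computed scene and the real-world
    angle captured by the physical device; a server could apply this
    residual when re-rendering the virtual object for that device."""
    residual = observed - predicted
    # Wrap into (-pi, pi] so small errors are treated as small.
    return math.atan2(math.sin(residual), math.cos(residual))
```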

The relative ratios between spatial experiential objects 74 in the digital map are associated with specific frames before device camera analysis compares predicted frames to reality. The topographic regions compartmentalize the surrounding environment based on device-qualified retinas, lens curvatures and angles, accelerometers, distances, network topologies, signal strengths, size ratios, gyroscopes, fluid dynamic shadows, and/or continuous camera analysis of angles, alternate diagnostics, and/or captured environments to restructure the digital map 70. Each digital map is calibrated to take into account the differences between each environment using wave reverberation calculations. Virtual objects and spatial experiential objects that are associated with topographic region renderings can be shared with devices or networks within range. Hardware sensors are used to create a graph that is consistent with meshed topographic areas and the universal grid. Constants are calculated from topographic camera analysis or relative virtual object frames. At least one (1) matrix connected to server networks monitors the geo-spatial coordinates of virtual objects within at least one digital graph; this is a realm for pin-pointing multi-dimensional wave correlative(s) within device ranges. Software for topologic geospatial charts of devices allows accurate comparison between coordinates within a 3D environment to accompany camera analytics processing frame dimensions for virtual object adjacent alignment in calibrated parallel graph meters and/or a central grid system. In addition, by placing multi-dimensional virtual object correlations with planar coordinates on each axis, meshed topographic areas can be formed for device(s), network(s), and/or server rendering placements, without limiting the scope of this invention.

A digital library 72 is maintained by the server 40. The digital library 72 contains an augmented reality profile with spatial experiential objects 74 that enhance the physical objects 76 and the space: metaphysical objects 78, virtual objects 80, and metavirtual items 82. As an example, but not as a limitation, the metaphysical objects 78 could include augmented-reality viewable information 88 about the cleaning schedule for bookshelf B in the living room. The virtual cat C is an example of a virtual object 80, and the augmented-reality viewable information 92 regarding the ageing of the cat is an example of a metavirtual item 82.
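
The four object categories suggest a simple data model for the profile the server maintains. The sketch below is one hypothetical encoding; the class names, fields, and the `DigitalLibrary` container are illustrative and not taken from the patent, though the comments reuse the reference numerals from the example above.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialExperientialObject:
    """Base record for entries in the digital library (72)."""
    object_id: str
    position: tuple  # (x, y, z), kept in sync with the digital map (70)

@dataclass
class MetaphysicalObject(SpatialExperientialObject):
    """Non-visible information about a physical thing, e.g. the
    cleaning-schedule information (88) for bookshelf B."""
    viewable_info: str = ""

@dataclass
class VirtualObject(SpatialExperientialObject):
    """A rendered object with no physical counterpart, e.g. cat C."""
    model_uri: str = ""

@dataclass
class MetavirtualItem(SpatialExperientialObject):
    """Non-visible information about a virtual object, e.g. the
    ageing information (92) for virtual cat C."""
    viewable_info: str = ""

@dataclass
class DigitalLibrary:
    """Augmented reality profile maintained by the server (40)."""
    entries: dict = field(default_factory=dict)

    def register(self, obj: SpatialExperientialObject) -> None:
        self.entries[obj.object_id] = obj
```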

Physical objects are tangible objects that people can touch and see. They are the elements of the physical universe that people can detect using their five senses and interact with or inspect without additional technology, even though using technology such as a personal computer to study those elements would be faster and provide details that would otherwise not be visible. Metaphysical or metavirtual elements are those that require an extra-human detection process, either in a virtual or physical sense. A person may grab a buoy because they can see it, without any additional detection. Another might plug in a lamp and quickly turn it on and off, just as a test. The user now knows that there are several elements that cannot be directly detected, such as the fact that the switch and bulb are linked, and that the lamp won't turn on without the switch being flipped. It is also expected that the lamp will turn on next time. These more refined aspects of the lamp are metaphysical, or metavirtual, whereas the overt detection of the buoy without any calculations is what makes it physical or virtual.

With regard to the metaphysical items 78, there may be a great deal of information about each situation that is not immediately obvious or visible to the naked eye. It could be that a behavior is related to something else, or that the environment has aspects that, had the user known about them, would have led to a different decision. The metaphysical includes the layers of the physical world that are difficult to detect without the use of tools and techniques. Although it is possible to manipulate physical objects in order to influence metaphysical constructs, such elements are still considered "meta": the person must use a technique or tool outside their senses to discover the element. In the example above, a person may go to turn off the light and not realize that the lamp is still drawing electricity from when it was last turned on. The lamp may be visibly turned off but still emit an electromagnetic field that is weaker in strength, yet statistically significant, while not deviating far from its baseline. A person may swim up to the buoy to look at it or, in an emergency, to grab onto something. The buoy could be electronic, emitting a high-amplitude, low-wavelength radio signal. The buoy could be a source of information, and digital interaction with it would reveal details about the weather to come. Or, perhaps, an enhanced metaphysical view of its movement in the waves would yield a degree of freedom, such as the number of degrees the buoy moves in relation to the sky. If the buoy has its own sensor and historical data, a user could even compare the current waves to those seen in the past and make predictions about the weather.

Radio waves are a good example of invisible metaphysical entities. They are a very important part of our environment, yet there may be no way to detect them unless there is sufficient energy to feel them or a device that can use the waves in some capacity. The ability to map radio waves is limited for ordinary people: they may download software that creates a heatmap or an app that shows the strength of a specific signal. Truly mapping radio waves is only possible with a sensing network that has a high sensing density, meaning many sensors working simultaneously, each of which can estimate or measure radio signal details. A sensor equipped with an antenna of a certain type can also calculate the direction from which a radio signal originated in relation to its magnitude. A single vector (magnitude, direction) can help pinpoint the origin of a signal, and with a fleet of sensors, each determining a vector, the information can be combined to create not only a more accurate account of a signal but an accurate volumetric description of it. Tracing specific waves from their point of origin through multiple sensors can even account for the deflection of a wave off surfaces along the line of sight.

A neutrino from a chemical or outer-space reaction is another example of a metaphysical entity that is difficult to see. A scientific project called IceCube is underway in Antarctica, where an array of light-sensitive sensors is buried under the ice to detect space particles. IceCube is so named because it has layers upon layers of light sensors within the ice, arranged into a three-dimensional matrix. These sensors are drilled into the ice because, even though neutrinos are very small, they can still be detected when they collide with solid material and produce light. As a reference, gases such as hydrogen and oxygen are diatomic: their atoms are bound together in pairs in nature because they are more stable that way (e.g. H2 and O2). It is the low density of these gases that makes it difficult to detect neutrino discharges in air: gases have much higher energy levels than liquids or solids frozen in place, so there are fewer chances for a neutrino to discharge within confined air spaces than within confined areas of denser materials like frozen water.

In a similar context to the IceCube project, radio waves can provide a volumetric description of an otherwise invisible phenomenon. Just as IceCube helps visualize neutrino movements through ice, the space sensing platform can help visualize signal propagation in the air and other metaphysical phenomena that humans cannot experience directly. Scientists in Antarctica could use the platform to watch a neutrino discharge as if it were happening right in front of them; they could replay previous discharges or overlay them in order to see both events at once. A team of scientists studying wireless signals could arrange groups of sensors in the air to observe radio waves, or a swimmer approaching a buoy might use the system to see the angular movement of the waves. These invisible components of visible objects are called the metaphysical.
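
The passage above describes each sensor producing a bearing vector and a fleet combining those vectors to localize a source. A standard way to do that combination is a least-squares intersection of the bearing rays. The 2D sketch below is illustrative only and assumes ideal, noise-free bearings; the patent does not prescribe this particular estimator.

```python
import math

def locate_source(sensors):
    """Least-squares intersection of 2D bearing rays.
    sensors: list of ((x, y), theta) pairs - each sensor's position and
    the bearing (radians) toward the signal source it measured."""
    # Solve A x = b with A = sum(I - d d^T) and b = sum((I - d d^T) p),
    # which minimizes the summed squared distance from x to every ray.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), theta in sensors:
        dx, dy = math.cos(theta), math.sin(theta)
        m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += m11
        a12 += m12
        a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two sensors at (0, 0) and (10, 0), both bearing on a source at (5, 5):
print(locate_source([((0.0, 0.0), math.atan2(5, 5)),
                     ((10.0, 0.0), math.atan2(5, -5))]))
```

With more sensors, the same accumulation step simply gains terms, which is how a fleet of vector measurements tightens the volumetric picture of a signal.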

The phenomenon known as a "ghost" is a controversial example of a metaphysical item. Whether ghosts exist or not is irrelevant for this argument; a large portion of the population does believe in ghosts for different reasons. The term is hard to define because some people think ghosts have a spiritual nature and are perhaps the spirits of dead loved ones, while others may believe ghosts to be electromagnetic phenomena or linked to the idea of a collective conscious. Whether or not ghosts are a real thing, it is possible to classify something as invisible and still ascribe to it metaphysical properties that make it classifiable. The classification of "ghost" and the resulting baseline can be a widely accepted one. It could sound like: "A detectable supernatural phenomenon that appears to have metaphysical characteristics for which there is no definitive explanation."

Virtual objects are data-based constructs that can be detected virtually by the user. They may or may not be related to real places and things. In the context of augmented reality, they are often used to enhance perceptions of the physical world. Virtual objects can be used to represent physical objects even if the original object is no longer there or never existed. If society decides life is "virtual", then virtual and metavirtual constructions may be comparable in nature to physical and metaphysical structures. The system could draw radio waves and illuminate the rays as if it were a ray tracer, in order to make the waves visible to the user. The user could also see the exact angular movement of the buoy, which would make the metaphysical constructs seem more real and predictable. Virtual constructs make objects more accessible to the five senses of the user. In a virtual environment that is truly immersive, users can interact with virtual objects just as they would if they were physical; virtual spaces can be seen, felt, and even smelt. As video games have demonstrated, virtual objects can be created in a space separate from the physical: things we wished existed but never did, a place to escape to. Talking to non-player characters or artificially intelligent beings, such as virtual representations, could be possible. People, places, and things are present whether we're talking about real or virtual space, and they can be equally important to the user.

This brings us to the metavirtual objects 82. Virtual spaces can also have metavirtual properties: behaviors, non-visible features, and relationships that are otherwise undetectable. For the system to be used as augmented or virtual reality, it is important that such immersive features are also present. Imagine there were a speed limit for the virtual world that governed how fast certain aspects could affect distant parts of that world; this rule would govern how fast changes can spread throughout the virtual field. In physics this would be the speed of light, but it is important to remember that virtual worlds are connected. Now imagine that a certain aspect of the world is able to propagate beyond that set speed limit. Quantum entanglement is a similar physical phenomenon: in physical space, entangled particles are connected in a way that appears to act faster than light, and this connectedness between particles has a metaphysical element, because what happens to one seems to also happen to the other and vice versa. A virtual object may likewise have properties that are explainable with physical concepts but also show a connection that is not visible in the virtual world. This is sometimes a result of the way it was designed, but in some cases the connectedness exists simply to enhance the user's experience.
