Invented by Judy Sibille SNOW, Robert James SNOW
One of the most popular methods for providing feedback in physical therapy and sports medicine is through wearable technology. Devices such as fitness trackers, heart rate monitors, and smartwatches can provide real-time feedback on a patient or athlete’s performance, allowing them to adjust their training or therapy program as needed. These devices can also track progress over time, providing valuable data to both the patient and their healthcare provider.
Another popular method for providing feedback in physical therapy and sports medicine is through video analysis. This involves recording a patient or athlete’s movements and analyzing them to identify areas for improvement. Video analysis can be done in real-time or after the fact, and can be used to provide feedback on everything from posture and form to technique and strategy.
In addition to wearable technology and video analysis, there are also a variety of other methods for providing feedback in physical therapy and sports medicine. These include biofeedback, which uses sensors to monitor physiological responses such as muscle tension and heart rate, and virtual reality, which can be used to simulate real-world scenarios and provide feedback on performance.
Overall, the market for methods to provide feedback in physical therapy and sports medicine is growing rapidly, driven by the increasing demand for personalized, data-driven healthcare. As technology continues to advance, we can expect to see even more innovative feedback methods emerge, helping patients and athletes achieve their goals and improve their overall health and wellness.
The Judy Sibille SNOW, Robert James SNOW invention works as follows
A method is disclosed for visually capturing a person performing a set of exercise steps and then comparing that person’s performance against those steps to measure the results. Each exercise is tailored to the individual patient rather than to an “ideal” or “generic” standard. This flexibility allows the physical therapist to adapt to multiple problems and/or optimize treatment. The invention can be used by a physical therapist to guide rehabilitation exercises as a software medical product. The device can also be used to train athletes or for fitness under the supervision of a coach or trainer. It enhances communication by providing visual data, tracking results, and allowing the physical therapist to communicate with the patient (or the trainer with the athlete) and, optionally, the physician.
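The comparison step can be pictured with a minimal Python sketch. This is not the patent’s actual implementation; the joint-angle representation, the time-aligned sequences, and the per-joint tolerance (which the therapist would set when recording the patient-specific reference) are all illustrative assumptions:

```python
# Minimal sketch: score a patient's repetition against a patient-specific
# reference (not a generic "ideal"). All names and tolerances are illustrative.

def score_repetition(reference, performed, tolerance_deg=10.0):
    """Return the fraction of joint-angle samples that fall within the
    therapist-set tolerance of the patient-specific reference."""
    if len(reference) != len(performed):
        raise ValueError("sequences must be time-aligned to the same length")
    within = 0
    total = 0
    for ref_frame, got_frame in zip(reference, performed):
        for joint, ref_angle in ref_frame.items():
            total += 1
            if abs(got_frame.get(joint, float("inf")) - ref_angle) <= tolerance_deg:
                within += 1
    return within / total if total else 0.0

# Reference recorded by the therapist for THIS patient (elbow flexion, degrees)
reference = [{"elbow": 30.0}, {"elbow": 90.0}, {"elbow": 150.0}]
# What the patient actually performed at home
performed = [{"elbow": 35.0}, {"elbow": 70.0}, {"elbow": 148.0}]

print(score_repetition(reference, performed))  # 2 of 3 samples within 10 degrees
```

Because the reference is recorded from the patient herself, the same scoring code serves a post-surgery patient with limited mobility and an athlete refining technique — only the recorded reference changes.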
Background for Method for providing feedback to a patient in physical therapy or an athlete
Physical Therapy Background
Physical therapists currently provide great help to patients who have suffered accidents or injuries. A physical therapist usually works with a patient twice a week for 45 minutes each time, assessing the patient’s strength and range of motion during the session. The physical therapist then outlines a set of preferred exercises for the patient in order to gain more control over various muscle groups. Normally, the physical therapist will give the patient an exercise program to follow when the therapist is not present, and will often demonstrate and coach the patient in the execution of each exercise to ensure the patient fully understands it. The patient may then be asked to perform the exercise several times a day — for example, ten repetitions of a particular range of motion three times a day. The physical therapist determines whether the patient has increased her range of motion and prescribes more advanced exercise regimens to restore full mobility.
Physical Therapy Instruction
When a physical therapist or assistant is with a patient, they observe the quality and quantity of the therapeutic exercises. Often, the patient is given exercises to do on her own. Instructions are primarily paper sketches that show each step of the exercise, with modifications specific to the patient noted on the sketch. These instructions can be enhanced with two-dimensional photos or videos showing a model performing an exercise. Even so, the model represents a “generic” or “ideal” standard, not an exercise tailored to the patient, and therefore cannot provide patient-specific instruction. The physical therapist currently has no easy way to monitor the exercises the patient performs on her own.
Exercise or Fitness Instruction
When a person is learning an exercise, she tries to mimic an athlete, model, or group instructor performing it live, either privately or in a group class. A live instructor may be able to provide real-time feedback and make modifications to fit the individual student. Automated exercise feedback tries to replicate this real-time feedback using various devices, but the target remains an “ideal” standard.
Since Microsoft Corporation released the Kinect®, systems have been able to analyze the user’s movement in three dimensions and compare it to a preset goal. Harmonix Music Systems’ music video game Dance Central® is one example (see U.S. Pat. Pub. No. US 2012/0143358). Ubisoft Entertainment’s Your Shape: Fitness Evolved is another. Some examples are based on sports, such as reflecting the ball movement during tennis or the arm movement during the swing. In each case the goals are predetermined, and normalized comparisons are made against an “ideal” standard.
The Nintendo Wii is another example of video game technology that preceded the Kinect, with its Wii Remote and Wii Fit® Balance Board (see, e.g., EP 2356545). The patents cited show that rehabilitation video games based on this technology have been developed. The Nintendo and Sony gaming systems require the player to stand on a balance board sensor or to hold a sensor.
A sensor-based system is schematically illustrated in FIG. 1A. This system includes at least one sensor 110, which contains a color sensor, a depth sensor, an audio sensor, and an IR emitter. In one embodiment, an RGB camera 111 provides a three-color (Red Green Blue) image stream, while an infrared emitter 112 and an infrared sensor 113 together produce a depth image stream. These data streams allow a computer 130 to recognize objects within the camera’s field of view in three dimensions. A multi-microphone sensor 114 extracts and cancels ambient noise while parsing voice and sound input, delivering an audio stream. A processor with commercially available, proprietary software coordinates these input streams.
FIG. 1A also shows the display 120 and a computer 130. These may be combined into a single device, such as an all-in-one PC, laptop, or tablet; alternatively, the display 120 could be a television connected to the computer. A USB cable 101, HDMI cable 102, or other external or internal system bus provides physical connectivity. The software running on the computer 130 consists of system software and drivers 132, a NUI (Natural User Interface) Application Programming Interface 133, and one or more applications 134. A sophisticated, commercially available NUI software library and related tools help developers use the rich natural input coming from the sensor array to react to real-world events.
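The streams described above can be pictured as a simple data model. This is a Python sketch for illustration only — the actual Kinect SDK is a C#/C++ API, and the class and field names here are assumptions, not real SDK types:

```python
# Illustrative data model of the sensor streams (NOT the real Kinect SDK):
# sensor 110 delivers synchronized color and depth frames, which application
# code 134 consumes through an API layer 133.

from dataclasses import dataclass
from typing import List

@dataclass
class ColorFrame:
    width: int
    height: int
    pixels: bytes          # RGB triples, row-major

@dataclass
class DepthFrame:
    width: int
    height: int
    depths_mm: List[int]   # distance from sensor per pixel, in millimetres

@dataclass
class SensorFrame:
    timestamp_ms: int
    color: ColorFrame
    depth: DepthFrame

def nearest_point_mm(frame: SensorFrame) -> int:
    """Example consumer: find the closest object seen in the depth stream,
    ignoring pixels with no reading (encoded as 0)."""
    return min(d for d in frame.depth.depths_mm if d > 0)

frame = SensorFrame(
    timestamp_ms=0,
    color=ColorFrame(2, 2, bytes(12)),
    depth=DepthFrame(2, 2, [0, 1200, 950, 2400]),  # 0 = no reading
)
print(nearest_point_mm(frame))  # 950
```

The point of the sketch is the pairing: the color stream alone gives a flat image, while the per-pixel depth stream is what lets the computer 130 place objects in three dimensions.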
Microsoft’s Kinect® is a peripheral that serves as an external interface for Microsoft’s Xbox 360® or for computers. The Kinect and its associated computer program or Xbox can sense, recognize, and use the user’s human form to interact with media and software without the need for a controller, enabling a three-dimensional interface. The Kinect hardware is disclosed in U.S. Pat. No. 8,106,421; a general interactive display system is disclosed in U.S. Pat. No. 7,348,963.
Microsoft offers a proprietary layer of software for the sensor. Microsoft’s Kinect Software Development Kit (SDK) and open-source software libraries are both options for developers; in the following description, we will use the former.
Details of the Microsoft Kinect SDK environment are referenced on the Microsoft website, http://msdn.microsoft.com/en-us/library/jj131023.aspx (webpage visited on Nov. 16, 2012), and can be summarized as follows. Hardware 131: 32-bit (x86) or 64-bit (x64) dual-core 2.66-GHz or faster processor; 2 GB RAM; dedicated USB 2.0 bus; graphics card that supports DirectX 9.0c; Microsoft Kinect for Windows sensor. System software and drivers 132: Microsoft Windows 7 or 8, including the APIs for audio, speech, and media; DirectX end-user runtimes (June 2010); Kinect microphone array and DirectX Media Object (DMO); audio and video streaming controls (color, depth, and skeleton); device enumeration functions that enable more than one Kinect. Kinect NUI API 133: skeleton tracking, audio, and color and depth imaging.
Additional hardware 131, such as audio output devices, local data storage, or Internet connectivity, may be required by or beneficial to applications 134. To support this hardware, additional system software and drivers may be needed. The application 134 can also use other system software features, such as the Windows Graphical User Interface (GUI) libraries.
A sensor-based system can provide a framework for determining positional information about a user’s torso, capturing movement for analysis. There are many systems that capture motion using sensors. A system that combines a camera and a depth sensor can produce a three-dimensional skeleton by determining the positions of the body. Other systems attach transducers to the user’s body to detect limb positions and create a skeleton, or use multiple cameras, motion-tracking peripherals, or infrared devices to enhance the positional information.
As used in this document, the terms “joint”, “bone”, and “skeleton” are to be understood as they would be by someone skilled in the art of motion capture and animation. A skeleton, for example, can have bones, but their number and positions are determined by the motion-capture software and equipment; this may not match the number and position of bones an anatomist would recognize on a human skeleton. A joint can also be the distal tip of a bone (such as a fingertip, or the head), without necessarily being a point where two bones meet.
As schematically shown in the figure, a typical skeletal model 150 is made up of a series of joints (151 to 170) and the lines connecting them. The output of the model is a data structure containing coordinates describing each joint’s location, together with the connecting bone lines of a human body. U.S. Patent Application Publication US 2010/0197399 (Geiss) provides an example of skeletal model generation and representation.
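The data structure described above can be sketched as follows. The joint names and bone list are illustrative only — they are not the exact joint set produced by the Kinect SDK or any particular motion-capture system:

```python
# Sketch of the skeletal-model output: a coordinate for each joint plus the
# bones (joint pairs) connecting them. The joint set shown is illustrative.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Skeleton:
    # joint name -> (x, y, z) position in metres, in sensor coordinates
    joints: Dict[str, Tuple[float, float, float]]
    # bones as (parent joint, child joint) pairs
    bones: List[Tuple[str, str]]

    def bone_length(self, bone: Tuple[str, str]) -> float:
        """Euclidean distance between a bone's two joints."""
        (x1, y1, z1) = self.joints[bone[0]]
        (x2, y2, z2) = self.joints[bone[1]]
        return ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5

skel = Skeleton(
    joints={"shoulder": (0.0, 1.4, 2.0),
            "elbow":    (0.0, 1.1, 2.0),
            "wrist":    (0.0, 0.8, 2.0)},
    bones=[("shoulder", "elbow"), ("elbow", "wrist")],
)
print(skel.bone_length(("shoulder", "elbow")))  # approximately 0.3 m
```

Note that, as the preceding paragraph explains, the bone count in such a structure is set by the capture software, not by anatomy — a single “bone” here may span what an anatomist would treat as several.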