Scientists are developing new methods for quantifying the upper extremity reachable workspace (UERW) using artificial intelligence and markerless motion capture technology. Seth Donahue and J. D. Peiffer of Shriners Children's Lexington and the University of Kentucky Department of Physical Therapy, together with R. Tyler Richardson of Pennsylvania State University at Harrisburg, Yishan Zhong, Shaun Q. Y. Tan and Benoit Marteau of Georgia Institute of Technology, Stephanie R. Russo of Nationwide Children's Hospital, May D. Wang of Georgia Institute of Technology and Emory University, R. James Cotton of Shirley Ryan AbilityLab and Northwestern University, and Ross Chafetz of Shriners Hospitals for Children, among others, present research validating a single-camera approach to this assessment. The work is significant because it addresses the need for accessible, affordable biomechanical analysis tools, potentially lowering the barriers to quantitative upper extremity mobility assessment in clinical settings. The team's validation, which ran marker-based motion capture simultaneously with a novel monocular system across nine participants, demonstrates strong agreement with established methods and paves the way for practical, single-camera evaluations of the UERW.
Scientists have developed a streamlined method for quantifying upper extremity movement using artificial intelligence and a single camera. This innovation addresses longstanding challenges in clinical biomechanical analysis, traditionally reliant on complex and expensive marker-based motion capture systems. The research validates a new approach to assessing the Upper Extremity Reachable Workspace (UERW), the three-dimensional space a person can reach with their hand, using markerless motion capture driven by AI and a standard monocular camera.
This technique promises to make quantitative movement analysis more accessible for both clinicians and patients, potentially revolutionising rehabilitation assessments. The study focused on establishing the accuracy of this monocular markerless motion capture (MMC) system by comparing its measurements against the established gold standard of marker-based motion capture.
Nine participants performed a standardised UERW task within a virtual reality environment while being recorded simultaneously by a marker-based motion capture system and an array of eight FLIR cameras. Researchers analysed video from two of the camera angles, frontal and offset, to determine which configuration yielded the most reliable results. The frontal camera orientation proved particularly effective, demonstrating a high degree of consistency with the marker-based reference system.
Specifically, the frontal view exhibited a minimal mean bias of 0.61 ± 0.12% reachspace reached per octant, a measure of how accurately the system identifies reachable targets within the workspace. In contrast, the offset camera significantly underestimated the reachable workspace, highlighting the importance of camera positioning for accurate data acquisition.
While some depth-related inaccuracies were observed in the posterior regions of the workspace with the frontal camera, the overall performance demonstrates a clear pathway toward practical, single-camera assessments of upper extremity mobility. This work represents the first validation of a monocular MMC system for UERW assessment, paving the way for broader implementation of quantitative mobility analysis in clinical and home-based rehabilitation settings.
Frontal camera exhibits high-fidelity workspace measurement while an offset view introduces significant depth perception errors
The frontal camera configuration yielded a mean bias of 0.61 ± 0.12% reachspace reached per octant, demonstrating strong agreement with the marker-based motion capture reference standard. This minimal bias indicates that the system accurately quantified the percentage of reachable workspace across the six assessed octants, and the low standard deviation of 0.12% reinforces the consistency of measurements obtained from the frontal view.
Conversely, the offset camera view significantly underestimated the percent workspace reached, with a bias of −5.66 ± 0.45% reachspace reached per octant. This substantial negative bias reveals a systematic error in depth perception with the offset camera orientation; the standard deviation of 0.45% indicates the underestimation was relatively consistent, but consistency does not mitigate the overall inaccuracy.
Detailed analysis revealed that depth-related errors in the frontal configuration were largely confined to posterior octants, suggesting difficulty capturing depth in regions behind the participant. The offset view, however, exhibited inaccuracies in both contralateral and posterior octants, indicating a more pervasive problem with depth estimation and, potentially, anatomical occlusion.
These localised errors highlight specific areas for refinement in the monocular system's algorithms. The research establishes the feasibility of using a single frontal-facing camera to assess the upper extremity reachable workspace, particularly the anterior workspace, where alignment with the marker-based system was strongest. While posterior accuracy requires further improvement owing to depth estimation limitations, the overall performance supports the potential for practical, single-camera assessments in clinical settings. This work represents the first validation of a monocular markerless motion capture system for quantitative assessment of the UERW task.
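To illustrate how such localised, per-octant errors can be surfaced, here is a minimal sketch in Python. The octant labels, the synthetic numbers, and the 2-percentage-point tolerance are illustrative assumptions, not the authors' implementation or data:

```python
import numpy as np

# Hypothetical per-participant differences (MMC minus marker-based) in
# % reachspace reached, shaped (participants, octants).
# Synthetic numbers for illustration only -- not study data.
rng = np.random.default_rng(1)
diff = rng.normal(0.6, 1.0, size=(9, 6))
diff[:, 4:] -= 4.0  # inject larger errors into the two posterior octants

# Hypothetical octant labels; the paper's naming scheme may differ.
LABELS = ["ant-sup-ipsi", "ant-sup-contra", "ant-inf-ipsi",
          "ant-inf-contra", "post-sup-ipsi", "post-sup-contra"]

def localise_errors(diff, labels, tol=2.0):
    """Print per-octant mean bias and flag octants beyond +/- tol points."""
    for label, bias in zip(labels, diff.mean(axis=0)):
        flag = "  <-- review depth estimation" if abs(bias) > tol else ""
        print(f"{label:>15}: {bias:+6.2f} pp{flag}")

localise_errors(diff, LABELS)
```

A per-octant breakdown of this kind is what lets the analysis distinguish a frontal view whose errors cluster posteriorly from an offset view whose errors spread across contralateral regions as well.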
Virtual reality upper limb kinematics via markerless motion capture
Video of upper extremity movement was captured at 72 frames per second. Nine neurologically intact adult participants completed a standardised UERW task within a virtual reality environment, reaching for targets distributed across a virtual sphere centred on the torso. Kinematic data were acquired simultaneously from a marker-based motion capture system, serving as the reference standard, and from a multi-camera array of eight FLIR cameras.
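To make the task geometry concrete, here is a minimal sketch of how reach targets might be laid out on such a virtual sphere. The grid layout, radius, and axis conventions are illustrative assumptions rather than the study's actual protocol:

```python
import numpy as np

def sphere_targets(radius=0.6, n_azimuth=8, n_elevation=5):
    """Generate reach targets on a virtual sphere centred on the torso.

    radius: sphere radius in metres (illustrative; in practice it would
    be scaled to the participant's arm length). Returns an (n, 3) array
    in a torso-centred frame with x to the right, y up, and z forward.
    """
    az = np.linspace(-np.pi, np.pi, n_azimuth, endpoint=False)
    el = np.linspace(-np.pi / 3, np.pi / 2, n_elevation)  # below shoulder to overhead
    a, e = np.meshgrid(az, el)
    x = radius * np.cos(e) * np.sin(a)
    y = radius * np.sin(e)
    z = radius * np.cos(e) * np.cos(a)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=-1)

targets = sphere_targets()
print(targets.shape)  # (40, 3)
```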
To facilitate comparison between methodologies, researchers analysed video footage from two distinct camera orientations: a frontal view and an offset view. The core methodological innovation lay in applying AI-driven markerless motion capture (MMC) to the monocular video data. This technique bypasses the need for reflective markers, reducing the technical demands and associated costs of traditional motion analysis.
Monocular video, captured from a single viewpoint, poses inherent computational challenges for depth estimation, which the MMC system addresses algorithmically. The choice of MMC was driven by its potential to broaden access to quantitative upper extremity mobility assessment, particularly in clinical settings where resource constraints limit the use of marker-based systems.
Agreement between the MMC-derived data and the marker-based reference was quantified by comparing the percentage of reachable targets across six distinct workspace octants. This granular analysis allowed for precise identification of systematic biases or inaccuracies in the MMC system. The frontal and offset camera configurations were deliberately contrasted to determine the optimal camera placement for accurate UERW assessment. This comparative approach enabled a rigorous evaluation of the impact of camera angle on the reliability of the MMC-based measurements.
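As a hedged sketch of how such an agreement analysis might look in code, the snippet below assigns targets to octants, computes the percent reachspace reached per octant, and reports the mean bias between systems. The axis conventions, function names, and statistics are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def octant_index(p):
    """Octant 0-7 of a torso-centred point: bit 0 = right (x > 0),
    bit 1 = up (y > 0), bit 2 = forward (z > 0). Axes are illustrative."""
    return int(p[0] > 0) | (int(p[1] > 0) << 1) | (int(p[2] > 0) << 2)

def percent_reached_per_octant(targets, reached):
    """Percent of targets reached in each octant.

    targets: (N, 3) torso-centred target positions.
    reached: (N,) boolean, True if the system judged the target reached.
    Octants with no targets come back as NaN.
    """
    ids = np.array([octant_index(t) for t in targets])
    return np.array([100.0 * reached[ids == o].mean() if (ids == o).any()
                     else np.nan for o in range(8)])

def mean_bias(mmc_pct, ref_pct):
    """Mean bias and SD (MMC minus reference) over octants with data."""
    d = mmc_pct - ref_pct
    d = d[~np.isnan(d)]
    return d.mean(), d.std(ddof=1)

# Synthetic demonstration -- stand-in positions and outcomes, not study data.
targets = np.random.default_rng(2).normal(size=(100, 3))
mmc = percent_reached_per_octant(targets, np.random.default_rng(3).random(100) < 0.80)
ref = percent_reached_per_octant(targets, np.random.default_rng(4).random(100) < 0.85)
bias, sd = mean_bias(mmc, ref)
print(f"bias = {bias:+.2f} ± {sd:.2f} % reachspace reached per octant")
```

Applied per participant to the frontal and offset reconstructions and then aggregated, a computation of this form yields statistics such as those reported above, e.g. 0.61 ± 0.12% reachspace reached per octant for the frontal view.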
The Bigger Picture
Scientists have long sought to bring the precision of biomechanical analysis out of the laboratory and into everyday clinical practice. This work represents a genuine step towards that goal, demonstrating that accurate assessment of upper extremity reach, a crucial metric in rehabilitation and orthopaedics, is now achievable with a single camera and artificial intelligence.
For years, the reliance on complex and expensive marker-based motion capture systems has been a significant barrier, limiting access to detailed movement analysis for many patients and clinicians. The elegance of this approach lies in its simplicity; by leveraging advances in AI-driven markerless motion capture, it sidesteps the need for cumbersome setups and specialised expertise.
The demonstrated agreement with established marker-based systems, particularly when using a frontal camera view, is encouraging. However, it’s important to acknowledge that this isn’t a universal solution. The underperformance of the offset camera highlights the sensitivity of these systems to viewpoint, and further research is needed to optimise camera placement and algorithms for diverse clinical settings.
Moreover, the study focused on neurologically intact adults, and the accuracy of the system in patients with movement disorders or musculoskeletal impairments remains an open question. Looking ahead, the potential extends beyond simple reach assessment. Combining this technology with virtual reality environments, as the study's own task design already begins to do, could create immersive and engaging rehabilitation programs.
The next challenge will be to integrate this technology into existing clinical workflows, developing user-friendly software and automated analysis tools. Ultimately, the true measure of success will be whether this approach improves patient outcomes and expands access to high-quality biomechanical care.
👉 More information
🗞 Monocular Markerless Motion Capture Enables Quantitative Assessment of Upper Extremity Reachable Workspace
🧠 ArXiv: https://arxiv.org/abs/2602.13176
