Hybrid Kalman-Nonlinear AR State Estimation for Adaptive Museum Experiences
Abstract
We propose a hybrid state estimation framework for augmented reality (AR) systems in museum environments, addressing the challenge of dynamic lighting and texture interactions that degrade overlay realism. Conventional AR systems often rely on static models or heuristic adjustments, which fail to adapt to complex environmental variations. The proposed method integrates a Kalman filter for linear dynamics with a Gated Recurrent Unit (GRU) network for nonlinear effects, enabling simultaneous modeling of physical priors and data-driven adaptations. The Kalman filter processes sensor inputs such as ambient light and device pose, while the GRU learns latent representations of texture and reflectance variations. A learned attention mechanism dynamically fuses these estimates, weighting their contributions based on contextual features such as exhibit material properties. The fused output modulates shader parameters in real time, adjusting AR overlays to maintain visual coherence under varying conditions. The system is implemented on an edge-computing architecture built around a Qualcomm Snapdragon 8 Gen 3 SoC, achieving sub-20 ms latency for seamless immersion. The novelty lies in the explicit decoupling of linear and nonlinear dynamics, a departure from prior work that treats environmental variations as monolithic disturbances. This decoupling not only improves adaptation fidelity but also yields interpretable insights into lighting-texture interactions. Experimental validation in museum settings demonstrates significant gains in overlay realism and user immersion, confirming the framework's practical viability. The proposed hybrid estimator bridges physical modeling and machine learning, offering a scalable solution for adaptive AR experiences in dynamic environments.
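The pipeline described in the abstract (a Kalman update for linear dynamics, a GRU cell for nonlinear residuals, and a context-conditioned attention weighting that fuses the two estimates) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the scalar random-walk Kalman model, the untrained GRU weights, and the two-logit attention readout are all assumptions introduced here for clarity.

```python
import numpy as np

def kalman_update(x, P, z, Q=1e-3, R=1e-2):
    """One predict/update step of a scalar Kalman filter tracking, e.g.,
    ambient light intensity under a random-walk process model (assumed)."""
    P = P + Q                      # predict: process noise inflates covariance
    K = P / (P + R)                # Kalman gain
    x = x + K * (z - x)            # update estimate toward measurement z
    P = (1.0 - K) * P              # update covariance
    return x, P

def gru_cell(h, x, Wz, Wr, Wh):
    """Minimal GRU cell (illustrative, untrained weights; biases omitted)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ np.concatenate([h, x]))          # update gate
    r = sig(Wr @ np.concatenate([h, x]))          # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))
    return (1.0 - z) * h + z * h_tilde

def attention_fuse(est_kf, est_gru, context, w_ctx):
    """Softmax attention over the two estimators, conditioned on a context
    feature vector (e.g. exhibit material properties); returns the fused
    estimate and the attention weights."""
    scores = w_ctx @ context                      # one logit per estimator
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights[0] * est_kf + weights[1] * est_gru, weights
```

The fused scalar would then drive a shader parameter (e.g. overlay brightness); because the attention weights are a convex combination, the fused value always lies between the physics-based and learned estimates.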