The LaMAria test sequences can be used to evaluate the performance of SLAM algorithms on challenges typical of egocentric data. Here, we show the performance rankings of different methods across the test sequences. A method appears on the summary leaderboard only if it provides results for all sequences within a challenge. For per-sequence results within each challenge, select a challenge from the dropdown below. To view additional details about a specific method, click on its name in the leaderboard table.

Submissions made by the LaMAria team are marked with (r) to indicate reproduced results; for each of these methods, we submit one run out of the three from our own evaluation.

For our Aria's SLAM submission (shown in red to indicate a closed-source method), we do not report pose recall (R@5m) because our ground-truth pipeline is initialized with the Aria's SLAM solution.
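
The R@5m columns below report pose recall at a 5 m error threshold. As a rough illustrative sketch only (not the official LaMAria evaluation code; the function name and interface are hypothetical), such a recall can be computed as the fraction of estimated poses whose position error against ground truth stays below the threshold, assuming the two trajectories are already associated by timestamp and expressed in a common, aligned frame:

```python
import numpy as np

def pose_recall_at_threshold(est_positions, gt_positions, threshold_m=5.0):
    """Return the percentage of estimated poses within `threshold_m` of
    ground truth, assuming timestamp-associated, aligned trajectories."""
    est = np.asarray(est_positions, dtype=float)  # shape (N, 3)
    gt = np.asarray(gt_positions, dtype=float)    # shape (N, 3)
    errors = np.linalg.norm(est - gt, axis=1)     # per-pose position error [m]
    return 100.0 * np.mean(errors < threshold_m)  # recall in percent

# Hypothetical usage, given est_xyz and gt_xyz arrays of shape (N, 3):
# r_at_5m = pose_recall_at_threshold(est_xyz, gt_xyz, threshold_m=5.0)
```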

| Method | Info | Short score | Short R@5m | Medium score | Medium R@5m | Long score | Long R@5m | Low light score | Low light R@5m | Moving platform score |
|---|---|---|---|---|---|---|---|---|---|---|
| (r) Aria's SLAM | bino, imu | 90.7 | N.A. | 78.5 | N.A. | 70.9 | N.A. | 84.2 | N.A. | 55.0 |
| (r) OpenVINS+Maplab | bino, imu | 27.7 | 60.8 | 23.4 | 52.3 | 12.8 | 26.1 | 19.8 | 40.5 | 13.9 |
| (r) ORB-SLAM3 | mono, imu | 23.0 | 61.2 | 10.9 | 26.0 | 11.2 | 28.7 | 3.1 | 9.0 | 2.0 |
| (r) OKVIS2 | bino, imu | 20.0 | 50.0 | 11.6 | 27.9 | 2.6 | 1.4 | 14.5 | 33.0 | 4.7 |
| (r) OpenVINS+Maplab | mono, imu | 19.7 | 43.8 | 12.7 | 29.0 | 5.5 | 10.2 | 6.9 | 15.0 | 2.0 |
| (r) OpenVINS | bino, imu | 18.9 | 54.2 | 15.9 | 46.8 | 12.5 | 28.1 | 13.9 | 34.9 | 10.4 |
| (r) OpenVINS | mono, imu | 16.0 | 40.3 | 10.6 | 24.0 | 5.5 | 13.7 | 5.5 | 13.0 | 1.5 |
| (r) DPV-SLAM | mono | 8.0 | 14.1 | 8.8 | 15.6 | 3.9 | 8.7 | 3.0 | 6.8 | 2.3 |
| (r) DPVO | mono | 7.6 | 18.0 | 7.5 | 15.4 | 1.7 | 2.9 | 2.6 | 5.1 | 2.3 |
| (r) Kimera VIO | mono, imu | 6.8 | 10.1 | 8.3 | 18.7 | 7.2 | 17.2 | 3.3 | 7.8 | 3.6 |