Executive Summary
Applied a deep-learning Super-Resolution (SR) algorithm to low-channel (8ch) LiDAR data, upsampling it toward high-channel (128ch) Teacher data. Compared against the Teacher, the SR output dramatically reduces distance errors and restores structural detail.
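As context for the 8ch-to-128ch setting, the sketch below frames a LiDAR sweep as a range image (vertical channels x azimuth bins) and applies a naive linear-interpolation upsampler as a baseline. The shapes and the interpolation step are illustrative assumptions; the report's actual method replaces this step with a learned SR network.

```python
import numpy as np

# Assumed layout: an 8ch LiDAR sweep as a range image of
# 8 vertical channels x 1024 azimuth bins (not the authors' exact format).
low = np.random.default_rng(0).uniform(1.0, 100.0, size=(8, 1024))

def upsample_rows(img, factor):
    """Naive vertical linear interpolation (8ch -> 128ch) as a baseline.
    The learned SR model would replace this step in the real pipeline."""
    rows, _ = img.shape
    out_rows = rows * factor
    # positions of output rows expressed in input-row coordinates
    pos = np.linspace(0, rows - 1, out_rows)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, rows - 1)
    w = (pos - lo)[:, None]
    return (1 - w) * img[lo] + w * img[hi]

high = upsample_rows(low, 16)
print(high.shape)  # (128, 1024)
```

A learned model is evaluated against this kind of interpolation baseline precisely because interpolation can only smooth between existing beams, while SR can infer structure the 8ch scan never sampled.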
Distance-wise MAE Reduction
MAE dropped significantly across all ranges (0-100m+). Notably, error beyond 100m fell from 12.18m to 1.75m.
Structural Similarity Enhancement
Cosine similarity exceeded 0.8 in every distance bin, indicating genuine object-shape restoration beyond simple distance correction.
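The cosine-similarity metric itself can be sketched as below. How the report vectorizes each distance bin (e.g., a depth histogram versus a flattened range image) is not stated, so the input representation here is an assumption; only the metric is shown.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened feature vectors
    (e.g., per-bin range images or depth histograms)."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Hypothetical per-bin feature vectors for SR output vs. Teacher
sr_bin = np.array([0.9, 0.1, 0.8, 0.7])
teacher_bin = np.array([1.0, 0.0, 0.9, 0.6])
print(cosine_similarity(sr_bin, teacher_bin))
```

Because cosine similarity normalizes out magnitude, it rewards matching the *shape* of the point distribution rather than its absolute scale, which is why it complements MAE (a pure distance-error metric) in this report.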
Performance Improvement (Radar View)
Qualitative Analysis (Key Cases)
Case 1: Short-Range (0-30m)
Scenario: Near-field objects & ground reflection.
Result: Restored point density where raw 8ch was too sparse for identification.
MAE: 0.1292 → 0.0387 (▼70%)
Case 2: Mid-Range (30-60m)
Scenario: Standard driving zone. Vehicle/Pedestrian detection.
Result: Suppressed line-break artifacts; sharpened object contours.
Sim: 0.817 → 0.942 (▲15%)
Case 3: Long-Range (60-100m)
Scenario: Sensor limit zone with minimal points.
Result: Effectively inferred missing spatial information based on learned patterns.
MAE: 0.1306 → 0.0346 (▼73%)
Case 4: Ultra Long-Range (100m+)
Scenario: Sparse data near noise level.
Result: Reconstructed distribution highly similar to Teacher data.
Sim: 0.847 → 0.991 (▲17%)