Through Fog High-Resolution Imaging Using Millimeter Wave Radar


Junfeng Guan
Sohrab Madani
Suraj Jog
Saurabh Gupta
Haitham Hassanieh

University of Illinois at Urbana-Champaign

Abstract

This project demonstrates high-resolution imaging using millimeter wave (mmWave) radars that can function even in dense fog. We leverage the fact that mmWave signals have favorable propagation characteristics in low-visibility conditions, unlike optical sensors such as cameras and LiDAR, which cannot penetrate dense fog. Millimeter wave radars, however, suffer from very low resolution, specularity, and noise artifacts. We introduce HawkEye, a system that leverages a conditional GAN (cGAN) architecture to recover high-frequency shapes from raw low-resolution mmWave heatmaps. We propose a novel design that addresses challenges specific to the structure and nature of radar signals. We also develop a data synthesizer to aid large-scale dataset generation for training. We implement our system on a custom-built mmWave radar platform and demonstrate performance improvements over both standard mmWave radars and other competitive baselines.


Project Overview Video



Paper

Through Fog High-Resolution Imaging Using Millimeter Wave Radar
Junfeng Guan, Sohrab Madani, Suraj Jog, Saurabh Gupta, Haitham Hassanieh
Computer Vision and Pattern Recognition (CVPR), 2020

[Paper] [Supp]




Dataset & Synthesizer Code

[GitHub]


Results

We show the performance of HawkEye in fog, clear weather, and rain in the following figures:

Performance with fog in scene:

Column (a) shows the original scene. Column (b) shows the corresponding ground truth. Column (c) shows the scene with fog. Columns (d) and (e) show the radar heatmap as a 3D point cloud and a 2D front-view projection, respectively. Column (f) shows the output of HawkEye.

Randomly sampled qualitative results:

Column (a) shows the original scene. Column (b) shows the corresponding ground truth. Columns (c) and (d) show the radar heatmap as a 3D point cloud and a 2D front-view projection, respectively. Column (e) shows the output of HawkEye.

Performance with multiple cars in the scene:

Column (a) shows the original scene. Column (b) shows the corresponding ground truth. Columns (c) and (d) show the radar heatmap as a 3D point cloud and a 2D front-view projection, respectively. Column (e) shows the output of HawkEye.

Performance with rain in scene:

Column (a) shows the original scene. Column (b) shows the scene with rain. Columns (c) and (d) show the radar heatmap as a 3D point cloud and a 2D front-view projection, respectively. Column (e) shows the output of HawkEye.

Quantitative Results:

We evaluate accuracy in the range, size (length, width, height), and orientation of the car captured by HawkEye. We also evaluate accuracy in shape prediction by comparing the percentage of the car's surface missed (false negatives) and the percentage of fictitious reflections (false positives) along the front view of the scene.
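As an illustration of the two shape-accuracy metrics, the sketch below computes them from binary front-view silhouettes of the predicted and ground-truth car. The function name, inputs, and normalization (missed pixels over the ground-truth silhouette, fictitious pixels over the predicted silhouette) are our assumptions for illustration and may differ from the exact evaluation used in the paper.

```python
import numpy as np

def shape_error_rates(pred_mask, gt_mask):
    """Compare binary front-view silhouettes (hypothetical metric sketch).

    pred_mask, gt_mask: 2D boolean arrays over the front-view image grid,
    True where a car surface is present.
    Returns (percent of car's surface missed, percent fictitious reflections).
    """
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    # False negatives: ground-truth surface pixels absent from the prediction,
    # normalized by the ground-truth silhouette size.
    missed = np.logical_and(gt, ~pred).sum() / max(gt.sum(), 1) * 100.0
    # False positives: predicted pixels with no ground-truth surface,
    # normalized by the predicted silhouette size.
    fictitious = np.logical_and(pred, ~gt).sum() / max(pred.sum(), 1) * 100.0
    return missed, fictitious
```

For example, if the prediction covers half of the ground-truth silhouette and half of its own pixels fall outside it, both rates come out to 50%.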



University of Illinois at Urbana-Champaign | SyNRG