Adaptive Radar Systems Get AI Boost from Duke University Researchers

"Though classical radar methods have proven effective, they fall well short of the demands the industry is raising today for applications such as self-driving cars," explained Shyam Venkatasubramanian, a graduate research assistant in the lab of Vahid Tarokh, the Rhodes Family Professor of Electrical and Computer Engineering at Duke. One promising use of AI is in adaptive radar, where problems such as object detection, localization and identification must be addressed.

At its most basic, radar works by transmitting high-frequency radio waves and receiving the back-scattered reflections. Modern radars go well beyond that: they can design and shape their transmitted signals, handle multiple targets at once and filter out interference. Even so, adaptive radar remains limited in environments where reflective surfaces such as mountains clutter the scene.
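One classical way a radar filters out interference, which the AI methods described here aim to improve upon, is sample matrix inversion (SMI): the receiver estimates an interference covariance from target-free training data and builds a weight vector that preserves the look direction while nulling the interferer. The sketch below is a generic textbook illustration, not the Duke system; the array geometry, angles and power levels are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                      # number of array elements
K = 100                    # target-free training snapshots

# Steering vector for a uniform linear array toward angle theta (radians)
def steering(theta, n=N):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

s = steering(0.0)          # look direction (target)
j = steering(0.5)          # interference direction (assumed)

# Training data: strong interference plus unit-power noise
interference = 10.0 * j[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
X = interference + noise

# Sample covariance and adaptive (SMI) weights: w proportional to R^-1 s
R = X @ X.conj().T / K
w = np.linalg.solve(R, s)
w /= (s.conj() @ w)        # normalize to unit gain in the look direction

print(abs(w.conj() @ s))   # gain on the target direction: 1.0
print(abs(w.conj() @ j))   # gain on the interferer: close to 0
```

The filter keeps full gain toward the target while placing a deep null on the interference direction, which is exactly the behavior that degrades when terrain makes the interference statistics hard to estimate.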

With computer vision's trajectory in mind, Venkatasubramanian and Tarokh focused their studies on AI-based improvements to adaptive radar. In 2010, researchers at Stanford University published ImageNet, a databank of more than 14 million labeled images that became the benchmark standard for AI in computer vision.

The Duke researchers showed that applying similar AI methods significantly improves the performance of adaptive radar systems.

“Our research mirrors early AI work in computer vision, particularly the creation of ImageNet, but focuses on adaptive radar,” Venkatasubramanian said. “Our AI processes radar data and predicts target locations using a straightforward architecture reminiscent of early computer vision models.”
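In the spirit of those early computer vision models, the core operation is a learned convolutional filter swept over a radar range-azimuth map, with the target location read off from the response. The toy sketch below uses a fixed smoothing kernel in place of learned weights and synthetic data; it illustrates the convolution-then-localize idea only, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic range-azimuth "heatmap": noise plus one bright target cell
H, W = 32, 32
heatmap = rng.standard_normal((H, W)) * 0.3
target = (20, 11)                        # true (row, col) cell, made up
heatmap[target] += 5.0

# A 3x3 averaging kernel standing in for a single learned filter
kernel = np.ones((3, 3)) / 9.0

# Valid cross-correlation: the core op in early CNN-style models
def conv2d(x, k):
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for jj in range(out.shape[1]):
            out[i, jj] = np.sum(x[i:i+kh, jj:jj+kw] * k)
    return out

# Predict the target cell as the argmax of the filtered map
scores = conv2d(heatmap, kernel)
pred = np.unravel_index(np.argmax(scores), scores.shape)
pred = (pred[0] + 1, pred[1] + 1)        # undo the valid-conv offset
print(pred)                               # lands within one cell of (20, 11)
```

A real system would stack many such learned filters and train them end to end, but the prediction step, filter the map and pick the strongest response, is the same.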

Although their methods have yet to be tested in the field, they benchmarked their AI’s performance using RFView, a modeling tool incorporating Earth’s topography and terrain.

Following in computer vision’s footsteps, they created 100 airborne radar scenarios from various U.S. landscapes and released them as an open-source asset called “RASPNet.” The dataset, containing over 16 terabytes of data, is publicly available.

Hugh Griffiths, OBE, Fellow of the Royal Academy of Engineering, Fellow of the IEEE, Fellow of the IET, and the THALES/Royal Academy Chair of RF Sensors at University College London, praised the work. "This will undoubtedly stimulate further research in this important area and ensure that results can be readily compared," Griffiths said.

Map of the matched-case RFView® example scenario. The blue triangle marks the platform location, and the red region is the range-azimuth area used for radar processing. The elevation heatmap overlaying the left image depicts the simulation region. Credit: IET Radar, Sonar & Navigation (2024). DOI: 10.1049/rsn2.12600

The scenarios range from the relatively simple Bonneville Salt Flats to the challenging Mount Rainier. Venkatasubramanian hopes others will build on their dataset and AI approaches.

In a previous study, Venkatasubramanian showed that AI tailored to specific geographical locations could achieve up to a seven-fold improvement in object localization. He believes that if an AI can select a similar scenario to its current environment, it will perform significantly better.
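Picking the stored scenario that best matches the current environment can be framed as a nearest-neighbor lookup over terrain descriptors. The sketch below is purely hypothetical: the feature choices (mean elevation, roughness), the values, and the scenario names are invented for illustration and are not drawn from RASPNet.

```python
import numpy as np

# Hypothetical terrain descriptors: (mean elevation in m, elevation roughness).
# All values are made up for illustration.
scenarios = {
    "bonneville_salt_flats": np.array([1300.0, 5.0]),
    "mount_rainier":         np.array([2500.0, 900.0]),
    "rolling_hills":         np.array([400.0, 120.0]),
}

def closest_scenario(query, bank):
    """Return the name of the stored scenario nearest the query descriptor."""
    feats = np.stack(list(bank.values()))
    scale = feats.std(axis=0) + 1e-9     # normalize so no feature dominates
    names = list(bank.keys())
    dists = [np.linalg.norm((query - v) / scale) for v in bank.values()]
    return names[int(np.argmin(dists))]

# A flat, smooth environment should match the salt-flats scenario
print(closest_scenario(np.array([1200.0, 10.0]), scenarios))  # bonneville_salt_flats
```

An AI trained on the matched scenario could then be deployed, which is the intuition behind the expected performance gain.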

“We believe this will significantly impact the adaptive radar community,” Venkatasubramanian said. “As we continue to enhance the dataset, we aim to equip the community with the tools needed to advance AI in adaptive radar.”
