May 10, 2024 at 10:07AM
A team of researchers has developed an attack system, GhostStripe, that is invisible to the human eye yet capable of manipulating the image recognition of autonomous vehicles by exploiting their reliance on rolling-shutter CMOS camera sensors. The attack prevents the vehicles from recognizing road signs, posing a serious security concern. While countermeasures are available, the study highlights ongoing safety concerns with AI and autonomous vehicles.
The attack exploits the way CMOS sensors capture an image line by line: by flashing LEDs faster than the human eye can perceive, the researchers made successive sensor rows record different lighting, painting stripes across the captured image of a road sign and misleading the vehicle's classifier. The team developed two stable versions of the attack: GhostStripe1, which is non-targeted and requires no access to the vehicle, and GhostStripe2, which is targeted and requires access to the vehicle to time the flicker to the camera's capture.
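To make the rolling-shutter mechanism concrete, here is a minimal simulation sketch, not the researchers' code: each image row is exposed at a slightly later instant, so rows sample different phases of a flicker that is far too fast for a human observer to notice. All function names, flicker frequencies, and timing values below are illustrative assumptions.

```python
import numpy as np

def rolling_shutter_capture(scene, flicker_hz, line_readout_s, rgb_phase):
    """Simulate a rolling-shutter capture of a flickering illuminant.

    scene: H x W x 3 float array in [0, 1] (a road sign under steady light).
    flicker_hz: LED flicker frequency (assumed well above flicker fusion).
    line_readout_s: time to read out one sensor row (illustrative value).
    rgb_phase: per-channel phase offsets, mimicking independently driven LEDs.
    """
    h, w, _ = scene.shape
    out = np.empty_like(scene)
    for row in range(h):
        t = row * line_readout_s  # each row is exposed at a later instant
        # Square-wave flicker per color channel: rows captured while a
        # channel's LED is "on" come out tinted, others do not.
        gain = np.array([
            1.0 + 0.8 * (np.sin(2 * np.pi * flicker_hz * t + p) > 0)
            for p in rgb_phase
        ])
        out[row] = np.clip(scene[row] * gain, 0.0, 1.0)
    return out

# Illustrative usage: a plain gray "sign" acquires colored horizontal stripes.
sign = np.full((480, 640, 3), 0.5)
striped = rolling_shutter_capture(
    sign,
    flicker_hz=1000.0,          # invisible to the human eye
    line_readout_s=30e-6,       # ~30 microseconds per row (assumed)
    rgb_phase=(0.0, 2.1, 4.2),  # offset channels to produce color bands
)
print(striped[::40, 0])  # sampled rows show the alternating channel gains
```

At a 1 kHz flicker and 30 µs per row, the tint switches roughly every 16 rows, which is why the captured sign shows banding even though the scene itself never changes.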
The researchers tested their system on real roads, using a car equipped with the specific camera from Baidu Apollo's hardware reference design, and demonstrated a high success rate in misleading the vehicle's traffic-sign recognition.
Suggested countermeasures include replacing the rolling-shutter CMOS cameras with CCD sensors (which expose all lines at once), randomizing the order in which image lines are captured, fusing input from additional cameras, and including attacked images in the AI training data, as sketched below.
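As a rough illustration of the last countermeasure, here is a hedged sketch of augmenting a training set with stripe-attacked copies of its images so the classifier learns to ignore the banding. It reuses the `rolling_shutter_capture` simulator defined above; the function name, fraction, and parameter ranges are placeholders, not anything from the paper.

```python
import numpy as np

def augment_with_stripes(images, labels, attack_fraction=0.3, seed=0):
    """Append stripe-attacked copies of a random subset of training images,
    keeping their original labels so the model maps striped signs to the
    correct class.

    images: N x H x W x 3 float array in [0, 1]; labels: length-N array.
    attack_fraction and the flicker parameters below are illustrative.
    """
    rng = np.random.default_rng(seed)
    n_attack = int(len(images) * attack_fraction)
    idx = rng.choice(len(images), size=n_attack, replace=False)

    attacked = [
        rolling_shutter_capture(
            images[i],
            flicker_hz=rng.uniform(500.0, 2000.0),     # vary the attack
            line_readout_s=30e-6,
            rgb_phase=rng.uniform(0, 2 * np.pi, 3),
        )
        for i in idx
    ]
    aug_images = np.concatenate([images, np.stack(attacked)], axis=0)
    aug_labels = np.concatenate([labels, labels[idx]], axis=0)
    return aug_images, aug_labels
```

Training on batches augmented this way follows the standard adversarial-training recipe; whether it generalizes to a live attacker who can freely retune the flicker is left open by the countermeasure list.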
The study underscores ongoing safety concerns and the need to address vulnerabilities in AI-driven autonomous vehicles. The Register has asked Baidu for comment on its Apollo camera system.