
Disturbing Discovery: Vulnerability in Humanoid Robots Enables Botnet Creation via Bluetooth
A severe vulnerability in Unitree humanoid robots could allow attackers to build a botnet that spreads autonomously from robot to robot over Bluetooth, without any action by the owners.
A chilling concern has emerged regarding humanoid robots: a new report reveals a vulnerability that could turn these machines into harmful autonomous agents. The exploit, discovered by security researchers Andreas Makris and Kevin Finisterre, shows that an attacker can implant malicious code on a Unitree robot, which can then spread it to nearby units that accept Bluetooth connections.
According to the researchers, the vulnerability could affect Unitree's entire current generation of products. They have published their findings in detail on GitHub.
At the heart of the exploit is the robots' Bluetooth security handshake, which the researchers describe as overly simplistic: the firmware merely checks whether the decrypted packet contains the word ‘unitree’, leaving the system open to unauthorized access. Once a connection is accepted, the same channel exposes functions such as Wi-Fi configuration and serial number validation.
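The researchers' description amounts to a handshake that anyone can forge: every unit relies on the same baked-in key material and accepts a peer as soon as the decrypted payload contains the magic word. The short Python sketch below illustrates that pattern; the key, cipher mode, and packet layout are placeholders for illustration, not the values used by the actual firmware.

```python
# Minimal sketch of a "check for a magic word" handshake, as described by the
# researchers. Key, mode, and packet layout are illustrative placeholders only.
from Crypto.Cipher import AES  # pycryptodome

HARDCODED_KEY = bytes(16)       # placeholder: the same key shipped on every unit
EXPECTED_TOKEN = b"unitree"     # the only "secret" the handshake looks for

def handshake_accepts(packet: bytes) -> bool:
    """Accept the peer if the decrypted handshake packet contains the magic token."""
    cipher = AES.new(HARDCODED_KEY, AES.MODE_ECB)   # placeholder cipher mode
    plaintext = cipher.decrypt(packet)
    return EXPECTED_TOKEN in plaintext              # no per-device secret, no challenge

# Because the key and token are identical everywhere, an attacker can forge a
# packet that passes the check:
forged = AES.new(HARDCODED_KEY, AES.MODE_ECB).encrypt(EXPECTED_TOKEN.ljust(16, b"\x00"))
assert handshake_accepts(forged)
```

The point of the sketch is the design flaw, not the specific bytes: a handshake that validates a shared constant rather than a per-device secret or challenge offers no real authentication.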
The exploit has gained notoriety because it allows an infected robot to scan for others in Bluetooth range and compromise them automatically, forming a botnet that grows without any user action. The researchers also voiced concern about their attempts to notify Unitree, saying their warnings have largely gone unacknowledged and that no formal timeline for fixing the vulnerabilities has been provided.
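Because the worm spreads only over Bluetooth, proximity is the exposure. As a purely defensive illustration, the Python sketch below (using the bleak BLE library) scans for nearby advertisements that look like Unitree units so an operator can locate robots that may need patching or isolation; the advertised name prefixes are assumptions for illustration, not confirmed device names.

```python
# Defensive audit sketch: list nearby BLE devices whose advertised names suggest
# a Unitree unit. Name prefixes below are assumptions; adjust to your own fleet.
import asyncio
from bleak import BleakScanner

ASSUMED_NAME_PREFIXES = ("Unitree", "Go2", "G1")   # placeholder advertising names

async def find_exposed_units(scan_seconds: float = 10.0) -> list[str]:
    """Scan BLE advertisements and report devices that look like Unitree robots."""
    devices = await BleakScanner.discover(timeout=scan_seconds)
    hits = []
    for dev in devices:
        name = dev.name or ""
        if name.startswith(ASSUMED_NAME_PREFIXES):
            hits.append(f"{name} ({dev.address})")
    return hits

if __name__ == "__main__":
    for entry in asyncio.run(find_exposed_units()):
        print("Possible exposed robot in range:", entry)
```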
In response to the public disclosure of the vulnerabilities, Unitree released a statement saying it has made significant progress toward fixing the identified issues.
“We have become aware that some users have discovered security vulnerabilities and network-related issues while using our robots,” the statement reads. In other words, the company acknowledges the problems found in its devices and says it is actively working on solutions.
The scenario paints a grim picture not just for robot owners but for society as a whole, stoking fears of a future in which machines designed to assist can be turned into threats.