Source: https://gizmodo.com/ai-acoustic-cyberattack-deep-learning-hackers-1850714550
A new study published by a group of British researchers reveals a hypothetical cyberattack in which a hacker could leverage recorded audio of a person typing to steal their personal data. The attack uses a home-built, deep-learning-based algorithm that acoustically analyzes keystroke noises and automatically decodes what the person is typing. The research showed that typing could be accurately decoded in this fashion 95 percent of the time.
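To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of model such an attack could rely on: a small convolutional network that classifies the mel-spectrogram of a single keystroke sound into a key label. The architecture, input shape, and class count below are assumptions for demonstration, not the model used in the study.

```python
# Illustrative sketch only: a small CNN that classifies individual keystroke
# sounds (represented as mel-spectrograms) into key labels. The study's actual
# model and hyperparameters differ; shapes and class count here are assumptions.
import torch
import torch.nn as nn

NUM_KEYS = 36  # assumption: 26 letters + 10 digits

class KeystrokeClassifier(nn.Module):
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # collapse time/frequency dimensions
            nn.Flatten(),
            nn.Linear(32, num_keys),
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, time_frames) -> (batch, num_keys) logits
        return self.classifier(self.features(spectrogram))

# Example: one dummy 64x64 mel-spectrogram of a single keystroke.
model = KeystrokeClassifier()
logits = model(torch.randn(1, 1, 64, 64))
predicted_key = logits.argmax(dim=1)
```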
Researchers say that such recordings could be easily captured via a cell phone microphone, or through the conferencing app Zoom. The recording can then be fed into an easily assembled algorithm that analyzes the sounds and translates them into readable text.
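In rough outline, that capture-and-decode pipeline could look like the following sketch: detect keystroke onsets in a longer recording (say, audio pulled from a phone microphone or a Zoom call), cut a short clip around each onset, and convert each clip into a spectrogram for a classifier like the one sketched above. The file name, clip length, and feature parameters here are assumptions, not values from the study.

```python
# Illustrative sketch only: split a longer recording into individual keystroke
# clips and turn each clip into a mel-spectrogram for the classifier above.
# "typing.wav", the clip length, and the mel settings are assumptions.
import librosa
import numpy as np
import torch

def keystroke_spectrograms(path: str, sr: int = 44100, clip_seconds: float = 0.3):
    audio, sr = librosa.load(path, sr=sr)
    # Detect the sharp onsets produced by key presses.
    onsets = librosa.onset.onset_detect(y=audio, sr=sr, units="samples")
    clip_len = int(clip_seconds * sr)
    specs = []
    for start in onsets:
        clip = audio[start:start + clip_len]
        if len(clip) < clip_len:
            break  # skip a truncated clip at the end of the recording
        mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
        specs.append(librosa.power_to_db(mel))
    # Shape: (num_keystrokes, 1, n_mels, time_frames)
    return torch.tensor(np.stack(specs), dtype=torch.float32).unsqueeze(1)

# Usage with the KeystrokeClassifier sketched earlier:
# spectrograms = keystroke_spectrograms("typing.wav")
# predictions = model(spectrograms).argmax(dim=1)
```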
This is an interesting variation on what is technically known as an “acoustic side-channel attack.” Acoustic attacks (which use sonic surveillance to capture sensitive information) are not a new phenomenon, but the integration of AI capabilities promises to make them that much more effective at pilfering data. The big threat, from the researchers’ point of view, is a hacker using this form of eavesdropping to nab information related to a user’s passwords and online credentials. According to the researchers, this is actually fairly easy to do if the cybercriminal deploys the attack under the right conditions. They write:
“Our results prove the practicality of these side channel attacks via off-the-shelf equipment and algorithms…The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output.”
You can imagine a number of scenarios in which a bad actor could feasibly pull this off and nab a hapless computer or phone user’s data. Since the attack model relies on having an audio recording of the victim’s activity, an attacker could hypothetically wait until you were out in public (at a coffee shop, for instance) and then clandestinely snoop from a safe distance. An attacker with a high-quality parabolic microphone or other sophisticated listening devices might even be able to eavesdrop through the walls of your apartment.