By the looks of it, malware and vulnerabilities are not going anywhere anytime soon. However, with the hard work of defenders and constant education of the end users, we can reduce the risk and threat that they pose to our society.
With the rise (or publicity) of genAI, normal users found a new productive tool, and malicious users added a potent weapon to their arsenal.
All these are what we are about to discuss in this week's edition of our security review.
Millions of low-cost Android devices turn home networks into crime platforms
If you want cheap products, this article should make you think again. In addition, the malware in question has been active for almost a decade. And to say that it packs some punch would be an understatement.
Here is what I am saying:
The malware employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and functions for modifying the Android OS's all-powerful Zygote process.
The malware, which Human Security estimated was installed on 74,000 devices around the world, facilitated a range of illicit activities, including advertising fraud, residential proxy services, the creation of fake Gmail and WhatsApp accounts, and infecting other Internet-connected devices.
New Supply Chain Malware Operation Hits npm and PyPI Ecosystems, Targeting Millions Globally
Trust is the number one factor threat actors exploit when they target ecosystems like PyPi and npm. The trust that users have that what they are downloading is "safe", causes them to lower their guard and unknowingly infect themselves with malware.
What can we do to stay safe from threats like this? Have a development environment, a good security protocol in cases of compromise, and a separate machine exclusively for development.
From the article:
The potential impact is massive in scale, and the malware's persistence mechanism is particularly concerning – attackers maintain access to infected machines even after maintainers update the packages.
Vulnerabilities Exposed Phone Number of Any Google User
Google patched it and paid the researcher $5000. Not bad. However, what can we learn from this? Even the big boys are vulnerable and it is just a matter of who is searching for them, how lucky they can get, and their level of patience.
So, what happened and how did the researcher find the bug? Here you go:
The researcher, who uses the online monikers Brutecat and Skull, said he came across the vulnerabilities after disabling JavaScript in his browser in an effort to determine whether any Google services still worked without JavaScript.
According to tests conducted by Brutecat, a US phone number could have been obtained in roughly 20 minutes, a UK number in 4 minutes, and Netherlands and Singapore numbers could be brute-forced in seconds — all of this by renting a server at a cost of $0.30/hour.
How Scammers Are Using AI to Steal College Financial Aid
It's heartbreaking, to say the least. I am not in the victim's shoes. So, it's inappropriate to start assuming how they are feeling right now. Nonetheless, this shows how threat actors can and will probably continue to use new and publicly available technology for malicious purposes.
From the article:
In some cases, professors discover almost no one in their class is real. Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.
And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased.
‘EchoLeak’ AI Attack Enabled Theft of Sensitive Data via Microsoft 365 Copilot
Before you continue reading, take a wild guess of what makes the attack possible.
.
.
.
.
.
.
.
.
.
.
Prompt injection. Did you get it? Congratulations, you have been following the AI world for a while. And to make matters worse, it is a zero-click vulnerability; meaning it required no human interaction to initiate the attack chain.
From the article:
The EchoLeak attack involves sending a specially crafted email to the targeted user. The email contains instructions for Copilot to collect secret and personal information from prior chats with the user and send them to the attacker’s server.
The user does not need to open the malicious email or click on any links. The exploit, which Aim Security described as indirect prompt injection, is triggered when the victim asks Copilot for information referenced in the malicious email.
That is when Copilot executes the attacker’s instructions to collect information previously provided by the victim and send it to the attacker.
Credits
Cover photo by Debby Hudson on Unsplash.
That's it for this week, and I'll see you next time.