The Azure security team is pleased to announce that the Azure Security Benchmark v1 (ASB) is now available. ASB is a collection of over 90 security best practices recommendations you can employ to increase the overall security and compliance of all your workloads in Azure.
The ASB controls are based on industry standards and best practices, such as Center for Internet Security (CIS). In addition, ASB preserves the value provided by industry standard control frameworks that have an on-premises focus and makes them more cloud centric. This enables you to apply standard security control frameworks to your Azure deployments and extend security governance practices to the cloud.
ASB v1 includes 11 security controls inspired by, and mapped to, the CIS 7.1 control framework. Over time we’ll add mappings to other frameworks, such as NIST.
ASB also makes it possible to improve the consistency of security documentation for all Azure services by creating a framework where all security recommendations for Azure services are represented in the same format, using the common ASB framework.
ASB includes the following controls:
- Network security
- Logging and monitoring
- Identity and access control
- Data protection
- Vulnerability management
- Inventory and asset management
- Secure configuration
- Malware defense
- Data recovery
- Incident response
- Penetration tests and red team exercises
Documentation for each of the controls contains mappings to industry standard benchmarks (such as CIS), details/rationale for the recommendations, and link(s) to configuration information that will enable the recommendation.
ASB is integrated with Azure Security Center, allowing you to track, report, and assess your compliance against the benchmark by using the Security Center compliance dashboard. In addition, ASB affects the Secure Score in Azure Security Center for your subscriptions.
ASB is the foundation for future Azure service security baselines, which will provide a view of benchmark recommendations contextualized for each Azure service. This will make it easier for you to implement the ASB for the Azure services that you’re actually using. Also, keep an eye out for our release of mappings to NIST and other security frameworks.

Send us your feedback
We welcome your feedback on ASB! Please complete the Azure Security Benchmark feedback form. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
While digital transformation is critical to business innovation, delivering security to cloud-first, mobile-first architectures requires rethinking traditional network security solutions. Some businesses have been successful in doing so, while others still remain at risk of very costly breaches.
MAN Energy Solutions, a leader in the marine, energy, and industrial sectors, has been driving cloud transformation across their business. As with any transformation, there were challenges—as they began to adopt cloud services, they quickly realized that the benefits of the cloud would be offset by poor user experience, increasing appliance and networking costs, and an expanded attack surface.
In 2017, MAN Energy Solutions implemented “Blackcloud”—an initiative that establishes secure, one-to-one connectivity between each user and the specific private apps that the user is authorized to access, without ever placing the user on the larger corporate network. A virtual private network (VPN) is no longer necessary to connect to these apps. This mitigates lateral movement of bad actors or malware.
This approach is based on the Zero Trust security model.

Understanding the Zero Trust model
In 2019, Gartner released a Market Guide describing its Zero Trust Network Access (ZTNA) model and making a strong case for its efficacy in connecting employees and partners to private applications, simplifying mergers, and scaling access. Sometimes referred to as software-defined perimeter, the ZTNA model includes a “broker” that mediates connections between authorized users and specific applications.
The Zero Trust model grants application access based on identity and context of the user, such as date/time, geolocation, and device posture, evaluated in real-time. It empowers the enterprise to limit access to private apps only to the specific users who need access to them and do not pose any risk. Any changes in context of the user would affect the trust posture and hence the user’s ability to access the application.
Access governance is done via policy and enabled by two end-to-end, encrypted, outbound micro-tunnels that are spun on-demand (not static IP tunnels like in the case of VPN) and stitched together by the broker. This ensures apps are never exposed to the internet, thus helping to reduce the attack surface.
As enterprises witness and respond to the impact of increasingly lethal malware, they’re beginning to transition to the Zero Trust model with pilot initiatives, such as securing third-party access, simplifying M&As and divestitures, and replacing aging VPN clients. Based on the 2019 Zero Trust Adoption Report by Cybersecurity Insiders, 59 percent of enterprises plan to embrace the Zero Trust model within the next 12 months.

Implement the Zero Trust model with Microsoft and Zscaler
Different organizational requirements, existing technology implementations, and security stages affect how the Zero Trust model implementation takes place. Integration between multiple technologies, like endpoint management and SIEM, helps make implementations simple, operationally efficient, and adaptive.
Microsoft has built deep integrations with Zscaler—a cloud-native, multitenant security platform—to help organizations with their Zero Trust journey. These technology integrations empower IT teams to deliver a seamless user experience and scalable operations as needed, and include:
Azure Active Directory (Azure AD)—Enterprises can leverage powerful authentication tools—such as Multi-Factor Authentication (MFA), conditional access policies, risk-based controls, and passwordless sign-in—offered by Microsoft, natively with Zscaler. Additionally, SCIM integrations ensure adaptability of user access. When a user is terminated, privileges are automatically modified, and this information flows automatically to the Zscaler cloud where immediate action can be taken based on the update.
Microsoft Endpoint Manager—With Microsoft Endpoint Manager, client posture can be evaluated at the time of sign-in, allowing Zscaler to allow or deny access based on the security posture. Microsoft Endpoint Manager can also be used to install and configure the Zscaler app on managed devices.
Azure Sentinel—Zscaler’s Nanolog Streaming Service (NSS) can seamlessly integrate with Azure to forward detailed transactional logs to the Azure Sentinel service, where they can be used for visualization and analytics, as well as threat hunting and security response.
Implementation of the Zscaler solution involves deploying lightweight gateway software on endpoints and in front of the applications in AWS and/or Azure. Per policies defined in Microsoft Endpoint Manager, Zscaler creates secure segments between user devices and apps through the Zscaler security cloud, where brokered micro-tunnels are stitched together at the location closest to the user.
If you’d like to learn more about secure access to hybrid apps, view the webinar on Powering Fast and Secure Access to All Apps with experts from Microsoft and Zscaler.

Rethink security for the cloud-first, mobile-first world
The advent of cloud-based apps and increasing mobility are key drivers forcing enterprises to rethink their security model. According to Gartner’s Market Guide for Zero Trust Network Access (ZTNA) “by 2023, 60 percent of enterprises will phase out most of their remote access VPNs in favor of ZTNA.” Successful implementation depends on using the correct approach. I hope the Microsoft-Zscaler partnership and platform integrations help you accomplish the Zero Trust approach as you look to transform your business to the cloud.
For more information on the Zero Trust model, visit the Microsoft Zero Trust page. Also, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
The post Microsoft and Zscaler help organizations implement the Zero Trust model appeared first on Microsoft Security.
sLoad, the PowerShell-based Trojan downloader notable for its almost exclusive use of the Background Intelligent Transfer Service (BITS) for malicious activities, has launched version 2.0. The new version comes on the heels of a comprehensive blog we published detailing the malware’s multi-stage nature and use of BITS as an alternative protocol for data exfiltration and other behaviors.
With the new version, sLoad has added the ability to track the stage of infection on every affected machine. Version 2.0 also packs an anti-analysis trick that could identify and isolate analyst machines vis-à-vis actual infected machines.
We’re calling the new version “Starslord” based on strings in the malware code, which contain clues indicating that the name “sLoad” may have been derived from a popular comic book superhero.
We discovered the new sLoad version over the holidays, in our continuous monitoring of the malware. New sLoad campaigns that use version 2.0 follow an attack chain similar to the previous version, with some updates, including dropping the dynamic list of command-and-control (C2) servers and upload of screenshots.

Tracking the stage of infection
With the ability to track the stage of infection, malware operators with access to the Starslord backend could build a detailed view of infections across affected machines and segregate these machines into different groups.
The tracking mechanism exists in the final stage, which, as with the old version, loops infinitely (with a sleep interval of 2,400 seconds, up from 1,200 seconds in version 1.0). As in the previous version, at every iteration of the final stage, the malware uses a download BITS job to exfiltrate stolen system information and receive additional payloads from the active C2 server.
As we noted in our previous blog, creating a BITS job with an extremely large RemoteURL parameter that includes non-encrypted system information, as the old sLoad version did, stands out and is relatively easy to detect. However, with Starslord, the system information is encoded into Base64 data before being exfiltrated.
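The effect of that change is easy to illustrate. The sketch below (illustrative only; the field names are placeholders, not actual sLoad output) shows how Base64 encoding turns readable system information into an opaque string before it is appended to a BITS job's RemoteURL:

```python
import base64

# Hypothetical system information; the real fields sLoad collects differ
system_info = "host=WIN-ANALYST01;user=jdoe;av=Defender"

# Version 1.0 appended a readable string like this to the RemoteURL
# parameter, which made the BITS jobs easy to spot. Starslord encodes
# the data first, so the URL carries an opaque Base64 blob instead.
encoded = base64.b64encode(system_info.encode("utf-8")).decode("ascii")
print(encoded)
```

The encoded form no longer contains recognizable key=value pairs, so simple pattern matching on the RemoteURL is far less effective.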
The file received by Starslord in response to the exfiltration BITS job contains a tuple of three values separated by an asterisk (*):
- Value #1 is a URL to download additional payload using a download BITS job
- Value #2 specifies the action, which can be any of the following, to be taken on the payload downloaded from the URL in value #1:
- “eval” – Run (possibly very large) PowerShell scripts
- “iex” – Load and invoke (possibly small) PowerShell code
- “run” – Download an encoded PE file, decode it, and run the decoded executable
- Value #3 is an integer that can signify the stage of infection for the machine
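Based on the format described above, the C2 response could be parsed roughly like this (a sketch of the tuple format only, not actual malware code; the function and variable names are my own):

```python
def parse_c2_response(response: str):
    """Split the asterisk-separated tuple returned by the C2 server.

    Illustrative sketch of the format described above; the names here
    are assumptions, not taken from the malware itself.
    """
    payload_url, action, stage = response.split("*")
    if action not in ("eval", "iex", "run"):
        raise ValueError("unexpected action: " + action)
    return payload_url, action, int(stage)

# Example: a response directing the host to download and run a payload,
# tagging the machine as being at infection stage 2 (URL is a placeholder)
url, action, stage = parse_c2_response("hxxps://example[.]test/p.bin*run*2")
```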
Supplying the payload URL as part of value #1 allows the malware infrastructure to house additional payloads on different servers from the active C2 servers responding to the exfiltration BITS jobs.
Value #3 is the most noteworthy component in this setup. If the final stage succeeds in downloading the additional payload using the URL provided in value #1 and executing it as specified by the command in value #2, then a variable is used to form the string “td”:”<value #3>”,”tds”:”3”. However, if the final stage fails to download and execute the payload, then the string formed is “td”:”<value #3>”,”tds”:”4”.
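The resulting status string can be sketched as follows (an illustrative Python helper; the function name is mine, and straight quotes stand in for the characters shown above):

```python
def tracking_marker(stage: int, payload_succeeded: bool) -> str:
    # "tds":"3" reports a successful download-and-execute of the payload;
    # "tds":"4" reports a failure. "td" echoes the stage value (value #3)
    # assigned by the malware infrastructure.
    tds = "3" if payload_succeeded else "4"
    return '"td":"{}","tds":"{}"'.format(stage, tds)

# e.g. tracking_marker(2, True) yields "td":"2","tds":"3"
```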
The infinite loop ensures that the exfiltration BITS jobs are created at a fixed interval. The backend infrastructure can then pick up the pulse from each infected machine. However, unlike the previous version, Starslord includes this string in succeeding iterations of data exfiltration. This means that the malware infrastructure is always aware of the exact stage of the infection for a specific affected machine. In addition, since the numeric value for value #3 in the tuple is always governed by the malware infrastructure, malware operators can compartmentalize infected hosts and could potentially set off individual groups on unique infection paths. For example, when responding to exfiltration BITS jobs, malware operators can specify a different URL (value #1) and action (value #2) for each numeric value of value #3, essentially deploying a different malware payload for each group.

Anti-analysis trap
Starslord comes with a built-in function named checkUniverse, which is in fact an anti-analysis trap.
As mentioned in our previous blog post, the final stage of sLoad is a piece of PowerShell code obtained by decoding one of the dropped .ini files. The PowerShell code appears in the memory as a value assigned to a variable that is then executed using the Invoke-Expression cmdlet. Because this is a huge piece of decrypted PowerShell code that never hits the disk, security researchers would typically dump it into a file on the disk for further analysis.
The sLoad dropper PowerShell script drops four files:
- a randomly named .tmp file
- a randomly named .ps1 file
- a main.ini file
- a domain.ini file
It then creates a scheduled task to run the .tmp file every 3 minutes, similar to the previous version. The .tmp file is a proxy that does nothing but run the .ps1 file, which decrypts the contents of main.ini into the final stage. The final stage then decrypts contents of domain.ini to obtain active C2 and perform other activities as documented.
As a unique anti-analysis trap, Starslord ensures that the .tmp and .ps1 files have the same random name. When an analyst dumps the decrypted code of the final stage into a file in the same folder as the .tmp and .ps1 files, the analyst could end up naming it something other than the original random name. When this dumped code is run from such a differently named file on the disk, the function named checkUniverse returns the value 1, and the analyst gets trapped.
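Conceptually, the trap boils down to a filename comparison. Starslord implements it in PowerShell; the Python sketch below only illustrates the logic, and the names are assumptions:

```python
import os

def check_universe(running_script_path: str, dropped_name: str) -> int:
    """Return 1 (analyst trapped) when the executing script's base name
    differs from the random name shared by the dropped .tmp/.ps1 files.

    Conceptual illustration of the checkUniverse behavior described
    above, not the actual malware code.
    """
    current = os.path.splitext(os.path.basename(running_script_path))[0]
    return 0 if current == dropped_name else 1
```

A dump saved as, say, dumped.ps1 next to a dropped kx93ha.ps1 would return 1, flagging the host as a likely analyst machine.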
What comes next is not very desirable for a security researcher: being profiled by the malware operator.
If the host belongs to a trapped analyst, the file downloaded from the backend in response to the exfiltration BITS job, if any, is discarded and overwritten by the following new tuple:
In this case, the value #1 of the tuple is a URL that’s known to the backend for being associated with trapped hosts. BITS jobs from trapped hosts don’t always get a response. If they do, it’s a copy of the dropper PowerShell script. This could be to create an illusion that the framework is being updated as the URL in value #1 of the tuple suggests (hxxps://<active C2>/doc/updx2401.jpg).
However, the string that is included in all successive exfiltration BITS jobs from such a host is “td”:”-1”,”tds”:”3”, eventually leading to all such hosts being grouped under the value “td”:”-1”. This forms the group of all trapped machines that are never delivered a payload. For the rest, evidence so far suggests that the operators have been delivering the file infector Ramnit intermittently.

Durable protection against evolving malware
sLoad’s multi-stage attack chain, its use of mutated intermediate scripts and of BITS as an alternative protocol, and its generally polymorphic nature make it a piece of malware that can be quite tricky to detect. Now it has evolved into a new and polished version, Starslord, which retains sLoad’s most basic capabilities but does away with spyware capabilities in favor of new and more powerful features, posing even higher risk.
Starslord can track and group affected machines based on the stage of infection, which can allow for unique infection paths. Interestingly, given the distinct reference to a fictional superhero, these groups can be thought of as universes in a multiverse. In fact, the malware uses a function called checkUniverse to determine if a host is an analyst machine.
Microsoft Threat Protection defends customers from sophisticated and continuously evolving threats like sLoad using multiple industry-leading security technologies that protect various attack surfaces. Through signal-sharing across multiple Microsoft services, Microsoft Threat Protection delivers comprehensive protection for identities, endpoints, data, apps, and infrastructure.
On endpoints, behavioral blocking and containment capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) ensure durable protection against evolving threats. Through cloud-based machine learning and data science informed by threat research, Microsoft Defender ATP can spot and stop malicious behaviors from threats, both old and new, in real-time.
Microsoft Defender ATP Research Team
With high levels of political unrest in various parts of the world, it’s no surprise we’re also in a period of increased cyber threats. In the past, a company’s name, political affiliations, or religious affiliations might push the risk needle higher. However, in the current environment any company could be a potential target for a cyberattack. Companies of all shapes, sizes, and varying security maturity are asking what they could and should be doing to ensure their safeguards are primed and ready. To help answer these questions, I created a list of actions companies can take and controls they can validate in light of the current level of threats—and during any period of heightened risk—through the Microsoft lens:
- Implement Multi-Factor Authentication (MFA)—It simply cannot be said enough—companies need MFA. The security posture at many companies is hanging by the thread of passwords that are weak, shared across social media, or already for sale. MFA is now the standard authentication baseline and is critical to basic cyber hygiene. If real estate is “location, location, location,” then cybersecurity is “MFA, MFA, MFA.” To learn more, read How to implement Multi-Factor Authentication (MFA).
- Update patching—Check your current patch status across all environments. Make every attempt to patch all vulnerabilities and focus on those with medium or higher risk if you must prioritize. Patching is critically important as the window between discovery and exploit of vulnerabilities has shortened dramatically. Patching is perhaps your most important defense and one that, for the most part, you control. (Most attacks utilize known vulnerabilities.)
- Manage your security posture—Check your Secure Score and Compliance Score for Office 365, Microsoft 365, and Azure. Also, take steps to resolve all open recommendations. These scores will help you to quickly assess and manage your configurations. See “Resources and information for detection and mitigation strategies” below for additional information. (Manage your scores over time and use them as a monitoring tool for unexpected consequences from changes in your environment.)
- Evaluate threat detection and incident response—Increase your threat monitoring and anomaly detection activities. Evaluate your incident response from an attacker’s perspective. For example, attackers often target credentials. Is your team prepared for this type of attack? Are you able to engage left of impact? Consider conducting a tabletop exercise to consider how your organization might be targeted specifically.
- Resolve testing issues—Review recent penetration test findings and validate that all issues were closed.
- Validate distributed denial of service (DDoS) protection—Does your organization have the protection it needs, and stable access to your applications, during a DDoS attack? These attacks have continued to grow in frequency, size, sophistication, and impact. They are often used as a “cyber smoke screen” to mask infiltration attacks. Your DDoS protection should be always on, automated for network-layer mitigation, and capable of near real-time alerting and telemetry.
- Test your resilience—Validate your backup strategies and plans, ensuring offline copies are available. Review your most recent test results and conduct additional testing if needed. If you’re attacked, your offline backups may be your strongest or only lifeline. (Our incident response teams often find companies are surprised to discover their backup copies were accessible online and were either encrypted or destroyed by the attacker.)
- Prepare for incident response assistance—Validate you have completed any necessary due diligence and have appropriate plans to secure third-party assistance with responding to an incident/attack. (Do you have a contract ready to be signed? Do you know who to call? Is it clear who will decide help is necessary?)
- Train your workforce—Provide a new, specific round of training and awareness information for your employees. Make sure they’re vigilant about not clicking unusual links in emails and messages or visiting unusual or risky URLs/websites, and that they use strong passwords. Emphasize that protecting your company contributes to the protection of the financial economy and is a matter of national security.
- Evaluate physical security—Step up validation of physical IDs at entry points. Ensure physical reviews of your external perimeter at key offices and datacenters are being carried out and are alert to unusual indicators of access attempts or physical attacks. (The “see something/say something” rule is critically important.)
- Coordinate with law enforcement—Verify you have the necessary contact information for your local law enforcement, as well as for your local FBI office/agent (federal law enforcement). (Knowing who to call and how to reach them is a huge help in a crisis.)
The hope, of course, is there will not be any action against any company. Taking the actions noted above is good advice for any threat climate—but particularly in times of increased risk. Consider creating a checklist template you can edit as you learn new ways to lower your risk and tighten your security. Be sure to share your checklist with industry organizations such as FS-ISAC. Finally, if you have any questions, be sure to reach out to your account team at Microsoft.

Resources and information for detection and mitigation strategies
- Information from the U.S. Department of Homeland Security—Cybersecurity and Infrastructure Security Agency, Alert (AA20-006A)
- Information from the Center for Internet Security—CIS Microsoft Azure Foundations Benchmark guide
- Learn more about Azure Security Center—Azure Security Center documentation
- Learn more about MFA—How to implement Multi-Factor Authentication (MFA)
- Check and improve your Secure Score in Azure—Improve your Secure Score in Azure Security Center
- Check and improve your Compliance Score in Office 365—Microsoft Compliance Score
About the author
Lisa Lee is a former U.S. banking regulator who helped financial institutions of all sizes prepare their defenses against cyberattacks and reduce their threat landscape. In her current role with Microsoft, she advises Chief Information Security Officers (CISOs) and other senior executives at large financial services companies on cybersecurity, compliance, and identity. She utilizes her unique background to share insights about preparing for the current cyber threat landscape.
The post How companies can prepare for a heightened threat environment appeared first on Microsoft Security.
In Changing the monolith—Part 1: Building alliances for a secure culture, I explored how security leaders can build alliances and why a commitment to change must be signaled from the top. But whose support should you recruit in the first place? In Part 2, I address considerations for the cybersecurity team itself, the organization’s business leaders, and the employees whose buy-in is critical.

Build the right cybersecurity team
It could be debated that the concept of a “deep generalist” is an oxymoron. The analogy I frequently find myself making is you would never ask a dermatologist to perform a hip replacement. A hip replacement is best left to an orthopedic surgeon who has many hours of hands-on experience performing hip replacements. This does not lessen the importance of the dermatologist, who can quickly identify and treat potentially lethal diseases such as skin cancer.
Similarly, not every cybersecurity and privacy professional is deep in all subjects such as governance, technology, law, organizational dynamics, and emotional intelligence. No person is born a specialist.
If you are looking for someone who is excellent at threat prevention, detection, and incident response, hire someone who specializes in those specific tasks and has demonstrated experience and competency. Likewise, be cautious of promoting cybersecurity architects to the role of Chief Information Security Officer (CISO) if they have not demonstrated strategic leadership with the social aptitude to connect with other senior leaders in the organization. CISOs, after all, are not technology champions as much as they are business leaders.

Keep business leaders in the conversation
Leaders can enhance their organizations’ security stance by sending a top-down message across all business units that “security begins with me.” One way to send this message is to regularly brief the executive team and the board on cybersecurity and privacy risks.
Keep business leaders accountable for security.
These should not be product status reports, but briefings on key performance indicators (KPIs) of risk. Business leaders must weigh in on what the organization considers to be its top risks.
Here are three ways to guide these conversations:
- Evaluate the existing cyber-incident response plan within the context of the overall organization’s business continuity plan. Elevate cyber-incident response plans to account for major outages, severe weather, civil unrest, and epidemics—which all place similar, if not identical, stresses to the business. Ask leadership what they believe the “crown jewels” to be, so you can prioritize your approach to data protection. The team responsible for identifying the “crown jewels” should include senior management from the lines of businesses and administrative functions.
- Review the cybersecurity budget with a business case and a strategy in mind. Many times, security budgets take a backseat to other IT or business priorities, resulting in companies being unprepared to deal with risks and attacks. An annual review of cybersecurity budgets tied to what looks like a “good fit” for the organization is recommended.
- Reevaluate cyber insurance on an annual basis and revisit its use and requirements for the organization. Ensure that it’s effective against attacks that could be considered “acts of war,” which might otherwise not be covered by the organization’s policy. Review your policy and ask: What happens if the threat actor was a nation state aiming for another nation state, placing your organization in the crossfire?
“Shadow IT” is a persistent problem when there is no sanctioned way for users to collaborate with the outside world. Similarly, users save and hoard emails elsewhere when an overly zealous data retention policy deletes their messages after 30 days.
Digital transformation introduces a sea of change in how cybersecurity is implemented. It’s paramount to provide the user with the most frictionless user experience available, adopting mobile-first, cloud-first philosophies.
Ignoring the user experience in your change implementation plan will only lead users to identify clever ways to circumvent frustrating security controls. Look for ways to prioritize the user experience even while meeting security and compliance goals.

Incremental change versus tearing off the band-aid
Imagine slowly replacing the interior and exterior components of your existing vehicle one by one until you have a “new” car. It doesn’t make sense: You still have to drive the car, even while the replacements are being performed!
Similarly, I’ve seen organizations take this approach in implementing change, attempting to create a modern workplace over a long period of time. However, this draws out complex, multi-platform headaches for months and years, leading to user confusion, loss of confidence in IT, and lost productivity. You wouldn’t “purchase” a new car this way; why take this approach for your organization?
Rather than mixing old parts with new parts, you would save money, shop time, and operational (and emotional) complexity by simply trading in your old car for a new one.
Fewer organizations take this alternative approach of “tearing off the band-aid.” If the user experience is frictionless, more efficient, and makes data protection easier, an organization’s highly motivated employee base will adapt much more easily.

Stay tuned and stay updated
Stay tuned for more! In my next installments, I will cover the topics of process and technology, respectively, and their role in changing the security monolith. Technology on its own solves nothing. What good are building supplies and tools without a blueprint? Similarly, process is the orchestration of the effort, and is necessary to enhance an organization’s cybersecurity, privacy, compliance, and productivity.
The post Changing the monolith—Part 2: Whose support do you need? appeared first on Microsoft Security.
Modern software development practices often involve building applications from hundreds of existing components, whether they’re written by another team in your organization, an external vendor, or someone in the open source community. Reuse has great benefits, including time-to-market, quality, and interoperability, but sometimes brings the cost of hidden complexity and risk.
You trust your engineering team, but the code they write often accounts for only a tiny fraction of the entire application. How well do you understand what all those external software components actually do? You may find that you’re placing as much trust in each of the thousands of contributors to those components as you have in your in-house engineering team.
At Microsoft, our software engineers use open source software to provide our customers high-quality software and services. Recognizing the inherent risks in trusting open source software, we created a source code analyzer called Microsoft Application Inspector to identify “interesting” features and metadata, like the use of cryptography, connecting to a remote entity, and the platforms it runs on.
Application Inspector differs from more typical static analysis tools in that it isn’t limited to detecting poor programming practices; rather, it surfaces interesting characteristics in the code that would otherwise be time-consuming or difficult to identify through manual introspection. It then simply reports what’s there, without judgement.
For example, consider this snippet of Python source code:
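The snippet appears as an image in the original post; the following is an illustrative reconstruction of the behavior it demonstrates (the URL, filename, and function name are placeholders):

```python
import subprocess
import urllib.request

def fetch_and_inspect(url, dest):
    # Download content from a remote URL (network access)
    data = urllib.request.urlopen(url).read()
    # Write it to the local file system (file write)
    with open(dest, "wb") as f:
        f.write(data)
    # Execute a shell command to list details of that file
    # (process execution)
    return subprocess.run(["ls", "-l", dest],
                          capture_output=True, text=True).stdout

# e.g. fetch_and_inspect("https://example.com/data.bin", "data.bin")
```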
Here we can see a program that downloads content from a URL, writes it to the file system, and then executes a shell command to list details of that file. If we run this code through Application Inspector, it identifies features corresponding to each of those behaviors, telling us a lot about what the program can do.
In this small example, it would be trivial to examine the snippet manually to identify those same features, but many components contain tens of thousands of lines of code, and modern web applications often use hundreds of such components. Application Inspector is designed to be used individually or at scale and can analyze millions of lines of source code from components built using many different programming languages. It’s simply infeasible to attempt to do this manually.

Application Inspector is positioned to help in key scenarios
We use Application Inspector to identify key changes to a component’s feature set over time (version to version), which can indicate anything from an increased attack surface to a malicious backdoor. We also use the tool to identify high-risk components and those with unexpected features that require additional scrutiny, under the theory that a vulnerability in a component that is involved in cryptography, authentication, or deserialization would likely have higher impact than others.

Using Application Inspector
Application Inspector is a cross-platform, command-line tool that can produce output in multiple formats, including JSON and interactive HTML. Here is an example of an HTML report:
Each icon in the report above represents a feature that was identified in the source code. That feature is expanded on the right-hand side of the report, and by clicking any of the links, you can view the source code snippets that contributed to that identification.
Each feature is also broken down into more specific categories and an associated confidence, which can be accessed by expanding the row.
Application Inspector comes with hundreds of feature detection patterns covering many popular programming languages, with good support for the following types of characteristics:
- Application frameworks (development, testing)
- Cloud / Service APIs (Microsoft Azure, Amazon AWS, and Google Cloud Platform)
- Cryptography (symmetric, asymmetric, hashing, and TLS)
- Data types (sensitive, personally identifiable information)
- Operating system functions (platform identification, file system, registry, and user accounts)
- Security features (authentication and authorization)
Application Inspector can identify interesting features in source code, enabling you to better understand the software components that your applications use. Application Inspector is open source, cross-platform (.NET Core), and can be downloaded at github.com/Microsoft/ApplicationInspector. We welcome all contributions and feedback.
Another day, another data breach. If the regular drumbeat of leaked and phished accounts hasn’t persuaded you to switch to Multi-Factor Authentication (MFA) already, maybe the usual January rush of ‘back to work’ password reset requests is making you reconsider. When such an effective option for protecting accounts is available, why wouldn’t you deploy it straight away?
The problem is that deploying MFA at scale is not always straightforward. There are technical issues that may hold you up, but the people side is where you have to start. The eventual goal of an MFA implementation is to enable it for all your users on all of your systems all of the time, but you won’t be able to do that on day one.
To successfully roll out MFA, start by being clear about what you’re going to protect, decide what MFA technology you’re going to use, and understand what the impact on employees is going to be. Otherwise, your MFA deployment might grind to a halt amid complaints from users who run into problems while trying to get their job done.
Before you start on the technical side, remember that delivering MFA across a business is a job for the entire organization, from the security team to business stakeholders to IT departments to HR and to corporate communications and beyond, because it has to support all the business applications, systems, networks and processes without affecting workflow.
Campaign and train
Treat the transition to MFA like a marketing campaign where you need to sell employees on the idea—as well as provide training opportunities along the way. It’s important for staff to understand that MFA is there to support them and protect their accounts and all their data, because that may not be their first thought when met with changes to the way they sign in to the tools they use every day. If you run an effective internal communications campaign that makes it clear to users what they need to do and, more importantly, why they need to do it, you’ll avoid them seeing MFA as a nuisance or misunderstanding it as ‘big brother’ company tracking.
The key is focusing on awareness: in addition to sending emails—put up posters in the elevator, hang banner ads in your buildings, all explaining why you’re making the transition to MFA. Focus on informing your users, explaining why you’re making this change—making it very clear what they will need to do and where they can find instructions, documentation, and support.
Also, provide FAQs and training videos, along with optional training sessions or opportunities to opt in to an early pilot group (especially if you can offer them early access to a new software version that will give them features they need). Recognize that MFA is more work for them than just using a password, and that they will very likely be inconvenienced. Unless you are able to use biometrics on every device they will have to get used to carrying a security key or a device with an authenticator app with them all the time, so you need them to understand why MFA is so important.
It’s not surprising that users can be concerned about a move to MFA. After all, MFA has sometimes been done badly in the consumer space. They’ll have seen stories about social networks abusing phone numbers entered for security purposes for marketing or of users locked out of their accounts if they’re travelling and unable to get a text message. You’ll need to reassure users who have had bad experiences with consumer MFA and be open to feedback from employees about the impact of MFA policies. Like all tech rollouts, this is a process.
If you’re part of an international business you have more to do, as you need to account for global operations. That needs wider buy-in and a bigger budget, including language support if you must translate training and support documentation. If you don’t know where to start, Microsoft provides communication templates and user documentation you can customize for your organization.
Start with admin accounts
At a minimum, you want to use MFA for all your admins, so start with privileged users. Administrative accounts are your highest value targets and the most urgent to secure, but you can also treat them as a proof of concept for wider adoption. Review who these users are and what privileges they have—there are probably more accounts than you expect with far more privileges than are really needed.
At the same time, look at key business roles where losing access to email—or having unauthorized emails sent—will have a major security impact. Your CEO, CFO, and other senior leaders need to move to MFA to protect business communications.
Use what you’ve learned from rolling out MFA to these high-value groups to plan a pilot deployment—one that includes employees from across the business who require different levels of security access—so your final MFA deployment is optimized for mainstream employees without hampering the productivity of those working with more sensitive information, whether that’s the finance team handling payroll or developers with commit rights. Consider how you will cover contractors and partners who need access as well.
Plan for wider deployment
Start by looking at what systems you have that users need to sign in to that you can secure with MFA. Remember that includes on-premises systems—you can incorporate MFA into your existing remote access options, using Active Directory Federation Services (AD FS), or Network Policy Server and use Azure Active Directory (Azure AD) Application Proxy to publish applications for cloud access.
Concentrate on finding any networks or systems where deploying MFA will take more work (for example, if SAML authentication is used) and especially on discovering vulnerable apps that don’t support anything except passwords because they use legacy or basic authentication. This includes older email systems using MAPI, EWS, IMAP4, POP3, SMTP, internal line of business applications, and elderly client applications. Upgrade or update these to support modern authentication and MFA where you can. Where this isn’t possible, you’ll need to restrict them to use on the corporate network until you can replace them, because critical systems that use legacy authentication will block your MFA deployment.
Be prepared to choose which applications to prioritize. As well as an inventory of applications and networks (including remote access options), look at processes like employee onboarding and approval of new applications. Test how applications work with MFA, even when you expect the impact to be minimal. Create a new user without admin access, use that account to sign in with MFA, and go through the process of configuring and using the standard set of applications staff will use to see if there are issues. Look at how users will register for MFA and choose which methods and factors to use, and how you will track and audit registrations. You may be able to combine MFA registration with self-service password reset (SSPR) in a ‘one stop shop,’ but it’s important to get users to register quickly so that attackers can’t take over their account by registering for MFA, especially if it’s for a high-value application they don’t use frequently. For new employees, you should make MFA registration part of the onboarding process.
Make MFA easier on employees
MFA is always going to be an extra step, but you can choose MFA options with less friction, like using biometrics built into devices or FIDO2-compliant factors such as Feitian or Yubico security keys. Avoid using SMS if possible. Phone-based authentication apps like the Microsoft Authenticator app are an option, and they don’t require a user to hand over control of their personal device. But if you have employees who travel to locations where they may not have connectivity, choose OATH verification codes, which are generated automatically, over push notifications, which are convenient but require the user to be online. You can even use automated voice calls: letting users press a button on the phone keypad is less intrusive than giving them a passcode to type in on screen.
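OATH verification codes work offline because each code is derived from a shared secret and the current clock time, with no network round trip. A minimal sketch of the standard RFC 6238 TOTP algorithm (SHA-1, 6 digits, 30-second time steps) shows why no connectivity is needed:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    if for_time is None:
        for_time = time.time()
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int(for_time) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Both the authenticator app and the server run this same computation against the shared secret, so codes stay in sync as long as the clocks roughly agree.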
Offer a choice of alternative factors so people can pick the one that best suits them. Biometrics are extremely convenient, but some employees may be uncomfortable using their fingerprint or face for corporate sign-ins and may prefer receiving an automated voice call.
Make sure that you include mobile devices in your MFA solution, managing them through Mobile Device Management (MDM), so you can use conditional and contextual factors for additional security.
Avoid making MFA onerous; choose when the extra authentication is needed to protect sensitive data and critical systems rather than applying it to every single interaction. Consider using conditional access policies and Azure AD Identity Protection, which allows for triggering two-step verification based on risk detections, as well as pass-through authentication and single-sign-on (SSO).
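As an illustration, a risk-based policy of the kind described above can be expressed through the Microsoft Graph conditionalAccessPolicy resource. The sketch below is an assumption-laden example, not a recommended production policy; it starts in report-only mode and requires MFA only when sign-in risk is elevated:

```json
{
  "displayName": "Require MFA for medium and high sign-in risk",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] },
    "signInRiskLevels": ["medium", "high"]
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"]
  }
}
```

Report-only mode lets you observe how often the policy would have triggered before you enforce it.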
If MFA means that a user accessing a non-critical file share or calendar on the corporate network from a known device that has all the current OS and antimalware updates sees fewer challenges—and no longer faces the burden of 90-day password resets—then you can actually improve the user experience with MFA.
Have a support plan
Spend some time planning how you will handle failed sign-ins and account lockouts. Even with training, some failed sign-ins will be legitimate users getting it wrong and you need to make it easy for them to get help.
Similarly, have a plan for lost devices. If a security key is lost, the process for reporting that needs to be easy and blame free, so that employees will notify you immediately so you can expire their sessions and block the security key, and audit the behavior of their account (going back to before they notified you of the loss). Security keys that use biometrics may be a little more expensive, but if they’re lost or stolen, an attacker can’t use them. If possible, make it a simple, automated workflow, using your service desk tools.
You also need to quickly get them connected another way so they can get back to work. Automatically enrolling employees with a second factor can help. Make that second factor convenient enough that they can still do their job, but not so convenient that they keep using it and never report the loss: one easy option is allowing one-time bypasses. Similarly, make sure you’re set up to automatically deprovision entitlements and factors when employees change roles or leave the organization.
Measure and monitor
As you deploy MFA, monitor the rollout to see what impact it has on both security and productivity, and be prepared to make changes to policies or invest in better hardware to make it successful. Track security metrics such as failed login attempts, blocked credential phishing attempts, and denied privilege escalations.
Your MFA marketing campaign also needs to continue during and after deployment, actively reaching out to staff and asking them to take part in polls or feedback sessions. Start that with the pilot group and continue it once everyone is using MFA.
Even when you ask for it, don’t rely on user feedback to tell you about problems. Check helpdesk tickets, logs, and audit options to see if it’s taking users longer to get into systems, or if they’re postponing key tasks because they’re finding MFA difficult, or if security devices are failing or breaking more than expected. New applications and new teams in the business will also mean that MFA deployment needs to be ongoing, and you’ll need to test software updates to see if they break MFA; you have to make it part of the regular IT process.
Continue to educate users about the importance of MFA, including running phishing training and phishing your own employees (with more training for those who are tricked into clicking through to fake links).
MFA isn’t a switch you flip; it’s part of a move to continuous security and assessment that will take time and commitment to implement. But if you approach it in the right way, it’s also the single most effective step you can take to improve security.
About the authors
Ann Johnson is the Corporate Vice President for Cybersecurity Solutions Group for Microsoft. She is a member of the board of advisors for FS-ISAC (The Financial Services Information Sharing and Analysis Center), an advisory board member for EWF (Executive Women’s Forum on Information Security, Risk Management & Privacy), and an advisory board member for HYPR Corp. Ann recently joined the board of advisors for Cybersecurity Ventures.
Christina Morillo is a Senior Program Manager on the Azure Identity Engineering Product team at Microsoft. She is an information security and technology professional with a background in cloud technologies, enterprise security, and identity and access. Christina advocates and is passionate about making technology less scary and more approachable for the masses. When she is not at work, or spending time with her family, you can find her co-leading Women in Security and Privacy’s NYC chapter and supporting others as an advisor and mentor. She lives in New York City with her husband and children.
Learn more
The post How to implement Multi-Factor Authentication (MFA) appeared first on Microsoft Security.
In two recent posts I discussed with Circadence the increasing importance of gamification for cybersecurity learning and how to get started as a practitioner while being supported by an enterprise learning officer or security team lead. In this third and final post in the series, Keenan and I address more advanced SecOps scenarios that an experienced practitioner would be concerned with understanding. We even show how Circadence and Microsoft help seasoned practitioners defend against some of the most prevalent and advanced attackers we see across industries.
Here are more of Keenan’s insights from our Q&A:
Q: Keenan, thanks for sharing in this digital conversation with me again. I admire your passion for gamified cyber learning. I’d not put the two ideas together, that you can adopt gaming concepts—and consoles—in a way that makes learning the often difficult and evolving subject matter of “cyber” much more fun and impactful. Now that I’ve used Project Ares for a year, it’s hard to imagine NOT having an interactive, gamified platform to help me build and refine cybersecurity concepts and skills. Several friends and colleagues have also registered their teenagers for Circadence’s Project Ares Academy subscriptions to kickstart their learning journey toward a cyber career path. If kids are going to game, let’s point them to something that will build employable skills for the future.
In our last two blogs, we introduced readers to a couple of new ideas:
- WHY they need to re-think how they prepare cyber defenders in their organizations.
- HOW defenders and their managers and teams can get started.
Now, let’s pivot and focus on practical cyber scenarios (let’s say Tier 1 or Tier 2 defender scenarios)—situations that would likely be directed to experienced cyber professionals to handle. Can you walk us through some of the detail of how Circadence has built SecOps gaming experiences into Project Ares through mission scenarios that are inspired by real cyber incidents pulled from news headlines, incorporating today’s most common attack methods such as ransomware, credential theft, and even nation-state attacks?
A: Sure. I’ll start with descriptions of a couple of our foundational missions.
Scenario one: Ransomware—Project Ares offers several mission scenarios that address the cyber kill chain around ransomware. The one I’ll focus on is Mission 10, Operation Crimson Wolf. Acting as a cyber force member working for a transportation company, the user must secure networks so the company can conduct effective port activity. However, the company is in danger as ransomware has encrypted data and a hacker has launched a phishing attack on the network, impacting how and when operators offload ships. The player must stop the ransomware from spreading and attacking other nodes on the network before it’s too late. I love this scenario because 1) it’s realistic, 2) ransomware attacks occur far too often, and 3) it allows the player to engage in a virtual environment to build skills.
Users who engage in this mission learn core competencies like:
- Computer network defense.
- Incident response management.
- Data forensics and handling.
We map all our missions to the NIST/NICE work role framework and Mission 10 touches on the following work roles: System Security Analyst, Cyber Defense Analyst, Cyber Defense Incident Responder, and the Cyber Defense Forensics Analyst.
Scenario two: Credential theft—Another mission that’s really engaging is Mission 1, Operation Goatherd. It teaches how credential theft can occur via a brute force attack. In this mission, the user must access the command and control server of a group of hackers to disable a botnet network in use. The botnet is designed to execute a widespread financial scam triggering the collapse of a national bank! The user must scan the command and control server located at myloot.com for running services, identify a vulnerable service, perform a brute force attack to obtain credentials, and then kill the web server acting as the command and control orchestrator.
This scenario is powerful because it asks the player to address the challenge by thinking from an adversary’s perspective. It helps the learner understand how an attacker would execute credential theft (though there are many ways) and gives the learner a different perspective for a well-rounded comprehension of the attack method.
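To make the attack method concrete, here is a toy sketch of dictionary-based credential brute forcing of the kind the mission teaches players to recognize. Everything here is simulated—`make_mock_login` stands in for a network service, and no real system is involved:

```python
def make_mock_login(real_password: str):
    # Simulates a service's credential check; a real attack would hit a
    # network service (and be slowed by latency, lockouts, and logging).
    def login(username: str, password: str) -> bool:
        return password == real_password
    return login

def brute_force(login, username: str, wordlist):
    # Try each candidate password until the service accepts one.
    for candidate in wordlist:
        if login(username, candidate):
            return candidate
    return None
```

The sketch also shows why defenses like rate limiting, account lockout, and MFA matter: any password that appears in a common wordlist falls quickly to this loop.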
Users who engage in this mission learn core competencies like:
- Network protocols.
- Reconnaissance and enumeration.
- Password cracking and exploration.
The NIST/NICE work role aligned to this mission is a Cyber Operator. Specific tasks this work role must address include:
- Analyzing target operational architecture for ways to gain access.
- Conducting network scouting and vulnerability analysis of systems within a network.
- Detecting exploits against targeted networks.
Q: Can you discuss how Project Ares’ learning curriculum addresses critical threats from advanced state or state-backed attackers? While we won’t name governments directly, the point for our readers to understand is that the national and international cybersecurity stage is built around identifying and learning how to combat the tools, tactics, and procedures that threat actors are using in all industries.
A: Here’s a good example.
Scenario three: Election security—This mission, which we deploy in our next release of Project Ares (now leveraging cloud-native architecture running on Microsoft Azure), is Mission 15, Operation Raging Mammoth. It helps a cyber professional protect against an election attack—something we are all too familiar with through recent headlines about election security. As an election security official, the user must monitor voting systems to establish a baseline of normal activity and configurations from which to identify anomalies. The user must detect and report changes to an administrator’s access permissions and/or modifications to voter information.
The NIST/NICE work roles aligned to this mission include professionals training as a Cyber Defense Analyst, Cyber Defense Incident Responder, or Threat/Warning Analyst.
I’ve reviewed some of the specific cyber scenarios a Tier 1 or Tier 2 defender might experience on the job. Now I’d like to share a bit how we build these exercises for our customers.
It really comes down to the professional experiences and detailed research from our Mission and Battle Room design teams at Circadence. Many of them have explicit and long-standing professional experience as on-the-job cyber operators and defenders, as well as cyber professors and teachers at renowned institutions. They really understand what professionals need to learn, how they need to learn, and the most effective ways to learn.
We profile Circadence professionals in the Living Our Mission Blog Series to help interested readers understand the skill and dedication of the people behind Project Ares. By sharing the individual faces behind the solution, we hope current and prospective customers will appreciate Project Ares more knowing that Circadence is building the most relevant learning experiences available to support immersive, gamified learning of today’s cyber professionals.
Learn more
To see Project Ares “in action” visit Circadence and request a demonstration, or speak with your local Microsoft representative. You can also try your hand at it by attending an upcoming Microsoft Ignite: The Tour event, which features a joint Microsoft/Circadence “Into the Breach” capture the flag exercise.
To learn more about how to close the cybersecurity talent gap, read the e-book: CISO essentials: How to optimize recruiting while strengthening cybersecurity. For more information on Microsoft intelligence security solutions, including guidance on Zero Trust, visit Reach the optimal state in your Zero Trust journey.
The post Rethinking cyber scenarios—learning (and training) as you defend appeared first on Microsoft Security.
Any modern security expert can tell you that we’re light years away from the old days when firewalls and antivirus were the only mechanisms of protection against cyberattacks. Cybersecurity has been one of the hot topics of boardroom conversation for the last eight years, and has been rapidly increasing to higher priority due to the size and frequency of data breaches that have been reported across all industries and organizations.
The security conversation has finally been elevated out of the shadows of the IT Department and has moved into the executive and board level spotlights. This has motivated the C-teams of organizations everywhere to start asking hard questions of their Chief Information Officers, Chief Compliance Officers, Privacy Officers, Risk Organizations, and Legal Counsels.
Cybersecurity professionals can either wait until these questions land at their feet, or they can take charge and build relationships with executives and the business side of the organization.
Taking charge of the issue
Professionals fortunate enough to have direct access to the Board of Directors of their organization can also build extremely valuable relationships at the board level as well. As cybersecurity professionals establish lines of communication throughout organizational leadership, they must keep in mind that these leaders, although experts in their respective areas, are not technologists.
The challenge that cybersecurity professionals face is being able to get the non-technical people on board with the culture of change in regards to security. These kinds of changes in culture and thinking can help facilitate the innovation that is needed to decrease the risk of compromise, reputation damage, sanctions against the organization, and potential stock devaluation. So how can one deliver this message of Fear, Uncertainty, and Doubt (FUD) without losing the executive leaders in the technical details or dramatization of the current situation?
Start by addressing the business problem, not the technology.
The answer isn’t as daunting as you might think
The best way to start the conversation with business leaders is to begin by stating the principles of your approach to addressing the problem and the risks of not properly addressing it. It’s important to remember to present the principles and methods in a way that is understandable to non-technical persons.
This may sound challenging at first, but the following examples will give you a good starting point of how to accomplish this:
- At some point in time, there will be a data breach—Every day we’re up against tens of thousands of “militarized” state-sponsored threat actors who usually know more about organizations and technical infrastructure than we do. This is not a fight we’ll always win, even if we’re able to bring near unlimited resources to the table, which is often rare itself. In any scenario, we must accept some modicum of risk, and cybersecurity is no different. The approach for resolution should involve mitigating the likelihood and severity of a compromise situation when it ultimately does occur.
- Physical security and cybersecurity are linked—If you have access to physical hardware, there are a myriad of ways to pull data directly from your enterprise network and send it to a dark web repository or other malicious data repository for later decryption and analysis. If you have possession of a laptop or mobile device, and storage encryption hasn’t been implemented, an attacker can forensically image the device fairly easily and make an exact replica to analyze later. By using these or similar examples, you can clearly state that physical security even equals cybersecurity in many cases.
- You can’t always put a dollar amount on digital trust—Collateral damage in the aftermath of a cyberattack goes well beyond dollars, and paying attention to cybersecurity and privacy threats demonstrates digital trust to clients, customers, employees, suppliers, vendors, and the general public. Digital trust underpins every digital interaction by measuring and quantifying the expectation that an entity is who or what it claims to be and that it will behave in an expected manner. This can set an organization apart from its competitors.
- Everything can’t be protected equally; likewise, everything doesn’t have the same business value—Where are the crown jewels, and which systems’ failure would create a critical impact on the organization’s business? Once identified, the organization has a lot less to worry about and protect. Additionally, one of the core principles should be, “When in doubt, throw it out.” Keeping data longer than it needs to be kept increases the attack surface area and creates liability for the firm to produce large amounts of data during requests for legal discovery. The data retention policy needs to reflect this, and should be created with input from the business and General Counsel.
- Identity is the new perimeter—Additional perimeter-based security appliances will not decrease the chance of compromise. Once identity is compromised, perimeter controls become useless. Operate as if the organization’s network has already been compromised, as mentioned in principle #1. Focus the investment on modern authentication, Zero Trust, conditional access, and abnormal user and information behavior detection. Questions to ask now include: What’s happening to users, company data, and devices both inside and outside the firewall? Think about data handling—who has access to what, why, and is it within normal business activity parameters?
If leadership is not on board with the people, process, and technology changes required to fulfill a modern approach to cybersecurity and data protection, any effort put into such a program is a waste of time and money.
You can tell immediately if you’ve done the appropriate amount of marketing to bring cybersecurity and data protection to the forefront of business leaders’ agendas. If the funding and the support for the mission is unavailable, one must ask oneself if the patient, in this case the organization, truly wants to get better.
If, during a company meeting, a CEO declares that “data protection is everyone’s responsibility, including mine,” everyone will recognize the importance of the initiative to the company’s success. Hearing this from the CISO or below does not have the same gravitas.
The most successful programs I’ve seen are those that have been sponsored at the highest levels of the organization and tied to performance. For more information on presenting to the board of directors, watch our CISO Spotlight episode with Bret Arsenault, Microsoft CISO.
Stay tuned and stay updated
Stay tuned for “Changing the monolith—Part 2” where I address who you should recruit as you build alliances across the organization, how to build support through business conversations, and what’s next in driving organizational change. In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
The post Changing the monolith—Part 1: Building alliances for a secure culture appeared first on Microsoft Security.
For governments to function, the flow of data on a massive scale is required—including sensitive information about critical infrastructure, citizens, and public safety and security. The security of government information systems is subject to constant attempted attacks and in need of a modern approach to cybersecurity.
Microsoft 365 provides best-in-class productivity apps while protecting identities, devices, applications, networks, and data. With Microsoft 365 security services, governments can take confident steps to adopt a Zero Trust security model where all users and devices—both inside and outside the network—are deemed untrustworthy by default and the same security checks are applied to all users, devices, applications, and data.
The post Microsoft 365 helps governments adopt a Zero Trust security model appeared first on Microsoft Security.
As members of Microsoft’s Detection and Response Team (DART), we’ve seen a significant increase in adversaries “living off the land” and using compromised account credentials for malicious purposes. From an investigation standpoint, tracking adversaries using this method is quite difficult as you need to sift through the data to determine whether the activities are being performed by the legitimate user or a bad actor. Credentials can be harvested in numerous ways, including phishing campaigns, Mimikatz, and key loggers.
Recently, DART was called into an engagement where the adversary had a foothold within the on-premises network, which had been gained through compromising cloud credentials. Once the adversary had the credentials, they began their reconnaissance on the network by searching for documents about VPN remote access and other access methods stored on a user’s SharePoint and OneDrive. After the adversary was able to access the network through the company’s VPN, they moved laterally throughout the environment using legitimate user credentials harvested during a phishing campaign.
Once our team was able to determine the initially compromised accounts, we were able to begin the process of tracking the adversary within the on-premises systems. Looking at the initial VPN logs, we identified the starting point for our investigation. Typically, in this kind of investigation, your team would need to dive deeper into individual machine event logs, looking for remote access activities and movements, as well as looking at any domain controller logs that could help highlight the credentials used by the attacker(s).
Luckily for us, this customer had deployed Azure Advanced Threat Protection (ATP) prior to the incident. Because Azure ATP was operational before the incident, it had already normalized authentication and identity transactions within the customer network. DART began querying the suspected compromised credentials within Azure ATP, which provided us with a broad swath of authentication-related activities on the network and helped us build an initial timeline of events and activities performed by the adversary, including:
- Interactive logins (Kerberos and NTLM)
- Credential validation
- Resource access
- SAMR queries
- DNS queries
- WMI Remote Code Execution (RCE)
- Lateral Movement Paths
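The timeline-building step described above can be sketched as follows: merging activity records of different kinds (logons, SAMR queries, WMI executions, and so on) into one chronological view per account. The record format and sample values are assumptions for illustration, not Azure ATP’s actual export schema.

```python
# Minimal timeline sketch: group identity-activity records by account and
# order each account's activity chronologically. Record fields are invented.
from collections import defaultdict

def build_timeline(activities):
    """Group activity records by account and sort each group by time."""
    timeline = defaultdict(list)
    for a in activities:
        timeline[a["account"]].append(a)
    for acct in timeline:
        timeline[acct].sort(key=lambda a: a["time"])  # ISO timestamps sort lexically
    return dict(timeline)

activities = [
    {"time": "2019-11-02T09:05Z", "account": "corp\\jdoe",
     "kind": "samr_query", "detail": "enumerated privileged groups"},
    {"time": "2019-11-02T08:14Z", "account": "corp\\jdoe",
     "kind": "ntlm_logon", "detail": "FILESRV01"},
    {"time": "2019-11-02T09:41Z", "account": "corp\\jdoe",
     "kind": "wmi_rce", "detail": "remote process on DC01"},
]

for entry in build_timeline(activities)["corp\\jdoe"]:
    print(entry["time"], entry["kind"], "-", entry["detail"])
```

Even this toy version shows the value of having the data pre-normalized: once every activity type shares one timestamped shape, ordering an adversary’s actions is trivial.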
This data enabled the team to perform more in-depth analysis of both user and machine level logs for the systems the adversary-controlled account touched. Azure ATP’s ability to identify and investigate suspicious user activities and advanced attack techniques throughout the cyber kill chain enabled our team to completely track the adversary’s movements in less than a day. Without Azure ATP, investigating this incident could have taken weeks—or even months—since the data sources needed to make this type of rapid response and investigation possible often don’t exist.
Once we were able to track the user throughout the environment, we were able to correlate that data with Microsoft Defender ATP to gain an understanding of the tools used by the adversary throughout their journey. Using the right tools for the job allowed DART to jump-start the investigation; identify the compromised accounts, compromised systems, other systems at risk, and the tools being used by the adversaries; and provide the customer with the information needed to recover from the incident faster and get back to business.
Learn more and keep updated
Learn more about how DART helps customers respond to compromises and become cyber-resilient. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
The post Threat hunting in Azure Advanced Threat Protection (ATP) appeared first on Microsoft Security.