Much of the anti-adversarial research has focused on the potential for minute, largely undetectable alterations to images (researchers generally refer to these as “noise perturbations”) that cause AI’s machine learning (ML) algorithms to misidentify or misclassify the images. Adversarial tampering can be extremely subtle and hard to detect, down to changes in individual pixels. If an attacker can introduce nearly invisible alterations to image, video, speech, or other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.

Growing threat to deployed AI apps
This is no idle threat. Eliciting false algorithmic inferences can cause an AI-based app to make incorrect decisions, such as when a self-driving vehicle misreads a traffic sign and then turns the wrong way or, in a worst-case scenario, crashes into a building, vehicle, or pedestrian. Though the research literature focuses on simulated adversarial ML attacks conducted in controlled laboratory environments, general knowledge that these attack vectors exist will almost certainly encourage terrorists, criminals, or mischievous parties to exploit them.
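To make the idea concrete, here is a minimal sketch of how a small, bounded perturbation can flip a classifier's decision. The model is a toy linear classifier and the weights, inputs, and epsilon are invented for illustration; real attacks such as FGSM apply the same gradient-sign idea to deep networks.

```python
# Toy "noise perturbation" attack on a linear classifier.
# All numbers below are hypothetical, chosen only to illustrate the effect.

def score(w, b, x):
    """Linear decision function: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    return 1 if score(w, b, x) >= 0 else -1

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def perturb(w, x, y_true, eps):
    """FGSM-style step: nudge each feature by eps in the direction that
    increases the loss for the true label. For a linear model that
    direction is -y_true * sign(w_i) per feature."""
    return [xi + eps * (-y_true) * sign(wi) for xi, wi in zip(x, w)]

w, b = [1.0, -0.5], 0.0
x = [1.0, 1.0]                       # clean input, classified as +1
x_adv = perturb(w, x, y_true=1, eps=0.5)

print(classify(w, b, x))             # 1
print(classify(w, b, x_adv))         # -1: small per-feature noise flips the label
```

Each feature moves by at most 0.5, yet the predicted class inverts; in image terms, that bound corresponds to an imperceptible per-pixel change.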
The year was 2012, and a revised security protocol called OAuth 2 swept the web, letting users log in to websites through trusted security providers. Many single sign-on systems, from AWS’s Cognito to Okta, implement OAuth. OAuth is what enables you to “authenticate with Google” or another provider to a completely different website or application.
It works like a beer festival. You go to a desk and authenticate with your ID (and some money), and they give you tokens. From there, you go to each beer tent and exchange a token for a beer. The individual brewer does not need to check your ID or ask if you paid. They just take the token and hand you a beer. OAuth works the same way, but with websites instead of beers.
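The beer-festival analogy can be sketched in code: a single "front desk" (the authorization server) checks credentials once and issues opaque tokens, while each "tent" (a resource server) only validates the token and never re-checks identity. This is a conceptual sketch, not a real OAuth implementation; the names and single-use token policy are assumptions made for illustration.

```python
import secrets

VALID_IDS = {"alice"}          # festival-goers with a valid ID (hypothetical)
issued_tokens = set()          # tokens the front desk has handed out

def front_desk(user_id):
    """Authorization server: verify identity once, then issue a token."""
    if user_id not in VALID_IDS:
        raise PermissionError("ID check failed")
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def beer_tent(token):
    """Resource server: redeem a token without re-checking identity."""
    if token not in issued_tokens:
        raise PermissionError("invalid token")
    issued_tokens.remove(token)   # tokens are single-use in this sketch
    return "beer"

token = front_desk("alice")
print(beer_tent(token))           # "beer" -- no ID check at the tent
```

The key property mirrors OAuth: the tent trusts the token issuer, so credentials are presented in exactly one place.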
Developers often want to do the “right” thing when it comes to security, but they don’t always know what that is. In order to help developers continue to move quickly, while achieving better security outcomes, organizations are turning to DevSecOps.
DevSecOps is the mindset shift of making all parties who are part of the application development lifecycle accountable for the security of the application, by continuously integrating security across your development process. In practice, this means shifting security reviews and testing left—i.e., shifting from auditing or enforcing at deployment time to checking security controls earlier at build or development time.
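As a concrete sketch of a shift-left control, here is a build-time gate that fails before deployment if a declared dependency matches a known advisory. The advisory data, package names, and CVE identifier below are invented for illustration; a real pipeline would query an actual vulnerability database or run a dedicated dependency scanner.

```python
# Hypothetical advisory data for illustration only.
ADVISORIES = {
    ("leftpadx", "1.2.0"): "CVE-XXXX-0001 (example)",
}

def parse_requirements(text):
    """Parse 'name==version' lines from a requirements-style file."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps.append((name.strip(), version.strip()))
    return deps

def security_gate(requirements_text):
    """Return a list of findings; an empty list means the build may proceed."""
    findings = []
    for dep in parse_requirements(requirements_text):
        if dep in ADVISORIES:
            findings.append(f"{dep[0]}=={dep[1]}: {ADVISORIES[dep]}")
    return findings

reqs = "requestsx==2.0.0\nleftpadx==1.2.0\n"
for finding in security_gate(reqs):
    print("BLOCKED:", finding)
```

Because the check runs at build time rather than at deployment, the developer sees the finding while the offending change is still cheap to fix.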
Code Risk Analyzer is described by IBM as a security measure that can be configured to run at the start of a developer’s code pipeline, analyzing and reviewing Git repositories to discover issues with open source code. The goal is to help application teams recognize cybersecurity threats, prioritize application security problems, and resolve security issues. IBM Cloud Continuous Delivery helps provision toolchains, automate tests and builds, and control software quality with analytics.
In the cloud-native space, microservice architectures and containers are reshaping the way that enterprises build and deploy applications. In a word, they function differently from traditional monolithic applications.
Microservices are far more distributed and dynamic than their traditional counterparts. A single application might have tens or hundreds of microservices, leading potentially to thousands of separate OS-level processes, each with its own API, deployed across multiple data centers all over the world, and spun up and down dynamically. These architectural differences from monolithic applications cause challenges for developers, operations, and security alike.