When Okta launched its $50 million Okta Ventures investment fund in April, one of its investments was in an early stage privacy startup called DataGrail. Today, the companies announced a partnership that they hope will help boost DataGrail, while providing Okta customers with a privacy tool option.
DataGrail CEO and co-founder Daniel Barber says that with the increase in privacy legislation, from GDPR to the upcoming California Consumer Privacy Act (and many other proposed bills in various stages of progress), companies need tools to help them comply and protect user privacy. “We are a privacy platform focused on delivering continuous compliance for businesses,” Barber says.
They do this in a way that fits nicely with Okta’s approach to identity. Whereas Okta provides a place to access all of your cloud applications from a single place with one logon, DataGrail connects to your applications with connectors to provide a way to monitor privacy across the organization from a single view.
It currently has 180 connectors to common enterprise applications like Salesforce, HubSpot, Marketo and Oracle. It then collects this data and presents it to the company in a central interface to help ensure privacy. “Our key differentiator is that we’re able to deliver a live data map of the customer data that exists within an organization,” Barber explained.
The company just launched last year, but Barber sees similarities in their approaches. “We see clear alignment on our go-to-market approach. The product that we built aligns very similarly to the way Okta is deployed, and we’re a true partner with the industry leader in identity management,” he said.
Monty Gray, SVP and head of corporate development at Okta, says that the company is always looking for innovative companies that fit well with Okta. The company liked DataGrail enough to contribute to the startup’s $5.2 million Series A investment in July.
Gray says that while DataGrail isn’t the only privacy company it’s partnering with, he likes how DataGrail is helping with privacy compliance in large organizations. “We saw how DataGrail was thinking about [privacy] in a modern fashion. They enable these technology companies to become not only compliant, but do it in a way where they were not directly in the flow, that they would get out of the way,” Gray explained.
Barber says having the help of Okta could help drive sales, and for a company that’s just getting off the ground, having a public company in your corner as an investor, as well as a partner, could help push the company forward. That’s all that any early startup can hope for.
In yet another letter seeking to pry accountability from Facebook, the chair of a British parliamentary committee has pressed the company over its decision to adopt a policy on political ads that supports flagrant lying.
In the letter Damian Collins, chair of the DCMS committee, asks the company to explain why it recently took the decision to change its policy regarding political ads — “given the heavy constraint this will place on Facebook’s ability to combat online disinformation in the run-up to elections around the world”.
Facebook have dropped a ban on “deceptive, false or misleading content” in political ads.
— Digital, Culture, Media and Sport Committee (@CommonsCMS) October 22, 2019
“The change in policy will absolve Facebook from the responsibility of identifying and tackling the widespread content of bad actors, such as Russia’s Internet Research Agency,” he warns, before going on to cite a recent tweet by the former chief of Facebook’s global efforts around political ads transparency and election integrity who has claimed that senior management ignored calls from lower down for ads to be scanned for misinformation.
“I also note that Facebook’s former head of global elections integrity ops, Yael Eisenstat, has described that when she advocated for the scanning of adverts to detect misinformation efforts, despite engineers’ enthusiasm she faced opposition from upper management,” writes Collins.
Facebook hired me to head Elections Integrity ops for political ads. I asked if we could scan ads for misinfo. Engineers had great ideas. Higher ups were silent. Free speech is b.s. answer when FB takes $ for ads. Time to regulate ads same as tv and print.https://t.co/eKJmH7Sa7r
— Yael Eisenstat (@YaelEisenstat) October 9, 2019
In a further question, Collins asks what specific proposals Eisenstat’s team made; to what extent Facebook determined them to be feasible; and on what grounds were they not progressed.
He also asks what plans Facebook has to formalize a working relationship with fact-checkers over the long run.
A Facebook spokesperson declined to comment on the DCMS letter, saying the company would respond in due course.
In a naked display of its platform’s power and political muscle, Facebook deployed a former politician to endorse its ‘fake ads are fine’ position last month — when head of global policy and communication, Nick Clegg, who used to be the deputy prime minister of the UK, said: “We do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.”
So, in other words, if you’re a politician you get a green light to run lying ads on Facebook.
Clegg was giving a speech on the company’s plans to prevent interference in the 2020 US presidential election. The only line he said Facebook would be willing to draw was if a politician’s speech “can lead to real world violence and harm”. But from a company that abjectly failed to prevent its platform from being misappropriated to accelerate genocide in Myanmar that’s the opposite of reassuring.
“At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves,” said Clegg. “We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.”
In truth Facebook roundly fails to protect its platform from outside interference too. Inauthentic behavior and fake content is a ceaseless firefight that Facebook is nowhere close to being on top of, let alone winning. But on political ads it’s not even going to try — giving politicians around the world carte blanche to use outrage-fuelling disinformation and racist dogwhistles as a low budget, broad reach campaign strategy.
We’ve seen this before on Facebook of course, during the UK’s Brexit referendum — when scores of dark ads sought to whip up anti-immigrant sentiment and drive a wedge between voters and the European Union.
And indeed Collins’ crusade against Facebook as a conduit for disinformation began in the wake of that 2016 EU referendum.
Since then the company has faced major political scrutiny over how it accelerates disinformation — and has responded by creating a degree of transparency on political ads, launching an archive where this type of advert can be searched. But that appears as far as Facebook is willing to go on tackling the malicious propaganda problem its platform accelerates.
In the US, senator Elizabeth Warren has been duking it out publicly with Facebook on the same point as Collins rather more directly — by running ads on Facebook saying it’s endorsing Trump by supporting his lies.
There’s no sign of Facebook backing down, though. On the contrary. A recent leak from an internal meeting saw founder Mark Zuckerberg attacking Warren as an “existential” threat to the company. While, this week, Bloomberg reports that Facebook executives have been quietly advising a Warren rival for the Democratic nomination, Pete Buttigieg, on campaign hires.
So a company that hires politicians to senior roles, advises high profile politicians on election campaigns, tweaks its policy on political ads after a closed door meeting with the current holder of the office of US president, Donald Trump, and ignores internal calls to robustly police political ads, is rapidly sloughing off any residual claims to be ‘just a technology company’. (Though, really, we knew that already.)
In the letter Collins also presses Facebook on its plan to roll out end-to-end encryption across its messaging app suite, asking why it can’t limit the tech to WhatsApp only — something the UK government has also been pressing it on this month.
He also raises questions about Facebook’s access to metadata — asking whether it will use inferences gleaned from the who, when and where of e2e encrypted comms (even though it can’t access the what) to target users with ads.
Facebook’s self-proclaimed ‘pivot to privacy‘ — when it announced earlier this year a plan to unify its separate messaging platforms onto a single e2e encrypted backend — has been widely interpreted as an attempt to make it harder for antitrust regulators to break up its business empire, as well as a strategy to shirk responsibility for content moderation by shielding itself from much of the substance that flows across its platform while retaining access to richer cross-platform metadata so it can continue to target users with ads…
If you’ve read anything of mine in the past year, you know just how complicated security can be.
Every day it seems there’s a new security lapse, a breach, a hack, or an inadvertent exposure, such as leaving a cloud storage server unprotected without a password. These things happen, but they don’t have to; security isn’t as difficult as it sounds, but there’s no one-size-fits-all solution.
We asked Google’s Heather Adkins, Duo’s Dug Song, and IOActive’s Jennifer Sunshine Steffens for their best advice. Here’s what they had to say.
Quotes have been edited and condensed for clarity.

1. Don’t put off the security conversation
The one resounding message from the panel: don’t put security off.
“There are basically three areas where folks should start considering how to bucket those risks,” said Duo’s Song. “The first is corporate risk in defending your users and the applications they access. The second is application security and product risk. The third is around production security, making sure that the operation of your security program is something that keeps up with that risk. And then a fourth — a new and emerging space — is trust, and not just privacy, but also safety.”
It’s better to be proactive about security than to be reactive to a data breach; not only will it help your company bolster its security posture, but it also serves as an important factor in future fundraising negotiations.
Song said founders have a “very direct obligation” to think about security as soon as they take someone else’s money, but especially when a company starts gathering user or customer data. “You have to put yourself in the shoes of those folks whose data you have to protect,” he said. “It’s not just your existential threats to your business, but you do have a responsibility, right to figure out how to do this well.”
IOActive’s Steffens said startups are already a target — simply because it’s assumed many won’t have thought much about security.
“A lot of attackers will go after startups who have high value data, because they know security is not a priority and it’s going to be a lot easier to get ahold of,” she said. “Data these days is extraordinarily valuable.”

2. Start with the security basics
Google’s Adkins, who runs the search giant’s internal information security team, joined the company almost two decades ago when it was just the size of a large startup. Her job is to keep the company’s network, assets, and employees safe.
“When I got there, they were so fanatical about security already that half of the job was already done,” she said. “From the moment [Google] took its first search query, it was thinking about where those logs are stored, who has access to them, and what is its responsibility to its users.”
“Startups who are successful with security are those where the chief executive and the founders are fanatical from day one and understand what threats exist to the business and what they need to do to protect it,” she said.
Song said many popular products and technologies these days come with strong security by default, such as iPhones, Chromebooks, security keys and Windows 10.
“You’re better off than the 90% of large companies out there,” he said. “That’s one of those few strategic advantages you have as a smaller, nimbler organization that doesn’t have a lot of legacy,” he added. “You can do things better from the start.”
“A lot of the basics are still key,” said Steffens. “Even as we come out with the new shiny technology, having things like firewalls and antivirus, and multi-factor authentication.”
“Security doesn’t always have to be a money thing,” she said. “There’s a lot of open source technology that’s really great.”

3. Start looking at security as an investment
“The sooner you start thinking about security, the less expensive it is in the end,” said Steffens.
That’s because, the experts said, proactive security gives companies an edge over competitors who tack on security solutions after a breach. It’s easier and more cost-effective to get it right the first time without having to fill in gaps years later.
It might be a hard sell to funnel money into something where you won’t actively see financial returns, which is why founders should think of security as investments for the future. The idea is that if you spend a little money at the start, it can save you down the line from the inevitable — a security incident that will cost you in bad headlines, lost customer trust, and potentially fines or other sanctions.
NordVPN, a virtual private network provider that promises to “protect your privacy online,” has confirmed it was hacked.
The admission comes following rumors that the company had been breached. It first emerged that NordVPN had an expired internal private key exposed, potentially allowing anyone to spin up their own servers imitating NordVPN.
VPN providers are increasingly popular as they ostensibly shield your internet browsing traffic from your internet provider and the sites you visit. That’s why journalists and activists often use these services, particularly when they’re working in hostile states. These providers channel all of your internet traffic through one encrypted pipe, making it more difficult for anyone on the internet to see which sites you are visiting or which apps you are using. But often that means displacing your browsing history from your internet provider to your VPN provider. That’s left many providers open to scrutiny, as often it’s not clear if each provider is logging every site a user visits.
For its part, NordVPN has claimed a “zero logs” policy. “We don’t track, collect, or share your private data,” the company says.
But the breach is likely to cause alarm that hackers may have been in a position to access some user data.
NordVPN told TechCrunch that one of its datacenters was accessed in March 2018. “One of the datacenters in Finland we are renting our servers from was accessed with no authorization,” said NordVPN spokesperson Laura Tyrell.
The attacker gained access to the server — which had been active for about a month — by exploiting an insecure remote management system left by the datacenter provider, a system NordVPN said it was unaware existed.
NordVPN did not name the datacenter provider.
“The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn’t have been intercepted either,” said the spokesperson. “On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated man-in-the-middle attack to intercept a single connection that tried to access NordVPN.”
According to the spokesperson, the expired private key could not have been used to decrypt the VPN traffic on any other server.
NordVPN said it found out about the breach a “few months ago,” but the spokesperson said the breach was not disclosed until today because the company wanted to be “100% sure that each component within our infrastructure is secure.”
A senior security researcher we spoke to who reviewed the statement and other published evidence, but asked not to be named as they work for a company that requires authorization to speak to the press, called these findings “troubling.”
“While this is unconfirmed and we await further forensic evidence, this is an indication of a full remote compromise of this provider’s systems,” the security researcher said. “That should be deeply concerning to anyone who uses or promotes these particular services.”
NordVPN said “no other server on our network has been affected.”
But the security researcher warned that NordVPN was ignoring the larger issue of the attacker’s possible access across the network. “Your car was just stolen and taken on a joy ride and you’re quibbling about which buttons were pushed on the radio?” the researcher said.
The company confirmed it had installed intrusion detection systems, a popular technology that companies use to detect early breaches, but “no-one could know about an undisclosed remote management system left by the [datacenter] provider,” said the spokesperson.
It’s also believed several other VPN providers may have been breached around the same time. Similar records posted online — and seen by TechCrunch — suggest that TorGuard and VikingVPN may have also been compromised, but spokespeople did not return a request for comment.
Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.
As it looks to modernize its operations, the Los Angeles Fire Department is turning to a number of new technologies, including expanding its fleet of drones for a slew of new deployments.
One of the largest fire departments in the U.S., alongside those of New York and Chicago, the LAFD has a budget of roughly $691 million, employs more than 3,500 people and responded to 492,717 calls in 2018.
The department already has a fleet of 11 drones to complement its fleet of 258 fire engines, ambulances and helicopters.
However, Battalion Chief Richard Fields, the head of the department’s Unmanned Aerial Systems program, would like to see that number increase significantly.
Los Angeles has become an early leader in the use of drones for its firefighting applications thanks in part to an agreement with the Chinese company DJI, which the department inked back in April.
At the time, the Chinese drone manufacturer and imaging technology developer announced an agreement to test and deploy DJI drones as an emergency response preparedness tool. The company called it one of DJI’s largest partnerships with a fire-fighting agency in the U.S.
“We are excited to be strengthening our partnership with the LAFD, one of the nation’s preeminent public safety agencies, to help them take advantage of DJI’s drone technology that has been purpose-built for the public safety sector,” said Bill Chen, Enterprise Partnerships manager at DJI, in a statement at the time. “Through our two-way collaboration, DJI will receive valuable insight into the complexities of deploying drones for emergency situations in one of the most complex urban environments in the nation.”
Now, roughly five months later, the program seems to have been successful enough that Battalion Chief Fields is looking to double the fleet.
“Our next iteration is to start using our drones to assist our specialized resources,” said Fields. Those are firefighters and support crews that deal with hazardous materials, urban search and rescue, marine environments and swift water rescues, Fields said.
The technological demands of the fire department extend beyond the drone itself, Fields said. “There are a lot of technologies that allow us to make the drone more versatile… the most valuable tool isn’t the drone; it’s the sensor.”
So far, the most useful application has been using infrared technologies to balance what’s visible and combine it with the heat signatures the sensors pick up.
Training to become a drone pilot for the LAFD is particularly intense, Fields says. The typical pilot will get up to 80 hours of training. “Our training is nation-leading. There’s nothing out there in the commercial market that beats it,” according to Fields.
For now, the entire LAFD fleet is composed of DJI drones, something that has given military and civilian officials pause in the past few years.
Concerns have been growing over the reliance on Chinese technology in core American infrastructure, extending from networking technology companies like Huawei to drone technology developers like DJI.
Back in 2018, the Department of Defense issued a ban on the acquisition and use of commercial drones, citing cybersecurity vulnerabilities. The ban came a year after officials from the Department of Homeland Security and members of Congress called out DJI specifically for its potential to be used by the Chinese government to spy on the United States.
However, the rule isn’t set in stone, and many branches of the military continue to use DJI drones, according to a September Voice of America News report.
In Los Angeles, Fields says he takes those concerns seriously. The department has worked closely with regulators and advocacy groups like the American Civil Liberties Union to craft a strict policy around what gets done with the data the LAFD collects.
“The way that we establish our program is that the drone provides us with our real-time situational awareness,” said Fields. “That helps the incident commander get a visual perspective of the problem and he can make better decisions.”
The only data that is recorded and kept, says Fields, is data collected around brush fires so the LAFD can do a damage assessment, which can later be turned into map layers to keep records of hotspots.
As for data that could be sent back to China, Fields says that any mapping of critical infrastructure is done without connecting to the internet. “It’s being collected on the drone and 90% of that information is how the drone is operating. There is some information of where the drone is and how it is and the [latitude] and [longitude] of the drone itself… That’s the data that’s being collected,” Fields says.
From Fields’ perspective, if the government is so concerned about the use of drones made by a foreign manufacturer, there’s an easy solution. Just regulate it.
“Let’s come up with a standard. If you use them in a federal airspace these are the check marks that you have to pass,” he says. “Saying that DJI drones are bad because they come from China [and] let’s throw them all out… that’s not an answer either.”
Mercedes-Benz car owners have said that the app they used to remotely locate, unlock and start their cars was displaying other people’s account and vehicle information.
TechCrunch spoke to two customers who said the Mercedes-Benz’ connected car app was pulling in information from other accounts and not their own, allowing them to see personal information — including names, locations, phone numbers, and other information — of other vehicle owners.
The apparent security lapse happened late Friday, before the app went offline “due to site maintenance” a few hours later.
It’s not uncommon for modern vehicles these days to come with an accompanying phone app. These apps connect to your car and let you remotely locate them, lock or unlock them, and start or stop the engine. But as cars become internet-connected and hooked up to apps, security flaws have allowed researchers to remotely hijack or track vehicles.
One Seattle-based car owner told TechCrunch that their app pulled in information from several other accounts. He said that both he and a friend, who are both Mercedes owners, saw the same car, belonging to another customer, in their respective apps, but every other account detail was different.
The car owners we spoke to said they were able to see the car’s recent activity, including the locations of where it had recently been, but they were unable to track the real-time location using the app’s feature.
When he contacted Mercedes-Benz, a customer service representative told him to “delete the app” until it was fixed, he said.
The other car owner we spoke to said he opened the app and found it also pulled in someone else’s profile.
“I got in contact with the person who owns the car that was showing up,” he told TechCrunch. “I could see the car was in Los Angeles, where he had been, and he was in fact there,” he added.
He said that he wasn’t sure if the app has exposed his private information to another customer.
“Pretty bad fuck up in my opinion,” he said.
The first customer reported that the “lock and unlock” and the engine “start and stop” features did not work on his app, somewhat limiting the impact of the security lapse. The other customer said they did not attempt to test either feature.
It’s not clear how the security lapse happened or how widespread the problem was. A spokesperson for Daimler, the parent company of Mercedes-Benz, did not respond to a request for comment on Saturday.
According to Google Play’s rankings, more than 100,000 customers have installed the app.
A similar security lapse hit Credit Karma’s mobile app in August. The credit monitoring company admitted that users were inadvertently shown other users’ account information, including details about credit card accounts and balances. But despite disclosing other people’s information, the company denied a data breach.
Thousands of ransomware victims may finally get some long-awaited relief.
New Zealand-based security company Emsisoft has built a set of decryption tools for Stop, a family of ransomware that includes Djvu and Puma, which they say could help victims recover some of their files.
Stop is believed to be the most active ransomware in the world, accounting for more than half of all ransomware infections, according to figures from ID-Ransomware, a free site that helps identify infections. But Emsisoft said that figure is likely to be far higher.
If you’ve never had ransomware, you’re one of the lucky ones. Ransomware is one of the more common ways criminals make money these days, by infecting computers with malware that locks files using encryption. Once the Stop ransomware infects a computer, it renames a user’s files with one of any number of extensions, replacing .jpg and .png files with .radman, .djvu and .puma, for example. Victims can unlock their files in exchange for a ransom demand — usually a few hundred dollars in cryptocurrency.
Not all ransomware is created equal. Some security experts have been able to unlock some victims’ files without paying up by finding vulnerabilities in the code that powers the ransomware, allowing them in some cases to reverse the encryption and return a victim’s files back to normal.
Stop is the latest ransomware that researchers at Emsisoft have been able to crack.
“The latest known victim count is about 116,000. It’s estimated that’s about one-quarter of the total number of victims.”
“It’s more of a complicated decryption tool than you would normally get,” said Michael Gillespie, the tools’ developer and a researcher at Emsisoft. “It is a very complicated ransomware,” he said.
In Stop’s case, it encrypts user files with either an online key that’s pulled from the attacker’s server; or an offline key, which encrypts users’ files when it can’t communicate with the server. Gillespie said many victims have been infected with offline keys because the attackers’ web infrastructure was often down or inaccessible to the infected computer.
Here’s how the tools work.
The ransomware attackers give each victim a ‘master key,’ said Gillespie. That master key is combined with the first five bytes of each file that the ransomware encrypts. Some filetypes, like .png image files, share the same five bytes in every file. By comparing an original file with an encrypted file and applying some mathematical computations, he can decrypt not only that .png file but other .png files encrypted the same way.
Some filetypes share the same initial five bytes. Most modern Microsoft Office documents, like .docx and .pptx, share the same five bytes as .zip files. With a before-and-after pair from any one of these filetypes, the others can be decrypted too.
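The principle behind this kind of known-plaintext recovery can be sketched in a few lines. This is a hypothetical illustration, assuming a keystream-style cipher where each encrypted byte is the plaintext XORed with key material; the real Stop ransomware’s math is more involved, as Gillespie notes, but the idea — a matching “before and after” pair of one filetype unlocks other files that start with the same bytes — is the same.

```python
# Hypothetical sketch of known-plaintext recovery. Assumes a stream-cipher
# style scheme (ciphertext = plaintext XOR keystream); the actual Stop
# ransomware scheme is more complicated, so this only illustrates the idea.

def recover_keystream(original: bytes, encrypted: bytes) -> bytes:
    """XOR a known plaintext with its ciphertext to expose the keystream."""
    return bytes(o ^ e for o, e in zip(original, encrypted))

def decrypt(encrypted: bytes, keystream: bytes) -> bytes:
    """Apply the recovered keystream to another file encrypted the same way."""
    return bytes(e ^ k for e, k in zip(encrypted, keystream))

# Every PNG begins with the same 8-byte signature, so one known pair
# recovers enough key material for the start of every other .png file.
PNG_MAGIC = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

# Simulated ciphertext of a PNG header (values are made up for the demo).
encrypted_header = bytes([0xAB] * 8)
keystream = recover_keystream(PNG_MAGIC, encrypted_header)

# Another "encrypted" PNG header decrypts with the same keystream.
assert decrypt(encrypted_header, keystream) == PNG_MAGIC
```

This also shows why a pair is needed per filetype: the recovered key material only covers bytes whose plaintext is known, so a .png pair says nothing about a file format with a different signature.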
There’s a catch. The decryption tool is “not a cure all” for your infected computer, said Gillespie.
“The victim has to find a good before and after of basically every format that they want to recover,” he said.
Once the system is clean of the ransomware, he said victims should try to look for any files that were backed up. That could be default Windows wallpapers, or it can mean going through your email and finding an original file that you sent and matching it with the now-encrypted file.
When the user uploads a “before and after” pair of files to the submission portal, the server will do the math and figure out if the pair of files are compatible and will spit back which extensions can be decrypted.
But there are pitfalls, said Gillespie.
“Any infections after the end of August 2019, unfortunately there’s not much we can do unless it was encrypted with the offline key,” he said. If an online key was pulled from the attacker’s server, victims are out of luck. He added that files submitted to the portal have to be above 150 kilobytes in size or the decryption tools won’t work, because that’s how much of the file the ransomware encrypts. And some file extensions will be difficult if not impossible to recover because each file extension handles the first five bytes of the file differently.
“The victim really needs to put in some effort,” he said.
This isn’t Gillespie’s first rodeo. For a time, he was manually processing decryption keys for victims whose files had been encrypted with an offline key. He built a rudimentary decryption tool, the aptly named STOPDecrypter, which decrypted some victims’ files. But keeping the tool up to date was a cat and mouse game he was playing with the ransomware attackers. Every time he found a workaround, the attackers would push out new encrypted file extensions in an effort to outwit him.
“They were keeping me on my toes constantly,” he said.
Since the launch of STOPDecrypter, Gillespie has received thousands of messages from people whose systems have been encrypted by the Stop ransomware. By posting on the Bleeping Computer forums, he has been able to keep victims up to date with his findings and updates to his decryption tool.
But as some victims became more desperate to get their files back, Gillespie has faced the brunt of their frustrations.
“The site’s moderators were patiently responding. They’ve kept the peace,” he said. “A couple of other volunteers on the forums have also been helping explain things to victims.”
“There’s been a lot of community support trying to help in every little small bit,” he said.
Gillespie said the tool will also be fed into Europol’s No More Ransom Project so that future victims will be notified that a decryption tool is available.
Popular app Snaptube caught serving invisible ads and charging users for premium purchases they haven’t made
A popular video downloader app for Android has been found generating fake ad clicks and unauthorized premium purchases from its users, according to a security firm.
Snaptube, which boasts some 40 million users, allows users to download videos and music from YouTube, Facebook, and other major video sites. The app, developed in China, is not on Google Play because the app maker claims Google will not allow video downloader apps on the store. Some third-party app stores estimate Snaptube has been downloaded over a billion times to date. The app’s developer says that the app is “safe” to use.
But researchers at London-based security firm Upstream, which shared its findings exclusively with TechCrunch, said the free app ends up costing consumers.
Upstream’s chief executive Guy Krief said users are served invisible ads, without their knowledge, that run silently on the device, allowing the app maker to generate ad revenue at the expense of churning through a user’s mobile data and battery power. The app also uses the same background click technique to rack up premium purchase charges that the user never asked for.
Krief said the only indication that a user’s device might be used in this way is if their mobile data usage increases, their device gets warm, and the battery runs out faster than usual.
The company pinned the blame on a third-party software development kit (SDK) code, known as Mango, embedded inside Snaptube’s app. Mango was also used in Vidmate, a similar video downloader app also accused of ad fraud behavior; as well as 4shared, a cloud storage app.
According to Upstream, this third-party code kit downloads additional components from a central server in order to engage in this fraudulent ad activity, and uses chains of redirection and obfuscation to hide what it is doing.
Mango is particularly sneaky, said Krief. Within hours of the news breaking that Vidmate’s app was engaged in similar suspicious behavior, his company saw Snaptube’s suspicious activity drop almost immediately. “Our assumption back then was they’re probably also using similar code and they went silent because of all the publicity,” he said in a phone call.
Two months later, the same suspicious activity in Snaptube’s app resumed.
Krief said it was “very common” to see apps engaging in ad fraud to go through bursts of high levels of activity, followed by periods of quiet.
In recent weeks Upstream said it’s blocked more than 70 million suspicious transactions originating from four million devices, according to data from its proprietary security platform. The company said consumers could have been charged tens of millions of dollars in unwanted premium charges had those clicks not been blocked.
Snaptube said in a statement: “We didn’t realize the Mango SDK was exercising advertising fraud activities, which brought us major loss in brand reputation.”
“After the user complained about the malicious behavior of the Mango SDK, we quickly responded and terminated all cooperations with them,” a spokesperson said. “The versions on our official site as well as our maintained distribution channels are free of this issue already.”
Snaptube said it was “considering” legal action against the Mango developers.
It’s not the first time Snaptube has been caught out engaging in potentially fraudulent activity. In February, security firm Sophos found the app engaging in similar fraudulent behavior — generating and reporting fake ad clicks and racking up costs for the user. Later in the year, Snaptube responded to reports that Android devices were warning users that the app contained the suspicious third-party code, noting that it would “terminate” using the code “as soon as possible.”
That promise was made in August. Yet, some three months later, the code remains in the app.
Galaxy S10 users should turn on some alternative security features as Samsung works to address a major flaw with the device’s in-screen fingerprint sensor. The consumer electronics giant acknowledged the issue today after a British user reported the ability to unlock her device with unregistered fingerprints.
The flaw was discovered after placing a $3.50 screen protector on the device, confirming earlier reports that adding one could introduce an air gap that interfered with the ultrasonic scanner. The company noted the issue in a statement, telling the press that it was, “aware of the case of S10’s malfunctioning fingerprint recognition and will soon issue a software patch.”
Third-party companies, including Korean bank KakaoBank, have suggested users turn off the reader until the issue is addressed. That certainly appears to be the most logical course of action until the next software update.
When the phone hit the market back in March, Samsung touted the technology as one of the industry’s most secure biometric features, noting that it was, “engineered to be more secure than a traditional 2D optical scanner, the industry-first Ultrasonic Fingerprint ID, with sensors embedded in the display, reads the 3D contours of your physical fingerprint to keep your phone and data safe. This advanced biometric security technology earned the Galaxy S10 the world’s first FIDO Alliance Biometric Component certification.”
Samsung has warned against the use of screen protectors previously, but the ability to fool the product with a cheap, off-the-shelf mobile accessory clearly presents a major and unexpected security concern for Galaxy users. We’ve reached out to Samsung for further comment.
MyGate, a Bangalore-based startup that offers security management and convenience services for guard-gated premises, said today it has bagged over $50 million in a new financing round as it looks to expand its footprint in the nation.
Chinese internet giant Tencent, Tiger Global, JS Capital, and existing investor Prime Venture Partners funded the three-year-old startup’s $56 million Series B financing round. The new round pushes MyGate’s total fundraise to-date to $67.5 million.
MyGate offers an eponymous mobile app that allows home residents to approve entries and exits, communicate with their neighbors, log attendance, pay society maintenance bills and pay wages to daily help workers.
The startup says it is operational in 11 cities in India and has amassed over 1.2 million home customers. Its customer base is increasing by 20% each month, it claimed. The service is handling 60,000 requests each minute and clocking over 45 million check-in requests each month.
The idea of MyGate came after its co-founder and CEO, Vijay Arisetty, left the Indian armed forces. In an interview with TechCrunch, he said his family was appalled to learn about the poor state of security across societies in India.
“This was also when e-commerce companies and food delivery firms were beginning to gain a strong foothold in the nation. This meant that many people were entering a gated community each day,” he said.
MyGate has inked partnerships with many e-commerce players to create a system to offer a silent and secure delivery experience for its users. The startup also trains guards to understand the system.
According to industry estimates, more than 4.5 million people in India today live in gated communities, and that figure is growing by 13% each year. The private security industry in the country is a $15 billion market.
Arisetty says he believes the startup can significantly accelerate its growth because its solution is built for a price-sensitive market. Using MyGate costs an apartment about Rs 20 (28 cents) per month. Even at that price, the startup says it is profitable. “Today, we are seeing more demand than we can handle,” he said.
That’s where the new funding would come into play for the startup, which today employs about 700 people.
The startup plans to use the fresh capital to expand its technology infrastructure, its marketing and operations teams and build new features. The startup aims to reach 15 million homes in 40 Indian cities in the next 18 months.
In a statement, Sanjay Swamy, Managing Partner at Prime Venture Partners, said, “it’s been great to see a fledgling startup execute consistently and holistically, and grow into a category-creating market-leader.”
This morning, the Justice Department announced that it had brought charges against the administrator and hundreds of users of the “world’s largest” child sexual exploitation marketplace on the dark web.
For me, it marked the end of a story I’ve wanted to write for two years.
In November 2017, I was working for CBS as the security editor at ZDNet. A hacker group reached out to me over an encrypted chat claiming to have broken into a dark web site running a massive child sexual exploitation operation. I was stunned. I had previous interactions with the hacker group, but nothing like this.
The group claimed it broke into the dark web site, which it said was titled “Welcome to Video,” and identified four real-world IP addresses of the site, said to be different servers running this supposedly behemoth child abuse site. They also provided me with a text file containing a sample of a thousand IP addresses of individuals who they said had logged in to the site. The hackers boasted about how they siphoned off the list as users logged in, without the users’ knowledge, and had over a hundred thousand more — but they would not share them.
If the claims were proven true, the hackers had made a major breakthrough: not only had they discovered a major dark web child abuse site, they could potentially identify its owners and its visitors.
But at the time, we could not prove it.
My then editor-in-chief and I discussed how we could approach the story. A primary concern was that the dark web site was already under federal investigation, and writing about it could jeopardize that effort.
But we also faced another headache: there was no legal way we could access the site to verify it was what the hackers claimed.
“Children around the world are safer because of the actions taken by U.S. and foreign law enforcement to prosecute this case and recover funds for victims.”
Jessie K. Liu, U.S. Attorney for the District of Columbia
The hackers gave me a username and password for the site, which they said they had created just for me to verify their claims. But we could not access the site for any reason — even for journalistic reasons and in a controlled environment — for fear that the site may display child abuse imagery. Only federal agents working an investigation are allowed to access sites that contain illegal content. While journalists have a lot of flexibility and freedoms, this was not one of them.
After a call with several CBS lawyers, we decided that there was no legal way to write the story without verifying the site’s contents, something we legally weren’t able to do.
The story was dead, but the site wasn’t.
One thing the lawyers couldn’t tell me is if I should report the findings to the government. That was ultimately my decision to make. It’s a bizarre situation to be in. As a cybersecurity and national security reporter, the government all too often is “the nemesis,” often a target of journalistic inquisitions and investigations. But while journalists are told to report and observe and not get involved, there are exceptions. Risk to life and child exploitation are top of the list. A journalist cannot idly stand by knowing that there could be a car bomb sitting outside a building, ready to detonate. Nor can one dismiss the idea of a child abuse site continuing to operate on the dark web.
I spoke with a well-known journalist to ask for ethical advice. We agreed to speak on background, from reporter to reporter. Having never faced a situation like this, my primary concern was to ensure I was on the right moral, ethical and legal side of things. Was it right to report this to the feds?
The answer was simple and expected: Yes, it was right to report the information to the authorities, so long as I protected my source. Protecting your sources is one of the cardinal rules of journalism, but my source was a hacker group — it was not the dark web site itself. After all, I was working under the assumption that the authorities would not care much for the source information anyway.
I reached out to a contact at the FBI, who passed me on to a special agent at a field office. After a brief phone call, I emailed the four IP addresses slated to be the dark web site’s real-world location, and the list of the thousand alleged users of the site.
And then silence. I heard nothing back. I followed up and asked, but the agent warned that if the site became — or was already — subject to investigation, there was little, if anything, they could say.
I recall the hackers were frustrated. After I told them I wouldn’t be writing the story, we stopped communicating.
Weeks went by. I felt just as frustrated at the lack of insight into what I had only guessed or hoped was progress by the federal agents.
I recall running the list of IP addresses that the hackers gave me through a resolver, which provided some limited insight into who might be visiting the dark web site. We found individuals accessing the dark web site from the networks of U.S. Army Intelligence, the U.S. Senate, the U.S. Air Force and the Department of Veterans Affairs, as well as Apple, Microsoft, Google, Samsung, and several universities around the world. We could not, however, identify specific individuals who accessed the site. And because the dark web is anonymized, it’s likely that the companies themselves didn’t know their staff were accessing the site.
How could they possibly let this go, I thought to myself, wondering whether the FBI agent had acted on the information I handed over. If there was an investigation it would take time and effort, and the wheels of government seldom move quickly. Would I ever know whether the perpetrators would ever be caught?
Today, two years later, I got my answer.
U.S. prosecutors said in the indictment, filed in August 2018 but unsealed Wednesday, that the dark web site — confirmed as “Welcome to Video” — had some 250,000 user-uploaded graphic images and videos of children who were being sexually abused. The government called it the “largest darknet child pornography website” in a press release.
This morning, after news of the site’s removal had been reported, I rifled through the documents posted on the Justice Department’s website and found a screenshot of the site, with the full web address in the address bar. It was a match. For the first time since the hackers told me of the dark web site, I went to the Tor browser and pasted in the address. It loaded — with the government’s “website seized” notice staring back at me.
According to the indictment, federal agents began investigating the site in September 2017, two months before the hackers breached the site. The site’s administrator, Jong Woo Son, had been running the operation from his residence in South Korea since 2015. The indictment said the main landing page to the site contained a security flaw that let investigators discover some of the IP addresses of the dark web site — simply by right-clicking the page and viewing the source of the website.
It was a major error, one that would trigger a chain of events that would ensnare the entire site and its users.
Prosecutors said in the indictment that they found several IP addresses: 188.8.131.52 and 184.108.40.206. One of the IP addresses the hackers gave me was 220.127.116.11 — an address on the same network subnet as the dark web site.
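For readers unfamiliar with subnets, an overlap like this is easy to check mechanically. As a rough sketch using Python’s standard ipaddress module (the addresses below are documentation placeholders, not the ones from the case), two hosts sit in the same /24 block if one falls inside the network derived from the other:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """Return True if both addresses fall inside the same network block.

    strict=False lets us derive the network from a host address
    (e.g. 203.0.113.10/24 -> 203.0.113.0/24) without raising an error.
    """
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# Placeholder addresses for illustration only
print(same_subnet("203.0.113.10", "203.0.113.77"))  # True: same /24 block
print(same_subnet("203.0.113.10", "198.51.100.5"))  # False: different blocks
```

Two addresses in the same /24 share a 256-address block, which is often, though not always, a sign they sit on the same hosting network.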
It was long-awaited confirmation that the hackers were telling the truth. They did in fact breach the site. But whether or not the government knew about the breach remains a mystery.
Some five months after I contacted the FBI, the government had obtained a warrant to seize and dismantle the dark web site. It’s believed the indictment was kept under seal until today in order to arrest, charge and prosecute individuals suspected of being involved in the site.
In total, there were 337 arrests, including a former Homeland Security special agent and a Border Patrol officer.
Authorities were able to rescue 23 children who were being actively abused.
I reached out to the federal agent this morning, and was told the FBI was not involved in the investigation. The Internal Revenue Service’s Criminal Investigation division, which investigates and prosecutes financial crimes, and the Homeland Security Investigations unit, which largely deals with human smuggling, child trafficking and related computer crimes, were credited with the work.
While authorities from the U.K. and South Korea contributed to the investigation, sources say the IRS received an anonymous tip that kickstarted it.
From there, the IRS used technology to trace bitcoin transactions, which the dark web site used to profit from the child exploitation videos. Users would have to pay in bitcoin to download content or upload their own child exploitation videos. The government also launched a civil forfeiture case to seize the bitcoins allegedly used by 24 individuals in five countries who are accused of funding the site.
The hacker group has not been in touch since we broke off communications. Publishing a story about the hack two years ago may have caused irreparable harm to the government’s investigation, potentially sinking it entirely. It was a frustrating time, not least being in the dark and not knowing if anyone was doing anything.
I’ve never been so glad to walk away from a story.
Jeff Glueck, chief executive of Foursquare, one of the largest location data platforms on the internet, is calling on lawmakers to pass legislation to better regulate the wider location data industry amid abuses and misuses of consumers’ personal data.
It comes in the aftermath of the recent location sharing scandal, which revealed how bounty hunters were able to get a hold of any cell subscriber’s real-time location data by obtaining the records from the cell networks. Vice was first to report the story. Since then there have been numerous cases of abuse — including the mass collection of vehicle locations in a single database, and popular iPhone apps that were caught collecting user locations without explicit permission.
The cell giants have since promised to stop selling location data but have been slow to act on their pledges.
In an opinion piece, Glueck called on Congress to push for a federal regulation that enforces three points.
Firstly, phone apps should not be allowed to access location data without explicitly stating how it will be used. Apple has already introduced a new location tracking privacy feature that tells users where their apps track them, and is giving them options to restrict that access — but all too often apps are not clear about how they use data beyond their intended use case.
“Why, for example, should a flashlight app have your location data?” he said, referring to scammy apps that push for device permissions they should not need.
Second, the Foursquare chief said any new law should provide greater transparency around what app makers do with location data, and give consumers the ability to opt-out. “Consumers, not companies, should control the process,” he added. Europe’s GDPR already allows this to some extent, as will California’s incoming privacy law. But the rest of the U.S. is out of luck unless the measures are pushed out federally.
And, lastly, Glueck said anyone collecting location data should promise to “do no harm.” By that, he said companies should apply privacy-protecting measures to all data uses by not discriminating against individuals based on their religion, sexual orientation or political beliefs. That would make it illegal for family tracking apps, for example, to secretly pass on location data to healthcare or insurance providers who might use that data to hike up a person’s premiums above normal rates by monitoring their driving speeds, he said.
For a business that relies on location data, it’s a gutsy move.
But Glueck hinted that businesses like Foursquare would be less directly affected as they already take a more measured and mindful approach to privacy, whereas the fast and loose players in the location data industry would face greater scrutiny and more enforcement action.
“These steps are necessary, but they’re not sufficient,” said Glueck. But he warned that Congress could do “great damage” if lawmakers push overly burdensome regulations onto smaller companies, which could increase overheads, put companies out of business and have a negative effect on competition.
“There’s no good reason that companies won’t be able to comply with reasonable regulation,” said Glueck.
“Comprehensive regulation will support future innovation, weed out the bad companies and earn the public trust,” he said.
The Justice Department says it’s dismantled one of the largest child exploitation sites on the dark web.
With the help of international partners in the U.K. and South Korea, U.S. prosecutors have brought charges against a South Korean citizen, Jong Woo Son, for conspiracy to advertise, produce, and distribute child abuse imagery.
Son was charged in August 2018 but the indictment was only unsealed Wednesday. NBC News was first to report the indictments.
The site contained more than 200,000 unique videos — some 8 terabytes of data — involving children.
Prosecutors said the site was only accessible on the dark web, a term used for an encrypted and anonymized version of the internet that’s accessible through services like the Tor anonymity network. Investigators identified the real-world internet location of the site by viewing the source of the website, which pointed to a server hosted at the defendant’s residence in South Korea.
“Darknet sites that profit from the sexual exploitation of children are among the most vile and reprehensible forms of criminal behavior,” said Brian A. Benczkowski, assistant attorney general. “Today’s announcement demonstrates that the Department of Justice remains firmly committed to working closely with our partners in South Korea and around the world to rescue child victims and bring to justice the perpetrators of these abhorrent crimes.”
More than three-dozen other individuals involved with the site have also been arrested and charged under various state and national laws.
Data privacy for apps is typically part of the purview of compliance teams — a model that isn’t always perfect, judging by the number of breaches and the extensive regulation that’s been (and is still being) put in place to force companies and organizations to behave better. Now, in an effort to improve how apps manage data privacy, a startup called Evervault — founded by a 19-year-old in Dublin, Ireland — is building a data protection solution aimed at developers, by way of an API, which aims to bake data protection into the app from the start.
To help get its product out to market, the company today is announcing that it has raised $3.2 million in seed funding from a high-profile set of investors. Led by Sequoia, the round also includes Kleiner Perkins, Frontline, SV Angel and other unnamed backers.
Ultimately, the aim will be to sell Evervault into any app or piece of software that uses PII (personally identifiable information), to help developers build encrypted “data cages” to handle the information from the moment it’s ingested.
“I believe that once data has been ingested, it should be encrypted and never decrypted again,” founder Shane Curran said in an interview. “There’s a philosophical argument to be made over offering privacy at different levels, but we’re looking at the holistic side of things. If your app gets breached, the data will not leak.”
The investor list in this seed round is impressive, but also all the more notable when you consider that Evervault’s product has yet to be released.
Curran, the company’s CEO — and as of the time of writing, only full-time employee — said in an interview that Evervault’s API has been built, but the startup is still working out how to run it efficiently at scale. The funding will be used to help with that, as well as to hire more people for the team. (There are two others joining Curran soon, he said.)
Evervault’s product is notable because it represents a shift in how companies are approaching data privacy.
“It’s moving away from compliance teams because it should be in products from day one,” Curran said, explaining why Evervault is focusing on developer tools.
“Originally, I thought of how to solve data protection from the developer side as a mathematical problem, but it was only having done the research and getting exposed to others in that space that it became something interesting to me,” he added.
A lot of the early work in building so-called “data cages”, Curran noted, was targeted at “crypto anarchists,” but that idea has evolved as data breaches have grown and the concept of data protection has entered the mainstream consciousness. This has opened up an opportunity to build solutions that let companies continue to operate online as they do, but in a way that keeps user data private.
“It should be something more reasonable beyond the idea of ‘companies should never touch our data,'” he said. The aim with Evervault, he said, is to make a service more secure with regards to personal data, but in a way that doesn’t compromise the experience for the user, or the app/software company itself.
The idea of building a completely new way of handling data protection, while using a method that will let businesses continue operating as they do, is part of what compelled some of this investment.
“Data is king, and the team at evervault is on a mission to solve the ‘how’ of ensuring data privacy,” said Mamoon Hamid, partner, Kleiner Perkins. “Their developer-first approach ensures that data privacy becomes part of the development fabric, instead of an afterthought left for compliance to troubleshoot. We’re thrilled to partner with evervault and help build the new Internet infrastructure for data privacy.”
On another level, the idea of baking data protection into the app’s code itself also follows on from a bigger trend in building apps, where components are brought in from outside by way of APIs rather than built from the ground up when they are not part of the team’s core competency, or commoditised processes.
It’s the same model we see when apps with voice interfaces might, for example, use NLP from Amazon rather than building that function in house; or an e-commerce service integrates a payment API from Stripe for transactions.
The Stripe comparison, it turns out, is relevant in more ways than one.
Curran first conceived of the idea that became Evervault when he was just 17, as the basis of what became his top-prize-winning submission for the BT Young Scientist & Technology Exhibition, an annual competition in Ireland. (His original description of the tech was this: “qCrypt: The quantum-secure, encrypted, data storage platform with multijurisdictional quorum sharding technology.”)
This happened to be the same competition that brought attention to Patrick Collison, Stripe’s co-founder and CEO, who in 2005 also won first prize in the Young Scientist Exhibition, when he was also still a student, for having developed a new computer language, CROMA, as a dialect of LISP to simplify coding.
Putting CROMA to one side, Collison went on to co-found and sell one company, Auctomatic, and then start the extremely successful Stripe.
And that is where the two (for now at least) have diverged. Since winning the competition in 2017, Curran has stayed with his original concept to see how much further he could take it.
Jetting off to the Bay Area to pitch it to what he referred to as the “Irish mafia” in Silicon Valley, one coffee led to another, and before he knew it, he was meeting with tier-one VCs and angels. They not only convinced him to develop his prize-winning idea into a business, but gave him money to do it.
For the record, Curran did finish high school but told me he spent only “a few days” at university before deciding to take a leave of absence to pursue Evervault.
“Shane has a special combination of clear vision, deep thoughtfulness and insatiable curiosity,” said Stephanie Zhan, partner at Sequoia, in a statement. “We are thrilled to partner with evervault at the seed, to solve for today’s massive data breaches and build simple developer tools for data privacy.”
It’s a pretty classic Silicon Valley story: gifted, young founder starts out with a curious, pure interest in a subject, has a coding breakthrough, turns entrepreneur to leverage that into a startup, and then finds funding and business success.
But as with all stories that follow this essentially fairy tale format, it glosses over some of the challenges: Curran is still only 19, he’s building a company from scratch, and his idea remains, essentially, untested as the product has not launched.
“Imposter syndrome is very real,” Curran said, before backtracking a bit. “I mean, I always knew what I wanted to do, but I would have never thought that Sequoia or the others would invest in me. Very spontaneously, this thing just fell together, but then I think, I couldn’t have done it any better. This does set the bar very high, but I’m not complaining.”
Twitter said it will restrict how users can interact with tweets from world leaders who break its rules.
The social media giant said in a tweet that the move will help keep its users informed while it takes responsibility for enforcing its rules.
Twitter said it will not allow users to like, reply to, share or retweet the offending tweets, but will instead let users quote-tweet them, allowing ordinary users to express their opinions.
We haven’t used this notice yet, but when we do, you will not be able to like, reply, share, or Retweet the Tweet in question. You will still be able to express your opinion with Retweet with Comment.
— Twitter Safety (@TwitterSafety) October 15, 2019
Twitter has been in a bind as of late over its rules.
“When it comes to the actions of world leaders on Twitter, we recognize that this is largely new ground and unprecedented,” Twitter said in an unbylined blog post on Tuesday. It comes amid allegations that Twitter has not taken action against world leaders who break the site’s rules. Last year, Twitter said it would not ban President Trump despite incendiary tweets, including one in which he appeared to threaten war with North Korea. Other world leaders, such as Iran’s supreme leader Ayatollah Seyed Ali Khamenei, have had their tweets hidden from the site in cases where threats were made against individuals.
“We want to make it clear today that the accounts of world leaders are not above our policies entirely,” the company said. That includes promotion of terrorism, making “clear and direct” threats of violence, and posting private information.
“Our goal is to enforce our rules judiciously and impartially,” Twitter added in a tweet. “In doing so, we aim to provide direct insight into our enforcement decision-making, to serve public conversation, and protect the public’s right to hear from their leaders and to hold them to account.”
Elastic acquired Endgame Security in June for $234 million, and as a result of that deal, today the company announced Elastic Endpoint security to help customers secure laptops and servers. It also announced the acquisition has officially closed.
Elastic CEO and co-founder Shay Banon says that the company has already been helping threat hunters inside organizations find security events via its security information and event management (SIEM) tool. With Endgame, it wanted to extend its security coverage to laptops and servers. It’s probably not a coincidence that Endpoint is built on top of Elastic technology.
The company announced that it’s going to offer an unusual pricing model for this tool. Banon says that instead of charging per machine, as is the industry norm, it’s going to charge based on the amount of data stored. He says the change is essential to carrying consistent security coverage across the range of tools.
“We deeply believe in order to converge segments like SIEM and endpoint, you not only want to have the same technology stack, but you also want to provide customers with the same packaging and pricing. This is a first in the endpoint market, and we think it’s a big deal when it comes to security users and CISOs and CIOs out there,” Banon told TechCrunch.
Elastic is at its heart a search tool, but it has been expanding what that search tool covers over the years beyond web and enterprise search to other areas like applications performance management, log management and security.
Today’s announcement is about expanding that security component to enable the company to offer more comprehensive coverage across an organization. Endgame’s 150 employees, who are mostly engineers and data scientists, have joined Elastic and will provide the company with a machine learning knowledge boost to help make sense of the growing amounts of data across the Elastic toolset.
Endgame is based in Arlington, Virginia and will keep its offices there. It raised over $111 million (according to Crunchbase data) before being acquired.
Germany is resisting US pressure to shut out Chinese tech giant Huawei from its 5G networks — saying it will not ban any supplier for the next-gen mobile networks on an up front basis, per Reuters.
“Essentially our approach is as follows: We are not taking a pre-emptive decision to ban any actor, or any company,” government spokesman, Steffen Seibert, told a news conference in Berlin yesterday.
The country’s Federal Network Agency is slated to publish detailed security guidance on the technical and governance criteria for 5G networks in the next few days.
The next-gen mobile technology delivers faster speeds and lower latency than current-gen cellular technologies, as well as supporting many more connections per cell site. So it’s being viewed as the enabling foundation for a raft of futuristic technologies — from connected and autonomous vehicles to real-time telesurgery.
But increased network capabilities that support many more critical functions means rising security risk. The complexity of 5G networks — marketed by operators as “intelligent connectivity” — also increases the surface area for attacks. So future network security is now a major geopolitical concern.
German business newspaper Handelsblatt, which says it has reviewed a draft of the incoming 5G security requirements, reports that chancellor Angela Merkel stepped in to intervene to exclude a clause which would have blocked Huawei’s market access — fearing a rift with China if the tech giant is shut out.
The newspaper says that earlier this year the federal government pledged the highest possible security standards for regulating next-gen mobile networks, saying also that systems should be sourced only from “trusted suppliers”. But those commitments have since been watered down by economic considerations at the top of the German government.
The decision not to block Huawei’s access has attracted criticism within Germany, and flies in the face of continued US pressure on allies to ban the Chinese tech giant over security and espionage risks.
The US imposed its own export controls on Huawei in May.
A key concern attached to Huawei is that back in 2017 China’s Communist Party passed a national intelligence law which gives the state swingeing powers to compel assistance from companies and individuals to gather foreign and domestic intelligence.
For network operators outside China the problem is that Huawei has the lead as a global 5G supplier, meaning any ban on it would translate into delays to network rollouts. German operators have warned of years of delay and billions of dollars in added cost to 5G launches.
Another issue is that Huawei’s 5G technology has also been criticized on security grounds.
A report this spring by a UK oversight body set up to assess the company’s approach to security was damning — finding “serious and systematic defects” in its software engineering and cyber security competence.
Though a leak shortly afterwards from the UK government suggested it would allow Huawei partial access — to supply non-core elements of networks.
An official UK government decision on Huawei has been delayed, causing ongoing uncertainty for local carriers. In the meantime, a government review of the telecoms supply chain this summer called for tougher security standards and updated regulations, with major fines for failure. So it’s possible that stringent UK regulations might sum to a de facto ban if Huawei’s approach to security isn’t seen to take major steps forward soon.
According to Handelsblatt’s report, Germany’s incoming guidance for 5G network operators will require carriers to identify critical areas of network architecture and apply an increased level of security. (Although it’s worth pointing out there’s ongoing debate about how to define critical/core network areas in 5G networks.)
The Federal Office for Information Security (BSI) will be responsible for carrying out security inspections of networks.
Last week a pan-EU security threat assessment of 5G technology highlighted risks from “non-EU state or state-backed actors” — in a coded jab at Huawei.
The report also flagged increased security challenges attached to 5G vs current gen networks on account of the expanded role of software in the networks and apps running on 5G. And warned of too much dependence on individual 5G suppliers, and of operators relying overly on a single supplier.
Shortly afterwards the WSJ obtained a private risk assessment by EU governments — which appears to dial up regional concerns over Huawei, focusing on threats linked to 5G providers in countries with “no democratic and legal restrictions in place”.
Among the discussed risks in this non-public report are the insertion of concealed hardware, software or flaws into 5G networks; and the risk of uncontrolled software updates, backdoors or undocumented testing features left in the production version of networking products.
“These vulnerabilities are not ones which can be remedied by making small technical changes, but are strategic and lasting in nature,” a source familiar with the discussions told the WSJ — which implies that short term economic considerations risk translating into major strategic vulnerabilities down the line.
5G alternatives are in short supply, though.
US Senator Mark Warner recently floated the idea of creating a consortium of ‘Five Eyes’ allies — aka the U.S., Australia, Canada, New Zealand and the UK — to finance and build “a Western open-democracy type equivalent” to Huawei.
But any such move would clearly take time, even as Huawei continues selling services around the world and embedding its 5G kit into next-gen networks.
Shipping tech giant Pitney Bowes has confirmed a cyberattack on its systems.
The company said in a statement that its systems were hit by a “malware attack that encrypted information” on its systems, more commonly known as ransomware.
“At this time, the company has seen no evidence that customer or employee data has been improperly accessed,” the statement said, but many of its internal systems are offline, causing disruption to client services and other corporate processes.
Pitney Bowes was affected by a malware attack which impacted some systems & disrupted client access to some of our services. We apologize for any disruption to your systems. We are working to restore affected systems. Please visit https://t.co/ixUa5FCGUQ for updates.
— Pitney Bowes (@PitneyBowes) October 14, 2019
The company said it’s working with a third-party consultant to address the issue. But it’s not immediately known what kind of ransomware encrypted its systems.
A spokesperson, when reached, did not comment beyond the published statement.
Pitney Bowes is a widely used shipping tech company that provides mailing services to sellers, with more than 1.5 million clients across the world, including in the Fortune 500. The company helps sellers mail items and goods more easily and efficiently, and is widely used by sellers on marketplaces like Etsy and Shopify.
Several customers on Twitter complained that they were unable to perform basic tasks on their accounts. Some account and product support pages, as well as downloads, remain unavailable.
It’s the latest in a string of attacks on high-profile businesses. In the past few months, drinks giant Arizona Beverages, aluminum maker Norsk Hydro, and science services company Eurofins have all been hit by ransomware.
Last week, the FBI warned of “high impact” ransomware attacks targeting larger businesses.
Google has revealed its latest Titan security key — and it’s now compatible with USB-C devices.
The latest Titan key arrives just weeks after its closest market rival Yubico — which also manufactures the Titan security key for Google — released its own USB-C and Lightning compatible key, but almost two years after the release of its dedicated USB-C key.
These security keys offer near-unbeatable security against a variety of threats to your online accounts, from phishing to nation-state attackers. When you want to log in to one of your accounts, you plug in the key to your device and it authenticates you. Most people don’t need a security key, but they are available for particularly high-risk users, like journalists, politicians, and activists, who are frequently targeted by hostile nation states.
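The plug-in-and-authenticate flow described above is a challenge-response protocol: the service sends a fresh random challenge, and only the physical key can produce a valid response. The toy sketch below illustrates the idea using a shared HMAC secret from the Python standard library; this is a deliberate simplification, since real FIDO2/WebAuthn keys sign the challenge with a per-site private key and never reveal any secret. All names here (`ToyKey`, `public_handle`, `verify`) are invented for illustration.

```python
# Toy challenge-response sketch, loosely modeled on what a security key does.
# Simplification: a shared HMAC secret stands in for the public-key signature
# a real FIDO2 key would produce; here the "handle" IS the secret, which a
# real key would never disclose.
import hmac
import hashlib
import os

class ToyKey:
    def __init__(self):
        self._secret = os.urandom(32)  # lives on the "key" hardware

    def public_handle(self) -> bytes:
        # Shared with the service at registration time (simplified).
        return self._secret

    def respond(self, challenge: bytes) -> bytes:
        # The key proves possession by keying a MAC over the challenge.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def verify(handle: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(handle, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)

key = ToyKey()
challenge = os.urandom(16)  # fresh per login, so a captured response can't be replayed
assert verify(key.public_handle(), challenge, key.respond(challenge))
```

Because the challenge is random and bound to the login attempt, a phishing site that captures one response cannot reuse it, which is part of why these keys resist phishing so well.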
By Google’s own data, security keys are far stronger than other options, like a text message sent to your phone.
Many companies, like Coinbase, Dropbox, Facebook, Twitter and Google, support the use of security keys. But although the list of supported companies is not vast, it continues to grow as security key usage increases.
Google said its newest key will be available from October 15 for $40.
Sophos announced this morning that private equity firm Thoma Bravo has agreed to buy the British company for £3.1 billion ($3.9 billion USD). The price is based on $7.40 USD per share, and the company indicated that the board of directors will recommend that shareholders accept the offer.
Sophos CEO Kris Hagerman, as you would expect, put the deal in the brightest possible light. “Sophos is actively driving the transition in next-generation cybersecurity solutions, leveraging advanced capabilities in cloud, machine learning, APIs, automation, managed threat response, and more. We continue to execute a highly-effective and differentiated strategy, and we see this offer as a compelling validation of Sophos, its position in the industry and its progress,” he said in a statement.
But private equity firms typically look for undervalued companies that they can purchase and either combine with other properties or build up in value. In a public filing, Thoma Bravo described Sophos as “a global leader in next-generation cybersecurity solutions spanning endpoint, next-generation firewall, cloud security, server security, managed threat response, and more.”
The company has 400,000 customers in 150 countries, 47,000 channel partners and more than 100 million users, according to the filing. The stock price was up this morning on the news, according to reports.
It’s worth noting that just last week, TechCrunch’s Zack Whittaker reported on “a vulnerability in [Sophos’] Cyberoam firewall appliances, which a security researcher says can allow an attacker to gain access to a company’s internal network without needing a password.” The company issued an advisory last week on the problem, indicating it had issued a patch on September 30th.