
Privacy

Uber tests push notification ads, a feature literally no one wants • ZebethMedia

Uber recently launched its new advertising division and in-app ads. Apparently, those ads aren’t staying within the app. Instead, ads from other companies are being sent out as push notifications, much to the chagrin of some Uber users. Over the weekend, people turned to Twitter to complain about the notifications, sharing screenshots of ads, including one particularly popular one from Peloton that Uber had sent out. One of the primary complaints: notifications are being sent out when users aren’t engaging with the app.

When Uber first announced its in-app ad “experience,” the company didn’t mention the potentially intrusive implications. Uber told ZebethMedia this “was a limited test and users can always manage their mobile notification settings under Privacy and then Notifications in the app.” The company did not respond in time to follow-up questions from ZebethMedia, including how many users are included in the test, whether it is tracking how many users turn off ad push notifications, how long the test is scheduled to last and whether Uber will fully implement push notification ads in the future.

Uber’s in-app ads feature a single brand for the entire trip. The so-called “journey ads” let brands show a user different ads at three points of a trip: while waiting for a car, while riding and upon reaching the destination. Brands are able to “personalize” ads to each user based on their travel history and geographic destinations. It’s not clear whether Uber uses the same type of data for its push notification ads.

Meta hit with antitrust breach order in Turkey for combining user data across Fb, WhatsApp, Instagram • ZebethMedia

Meta won’t be quaking at the size of the penalty it’s just been handed by Turkey’s competition authority, which announced a 346.72 million lira sanction today. The circa $18.6M fine pales in comparison to a number of recent stings from European regulators, such as the $267M fine for WhatsApp in the European Union just over a year ago for transparency breaches of the bloc’s data protection framework, or the $70M spank a year ago from the UK’s competition authority after it said Meta failed to comply with information requests during scrutiny of its purchase of Giphy. It was subsequently ordered by the UK’s CMA to undo that acquisition too, so the whole sorry saga will likely cost it considerably more.

Plenty more data protection complaints are still hanging over its head too, such as the one targeting its EU-US data flows that could see an order to suspend those transfers — and essentially shutter its service in Europe — in the coming months unless a looming replacement for the defunct Privacy Shield framework can be rushed into place first.

Still, it’s the crux of the Turkish fine — that Meta holds a dominant position in social media and sought to obstruct competitors by combining data between separate services it operates — that’s likely to send a chill down the social networking giant’s spine, because its business runs on profiling people. And that runs on its ability to obtain people’s data and flesh out detailed ad profiles. So any regulatory roadblocks that cut into its ability to conduct unfettered surveillance of Internet users pose an existential threat to its core microtargeting ad model.

The Turkish action is also of note because Germany’s competition regulator has had a similar concern for years. It started investigating Facebook’s ‘superprofiling’ all the way back in March 2016, going on to confirm an abuse finding in a February 2019 order which concluded that the company’s trampling of user privacy amounted to “exploitative abuse” and a violation of its dominant position in social networking. Hence the German FCO ordered Facebook to stop combining data on users of its different products. But Meta appealed, and an enforcement battle over that earlier German data separation order continues. The appeal was referred up to the bloc’s top court in March 2021 and is still pending a judgement (likely next year). But an opinion put out by an influential advisor to the CJEU last month favored allowing antitrust authorities to consider data protection compatibility as part of their assessment of competition rules — which, if the court follows the AG’s view, would be bad news for Meta across the EU, as it would open the door to more competition watchdogs taking a non-siloed, ‘big picture’ view of what it’s doing when assessing any antitrust concerns.

There is therefore a growing sense that international regulators are — gradually, inexorably — closing in on Meta’s legacy of moving fast and breaking things (or, as appears a better description of its modus operandi, hoovering up all the data and pooling it into a massive data lake far from the reach of any user control, per leaked internal documents).
“By combining the data collected by [Meta] from Facebook, Instagram and WhatsApp services… it causes the deterioration of competition by making it difficult for competitors with personal social networking services operating in online display advertising markets and creates barriers to entry to the market,” the Turkish competition authority wrote in a decision published today — following the culmination of an investigation — explaining its decision to impose an administrative fine [the decision text is in Turkish; we’ve translated it here using machine translation].

The authority’s investigation kicked off last year after a controversial change to WhatsApp’s T&Cs caused a major privacy backlash around the world. And consumer protection regulators in Europe remain concerned about its T&Cs confusing consumers. So there could be more enforcements coming down the pipe on that front, too. (In addition to the massive GDPR ‘transparency’ fine mentioned above — and potentially more GDPR enforcements on a backlog of complaints still being chewed over by the tech giant’s lead data protection regulator in the EU.)

The Turkish competition authority found unanimously that Meta holds a dominant position in the social media market and unanimously concluded its behavior amounted to a breach of local competition law. As well as being issued with a fine, the tech giant has been ordered to cease the violation — and establish “effective competition in the market” — with a deadline of one month to notify the authority of the steps it will take to do that, and a maximum of six months (from today’s decision) to implement the measures, once approved. Meta has also been ordered to report back to the regulator on the measures it’s taking for a period of five years.

The tech giant was contacted for comment on the Turkish authority’s sanction. A Meta spokesperson emailed this brief line — but did not confirm whether or not it will file an objection: “We disagree with the findings of the Turkish Competition Authority. We protect our users’ privacy and provide people with transparency and control over their data. We will consider all our options.”

One thing is clear: Meta’s business is facing costly regulatory incursions on multiple fronts — which are threatening its ability to keep a grip on the world’s attention by ignoring privacy laws; threatening its ability to do that through the route of acquiring and assimilating other businesses to grab data that way (as well as threatening its ability to combine data across separate services it already owns); and threatening its ability to try to evade this legacy regulatory reckoning by skating its business to where it thinks the puck is headed (aka ‘the metaverse’) — by blocking its ability to use its market muscle to buy up VR startups that are seeing some nascent success (in what may, in any case, be overhyped vaporware).

Australia to toughen privacy laws with huge hike in penalties for breaches • ZebethMedia

Australia has confirmed an incoming legislative change will significantly strengthen its online privacy laws following a spate of data breaches in recent weeks, such as the Optus telco breach last month.

“Unfortunately, significant privacy breaches in recent weeks have shown existing safeguards are inadequate. It’s not enough for a penalty for a major data breach to be seen as the cost of doing business,” said its attorney-general, Mark Dreyfus, in a statement at the weekend. “We need better laws to regulate how companies manage the huge amount of data they collect, and bigger penalties to incentivise better behaviour.”

The changes will be made via an amendment to the country’s privacy laws, following a long process of consultation on reforms. Dreyfus said the Privacy Legislation Amendment (Enforcement and Other Measures) Bill 2022 will increase the maximum penalty that can be applied under the Privacy Act 1988 for serious or repeated privacy breaches from the current AU$2.22 million (~$1.4M) to whichever is the greater of:

- AU$50 million (~$32M);
- 3x the value of any benefit obtained through the misuse of information; or
- 30% of a company’s adjusted turnover in the relevant period.

In other words, the new cap is the largest of those three figures (an illustrative calculation appears at the end of this article). These amounts are substantially higher than those in an earlier draft of the reform last year (when penalties of AU$10M or 10% of turnover were being considered). Major breaches such as at Optus — and another that followed hard on its heels, at the health insurer Medibank Private — appear to have concentrated lawmakers’ minds. The change of government earlier this year also means there’s a new broom at work.

Additional changes trailed by Dreyfus include greater powers for the Australian information commissioner and a beefed-up Notifiable Data Breaches scheme, to provide the privacy watchdog with a more comprehensive view of what’s been compromised in a breach so it can assess the risk of harm to individuals. The information commissioner and the Australian Communications and Media Authority will also be furnished with greater information-sharing powers to enable more regulatory joint-working. Both agencies opened investigations into Optus following last month’s breach.

The privacy legislation amendment bill is slated to be presented to Australia’s parliament this week, per Reuters. The Attorney-General’s Department is also undertaking a comprehensive review of the Privacy Act that’s due to be completed this year, with recommendations expected for further reform, it said.

“I look forward to support from across the Parliament for this Bill, which is an essential part of the Government’s agenda to ensure Australia’s privacy framework is able to respond to new challenges in the digital era. The Albanese Government is committed to protecting Australians’ personal information and to further strengthening privacy laws,” added Dreyfus.
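To make the proposed penalty structure concrete, here is a minimal sketch of the “greater of” calculation described above. The function name and the figures plugged in are hypothetical, chosen purely for illustration; only the three statutory tiers come from the bill as described.

```python
def max_privacy_penalty_aud(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Illustrative sketch of the proposed Australian maximum penalty:
    the greatest of AU$50M, 3x the benefit obtained through the misuse
    of information, or 30% of adjusted turnover in the relevant period.
    (Hypothetical helper for illustration; not part of any real API.)
    """
    return max(
        50_000_000,                # flat AU$50 million tier
        3 * benefit_obtained,      # 3x the value of any benefit obtained
        0.30 * adjusted_turnover,  # 30% of adjusted turnover in the period
    )

# Made-up example: a company that gained AU$5M from misused data and had
# AU$400M adjusted turnover would face a cap of AU$120M (30% of turnover).
print(max_privacy_penalty_aud(5_000_000, 400_000_000))  # 120000000.0
```

Under the current AU$2.22 million cap, that same hypothetical breach would top out at a tiny fraction of the new figure, which is the point Dreyfus makes about penalties no longer being just the cost of doing business.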

France fines Clearview AI maximum possible for GDPR breaches • ZebethMedia

Clearview AI, the controversial facial recognition firm that scrapes selfies and other personal data off the Internet without consent to feed an AI-powered identity-matching service it sells to law enforcement and others, has been hit with another fine in Europe. This one comes after it failed to respond to an order last year from the CNIL, France’s privacy watchdog, to stop its unlawful processing of French citizens’ information and delete their data. Clearview responded to that order by, well, ghosting the regulator — thereby adding a third GDPR breach (non-cooperation with the regulator) to its earlier tally.

Here’s the CNIL’s summary of Clearview’s breaches:

- Unlawful processing of personal data (breach of Article 6 of the GDPR)
- Individuals’ rights not respected (Articles 12, 15 and 17 of the GDPR)
- Lack of cooperation with the CNIL (Article 31 of the GDPR)

“Clearview AI had two months to comply with the injunctions formulated in the formal notice and to justify them to the CNIL. However, it did not provide any response to this formal notice,” the CNIL wrote in a press release today announcing the sanction [emphasis its own]. “The chair of the CNIL therefore decided to refer the matter to the restricted committee, which is in charge for issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to article 83 of the GDPR [General Data Protection Regulation].”

The EU’s GDPR allows for penalties of up to 4% of a firm’s worldwide annual revenue for the most serious infringements — or €20M, whichever is higher. The CNIL’s press release makes clear it’s imposing the maximum amount it possibly can here.

Whether France will see a penny of this money from Clearview remains an open question, however. The US-based privacy-stripper has been issued with a slew of penalties by other data protection agencies across Europe in recent months, including €20M fines from Italy and Greece, and a smaller UK penalty. But it’s not clear it has handed over any money to any of these authorities — and they have limited resources (and legal means) to try to pursue Clearview for payment outside their own borders. So the GDPR penalties look mostly like a warning to stay away from Europe.

Clearview’s PR agency, LakPR Group, sent us this statement following the CNIL’s sanction, which it attributed to CEO Hoan Ton-That: “There is no way to determine if a person has French citizenship, purely from a public photo from the internet, and therefore it is impossible to delete data from French residents. Clearview AI only collects publicly available information from the internet, just like any other search engine like Google, Bing or DuckDuckGo.”

The statement goes on to reiterate earlier claims by Clearview that it does not have a place of business in France or in the EU, nor undertake any activities that would “otherwise mean it is subject to the GDPR”, as it puts it — adding: “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.” (NB: On paper the GDPR has extraterritorial reach, so the former argument is meaningless, while its claim that it’s not doing anything that would make it subject to the GDPR looks absurd given it’s amassed a database of over 20 billion images worldwide and Europe is, er, part of Planet Earth…)

Ton-That’s statement also repeats a claim much trotted out in Clearview’s public statements responding to the flow of regulatory sanctions its business attracts: that it created its facial recognition tech with “the purpose of helping to make communities safer and assisting law enforcement in solving heinous crimes against children, seniors and other victims of unscrupulous acts” — not to cash in by unlawfully exploiting people’s privacy. Not that, in any case, having a ‘pure’ motive would make any difference to its requirement, under European law, to have a valid legal basis to process people’s data in the first place.

“We only collect public data from the open internet and comply with all standards of privacy and law. I am heartbroken by the misinterpretation by some in France, where we do no business, of Clearview AI’s technology to society. My intentions and those of my company have always been to help communities and their people to live better, safer lives,” concludes Clearview’s PR.

Each time it has received a sanction from an international regulator it has done the same thing: denying it has committed any breach and disputing that the foreign body has any jurisdiction over its business. So its strategy for dealing with its own data processing lawlessness appears to be simple non-cooperation with regulators outside the US. Obviously this only works if you plan for your execs and senior personnel to never set foot in the territories where your business is under sanction, and abandon any notion of selling the sanctioned service to overseas customers. (Last year Sweden’s data protection watchdog also fined a local police authority for unlawful use of Clearview — so European regulators can act to clamp down on any local demand too, if required.)

On home turf, Clearview has finally had to face up to some legal red lines recently. Earlier this year it agreed to settle a lawsuit that had accused it of running afoul of an Illinois law banning the use of individuals’ biometric data without consent. The settlement included Clearview agreeing to some limits on its ability to sell its software to most US companies, but it still trumpeted the outcome as a “huge win” — claiming it would be able to circumvent the ruling by selling its algorithm (rather than access to its database) to private companies in the U.S.

The need to empower regulators so they can order the deletion (or market withdrawal) of algorithms trained on unlawfully processed data does look like an important upgrade to their toolboxes if we’re to avoid an AI-fuelled dystopia. And it just so happens that the EU’s

DuckDuckGo’s beta Mac app is open to public with new features • ZebethMedia

DuckDuckGo’s web browser for Mac is now available as an open beta test, the Internet privacy company announced today. Six months after the app rolled out as a closed beta, DuckDuckGo has added new features in version 0.30, including a “Duck Player” to defend users from targeted ads and cookies when they watch YouTube videos. With the new Duck Player, YouTube will still register views; however, none of the videos watched will contribute to a user’s YouTube advertising profile, so they won’t see personalized ads.

Other new features include pinned tabs, a bookmark bar, the ability to view locally stored browsing history, immediate access to built-in email protection, and support for the open-source password manager Bitwarden. The company also revealed an improved Cookie Consent Pop-Up Manager, writing in its blog that it can automatically handle cookie pop-ups on “significantly more sites.” Another upgrade to DuckDuckGo for Mac: when it blocks ads, the whitespace left behind is now removed.

DuckDuckGo aims to be an all-in-one privacy solution that’s easy to use for everyday browsing. In June, its mobile app took the No. 2 spot for search engines in the U.S., Canada, Australia and the Netherlands. “Since announcing the waitlist beta in April, we’ve been listening to beta testers’ feedback and making even more improvements to meet your needs,” the company wrote. DuckDuckGo noted that more built-in features would be added over time. DuckDuckGo for Windows is launching as a private waitlist beta in the coming months.

DataGrail announces automated risk assessment tool and $45M investment • ZebethMedia

DataGrail has always focused on helping companies comply with the growing world of privacy regulation, building plug-ins to common data-heavy applications to help automate data discovery and compliance. Today, it’s building on that with a new automated risk monitoring solution that helps companies build third-party application risk assessments quickly. While they were at it, the startup also announced a $45 million Series C investment.

Company CEO and co-founder Daniel Barber says the product has evolved into a data privacy control center where customers can get a better understanding of their customers’ data privacy requirements. “We’ve seen the market move towards needing to control [privacy] because largely businesses have been out of control with how they’re managing privacy, while consumers are expecting control. And so we’ve really formed this thesis around the need for a privacy control center,” Barber explained.

To help, the company has over 1,400 plug-ins, up from 900 when we spoke last year, which monitor what kinds of data are being collected and how the data moves across applications inside a company. He said they built the new Risk Monitor tool as a way to take advantage of the company’s understanding of these data flows and the risks involved. “We’re announcing this product called Risk Monitor, and what we’re really talking about here is, as part of regulatory requirements, many of them require businesses to do assessments of risk,” he said.

The tool is designed to help build these assessments, known as Data Protection Impact Assessments (DPIAs), in an automated way, reducing the amount of labor involved in building a DPIA on the data used in a particular tool. This reduces the workload for privacy managers, while showing others inside a company what good privacy practice looks like. “What we’ve done is, using our 1400 plus integrations and the existing information we know about risk and the third-party risk associated with those applications, we can pre-fill and create intelligent workflows that automate the entire [DPIA process] here to reduce the number of people involved and needed in the privacy program, while effectively centralizing that risk,” he said.

In spite of the economic uncertainty that exists today, Barber says the company has grown revenue 3x since we spoke in March 2021 at the time of its $30 million Series B announcement. It has also grown from 40 employees last year to over 100 today, with plans to perhaps double that in the next year, powered by the new capital from the Series C investment.

He says that as he builds the workforce, he is focused on building a diverse and inclusive company. “It’s something that’s kind of built into the DNA of the business from the beginning. So at the board level, we have equal women and men on the board, which is quite unusual for boards to have equal representation by gender, and we have equal representation at the executive level as well,” he said. The company also has gender parity at the management level. While he understands that there are many dimensions to diversity, he has achieved gender diversity across all levels of the company.

As for the $45 million Series C, it was led by Third Point Ventures with participation from Thomson Reuters Ventures and Sixty Degree Capital, along with previous investors Felicis Ventures, Operator Collective, Next47, Cloud Apps Capital and other unnamed investors. The startup has now raised over $84 million.

Demanding employees turn on their webcams is a human rights violation, Dutch Court rules • ZebethMedia

When Florida-based Chetu hired a telemarketer in the Netherlands, the company demanded the employee turn on his webcam. The employee wasn’t happy with being monitored “for 9 hours per day” in a program that included screen-sharing and streaming his webcam. When he refused, he was fired, according to public court documents (in Dutch), for what the company stated was ‘refusal to work’ and ‘insubordination.’

The Dutch court didn’t agree, however, ruling that “instructions to keep the webcam turned on is in conflict with the respect for the privacy of the workers.” In its verdict, the court goes so far as to suggest that demanding webcam surveillance is a human rights violation.

“I don’t feel comfortable being monitored for 9 hours a day by a camera. This is an invasion of my privacy and makes me feel really uncomfortable. That is the reason why my camera is not on,” the court document quotes the anonymous employee telling Chetu. The employee pointed out that the company was already monitoring him: “You can already monitor all activities on my laptop and I am sharing my screen.” According to the court documents, the company’s response to that message was to fire him.

That might have worked in an at-will state such as Chetu’s home state of Florida, but it turns out that labor laws work a little differently in other parts of the world. The employee took Chetu to court for unfair dismissal, and the court found in his favor, ordering the company to pay the employee’s court costs, back wages, unused vacation days and a number of other costs, plus a fine of $50,000, and to remove the employee’s non-compete clause.

“Tracking via camera for 8 hours per day is disproportionate and not permitted in the Netherlands,” the court found in its verdict, and it further rams home the point that this monitoring is against the employee’s human rights, quoting from the Convention for the Protection of Human Rights and Fundamental Freedoms: “(…) video surveillance of an employee in the workplace, be it covert or not, must be considered as a considerable intrusion into the employee’s private life (…), and hence [the court] considers that it constitutes an interference within the meaning of Article 8 [Convention for the Protection of Human Rights and Fundamental Freedoms].”

Chetu, in turn, was apparently a no-show for the court case. Via NL Times.

ACLU’s Jennifer Stisa Granick and Google’s Maddie Stone talk security and surveillance at Disrupt • ZebethMedia

In a world filled with bad actors and snooping governments, surveillance is the one factor that affects almost every business across the globe. While companies like Apple, Signal and LastPass fight against surveillance using end-to-end encryption and by shunning mass data collection — you can’t hand over data you don’t have — too many companies, big and small, remain unaware and deeply vulnerable to prying eyes.

The fast-changing surveillance landscape is why we’re thrilled that Jennifer Stisa Granick, the ACLU’s surveillance and cybersecurity counsel, and Maddie Stone, a security researcher on Google’s Project Zero team, will join us onstage at ZebethMedia Disrupt on October 18–20 in San Francisco. In a panel discussion called “Surveillance in Startup Land,” Granick and Stone will join ZebethMedia security editor Zack Whittaker to present a crash course on the surveillance state to inform, educate and inspire early-stage founders to think about how to protect their users and customers from threats they haven’t even thought of yet.

We’ll discuss today’s emerging threats, like how spyware makers such as NSO Group, Cytrox and Candiru, whose tools let governments secretly wiretap phones in real time, and data brokers — the companies that trade in people’s personal information and granular location data — represent an ever-increasing threat to privacy and civil liberties. Surveillance isn’t just in the United States — it’s everywhere — and change can happen quickly and unexpectedly. Case in point: fears over healthcare data tracking and privacy became a reality after the U.S. Supreme Court overturned Roe v. Wade, the landmark legal case that guaranteed a person’s constitutional right to abortion.

The decisions that founders and investors make today can and will affect millions tomorrow. We can’t wait to hear our panelists weigh in on how companies should think about what they’re building now — and in the future — so they don’t inadvertently become extensions of the surveillance state.

Jennifer Stisa Granick fights for civil liberties in an age of massive surveillance and powerful digital technology. As the surveillance and cybersecurity counsel with the ACLU Speech, Privacy and Technology Project, she litigates, speaks and writes about privacy, security, technology and constitutional rights. Granick is the author of the book “American Spies: Modern Surveillance, Why You Should Care, and What to Do About It,” published by Cambridge University Press and winner of the 2016 Palmer Civil Liberties Prize.

Maddie Stone is a security researcher on Google’s Project Zero team, where she focuses on zero-day exploits actively used in the wild. Previously, she served as a reverse engineer and team lead on the Android security team, focusing predominantly on preinstalled and off-Google Play malware. Stone holds a Bachelor of Science, with a double major in computer science and Russian, and a Master of Science in computer science from Johns Hopkins University.

ZebethMedia Disrupt takes place on October 18–20 in San Francisco. Buy your pass today and find out why Disrupt is the place where startups go to grow. Is your company interested in sponsoring or exhibiting at ZebethMedia Disrupt 2022? Contact our sponsorship sales team by filling out this form.
