Zebeth Media Solutions

Government & Policy

How can students work or launch a startup while maintaining their immigration status? • ZebethMedia

Sophie Alcorn is the founder of Alcorn Immigration Law in Silicon Valley and 2019 Global Law Experts Awards’ “Law Firm of the Year in California for Entrepreneur Immigration Services.” She connects people with the businesses and opportunities that expand their lives.

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies. “Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.” ZebethMedia+ members receive access to weekly “Dear Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.

Dear Sophie,

I’m studying bioinformatics at a university in the U.S. What options do I have to work before and after graduation on my student visa? Do any of these options allow me to launch my own startup?

— Wanting to Work

Dear Wanting,

I applaud your enthusiasm to get to work! The opportunity to work and get training in your field is one of the draws of studying in the U.S. Complex immigration rules and regulations for international students — not to mention processing delays and time limits — can make things challenging, but all you need is a little planning to overcome those challenges!

Your ability to work in your area of study — and for how long — depends on what type of student visa you hold:

- F-1 student visa
- J-1 educational and cultural exchange visa
- M-1 student visa
F-1 offers the most flexible work options

Image Credits: Joanna Buniak / Sophie Alcorn

The F-1 student visa offers the most options for working both before you graduate and after. Two types of training programs are available to most international students who hold an F-1 visa, making them eligible to work in their field of study:

- Curricular Practical Training (CPT) is available to students at some colleges and universities.
- Optional Practical Training (OPT) is available either before or after graduation. STEM OPT is a 24-month extension of OPT available to students who graduated with a STEM degree designated by the U.S. Department of Homeland Security.

Working under CPT

If CPT is available at a university or college, then students on F-1 visas are eligible if they have been enrolled full time for at least one academic year and have not yet graduated. Some graduate programs allow or even require students to apply for CPT at the very beginning of their program.
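The CPT eligibility rules just described can be encoded as a quick sanity-check sketch. This is purely illustrative: the function name and fields are invented for the example, and actual eligibility is always determined by your school's designated school official.

```python
def cpt_eligible(visa_type: str, full_time_years: float,
                 graduated: bool, school_offers_cpt: bool) -> bool:
    """Rough encoding of the CPT rules described above (illustrative only)."""
    return (
        visa_type == "F-1"          # CPT is an F-1 visa program
        and school_offers_cpt       # only some colleges/universities offer CPT
        and full_time_years >= 1.0  # full-time enrollment for at least one academic year
        and not graduated           # CPT is only available before graduation
    )

# Note: some graduate programs allow (or require) CPT from the start of the
# program, so the one-year condition above is a simplification.
print(cpt_eligible("F-1", 1.5, False, True))   # True
print(cpt_eligible("J-1", 2.0, False, True))   # False
```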

Don’t panic — this isn’t Tencent’s first tie-up with a state-owned firm • ZebethMedia

News on Tencent and China Unicom is causing a stir in China’s tech industry on Wednesday afternoon. The gaming and social networking behemoth and the state-owned carrier have received regulatory approval to set up a joint venture, according to a government announcement. Following the transaction, Tencent and China Unicom will respectively own 42% and 47% of the firm. The development has led to concerns over even greater government influence on China’s Big Tech. Some netizens go so far as to speculate that Tencent will eventually be de-privatized. This reaction is expected given that China has been tightening its grip on the internet industry over the past three years. Tencent’s gaming business, for instance, took a big hit when Beijing halted the issuance of new gaming permits. But a closer look at the notice suggests this new “mixed ownership” entity seems likely to have a limited impact on Tencent’s existing business. The entity, according to a filing in September, will center around two areas: content delivery networks and edge computing. CDN refers to a geographically distributed network of servers that work together to speed up content distribution for users, whereas edge computing means processing data at the periphery rather than the center of a network. Tencent’s cloud computing arm seems most pertinent to the new JV. The enterprise-facing segment has gained new significance as a revenue driver since China’s regulatory clampdown sent chills across the consumer internet sector. And it’s indeed in the area of web infrastructure where Tencent’s involvement in the public sector has been the most active. Tencent Cloud has a page dedicated to showcasing the sort of public services it empowers. From online government services to community centers with self-serve kiosks, one can find solutions supplied by Tencent — and in fact, by Alibaba, Baidu, and other tech giants as well.
Beijing has been working to digitize the government apparatus for years, and what better solution providers are there than its own tech darlings? Tencent has been touting WeChat’s role as digital infrastructure for government services since as early as 2019. The WeChat owner is no stranger to mixed ownership either. In 2017, China Unicom was seeking to raise $11.7 billion from a dozen investors — including Tencent and Alibaba — as part of Beijing’s push to revitalize state-owned enterprises with private capital, a structure dubbed ‘mixed ownership.’ Working with a state-owned entity doesn’t naturally imply a greater presence of the visible hand at Tencent. The goal of an SOE is to earn profits for the government, too. But undeniably, China’s private tech sector has been under growing pressure to align its interests with those of the state through a series of regulatory overhauls, often at the cost of its profitability. Ant Group has gone through a deep restructuring to play more like a traditional financial institution. Tencent has ramped up protection for minors and put more effort into educational games.

They’re not going to ban TikTok (but…) • ZebethMedia

We’ve been hearing for years how TikTok hoovers up data globally and presents it to its parent company in China, and potentially thence to the powers that be. But despite renewed calls today from FCC Commissioner Brendan Carr, the popular app is very unlikely to be outright banned. That doesn’t mean it will be allowed to carry on with impunity, though. Commissioner Carr’s opinion appeared in an interview with Axios, during which he stated that he doesn’t believe “anything other than a ban” would be sufficient to protect Americans’ data from collection by Chinese companies and authorities. (To be clear, this is him expressing his own position, not the FCC’s; I asked two others at the agency for comment and have not received any response.) This isn’t the first time Carr has voiced this idea. After BuzzFeed News reported data improprieties implied by leaked internal communications, he wrote in June to Apple and Google calling the app an “unacceptable national security risk” and asking the companies to remove it from their app stores. They didn’t, and now it’s back to the question of federal action — first pondered by the Trump administration, which despite many actions restricting China’s reach in the U.S. never managed to get a lock on TikTok. The reason for that is pretty simple: it would be political self-sabotage. TikTok is not just a wildly popular app, it’s the liferaft to which a generation that abandoned the noble ships Facebook, Instagram, and soon Twitter has clung for years. And the reason why is that American companies haven’t come close to replicating TikTok’s feat of algorithmic addiction. TikTok’s success in gluing Gen Z to their phones isn’t necessarily a good or bad thing — that’s a different discussion. Taking its place in the zeitgeist as a given, however, a ban is politically risky for multiple reasons. First, it would be tremendously unpopular.
The disaffected-youth vote is supremely important right now, and any President, Senator, or Representative who supports such a ban would be given extreme side-eye by the youth. Already out of touch with technology and the priorities of the younger generation, D.C. would now also be seen as the fun police. Whether that would drive voters to the other side or just cause them not to vote at all, there aren’t any good outcomes. Banning TikTok does not secure votes, and that is fatal before you even start thinking about how to do it. (Not to mention it kind of looks like the government intervening to give flailing U.S. social media companies a boost.) Second, there isn’t a clear path to a ban. The FCC can’t do it (no jurisdiction). Despite the supposed national security threat, the Pentagon can’t do it (ditto). The feds can’t force Apple and Google to do it (First Amendment). Congress won’t do it (see above). An executive order won’t do it (too broad). No judge will do it (no plausible case). All paths to a ban are impractical for one reason or another. Third, any effective ban would be a messy, drawn-out, contested thing with no guarantee of success. Imagine that somehow the government forced Apple and Google to remove TikTok from their stores and remotely wipe or disable it on phones. No one likes that look — the companies look at once too weak and too strong, letting the feds push them around and then showing off their power to reach out and touch “your” device. An IP-based ban would be easily circumvented but would also set another unpleasant censorship precedent that ironically would make the U.S. look a lot more like China. And even should either or both of these be attempted, they’d be opposed in court by not just ByteDance but companies from around the world that don’t want the same thing to happen to them if they get a hit and the government doesn’t like it. For those reasons and more, an outright ban by law, decision or act of god is a very unlikely thing.
But don’t worry: there are other tools in the toolbox.

If you can’t beat ’em, bother ’em

Image Credits: Bryce Durbin / ZebethMedia

The government may not be able to kick TikTok out of the country, but that doesn’t mean it has to be nice about letting the company stay. In fact, it’s probable that it will do its best to make staying downright unpleasant. The company and service exist in something of a loophole, regulator-wise, like most social media companies. The addition of Chinese ownership is both a complicator and an opportunity. It’s more complicated because the U.S. can’t directly affect ByteDance’s policies. On the other hand, with China designated a “foreign adversary,” its ascendancy over private industry is a legitimate national security concern and policy can be shaped around that. This involves various more independent agencies that are free to set rules within their remits — the FCC can’t, in this case, make a case. But what about the Commerce Department? Homeland Security? The FTC? For that matter, what about states like California? Rule-making agencies have a free hand — and likely tacit Congressional backing — to extend their own fiefdoms to the edges of TikTok, with national security acting as a catch-all reason. If Commerce adds “connected software applications” to supply chain security rules as it has proposed, suddenly the data coming and going through the app is arguably under its protection. (This would all be shown in various definitions and filings at the time of the rulemaking.) What if TikTok’s source code, user data, and other important resources were subject to regular audits to make sure they complied with cross-border data supply chain rules? Well, it’s a pain in the neck for ByteDance because it needs to scour its code base to make sure it isn’t giving too much away. Having to prove that it handles data the way it says it does, to the satisfaction of U.S. authorities given free rein to be picky — not pleasant at all.
And that’s just from a relatively quick rule

Elon Musk tells Europe that Twitter will comply with bloc’s rules on illegal speech • ZebethMedia

Surprise! Elon Musk’s tenure at Twitter is already shaping up to be confusing and contradictory. Whether this dynamic ends up being more self-defeating for him and his new company than harmful for the rest of humanity and human civilization remains tbc. On the one hand, a fresh report today suggests Musk is preparing major staff cuts: 25%, per the Washington Post. (He denied an earlier report by the same newspaper, last week — suggesting he’d told investors he planned to slash costs by liquidating a full 75% of staff — so how radical a haircut he’s planning is still unclear, even as reports of fired staffers are trickling onto Twitter.) But, also today, Reuters reported that Twitter’s new CEO — the self-styled “Chief Twit” — reached out to the European Union last week to assure local lawmakers that the platform will comply with an incoming flagship reboot of the bloc’s rules on digital governance around illegal content. A move that will, self-evidently, demand a beefed up legal, trust and safety function inside Twitter if Musk is to actually deliver compliance with the EU’s Digital Services Act (DSA) — at a time when Musk is sharpening the knives to cut headcount. DSA compliance for a platform like Twitter will likely require a whole team in and of itself. A team that should be starting work ASAP. The comprehensive EU framework for regulating “information society services” and “intermediary services” across the bloc spans 93 articles and 156 recitals — and is due to start applying as soon as next year for larger platforms. (It’s February 17, 2024, for all the rest.) Penalties for violations of the incoming regime can scale up to 6% of global annual turnover — which, on Twitter’s full year revenue for 2021, implies potential fines of up to a few hundred million dollars apiece. So there should be incentive to comply to avoid such costly regulatory risk. 
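To make the 6% penalty cap concrete, here is a back-of-the-envelope calculation. The only input is Twitter's reported full-year 2021 revenue, roughly $5.08 billion, which is an outside figure assumed here purely for illustration:

```python
def max_dsa_fine(annual_turnover_usd: float, rate: float = 0.06) -> float:
    """Maximum DSA penalty: 6% of global annual turnover."""
    return annual_turnover_usd * rate

twitter_2021_revenue = 5.08e9  # reported ~$5.08B full-year 2021 revenue (assumed figure)
print(f"${max_dsa_fine(twitter_2021_revenue) / 1e6:.0f}M")  # → $305M
```

That lands squarely in the "few hundred million dollars apiece" range described above.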
(Er, unless Musk’s strategy for “saving” Twitter involves dismantling the business entirely and running its revenue into the ground.) Yet — in another early step — one of Musk’s first moves as owner of the social media platform was to fire a number of senior execs, including Vijaya Gadde, its former head of Legal, Policy, Trust and Safety. Musk had been critical of her role in a decision by Twitter, back in October 2020, to — initially — limit the distribution of a controversial New York Post article reporting on emails and other data supposedly originating on a laptop belonging to U.S. president Joe Biden’s son, Hunter. The action led to accusations that Twitter was censoring journalism and demonstrating a pro-Democrat bias, even though the company subsequently rowed back on the restrictions and revised its policies.

Targeted harassment

Musk waded into the saga earlier this year with a tweet that branded the Post’s story “truthful” and dubbed Twitter’s actions “incredibly inappropriate.” He also doubled down shortly afterward by retweeting a meme targeting Gadde by name — which led to a vicious pile-on by his followers that prompted former Twitter CEO, Dick Costolo, to tweet at Musk publicly to ask why he was encouraging targeted harassment of the Twitter exec. Put another way, a former Twitter CEO felt forced to call out the (now current) CEO of Twitter for encouraging targeted harassment of a senior staffer — who also happens to be a woman and POC. To say that this bodes badly for Twitter’s compliance with EU rules that are intended to ensure platforms act responsibly toward users — and drive accountability around how they are operated — is an understatement.

what’s going on? You’re making an executive at the company you just bought the target of harassment and threats.
— dick costolo (@dickc) April 27, 2022

While the EU’s DSA is most focused on governance rules for handling illegal content/goods and so on — that is, rather than tackling the grayer area of online disinformation, election interference, “legal but harmful” stuff (abuse, bullying, etc.), and such, areas where the EU has some other mechanisms/approaches in the works — larger platforms can be designated as a specific category (called VLOPs, or very large online platforms) and will then have a set of additional obligations they must comply with. These extra requirements for VLOPs include carrying out mandatory risk assessments in areas such as whether the application of their terms and conditions and content moderation policies have any negative effects on “civic discourse, electoral processes and public security,” for example; and a follow-on requirement to mitigate any risks — by putting in place “reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified” (including where risks are impacting users’ fundamental rights, so stuff like respect for human dignity and equality; nondiscrimination; respect for diversity, etc., among other core rights listed in the EU charter). The implication is a VLOP would face major challenges under the DSA if it was to ignore risks to fundamental rights flowing from, say, a decision to apply a “free speech absolutist” approach to content moderation, as Musk has, at times, claimed is his preference (but — ever mercurial — he’s also said that, as Twitter CEO, he would comply with all legal requirements, everywhere in the world they apply). Whether Twitter will be classed as a VLOP is one (now) very burning question for EU citizens and lawmakers. The Commission hasn’t specified either way — but internal market commissioner, Thierry Breton, has (at least) heavily implied Musk’s Twitter will face meaningful checks and balances under the DSA.
Which suggests it will be designated and regulated as a VLOP. Hence Breton’s quick schooling of Musk last week — when, in response to Musk’s “free speech” base-inflaming “the bird is freed” tweet, the commissioner pointedly rejoined: “In Europe the bird will fly by our [EU] rules.” Musk did not respond publicly to Breton at the time. But, according to a Reuters report today, he reached out to the Commission to “assure” it the platform will

India to create committees with veto power over social media content moderation • ZebethMedia

India will set up grievance committees with veto power to reverse the content moderation decisions of social media firms, it said today, moving ahead with a proposal that has rattled Meta, Google and Twitter. The panels, called Grievance Appellate Committees, will be created within three months, it said. In an amendment to the nation’s new IT law that went into effect last year, the Indian government said any individual aggrieved by a social media firm’s appointed grievance officer may appeal to the Grievance Appellate Committee, which will comprise a chairperson and two whole-time members appointed by the government. The Grievance Appellate Committee will have the power to reverse the social media firm’s decision, the government said. “Every order passed by the Grievance Appellate Committee shall be complied with by the intermediary concerned and a report to that effect shall be uploaded on its website,” New Delhi said in a statement. Shortly after India proposed creating such panels, the US-India Business Council (USIBC), part of the U.S. Chamber of Commerce, and the U.S.-India Strategic Partnership Forum (USISPF) both raised concerns about the independence of such committees if the government controlled their formation. Both groups represent tech giants including Google, Meta and Twitter. (More to follow)

Google filing says EU’s antitrust division is investigating Play Store practices • ZebethMedia

A Google regulatory filing appears to have confirmed rumors in recent months that the European Union’s competition division is looking into how it operates its smartphone app store, the Play Store. However, ZebethMedia understands that no formal EU investigation into the Play Store has been opened at this stage. The SEC Form 10-Q, filed by Google’s parent Alphabet (and spotted earlier by Reuters), does make mention of “formal” investigations being opened into Google Play’s “business practices” back in May 2022 — by both the European Commission and the U.K.’s Competition and Markets Authority (CMA). Thing is, the Commission’s procedure on opening a formal competition investigation is to make a public announcement — so the lack of that standard piece of regulatory disclosure suggests any EU investigation is at a more preliminary stage than Google’s citation might imply. The U.K. antitrust regulator’s probe of Google Play is undoubtedly a formal investigation — having been publicly communicated by the CMA back in June — when it said it would probe Google’s rules governing apps’ access to listing on its Play Store, looking at conditions it sets for how users can make in-app payments for certain digital products. Meanwhile, back in August, Politico reported that the Commission had sent questionnaires probing Play Store billing terms and developer fees, citing two people close to the matter — potentially suggesting an investigation was underway. The EU’s executive declined to comment on that report, however. A Commission spokeswoman also declined to comment when we asked about the “formal investigation” mentioned in Google’s filing (at the time of writing, Google had also not responded to requests about it). But we understand there is no “formal” EU probe into Play as yet — at least not as the EU itself understands the word.
This may be because the EU’s competition division is still evaluating responses to enquiries made so far — and/or assessing whether there are grounds for concern. Alternatively, it might have decided it does not have concerns about how Google operates the Play Store. Although developer complaints about app store commissions levied by Google (and Apple) — via the 30% cut that’s typically applied to in-app purchases (a 15% lower rate can initially apply) — haven’t diminished. If anything, complaints have been getting louder — including as a result of moves by the tech giants to expand the types of sales that incur their tax. So lack of competition concern here seems unlikely. Last year, the Commission also charged Apple with an antitrust breach related to the mandatory use of its in-app purchase mechanism imposed on music streaming app developers (specifically) and restrictions on developers preventing them from informing users of alternative, cheaper payment options. So app store T&Cs are certainly on the EU’s radar. More than that: The EU has recently passed legislation that aims, among various proactive provisions, to regulate the fairness of app store conditions. So the existence of that incoming ex ante competition regime seems the most likely explanation for why there’s no formal EU investigation of Google Play today. Where Google is concerned, the Commission has already chalked up several major antitrust enforcements against its business over the last five+ years — with decisions against Google Shopping, Android and AdSense; as well as an ongoing investigation into Google’s adtech stack (plus another looking at an ad arrangement between Google and Facebook).  
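As a rough illustration of how the commission tiers mentioned above play out: under Google's (and Apple's) small-business programs, the reduced 15% rate generally applies to a developer's first $1 million in annual revenue, with the standard 30% cut above that. The sketch below assumes that tiering and is not an exact model of either store's billing:

```python
def store_commission(annual_revenue_usd: float,
                     reduced_rate: float = 0.15,
                     standard_rate: float = 0.30,
                     threshold: float = 1_000_000) -> float:
    """Approximate app store commission: reduced rate up to the
    threshold, standard rate on everything above it (illustrative)."""
    below = min(annual_revenue_usd, threshold)
    above = max(annual_revenue_usd - threshold, 0)
    return below * reduced_rate + above * standard_rate

print(store_commission(500_000))    # 75000.0  (all at the 15% rate)
print(store_commission(3_000_000))  # 750000.0 (150k at 15% + 600k at 30%)
```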
Another consideration here is that EU lawmakers have had a very busy year hammering out consensus on a number of major pieces of digital regulation — including the aforementioned ex ante competition reform (aka, the Digital Markets Act; DMA) which will cast the Commission in a centralized enforcement role overseeing so-called Internet “gatekeepers.” That incoming regime is requiring the Commission to rapidly spin up new divisions to oversee DMA compliance and enforcement — so the EU may be feeling a little stretched on the resources front. But — more importantly — it may also be trying to keep its powder dry. Essentially, the Commission may want to see if the DMA itself can do the job of sorting out app developer gripes — since the regulation has a number of provisions geared toward app stores specifically, including a prohibition on gatekeepers imposing “general conditions, including pricing conditions, that would be unfair or lead to unjustified differentiation [on business users],” for example. The regulation is due to start applying from Spring 2023 so a fresh competition investigation into Google’s app store at this stage could risk duplicating or complicating the enforcement of conditions already baked into EU law. (Although the process of designating gatekeepers and core platform services will need to come before any enforcement — so the real DMA action may not happen before 2024). For its part, Google denies any antitrust wrongdoing anywhere in the world its business practices are being investigated. In the section of its filing rounding up antitrust investigations targeting its business, it writes: “We believe these complaints are without merit and will defend ourselves vigorously.” Its filing also reveals that it intends to seek to appeal to the EU’s highest court after its attempt to overturn the EU’s Android decision was rejected last month. (The CJEU will only hear appeals on a matter of law so it remains to be seen what Google will try to argue.) 
Privacy Sandbox

Also today, the U.K.’s CMA has released its second report on ongoing monitoring of commitments made by Google as it develops a new adtech stack to replace tracking cookies (aka Privacy Sandbox). The regulator said it had found Google to be complying with commitments given so far — and listed its current priorities as:

- Ensuring Google designs a robust testing framework for its proposed new tools and APIs;
- continuing to engage with market participants to understand concerns raised by them, challenging Google over its proposed approaches and exploring alternative designs for the Privacy Sandbox tools which might address these issues; and
- embedding a recently appointed independent technical expert (a company called S-RM) into the

Countdown to compliance as EU’s Digital Services Act published • ZebethMedia

The European Union’s flagship reboot of long-standing ecommerce rules — aka the Digital Services Act (DSA) — has now been published in the bloc’s Official Journal. You can find the full (and final) text of the DSA here.

The Digital Services Act (#DSA) is published today in the Official Journal! 📖 This major text will make the Internet a safer space for all European citizens. A look back at its adoption in record time — a sprint in 7⃣ steps 🧵 pic.twitter.com/8dMogdUYvV

— Thierry Breton (@ThierryBreton) October 27, 2022

Tech firms’ in-house legal teams will be poring over the detail in the coming months as they figure out how to adapt their policies and procedures to ensure compliance and dodge penalties that can scale up to 6% of global turnover for the more egregious breaches. The rules are intended to drive accountability online by streamlining how platforms and marketplaces must tackle illegal content, goods and services, as well as bringing in specific provisions for larger platforms that are aimed at increasing transparency around powerful algorithms. As per EU process, the DSA regulation will enter into force in 20 days’ time (so in mid-November). That’s not the real start date, though, as there’s still a delay before provisions become applicable to allow for a period of adaptation and alignment for businesses. The bulk of the DSA provisions will apply from February 17, 2024, per the Commission. But a subset of obligations — for so-called VLOPs (aka, very large online platforms) — will start to apply next year, as the EU has stipulated that application for VLOPs and very large online search engines (aka VLOSEs) will begin four months after they are designated as entering the category. So a swathe of larger tech firms and Big Tech giants will likely have compliance requirements bearing down on them from early next year. For more on the rules digital firms will have to abide by in the EU under the new DSA regime, check out our earlier coverage.
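The entry-into-force arithmetic can be checked with a quick date calculation, taking the October 27, 2022 Official Journal publication date from the Breton tweet above:

```python
from datetime import date, timedelta

publication = date(2022, 10, 27)             # DSA published in the Official Journal
entry_into_force = publication + timedelta(days=20)
print(entry_into_force)                      # 2022-11-16, i.e. mid-November
# VLOP/VLOSE obligations then begin four months after designation,
# a date that depends on when the Commission designates each platform.
```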
A sister regulation, the Digital Markets Act — which exclusively targets Big Tech for ex ante regulation — will also start to apply from early next year. So it’s all change and soon!

UK government denies fresh delay to Online Safety Bill will derail it • ZebethMedia

The UK government has denied that a fresh parliamentary delay to the Online Safety Bill will derail the legislation’s passage. The legislation is a core plank of the government’s 2019 manifesto promise to make the UK the safest place in the world to go online, introducing a regime ministers say will drive a new era of accountability over the content that online platforms make available. PoliticsHome spotted the change to the House of Commons schedule last night, reporting that the bill had been dropped from the Commons business for the second time in four months — despite a recent pledge by secretary of state for digital, Michelle Donelan, that it would return in the autumn. The earlier ‘pause’ in the bill’s progress followed the ousting of then-prime minister Boris Johnson as Conservative Party leader over the summer, which was followed by a lengthy leadership contest. Prime minister Liz Truss, who prevailed in the contest to replace Johnson as PM (but is now also an ex-PM), quickly put the brakes on the draft legislation over concerns about its impact on freedom of speech — the area that’s attracted the most controversy for the government. Then, last month, Donelan confirmed provisions in the bill dealing with ‘legal but harmful’ speech would be changed. A source in the Department of Digital, Culture, Media and Sport (DCMS) told ZebethMedia that the latest delay to the bill’s parliamentary timetable is to allow time for MPs to read these new amendments — which they also confirmed are yet to be laid. But they suggested the delay will not affect the passage of the bill, saying it will progress within the next few weeks. They added that the legislation remains a top priority for the government.
A DCMS spokesperson also provided this statement in response to questions about the fresh delay and incoming amendments: “Protecting children and stamping out illegal activity online is a top priority for the government and we will bring the Online Safety Bill back to Parliament as soon as possible.” The government is now being led by another new prime minister — Rishi Sunak — who took over from Truss after she resigned earlier this month, following the markets’ disastrous reception of her economic reforms. The change of PM may not mean major differences in policy approach in the arena of online regulation, as Sunak has expressed similar concerns about the Online Safety Bill’s impact on free speech — also seemingly centered on clauses pertaining to restrictions on the ‘legal but harmful’ speech of adults. In August, The Telegraph reported a spokesman for Sunak (who was then just a leadership candidate) saying: “Rishi has spoken passionately as a dad about his desire to protect children online from content no parent would want their children to see – from violence, self harm and suicide to pornography. “As Prime Minister he would urgently legislate to protect children. His concern with the bill as drafted is that it censors free speech amongst adults which he does not support. Rishi believes the Government has a duty to protect children and crack down on illegal behaviour, but should not infringe on legal and free speech.” However, it remains to be seen how exactly the bill will be amended under Sunak’s watch. Delays as amendments are considered and introduced could still threaten the bill’s passage if it ends up running out of parliamentary time to go through all the required stages of scrutiny. Parliamentary sessions typically run from spring to spring, and there are only around two years left before Sunak will have to call a general election. So the clock is ticking.
The Online Safety Bill has already been years in the making, swelling in scope and ambition via a grab-bag of add-ons and late-stage additions, from bringing scam ads into the regulation to measures aimed at tackling anonymous trolling, to name two of many.

Critics such as the digital rights group the ORG argue the bill is hopelessly cluttered, fuzzily drafted and legally incoherent, warning it will usher in a chilling regime of speech policing by private companies and the tone-deaf automated algorithms they will be forced to deploy to shrink their legal risk. There are also concerns about how the legislation might affect end-to-end encryption if secure messaging platforms are forced to monitor content, with the potential for it to lead to the adoption of controversial technologies like client-side scanning. Meanwhile, the administrative burden and costs of compliance will undoubtedly saddle scores of digital businesses with headaches.

The bill has plenty of supporters too, though, including the opposition Labour party, which has offered to work with the government to get it passed. Children's safety campaigners and charities have also been loudly urging lawmakers to pass legislation to protect kids online. The recent inquest into the suicide of British schoolgirl Molly Russell, who was found to have binge-consumed (and been algorithmically fed) content about depression and self-harm on social media platforms including Instagram and Pinterest before she killed herself, has added further impetus to safety campaigners' cause. The coroner concluded that the "negative effects of online content" were a factor in Russell's death, and his report urged the government to regulate the sector.

Amazon resumes donations to some 2020 election deniers, just in time for midterms • ZebethMedia

Amazon has quietly mothballed its pledge to stop supporting politicians who refused to certify the 2020 election. The company, like many, said it would suspend donations to those who participated in "the unacceptable attempt to undermine a legitimate democratic process." Twenty-one months later, however, it has changed its tune, just in time for the midterms.

Amazon donated a total of $17,500 last month to nine Representatives who fell under its previous ban, as reported by Judd Legum, who has held the feet of many such companies with adjustable scruples to the fire. A list of those who said they would do one thing, then did another, can be found here; CNN has a more comprehensive, but less up-to-date, list of companies and their claims.

Among the tech companies (according to Legum's list) that donated to election certification objectors, or to PACs supporting them, after saying they wouldn't are AT&T (~$600,000), Intel ($98,000), Oracle ($55,000) and Verizon ($183,000). Amazon's contribution may seem rather small compared to theirs, but of course it's probably just getting started.

The funny thing about this is the company's explanation, from a statement: "… [The suspension] was not intended to be permanent. It's been more than 21 months since that suspension and, like a number of companies, we've resumed giving to some members."

As any child could point out, it isn't much of a punishment to withhold funds from politicians "indefinitely" only to provide them just in time for the midterms. That's where the money would have gone 21 months ago anyway. Certainly most of the democracy underminers Amazon previously deplored still receive no money from the company that we know of, and although we must not let the perfect be the enemy of the good, we can't let this about-face go totally unquestioned. After all, the ones the company did decide to boost haven't vocally recanted their positions.
Amazon did not explain whether or how it reached out to the 147 Republican lawmakers it temporarily banned. Were the (apparently confidential) answers of these nine Reps the only ones that showed sufficient remorse? One would think the reversal of such a strongly argued position would merit some kind of real explanation.

I asked Amazon why these members in particular received clemency, but the company did not provide a relevant response, only rephrasing part of its statement to say that it gives to politicians who "agree" with it. I have invited more detailed comment.

One can imagine reevaluating these suspensions after a midterm election; after all, that's the perfect way for any politician to publicly show their support for the democratic process. If, after that, Amazon and others said they were resuming or reevaluating donations, it might invite some grumbling, but ultimately it would be a rational approach.

How can early-stage startups improve their chances of getting H-1Bs? • ZebethMedia

Sophie Alcorn Contributor Sophie Alcorn is the founder of Alcorn Immigration Law in Silicon Valley and 2019 Global Law Experts Awards' "Law Firm of the Year in California for Entrepreneur Immigration Services." She connects people with the businesses and opportunities that expand their lives. More posts by this contributor: Dear Sophie: How can I launch a startup while on OPT? Dear Sophie: How can I protect my H-1B and green card if I am laid off?

Here's another edition of "Dear Sophie," the advice column that answers immigration-related questions about working at technology companies. "Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams," says Sophie Alcorn, a Silicon Valley immigration attorney. "Whether you're in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column." ZebethMedia+ members receive access to weekly "Dear Sophie" columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.

Dear Sophie,

We have a stealth early-stage biotech startup. Do we qualify to petition a co-founder on STEM OPT for an H-1B in the lottery? Is it worth it, or are there better alternatives?

— Budding Biotech

Dear Budding,

It's absolutely possible for an early-stage biotech (or tech) startup in stealth mode to successfully petition a founder or founding engineer for an H-1B in the lottery, or even an H-1B transfer. Here's how, starting with some background on how the H-1B lottery works for startups. In recent years, U.S. Citizenship and Immigration Services (USCIS) has leveled the playing field for startups entering an employee or prospective employee in the H-1B lottery by creating an electronic lottery registration system.
Because demand for H-1B visas far outstrips the annual supply of 85,000 (20,000 of which are reserved for individuals with a master's or higher degree), USCIS uses a random lottery process to select the companies that are eligible to petition for specific beneficiaries.

Before 2020, companies had to submit to USCIS a completed, paper-based H-1B petition package for every employee and prospective employee they wanted to enter in the annual lottery. USCIS adjudicated the H-1B applications that were picked in the lottery and literally mailed the unselected paper applications back to the lawyers. The time, energy and legal costs of preparing an H-1B application made participating in the lottery under this system quite onerous, particularly for startups, because you had to commit to paying for a full H-1B before you knew whether your candidate had a chance.

That all changed in 2020, when USCIS instituted an electronic registration process for the lottery. Now, sponsoring companies only need to pay a $10 fee to register an employee or prospective employee in the lottery, which significantly reduced the barrier to entry for all companies, including startups. That means you can enter as many candidates as you would like to sponsor in good faith into the lottery. If people quit after they are selected and before you file, you don't have to follow through with a full H-1B. And if your budget doesn't currently allow you to sponsor your entire international remote team but you still want to give everybody a chance, you can do that.

Can early-stage biotech startups get H-1Bs?

Yes, most definitely! The biggest issues facing early-stage startups when getting an H-1B visa for their founder or co-founders are:
