Zebeth Media Solutions

Robotics & AI

AI that sees with sound, learns to walk, and predicts seismic physics • ZebethMedia

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This month, engineers at Meta detailed two recent innovations from the depths of the company’s research labs: an AI system that compresses audio files and an algorithm that can accelerate protein-folding AI performance by 60x. Elsewhere, scientists at MIT revealed that they’re using spatial acoustic information to help machines better envision their environments, simulating how a listener would hear a sound from any point in a room.

Meta’s compression work doesn’t exactly reach unexplored territory. Last year, Google announced Lyra, a neural audio codec trained to compress low-bitrate speech. But Meta claims that its system is the first to work for CD-quality, stereo audio, making it useful for commercial applications like voice calls.

An architectural drawing of Meta’s AI audio compression model.

Using AI, Meta’s compression system, called Encodec, can compress and decompress audio in real time on a single CPU core at rates of around 1.5 kbps to 12 kbps. Compared to MP3, Encodec can achieve a roughly 10x compression rate at 64 kbps without a perceptible loss in quality. The researchers behind Encodec say that human evaluators preferred the quality of audio processed by Encodec versus Lyra-processed audio, suggesting that Encodec could eventually be used to deliver better-quality audio in situations where bandwidth is constrained or at a premium.

As for Meta’s protein folding work, it has less immediate commercial potential. But it could lay the groundwork for important scientific research in the field of biology.

Protein structures predicted by Meta’s system.
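The bitrates quoted above imply some easy back-of-the-envelope arithmetic. As a rough sketch (my numbers, not Meta's): raw CD-quality stereo PCM runs at about 1,411 kbps, so Encodec's 1.5 to 12 kbps operating range corresponds to compression factors in the hundreds, and a 10x saving over 64 kbps MP3 works out to roughly 6.4 kbps:

```python
# Back-of-the-envelope numbers for the bitrates mentioned above.
# Raw CD-quality stereo PCM: 44.1 kHz sample rate * 16 bits * 2 channels.
CD_KBPS = 44_100 * 16 * 2 / 1000  # 1411.2 kbps

def compression_ratio(codec_kbps: float) -> float:
    """How many times smaller than raw CD audio a given bitrate is."""
    return CD_KBPS / codec_kbps

# Encodec's quoted 1.5-12 kbps range versus raw CD audio:
ratio_at_12 = compression_ratio(12)    # roughly 118x
ratio_at_1_5 = compression_ratio(1.5)  # roughly 941x

# A 10x saving relative to 64 kbps MP3 corresponds to about 6.4 kbps:
mp3_equivalent_kbps = 64 / 10
```

None of these figures are Meta's own beyond the quoted bitrates; the point is simply how aggressive a 1.5 to 12 kbps range is against uncompressed audio.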
Meta says its AI system, ESMFold, predicted the structures of around 600 million proteins from bacteria, viruses and other microbes that haven’t yet been characterized. That’s nearly triple the 220 million structures that Alphabet-backed DeepMind managed to predict earlier this year, which covered nearly every protein from known organisms in DNA databases.

Meta’s system isn’t as accurate as DeepMind’s. Of the ~600 million proteins it generated, only a third were “high quality.” But it’s 60 times faster at predicting structures, enabling it to scale structure prediction to much larger databases of proteins.

Not to give Meta outsize attention, the company’s AI division also this month detailed a system designed to reason mathematically. Researchers at the company say that their “neural problem solver” learned from a data set of successful mathematical proofs to generalize to new, different kinds of problems.

Meta isn’t the first to build such a system. OpenAI announced its own in February, a theorem prover built for the Lean proof assistant. Separately, DeepMind has experimented with systems that can solve challenging mathematical problems in the studies of symmetries and knots. But Meta claims that its neural problem solver was able to solve five times more International Math Olympiad problems than any previous AI system and bested other systems on widely used math benchmarks. Meta notes that math-solving AI could benefit the fields of software verification, cryptography and even aerospace.

Turning our attention to MIT’s work, research scientists there developed a machine learning model that can capture how sounds in a room will propagate through the space. By modeling the acoustics, the system can learn a room’s geometry from sound recordings, which can then be used to build visual renderings of a room. The researchers say the tech could be applied to virtual and augmented reality software or robots that have to navigate complex environments.
In the future, they plan to enhance the system so that it can generalize to new and larger scenes, such as entire buildings or even whole towns and cities.

At Berkeley’s robotics department, two separate teams are accelerating the rate at which a quadrupedal robot can learn to walk and do other tricks. One team looked to combine the best-of-breed work out of numerous other advances in reinforcement learning to allow a robot to go from blank slate to robust walking on uncertain terrain in just 20 minutes of real time.

“Perhaps surprisingly, we find that with several careful design decisions in terms of the task setup and algorithm implementation, it is possible for a quadrupedal robot to learn to walk from scratch with deep RL in under 20 minutes, across a range of different environments and surface types. Crucially, this does not require novel algorithmic components or any other unexpected innovation,” write the researchers. Instead, they select and combine some state-of-the-art approaches and get amazing results. You can read the paper here.

Robot dog demo from EECS professor Pieter Abbeel’s lab in Berkeley, Calif. in 2022. (Photo courtesy Philipp Wu/Berkeley Engineering)

Another locomotion learning project, from (ZebethMedia’s pal) Pieter Abbeel’s lab, was described as “training an imagination.” They set up the robot with the ability to attempt predictions of how its actions will work out, and though it starts out pretty helpless, it quickly gains more knowledge about the world and how it works. This leads to a better prediction process, which leads to better knowledge, and so on in a feedback loop until it’s walking in under an hour. It learns just as quickly to recover from being pushed or otherwise “perturbed,” as the lingo has it. Their work is documented here.
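The learn-predict-improve loop described for the "training an imagination" project can be sketched in a few lines. This is a toy illustration of the general model-based RL recipe, with made-up function names and a one-dimensional stand-in for the robot's dynamics, not the lab's actual code:

```python
# Toy model-based RL loop: act in the world, fit a world model to what
# happened, then plan using imagined rollouts from that model.
# Everything here is illustrative; the real system learns far richer dynamics.

def collect_experience(env_step, actions, start_state=0.0):
    """Act in the 'real' environment, recording (state, action, next_state)."""
    trajectory, state = [], start_state
    for action in actions:
        next_state = env_step(state, action)
        trajectory.append((state, action, next_state))
        state = next_state
    return trajectory

def fit_world_model(trajectory):
    """Fit next_state ~ state + k * action by one-parameter least squares."""
    num = sum(a * (s2 - s) for s, a, s2 in trajectory)
    den = sum(a * a for _, a, _ in trajectory)
    return num / den

def plan_in_imagination(k, state, goal, candidates):
    """Pick the action whose *imagined* outcome lands closest to the goal."""
    return min(candidates, key=lambda a: abs((state + k * a) - goal))

# True dynamics (unknown to the 'robot'): each action moves the state by 2x.
def real_step(state, action):
    return state + 2.0 * action

trajectory = collect_experience(real_step, actions=[0.3, -0.7, 1.0, 0.5])
k = fit_world_model(trajectory)  # recovers ~2.0 from experience alone
best = plan_in_imagination(k, state=0.0, goal=1.0,
                           candidates=[-1.0, -0.5, 0.0, 0.5, 1.0])
```

The feedback loop in the article is exactly this cycle repeated: better experience yields a better model, which yields better imagined plans, which yield better experience.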
Work with a potentially more immediate application came earlier this month out of Los Alamos National Laboratory, where researchers developed a machine learning technique to predict the friction that occurs during earthquakes — providing a way to forecast them. Using a language model, the team says it was able to analyze the statistical features of seismic signals emitted from a fault in a laboratory earthquake machine to project the timing of the next quake. “The model is not constrained with physics, but it predicts the physics, the actual behavior of the system,” said Chris Johnson, one of the research leads on the project.
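The general recipe in this line of work, slicing a continuous signal into windows, summarizing each window statistically, and regressing those statistics against time-to-failure, can be sketched briefly. This is an illustration of the idea with synthetic numbers, not Los Alamos's actual model (which, per the article, is built around a language model):

```python
import numpy as np

def window_variances(signal, window):
    """Variance of each non-overlapping window of a 1-D seismic signal."""
    n = len(signal) // window
    chunks = np.asarray(signal[: n * window]).reshape(n, window)
    return chunks.var(axis=1)

def fit_time_to_failure(features, times_to_failure):
    """Least-squares line mapping a per-window statistic to time-to-failure."""
    slope, intercept = np.polyfit(features, times_to_failure, 1)
    return slope, intercept

# Synthetic example: the signal gets noisier as failure approaches,
# so window variance rises while time-to-failure falls.
signal = np.array([1, -1, 1, -1, 2, -2, 2, -2, 3, -3, 3, -3], dtype=float)
feats = window_variances(signal, window=4)  # variances of the three windows
slope, intercept = fit_time_to_failure(feats, np.array([9.0, 6.0, 1.0]))
```

The real result is striking precisely because such statistical summaries of the signal carry predictive information about the fault's state; the model names and the linear fit here are simplifications for illustration.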

Apple is reportedly working to simplify its ‘Hey Siri’ trigger phrase to just ‘Siri’ • ZebethMedia

Apple is reportedly planning to simplify its “Hey Siri” trigger phrase to just “Siri,” according to Bloomberg’s Mark Gurman. Right now, the quickest way to access Siri is to say “Hey Siri” and then add a command, but Apple is looking to make that process simpler by letting users drop the “Hey” in the trigger phrase. The company reportedly plans to roll out the change either next year or in 2024.

Gurman reports that Apple has spent the past few months training the digital assistant to respond to “Siri” instead of “Hey Siri.” Although this may seem like a small change, it requires a significant amount of AI training and engineering work, as the digital assistant will have to understand the single wake word in multiple accents and dialects, Gurman notes. Apple’s current two-word trigger phrase increases the likelihood that Siri correctly picks up on it. Apple has reportedly been testing the simplified trigger phrase with employees.

Switching to a single trigger word will help Apple keep up with Amazon’s Alexa, which is already capable of responding to just “Alexa” instead of “Hey Alexa.” Even Microsoft switched Cortana’s wake phrase from “Hey Cortana” to just “Cortana” before shutting it down. The move would also put Apple a step ahead of Google, which still requires users to say either “Hey Google” or “Ok Google.”

Gurman notes that Apple’s decision to simplify its trigger phrase will speed up back-to-back requests by making it easier to string multiple requests together. In addition, Gurman reports that Apple is working to integrate Siri deeper into third-party apps and services, while improving the digital assistant’s ability to understand users and process their commands.

The report comes as Apple has been working to enhance Siri over the past few years. Earlier this year, Apple developed a new Siri voice that doesn’t sound obviously male or female.
The decision to introduce a gender-neutral voice is one that saw the tech giant take a step away from the criticism that digital assistants have reinforced unfair gender stereotypes. Apple first addressed concerns with Siri last year when it issued an update that added more diverse voices and also made it so Siri’s voice would no longer default to being female. The new and fifth Siri voice made it possible to not have to think about the gender of your AI voice assistant at all.

5 great reasons to attend iMerit’s ML DataOps Summit • ZebethMedia

The iMerit ML DataOps Summit kicks off tomorrow, November 8. Don’t miss your chance to gather online with more than 2,000 data scientists, engineers and top AI & ML speakers to learn about the latest in dataops solutions, connect and engage with attendees and expand your network. If you haven’t registered yet, here are just five great reasons why you don’t want to miss the iMerit ML DataOps Summit.

1. Keynotes from leaders like Mano Paluri, director, FAIR, Meta Platforms, and Seth Dobrin, former chief AI officer, IBM.

Pushing the Frontiers in AI For Billions Around the World: Mano Paluri shares a simple, effective framework to push the frontiers in AI research, while advancing technology that impacts the product end game. You’ll gain insights into the structure for developing AI applications, including scaling ML models, adopting a multi-modal understanding, pairing tools and human intelligence to accelerate AI and more.

The Stakes Are High: Best Practices For Deploying Responsible AI: AI is sparking significant change across industries worldwide, with projections to add more than $15 trillion to the global economy by 2030. By 2022, more than 60% of enterprises will have implemented machine learning, big data analytics and related AI tools into their operations. Enterprises navigating the complexities of artificial intelligence — from data operations to full-scale commercialization — must do it with a focused, practical lens. Join this session to hear from IBM’s former chief AI officer and gain insight into the best practices for deploying artificial intelligence responsibly.

2. Panel discussions including “Convergence of ML Ops and Data Pipelines” and “Navigating Data Tooling and Expertise To Achieve High-Quality AI Training Data.”

3. A women-led panel discussion, “It’s a Duo: Human-in-the-Loop and High-Quality Data Are Catalysts in ML and AI.”

4. World-class networking with the AI/ML community’s best and brightest.

5. It’s absolutely free.
These are just some of AI/ML’s game-changing leaders — and topics — featured in our power-packed agenda. Come learn from and network with the pros — tomorrow at iMerit’s ML DataOps Summit. Register for free today!

Ouster and Velodyne agree to merger, signaling consolidation in lidar industry • ZebethMedia

Ouster and Velodyne, two lidar companies, have agreed to merge in an all-stock transaction, the companies said Monday. Existing Ouster and Velodyne shareholders will each hold a 50% stake in the new company, according to the agreement that was signed on November 4.

The merger comes as many in the industry, including Kyle Vogt, CEO of autonomous vehicle technology company Cruise, have been expecting another round of consolidation in the lidar space. That’s in part because there are too many lidar companies for the number of OEMs implementing the sensor for autonomous driving applications. It’s also because many of these companies, including Ouster and Velodyne, went public via special purpose acquisition company (SPAC) mergers at potentially inflated valuations that were based on projected revenue, not actual revenue. Earlier this year, Velodyne acquired AI and lidar company Bluecity.ai, and last year, Ouster acquired lidar startup Sense Photonics. AV company Aurora bought out Blackmore in 2019, and Cruise acquired Strobe in 2017.

Both Velodyne and Ouster have been struggling with plummeting stock prices over the past year, and neither has been able to turn a profit yet. The companies closed out the second quarter with net losses of $44.3 million and $28 million, respectively. Loss-making companies can often maintain investor faith if they at least generate regular increases in revenue, which Ouster has done year over year. Velodyne’s revenue, by contrast, doesn’t seem to have grown at all in the past year; rather, it fell 41%.

By merging, the companies hope to combine forces and create scale “to drive profitable and sustainable revenue growth,” according to Velodyne CEO Ted Tewksbury. The companies say the merger will allow them to realize annualized cost savings of at least $75 million within nine months after the transaction closes, and that they expect to have $335 million in combined cash as of the third quarter.
The merger may also be a lifeline for Velodyne, a company that has been struggling over the past year with a series of internal dramas, including the resignation of its CEO Anand Gopalan last July. (Tewksbury took over for him in November.) Velodyne never said why Gopalan resigned, but his departure cost Velodyne $8 million in equity compensation, according to 2021’s second quarter earnings report. Prior to that, Velodyne’s founder David Hall was removed as chairman of the board and his wife, Marta Thoma Hall, lost her role as chief marketing officer following an investigation by the board into the two for “inappropriate behavior.” The legal fees for the dramas cost Velodyne $3.7 million in the first half of 2021. In May last year, Hall wrote a letter blaming the SPAC with which Velodyne merged, Graf Industrial Corp., for the company’s poor financial performance.

A new path ahead

The combined company’s board of directors will consist of eight members, four from Ouster’s board and four from Velodyne’s. Angus Pacala, co-founder and CEO of Ouster, will be CEO of the new company. Tewksbury will act as executive chairman of the board.

In a statement, Ouster said the merger would increase operational efficiencies, most likely by getting rid of redundancies. That usually means layoffs will follow, but the companies did not respond in time to ZebethMedia’s request for comment. With a combined commercial footprint and distribution network, the new company expects to deliver higher volumes of product at reduced costs, Ouster said.

The merger, which will see each Velodyne share exchanged for 0.8204 shares of Ouster at closing, is expected to be completed in the first half of 2023, pending shareholder approval at both companies. Ouster and Velodyne will continue to operate their businesses independently until the transaction is complete.
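For readers doing the math on the exchange ratio, the conversion is straightforward (a quick illustration using the figure reported above; the position size is hypothetical):

```python
# Each Velodyne share converts to 0.8204 shares of Ouster at closing.
EXCHANGE_RATIO = 0.8204

def ouster_shares_for(velodyne_shares: float) -> float:
    """Ouster shares a Velodyne holder would receive at the stated ratio."""
    return velodyne_shares * EXCHANGE_RATIO

shares = ouster_shares_for(1_000)  # a hypothetical 1,000-share position
```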

Stability AI backs effort to bring machine learning to biomed • ZebethMedia

Stability AI, the venture-backed startup behind the text-to-image AI system Stable Diffusion, is funding a wide-ranging effort to apply AI to the frontiers of biotech. Called OpenBioML, the endeavor’s first projects will focus on machine learning-based approaches to DNA sequencing, protein folding and computational biochemistry.

The company’s founders describe OpenBioML as an “open research laboratory” that aims to explore the intersection of AI and biology in a setting where students, professionals and researchers can participate and collaborate, according to Stability AI CEO Emad Mostaque.

“OpenBioML is one of the independent research communities that Stability supports,” Mostaque told ZebethMedia in an email interview. “Stability looks to develop and democratize AI, and through OpenBioML, we see an opportunity to advance the state of the art in sciences, health and medicine.”

Given the controversy surrounding Stable Diffusion — Stability AI’s AI system that generates art from text descriptions, similar to OpenAI’s DALL-E 2 — one might be understandably wary of Stability AI’s first venture into health care. The startup has taken a laissez-faire approach to governance, allowing developers to use the system however they wish, including for celebrity deepfakes and pornography.

Stability AI’s ethically questionable decisions to date aside, machine learning in medicine is a minefield. While the tech has been successfully applied to diagnose conditions like skin and eye diseases, among others, research has shown that algorithms can develop biases leading to worse care for some patients. An April 2021 study, for example, found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients. Wisely, OpenBioML is starting with safer territory.
Its first projects are:

- BioLM, which seeks to apply natural language processing (NLP) techniques to the fields of computational biology and chemistry
- DNA-Diffusion, which aims to develop AI that can generate DNA sequences from text prompts
- LibreFold, which looks to increase access to AI protein structure prediction systems similar to DeepMind’s AlphaFold 2

Each project is led by independent researchers, but Stability AI is providing support in the form of access to its AWS-hosted cluster of over 5,000 Nvidia A100 GPUs to train the AI systems. According to Niccolò Zanichelli, a computer science undergraduate at the University of Parma and one of the lead researchers at OpenBioML, this will be enough processing power and storage to eventually train up to ten different AlphaFold 2-like systems in parallel.

“A lot of computational biology research already leads to open-source releases. However, much of it happens at the level of a single lab and is therefore usually constrained by insufficient computational resources,” Zanichelli told ZebethMedia via email. “We want to change this by encouraging large-scale collaborations and, thanks to the support of Stability AI, back those collaborations with resources that only the largest industrial laboratories have access to.”

Generating DNA sequences

Of OpenBioML’s ongoing projects, DNA-Diffusion — led by pathology professor Luca Pinello’s lab at the Massachusetts General Hospital & Harvard Medical School — is perhaps the most ambitious. The goal is to use generative AI systems to learn and apply the rules of “regulatory” sequences of DNA, or segments of nucleic acid molecules that influence the expression of specific genes within an organism. Many diseases and disorders are the result of misregulated genes, but science has yet to discover a reliable process for identifying — much less changing — these regulatory sequences.
DNA-Diffusion proposes using a type of AI system known as a diffusion model to generate cell-type-specific regulatory DNA sequences. Diffusion models — which underpin image generators like Stable Diffusion and OpenAI’s DALL-E 2 — create new data (e.g. DNA sequences) by learning how to destroy and recover many existing samples of data. As they’re fed the samples, the models get better at recovering all the data they had previously destroyed to generate new works.

Image Credits: Stability AI

“Diffusion has seen widespread success in multimodal generative models, and it is now starting to be applied to computational biology, for example for the generation of novel protein structures,” Zanichelli said. “With DNA-Diffusion, we’re now exploring its application to genomic sequences.”

If all goes according to plan, the DNA-Diffusion project will produce a diffusion model that can generate regulatory DNA sequences from text instructions like “A sequence that will activate a gene to its maximum expression level in cell type X” and “A sequence that activates a gene in liver and heart, but not in brain.” Such a model could also help interpret the components of regulatory sequences, Zanichelli says — improving the scientific community’s understanding of the role of regulatory sequences in different diseases.

It’s worth noting that this is largely theoretical. While preliminary research on applying diffusion to protein folding seems promising, it’s very early days, Zanichelli admits — hence the push to involve the wider AI community.

Predicting protein structures

OpenBioML’s LibreFold, while smaller in scope, is more likely to bear immediate fruit. The project seeks to arrive at a better understanding of machine learning systems that predict protein structures, in addition to ways to improve them.
As my colleague Devin Coldewey covered in his piece about DeepMind’s work on AlphaFold 2, AI systems that accurately predict protein shape are relatively new on the scene but transformative in terms of their potential. Proteins comprise sequences of amino acids that fold into shapes to accomplish different tasks within living organisms. The process of determining what shape an amino acid sequence will create was once an arduous, error-prone undertaking. AI systems like AlphaFold 2 changed that; thanks to them, over 98% of protein structures in the human body are known to science today, as well as hundreds of thousands of other structures in organisms like E. coli and yeast.

Few groups have the engineering expertise and resources necessary to develop this kind of AI, though. DeepMind spent days training AlphaFold 2 on tensor processing units (TPUs), Google’s costly AI accelerator hardware. And amino acid sequence training data sets are often proprietary or released under non-commercial licenses.
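The destroy-and-recover idea behind the diffusion models discussed earlier can be made concrete with a few lines of numerics. A minimal sketch, using a toy numeric vector as a stand-in for an encoded sequence and an oracle in place of a trained denoiser (illustrative only, not the DNA-Diffusion codebase):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x0, alpha_bar):
    """'Destroy' step: blend the clean sample with Gaussian noise.
    alpha_bar in (0, 1] is the cumulative signal level at this step."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

def recover_x0(xt, predicted_noise, alpha_bar):
    """'Recover' step: invert the blend given a noise prediction.
    A trained model would predict the noise; here we pass it in exactly."""
    return (xt - np.sqrt(1.0 - alpha_bar) * predicted_noise) / np.sqrt(alpha_bar)

# Toy 'sequence': one number per position.
x0 = np.array([0.1, 0.9, -0.4, 0.7])
xt, true_noise = forward_noise(x0, alpha_bar=0.3)
x0_hat = recover_x0(xt, true_noise, alpha_bar=0.3)  # matches x0 up to rounding
```

Training a real diffusion model amounts to teaching a network to supply `predicted_noise` itself, at every noise level, which is what lets it generate new samples from pure noise.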

A look at the EU’s plan to reboot product liability rules for AI • ZebethMedia

A recently presented European Union plan to update long-standing product liability rules for the digital age — including addressing the rising use of artificial intelligence (AI) and automation — took some instant flak from European consumer organization BEUC, which framed the update as something of a downgrade, arguing EU consumers will be left less well protected from harms caused by AI services than by other types of products.

For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections, only last month the UK’s data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform “emotional analysis,” urging that such tech not be used for anything other than pure entertainment. On the public sector side, back in 2020, a Dutch court found an algorithmic welfare risk assessment for social security claimants breached human rights law. And, in recent years, the UN has also warned over the human rights risks of automating public service delivery. Additionally, US courts’ use of blackbox AI systems to make sentencing decisions, opaquely baking in bias and discrimination, has drawn criticism for years.

BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and ensure consumer protection laws are not outpaced. But its view of the EU’s proposed policy package — which consists of tweaks to the existing Product Liability Directive (PLD) so that it covers software and AI systems (among other changes), plus a new AI Liability Directive (AILD) which aims to address a broader swathe of potential harms stemming from automation — is that it falls short of the more comprehensive reform package it was advocating for.
“The new rules provide progress in some areas, do not go far enough in others, and are too weak for AI-driven services,” it warned in a first response to the Commission proposal back in September. “Contrary to traditional product liability rules, if a consumer gets harmed by an AI service operator, they will need to prove the fault lies with the operator. Considering how opaque and complex AI systems are, these conditions will make it de facto impossible for consumers to use their right to compensation for damages.”

“It is essential that liability rules catch up with the fact we are increasingly surrounded by digital and AI-driven products and services like home assistants or insurance policies based on personalised pricing. However, consumers are going to be less well protected when it comes to AI services, because they will have to prove the operator was at fault or negligent in order to claim compensation for damages,” added deputy director general, Ursula Pachl, in an accompanying statement responding to the Commission proposal.

“Asking consumers to do this is a real let down. In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules. As a result, consumers will be better protected if a lawnmower shreds their shoes in the garden than if they are unfairly discriminated against through a credit scoring system.”

Given the continued, fast-paced spread of AI — via features such as ‘personalized pricing’ or even the recent explosion of AI generated imagery — there could come a time when some form of automation is the rule not the exception for products and services — with the risk, if BEUC’s fears are well-founded, of a mass downgrading of product liability protections for the bloc’s ~447 million citizens.
Discussing its objections to the proposals, Frederico Oliveira Da Silva, a senior legal officer at BEUC, raised a further wrinkle: the AILD makes explicit reference to an earlier Commission proposal for a risk-based framework to regulate applications of artificial intelligence — aka the AI Act — implying a need for consumers to, essentially, prove a breach of that regulation in order to bring a case under the AILD.

Despite this connection, the two pieces of draft legislation were not presented simultaneously by the Commission — there’s around 1.5 years between their introduction — creating, BEUC worries, disjointed legislative tracks that could bake in inconsistencies and dial up the complexity.

For example, it points out that the AI Act is geared towards regulators, not consumers, which could limit the utility of the proposed new information disclosure powers in the AI Liability Directive. The EU rules determining how AI makers are supposed to document their systems for regulatory compliance are contained in the AI Act, so consumers may struggle to understand the technical documents they can obtain under the AILD’s disclosure powers, since the information was written for submission to regulators, not for an average user.

When presenting the liability package, the EU’s justice commissioner also made direct reference to “high risk” AI systems, using a specific classification contained in the AI Act, which appeared to imply that only a subset of AI systems would carry liability exposure. However, when queried whether liability under the AILD would be limited to the “high risk” AI systems in the AI Act (which represent a small subset of potential applications for AI), Didier Reynders said that’s not the Commission’s intention. So, well, confusing much?

BEUC argues a disjointed policy package has the potential to — at the least — introduce inconsistencies between rules that are supposed to slot together and function as one.
It could also undermine the application of, and access to, redress for liability by creating a more complicated track for consumers seeking to exercise their rights. Meanwhile, the staggered legislative timings suggest one piece of a linked package for regulating AI will be adopted in advance of the other, potentially opening up a gap in consumers’ ability to obtain redress for AI-driven harms in the interim. As it stands,

Iron Ox lays off 50, amounting to nearly half its staff • ZebethMedia

There are no sure bets in this — or any — business. Automation, agriculture and a climate bent are green flags, but no category is immune from mounting economic headwinds on top of the already difficult task of launching a successful startup. While robotics has thus far seen a limited slowdown in investing relative to many other sectors, there’s no such thing as a recession-proof business in startup land.

Bay Area-based Iron Ox has certainly had no shortage of supporters. The agtech firm has raised north of $100 million, culminating with a $53 million Series C announced in September of last year. But earlier this week, the robotic agtech startup instituted widespread layoffs. All told, 50 jobs were cut this week, a figure that amounts to nearly half of the company’s staff of “just over 100 people.” Chief legal officer Myra Pasek tells ZebethMedia that the decision was made in order to “extend [its] cash runway.” Pasek adds:

“We’ve decided to hyperfocus on our core competence of engineering and technology; as a result, we eliminated many roles that are not core to our renewed focus. However, the layoff was comprehensive and included positions throughout the organization — i.e., not limited to only certain departments. Reducing the Iron Ox team was a painful decision — one we did not take lightly. We are working with our board members and leaning into our extensive ecosystem throughout Silicon Valley to help employees find meaningful new work at mission-aligned companies. Iron Ox has always hired world-class talent, and I’m confident that the individuals we unfortunately had to cut this week will have many options open to them. As a matter of policy, we are not going to provide additional details or comment on specific personnel, and we ask that you respect their privacy at this sensitive time.”

It’s a massive blow for a well-funded firm at the cross section of several growth areas.
Iron Ox’s play has focused on fully automated greenhouses, courtesy of robotic arms, Kiva-like plant-moving carts and other technologies. Utilizing indoor growing techniques and a trove of data, the pitch promised broader growing seasons in more diverse climates and the use of fewer resources than standard farming, all while still harnessing the sun in a way that is often altogether removed from vertical farming. Precisely what shape the new focus will take remains to be seen, though the company’s site reflects a broad range of different satellite categories, including plant and data science and robotics.

Speaking with ZebethMedia, Iron Ox explained that it had no intention of winding down operations, though the firm is seemingly open to both seeking additional funding and, perhaps, even a sale. “[A]t Iron Ox, our attitude is that we are always willing and eager to meet with mission-aligned investors who want to decarbonize the agriculture sector,” Pasek says. “Like other competitive startups, we never stop fundraising. We are not talking about winding down operations — we are more focused than ever on our core competencies in engineering and technology.”

There’s still green in climate robots • ZebethMedia

Kicking things off with a big funding round for AMP Robotics this week for a couple of reasons, but when push comes to shove, it comes down to something really simple: There are a lot of great reasons to be bullish on automation and there are a lot of equally great reasons to be bullish on climate tech. If you can manage to position yourself right in the middle of that Venn diagram, you’re probably sitting pretty right now.

There are caveats, of course. There are always caveats. A big, scary bear market is the most immediate. We’ve alluded to current and coming layoffs in recent editions of this newsletter, and the truth is that there are going to be a lot more before we’re on the other side of this. As bright as your category is long-term, no one exists outside these macro trends. I certainly wouldn’t want to be in the position of raising a round to keep the lights on at my startup as the headwinds grow stronger. The days of the nine-digit Series A seem to have mostly drawn to a close for the foreseeable future, and I’m accordingly hearing more reports of decreased headcounts.

But if I had to choose a tech startup space to ride this out in, climate and automation would be at or near the top. To steal a paragraph from Connie’s recent interview with Chris Sacca:

[Climate investing] is recession proof, even without the IRA. Everything we’re doing is providing a substitute good. That’s what almost feels unfair. You spend years building Twitter and you put it up in the app store and you hope somebody gives a damn. It could be a really well-designed product, but maybe no one cares, whereas everything we’re building right now, we actually know the demand for it. And if we deliver a better, cheaper, faster, cooler, easier-to-use, sexier product, then we’ll even grow the market. So I actually think this is some of the easiest investing we’ve done.
From where I sit, “recession proof” seems a little hyperbolic in the near term, but climate disaster isn’t a thing of the future. We’re living with it — and have been for some time. There are going to be plenty of bandwagon jumpers and greenwashers in the interim, but if you’ve got good vision and better vetting, the right climate-focused technology might be as close to a sure thing as you’re going to get as an investor. Ditto for robotics and automation for reasons we’ve outlined plenty of times over the last couple of years. Find the right solution for the right problem, and you might one day be looking at your own $91 million Series C. I’m far from a technological utopian, and my feelings on the future of climate change are a lot darker than I’m comfortable discussing here. It certainly doesn’t help to prep for all of this by reading a recent Greenpeace report that notes, “The plastics, packaging, and recycling industries have waged a decades-long misinformation campaign to perpetuate the myth that plastic is recyclable.” Image Credits: AMP Robotics It’s important to be pragmatic to a fault here. We don’t do ourselves any favors by sugarcoating the size and scope of the current crisis. Nor do we have much to gain by going full doomer. Somewhere between the two exists the possibility of achievable solutions. None will fix the problem, but if we’re lucky, the right one could serve to mitigate things. Recycling robotics firm AMP’s latest raise follows a sizable $55 million Series B raised in January of last year. Congruent Ventures and Wellington Management led this massive $91 million round, which also features participation from Blue Earth Capital, Sidewalk Infrastructure Partners, Tao Capital Partners, XN, Sequoia Capital, GV, Range Ventures, and Valor Equity Partners. 
“Advancements in robotics and automation are accelerating the transformation of traditional infrastructure, and AMP is seeking to reshape the waste and recycling industries,” said Wellington’s Michael DeLucia. “By bringing digital intelligence to the recycling industry, AMP can sort waste streams and extract additional value beyond what is otherwise possible.” All of this comes with the standard caveat that there are truly no surefire bets in this — or any — industry. There are still a million difficult-to-quantify factors, from timing to competition to sheer luck, which play a role in a product’s success. The more companies that enter a space, the more failure we’ll ultimately see. Though, that’s kind of the deal with early-stage investment — no one gets it right 100% of the time. But a few perfectly timed investments can make a career. The upshot of facing an impossibly large, seemingly insurmountable problem (if one can say such a thing) is that there are still a ton of problems that need the right minds to tackle them. There are a million oversaturated categories in automation right now. Filtering out all of the aforementioned greenwashing, the same can’t be said for climate. It “almost feels unfair,” to steal a line from Sacca. Frankly, it’s also a space I’d love to see more of the bigger names operate in. Take Google, for example. The company had a big AI day here in NYC this week, showcasing some of its work in the category. Google has investments in both climate and automation, and it would be great to see these sorts of companies working to solve big problems with big ideas. Racing to move e-commerce to a same-day delivery model is all well and good, but ordering all of the sunscreen on Amazon isn’t going to do you much good in the face of true climate catastrophe. 
Image Credits: Google Google for Good did take center stage at the event, however, with the company demonstrating how advances in ML are being used for the very important work of monitoring wildfires and floods. It’s also worth highlighting some of the company’s efforts in robot learning. Code as Policies (CaP) is a newly announced approach from Google’s robotics researchers that uses language models to generate code that directly controls robots.

StretchSense built an actually comfortable hand-motion capture glove • ZebethMedia

New Zealand-based StretchSense, a maker of hand motion capture technology, believes virtual and augmented reality are going to replace the smartphone as the dominant way we interact with digital worlds and each other. And when that happens, we’ll need natural ways to be immersed in those spaces, which means being able to touch and control virtual stuff with your hands. The startup has built a glove that captures the intricate motions of human hands, along with the software that then translates those movements into an animation. Currently, StretchSense’s tech is used by more than 200 gaming and visual effects studios worldwide to create realistic hand gestures for everything from sign language videos to cinematic fight scenes to virtual health and safety training. In fact, it was recently used to make Snoop Dogg’s ‘Crip Ya Enthusiasm‘ music video. Benjamin O’Brien, co-founder and CEO of StretchSense, told ZebethMedia he thinks StretchSense can “be the future of human machine interface for virtual worlds by building garments, not devices.” StretchSense’s glove is made using the startup’s proprietary stretchable sensor technology that precisely measures the human form. Before it’s sewn into the glove, the stretchy material looks and feels like elastic rubber with some light black lines running through it. Those black lines form a stretchable capacitor — the same type of sensor used in smartphone touchscreens, which work out where your finger is by measuring the amount of electrical energy stored at each point on the screen. In StretchSense’s case, when the material stretches with hand movements, the amount of energy it can store increases. “If you can measure the amount of energy that this can store, you can then work out its geometry very, very, very accurately,” said O’Brien. 
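The geometry-from-capacitance idea O’Brien describes can be illustrated with a toy parallel-plate model. This is a minimal sketch under my own assumptions (an incompressible dielectric and made-up dimensions), not StretchSense’s actual algorithm:

```python
# Toy model of a stretchable capacitor. Assuming the elastomer is
# incompressible, stretching by a factor s widens the plate area by s
# and thins the dielectric by 1/s, so capacitance C = eps * A / d
# grows roughly as s^2 — which makes the stretch recoverable from C.

EPS = 8.854e-12 * 3.0  # vacuum permittivity x an assumed relative permittivity

def capacitance(area_m2: float, thickness_m: float) -> float:
    """Ideal parallel-plate capacitance, in farads."""
    return EPS * area_m2 / thickness_m

def stretch_from_capacitance(c_measured: float, c_rest: float) -> float:
    """Invert the s^2 relationship to recover the stretch ratio."""
    return (c_measured / c_rest) ** 0.5

c_rest = capacitance(1e-4, 1e-4)               # sensor at rest
c_now = capacitance(1.5e-4, 1e-4 / 1.5)        # stretched 1.5x: area up, thickness down
print(round(stretch_from_capacitance(c_now, c_rest), 2))  # 1.5
```

The same idea generalizes: an array of such sensors across the hand yields a set of stretch ratios, from which software can reconstruct finger geometry.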
I tried the glove on myself during a demo in Auckland and can admit that it was indeed a comfortable fit, which O’Brien says isn’t always a given in the hand motion capture world. “The really core advantage is that we actually make garments, not devices. And by that we mean we make garments that are comfortable to wear, that don’t interfere with movement, that don’t break easily, and don’t have hard, lumpy bits of plastic,” said O’Brien. “And so the way we were able to beat out the competition in the motion capture space, if you look at any competitive product, it’s got all these lumpy bits of plastic all over and interferes moving the hand, it breaks easily. And it’s based on technology that just doesn’t naturally conform to the body.” StretchSense closed a $7.6 million Series A investment Thursday, led by Scotland-based Par Equity with participation by existing StretchSense investors GD1, the NZ-based venture capital firm, and Scottish Enterprise, Scotland’s national economic development agency. The startup intends to use the funds to grow out its center of excellence in Edinburgh, which is focused on AI and spatial computing and will work on machine learning problems to constantly improve the product — things like capturing finer and more accurate detail, lowering the threshold of the uncanny valley in animations, and transitioning from a 2D screen to a 3D virtual world. The startup is also developing a haptic glove, which it plans to launch next for VR training and which will simulate both touch and motion in virtual worlds. “We want to be the future of how people control and influence and touch virtual worlds, but you have to ground that in realistic business models,” said O’Brien. “And so realistic business model number one was content creation for gaming and movie studios. Number two for us would be VR training. 
And that’s all about solving the retraining crisis where you have people with shorter and shorter careers, but the complexity of those jobs is increasing. So you’ve got this issue where you actually need to be able to train people really quickly and often in time-critical, safety critical situations where there’s money or lives at stake.” Once StretchSense has built a viable business in the VR training space, the startup hopes to use a next iteration of that tech to move into the VR gaming and experience space. “We want to create tools that the creators of the metaverse will use to build amazing virtual spaces and experiences,” said O’Brien.

Dataloop secures cash infusion to expand its data annotation tool set • ZebethMedia

Data annotation, or the process of adding labels to images, text, audio and other forms of sample data, is typically a key step in developing AI systems. The vast majority of systems learn to make predictions by associating labels with specific data samples, like the caption “bear” with a photo of a black bear. A system trained on many labeled examples of different kinds of contracts, for example, would eventually learn to distinguish between those contracts and even extrapolate to contracts that it hasn’t seen before. The trouble is, annotation is a manual and labor-intensive process that’s historically been assigned to gig workers on platforms like Amazon Mechanical Turk. But with the soaring interest in AI — and in the data used to train that AI — an entire industry has sprung up around tools for annotation and labeling. Dataloop, one of the many startups vying for a foothold in the nascent market, today announced that it raised $33 million in a Series B round led by Nokia Growth Partners (NGP) Capital and Alpha Wave Global. Dataloop develops software and services for automating aspects of data prep, aiming to shave time off of the AI system development process. “I worked at Intel for over 13 years, and that’s where I met Dataloop’s second co-founder and CPO, Avi Yashar,” Dataloop CEO Eran Shlomo told ZebethMedia in an email interview. “Together with Avi, I left Intel and founded Dataloop. Nir [Buschi], our CBO, joined us as third co-founder, after he held executive positions [at] technology companies and [led] business and go-to-market at venture-backed startups.” Dataloop initially focused on data annotation for computer vision and video analytics. But in recent years, the company has added new tools for text, audio, form and document data and allowed customers to integrate custom data applications developed in-house. One of the more recent additions to the Dataloop platform is data management dashboards for unstructured data. 
(As opposed to structured data, or data that’s arranged in a standardized format, unstructured data isn’t organized according to a common model or schema.) Each dashboard provides tools for data versioning and metadata search, as well as a query language for querying datasets and visualizing data samples. Image Credits: Dataloop “All AI models are learned from humans through the data labeling process. The labeling process is essentially a knowledge encoding process in which a human teaches the machine the rules using positive and negative data examples,” Shlomo said. “Every AI application’s primary goal is to create the ‘data flywheel effect’ using its customer’s data: a better product leads to more users leads to more data and subsequently a better product.” Dataloop competes against heavyweights in the data annotation and labeling space, including Scale AI, which has raised over $600 million in venture capital. Labelbox is another major rival, having recently nabbed more than $110 million in a financing round led by SoftBank. Beyond the startup realm, tech giants, including Google, Amazon, Snowflake and Microsoft, offer their own data annotation services. Dataloop must be doing something right. Shlomo claims the company currently has “hundreds” of customers across retail, agriculture, robotics, autonomous vehicles and construction, although he declined to reveal revenue figures. An open question is whether Dataloop’s platform solves some of the major challenges that exist in data labeling today. Last year, a paper published out of MIT found that data labeling tends to be highly inconsistent, potentially harming the accuracy of AI systems. A growing body of academic research suggests that annotators introduce their own biases when labeling data — for example, labeling phrases in African American English (a modern dialect spoken primarily by Black Americans) as more toxic than the general American English equivalents. 
These biases often manifest in unfortunate ways; think moderation algorithms that are more likely to ban Black users than white users. Data labelers are also notoriously underpaid. The annotators who contributed captions to ImageNet, one of the better-known open source computer vision datasets, reportedly made a median of $2 per hour in wages. Shlomo says it’s incumbent on the companies using Dataloop’s tools to effect change — not necessarily Dataloop itself. “We see the underpayment of annotators as a market failure. Data annotation shares many qualities with software development, one of them being the impact of talent on productivity,” Shlomo said. “[As for bias,] bias in AI starts with the question that the AI developer chooses to ask and the instructions they supply to the labeling companies. We call it the ‘primary bias.’ For example, you could never identify color bias unless you ask for skin color in your labeling recipe. The primary bias issue is something the industry and regulators should address. Technology alone will not solve the issue.” To date, Dataloop, which has 60 employees, has raised $50 million in venture capital. The company plans to grow its workforce to 80 employees by the end of the year.
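The learning-from-labeled-examples process described at the top of this piece can be sketched in miniature with a one-nearest-neighbor classifier. The feature vectors and labels below are invented for illustration; real annotation pipelines and the models trained on them are far more involved:

```python
# Toy illustration of learning from human-annotated data: classify a
# new sample by copying the label of the closest labeled example.

def nearest_label(sample, labeled_data):
    """Return the label of the labeled example closest to `sample`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_data, key=lambda ex: sq_dist(ex[0], sample))[1]

# Annotated training examples: (feature vector, human-supplied label).
labeled = [
    ((0.9, 0.1), "bear"),
    ((0.1, 0.8), "contract"),
]

print(nearest_label((0.85, 0.2), labeled))  # bear
```

The quality of the human-supplied labels is the ceiling here: mislabel the examples and the classifier faithfully reproduces the mistake, which is exactly why annotation consistency and annotator bias matter so much.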
