Zebeth Media Solutions

Robotics & AI

Labrador Systems deploys its first assistive elder-care robots • ZebethMedia

We’ve been keeping tabs on Labrador Systems since we caught a very early demo of its elder care-focused technology in a hotel suite several CESes ago. Today the California-based robotics firm announced that it has begun deploying its Retriever Pro system to a handful of early clients, including On Lok PACE, Nationwide Insurance, Masonic Homes of California, Western Homes Communities, Eskaton, The Perfect Companion, Presbyterian Villages of Michigan, University of Michigan Flint and Graceworks Lutheran Services. The news follows extended piloting of the system in places like senior living communities.

The Retriever Pro is designed to bring a kind of assistive freedom to people living on their own with mobility limitations. It’s a clever technology that effectively amounts to a semi-autonomous mobile shelving system that can deliver objects its users might otherwise have trouble carrying.

“The burden on caregivers is growing at a rate that is simply not sustainable. Organizations are already experiencing major caregiver shortages, and in the coming years there will be significantly more people in my parents’ age group (85+) with fewer people to help take care of them,” CEO Mike Dooley said in a release tied to the news. “Our mission is to provide relief on both sides of that equation, empowering individuals who need care to do more on their own while extending the impact of each caregiver’s visit well beyond the time they are physically present.”

Image Credits: Labrador

The world of elder-care robotics is still fairly nascent in the U.S. Japan may have the largest head start in the category due, in part, to its aging population, but the concept has been growing in acceptance. A number of firms working to design more all-purpose systems have pointed to living assistance as a potential application, but currently the robotics market isn’t exactly flush with this tech.
The company says it also “continues to move forward with development and testing” of its more consumer-focused system, the Retriever. Dooley clarified the difference between the two in a comment to ZebethMedia, noting:

“The key added features for the Pro are for bringing caregivers and staff into the loop and overall supporting the care provider on their mission. A portion of that is on the software side, with integration with enterprise-grade solutions for care management. So for example, caregiving organizations could have multiple users log on to set schedules for the robot, check activity reports and remotely assist with the robot operation. On the hardware side, we’d have more options for carrying and powering a third-party tablet or other screen device that the care organization may already be using, to move that device through the home. The Pro will also have provisions for supporting cellular connectivity as an upgrade.”

OpenAI will give roughly 10 AI startups $1M each and early access to its systems • ZebethMedia

OpenAI, the San Francisco-based lab behind AI systems like GPT-3 and DALL-E 2, today launched a new program to provide early-stage AI startups with capital and access to OpenAI tech and resources. Called Converge, the cohort will be financed by the OpenAI Startup Fund, OpenAI says. The $100 million entrepreneurial tranche was announced last May and was backed by Microsoft and other partners.

The 10 or so founders chosen for Converge will receive $1 million each and admission to five weeks of office hours, workshops and events with OpenAI staff, as well as early access to OpenAI models and “programming tailored to AI companies.” “We’re excited to meet groups across all phases of the seed stage, from pre-idea solo founders to co-founding teams already working on a product,” OpenAI writes in a blog post shared with ZebethMedia ahead of today’s announcement. “Engineers, designers, researchers, and product builders … from all backgrounds, disciplines, and experience levels are encouraged to apply, and prior experience working with AI systems is not required.”

The deadline to apply is November 25, but OpenAI notes that it’ll continue to evaluate applications after that date for future cohorts. When OpenAI first detailed the OpenAI Startup Fund, it said recipients of cash from the fund would receive access to Azure resources from Microsoft. It’s unclear whether the same benefit will be afforded to Converge participants; we’ve asked OpenAI to clarify. We’ve also asked OpenAI to disclose the full terms for Converge, including the equity agreement, and we’ll update this piece once we hear back.

Beyond Converge, surprisingly, there aren’t many incubator programs focused exclusively on AI startups. The Allen Institute for AI has a small accelerator that launched in 2017, which provides up to a $500,000 pre-seed investment and up to $450,000 in cloud compute credits.
Google Brain founder Andrew Ng heads up the AI Fund, a $175 million tranche to initiate new AI-centered businesses and companies. And Nat Friedman (formerly of GitHub) and Daniel Gross (ex-Apple) fund the AI Grant, which provides up to $250,000 for “AI-native” product startups and $250,000 in cloud credits from Azure.

With Converge, OpenAI is no doubt looking to cash in on the increasingly lucrative industry that is AI. The Information reports that OpenAI — which itself is reportedly in talks to raise cash from Microsoft at a nearly $20 billion valuation — has agreed to lead financing of Descript, an AI-powered audio and video editing app, at a valuation of around $550 million. AI startup Cohere is said to be negotiating a $200 million round led by Google, while Stability AI, the company supporting the development of generative AI systems, including Stable Diffusion, recently raised $101 million.

The size of the largest AI startup financing rounds doesn’t necessarily correlate with revenue, given the enormous expenses (personnel, compute, etc.) involved in developing state-of-the-art AI systems. (Training Stable Diffusion alone cost around $600,000, according to Stability AI.) But the continued willingness of investors to cut these startups massive checks — see Inflection AI’s $225 million raise, Anthropic’s $580 million in new funding and so on — suggests that they have confidence in an eventual return on investment.

Hear the VC perspective at iMerit ML DataOps Summit • ZebethMedia

Don’t miss the investor-focused session “Current and Future State of ML DataOps Landscape” at the iMerit ML DataOps Summit on November 8.

As enterprises dive deeper into commercializing AI applications to improve business efficiencies, many realize the massive transformation and increasing complexity of the machine learning data operations landscape. Join our panelists as they dig into those complexities and share their perspectives. You’ll hear from Alfred Chuang, founder and general partner at Race Capital; Andy Pavlo, professor of computer science at Carnegie Mellon University and the CEO and co-founder of OtterTune; and Pavan Tripathi, partner at Bregal Sagemount.

Alfred Chuang — recognized by Andreessen Horowitz as the “Silicon Valley CEO’s CEO” — is an accomplished entrepreneur and venture capitalist. Before joining Race, Chuang co-founded and took BEA Systems public. He also became BEA’s chairman of the board, where he remained until 2008, when Oracle acquired the company for $8.6 billion. Prior to BEA, Chuang led product development, network infrastructure and systems architecture at Sun Microsystems. During his tenure, Sun grew from fewer than 1,000 to 60,000 people, with revenue over $6 billion.

Andy Pavlo is an associate professor of Databaseology in the Computer Science Department at Carnegie Mellon University. His (unnatural) infatuation with database systems has inadvertently caused him to incur several distinctions, such as the VLDB Early Career Award, an NSF CAREER award, a Sloan Fellowship and the ACM SIGMOD Jim Gray Best Dissertation Award. He is also the CEO and co-founder of the OtterTune database tuning startup.

Pavan Tripathi is a partner and co-founder at Bregal Sagemount. Prior to Bregal Sagemount, Pavan was an investment banker and private equity investor at Goldman Sachs. Most recently, he was a member of the growth equity team in Goldman Sachs’ Merchant Banking Division.
Pavan graduated summa cum laude from the University of California, Los Angeles with a BS in Electrical Engineering and a BA in Economics, and received an MBA from the Stanford University Graduate School of Business.

These are just three of the many leading AI/ML game-changers you’ll find featured in our power-packed agenda. Take a look at the other sessions, and then join us for the iMerit ML DataOps Summit on November 8. Don’t miss this opportunity to learn from some of the best minds in AI, data science and ML. Register for free today!

MLOps platform Galileo lands $18M to launch a free service • ZebethMedia

Galileo, a startup launching a platform for AI model development, today announced that it raised $18 million in a Series A round led by Battery Ventures with participation from The Factory, Walden Catalyst, FPV Ventures, Kaggle co-founder Anthony Goldbloom and other angel investors. The new cash brings the company’s total raised to $23.1 million and will be put toward growing Galileo’s engineering and go-to-market teams and expanding the core platform to support new data modalities, CEO Vikram Chatterji told ZebethMedia via email.

As the use of AI becomes more common throughout the enterprise, the demand for products that make it easier to inspect, discover and fix critical AI errors is increasing. According to one recent survey (from MLOps Community), 84.3% of data scientists and machine learning engineers say that the time required to detect and diagnose problems with a model is a problem for their teams, while over one in four (26.2%) admit that it takes them a week or more to detect and fix issues.

Some of those issues include mislabeled data, where the labels used to train an AI system contain errors, like a picture of a tree mistakenly labeled “houseplant.” Others pertain to data drift or data imbalance, which happens when data evolves to make an AI system less accurate (think a stock market model trained on pre-pandemic data) or the data isn’t sufficiently representative of a domain (e.g., a data set of headshots has more light-skinned people than dark-skinned).

Galileo’s platform aims to systematize AI development pipelines across teams using “auto-loggers” and algorithms that spotlight system-breaking issues. Built to be deployable in an on-premises environment, Galileo scales across the AI workflow — from predevelopment to postproduction — as well as unstructured data modalities like text, speech and vision.
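To make “data drift” concrete: one common way teams quantify it is the population stability index (PSI), which compares the distribution of a feature at training time against what the model sees in production. The sketch below is purely illustrative and is not Galileo’s implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index: a common drift score comparing the
    distribution a model was trained on against live production data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fractions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Training-time feature values vs. production values shifted upward:
train = [i / 100 for i in range(100)]          # uniform on [0, 1)
live = [0.5 + i / 200 for i in range(100)]     # uniform on [0.5, 1)
print(psi(train, train) < 0.1)   # identical data: no drift
print(psi(train, live) > 0.25)   # shifted data: major drift
```

A PSI above roughly 0.25 is conventionally read as major drift, the kind of shift the stock-market example above would produce.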
In data science, “unstructured” data usually refers to data that’s not arranged according to a preset data model or schema, like invoices or sensor data. Atindriyo Sanyal — Galileo’s second co-founder — makes the case that the Excel- and Python script–based processes to ensure quality data is being fed into models are manual, error-prone and costly.

A screenshot of the Galileo Community Edition. Image Credits: Galileo

“When inspecting their data with Galileo, users instantly uncover the long tail of data errors such as mislabeled data, underrepresented languages [and] garbage data that they can immediately take action upon within Galileo by removing, re-labeling or by adding additional similar data from production,” Sanyal told ZebethMedia in an email interview. “It has been critical for teams that Galileo supports machine learning data workflows end to end — even when a model is in production, Galileo automatically lets teams know of data drifts, and surfaces the highest-value data to train with next.”

The co-founding team at Galileo spent more than a decade building machine learning products, where they say they faced the challenges of developing AI systems firsthand. Chatterji led product management at Google AI, while Sanyal spearheaded engineering at Uber’s AI division and was an early member of the Siri team at Apple. Third Galileo co-founder Yash Sheth is another Google veteran, having previously led the company’s speech recognition platform team.

Galileo’s platform falls into the burgeoning category of software known as MLOps, a set of tools to deploy and maintain machine learning models in production. It’s in serious demand. By one estimation, the market for MLOps could reach $4 billion by 2025. There’s no shortage of startups going after the space, like Comet, which raised $50 million last November. Other vendors with VC backing include Arize, Tecton, Diveplane, Iterative and Taiwan-based InfuseAI.
But despite having launched just a few months ago, Galileo has paying customers from “high-growth” startups to Fortune 500 companies, Sanyal claims. “Our customers are using Galileo while building machine learning applications such as hate speech detection, caller intent detection at contact centers and customer experience augmentation with conversational AI,” he added.

Sanyal expects the launch of Galileo’s free offering — Galileo Community Edition — will boost sign-ups further. The Community Edition enables data scientists working on natural language processing to build machine learning models using some of the tools included in the paid version, Sanyal said. “With Galileo Community Edition, anyone can sign up for free, add a few lines of code while training their model with labeled data or during an inference run with unlabeled data to instantly inspect, find and fix data errors, or select the right data to label next using the powerful Galileo UI,” he added.

Sanyal declined to share revenue figures when asked. But he noted that San Francisco–based Galileo’s headcount has grown in size from 14 people in May to “more than” 20 people as of today.

Rewind wants to revamp how you remember, with millions from a16z • ZebethMedia

While there have been quite a few attempts to disrupt search engines, Rewind may be the first I’ve ever seen try to revamp the way we search through our online lives. One app at a time.

Built by Dan Siroker, the co-founder and former chief executive of Optimizely, Rewind wants to help people with their memory. The startup, launching today, uses nifty tech to record how someone scrolls and chats through their day. It creates a searchable recording of what happened when, who said what during that Zoom meeting and every instance someone has ever brought up expense reporting hacks.

“The content of discussions, debates and decisions are often lost forever as soon as a meeting is over,” Siroker said. “With Rewind, you never have to worry about losing this content again… you can go back to the exact moment in a meeting you are looking for by simply searching for a word that was said, a word that appeared on your screen.”

Siroker compares the app to a hearing aid, which he says changed his life after he started to go deaf in his 20s. “To lose a sense and gain it back again felt like gaining a superpower,” Siroker said. After leaving Optimizely in October 2020, he began looking for different ways to augment human capability. According to LinkedIn, he started a foundation in 2018 to “fund and conduct scientific research in order to accelerate our path toward human mind emulation.” In product form, this goal looks like Rewind.

Siroker said that the startup “uses APIs to determine the specific app that is in focus at any given time,” and then creates a timeline of that behavior. It also uses an API to allow deep linking to websites, so people can open them in Chrome directly from search results. Users don’t need to integrate with Gmail, Dropbox or Slack; instead, they can just download the app and “rewind” to start capturing.
Put in practice, if, for example, you forget the URL of the landing page of a new rival app that someone mentioned during a developer stand-up, you can rewind – haha – through your day, find the moment in the meeting someone threw the link on the screen, copy the link and paste.

As for the “why now” question, Siroker had an immediate answer: “Apple Silicon (i.e. M1 and M2 chips). Without it, we couldn’t do what we do. We leverage every part of the System on a Chip (SoC) to do everything locally on your machine.” Rewind claims that it compresses raw video recording data up to 3,750x without a loss of quality; “that means even with the smallest hard drive you can buy from Apple today, you can store years of continuous recordings,” the company said in a statement provided to ZebethMedia ahead of today’s announcement. (Apple is not an investor.)

Rewind addresses one of the biggest challenges with any app – user trust – head on. The recordings are stored locally on individuals’ Mac computers. In theory, that means that the company doesn’t touch the data. Rewind says that only users have access to their data. Siroker added that users can pause and delete recordings at any time, exclude specific apps like Signal or 1Password, or go into Incognito mode, saying that “by default, we don’t record Chrome Incognito or Safari private browsing windows.”

Image Credits: Rewind

There are still some risks with storing a sensitive, all-encompassing repository of everything you’ve seen, said or heard on a machine. Malware could potentially tap into sensitive data if your computer is compromised, for example. There’s also the awkward dance of a Rewind user recording someone on their screen without their permission, which is illegal in some states. Siroker said that he recommends users ask those they are recording for verbal permission, emphasizing that only the user has access to the recordings.
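That 3,750x figure is hard to evaluate without knowing what baseline counts as “raw.” As a rough sanity check, here is the arithmetic under our own assumptions (uncompressed 1080p frames at 30 fps, eight active hours a day); these numbers are ours, not Rewind’s:

```python
# Back-of-the-envelope check of the 3,750x compression claim.
# Assumptions (ours, not Rewind's): "raw" means uncompressed 1080p
# frames at 30 fps, and the user records 8 active hours per day.
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30
raw_rate = width * height * bytes_per_pixel * fps   # bytes/second, ~187 MB/s
compressed_rate = raw_rate / 3750                   # ~50 KB/s after compression

seconds_per_day = 8 * 3600
gb_per_year = compressed_rate * seconds_per_day * 365 / 1e9
print(f"{compressed_rate / 1e3:.1f} KB/s -> {gb_per_year:.0f} GB per year")
```

Under these naive assumptions, a year of recording outruns the smallest Mac SSD, which suggests the “years of recordings” claim assumes far fewer effective frames (screens are mostly static) or a different notion of raw input.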
Still, a user with Rewind may feel less inclined to complain about their corporate parents or share personal stories knowing that there is a recording of it somewhere (the delete button doesn’t work retroactively, unfortunately). “While the laws differ from state to state, we believe privacy is so important that we recommend users of our product hold themselves accountable to a much higher standard than the bare legal minimum,” Siroker said, speaking about proactive recording consent. “Not only is this the safest approach legally, but it is also just the right thing to do.”

As for how this makes people better at remembering instead of just better at not losing track of random things throughout their workday, the jury is still out. If you look through your day, that doesn’t necessarily mean you’ll remember things more; it may just mean you have more stored memories at your fingertips. Which I guess is a memory aid, but definitely one that requires you to be sitting down at your computer.

“The long-term vision is giving humans perfect memory; the product today is all about search,” Siroker said. “That’s where we can have the biggest impact. If you think about it, if we all already had perfect memory we wouldn’t need to search our emails, texts, docs, etc. to find things we’ve seen before. We would just remember them.”

So far, the company has raised $10 million at a $75 million valuation in a round led by Andreessen Horowitz (a16z), with participation from First Round Capital and others. The app is currently free, Siroker says, but there will be a freemium monthly subscription down the road. And, despite his history in helping companies better market advertising campaigns, Siroker says that Rewind “will never sell your data or do advertising.”

XPeng to begin autonomous driving public road tests in Guangzhou • ZebethMedia

XPeng received a permit Monday to begin testing its G9 electric SUV as an autonomous vehicle on public roads in Guangzhou. The company will begin testing a small fleet as soon as possible with a human safety operator in the driver’s seat. This is a milestone for XPeng as it aims to use its vehicles for robotaxi operations in the future. The G9 is the first mass-produced vehicle to qualify for such tests in China, XPeng claims.

The company is pursuing an approach of using EVs off the shelf for dual purposes — autonomous applications and individual sales — to lower the cost of production and make its vehicles more commercially viable. This is especially salient in the wake of Argo AI’s shutdown, with Ford and Volkswagen pulling their investments in the company in order to prioritize nearer-term bets like in-house advanced driver assistance systems (ADAS). The news follows XPeng’s announcement at its annual 1024 Tech Day that the G9 passed a government-led autonomous driving closed field test, which made it eligible for approval of further testing.

Most, if not all, current autonomous vehicle operators rely on existing vehicle models that have been retrofitted with hardware and software suites to drive autonomously. In the U.S., Waymo uses Jaguar I-Paces and Cruise uses Chevrolet Bolts. XPeng’s G9, which was unveiled in September as a passenger vehicle, will be tested for robotaxi applications without any hardware modifications — higher-end versions of the G9 will be built with Nvidia’s Drive Orin chips and rely on 31 sensors, including a front-view camera and dual lidar sensors. That means the vehicle that’s being tested for robotaxi operations is the same vehicle that will be sold to private passengers. The only difference will be in the software.
By early next year, G9s purchased by individuals in Guangzhou, Shenzhen and Shanghai will have the option of downloading XNGP software, which is XPeng’s “full scenario” ADAS that promises to automate highway driving, city driving and parking tasks. The G9s XPeng will use for autonomous vehicle testing will be given an upgrade that allows them to perform at Level 4 autonomy. Level 4 autonomy means the vehicle can drive itself without requiring a human safety operator to take over, as long as it’s operating in certain conditions, like a geofenced area or time of day.

XPeng will integrate data from both private passenger vehicles and autonomous test vehicles to continue to operate both systems in parallel, a spokesperson said. The company aims to test its vehicle for robotaxi applications over the next two to three years as it develops its next-generation vehicle, with the goal of launching that by 2025 as one of the options, according to Xinzhou Wu, XPeng’s VP of autonomous driving. “Hopefully the software will be in good shape by then so we can at least see a limited scenario similar to what Cruise is doing now,” Wu told ZebethMedia.

Wu said that while the new vehicle will have a full sensor suite, it probably won’t come in the form of a purpose-built AV — XPeng for now is sticking with a strategy of using the same mass-produced vehicle for passenger vehicle sales as it does for robotaxi operations. XPeng also doesn’t intend to run its own robotaxi operation in the future. The company envisions itself as more of a provider of the software, and possibly the hardware, stack for other ride-hail-focused companies.
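The “geofenced area” condition in Level 4 boils down to a containment test: is the vehicle’s position inside the polygon where autonomous operation is allowed? A minimal ray-casting sketch (illustrative only, not XPeng’s code):

```python
def in_geofence(point, polygon):
    """Ray-casting point-in-polygon test: cast a ray to the right and
    count how many polygon edges it crosses; an odd count means inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)  # edge spans the ray's y level
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Toy operating area (coordinate offsets) around a depot:
zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(in_geofence((2.0, 1.5), zone))  # inside the zone: autonomy allowed
print(in_geofence((5.0, 1.5), zone))  # outside: hand back to the safety operator
```

Production systems layer map data, localization confidence and time-of-day rules on top of this basic containment check.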

it’s an “open field” • ZebethMedia

If you’ve been closely following the progress of OpenAI, the company run by Sam Altman whose neural nets can now write original text and create original pictures with astonishing ease and speed, you might just skip this piece. If, on the other hand, you’ve only been vaguely paying attention to the company’s progress and the increasing traction that other so-called “generative” AI companies are suddenly gaining, and want to better understand why, you might benefit from this interview with James Currier, a five-time founder and now venture investor who co-founded the firm NFX five years ago with several of his serial-founder friends.

Currier falls into the camp of people following the progress closely — so closely that NFX has made numerous related investments in “generative tech,” as he describes it, and it’s garnering more of the team’s attention every month. In fact, Currier doesn’t think the buzz about this new wrinkle on AI is hype so much as a realization that the broader startup world is suddenly facing a very big opportunity for the first time in a long time. “Every 14 years,” says Currier, “we get one of these Cambrian explosions. We had one around the internet in ’94. We had one around mobile phones in 2008. Now we’re having another one in 2022.”

In retrospect, this editor wishes she’d asked better questions, but I’m learning here, too. Excerpts from our chat follow, edited for length. You can listen to our longer conversation here.

TC: There’s a lot of confusion about generative AI, including how new exactly it is, or whether it’s just become the latest buzzword.

JC: I think what happened to the AI world in general is that we had a sense that we could have deterministic AI, which would help us identify the truth of something. For example, is that a broken piece on the manufacturing line? Is that an appropriate meeting to have? It’s where you’re determining something using AI in the same way that a human determines something.
That’s largely what AI has been for the last 10 to 15 years. The other sets of algorithms in AI were more these diffusion algorithms, which were intended to look at huge corpuses of content and then generate something new from them, saying, ‘Here are 10,000 examples. Can we create the 10,001st example that is similar?’ Those were pretty fragile, pretty brittle, up until about a year and a half ago.

[Now] the algorithms have gotten better. But more importantly, the corpuses of content we’ve been looking at have gotten bigger because we just have more processing power. So what’s happened is, these algorithms are riding Moore’s law — [with vastly improved] storage, bandwidth, speed of computation — and have suddenly become able to produce something that looks very much like what a human would produce. That means the face value of the text that it will write, and the face value of the drawing it will draw, looks very similar to what a human will do. And that’s all taken place in the last two years. So it’s not a new idea, but it’s newly at that threshold. That’s why everyone looks at this and says, ‘Wow, that’s magic.’

So it was compute power that suddenly changed the game, not some previously missing piece of tech infrastructure?

It didn’t change suddenly; it just changed gradually until the quality of its generation got to a point where it was meaningful for us. So the answer is generally no, the algorithms have been very similar. In these diffusion algorithms, they have gotten somewhat better. But really, it’s about the processing power. Then, about two years ago, the [powerful language model] GPT came out, which was an on-premise type of calculation, then GPT-3 came out, where [the AI company OpenAI] would do [the calculation] for you in the cloud; because the data models were so much bigger, they needed to do it on their own servers. You just can’t afford to do it [on your own]. And at that point, things really took a jump up.
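An aside from this editor: the “here are 10,000 examples, create the 10,001st” idea Currier describes can be reduced to the smallest possible generative model, which fits a distribution to a corpus and then samples something new but similar. Real diffusion models instead learn to reverse a gradual noising process over images or text, but the generate-from-a-corpus framing is the same. A toy, hypothetical illustration:

```python
import random
import statistics

random.seed(0)

# A "corpus" of 10,000 examples: say, heights drawn from some real-world
# distribution we want to imitate.
corpus = [random.gauss(170.0, 8.0) for _ in range(10_000)]

# "Training": fit the distribution's parameters to the corpus.
mu = statistics.fmean(corpus)
sigma = statistics.stdev(corpus)

# "Generation": sample the 10,001st example -- new, but plausibly similar.
new_example = random.gauss(mu, sigma)
print(f"learned mu={mu:.1f}, sigma={sigma:.1f}, generated {new_example:.1f}")
```

The fragility Currier mentions shows up the moment the data stops being this simple; the breakthrough of the last two years is doing the same fit-and-sample trick over distributions with billions of parameters.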
We know because we invested in a company doing AI-based generative games, including “AI Dungeon,” and I think the vast majority of all GPT-3’s computation was coming through “AI Dungeon” at one point.

Does “AI Dungeon” then require a smaller team than another game-maker might?

That’s one of the big advantages, absolutely. They don’t have to spend all that money to house all that data, and they can, with a small group of people, produce tens of gaming experiences that all take advantage of that. [In fact] the idea is that you’re going to add generative AI to old games, so your non-player characters can actually say something more interesting than they do today, though you’re going to get fundamentally different gaming experiences coming out of AI into gaming, versus adding AI into the existing games.

So a big change is in the quality? Will this technology plateau at some point?

No, it will always be incrementally better. It’s just that the differences of the increments will be smaller over time because they’re already getting pretty good. But the other big change is that OpenAI wasn’t really open. They generated this amazing thing, but then it wasn’t open and was very expensive. So groups got together like Stability AI and other folks, and they said, ‘Let’s just make open source versions of this.’ And at that point, the cost dropped by 100x, just in the last two or three months.

These are not offshoots of OpenAI. All this generative tech is not going to be built just on the OpenAI GPT-3 model; that was just the first one. The open source community has now replicated a lot of their work, and they’re probably eight months behind, six months behind, in terms of quality. But it’s going to get there. And because the open source versions are a third or

Galen Robotics looks to assist ENT surgeons with new bot and $15M round • ZebethMedia

Medical devices and robots have been making their way into operating rooms in an increasing number of procedures. Now a new robot is trying to forge its path in the OR and assist surgeons who don’t yet have that advantage.

“There are surgeons out there that have really no robotic assistance at all,” said Bruce Lichorowic, CEO of Galen Robotics. “So you have surgeons out there that are doing everything still by hand, using their training to keep their tremor under control to keep themselves stable. Our goal is to see if we can assist them in these areas where there’s really no help today.”

The company’s first robot aims to assist in soft tissue surgeries. Called the Galen ES, it acts as a support for surgeons performing ear, nose and throat (ENT) surgeries, particularly laryngeal cancer operations. Swappable instruments follow the surgeon’s hand movements but allow the user to take a break if needed. According to Lichorowic, the goal is to gain more clearances to help in other ENT, brain, spine and cardiothoracic procedures.

The Galen ES takes up about the space of a person, and the company claims it takes no longer than four minutes to set up. While the device is in use, it tracks and records a surgeon’s movements to later be used for training purposes. The product is currently undergoing FDA review for clinical use approval, which the company hopes will come by Q1 or Q2 of 2023. Although the product is under review, a 2019 study showed surgeons who used the device performed better and had close to a threefold boost in manual dexterity.

The Da Vinci surgical system opened the market to adopting surgical robots. Subsequently, other robots have entered the market addressing general, cardiac and orthopedic surgery needs. According to Galen, its robot will be the first to address neurosurgery and spine surgeries, once clearance is earned.
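The steadying Lichorowic describes, damping involuntary hand tremor while tracking intentional motion, is in signal-processing terms a low-pass filtering problem. A minimal sketch (our illustration, not Galen’s actual algorithm): an exponential moving average strongly attenuates a fast tremor riding on a held position.

```python
import math

def ema(samples, alpha=0.1):
    """Exponential moving average: a simple low-pass filter.
    Low alpha = heavy smoothing (more tremor rejection, more lag)."""
    out = [samples[0]]
    for s in samples[1:]:
        out.append(alpha * s + (1 - alpha) * out[-1])
    return out

# A surgeon holding an instrument at position 1.0 with an 8 Hz,
# +/-0.05 tremor, sampled at 100 Hz:
fs, tremor_hz, amp = 100, 8, 0.05
raw = [1.0 + amp * math.sin(2 * math.pi * tremor_hz * i / fs)
       for i in range(400)]
smooth = ema(raw)

# Compare residual tremor after the filter settles (last two seconds):
raw_dev = max(abs(p - 1.0) for p in raw[200:])
smooth_dev = max(abs(p - 1.0) for p in smooth[200:])
print(f"tremor amplitude: raw {raw_dev:.3f}, filtered {smooth_dev:.3f}")
```

The trade-off is lag: heavy smoothing trails fast intentional motion, which is why real surgical systems rely on far more sophisticated tremor-cancellation filtering than this sketch.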
Image Credits: Galen Robotics

Hospitals adopting the robot must commit to using the device in 200 cases and pay upwards of $1,500 per use. Though, if a hospital wants to buy the robot outright, it can do so for a cool $350,000. Surgeons at Stanford, Harvard Medical School, UCSF, USC and Johns Hopkins have expressed interest in the product, according to Galen. The device was originally developed and tested at Johns Hopkins.

Galen has garnered support in the form of a $15 million Series A round from Ambix Healthcare Partners. The company has also opened a second close for its Series A, hoping to raise an additional $5 million.

“We watched this team take an early surgical robotic prototype from Johns Hopkins University’s Robotics Lab, develop it into a potential game changer, and submit it to the FDA, all during a pandemic,” said Arron Berez, managing director of Ambix Healthcare Partners, in a written statement. “Add to that the current state of supply chain issues and economic uncertainty, and we’re very impressed with how this team was able to consistently execute and hit their milestones.”

This round’s funds will be used to develop a surgeon training program and expand various teams within the company.

Dawn of the tentacle • ZebethMedia

Fair warning, it's going to be a quick one from me today. I caught the thing again, roughly three months after the last time I caught the thing. They say, "third time's the charm," and I now recognize that they were referring to chest pain and a general light-headedness. Turns out it doesn't get easier. Send soup.

With that in mind, consider this week's Actuator a bit more on the housecleaning side of things (don't we have robots for that now?). It's more of a smattering of links to interesting stories from the past week, along with some that no doubt fell through the cracks last week, during Disrupt. If this is your first Actuator, sorry. Trying hard not to be sick this time next week.

Trend-wise (if a week of news can be referred to as such), I'm seeing a bit of a dip in robotic investment news, with university research rushing in to fill the vacuum. The latter is most likely due, in part, to the school year being back in full force. Not that robotics researchers get the summer off, of course.

Before the fun stuff, let's discuss potential slowdowns. As investor Kelly Chen noted on our VC panel at Disrupt, "On the less rosy side, I think the layoffs are yet to come. In an economic downturn, the customers will be less willing to be experimental, so they're thinking about cutting costs and then economics just becomes so much more important."

The list of "recession-proof" industries is short and doesn't include robotics, even though the category has so far been relatively unaffected by the drying up of VC funds. We've got a double-edged sword here. On one side, automation can help stave off some economic impacts at companies, if properly deployed. On the other, so much of the stuff we talk about here is so long-tail that it's easy to see investors and others succumbing to very real short-term concerns.

Image Credits: Berkshire Grey

Obviously, none of this stuff should be painted with too broad a brush. There are so many different factors at play here.
Berkshire Grey, whose stock has dipped since a 2021 SPAC deal, is an example of a company that recently "made some updates." For its part, the firm is framing this as more of a correction than anything. BG won't confirm how large those "updates" are, but it told ZebethMedia:

We discussed on our Q2 earnings call that we've matured as a company, improved business operations, and know exactly where we need to focus and invest. We made some updates to our team back in August that were small but will help us focus on continuing to grow our business.

That news arrives as the company signs an "equity purchase agreement" with Lincoln Park Capital, which it tells me it's done for the sake of "some added financial flexibility." Per a release on the latter bit of news:

Under the terms and conditions of the Agreement, the Company has the right, but not the obligation, to sell up to $75 million of its shares of common stock to Lincoln Park over a 36-month period, subject to certain limitations. Any common stock that is sold to Lincoln Park will occur at a purchase price that is determined by prevailing market prices at the time of each sale with no upper limits to the price Lincoln Park may pay to purchase the common stock.

The company tells me:

These types of deals are common. The $75M commitment from Lincoln Capital allows us to access capital in an inexpensive, simple way that provides us with some added financial flexibility.

Certainly the overall market for fulfillment robotics looks robust. Given the current level of saturation in the market, however, I'd say it's safe to expect the category to continue to transform for the foreseeable future.

Image Credits: Photo by Jared Wickerham/Argo AI

One other element worth pointing out in all of this is the human impact of automation. It's here and it's not going away anytime soon, but we can ease the blow as a society. Only if we actually choose to do so, of course.
A Reuters piece notes the timing of Walmart's move to lay off nearly 1,500 workers in fulfillment center roles in Atlanta, Georgia, following the acquisition of robotics startup Alert Innovation. The company said the following of the move:

We're converting the fulfillment center on Fulton Parkway to support our growing WFS (Walmart Fulfillment Services) business. As part of the conversion, the facility's infrastructure, operational resources, processes, staffing requirements and equipment are being adjusted to meet the building's needs.

I really need to stop leading with the bad news, right? I'm not sure tricking a kid into eating their broccoli is a good model for running a successful newsletter. I'll get this stuff right eventually (and when I'm a bit less light-headed).

Image Credits: IHMC (Strike a pose, Vogue)

I've noted on these pages why I'm not yet 100% sold on humanoid robots (though I'm aware of some compelling arguments for them), but it's always fun to watch different companies and laboratories take different approaches to the very real issues around real-world usage. The Institute for Human and Machine Cognition, in Pensacola, Florida, recently revealed a system it's working on with Boardwalk Robotics (with an assist from Moog's Integrated Smart Actuators) named Nadia. The system was named as an homage to gymnast Nadia Comăneci and is being developed with funding from the Office of Naval Research, which has been behind a number of interesting robotics projects. IHMC notes:

The Nadia project, which has a three-year timeline, is intended to function in indoor environments where stairs, ladders and debris would require a robot to have the same range of motion as a human, which can be particularly useful in firefighting, disaster response and other scenarios that might be dangerous for humans.

Image Credits: Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kröger, Ken Goldberg

New(ish) breakthroughs in clothes-folding robots. The dual-armed system SpeedFolding

Navina secures $22M to process and summarize medical data for clinicians • ZebethMedia

Navina, a company developing AI-powered assistant software for physicians, today announced that it raised $22 million in Series B funding led by ALIVE, with participation from Grove Ventures, Vertex Ventures Israel and Schusterman Family Investments. Bringing the startup's total raised to $44 million, inclusive of a grant from the Israel Innovation Authority, the proceeds will be put toward product development and widening Navina's footprint to home, virtual and urgent care, CEO and co-founder Ronen Lavi told ZebethMedia.

Navina was founded by Lavi and Shay Perera, who previously led the Israel Defense Forces' AI lab, where they say they built AI "assistant" systems for analysts suffering from data overload. Their work there inspired the products they went on to build at Navina, which aim to help physicians drowning in medical data.

"The funding comes at a pivotal time for the U.S. healthcare industry on the heels of the pandemic, when physician burnout is at an all-time high," Lavi told ZebethMedia in an email interview. "Navina's platform is uniquely able to put exactly the right patient information in front of physicians at the right time to give them a deep understanding at a glance, along with actionable insights at the point of care."

Several startups — and incumbents, for that matter — are developing AI assistant technologies for clinical settings. For example, there's Suki, which raised $20 million to create a voice assistant for doctors, and Bot MD, an AI-based chatbot for doctors. Lavi claims that Navina is distinguished by its ability to "understand the complex language of medicine," including non-clinical data. Trained on a dataset of imaging notes, consult notes, hospital notes, procedures and labs curated by a team led by medical doctors, Navina's AI systems integrate with existing electronic health records software to identify potential diagnoses as well as quality and risk gaps requiring attention.
Navina.ai uses AI to process and summarize medical records data. Image Credits: Navina

"Navina differentiates in the way it structures and organizes data specifically for primary care physicians at the moment of care," Lavi said. "Navina fits into existing workflows and familiar tools, meeting physicians and staff where they are … Its goal is to align workflows to effectively serve patient populations and improve value-based care."

One point of concern for this reporter is Navina's diagnostic capabilities. While perhaps helpful, medical algorithms have historically been built on biased rules and homogeneous datasets. The consequences have been severe. For example, one algorithm used to determine eligible candidates for kidney transplants puts Black patients lower on the list than white patients even when all other factors remain the same.

In response to a question about bias, Lavi said that Navina takes steps to "address health inequities and bias" and "ensure high accuracy of data sets and models." He added that the company is compliant with HIPAA requirements, underwent a third-party privacy audit, and is in the "final stages" of SOC 2 certification.

With "thousands" of clinicians and supporting staff using the platform, Lavi says he doesn't anticipate the economic downturn significantly impacting Navina. He demurred, however, when asked about the company's revenue and precise customer count.

"The pandemic gave Navina and other health tech companies a boost, as it required both patients and physicians to grow accustomed to new modalities of care, such as telemedicine and remote visits," Lavi said. "This has led traditional primary care providers to look for solutions that can help them take responsibility for their patients no matter where they enter the health system."

Navina has 65 employees currently. It expects to end the year with around 75.
