
Robotics & AI

Inside Motional’s strategy to bring robotaxis to market • ZebethMedia

Motional, the Hyundai-Aptiv joint venture that aims to commercialize autonomous driving technology, announced last week its partnership with Uber to bring robotaxi services to North American cities over the next 10 years. The Uber deal comes off the back of similar partnerships with Via and Lyft to launch robotaxi services in Las Vegas. Sensing a pattern emerging, we reached out to Akshay Jaising, Motional’s new VP of commercialization, who joined the company in July after a stint as director of business development at Kitty Hawk, the electric aviation startup backed by Larry Page that shut down last month. Jaising ran us through the different aspects of Motional’s go-to-market strategy. The upshot? Motional sees partnerships as a way to meet the customer where they already are.

The following interview has been edited for length and clarity.

ZebethMedia: Lyft, Via and now Uber. It looks like Motional thinks partnerships are really the way to go. Can you walk me through your thinking?

Akshay Jaising: The way we view it is we have limited resources. Our core competency is building the autonomy stack, right? We want to stay focused on doing that piece. There are other companies, like Uber and Lyft, that have developed a network for folks to hail rides. We think it makes sense to partner with them, especially as the technology matures, because we’re taking a very customer-centric view. As a customer, you want to go to one app to get from point A to point B, and you want all the options you need to get there. So we want to be part of that consideration set. It allows us to make our technology accessible to millions of riders. People who are used to using an app are now going to be delighted and surprised to see ‘Oh, there’s an option to take an autonomous car from Motional!’ It also gives us a little bit more runway as the technology matures. Initially, we expect smaller deployments. As we mature, you will have larger scale, and you’ll be able to serve more routes.
Taking the pathway of trying to create our own app would be more challenging from a customer perspective. If you open an app and there’s not always a ride available, it doesn’t meet your needs and you’re going to stop checking that application. Versus seamlessly integrating into your day-to-day mode of transportation, where you now get an option to use an autonomous vehicle.

Cruise and Waymo seem to be more vertically integrated at this stage, as both the tech provider and operator. Is that something Motional would consider in the long run?

When it comes to scaling, it’s a unit economics discussion, and that’s where I think partnerships become critical. The ecosystem includes mature businesses that have done pieces of that value chain over time, and have become really good at it. And with that, they’ve got cost efficiencies that they’re able to translate to value for a customer. Could we try to do everything? We could. But could we do it most efficiently and at a price point where the customer can actually benefit? How do we do it profitably and deploy at scale? And that’s where I think the partnerships are really important.

What does it look like selling this technology to ride-hail platforms? Like, is Motional essentially the gig worker with their own car in this scenario?

Image Credits: Motional

Without getting into the specifics of the agreement, at a high level, Motional is the provider for vehicles on the Uber or Lyft platform. That’s not to say this couldn’t change in the future. There are companies that are really good at fleet management, and maybe there’ll be emerging partnerships in that space, as well. But right now we are doing the entire soup to nuts — not only developing the tech, but it’s our vehicles. It’s our partnership with Hyundai that allows us to offer a customized experience. Our value proposition is we have great technology, but we also have thought about the customer and integrated key features into the vehicle based on that.
So for example, we have cameras for in-cabin monitoring, which are well integrated. We have customer assistance buttons on the exterior of the car, so if you have issues with unlocking the car with your app, you can actually request assistance. So we bundle that as a service and we’re like, okay, here’s why a partnership with us can help you scale and offer an additional option to your customers.

Are you trying to come into cities fully driverless from the get-go?

Everything we do is focused on safety and scaling when we are ready. At this stage, we feel the right approach is to go drivered first. So we’ll have a fleet of drivered vehicles, and then as the technology matures — we’ve got certain metrics and milestones we have to hit — we’ll take the driver out of the vehicle. So it would be a phased approach.

Would Motional be interested in working with an OEM to build a purpose-built AV, like Cruise with its Origin?

We just launched our partnership with the Hyundai Ioniq 5s, and we’re focused on that. We have nothing to share beyond that, but we’re constantly thinking of what’s next.

Would Motional pursue a commercialization route of integrating your tech into private passenger vehicles?

Right now, the technology is expensive, which is why we’ve taken a fleet-first approach. When you look at personal car ownership, the challenge is that because the cost is high, it’s going to be a small segment who buy it, and people use their cars maybe two hours a day, right? So they’re not fully utilizing this expensive asset. Deploying it in a fleet, we get a lot of exposure to the technology, and we have the chance to advance it and bring the cost down. So I think down the road, there will be an opportunity to start integrating Level 4 autonomy into mainstream vehicles, but we think that’s

Stanford’s robotic boot gives wearers a personalized mobility boost • ZebethMedia

Some of the most exciting robotics breakthroughs are happening in the exoskeleton space. Sure, any robotic system worth its salt has the potential to effect change, but this is one of the categories where such changes can be immediately felt — specifically, it’s about improving the lives of people with limited mobility. A team out of Stanford’s Biomechatronics Laboratory just published the results of years-long research into the category in Nature. The project began life — as these things often do — through simulations and laboratory work. The extent of the robot boot’s real-world testing has thus far been limited to treadmills. The researchers behind it, however, are readying it for life beyond the lab doors.

“This exoskeleton personalizes assistance as people walk normally through the real world,” lab head Steve Collins said in a release. “And it resulted in exceptional improvements in walking speed and energy economy.”

The principle behind the boot is similar to what drives a number of these systems. Rather than attempting to do the work for the wearer, it provides assistance, lowering some of the resistance and friction that come with mobility impairments. Where the lab says its approach differs, however, is in the machine learning models it uses to “personalize” the push it gives to the calf muscle.

Image Credits: Kurt Hickman

The researchers liken the assistance to removing a “30-pound backpack” from the user. Collins adds: “Optimized assistance allowed people to walk 9% faster with 17% less energy expended per distance traveled, compared to walking in normal shoes. These are the largest improvements in the speed and energy economy of walking of any exoskeleton to date. In direct comparisons on a treadmill, our exoskeleton provides about twice the reduction in effort of previous devices.”

Those kinds of numbers are delivered, in part, by the emulators that provide the foundation for much of the research.
The boot is the culmination of around 20 years of research at the lab, and now the team is working to commercialize the project, with plans to bring it to market in “the next few years.” They’re also developing variations on the hardware to help improve balance and reduce joint pain.

Thanks to AI, you can now create automations in Power Automate by simply describing them • ZebethMedia

Power Automate, Microsoft’s Power Platform service that helps users create workflows between apps, is getting new AI smarts. During its Ignite conference, Microsoft rolled out capabilities powered by OpenAI’s Codex, the code-generating machine learning system underpinning GitHub Copilot. Starting today (in public preview), Power Automate users can write what they want to automate in natural language and have Codex generate suggestions to jumpstart the flow creation. It’s Microsoft’s latest move to more tightly integrate the various technologies from OpenAI, the San Francisco AI startup in which it has invested $1 billion, into its family of products. Two years ago, Microsoft introduced a Power Apps feature that used GPT-3, OpenAI’s text-generating system, to create formulas in Power Fx, Power Platform’s programming language. Microsoft also continues to evolve Azure OpenAI Service, a fully managed, enterprise-focused platform designed to give businesses access to OpenAI innovations with governance features.

“Our goal is that anywhere in the ecosystem that a person would need to write code they have the flexibility to start with natural language too, and Codex is core to that strategy,” Stephen Siciliano, VP of Power Automate, told ZebethMedia in an email interview. “[These are] new tools that will help users eliminate tedious work and free up time for workers to focus on more high value projects.”

Image Credits: Microsoft

Using the new Codex-powered tool, Power Automate users can describe the type of workflow automation they’d like to create in a sentence. Codex will then translate this into flow recommendations, which — when set up with the appropriate connectors — can be fine-tuned within Power Automate’s flow designer to create an automated workflow. Siciliano says that the feature will support “key” Microsoft 365 connectors at launch, and that there will be additional integrations in the coming months.
“We have fine-tuned Codex primarily with the thousands of templates that we have for Power Automate cloud flows today,” he added. Originally, Codex was trained on billions of lines of public code in languages like Python and JavaScript to suggest new lines of code and functions given a few snippets of existing code. “These templates are a combination of Microsoft-built and community-submitted scenarios, so they cover a breadth of use cases and everything from very simple to more advanced flows.”

When asked about the longer-term roadmap, Siciliano declined to reveal much. But he suggested that Codex might come to more places within Power Platform in the future. “[T]here are many different places in the Power Platform where natural language may be useful, so you’ll see a broader rollout,” he continued. “Moreover, we will continue to enhance the accuracy of the [system] over time as well.”

The new Codex-Power Automate integration dovetails with enhancements to AI Builder, which also landed this morning. (AI Builder, a built-in Power Automate feature, lets users add AI capabilities and models to automated flows.) AI Builder now offers users the ability to train AI systems on the data they might want to extract from documents, allowing Power Automate to pull data from freeform documents such as contracts, statements of work and letters, even from tables that span several pages. Microsoft says the document-processing capabilities of AI Builder now support 164 languages, including handwritten Japanese.
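To make the idea concrete: the feature’s job is to turn a plain-English description into a structured flow, i.e. a trigger plus one or more connector actions that the flow designer then lets users fine-tune. The sketch below is a deliberately naive, keyword-based stand-in for what Codex does with a learned model; none of the connector, trigger or action names here come from the actual Power Automate API, and the whole mapping table is hypothetical.

```python
# Toy illustration of natural-language-to-flow suggestion.
# This is NOT the Power Automate or Codex API; all names are hypothetical.

TRIGGER_KEYWORDS = {
    "email arrives": {"connector": "Office365Outlook", "trigger": "OnNewEmail"},
    "file is added": {"connector": "SharePoint", "trigger": "OnFileCreated"},
}

ACTION_KEYWORDS = {
    "post a message to teams": {"connector": "Teams", "action": "PostMessage"},
    "save the attachment": {"connector": "OneDrive", "action": "CreateFile"},
}

def suggest_flow(description: str) -> dict:
    """Map a plain-English description to a flow skeleton (trigger + actions)."""
    text = description.lower()
    flow = {"trigger": None, "actions": []}
    for phrase, trigger in TRIGGER_KEYWORDS.items():
        if phrase in text:
            flow["trigger"] = trigger
            break
    for phrase, action in ACTION_KEYWORDS.items():
        if phrase in text:
            flow["actions"].append(action)
    return flow

flow = suggest_flow("When an email arrives, save the attachment and post a message to Teams")
print(flow["trigger"]["connector"])  # Office365Outlook
print(len(flow["actions"]))          # 2
```

A real model generalizes far beyond fixed phrases, but the output shape — a trigger wired to connector actions — is the part the user then refines in the designer.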

Microsoft announces Syntex, a set of automated document and data processing services • ZebethMedia

Two years ago, Microsoft debuted SharePoint Syntex, which leverages AI to automate the capture and classification of data from documents — building on SharePoint’s existing services. Today marks the expansion of the platform into Microsoft Syntex, a set of new products and capabilities including file annotation and data extraction. Syntex reads, tags and indexes document content — whether digital or physical — making it searchable and available within Microsoft 365 apps and helping manage the content lifecycle with security and retention settings.

According to Chris McNulty, the director of Microsoft Syntex, driving the launch was customers’ increasing desire to “do more with less,” particularly as a recession looms. A 2021 survey from Dimensional Research found that more than two-thirds of companies leave valuable data untapped, largely because of problems building pipelines to access that data.

“Just as business intelligence transformed the way companies use data to drive business decisions, Microsoft Syntex unlocks the value of the massive amount of content that resides within an organization,” McNulty told ZebethMedia in an email interview. “Virtually any industry with large scale content and processes will see benefits from adopting Microsoft Syntex. In particular, we see the greatest alignment with industries that work with a higher volume of technically dense and regulated content — financial services, manufacturing, health care, life sciences, and retail among them.”

Syntex offers backup, archiving, analytics and management tools for documents as well as a viewer to add annotations and redactions to files. Containers enable developers to store content in a managed sandbox, while “scenario accelerators” provide workflows for use cases like contract management, accounts payable and so on.
“The Syntex content processor lets you build simple rules to trigger the next action, whether it’s a transaction, an alert, a workflow or just filing your content in the right libraries and folders,” McNulty explained. “[Meanwhile,] the advanced viewer adds an annotation and inking layer on top of any content viewable in Microsoft 365. Annotations can be made securely, with different permissions than the underlying content, and also without modifying the underlying content.”

McNulty says that customers like TaylorMade are exploring ways to use Syntex for contract management and assembly, standardizing contracts with common clauses around financial terms. The company is also piloting the service to process orders, receipts and other transactional documents for accounts payable and finance teams, in addition to organizing and securing emails, attachments and other documents for intellectual property and patent filings.

“One of the fastest-growing content transactions is e-signature,” McNulty said. “[With Syntex, you] can send electronic signature requests using Syntex, Adobe Acrobat Sign, DocuSign or any of our other e-signature partner solutions, and your content stays in Microsoft 365 while it’s being reviewed and signed.”

Intelligent document processing of the type Syntex does is often touted as a solution to the problem of file management and orchestration at scale. According to one source, 15% of a company’s revenue is spent creating, managing and distributing documents. Documents aren’t just costly — they’re time-wasting and error-prone. More than nine in 10 employees responding to a 2021 ABBYY survey said that they waste up to eight hours each week looking through documents to find data, and using traditional methods to create a new document takes on average three hours and incurs six errors in punctuation, spelling, omissions or printing.
A number of startups offer products to tackle this, including Hypatos, which applies deep learning to power a wide range of back-office automation with a focus on industries with heavy financial document processing needs. Flatfile automatically learns how imported data from files should be structured and cleaned, while another vendor, Klarity, aims to replace humans for tasks that require large-scale document review, including accounting order forms, purchase orders and agreements. As with many of its services announced today, Microsoft, evidently, is betting scale will work in its favor. “Syntex uses AI and automation technologies from across Microsoft, including summarization, translation and optical character recognition,” McNulty said. “Many of these services are being made available to Microsoft 365 commercial accounts with no additional upfront licensing under a new pay-as-you-go business model.” Syntex is beginning to roll out today and will continue to roll out in early 2023. Microsoft says it’ll have additional details on service pricing and packaging published on the Microsoft 365 message center and through licensing disclosure documentation in the coming months.
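The “simple rules” McNulty describes — a classified document triggering a transaction, alert or filing action — can be pictured with a toy rules table. This is an illustration only, not the Syntex rules engine or any real API; the rule fields, document types and actions below are entirely made up.

```python
# Toy sketch of content-processor-style rules: "when a document of type X
# arrives, take action Y." Illustrative only; not the Syntex API.

RULES = [
    {"doc_type": "invoice", "action": "file", "target": "Finance/Invoices"},
    {"doc_type": "contract", "action": "alert", "target": "legal-team@example.com"},
]

def process_document(doc: dict) -> str:
    """Return a description of the action triggered for a classified document."""
    for rule in RULES:
        if doc.get("doc_type") == rule["doc_type"]:
            return f"{rule['action']} -> {rule['target']}"
    return "no rule matched"

print(process_document({"doc_type": "invoice", "name": "inv-001.pdf"}))
# file -> Finance/Invoices
```

The value of a product like Syntex lies upstream of a snippet like this: the AI classification that decides a file is an “invoice” in the first place, which the rules then act on.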

Microsoft expands Azure OpenAI Service with DALL-E 2 in preview • ZebethMedia

When Azure OpenAI Service launched in 2021, the service — a part of Azure Cognitive Services — provided enterprise-tailored access to OpenAI’s API through the Azure platform for applications like language translation and text autocompletion. That’s not changing. But after expanding the service in May with fine-tuning features, Microsoft is today introducing invite-only access to DALL-E 2 for select Azure OpenAI Service customers.

Customers can use DALL-E 2 to generate custom images using either text or images. In line with the consumer DALL-E 2 service, they can leverage inpainting and outpainting — capabilities that generate new content within a portion of an image or push an image beyond its original confines, respectively — in addition to a feature that generates variations on an existing image.

Content for podcasts custom-generated by DALL-E 2, through the Azure OpenAI Service. Image Credits: Microsoft

Early adopters include brands like Mattel, which used DALL-E 2 to come up with ideas for a new Hot Wheels model car. German media conglomerate RTL Deutschland, another pilot customer, is considering combining streaming content metadata with DALL-E 2 to generate visuals for podcast episodes and scenes in audiobooks. To prevent misuse, as with Designer and Image Creator, Microsoft says it’s implemented filters to reject DALL-E 2 prompts from Azure OpenAI Service customers that violate content policy. The company also claims it’s integrated techniques to prevent DALL-E 2 from creating images of religious objects and celebrities, plus objects commonly used to try to trick the system into generating sexual or violent content. And Microsoft says it’s added models that remove AI-generated images appearing to contain adult, gore and other types of “inappropriate” content.

Generations from Mattel using DALL-E 2.
Image Credits: Microsoft

“Microsoft is making access available by invitation-only to give us the opportunity to collaborate with customers and create safeguards to prevent harmful uses and unwanted outcomes as customers bring their applications to production,” a Microsoft spokesperson told ZebethMedia via email. “Collaborations with these early customers will help us make sure the responsible AI safeguards are working in practice.”

Beyond DALL-E 2, Microsoft gave a general update on Azure OpenAI Service’s growth since its launch roughly a year ago. Companies using the service now span industries including financial services, insurance and healthcare, the company said, including brands like Accenture, Avanade, Autodesk, BMW Group, CarMax, EY and PwC. Some of the most common use cases include writing assistance, natural language-to-code generation and parsing data to generate insights. For example, PwC is leveraging Azure OpenAI Service to classify various news articles into environment, social and governance topics for benchmarking purposes, while CarMax is using the service to generate new marketing content based on customer reviews.
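For a sense of what a text-to-image request involves, here is a minimal sketch that builds, but does not send, a DALL-E-style generation payload. The parameter names (prompt, n, size) follow OpenAI’s public Images API as of 2022; the Azure OpenAI Service endpoint, authentication and exact parameters may well differ, so treat this as an assumption-laden illustration rather than a working client.

```python
# Build (but don't send) a DALL-E-style image generation request payload.
# Parameter names follow OpenAI's public Images API circa 2022; the Azure
# OpenAI Service specifics may differ. Illustration only.

VALID_SIZES = {"256x256", "512x512", "1024x1024"}

def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Validate inputs and return the JSON-serializable request body."""
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    return {"prompt": prompt, "n": n, "size": size}

payload = build_image_request("A toy car concept in neon colors", n=2)
print(payload["n"], payload["size"])  # 2 1024x1024
```

In a real integration this dict would be POSTed to the service with an API key, and the content filters Microsoft describes would run server-side on both the prompt and the generated images.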

Microsoft brings DALL-E 2 to the masses with Designer and Image Creator • ZebethMedia

Microsoft is making a major investment in DALL-E 2, OpenAI’s AI-powered system that generates images from text, by bringing it to first-party apps and services. During its Ignite conference this week, Microsoft announced that it’s integrating DALL-E 2 with the newly announced Microsoft Designer app and Image Creator tool in Bing and Microsoft Edge. With the advent of DALL-E 2 and open source alternatives like Stable Diffusion in recent years, AI image generators have exploded in popularity. In September, OpenAI said that more than 1.5 million users were actively creating over 2 million images a day with DALL-E 2, including artists, creative directors and authors. Brands such as Stitch Fix, Nestlé and Heinz have piloted DALL-E 2 for ad campaigns and other commercial use cases, while certain architectural firms have used DALL-E 2 and tools akin to it to conceptualize new buildings. “Microsoft and OpenAI have partnered closely since 2019 to accelerate breakthroughs in AI. We have teamed up with OpenAI to develop, test and responsibly scale the latest AI technologies,” Microsoft CVP of modern life, search and devices Liat Ben-Zur told ZebethMedia via email. “Microsoft is the exclusive provider of cloud computing services to OpenAI and is OpenAI’s preferred partner for commercializing new AI technologies. We’ve started to do this through programs like the Azure OpenAI Service and GitHub Copilot, and we’ll continue to explore solutions that harness the power of AI and advanced natural language generation.” Seeking to bring OpenAI’s tech to an even wider audience, Microsoft is launching Designer, a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. 
Designer — whose announcement leaked repeatedly this spring and summer — leverages user-created content and DALL-E 2 to ideate designs, with drop-downs and text boxes for further customization and personalization. Within Designer, users can choose from various templates to get started on designs with specific, defined dimensions for platforms like Instagram, LinkedIn, Facebook ads and Instagram Stories. Prebuilt templates are available from the web, as are shapes, photos, icons and headings that can be added to projects.

Image Creator in Microsoft Edge and Bing.

“Microsoft Designer is powered by AI technology, including DALL-E 2, which means the ability to instantly generate a variety of designs,” Ben-Zur continued. “[It] helps you bring your ideas to life.” Designer will remain free during a limited preview period, Microsoft says — users can sign up starting today. Once the Designer app is generally available, it’ll be included in Microsoft 365 Personal and Family subscriptions and have “some” functionality free to use for non-subscribers, though Microsoft didn’t elaborate.

Another new Microsoft-developed app underpinned by DALL-E 2 is Image Creator, heading to Bing and Edge in the coming weeks. As the name implies, Image Creator — accessed via the Bing Images tab or bing.com/create, or through the Image Creator icon in the sidebar within Edge — generates art given a text prompt by funneling requests to DALL-E 2, acting like a frontend client for OpenAI’s still-in-beta DALL-E 2 service. Typing in a description of something, any additional context, like location or activity, and an art style will yield an image from Image Creator. “Image Creator will soon create images that don’t yet exist, limited only by your imagination,” Ben-Zur added. Unlike Designer, Image Creator in Bing and Edge will be completely free to use, but Microsoft — wary of potential abuse and misuse — says it’ll take a “measured approach” to rolling out the app.
Image Creator will initially only be available in preview for select geographies, which Microsoft says will allow it to gather feedback before expanding the app further.

Microsoft Designer.

Some image-generating systems have been used to create objectionable content, like graphic violence and pornographic, nonconsensual celebrity deepfakes. The organization funding the development of Stable Diffusion, Stability AI, was even the subject of a critical recent letter from U.S. House Representative Anna G. Eshoo (D-CA) to the National Security Advisor (NSA) and the Office of Science and Technology Policy, in which she urged the NSA and OSTP to address the release of “unsafe AI models” that “do not moderate content made on their platforms.” Image-generating AI can also pick up on the biases and toxicities embedded in the millions of images from the web used to train them. OpenAI itself noted in an academic paper that an open source implementation of DALL-E could be trained to make stereotypical associations like generating images of white-passing men in business suits for terms like “CEO,” for example. In response to questions about mitigation measures in Designer and Image Creator, Microsoft noted that OpenAI removed explicit sexual and violent content from the dataset used to train DALL-E 2. But Microsoft also said that it took steps of its own, including deploying filters to limit the generation of images that violate content policy, additional query blocking on sensitive topics and technology to deliver “more diverse” images to results. Users will have to agree to terms of use and the aforementioned content policy to start using Designer and Image Creator with their Microsoft account. If a user requests an image deemed inappropriate by Microsoft’s automated filters, they’ll get a warning. If they repeatedly violate the content policy, they’ll be banned, but have a chance to appeal.
“It’s important, with early technologies like DALL-E 2, to acknowledge that this is new, and we expect it to continue to evolve and improve,” Ben-Zur said. “We take our commitment to responsible AI seriously … We will not allow users to generate violent content, we may distort people’s faces and won’t show text strings used as input.” Addressing some of the legal questions that’ve sprung up recently around AI-powered image generation systems, Microsoft says that users will have “full” usage rights to commercialize the images they create with Designer and Image Creator. (Among other hosts, Getty Images has banned the upload and sale of illustrations generated using DALL-E 2, Stable Diffusion and similar tools, citing fair use concerns about training

Making robots that make robots to take over the world • ZebethMedia

Welcome back to Found, where we get the stories behind the startups. This week Darrell and Jordan talk with Scott Gravelle, the CEO and co-founder of Attabotics, a robotics company that specializes in distribution and supply chain. Scott talks about how he was inspired by leafcutter ants to design a vertical warehouse and create an automated system that was not human-centric but instead functioned as a world that was great for robots. They also spoke about caring for mental health as a founder and developing new leadership skills for a virtual world. If you love live conversations with founders, you’ll love ZebethMedia Disrupt in San Francisco from October 18-20. Use code FOUND for 15% off your ticket. Subscribe to Found to hear more stories from founders each week.

Great, now the AI is coming for your grandma’s recipes as well! • ZebethMedia

We’ve seen AIs create music, pornography and art. The Estonian startup Yummy started off as a meal-kit company, but along the way created an AI that can create and adapt recipes based on your taste and dietary restrictions, complete with AI-generated images of what your dishes might look like.

“Imagine a world where you would not have to spend years of your life on deciding what to eat, search for recipes, research nutritional information and health benefits, follow diets and do grocery shopping,” says co-founder and CEO Martin Salo in an interview with ZebethMedia. “Imagine we solve this complex problem on your behalf, based on your personal preferences — and got it right every time.”

The co-founders started Clean Kitchen together in Estonia back in 2020. The company just raised a round of angel investment to bring meal kits to parts of the world where they aren’t as prevalent as in, say, the U.S. More than just the meal kits, though, the company is carving out a novel slice of the market, making every recipe customizable.

“We’re using generative AI and other cutting-edge technologies to build a fully customizable meal planning and grocery shopping experience that delivers on budget, taste, health and variety, while minimizing food waste,” says the company’s CBO, Karl Paadam. “We’re not thinking in terms of individual store items but instead offering customers personalized outcomes.”

On the Yummy platform, the company wants to make it as easy as writing a DALL-E prompt: “I want to be eating a varied vegetarian diet that will match my taste preferences, my exercise routine and my budget,” for example.

“When we think about the current world of shopping for groceries, it’s all about ingredients or maybe recipes in meal kits, right? You can filter your search or perhaps modify ingredients so you can sort of get what you want, but that takes a fair bit of work,” says Salo. “What if you don’t talk about each ingredient but instead make broader choices?
You could say ‘I want five fish dishes,’ then ‘okay, now make it cheaper,’ or ‘I want this to be a balanced diet.’ Those things all have specific meanings to humans, but figuring it all out by hand would be a lot of work. Figuring out what all the ingredients contain, and if you change one ingredient, it throws everything off balance. If you do your monthly shop, you might actually go through hundreds of items — do you have time to read all of those labels?”

That’s where Yummy throws the AI at the problem, giving users the option to make dishes with variations. The cool thing is that the company’s software doesn’t just generate the new recipes, it also generates the images to go with them. Super cool.

“What makes this experience so powerful is that in a short time, when using the service, we will learn to create an endless amount of recipes that will exactly match all your preferences. Always,” laughs Paadam. “You will never have to think about all the complexities in regard to food ever again.”

The company argues that these features make it possible to eat healthier with specialty ingredients that are in season, or locally available.

“We did some really cool experiments. We’re now opening our meal-kit service in Poland, and we took a couple of our Estonian recipes, and said ‘make them more Polish,’ and suddenly, boom, certain national ingredients appeared in the instructions,” Paadam says. “The more generic ones were replaced with specific, locally available ingredients. This is the magic. We can say ‘make it faster to cook,’ ‘make it sweeter,’ ‘make it low calorie’ or ‘make it low sodium,’ and the AI takes care of it.
You do not need to go read the labels to do all that research.”

The company is backed by a collective of Estonian founders acting as angel investors, raising $3.6 million from investors including Markus Villig of Bolt, Mart Abramov of TaxScouts, Martin Koppel of Fortumo, Thomas Padovani of Adcash, and Marko and Kristel Kruustük of Testlio, as well as Startup WiseGuys, Andreas Mihalovits, Hatcher, DEPO and Exelixis.
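Yummy’s actual system is built on generative AI, but the “make it more Polish” or “make it low sodium” behavior Paadam describes can be pictured as constraint-driven ingredient substitution. The sketch below is a toy, rule-based stand-in for that idea; the substitution tables and ingredient names are entirely made up, not Yummy’s data or API.

```python
# Toy sketch of constraint-driven recipe adaptation. Yummy's real system uses
# generative AI; this only illustrates the idea of applying a named constraint
# ("make it low sodium") as ingredient substitutions. All data is hypothetical.

SUBSTITUTIONS = {
    "more polish": {"cream": "smietana", "sausage": "kielbasa"},
    "low sodium": {"soy sauce": "low-sodium soy sauce", "salt": "herb blend"},
}

def adapt_recipe(ingredients: list[str], constraint: str) -> list[str]:
    """Apply a named constraint's substitutions, leaving other items untouched."""
    rules = SUBSTITUTIONS.get(constraint.lower(), {})
    return [rules.get(item, item) for item in ingredients]

print(adapt_recipe(["potatoes", "cream", "sausage"], "more Polish"))
# ['potatoes', 'smietana', 'kielbasa']
```

The hard part the AI actually handles — keeping nutrition, cost and taste balanced after every swap — is exactly what a fixed lookup table like this cannot do, which is the company’s argument for a learned model.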

Is the RPA market in trouble? • TechCrunch

Automation Anywhere, one of the best-funded RPA providers with over $1 billion in capital raised to date, went the debt route this week, securing a $200 million loan from Silicon Valley Bank, SVB Capital and Hercules Capital. Debt raises aren’t necessarily a bad thing — they’re a useful tool, particularly for companies with high annual recurring revenue — but the magnitude and timing of the Automation Anywhere raise suggest it was more out of necessity than choice.

“This new financing will provide operational capital for the next several years as Automation Anywhere continues to advance its cloud-native automation platform,” CEO Mihir Shukla told TechCrunch via email. “We’re using AI and intelligent automation to design tech that’s accessible to everyone — all kinds of business leaders, managers and citizen developers.”

While Shukla insists Automation Anywhere’s business is robust, with a customer base of around 5,000 and “over 50% revenue growth,” the RPA market has long faced headwinds as investors increasingly express skepticism that the technology, which automates repetitive software tasks at enterprise scale, can deliver on its many promises. PitchBook notes that shares of UiPath — Automation Anywhere’s main rival, which went public in April 2021 — plummeted 71% this year. Meanwhile, another large player, Blue Prism, last September agreed to sell itself to Vista Equity Partners for £1.095 billion (about $1.5 billion).

Gartner predicts that while the RPA market will reach $2.9 billion by the beginning of 2023, the growth rate will end substantially lower than it was in 2021, when the segment expanded by 30.9% compared to the year prior. Assuming the $2.9 billion figure comes to pass, it’d translate to 19.5% growth between 2021 and 2022.
