Drones in cities are a bad idea • ZebethMedia

It’s year five, or maybe ten, of “drones are going to revolutionize transport,” and so far we’ve got very little to show for it. Maybe it’s time to put these foolish ambitions to rest and focus on where this technology could actually do some good, rather than pad out a billionaire’s bottom line or let the rich skip traffic.

The promise of drone deliveries, drone taxis, and personal drone attendants has never sat, or rather floated, right with me. There’s so little to be gained, while braving so much liability and danger, and necessitating so much invention and testing. Why is anyone even pursuing this?

I suspect it is the Jetsons-esque technotopianism instilled in so many of us from birth: It’s only a matter of time and effort before we have the flying cars, subliminal learning pillows, and robot housekeepers we deserve, right? It feels like, because we have things that fly and things that can navigate autonomously, we should be able to put those things together and make delivery drones and air taxis. We just have to wait for the right genius kid building the future out of their garage, with the help of your friendly neighborhood VCs.

Of course it’s not quite that easy. And although the Jetsons mentality explains our acceptance of the development of these technologies — unlike others that we disapprove of for their impracticability, cost, or ethics — it doesn’t really explain why a company like Amazon is spending hundreds of millions of dollars to pursue it.

The answer there, fortunately, is as clear as why Amazon does anything. To paraphrase Dr. Johnson: “Sir, no man but a blockhead ever [spent a decade trying to build an autonomous drone delivery network], except for money.” That’s certainly the case with drone delivery. Amazon has made no secret of its intention to take over the logistics and delivery industry bite by bite, partly through sideways subsidy from other parts of its lucrative, mutually buttressed businesses, and partly with a punishing franchise model that offloads risk and liability onto contractors. Either way, the end goal is, as in its warehouses, to replace those flesh-and-blood workers with tireless automatons. The best evidence for this is that Amazon’s warehouses already treat workers as if they were components in a machine, so it’s just a matter of swapping out a worn-out part for another, more reliable part that doesn’t try to unionize. Same with delivery.

High hopes

Image Credits: Amazon

But in the last-mile world, drones are kind of a funny idea. Certainly the idea has its merits: Many packages are small and light, and a drone could skip traffic or travel in a straight line over residential blocks to cut hours off delivery times. But that’s before you reckon with any of the actual needs or restrictions of the logistics world.

To begin with, drones wouldn’t even cover the last mile — more like the last few hundred meters. Part of the reason for this is regulatory; it’s extremely unlikely that Amazon could procure a permit to fly its drones over all the private property in a city. The liability is just too damn high. Sure, you can do some sweetheart test markets in a random suburb, but good luck convincing urban areas to let commercial drones infest their skies at all hours. So what are they going to do, fly along the streets? High enough that they don’t hit any wires or trees? Carrying a 1-pound package? Only at certain hours? It isn’t particularly efficient!
And then, the first time one of those packages or drones drops out of the sky and cracks a windshield next to a grade school, those drones are done in that city, and probably every other city. Done!

Even if they could guarantee no accidents, no one wants these things flying around their neighborhood. The best-case scenario is: fucking annoying. Drones are pretty loud, and it’s not even the kind of loud you can get used to, like the dull roar of a freeway a few blocks off. No, drones make the most annoying sound in the world short of Jeff Bezos’s laugh. Small ones, big ones, they all sound horrible. There are advances to be brought to bear here, but really, when you have four to eight little rotors spinning at however many thousand RPM and moving enough air downward to lift a couple dozen pounds of body and payload, you tend to create a certain amount of truly obnoxious noise. That’s just the physics of the thing. If we could make helicopters quiet, we would have done so by now.

Even if we allow these drones dominion over the air and let them fly with impunity, they’re laughably limited. Where do packages go normally? In a big clearing in your building’s courtyard? On the roof? No, they go to the lobby, which locks, or perhaps in a parcel box… which locks. As commerce has moved online, parcel delivery has skyrocketed, and so has parcel theft. Imagine if a package made a really loud whining noise wherever it went, then was guaranteed to be left out in the open somewhere. It’s a really frictionless experience for the criminals, at least.

Image Credits: Walmart

A drone can’t ring a doorbell or buzz your apartment (unless you hook it into your smart home infrastructure — best of luck with that). It doesn’t have a key to the lobby. It can’t ask you for a signature. Cities are diverse and complex physical environments with a wide variety of obstacles, methods, and requirements for making a package go from here to there in a safe and satisfactory way. We haven’t figured out how any robot can successfully deliver something without the recipient coming out to get it immediately, and doing it from the sky is even harder. Air-dropping is one of the worst possible ways (outside of combat) to deliver anything, only slightly better than yeeting it over the fence — admittedly common,

Trigo raises $100M to expand its Amazon-style cashier-free store technology • ZebethMedia

Amazon has become the pacesetter in commerce, and today a startup that’s been building technology to help retailers keep up with it in the world of physical stores is announcing some funding to expand its business. Trigo, an Israeli startup that builds technology for stores to operate cashier-free, “just walk out” experiences similar to those you might find in Amazon Go stores, has raised $100 million.

Trigo focuses on grocery shopping, and it already has a high-profile list of grocery retailers on its books, including Tesco, the UK-based supermarket giant; Germany’s REWE; ALDI Nord in the Netherlands; Netto in Munich; Shufersal in Israel; and the Wakefern cooperative in the U.S. The plan is to use the funding to expand its engagement with these, and to add more to the roster, amid a strong slate of competition in the market. Others in the same category include Standard Cognition (last year valued at over $1 billion), Shopic, Caper, Zippin and Grabango, to name a few.

It will also be doubling down on expanding its technology. Alongside its autonomous checkout system based on hardware and software, Trigo also provides inventory management and will soon be launching “StoreOS” to bring these together with other tools (analytics, marketing and more) to help physical retailers better link their brick-and-mortar stores with their online operations — and, thanks to the popularity of e-commerce, with what customers now generally expect out of any shopping experience.

Singapore’s Temasek and 83North are co-leading this round, with new backer SAP and previous backers Hetz Ventures, Red Dot Capital Partners, Vertex Ventures, Viola and REWE also participating. The startup is not disclosing its valuation, but according to PitchBook its last valuation, in 2020, was in the region of $208 million. This latest round brings the total raised to almost $300 million.

Computer vision, machine learning and other innovations in artificial intelligence are being put to use in earnest in autonomous systems across a range of industries these days, and supermarkets have been one of the more interesting applications. Faced with an onslaught of offerings to buy groceries online and have them delivered in ever-shorter turnaround times, retailers have largely left the in-store experience in stasis. In-store operations, however, also carry a large amount of inefficient overhead due to real estate and building costs, the rotation of products, theft and the cost of maintaining a staff to serve customers. The argument for bringing autonomous systems into the grocery store is not one of technology for technology’s sake, but that it will help reduce costs and losses in all of these areas, while speeding up the experience for customers usually in a hurry to do something else.

Trigo’s self-checkout solution, called “EasyOut,” is based around a series of overhead cameras, shelf sensors and algorithms that work with “digital twins” of stores to operate cashier-free experiences. Some believe that this is a costly approach, both in terms of initial installation and maintenance, and argue that other approaches, such as systems based on sensors mounted on the shopping carts themselves, are better.
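Trigo hasn’t published EasyOut’s internals, but the “digital twin” idea the company describes (a live software mirror of shelves and shoppers, reusable for checkout and for inventory alike) can be sketched in a few lines. Everything below, from the class to the event fields, is hypothetical and for illustration only:

```python
# Hypothetical sketch of a "digital twin" checkout flow: a live store model
# updated by camera and shelf-sensor events, from which each shopper's basket
# is inferred. Names and event shapes are invented; this is not Trigo's API.
from collections import defaultdict

class StoreTwin:
    def __init__(self, shelf_stock):
        # shelf_id -> {sku: count}, mirroring the physical shelves
        self.shelf_stock = {s: dict(skus) for s, skus in shelf_stock.items()}
        self.baskets = defaultdict(lambda: defaultdict(int))  # shopper -> sku -> qty

    def on_event(self, event):
        """Fuse one sensor event into the twin.

        event = {"shopper": id, "shelf": id, "sku": str, "delta": -1 or +1},
        where delta comes from shelf sensors and the shopper/sku attribution
        comes from overhead-camera tracking.
        """
        shelf, sku, d = event["shelf"], event["sku"], event["delta"]
        self.shelf_stock[shelf][sku] += d          # shelf lost or regained an item
        self.baskets[event["shopper"]][sku] -= d   # basket mirrors the shelf change

    def checkout(self, shopper, prices):
        basket = self.baskets.pop(shopper, {})
        return sum(prices[sku] * qty for sku, qty in basket.items() if qty > 0)

twin = StoreTwin({"A1": {"milk": 10}})
twin.on_event({"shopper": "s1", "shelf": "A1", "sku": "milk", "delta": -1})
print(twin.checkout("s1", {"milk": 1.50}))  # 1.5, billed on walk-out
```

Note that the same state that bills the shopper also tracks shelf stock in real time, which is the property Gabay points to below when he argues the twin can be “repurposed” for predictive inventory management.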
“Smart counters and smart carts have their place, but full-store frictionless checkout based on AI-powered cameras and sensors — where the costs of the hardware are decreasing over time — is superior in both the experience it provides shoppers and for the efficiencies and tools it enables retailers,” CEO and co-founder Michael Gabay said in an email to ZebethMedia. One of the issues is that carts don’t account for shoppers who are only buying a couple of hand-held items, he said. “Frictionless checkout makes shopping seamless for everyone, regardless of the size of their basket or how they plan to shop. If you have a full shopping cart you don’t want to wait at the cashier or scan all of those items at self checkout, you just want to walk out regardless of the size of your shop.”

He also believes that the “digital twin” approach Trigo uses, which mirrors the store in real time, is more accurate and can be repurposed for more than just checkout, such as predictive inventory management. “Smart carts and similar technologies don’t allow for the full digitization of the store, so they are limited solutions when compared with the full system,” he said.

Gabay claimed that even the current market climate — where the bigger issue for stores and their shoppers is inflation and worry over the prices of goods, not how long it takes to buy them — has not really dampened conversations with customers. “Especially in periods of high inflation, rising prices, and supply chain disruptions, the value of managing the inventory and procurement is huge,” he said. The company does not disclose how much it costs to, say, equip an average supermarket with its technology, but it says customers typically see a return on the investment within 18 months. “Tech-enabled cost savings accumulate over time and boost grocery retailers’ margins,” he said.

While one argument for Trigo is that its tech can be used for all shopping, no matter the cart size, its focus right now, Gabay said, is large-format supermarkets. To date, it has opened stores of between 3,000 and 5,000 square feet — “on-the-go” type stores, Gabay said — but “we are now working on larger formats, including more than 10,000 square feet stores.” While the grocery sector will remain the company’s focus precisely because of its specific inefficiencies, the longer-term plan is to expand to other categories of retail such as pharmacies and quick-service restaurants. “But we see huge potential to retrofit thousands of existing grocery stores worldwide,” Gabay said. “This is accelerating also as grocers increasingly connect their e-commerce shops to their physical stores.”

This is precisely where SAP comes into the picture. Described as a strategic backer in this round, it works with its own long list of retailer customers, and the plan is to help integrate Trigo into those systems. “Trigo’s superior computer vision technology built

AI saving whales, steadying gaits and banishing traffic • ZebethMedia

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

Over the past few weeks, researchers at MIT have detailed their work on a system to track the progression of Parkinson’s patients by continuously monitoring their gait speed. Elsewhere, Whale Safe, a project spearheaded by the Benioff Ocean Science Laboratory and partners, launched buoys equipped with AI-powered sensors in an experiment to prevent ships from striking whales. Other aspects of ecology and academics also saw advances powered by machine learning.

The MIT Parkinson’s-tracking effort aims to help clinicians overcome challenges in treating the estimated 10 million people afflicted by the disease globally. Typically, Parkinson’s patients’ motor skills and cognitive functions are evaluated during clinical visits, but these can be skewed by outside factors like tiredness. Add to that the fact that commuting to an office is too overwhelming a prospect for many patients, and their situation grows starker.

As an alternative, the MIT team proposes an at-home device that gathers data using radio signals reflecting off of a patient’s body as they move around their home. About the size of a Wi-Fi router, the device, which runs all day, uses an algorithm to pick out the patient’s signals even when there are other people moving around the room. In a study published in the journal Science Translational Medicine, the MIT researchers showed that their device was able to effectively track Parkinson’s progression and severity across dozens of participants during a pilot study. For instance, they showed that gait speed declined almost twice as fast for people with Parkinson’s compared to those without, and that daily fluctuations in a patient’s walking speed corresponded with how well they were responding to their medication (a toy illustration of that trend analysis appears at the end of this item).

Moving from healthcare to the plight of whales, the Whale Safe project — whose stated mission is to “utilize best-in-class technology with best-practice conservation strategies to create a solution to reduce risk to whales” — in late September deployed buoys equipped with onboard computers that can record whale sounds using an underwater microphone. An AI system detects the sounds of particular species and relays the results to a researcher, so that the location of the animal — or animals — can be calculated by corroborating the data with water conditions and local records of whale sightings. The whales’ locations are then communicated to nearby ships so they can reroute as necessary.

Collisions with ships are a major cause of death for whales — many species of which are endangered. According to research carried out by the nonprofit Friend of the Sea, ship strikes kill more than 20,000 whales every year. That’s destructive to local ecosystems, as whales play a significant role in capturing carbon from the atmosphere: A single great whale can sequester around 33 tons of carbon dioxide on average.

Image Credits: Benioff Ocean Science Laboratory

Whale Safe currently has buoys deployed in the Santa Barbara Channel near the ports of Los Angeles and Long Beach. In the future, the project aims to install buoys in other American coastal areas including Seattle, Vancouver, and San Diego.
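Returning to the MIT gait study for a moment: The paper’s actual models aren’t reproduced here, but the headline comparison (how fast gait speed declines in each group) boils down to fitting a slope to each person’s daily walking-speed series. A toy version on synthetic data, assuming simple linear trends:

```python
# Toy illustration (not MIT's pipeline): estimate gait-speed decline as the
# slope of a least-squares fit over daily at-home measurements, then compare
# groups. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)

def daily_gait_speed(start, decline_per_year, noise=0.03):
    # metres/second: a downward drift over one year plus day-to-day noise
    return start + decline_per_year * days / 365 + rng.normal(0, noise, days.size)

parkinsons = daily_gait_speed(1.00, -0.12)  # faster decline
control    = daily_gait_speed(1.20, -0.06)

def annual_decline(series):
    slope_per_day = np.polyfit(days, series, 1)[0]  # least-squares slope
    return slope_per_day * 365

print(f"Parkinson's: {annual_decline(parkinsons):+.3f} m/s per year")
print(f"Control:     {annual_decline(control):+.3f} m/s per year")
# Expect roughly -0.12 vs -0.06, i.e. about twice as fast a decline.
```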
Conserving forests is another area where technology is being brought into play. Surveys of forest land from above using lidar are helpful in estimating growth and other metrics, but the data they produce aren’t always easy to read. Point clouds from lidar are just undifferentiated height and distance maps — the forest is one big surface, not a bunch of individual trees, which still tend to have to be counted by humans on the ground.

Purdue researchers have built an algorithm (not quite AI, but we’ll allow it this time) that turns a big lump of 3D lidar data into individually segmented trees, allowing not just canopy and growth data to be collected but a good estimate of the actual trees present. It does this by calculating the most efficient path from a given point to the ground, essentially the reverse of the route nutrients would take up a tree (see the sketch at the end of this item). The results are quite accurate (after being checked against an in-person inventory) and could contribute to far better tracking of forests and resources in the future.

Self-driving cars are appearing on our streets with more frequency these days, even if they’re still basically just beta tests. As their numbers grow, how should policymakers and civic engineers accommodate them? Carnegie Mellon researchers put together a policy brief that makes a few interesting arguments.

[Diagram: collaborative decision-making, in which a few cars opt for a longer route, actually makes the trip faster for most.]

The key difference, they argue, is that autonomous vehicles drive “altruistically,” which is to say they deliberately accommodate other drivers — by, say, always allowing other drivers to merge ahead of them. This type of behavior can be taken advantage of, but at a policy level it should be rewarded, they argue, and AVs should be given access to things like toll roads and HOV and bus lanes, since they won’t use them “selfishly.” They also recommend that planning agencies take a real zoomed-out view when making decisions, involving other transportation types like bikes and scooters and looking at how inter-AV and inter-fleet communication should be required or augmented. You can read the full 23-page report here (PDF).

Turning from traffic to translation, Meta this past week announced a new system, Universal Speech Translator, that’s designed to interpret unwritten languages like Hokkien. As an Engadget piece on the system notes, thousands of spoken languages don’t have a written component, posing a problem for most machine learning translation systems, which typically need to convert speech to written words before translating to the new language and converting the text back to speech. To get around the lack of labeled examples of written language, Universal Speech Translator converts speech into “acoustic units”
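Back to the Purdue tree-segmentation item: The paper’s exact method isn’t reproduced here, but the core idea (route every lidar point downhill to the ground, then group points that reach the same spot) can be sketched on a toy 2D point cloud. The step size, thresholds and data below are all invented for illustration:

```python
# Rough sketch, not the Purdue algorithm itself: walk each lidar point down
# through its nearest lower neighbor until it reaches the ground, the reverse
# of a nutrient's path up the trunk. Points whose walks end near the same
# spot are grouped as one tree. Synthetic 2D (x, height) cloud for brevity.
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "trees": points scattered around trunks at x = 0 and x = 5.
trunks = [0.0, 5.0]
pts = np.concatenate([
    np.column_stack([t + rng.normal(0, 0.6, 60), rng.uniform(0, 8, 60)])
    for t in trunks
])  # columns: x position, height z

def descend_to_ground(i, pts, step=1.5):
    """Follow nearest lower neighbors until no lower point is within reach."""
    x, z = pts[i]
    while True:
        d = np.hypot(pts[:, 0] - x, pts[:, 1] - z)
        lower = (pts[:, 1] < z) & (d < step)
        if not lower.any():
            return x  # ground contact, roughly the trunk location
        j = np.flatnonzero(lower)[np.argmin(d[lower])]
        x, z = pts[j]

roots = np.array([descend_to_ground(i, pts) for i in range(len(pts))])

# Cluster the landing spots: a large gap between sorted roots separates trees.
order = np.argsort(roots)
labels = np.empty(len(pts), int)
labels[order] = np.concatenate([[0], np.cumsum(np.diff(roots[order]) > 2.5)])
print(np.bincount(labels))  # roughly 60 points per segmented tree
```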

AI chip startup Axelera lands $27M in capital to commercialize its hardware • ZebethMedia

Several years ago, Fabrizio Del Maffeo and a core team from Imec, a Belgium-based nanotechnology lab, teamed up with Evangelos Eleftheriou and a group of researchers at IBM’s Zurich lab to develop a computer chip. Unlike conventional chips, theirs was destined for devices at the edge, particularly those running AI workloads, because Del Maffeo and the rest of the team perceived that most offline, at-the-edge computing hardware was inefficient and expensive.

After incubating a startup — Axelera AI — to commercialize their chip technology within the blockchain company Bitfury Group, Del Maffeo and team secured capital from VCs including Imec’s venture arm, Imec.xpand. Innovation Industries led a $27 million Series A in Axelera AI that closed this week with participation from Imec.xpand and the Federal Holding and Investment Company of Belgium. In addition, the Netherlands Enterprise Agency awarded Axelera AI a $6.7 million loan commissioned by the Ministry of Economic Affairs and Climate Policy.

“One of the main challenges we’re facing is the availability and accessibility of AI to different groups and industries. Some businesses would benefit from AI, but don’t have the knowledge or technical expertise to integrate its benefits into day-to-day operations,” Del Maffeo told ZebethMedia in an email interview. “We’re engineering the AI platform to help overcome this access barrier … [by] delivering a game-changing, user-friendly and scalable technology with superior performance and efficiency at a fraction of the cost of existing players to accelerate computing vision and natural language processing at the edge.”

Axelera is working to develop AI acceleration cards and systems for use cases like security, retail and robotics that it plans to sell through partners in the business-to-business edge computing and Internet of Things sectors. The cards and systems pack Axelera’s Thetis Core chip, which employs in-memory computing for AI computations — “in-memory” referring to running calculations in RAM to reduce the latency introduced by storage devices (a toy illustration of that gap appears at the end of this article). Axelera is also creating software to manage its chip, which Del Maffeo claims will be “fully integrated” with leading AI frameworks — e.g., PyTorch and TensorFlow — when it’s made available.

Axelera’s test chip for accelerating AI and machine learning workloads. Image Credits: Axelera

“We’re democratizing AI access,” Del Maffeo said. “When our product launches … we aim to [deliver] a chip that packs the power of an entire AI server.”

Axelera has a ways to go before it reaches commercialization, however. The company only last December produced its very first testbench chip, and Axelera doesn’t expect to begin delivering to customers until sometime during the first half of 2023. It’s also not the first company pursuing an in-memory architecture for edge devices. NeuroBlade, which last October raised $83 million, is developing chips that combine compute and memory in a single hardware block for data processing. MemVerge, GigaSpaces, Hazelcast and H2O.ai also offer in-memory solutions for AI, data analytics and machine learning applications.

But despite the fact that Axelera is both pre-market and pre-revenue, and is considering venture debt rounds moving forward, Del Maffeo believes the company is well positioned to gain a foothold in the market for custom AI chips. He notes that Jonathan Ballon, the former VP and general manager of Intel’s edge AI and Internet of Things group, is joining Axelera as chairman.
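To make the article’s gloss concrete, here is a toy timing comparison between reloading model weights from storage on every inference and keeping them resident in memory. Note the caveat: Axelera’s chip computes inside the memory arrays themselves, a much deeper optimization than anything host-side Python can demonstrate, and the file name and sizes below are invented:

```python
# Toy contrast between storage-bound and memory-resident inference. This only
# illustrates the latency gap the article alludes to; true in-memory computing
# performs the arithmetic inside the memory arrays, not merely "in RAM".
import time
import numpy as np

weights = np.random.rand(1024, 1024).astype(np.float32)
np.save("weights.npy", weights)  # illustrative file, ~4 MB
x = np.random.rand(1024).astype(np.float32)

def infer_from_storage():
    w = np.load("weights.npy")   # fetch weights from disk on every call
    return w @ x

resident = np.load("weights.npy")  # loaded once, kept in RAM

def infer_in_memory():
    return resident @ x

for fn in (infer_from_storage, infer_in_memory):
    t0 = time.perf_counter()
    for _ in range(100):
        fn()
    # total seconds / 100 calls, converted to milliseconds per call
    print(f"{fn.__name__}: {(time.perf_counter() - t0) * 10:.3f} ms/call")
```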
And Del Maffeo points out that Axelera continues to hire aggressively, with close to 85 employees based in Europe, working remotely and across the company’s offices in Eindhoven and Milan and its R&D centers in Leuven and Zurich. Following an expansion into the U.S. and Taiwan within the coming months, Del Maffeo expects Axelera to enter early 2023 with 130 to 140 employees.

“While the pandemic led to several shortages in the chip industry, we’ve been fortunate to see significant growth since we launched in 2021,” Del Maffeo said. “While we don’t disclose our burn rate, we can share that we’re well-positioned to raise a new investment round in 2023, significantly larger than our Series A, and we’re already receiving interest, including from more American investors, to help us bring the company to the next stage … We’re carefully building our customer base and partner ecosystem with a curated cohort of companies who have already shown strong interest in our AI platform. Later this year, we’ll also open a unique collaboration opportunity for leading companies to become early adopters of our AI platform.”

Assuming Axelera can deliver on its promises, it stands to make some serious cash. The edge AI hardware market is projected to grow from 920 million units in 2021 to 2.08 billion units by 2026 — a lucrative uptick. According to one estimate, the AI chips market alone is poised to be worth $73.49 billion by 2025.

Shutterstock to integrate OpenAI’s DALL-E 2 and launch fund for contributor artists • ZebethMedia

Stock image giant Shutterstock announced a major push into AI-generated imagery today in partnership with OpenAI, expanding on a strategic tie-up the pair announced last year. The partnership will see OpenAI’s DALL-E 2 image-generating AI system integrated with Shutterstock content and made available to Shutterstock users worldwide, with the integration slated to launch “in the coming months.”

AI-generated imagery refers to machine learning technology that’s been trained on visual data so it can respond to text-based prompts with a picture reflecting the description. The quality of the results can vary wildly, but these AI systems have been coming on in leaps and bounds lately — causing equal parts awe and anger, with many tech users celebrating the ‘democratization’ of art, while visual artists, whose work may have been appropriated as training fodder for these AIs, can be left feeling ripped off.

Unsurprisingly, given these sensitivities, Shutterstock’s push into generative AI is being framed as an “ethical” action plan — which includes the launch of a fund to “compensate artists for their contributions,” as its press release puts it. It also says it will be focusing R&D efforts on “gathering and publishing insights related to AI-generated content,” with the goal of positioning itself “at the forefront of the emerging technology.”

So, er, RIP stock photographers? Or will the work of stock photographers end up being largely redirected toward capturing training data for honing AI models? (‘AI doesn’t kill jobs, it changes them’ is the usual mantra applied to the rise of automation — albeit, oftentimes AI replaces lots of jobs with fewer, more specialist jobs, so the ratio of winners to losers isn’t necessarily equal, nor is the wealth typically equally distributed…)

Shutterstock says contributors will be “compensated” for the role their content played in the development of this technology — which raises plenty of questions: How will contributors be identified, and how much will they be paid? How will their contribution be quantified, exactly? How will they know whether they’re getting fair payment for their contribution? Who will audit these compensation frameworks? And, er, where was the consent from artists to becoming contributors to these AI systems in the first place?

“Shutterstock believes that AI-generated content is the cumulative effort of its contributing artists. In an effort to create a new industry standard and unlock new revenue streams for the Company’s artist community, Shutterstock has also created the framework to provide additional compensation for artists whose works have contributed to develop the AI models,” Shutterstock writes — dubbing its framework “ethical and equitable” and saying it will also “aim” to compensate contributors (“in the form of royalties”) when their intellectual property is used.

Commenting in a statement, Paul Hennessy, Shutterstock’s CEO, added: “The mediums to express creativity are constantly evolving and expanding. We recognize that it is our great responsibility to embrace this evolution and to ensure that the generative technology that drives innovation is grounded in ethical practices. We have a long history of integrating AI into every part of our business. This expert-level competency makes Shutterstock the ideal partner to help our creative community navigate this new technology.
And we’re committed to developing best practices and experiences to deliver on our purpose, which is to empower the world to create with confidence.”

“The data we licensed from Shutterstock was critical to the training of DALL-E,” confirmed Sam Altman, OpenAI’s CEO, in another supporting statement, before adding: “We’re excited for Shutterstock to offer DALL-E images to its customers as one of the first deployments through our API, and we look forward to future collaborations as artificial intelligence becomes an integral part of artists’ creative workflows.”

Shutterstock is operating a wait list for access to the forthcoming integration of its content with OpenAI’s DALL-E 2 image generator — the list is available on its website.
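Shutterstock hasn’t shown what its own integration will look like, but the underlying image API that Altman references is public. A minimal sketch using the pre-1.0 openai-python client (the prompt and key below are placeholders, not anything from the Shutterstock deal):

```python
# Minimal sketch of generating an image via OpenAI's image API, the layer
# Shutterstock's integration is said to sit on top of. Uses the pre-1.0
# openai-python client; prompt and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="a lighthouse on a cliff at sunset, photorealistic",
    n=1,                 # number of images to generate
    size="1024x1024",    # supported sizes include 256x256, 512x512, 1024x1024
)
print(response["data"][0]["url"])  # temporary URL to the generated image
```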

China’s XPeng wants to launch robotaxi network using G9 SUV • ZebethMedia

Chinese luxury EV startup XPeng is moving forward on its plans to launch a robotaxi business. The company’s latest G9 SUV became China’s first mass-produced commercial vehicle to pass a government-led autonomous driving closed-field test, the company said Monday at its fourth annual 1024 Tech Day.

When XPeng unveiled the G9 in September, the company said it would come equipped with XPeng’s new advanced driver assistance system (ADAS), the XNGP, which combines XPeng’s Highway Navigation Guided Pilot (NGP) and City NGP to automate certain driving functions in both highway and urban driving scenarios. Now, XPeng says the XNGP is good enough to lay the groundwork for a robotaxi network, and the G9 can help that network scale, according to XPeng’s vice president of autonomous driving, Dr. Xinzhou Wu.

“Obtaining the road test permit by our mass-produced commercial vehicles — with no retrofit — is a major achievement,” said Wu at XPeng’s Tech Day. “Our platform-based robotaxi development aims to generate significant cost benefits, and ensure product quality, safety and user experience.”

Image Credits: XPeng

XPeng attributes its advances in autonomy to its next-generation visual perception architecture, XNet, which adopts an in-house developed deep neural network that delivers visual recognition with “human-like decision-making capabilities, drawing from multiple cameras’ data,” according to the company. XPeng says the neural network technology overrides manual processing logic to constantly self-improve.

XNet is backed by Fuyao, a Chinese supercomputing center for autonomous driving, and supported by Alibaba Cloud’s intelligent computing platform, XPeng said. This helps XNet reach a supercomputing capability of 600 PFLOPS, which the company says increases the training efficiency of its AV stack by over 600 times. This is a bold claim, one that posits model training can be reduced to 11 hours rather than the 276 days it took previously. (The arithmetic is at least internally consistent: 276 days is about 6,600 hours, and 6,600 divided by 600 is roughly 11.)

XPeng says the upgrades to its AV stack have allowed the company to establish an entirely closed-loop autonomous driving data system — from data collection and labeling to training and deployment — that has been able to resolve over 1,000 edge cases each year and reduce the incident rate for Highway NGP by 95%.

The robotaxis will also feature XPeng’s new AI-powered voice assistant, according to He Xiaopeng, co-founder and CEO of XPeng. The voice system incorporates Multiple Input Multiple Output (MIMO) multi-zone technology to recognize commands from every passenger in the cabin and understand various instructions across multiple streams of conversation. XPeng says its voice assist tech doesn’t need an internet connection or an activation command (like “Hey Siri”), is accurate 96% of the time and responds in less than one second. XPeng will make the new voice assist technology standard on all new vehicles in China, the company said.

XPeng’s robot pony and eVTOLs

Image Credits: XPeng

At XPeng’s Tech Day, the company also provided updates on its robot pony and flying car. Let’s start with the pony. It’s certainly cuter-looking than Tesla’s humanoid robot, but no less fantastical to imagine going to market anytime soon. Regardless, XPeng shared some design upgrades to support “multi-degree-of-freedom” motion and locomotion capabilities that might get it closer to moving more naturally. This will help the robot adapt better to “complex indoor and outdoor terrain conditions such as stairs, steep slopes and gravel roads,” according to XPeng.
Image Credits: XPeng

XPeng also revealed an upgraded design for its electric vertical take-off and landing (eVTOL) flying car, which is being developed by affiliate XPeng Aeroht. When XPeng first unveiled its flying car concept, it had a horizontal dual-rotor structure; this year’s design features a new distributed multi-rotor configuration. The test vehicle successfully completed its maiden flight and multiple single-motor failure tests, XPeng said Monday. XPeng also provided some more information on how a driver would go from controlling a car to a flying car: In flight mode, the car will be piloted using the steering wheel, while the gear lever will be used to move forward and backward, make turns, ascend, hover and descend, the company claims.

Microsoft’s Windows Dev Kit 2023 lets developers tap AI processors on laptops • ZebethMedia

At its Build conference in May, Microsoft debuted Project Volterra, a device powered by Qualcomm’s Snapdragon platform designed to let developers explore “AI scenarios” via Qualcomm’s Neural Processing SDK for Windows toolkit. Today, Volterra — now called Windows Dev Kit 2023 — officially goes on sale, priced at $599 and available from the Microsoft Store in Australia, Canada, China, France, Germany, Japan, the U.K. and the U.S.

Here’s how Microsoft describes it: “With Windows Dev Kit 2023, developers will be able to bring their entire app development process onto one compact device, giving them everything they need to build Windows apps for Arm, on Arm.”

As previously announced, the Windows Dev Kit 2023 contains a dedicated AI processor, called the Hexagon processor, complemented by an Arm-based chip — the Snapdragon 8cx Gen 3 — both supplied by Qualcomm. It enables developers to build Arm-native and AI-powered apps with tools such as Visual Studio (version 17.4 runs natively on Arm), .NET 7 (which has Arm-specific performance improvements), VS Code, Microsoft Office and Teams, and machine learning frameworks including PyTorch and TensorFlow.

Microsoft’s Windows Dev Kit 2023, which packs an Arm processor plus an AI accelerator chip. Image Credits: Microsoft

Here’s the full list of specs:

32GB LPDDR4x RAM
512GB fast NVMe storage
Snapdragon 8cx Gen 3 compute platform
RJ45 for Ethernet
3 x USB-A ports
2 x USB-C ports
Mini DisplayPort (supports up to three external monitors, including two at 4K 60Hz)
Bluetooth 5.1 and Wi-Fi 6

The Windows Dev Kit 2023 arrives alongside support in Windows for neural processing units (NPUs), or dedicated chips tailored for AI- and machine learning-specific workloads. Dedicated AI chips, which speed up AI processing while reducing the impact on battery life, have become common in mobile devices like smartphones. But as apps such as AI-powered image upscalers and image generators come into wider use, manufacturers have been adding such chips to their laptops (see Microsoft’s own Surface Pro X and 5G Surface Pro 9).

The Windows Dev Kit 2023 taps the recently released Qualcomm Neural Processing SDK for Windows, which provides tools for converting and executing AI models on Snapdragon-based Windows devices, in addition to APIs for targeting distinct processor cores with different power and performance profiles. Using the kit and the SDK, developers can execute, debug and analyze the performance of deep neural networks on Windows devices with Snapdragon hardware, as well as integrate the networks into apps and other code (a sketch of an analogous workflow appears below).

The tooling benefits laptops built on the Snapdragon 8cx Gen 3 system-on-chip, like the Acer Spin 7 and Lenovo ThinkPad X13s. Engineered to compete against Apple’s Arm-based silicon, the Snapdragon 8cx Gen 3’s AI accelerator can be used to apply AI processing to photos and video. Microsoft and Qualcomm are betting the use cases expand with the launch of the Windows Dev Kit 2023; Microsoft, for its part, has started to leverage AI accelerators in Windows 11 to power features like background noise removal.
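Qualcomm’s own SDK defines the real API for this, and it isn’t reproduced here. As a stand-in, the sketch below uses ONNX Runtime, a real library with an analogous shape of workflow: export a trained model once, then choose which hardware backend (“execution provider”) runs it. The model path is a placeholder:

```python
# Analogous workflow using ONNX Runtime (a stand-in, not Qualcomm's SDK):
# load a converted model, pick a hardware backend, run inference.
# Requires: pip install onnxruntime (or onnxruntime-directml for DirectML).
import numpy as np
import onnxruntime as ort

# Prefer DirectML (which can reach GPUs/accelerators on Windows) when it's
# available, falling back to the CPU otherwise.
preferred = ("DmlExecutionProvider", "CPUExecutionProvider")
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession(
    "classifier.onnx",  # placeholder path to a model exported from PyTorch/TF
    providers=providers,
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```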
Image Credits: Microsoft

In a blog post shared with ZebethMedia ahead of today’s announcement, Microsoft notes that developers will “need to install the toolchain as needed for their workloads on Windows Dev Kit 2023” and that some tools and services “may require additional licenses, fees or both.”

“More apps, tools, frameworks and packages are being ported to natively target Windows on Arm and will be arriving over the coming months,” the post continues. “In the meantime, thanks to Windows 11’s powerful emulation technology, developers will be able to run many unmodified x64 and x86 apps and tools on their Windows Dev Kit.”

It remains to be seen whether the Windows Dev Kit reverses the fortunes of Windows on Arm devices, which have largely failed to take off. Historically, they’ve been less powerful than Intel-based devices while suffering from compatibility issues and sky-high pricing (the Surface Pro X cost more than $1,500 at launch). Emulated app performance on the first few Arm-powered Windows devices tended to be poor, certain games wouldn’t launch unless they used a particular graphics library, and drivers for hardware only worked if they were designed for Windows on Arm specifically.

The Windows on Arm situation has improved of late, thanks to more powerful hardware (like the Snapdragon 8cx Gen 3) and Microsoft’s App Assurance program to ensure that business and enterprise apps work on Arm. But the ecosystem still has a long way to go, with Unity — one of the most popular game engines today — only this morning announcing a commitment to let developers target Windows on Arm devices for native performance.

Could machine learning refresh the cloud debate? • ZebethMedia

Welcome to The ZebethMedia Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily ZebethMedia+ column where it gets its name. Want it in your inbox every Saturday? Sign up here.

Should early-stage founders ignore the never-ending debate on server infrastructure? Up to a point, yes: Investors we talked to are giving entrepreneurs their blessing not to give too much thought to cloud spend in their early days. But the rise of machine learning makes us suspect that the answer might soon change. — Anna

Bare metal, rehashed

If you had a sense of déjà vu this week when David Heinemeier Hansson (DHH) announced that 37signals, the parent company of Basecamp and Hey, was leaving the cloud, you are not alone: The debate on the pros and cons of cloud infrastructure sometimes seems stuck in an infinite loop.

It is certainly not the first time I have heard 37signals’ core argument: that “renting computers is (mostly) a bad deal for medium-sized companies like ours with stable growth.” In fact, both DHH’s rationale and its detractors strongly reminded me of the years-old discussion that expense management company Expensify ignited when it defended its choice to go bare metal — that is, to run its own servers.

However, it would be wrong to think that the parameters of the cloud versus on-premises debate have remained unchanged. As Boldstart Ventures partner Shomik Ghosh noted in our cloud investor survey, there’s more to on-prem these days than running your own servers. Debate aside, I think most of us can agree that bare metal is not for everyone, which is why it’s interesting to see a middle ground emerge. “In terms of terminology,” Ghosh said, “I think on-prem should also be called ‘modern on-prem,’ which Replicated coined, as it addresses not just bare metal self-managed servers but also virtual private clouds, etc.”

Jasper’s robots assemble fresh meals for nearby apartment dwellers • ZebethMedia

After attempting to sell its tech to large food service companies, cooking automation startup Jasper has shifted to direct-to-consumer. In a recent conversation, CEO Gunnar Froh told ZebethMedia about the pivot and gave a general update on the company, a member of this year’s Battlefield 200 at Disrupt 2022.

When Froh founded Jasper several years ago (as YPC Technologies) with human-robot interaction expert Camilo Perez Quintero, their motivation was primarily to save time on cooking. After developing robotics technologies to automate cooking processes, they opted for a business-to-business go-to-market approach, hoping to sell their platform to food suppliers and service vendors. But the company never gained the corporate traction Froh and Quintero hoped it would.

The company pivoted a few months ago, rebranding to Jasper and adopting what Froh calls a “cooking as a service” model. Jasper now runs robotic kitchens in or next to residential high-rises, charging residents a subscription fee plus the cost of ingredients for meals.

“Having good meals at home is expensive or time-consuming. Food delivery is highly inefficient — restaurants or ghost kitchens prepare meals worth a few dollars and then pay someone to ship them across town. While most customers aren’t aware of this, about half of their dollars are spent on platform fees and delivery costs,” Froh told ZebethMedia. “By running robotic kitchens in or next to residential high rises, Jasper eliminates labor and delivery inefficiencies to offer residents freshly prepared gourmet meals at the cost of home cooking. Jasper meals are plated on porcelain which allows its clients to cut up to a third of their household waste.”

Jasper’s robotics tech platform, which assembles food according to a set menu.

Food automation startups are having a moment, as recently evidenced by Chipotle’s investment in Miso Robotics’ tortilla chip-making robot. It’s no surprise — labor shortages and increasingly costly ingredients make food-prepping robots an attractive proposition. In 2020, Karakuri landed $8.4 million for its automated meal-making canteen. Last May, Chef Robotics raised $7.7 million with the goal of helping automate certain aspects of food preparation. A few months later, salad chain Sweetgreen bought kitchen robotics startup Spyce, and this past summer Makeline secured $24 million for its robot that automatically assembles bowl lunches.

Jasper competes more directly with Los Angeles-based Nommi, which supplies autonomous food kiosks to real estate and college campus partners. But Froh asserts that Jasper’s platform can prepare a wider range of menu items (ranging in cost from $1.20 to $16.90), including cod with steamed potatoes, paprika cream chicken and desserts like sticky toffee pudding.

“We use machine learning for task scheduling and the dispensing of ingredients. We intend to also add it to enable the experience of a personal chef,” Froh said. “The same way that Spotify can predict what music you like, Jasper will predict what meals our customers would like to eat … No other food robotics company we are aware of can currently serve customers at home the way Jasper does, as no other system can prepare a menu as versatile as ours.” (A toy sketch of that kind of recommender appears below.)

Jasper says it ran multiple trials in a residential midrise over the past year, and over the past month it launched in six apartment buildings. To date, about 231 customers have ordered food from Jasper via the company’s ordering platform.
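Froh doesn’t say how Jasper’s meal prediction will work; the Spotify comparison suggests something in the family of collaborative filtering. A toy, hypothetical version on synthetic ratings (item-based cosine similarity, not anything Jasper has described) looks like this:

```python
# Toy item-based collaborative filtering, an illustration of the Spotify-style
# prediction Froh describes, not Jasper's actual system. Data is synthetic.
import numpy as np

meals = ["cod & potatoes", "paprika chicken", "toffee pudding", "veggie bowl"]
# rows = customers, columns = meals; values are ratings (0 = not tried)
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(M):
    # similarity between meal columns, from co-rating patterns
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    return (M.T @ M) / (norms.T @ norms + 1e-9)

S = cosine_sim(R)  # meal-by-meal similarity matrix

def recommend(user_ratings, k=2):
    scores = S @ user_ratings           # weight similar meals by user's ratings
    scores[user_ratings > 0] = -np.inf  # don't re-recommend meals already tried
    top = np.argsort(scores)[::-1][:k]
    return [meals[i] for i in top]

# New customer who loved the cod dish:
print(recommend(np.array([5.0, 0, 0, 0])))  # likely suggests paprika chicken
```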
But in a sign investors are pleased with its progress so far, Jasper has raised $3.5 million from backers including Toyota Ventures.

Image Credits: Jasper

In a statement via email, Toyota Ventures’ founding managing director Jim Adler said: “Toyota Ventures made an early investment in Jasper because we got excited by the team’s vision of bringing fresh cooking, exciting menus, and high food quality close to consumers. They’ve been focused on how best to serve customers daily meals at home. They have impressive early traction that’s been driven by recent labor shortage in the restaurant industry and growing consumer demand for affordable food options. It’s a bit of a perfect storm for Jasper which is creating a huge opportunity for the company to improve the way we eat every day.”

Froh says the goal is to reach $2.5 million in annual recurring revenue (ARR) as the company prepares to raise $7 million in additional capital. Jasper, which employs 13 people (a number Froh anticipates growing to 15 by the end of the year), has a current ARR of “less than” $100,000. “We just launched Jasper in multiple buildings over the past few weeks and will ramp up revenue,” Froh said. “This funding will further increase automation in our processes to get to a revenue per man-hour of $167.”

Generally Intelligent secures cash from OpenAI vets to build capable AI systems • ZebethMedia

A new AI research company is launching out of stealth today with an ambitious goal: to research the fundamentals of human intelligence that machines currently lack. Called Generally Intelligent, it plans to do this by turning those fundamentals into an array of tasks to be solved, then designing and testing different systems’ ability to learn to solve them in highly complex 3D worlds built by its team.

“We believe that generally intelligent computers will someday unlock extraordinary potential for human creativity and insight,” CEO Kanjun Qiu told ZebethMedia in an email interview. “However, today’s AI models are missing several key elements of human intelligence, which inhibits the development of general-purpose AI systems that can be deployed safely … Generally Intelligent’s work aims to understand the fundamentals of human intelligence in order to engineer safe AI systems that can learn and understand the way humans do.”

Qiu, the former chief of staff at Dropbox and the co-founder of Ember Hardware, which designed laser displays for VR headsets, co-founded Generally Intelligent in 2021 after shutting down her previous startup, Sourceress, a recruiting company that used AI to scour the web. (Qiu blamed the high-churn nature of the leads-sourcing business.) Generally Intelligent’s second co-founder is Josh Albrecht, who co-launched a number of companies, including BitBlinder (a privacy-preserving torrenting tool) and CloudFab (a 3D-printing services company).

While Generally Intelligent’s co-founders might not have traditional AI research backgrounds — Qiu was an algorithmic trader for two years — they’ve managed to secure support from several luminaries in the field. Among those contributing to the company’s $20 million in initial funding (plus over $100 million in options) are Tom Brown, former engineering lead for OpenAI’s GPT-3; former OpenAI robotics lead Jonas Schneider; Dropbox co-founders Drew Houston and Arash Ferdowsi; and the Astera Institute.

Qiu said that the unusual funding structure reflects the capital-intensive nature of the problems Generally Intelligent is attempting to solve. “The ambition for Avalon to build hundreds or thousands of tasks is an intensive process — it requires a lot of evaluation and assessment. Our funding is set up to ensure that we’re making progress against the encyclopedia of problems we expect Avalon to become as we continue to build it out,” she said. “We have an agreement in place for $100 million — that money is guaranteed through a drawdown setup which allows us to fund the company for the long term. We have established a framework that will trigger additional funding from that drawdown, but we’re not going to disclose that funding framework as it is akin to disclosing our roadmap.”

Image Credits: Generally Intelligent

What convinced these backers? Qiu says it’s Generally Intelligent’s approach to the problem of AI systems that struggle to learn from others, extrapolate safely, or learn continuously from small amounts of data. Generally Intelligent built a simulated research environment where AI agents — entities that act upon the environment — train by completing increasingly harder, more complex tasks inspired by animal evolution and the cognitive milestones of infant development. The goal, Qiu says, is to train lots of different agents powered by different AI technologies under the hood in order to understand what the different components of each are doing (the sketch below shows the shape of such a training loop).
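Generally Intelligent hasn’t released Avalon’s interface, so the sketch below is a self-contained toy in the familiar Gym-style reset/step convention, with invented task names, meant only to show the shape of training agents on a curriculum of increasingly hard tasks:

```python
# Sketch of a curriculum-style agent-environment loop. Environment and task
# names are invented; Avalon's real interface and tasks differ.
import random

class ToyTaskEnv:
    """Stand-in for a 3D task world: reach a target `size` steps away."""
    def __init__(self, size):
        self.size, self.pos = size, 0

    def reset(self):
        self.pos = 0
        return self.pos  # observation

    def step(self, action):  # action: +1 or -1
        self.pos += action
        done = self.pos >= self.size
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

# Curriculum inspired by developmental milestones: tasks get harder.
curriculum = [("eat", 3), ("navigate", 10), ("hunt", 30)]

for task, size in curriculum:
    env, solved = ToyTaskEnv(size), 0
    for episode in range(200):
        obs, done, steps = env.reset(), False, 0
        while not done and steps < size * 4:
            action = random.choice([1, -1])  # placeholder policy; swap in a learner
            obs, reward, done = env.step(action)
            steps += 1
        solved += int(done)
    print(f"task={task:8s} solved {solved}/200 episodes")  # harder tasks solve less often
```

The per-task success rates are exactly the kind of “battery of tests” signal Qiu describes below: a fixed curriculum makes it possible to compare what different agent architectures can and cannot yet do.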
“We believe such [agents] could empower humans across a wide range of fields, including scientific discovery, materials design, personal assistants and tutors, and many other applications we can’t yet fathom,” Qiu said. “Using complex, open-ended research environments to test the performance of agents on a significant battery of intelligence tests is the approach most likely to help us identify and fill in those aspects of human intelligence that are missing from machines. [A] structured battery of tests facilitates the development of a real understanding of the workings of [AI], which is essential for engineering safe systems.”

Currently, Generally Intelligent is primarily focused on studying how agents deal with object occlusion (i.e., when an object becomes visually blocked by another object) and persistence, and on understanding what’s actively happening in a scene. Among the more challenging areas the lab is investigating is whether agents can internalize the rules of physics, like gravity.

Generally Intelligent’s work brings to mind earlier work from Alphabet’s DeepMind and OpenAI, which sought to study the interactions of AI agents in gamelike 3D environments. For example, in 2019 OpenAI explored how hordes of AI-controlled agents set loose in a virtual environment could learn increasingly sophisticated ways to hide from and seek each other. DeepMind, meanwhile, last year trained agents to succeed at problems and challenges including hide-and-seek, capture the flag and finding objects, some of which they didn’t encounter during training.

Game-playing agents might not sound like a technical breakthrough, but it’s the assertion of experts at DeepMind, OpenAI and now Generally Intelligent that such agents are a step toward more general, adaptive AI capable of physically grounded and human-relevant behaviors — like AI that can power a food-preparing robot or an automatic package-sorting machine.

“In the same way that you can’t build safe bridges or engineer safe chemicals without understanding the theory and components that comprise them, it’ll be difficult to make safe and capable AI systems without a theoretical and practical understanding of how the components impact the system,” Qiu said. “Generally Intelligent’s goal is to develop general-purpose AI agents with human-like intelligence in order to solve problems in the real world.”

Image Credits: Generally Intelligent

Indeed, some researchers have questioned whether efforts to date toward “safe” AI systems are truly effective. For instance, in 2019, OpenAI released Safety Gym, a suite of tools designed to develop AI models that respect certain “constraints.” But constraints as defined in Safety Gym wouldn’t preclude, say, an autonomous car programmed to avoid collisions from driving two centimeters away from other cars at all times, or from doing any number of other unsafe things in order to optimize for the “avoid collisions” constraint.

Safety-focused systems aside, a host of startups are pursuing AI that can accomplish a vast range of diverse tasks. Adept is developing what
