
Tatum is building a robot arm to help people with deafblindness communicate

Precise numbers on deafblindness are difficult to calculate, so figures tend to be all over the place. For the sake of this intro, we're going to cite this study from the World Federation of the DeafBlind, which puts the number of severe cases at 0.2% of the global population and 0.8% of the U.S. Whatever the actual figure, it's safe to say that people living with a combination of hearing and sight loss form a profoundly underserved community. That community is the foundation of the work being done by small robotics firm Tatum (Tactile ASL Translational User Mechanism). I met with the team at MassRobotics during a trip to Boston last week. The company's 3D-printed robotic hand sat in the middle of the conference room table as we spoke about Tatum's origins.

The whole thing started life in summer 2020 as part of founder Samantha Johnson's master's thesis for Northeastern University. The 3D-printed prototype can spell out words in American Sign Language, offering people with deafblindness a window to the outside world. From the user's end, it operates much like tactile fingerspelling: they place their hand over the back of the robot, feeling its movements to read as it spells. When no one is around who can sign, there can be a tremendous sense of isolation for people with deafblindness, who are unable to watch or listen to the news and are otherwise cut off from remote communication. In this age of teleconferencing, it's easy to lose track of precisely how difficult that loss of connection can be.

Image Credits: Tatum Robotics

"Over the past two years, we began developing initial prototypes and conducted preliminary validations with DB users," the company notes on its site. "During this time, the COVID pandemic forced social distancing, causing increased isolation and lack of access to important news updates due to intensified shortage of crucial interpreting services. Due to the overwhelming encouragement from DB individuals, advocates, and paraprofessionals, in 2021, Tatum Robotics was founded to develop an assistive technology to aid the DB community."

Tatum continues to iterate on its project through testing with the deafblind community. The goal is to build something akin to an Alexa for people with the condition, using the hand to read a book or get plugged into the news in a way that might otherwise be completely inaccessible. In addition to working with organizations like the Perkins School for the Blind, Tatum is simultaneously working on a pair of hardware projects. Per the company:

The team is currently working on two projects. The first is a low-cost robotic anthropomorphic hand that will fingerspell tactile sign language. We hope to validate this device in real-time settings with DB individuals soon to confirm the design changes and evaluate ease of use. Simultaneously, progress is ongoing to develop a safe, compliant robotic arm so that the system can sign more complex words and phrases. The systems will work together to create a humanoid device that can sign tactile sign languages.

Image Credits: Tatum Robotics

Linguistics: In an effort to sign accurately and repeatably, the team is looking to logically parse tactile American Sign Language (ASL), Pidgin Signed English (PSE) and Signed Exact English (SEE). Although research has been conducted in this field, we aim to be the first to develop an algorithm to understand the complexities and fluidity of t-ASL without the need for user confirmation of translations or pre-programmed responses.
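To make the fingerspelling idea concrete, here is a minimal sketch of how text might be mapped to a sequence of hand poses held long enough for a tactile reader. The joint-angle table and servo API are hypothetical stand-ins for illustration, not Tatum's actual implementation.

```python
# Hypothetical sketch: fingerspelling text on a robotic hand.
# The pose table and ServoController API are illustrative inventions,
# not Tatum Robotics' actual design.

import time

# Joint angles (degrees) per finger for a few ASL letters: 0 = fully
# curled, 90 = fully extended. Values are rough placeholders.
FINGERSPELL_POSES = {
    "a": {"thumb": 90, "index": 0, "middle": 0, "ring": 0, "pinky": 0},
    "b": {"thumb": 0, "index": 90, "middle": 90, "ring": 90, "pinky": 90},
    "l": {"thumb": 90, "index": 90, "middle": 0, "ring": 0, "pinky": 0},
}

class ServoController:
    """Placeholder for whatever servo driver the hand actually uses."""
    def move_joint(self, joint: str, angle: float) -> None:
        print(f"  {joint} -> {angle} deg")

def spell(word: str, hand: ServoController, dwell_s: float = 0.8) -> None:
    """Hold each letter's pose long enough for a tactile reader to feel it."""
    for letter in word.lower():
        pose = FINGERSPELL_POSES.get(letter)
        if pose is None:
            continue  # a real system would cover all 26 letters plus numbers
        print(f"letter {letter!r}:")
        for joint, angle in pose.items():
            hand.move_joint(joint, angle)
        time.sleep(dwell_s)  # dwell so the user can read the pose

if __name__ == "__main__":
    spell("lab", ServoController())
```

The dwell time is the interesting design variable here: a sighted reader can absorb a pose almost instantly, while a tactile reader needs each handshape held steady.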
Support has been growing among organizations for the deafblind, a community long underserved by these sorts of hardware projects. There are currently an estimated 150 million people with the condition globally. It's not exactly the sort of total addressable market that gets return-focused investors excited — but for those living with the condition, this manner of technology could be life-changing.

Pickle picks up $26M for its truck unloading robots

Fulfillment has arguably been the hottest robotics category over the past two years, as companies have looked to stay competitive with Amazon amid ongoing labor shortages. Still, one of the most important links in the chain remains one of the least addressed. Truck unloading isn't a particularly easy problem to solve, but Pickle Robot Company is single-mindedly focused on it. When I paid a visit to the company's offices on my trip to Boston last week, Pickle pointed out precisely how large a problem this has become. Warehouse jobs are tough enough to fill these days, but unloading pallets and trucks brings its own spate of issues, including repetitive heavy lifting and wildly fluctuating temperatures. Imagine stepping inside a shipping container that's been sitting in direct sunlight all day. Traditional heavy equipment like forklifts comes with its own issues.

The company describes its offering thusly: "Pickle is founded by a cast of MIT alumni. We are teaching off-the-shelf robot arms how to pick up boxes and play Tetris." The company notes that it has "unload[ed] tens-of-thousands of packages per month at customer sites," primarily in Southern California. The work thus far has been part of a pilot with United Exchange Corporation, which has deployed the system in a distribution center. Today Pickle is announcing a $26 million Series A raise led by Ranpak, JS Capital, Schusterman Family Investments, Soros Capital and Catapult Ventures.

Image Credits: Pickle Robot Company

"Customer interest in Pickle unload systems has been incredibly strong, and now that we have our initial unload systems out of the lab and into customer operations we have a clear path to broad commercialization," said founder and CEO AJ Meyer. "The early customer deployments, financing and leadership additions set the stage for us to accelerate customer acquisition and build the company infrastructure we need to deliver more systems to more customers in the coming months."

That last bit is especially important. As you're most likely aware, now is not a great time to be raising — even in a booming category like warehouse automation. But given the size and breadth of Pickle's testing, putting funding off in hopes of better economic conditions isn't necessarily an option when you've got product to deliver. And besides, Pickle's not the only game in town, figuratively or literally. Boston Dynamics notably chose truck unloading as the focus for its second commercialized robot, Stretch. Unlike that solution, however, Pickle's is tethered. Agility has also explored truck unloading for its Digit robot. Even with the added competition, we're talking about a huge total addressable market, with room for more than one player.
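For a rough sense of what "teaching robot arms to pick up boxes and play Tetris" involves, here is a toy next-box selection heuristic. The detection format and scoring weights are invented for this sketch and say nothing about Pickle's actual, unpublished planner.

```python
# Toy sketch of a next-box selection heuristic for truck unloading.
# Detections and weights are invented; Pickle's real planner is
# proprietary and certainly far more sophisticated.

from dataclasses import dataclass

@dataclass
class BoxDetection:
    x: float               # meters, lateral position in the container
    y: float               # meters, depth into the container (smaller = nearer)
    z: float               # meters, height of the box's top face
    graspable_area: float  # m^2 of exposed top surface

def pick_order(boxes: list[BoxDetection]) -> list[BoxDetection]:
    """Prefer boxes that are high, near the door, and easy to grasp,
    so removing one doesn't collapse the stack onto the arm."""
    def score(b: BoxDetection) -> float:
        return 2.0 * b.z - 1.0 * b.y + 0.5 * b.graspable_area
    return sorted(boxes, key=score, reverse=True)

wall = [
    BoxDetection(x=0.2, y=0.5, z=2.1, graspable_area=0.12),
    BoxDetection(x=0.8, y=0.5, z=0.4, graspable_area=0.25),
    BoxDetection(x=0.5, y=1.5, z=2.0, graspable_area=0.10),
]
for b in pick_order(wall):
    print(f"pick box at (x={b.x}, y={b.y}, z={b.z})")
```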

Facing economic headwinds, Amazon consolidates robotic projects

Westborough, Massachusetts is a quiet town of 22,000, 40 minutes by car west of Boston. BOS27 is among the town's newer residents. The 350,000-square-foot Amazon facility opened its doors a little over a year ago. It's a hulking, gray addition to the tree-filled scenery. Inside is a state-of-the-art facility that — along with a space on the opposite side of Boston in North Reading, Massachusetts — forms the beating heart of the company's lofty robotics ambitions.

In the decade since the company acquired Kiva Systems for $775 million in cash, it's grown into one of the world's leading robotics firms. Ask any founder in the warehouse robotics space, and they'll quickly credit Amazon as the driving force in the category. "We look at Amazon, probably as the best marketing arm in the robotics business today," Locus Robotics CEO Rick Faulk said at our robotics event in July. "They have set SLAs that everyone has to match. And we look at them as being a great part of our marketing team." Amazon has set package delivery expectations at the once-seemingly-impossible next or same day, and an entire industry has grown up around it, in hopes of keeping smaller firms competitive with the retail giant.

Image Credits: Brian Heater

What strikes you as soon as you walk through the doors at BOS27 is how much the space resembles one of the company's many fulfillment centers. It's cavernous and buzzing with robots and their human counterparts. The space, which was built to accommodate a business that had grown too large for the North Reading location alone, is where the company develops, tests and builds its robotic systems. (Another space recently opened in Belgium as well, courtesy of Amazon's September acquisition of Cloostermans.)

This week, the company opened its doors to a handful of press members, including ZebethMedia. The "Delivering the Future" event was, by any measure, a PR push. It was an opportunity to show off the company's shiny new production facility and a chance to present a kind of unified front for Amazon Robotics, a category that now encapsulates every element of the Amazon retail experience from the moment a consumer hits "buy now."

Image Credits: Amazon

A couple of guided tours around the floor showcased the company's growing army of wheeled robots built atop the Kiva platform, including the ubiquitous blue Hercules (the fourth-gen version of the product) and the mini-conveyor-belt-sporting Pegasus and Xanthus, the latter being, for all intents and purposes, a lightweight version of the former. Newer on the scene is Proteus, which arrives in a nearly neon green ("Seahawks green," as one executive joked today), with a small LED face and full autonomy — meaning it can safely operate outside the structured confines developed for the older models.

Image Credits: Amazon

Amazon also showed off a trio of robotic arms, which follow a similar evolutionary trajectory to their wheeled counterparts. There's Robin, which debuted around 18 months ago and is now installed in 1,000 warehouses across the world. Its successor, Cardinal, adds a level of efficiency to the system, as it tightly packs boxes to send across the fulfillment center. A third, Sparrow, debuted at today's event. As with its predecessors, Sparrow is effectively a souped-up version of an off-the-shelf Fanuc industrial robotic arm. The system is still in very limited pilots, including a facility in Texas and behind a safety cage at BOS27. What sets it apart from standard Fanuc arm deployments, however, is twofold.
First is the suction cup gripper, which utilizes pneumatics to pick up a wide range of different objects. The real secret sauce, of course, is the software. Amazon says the AI, coupled with a range of different hardware sensors, allows the system to identify around 65% of the inventory offered through the retailer. It's a mind-boggling figure. The system uses cues like bar codes, size and shape to identify individual objects (a toy sketch of this sort of multi-cue matching appears at the end of this piece).

Image Credits: Amazon

Robin and Cardinal deal exclusively in boxes, of which Amazon has around 15 basic models. Sparrow has the far more complex task of picking up the products themselves. Beyond identification, this introduces its own spate of challenges. If you've ever purchased anything from the company, you know how wildly these things fluctuate in size, shape and material. Hypothetically, the same arm is picking up a bowling ball and a bag of cotton swabs. That's where the suction cup system comes in, offering a far greater range of picks than a rigid robotic hand.

All told, the company has deployed more than 520,000 robotic drives since Amazon Robotics' 2012 founding. It says that more than 75% of products ordered through its site come into contact with one of its robotic systems at some point in the process.

Image Credits: Amazon

Last-mile delivery was the other focus of today's event. That starts with the 1,000 Rivian EVs the company has begun deploying to meet holiday demand. "Customers across the U.S. will begin to see custom electric delivery vehicles from Rivian delivering their Amazon packages, with the electric vehicles hitting the road in Baltimore, Chicago, Dallas, Kansas City, Nashville, Phoenix, San Diego, Seattle and St. Louis, among other cities," the company noted in July. "This rollout is just the beginning of what is expected to be thousands of Amazon's custom electric delivery vehicles in more than 100 cities by the end of this year — and 100,000 by 2030."

Image Credits: Amazon

Somewhat surprisingly, Amazon is still very bullish on the future of drone deliveries. "A demonstrated, targeted level of safety that is validated by regulators and a magnitude safer than driving to the store," Prime Air VP David Carbon said during a keynote. "Delivering 500 million packages by drone annually by the end of this decade. Servicing millions of customers, operating in highly populated, suburban areas such as Seattle, Boston and Atlanta. Flying in an uncontrolled space autonomously."

Image Credits: Amazon

But while a rendering of its MK30 drone —
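As promised above, here is a toy fusion of barcode and size cues into a single best-guess identification, in the spirit of Sparrow's multi-sensor approach. The catalog structure, cue weights and threshold are assumptions for illustration; Amazon's actual stack is proprietary.

```python
# Toy sketch of multi-cue item identification (barcode + size).
# The catalog, weights and thresholds are invented for illustration;
# Amazon's Sparrow software is not public.

from dataclasses import dataclass

@dataclass
class CatalogItem:
    sku: str
    barcode: str | None
    dims_mm: tuple[float, float, float]  # expected bounding box

def identify(scanned_barcode: str | None,
             measured_dims: tuple[float, float, float],
             catalog: list[CatalogItem],
             min_confidence: float = 0.6) -> str | None:
    """Return the best-matching SKU, or None if no cue is decisive."""
    best_sku, best_score = None, 0.0
    for item in catalog:
        score = 0.0
        if scanned_barcode and item.barcode == scanned_barcode:
            score += 0.7  # barcode is the strongest cue when readable
        # Size agreement: penalize relative dimension mismatch.
        err = sum(abs(m - e) / e for m, e in zip(measured_dims, item.dims_mm)) / 3
        score += 0.3 * max(0.0, 1.0 - err)
        if score > best_score:
            best_sku, best_score = item.sku, score
    return best_sku if best_score >= min_confidence else None

catalog = [
    CatalogItem("SKU-001", "0123456789012", (200.0, 120.0, 60.0)),
    CatalogItem("SKU-002", None, (350.0, 350.0, 150.0)),
]
print(identify("0123456789012", (205.0, 118.0, 62.0), catalog))  # -> SKU-001
```

The thresholding is what produces a figure like "65% of inventory": items whose cues never clear the confidence bar simply go unidentified and fall back to a human picker.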

Dawn of the tentacle

Fair warning: it's going to be a quick one from me today. I caught the thing again, roughly three months after the last time I caught the thing. They say "third time's the charm," and I now recognize that they were referring to chest pain and a general light-headedness. Turns out it doesn't get easier. Send soup.

With that in mind, consider this week's Actuator a bit more on the housecleaning side of things (don't we have robots for that now?). It's more of a smattering of links to interesting stories from the past week, along with some that no doubt fell through the cracks last week, during Disrupt. If this is your first Actuator, sorry. Trying hard not to be sick this time next week.

Trend-wise (if a week of news can be referred to as such), I'm seeing a bit of a dip in robotic investment news, with university research rushing in to fill the vacuum. The latter is most likely due, in part, to the school year being back in full force. Not that robotics researchers get the summer off, of course.

Before the fun stuff, let's discuss potential slowdowns. As investor Kelly Chen noted on our VC panel at Disrupt, "On the less rosy side, I think the layoffs are yet to come. In an economic downturn, the customers will be less willing to be experimental, so they're thinking about cutting costs and then economics just becomes so much more important." The list of "recession-proof" industries is short and doesn't include robotics, even if the category has so far been relatively unaffected by the drying up of VC funds. We've got a double-edged sword here. On one side, automation can help stave off some economic impacts at companies, if properly deployed. On the other, so much of the stuff we talk about here is so long-tail that it's easy to see investors and others succumbing to very real short-term concerns.

Image Credits: Berkshire Grey

Obviously, none of this stuff should be painted with too broad a brush. There are so many different factors at play here. Berkshire Grey, which ran into a stock dip following a 2021 SPAC deal, is an example of a company that recently "made some updates." For its part, the firm is framing this as more of a correction than anything. BG won't confirm how large those "updates" are, but it told ZebethMedia:

We discussed on our Q2 earnings call that we've matured as a company, improved business operations, and know exactly where we need to focus and invest. We made some updates to our team back in August that were small but will help us focus on continuing to grow our business.

That news arrives as the company signs an "equity purchase agreement" with Lincoln Park Capital, which it tells me it's done for the sake of "some added financial flexibility." Per a release on the latter bit of news:

Under the terms and conditions of the Agreement, the Company has the right, but not the obligation, to sell up to $75 million of its shares of common stock to Lincoln Park over a 36-month period, subject to certain limitations. Any common stock that is sold to Lincoln Park will occur at a purchase price that is determined by prevailing market prices at the time of each sale with no upper limits to the price Lincoln Park may pay to purchase the common stock.

The company tells me: "These types of deals are common. The $75M commitment from Lincoln Capital allows us to access capital in an inexpensive, simple way that provides us with some added financial flexibility." Certainly the overall market for fulfillment robotics looks to be robust.
Given the current level of saturation in the market, however, I'd say it's safe to expect the category to continue to transform for the foreseeable future.

Image Credits: Photo by Jared Wickerham/Argo AI

One other element worth pointing out in all of this is the human impact of automation. It's here and it's not going away anytime soon, but we can ease the blow as a society, if we actually choose to do so. A Reuters piece notes the timing of Walmart's move to lay off nearly 1,500 workers in fulfillment center roles in Atlanta, Georgia, following the acquisition of robotics startup Alert Innovation. Walmart said the following of the move:

We're converting the fulfillment center on Fulton Parkway to support our growing WFS (Walmart Fulfillment Services) business. As part of the conversion, the facility's infrastructure, operational resources, processes, staffing requirements and equipment are being adjusted to meet the building's needs.

I really need to stop leading with the bad news, right? I'm not sure tricking a kid into eating their broccoli is a good model for running a successful newsletter. I'll get this stuff right eventually (and when I'm a bit less light-headed).

Image Credits: IHMC (Strike a pose, Vogue)

I've noted on these pages why I'm not yet 100% sold on humanoid robots (though I'm aware of some compelling arguments for them), but it's always fun to watch different companies and laboratories take different approaches to the very real issues around real-world usage. The Institute for Human and Machine Cognition, in Pensacola, Florida, recently revealed Nadia, a system it's working on with Boardwalk Robotics (with an assist from Moog's Integrated Smart Actuators). The system was named as an homage to gymnast Nadia Comăneci and is being developed with funding from the Office of Naval Research, which has been behind a number of interesting robotics projects. IHMC notes:

The Nadia project, which has a three-year timeline, is intended to function in indoor environments where stairs, ladders, and debris would require a robot to have the same range of motion as a human, which can be particularly useful in firefighting, disaster response, and other scenarios that might be dangerous for humans.

Image Credits: Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kröger, Ken Goldberg

New(ish) breakthroughs in clothes-folding robots. The dual-armed system SpeedFolding

Generally Intelligent secures cash from OpenAI vets to build capable AI systems

A new AI research company is launching out of stealth today with an ambitious goal: to research the fundamentals of human intelligence that machines currently lack. Called Generally Intelligent, it plans to do this by turning those fundamentals into an array of tasks to be solved, and by designing and testing different systems' ability to learn to solve them in highly complex 3D worlds built by its team.

"We believe that generally intelligent computers will someday unlock extraordinary potential for human creativity and insight," CEO Kanjun Qiu told ZebethMedia in an email interview. "However, today's AI models are missing several key elements of human intelligence, which inhibits the development of general-purpose AI systems that can be deployed safely … Generally Intelligent's work aims to understand the fundamentals of human intelligence in order to engineer safe AI systems that can learn and understand the way humans do."

Qiu, the former chief of staff at Dropbox and the co-founder of Ember Hardware, which designed laser displays for VR headsets, co-founded Generally Intelligent in 2021 after shutting down her previous startup, Sourceress, a recruiting company that used AI to scour the web. (Qiu blamed the high-churn nature of the leads-sourcing business.) Generally Intelligent's second co-founder is Josh Albrecht, who co-launched a number of companies, including BitBlinder (a privacy-preserving torrenting tool) and CloudFab (a 3D-printing services company).

While Generally Intelligent's co-founders might not have traditional AI research backgrounds — Qiu was an algorithmic trader for two years — they've managed to secure support from several luminaries in the field. Among those contributing to the company's $20 million in initial funding (plus over $100 million in options) are Tom Brown, former engineering lead for OpenAI's GPT-3; former OpenAI robotics lead Jonas Schneider; Dropbox co-founders Drew Houston and Arash Ferdowsi; and the Astera Institute.

Qiu said that the unusual funding structure reflects the capital-intensive nature of the problems Generally Intelligent is attempting to solve. "The ambition for Avalon to build hundreds or thousands of tasks is an intensive process — it requires a lot of evaluation and assessment. Our funding is set up to ensure that we're making progress against the encyclopedia of problems we expect Avalon to become as we continue to build it out," she said. "We have an agreement in place for $100 million — that money is guaranteed through a drawdown setup which allows us to fund the company for the long term. We have established a framework that will trigger additional funding from that drawdown, but we're not going to disclose that funding framework as it is akin to disclosing our roadmap."

Image Credits: Generally Intelligent

What convinced them? Qiu says it's Generally Intelligent's approach to the problem of AI systems that struggle to learn from others, extrapolate safely, or learn continuously from small amounts of data. Generally Intelligent built a simulated research environment where AI agents — entities that act upon the environment — train by completing increasingly harder, more complex tasks inspired by animal evolution and cognitive milestones in infant development. The goal, Qiu says, is to train lots of different agents powered by different AI technologies under the hood, in order to understand what the different components of each are doing.
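As a sketch of the training setup described above, where agents graduate through progressively harder tasks, here is a minimal curriculum loop. The task list, environment stubs and mastery threshold are invented for illustration; Generally Intelligent has not published these Avalon internals.

```python
# Minimal sketch of curriculum training: an agent must master each task
# before moving to a harder one. The agent and tasks are placeholders,
# not Generally Intelligent's actual Avalon framework.

import random

CURRICULUM = ["find_food", "avoid_predator", "stack_blocks", "use_tool"]

class StubAgent:
    def __init__(self):
        self.skill = 0.0
    def attempt(self, task: str) -> bool:
        # Stand-in for an episode rollout; success odds grow with training.
        return random.random() < min(0.95, 0.2 + self.skill)
    def update(self, task: str, success: bool) -> None:
        self.skill += 0.01 if success else 0.005  # fake learning signal

def train(agent: StubAgent, mastery: float = 0.8, window: int = 50) -> None:
    """Advance through the curriculum once the rolling success rate clears
    the mastery bar, mimicking milestone-gated development."""
    for task in CURRICULUM:
        while True:
            results = [agent.attempt(task) for _ in range(window)]
            for r in results:
                agent.update(task, r)
            rate = sum(results) / window
            if rate >= mastery:
                print(f"mastered {task} (success rate {rate:.0%})")
                break

train(StubAgent())
```

The point of structuring training this way is diagnostic as much as pedagogical: when an agent stalls on a specific rung, that rung names the missing capability.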
"We believe such [agents] could empower humans across a wide range of fields, including scientific discovery, materials design, personal assistants and tutors and many other applications we can't yet fathom," Qiu said. "Using complex, open-ended research environments to test the performance of agents on a significant battery of intelligence tests is the approach most likely to help us identify and fill in those aspects of human intelligence that are missing from machines. [A] structured battery of tests facilitates the development of a real understanding of the workings of [AI], which is essential for engineering safe systems."

Currently, Generally Intelligent is primarily focused on studying how agents deal with object occlusion (i.e., when an object becomes visually blocked by another object) and object persistence, and on understanding what's actively happening in a scene. Among the more challenging areas the lab is investigating is whether agents can internalize the rules of physics, like gravity.

Generally Intelligent's work brings to mind earlier work from Alphabet's DeepMind and OpenAI, which sought to study the interactions of AI agents in gamelike 3D environments. For example, in 2019 OpenAI explored how hordes of AI-controlled agents set loose in a virtual environment could learn increasingly sophisticated ways to hide from and seek each other. DeepMind, meanwhile, last year trained agents with the ability to succeed at problems and challenges, including hide-and-seek, capture the flag and finding objects, some of which they didn't encounter during training.

Game-playing agents might not sound like a technical breakthrough, but it's the assertion of experts at DeepMind, OpenAI and now Generally Intelligent that such agents are a step toward more general, adaptive AI capable of physically grounded and human-relevant behaviors — like AI that can power a food-preparing robot or an automatic package-sorting machine.

"In the same way that you can't build safe bridges or engineer safe chemicals without understanding the theory and components that comprise them, it'll be difficult to make safe and capable AI systems without theoretical and practical understanding of how the components impact the system," Qiu said. "Generally Intelligent's goal is to develop general-purpose AI agents with human-like intelligence in order to solve problems in the real world."

Image Credits: Generally Intelligent

Indeed, some researchers have questioned whether efforts to date toward "safe" AI systems are truly effective. For instance, in 2019, OpenAI released Safety Gym, a suite of tools designed to develop AI models that respect certain "constraints." But constraints as defined in Safety Gym wouldn't preclude, say, an autonomous car programmed to avoid collisions from driving two centimeters away from other cars at all times, or from doing any number of other unsafe things in order to optimize for the "avoid collisions" constraint. (A toy illustration of this failure mode follows at the end of this piece.)

Safety-focused systems aside, a host of startups are pursuing AI that can accomplish a vast range of diverse tasks. Adept is developing what
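As flagged above, the Safety Gym critique boils down to how constraint costs are scored: a cost function that only counts actual collisions assigns zero cost to a near miss, so nothing stops an agent from tailgating. A toy version of that failure mode, with made-up numbers, might look like this.

```python
# Toy illustration of the constraint-gaming critique: a binary
# collision cost can't distinguish a safe following distance from a
# 2 cm near miss. All numbers are made up for the example.

def collision_cost(distance_m: float) -> float:
    """Constraint cost as a pure collision indicator (the critiqued setup)."""
    return 1.0 if distance_m <= 0.0 else 0.0

def shaped_cost(distance_m: float, safe_margin_m: float = 2.0) -> float:
    """One common alternative: penalize proximity inside a safety margin."""
    return max(0.0, 1.0 - distance_m / safe_margin_m)

for d in (0.02, 0.5, 2.5):  # 2 cm, 50 cm, and 2.5 m gaps
    print(f"gap {d} m: binary cost={collision_cost(d):.2f}, "
          f"shaped cost={shaped_cost(d):.2f}")

# The binary cost is 0.0 at a 2 cm gap, so an agent maximizing reward
# subject to "cost <= threshold" is free to tailgate indefinitely; the
# shaped cost at least makes near misses expensive.
```

Shaped costs bring their own tuning problems (too steep and the agent refuses to drive at all), which is part of why "constraint satisfied" and "actually safe" remain different claims.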

Cyberdontics raises $15M for robotic root canals

It's been more than 20 years since the da Vinci Surgical System received FDA clearance. Pretty incredible when you think about it. Robotic surgery, and automation in general, have come a long way since then, and a number of companies have entered the lucrative category, focused on all manner of procedures. Surprisingly, robotic dental procedures have been slow to follow.

Let me get this out of the way up front — I'm squeamish about dental procedures. I don't like thinking about them, don't like talking about them and certainly don't like having them. And like many of you reading this, I'm certainly not rushing out to have a robot perform a root canal on me any time soon. I said as much to dentist turned Cyberdontics founder and CEO Chris Ciriello.

Image Credits: Cyberdontics

The executive notes that there are two big selling points from the patient's standpoint. First is efficacy: he says the system Cyberdontics is developing will be capable of extremely accurate tooth cutting, down to around 30 microns. The second, and perhaps more important, is speed.

"If you've had something like a root canal, a crown or any of these types of procedures, where you're spending an hour or two in the dentist's chair and you're making multiple trips to go back and get it fixed," he explains, "the idea that you can literally have this robot in your mouth for under one minute and be out the door 15 minutes later is a game changer. For people that really don't like the dentist, this is a really attractive way to get in and out a lot faster."

The notion was attractive enough to warrant a $15 million Series A for the YC grad. The round, led by dental chain Pacific Dental Services, will go toward additional R&D and bringing the system to market. The system is supervised by the dentist and, like surgical robots before it, is designed to expand access to such procedures amid a dentist shortage.

Image Credits: Cyberdontics

"Today, a dentist would cut a hole in your tooth and fill the hole with some type of material, whether it's a crown, a filling, some kind of plastic they squirt in," says Ciriello. "What we do is scan your tooth, then we virtually create a model of what the tooth will look like after we cut it. Then we can cut your tooth and fabricate a prosthetic at the same time, or we can fabricate the prosthetic in advance of the surgery. Then that piece will fit in just like a puzzle piece, right into the hole we cut."

Cyberdontics "aspirationally" plans to launch its imaging process within the next year, with plans to introduce the robot within the next two, regulatory approval pending.
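Ciriello's scan-model-cut-fit description maps onto a fairly standard CAD/CAM-style pipeline. Here is a schematic sketch of those stages; every function, return value and the tolerance check are placeholders built around the 30-micron figure cited above, not Cyberdontics' actual software.

```python
# Schematic sketch of a scan -> plan -> cut -> fit workflow like the
# one Ciriello describes. All stages are stubs; names and values are
# placeholders, not Cyberdontics' implementation.

CUT_TOLERANCE_MM = 0.030  # ~30 microns, the accuracy figure cited above

def scan_tooth(patient_id: str) -> dict:
    """Stand-in for intraoral 3D scanning; returns a mesh-like record."""
    return {"patient": patient_id, "mesh": "raw_tooth_mesh"}

def plan_cut(scan: dict) -> dict:
    """Model the post-cut tooth geometry and derive a matching prosthetic."""
    return {"cavity_model": "planned_geometry",
            "prosthetic_model": "inverse_of_cavity"}

def execute_cut(plan: dict) -> float:
    """Robot executes the planned toolpath; returns measured deviation (mm)."""
    return 0.021  # stub measurement

def fit_prosthetic(plan: dict, deviation_mm: float) -> str:
    if deviation_mm > CUT_TOLERANCE_MM:
        return "re-scan and adjust before seating the prosthetic"
    return "prosthetic seats like a puzzle piece"

scan = scan_tooth("patient-001")
plan = plan_cut(scan)
print(fit_prosthetic(plan, execute_cut(plan)))
```

The key idea the pipeline captures is that cutting and fabrication share one digital model, which is what lets the prosthetic be milled before or during the cut rather than after a second visit.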

Touchlab to begin piloting its robotic skin sensors in a hospital setting

Manipulation and sensing have long been considered two key pillars for unlocking robotics' potential. There's a fair bit of overlap between the two, of course. As grippers have become a fundamental element of industrial robotics, these systems require the proper mechanisms for interacting with the world around them. Vision has long been key to all of this, but companies are increasingly looking to tactility as a method for gathering data. Among other things, it gives the robot a better sense of how much pressure to apply to a given object, be it a piece of produce or a human being.

A couple of months back, Edinburgh, Scotland-based startup Touchlab won the pitch-off at our TC Sessions: Robotics event, against some stiff competition. The judges agreed that the company's approach to the creation of robotic skin is an important one that can help unlock fuller potential for sensing. XPrize has thus far agreed as well: the company is currently a finalist for the $10 million XPrize Avatar competition, and is working with German robotics firm Schunk, which is providing the gripper for the XPrize finals.

Image Credits: Touchlab

"Our mission is to make this electronic skin for robots, to give machines the power of human touch," co-founder and CEO Zaki Hussein said, speaking to ZebethMedia from the company's new office space. "There are a lot of elements going into replicating human touch. We manufacture this sensing technology. It's thinner than human skin and it can give you the position and pressure wherever you put it on the robot. And it will also give you 3D forces at the point of contact, which allows robots to be able to do dexterous and challenging activities."

To start, the company is looking into teleoperation applications (hence the whole XPrize Avatar thing) — specifically, using the system to remotely operate robots in understaffed hospitals. On one end, a TIAGo++ robot outfitted with Touchlab's sensors lends human workers a pair of extra hands; on the other, an operator wears a haptic VR bodysuit that translates all of the touch data. Such technologies currently have their limitations, though.

Image Credits: Touchlab

"We have a layer of software that translates the pressure of the skin to the suit. We're also using haptic gloves," says Hussein. "Currently, our skin gathers a lot more data than we can transmit to the user over haptic interfaces. So there's a little bit of a bottleneck. We can use the full potential of the best haptic interface of the day, but there is a point where the robot is feeling more than the user is able to." Additional information gathered by the robot is translated through a variety of other channels, such as visual data via a VR headset.

The company is close to beginning real-world pilots with the system. "It will be in February," says Hussein. "We've got a three-month hospital trial with geriatric patients in the geriatric acute ward. This is a world-first, where this robot will be deployed in that setting."
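The bottleneck Hussein describes, skin that senses more than the haptic interface can render, is essentially a downsampling problem. Here is a minimal sketch of pooling a dense pressure grid into a handful of actuator channels; the grid size and actuator count are invented for illustration and are not Touchlab's real resolutions.

```python
# Minimal sketch of the skin-to-suit bottleneck: a dense tactile array
# pooled down to a few haptic actuators. Resolutions are invented for
# illustration, not Touchlab's actual hardware.

def pool_pressure(grid: list[list[float]], actuators: int) -> list[float]:
    """Average vertical bands of a pressure grid into `actuators` channels."""
    rows, cols = len(grid), len(grid[0])
    band = cols / actuators
    out = []
    for a in range(actuators):
        lo, hi = int(a * band), int((a + 1) * band)
        cells = [grid[r][c] for r in range(rows) for c in range(lo, hi)]
        out.append(sum(cells) / len(cells))
    return out

# A 4x8 sensor patch (32 taxels) squeezed into 4 suit actuators:
patch = [
    [0.0, 0.1, 0.9, 1.0, 0.2, 0.0, 0.0, 0.0],
    [0.0, 0.2, 1.0, 1.0, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.1, 0.8, 0.9, 0.2, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.4, 0.1, 0.0, 0.0, 0.0],
]
print(pool_pressure(patch, 4))
# 32 pressure readings collapse to 4 numbers: the robot "feels" more
# than the operator ever receives.
```

This is also why the company routes overflow information to other channels like the VR headset: whatever the suit can't render as pressure can still reach the operator visually.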
