Zebeth Media Solutions


Tatum is building a robot arm to help people with deafblindness communicate • ZebethMedia

Precise numbers on deafblindness are difficult to calculate, so figures tend to be all over the place. For the sake of writing an intro to this story, we're going to cite this study from the World Federation of the DeafBlind, which puts the number of severe cases at 0.2% of the global population and 0.8% in the U.S. Whatever the actual figure, it's safe to say that people living with a combination of hearing and sight loss form a profoundly underserved community. That community is the foundation of the work being done by the small robotics firm Tatum (Tactile ASL Translational User Mechanism). I met with the team at MassRobotics during a trip to Boston last week. The company's 3D-printed robotic hand sat in the middle of the conference room table as we spoke about Tatum's origins.

The whole thing started life in summer 2020 as part of founder Samantha Johnson's master's thesis for Northeastern University. The 3D-printed prototype can spell out words in American Sign Language, offering people with deafblindness a window to the outside world. From the user's end, it operates much like tactile fingerspelling: they place a hand over the back of the robot, feeling its movements to read as it spells. When no one is around who can sign, there can be a tremendous sense of isolation for people with deafblindness, as they're able neither to watch nor to listen to the news and are otherwise cut off from remote communication. In this age of teleconferencing, it's easy to lose track of precisely how difficult that loss of connection can be.

Image Credits: Tatum Robotics

“Over the past two years, we began developing initial prototypes and conducted preliminary validations with DB users,” the company notes on its site. “During this time, the COVID pandemic forced social distancing, causing increased isolation and lack of access to important news updates due to intensified shortage of crucial interpreting services. 
Due to the overwhelming encouragement from DB individuals, advocates, and paraprofessionals, in 2021, Tatum Robotics was founded to develop an assistive technology to aid the DB community.”

Tatum continues to iterate on the project through testing with the deafblind community. The goal is to build something akin to an Alexa for people with the condition, using the hand to read a book or get plugged into the news in a way that might otherwise be completely inaccessible. In addition to working with organizations like the Perkins School for the Blind, Tatum is simultaneously working on a pair of hardware projects. Per the company:

The team is currently working on two projects. The first is a low-cost robotic anthropomorphic hand that will fingerspell tactile sign language. We hope to validate this device in real-time settings with DB individuals soon to confirm the design changes and evaluate ease of use. Simultaneously, progress is ongoing to develop a safe, compliant robotic arm so that the system can sign more complex words and phrases. The systems will work together to create a humanoid device that can sign tactile sign languages.

Image Credits: Tatum Robotics

Linguistics: In an effort to sign accurately and repeatably, the team is looking to logically parse tactile American Sign Language (ASL), Pidgin Signed English (PSE) and Signed Exact English (SEE). Although research has been conducted in this field, the team aims to be the first to develop an algorithm that handles the complexities and fluidity of t-ASL without the need for user confirmation of translations or pre-programmed responses.

Support has been growing among organizations for the deafblind, a community that has long been underserved by these sorts of hardware projects. There are currently an estimated 150 million people with the condition globally. 
It's not exactly the sort of total addressable market that gets return-focused investors excited, but for those living with the condition, this manner of technology could be life-changing.
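For a rough sense of what fingerspelling looks like from the software side, here is a minimal, entirely hypothetical sketch (not Tatum's actual code, and the pose table is a toy): each letter maps to a hand pose, i.e. a set of target joint angles, and spelling a word means playing those poses back in sequence for the servos to track.

```python
# Hypothetical sketch of fingerspelling control; not Tatum's implementation.
# Each letter maps to a "pose": target angles in degrees for five finger
# joints (thumb, index, middle, ring, pinky). Real ASL handshapes involve
# many more degrees of freedom and careful motion between letters.
POSES = {
    "a": (10, 90, 90, 90, 90),   # fist with thumb alongside
    "b": (80, 0, 0, 0, 0),       # flat hand, thumb folded across palm
    "c": (45, 45, 45, 45, 45),   # curved "C" shape
}

def spell(word, pose_table=POSES):
    """Yield (letter, pose) pairs for each letter the table knows."""
    for letter in word.lower():
        pose = pose_table.get(letter)
        if pose is None:
            continue  # skip letters missing from this toy table
        yield letter, pose

# A word becomes an ordered sequence of poses for the hand to move through.
sequence = list(spell("cab"))
```

Since the reader keeps a hand resting on the robot, pacing and smooth interpolation between poses matter as much as the shapes themselves; a real controller would ramp the servos between targets rather than snapping from one pose to the next.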

Sight Tech Global 2022 agenda announced • ZebethMedia

The third annual Sight Tech Global conference, a virtual, free and highly accessible event on December 7 and 8, convenes some of the world's top experts working on assistive tech, especially AI, for people who are blind or visually impaired. If you don't follow this topic, maybe you should, because a lot of cutting-edge tech over the years — think OCR and NLP — was developed at the outset with blind people in mind, and went from there to more mainstream uses. Register today!

At this year's event we have sessions with the creators of several new devices to assist with vision, and we'll talk about the technology architecture decisions that went into balancing capability with cost and tapping existing platforms. We'll also take our first look at accessibility in VR, an area of huge concern: if and when VR takes off in the entertainment and business worlds, it's vital that people without vision have access, as they do today on smartphones and computers thanks to screen readers like JAWS, VoiceOver and NVDA.

Our third big slab of programming is about AI itself. There is no shortage of hype about AI's capabilities, and it's important to push back on that by discussing some serious limitations and deficits in the way today's AI works for people with disabilities, not to mention humanity in general. At the same time, AI is arguably the best core tech ever for people without sight. For all those reasons, understanding AI is vital to the future of everyone with disabilities. Don't forget to register today!

And before you browse this awesome agenda: for technologists, designers and product folks working on earthshaking assistive tech, we're hosting a small, in-person event on December 9 featuring workshops on assistive tech, many run by the same luminaries on the agenda. Interested? Contact us.

Here's the agenda. To see times and more, go to the Sight Tech Global agenda page. 
The Dynamic Tactile Device: That “Holy Braille” for education is near
Following up on last year's discussion of the APH and Humanware collaboration to create an education-focused tactile display (see next session), Greg Stilson updates Sight Tech Global on the project's progress and APH's work toward an SDK for developers to build on the tactile display. Greg Stilson will also lead a breakout session for attendees who want to go deeper on the Dynamic Tactile Device.
Greg Stilson, Head of Global Innovation, APH
Moderator: Devin Coldewey, Writer & Photographer, ZebethMedia

The DOT Pad: How the Bible and smartphone speaker tech inspired a breakthrough
For decades, engineers have worked toward a braille display that can render tactile images and multiline braille. DOT Pad may have cracked the code with an innovative approach to generating dynamic fields of braille pins, actuated by smart integrations combined with existing technologies like Apple's VoiceOver. Eric Kim and Ki Sung will also lead a breakout session for attendees who want to learn more.
Eric Ju Yoon Kim, Co-Founder/CEO, DOT
Ki Kwang Sung, Co-Founder/CEO, DOT
Moderator: Devin Coldewey, Writer & Photographer, ZebethMedia

Virtual Reality and Inclusion: What does non-visual access to the metaverse mean?
People with disabilities and accessibility advocates are working to make sure the metaverse is accessible to everyone. This panel will delve into research on the challenges current virtual and augmented reality tools create for people who are blind or have low vision. The panelists will share their experiences using immersive technologies and explore how these tools can be used to enhance employment opportunities in hybrid and remote workplaces — but only if they are built with inclusion in mind. 
Moderator: Bill Curtis-Davidson, Co-Director, Partnership on Employment & Accessible Technology (PEAT)
Alexa Huth, Director, Strategic Communications, PEAT
Brandon Keith Biggs, Software Engineer, The Smith-Kettlewell Eye Research Institute, and CEO, XR Navigation
Aaron Gluck, PhD candidate in Human-Centered Computing, Clemson University

Inventing the “screenreader” for VR: Owlchemy Labs' Cosmonious High
For developers of virtual reality games, there's every reason to experiment with accessibility from the start, which is what the Owlchemy Labs team did with Cosmonious High, the 2022 release of a fun, first-person game set in an intergalactic high school that one reviewer said “has all the charm and cheek of a good Nickelodeon kids show.” And it reveals some of the earliest approaches to accessibility in VR.
Peter Galbraith, Accessibility Engineer II, Owlchemy Labs
Jazmin Cano, Accessibility Product Manager II, Owlchemy Labs
Moderator: James Rath, Filmmaker, Accessibility Advocate and Gamer

Audio Description the Pixar Way
AI-driven, synthetic audio description may have a place in some forms of accessible video content, but the artistry of the entirely human-produced audio descriptions Pixar creates for its productions sets a creative standard no AI will ever attain — and that's all for the good. Meet members of the Pixar team behind excellence in audio description.
Eric Pearson, Home Entertainment Supervisor, Pixar
Anna Capezzera, Director, Audio Description Operations, Deluxe
Laura Post, Voice Actress
Christina Stevens, Writing Manager, Deluxe
Moderator: Tom Wlodkowski, Vice President, Accessibility, Comcast

Seeing AI and the New AI
Microsoft's hugely popular Seeing AI is one of the apps that appears to do it all, from reading documents to recognizing people and things. Those services are enabled by Microsoft's rapidly advancing cloud-based AI systems. How is Seeing AI advancing with those capabilities, and what is the future for Seeing AI? 
Saqib Shaikh, Co-founder of Seeing AI, Microsoft
Moderator: Larry Goldberg, Accessibility Sensei & Technology Consultant

Accessibility Is AI's Biggest Challenge: How Alexa aims to make it fairer for everyone
Smart home technology like Alexa has been one of the biggest boons in recent years for people who are blind, and for people with disabilities altogether. Voice technology and AI help empower people in many ways, but one obstacle stands in their way: making them equitable. In this session, learn from Amazon how the company is approaching the challenge ahead.
Peter Korn, Director of Accessibility, Devices & Services, Amazon
Josh Miele, Principal Accessibility Researcher, Amazon
Caroline Desrosiers, Founder & CEO, Scribely

Hands on with Seleste Rapid

Sight Tech Global 2022 announced • ZebethMedia

As we prepare for the third annual Sight Tech Global (December 7-8, free & virtual, register here), a technology event that tracks advances in technology supporting people who live with blindness, two big shifts are front and center.

First, new digital experiences, notably virtual reality, are testing known approaches to accessibility. There are no white canes or screen readers (yet!) in the metaverse. That digital realm is on the verge of going mainstream, for both consumers and enterprises, so quickly that accessibility could easily become an afterthought, as it was at the start of the PC era. At Sight Tech Global, we will hear from the people working to ensure that the metaverse is open to all. The event is free and virtual!

Second, the technology platforms underpinning so many of the breakthroughs in assistive and access tech — the tools that help blind and low-vision people navigate the world — are advancing fast and enabling better, cheaper devices. Not long ago, AI-powered computer vision systems were costly marvels of miniaturized cameras, GPUs and batteries, but faster networks, cloud services and more powerful mobile phones are changing everything, especially the cost. The old debate in access tech was which would prevail: universal technology platforms such as mobile phones with built-in accessibility features, or purpose-built devices designed specifically for people who need vision assistance. The emerging formula is not one or the other but both, in a winning combination. At Sight Tech Global, we will hear from the technologists applying the new approaches to next-generation devices. Register today.

And underlying so many of these discussions is the evolution of AI itself, a critical technology across the spectrum of blind tech. The current “deep learning” form of AI continues to deliver striking results in forms such as the natural language model GPT-3 or many computer vision AIs. 
But there is growing dissatisfaction with the ability of those powerful pattern-recognition systems to apply anything like human reasoning to issues large and small, not to mention to filter out the bias and other problems that surface when AIs are unleashed. At Sight Tech Global, we will hear from top AI technologists grappling with those topics and more. The event agenda will be ready in a month, but we hope you will register today. The event is free and virtual.

And one new twist in 2022: Sight Tech Global is adding an in-person event on December 9 in San Jose, California, aimed at technologists, product leaders and designers working in access tech. The event is limited to 150 people and is by invitation only. Most of the programming will be workshops driven by attendees. Here's where you can request an invitation or nominate someone.

Sight Tech Global is a production of the nonprofit Vista Center for the Blind and Visually Impaired. We're grateful to current sponsors APH, Amazon, Google, Fable, Humanware, LinkedIn, Microsoft and Waymo. If you would like to sponsor the event, please contact us. All sponsorship revenues go to the nonprofit Vista Center, which has been serving the Silicon Valley area for 75 years.
