Zebeth Media Solutions


With $7M raised, Keyo launches a biometric palm verification network • ZebethMedia

Maybe you’ve heard of Keyo. Perhaps you saw the initial round of press the firm did in 2017, roughly two years after its founding. Or maybe you saw it pop back up in 2020, riding the wave of news around Amazon’s lukewarmly received hand-scanner tech. You may have wondered precisely what’s been going on with the Chicago-based firm in the interim.

“I think we were probably a bit naïve in the beginning to underestimate the true complexity of this undertaking,” admits co-founder/CEO Jaxon Klein. “There’s a lot involved in building a global-scale identity solution. We’ve been in deep engineering mode for several years now. We’ve put the last five years and millions of dollars into building what we really view as the first global-scale biometric identity ecosystem.”

It’s not a unique case, in that respect. It may well mean your organization is on the right track if members of the press are willing to discuss your technologies at such an early stage. But the kind of technology Keyo has been working on is the sort of thing it’s important to get exactly right, given the security, privacy and financial implications of its biometrics.

Image Credits: Keyo

“That early press coverage was us prematurely saying, ‘hey, look what we’re doing,’” Klein adds. “It settled in what we were really doing, the reasons no other companies were competing for the space, and just how long and hard the road was that we were heading down. We then retreated from that and said, ‘okay, we have a lot to build and we need to go actually deploy this into the real world, work with real customers, work with real users and make sure we’re doing it right.’”

This week, the company’s got something to show for that work. Fueled by an aggregate $7 million in seed funding, the Keyo Network had previously been in beta. It’s a combination of hardware and software designed to bring palm scanning to a broad range of different markets and services.
Today it’s announcing the Keyo Wave hand-scanner hardware, the Keyo mobile app, a third-party partner program and the Keyo Identify Cloud, which “enables users to instantly and privately identify themselves based on a simple scan of their hand at any business participating in the Keyo network.”

The Keyo team remains small, with 33 remote employees, though Klein says the firm has been hiring around an employee a week. Not huge growth, though he winkingly notes that at least the startup is bucking the current brutal trend in startup land.

Image Credits: Keyo

“One of the things we’ve gotten really good at is scalable supply chain deployment. We’ve deployed 15,000 devices just recently, and we manage our supply chain internally. Even pre-pandemic, we’ve been building out our supply chain in North America, largely in the U.S. We’ve built a lot of institutional knowledge and capabilities around operating and expanding supply chains. We are really unique in the hardware space, or part of a very small cohort, that designs and builds their own devices, that’s entirely distributed.”

The notion of replacing more traditional payment methods like cards, or even phones, with hand scanning will continue to attract its share of critics. That will only increase as massive corporations like Amazon adopt such technologies, but there’s little doubt the interest is there, at least among the corporations fueling such change.

UK watchdog warns against AI for emotional analysis, dubs ‘immature’ biometrics a bias risk • ZebethMedia

The UK’s privacy watchdog has warned against the use of so-called “emotion analysis” technologies for anything more serious than kids’ party games, saying there’s a discrimination risk attached to applying “immature” biometric tech that makes pseudoscientific claims about being able to recognize people’s emotions by using AI to interpret biometric data inputs.

Such AI systems ‘function’, if we can use the word, by claiming to read one or more biometric signals, such as heart rate, eye movements, facial expression, skin moisture, gait and vocal tone, and performing emotion detection or sentiment analysis to predict how the person is feeling, presumably after being trained on a mass of visual data of faces frowning, faces smiling and so on. The problem with trying to map individual facial expressions onto absolute emotional states is immediately apparent: no two people, and often no two emotional states, are the same. Hence: pseudoscience.

The watchdog’s deputy commissioner, Stephen Bonner, appears to agree that this high-tech nonsense must be stopped, saying today there’s no evidence that such technologies actually work as claimed (or that they ever will).

“Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever,” he warned in a statement. “While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.

“The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.”

In a blog post accompanying Bonner’s shot across the bows of dodgy biometrics, the Information Commissioner’s Office (ICO) said organizations should assess public risks before deploying such tech, with a further warning that those that fail to act responsibly could face an investigation (and so could also be risking a penalty).

“The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work,” added Bonner.

The watchdog has fuller biometrics guidance coming in the spring, which it said today will highlight the need for organizations to pay proper mind to data security, so Bonner’s warning offers a taster of more comprehensive steerage coming down the pipe in the next half year or so. “Organisations that do not act responsibly, posing risks to vulnerable people, or fail to meet ICO expectations will be investigated,” the watchdog added.

Its blog post gives some examples of potentially concerning uses of biometrics, including AI tech being used to monitor the physical health of workers via wearable screening tools, or the use of visual and behavioural methods such as body position, speech, and eye and head movements to register students for exams.

“Emotion analysis relies on collecting, storing and processing a range of personal data, including subconscious behavioural or emotional responses, and in some cases, special category data. This kind of data use is far more risky than traditional biometric technologies that are used to verify or identify a person,” it continued.
“The inability of algorithms which are not sufficiently developed to detect emotional cues means there’s a risk of systemic bias, inaccuracy and even discrimination.”

It’s not the first time the ICO has raised concerns over the rising use of biometric tech. Last year the then information commissioner, Elizabeth Denham, published an opinion expressing concerns about what she couched as the potentially “significant” impacts of inappropriate, reckless or excessive use of live facial recognition (LFR) technology, warning it could lead to ‘big brother’-style surveillance of the public.

However, that warning targeted a more specific technology (LFR). The ICO’s Bonner told the Guardian this is the first time the regulator has issued a blanket warning on the ineffectiveness of a whole new technology, arguing this is justified by the harm that could be caused if companies made meaningful decisions based on meaningless data, per the newspaper’s report.

Where’s the biometrics regulation?

The ICO may feel moved to make more substantial interventions in this area because UK lawmakers aren’t being proactive when it comes to biometrics regulation. An independent review of UK legislation in this area, published this summer, concluded the country urgently needs new laws to govern the use of biometric technologies, and called for the government to come forward with primary legislation.

However, the government does not appear to have paid much mind to such urging or these various regulatory warnings. The planned data protection reform it presented earlier this year eschewed action to boost algorithmic transparency across the public sector, for example, while on biometrics specifically it offered only soft-touch measures aimed at clarifying the rules on (specifically) police use of biometric data, talking about developing best practice standards and codes of conduct.

That’s a far cry from the comprehensive framework called for by the Ada Lovelace research institute-commissioned independent law review. In any case, the data reform bill remains on pause after a summer of domestic political turmoil that has led to two changes of prime minister in quick succession.

A legislative rethink was also announced earlier this month by the (still in post) secretary of state for digital issues, Michelle Donelan, who used a recent Conservative Party conference speech to take aim at the EU’s General Data Protection Regulation (GDPR), aka the framework that was transposed into UK law back in 2018. She said the government would be “replacing” the GDPR with a bespoke British data protection system, but gave precious little detail on what exactly will be put in place.
