SAN FRANCISCO—Be My Eyes, the company that connects people who are blind or have low vision with sighted volunteers and companies through live video and AI, has announced that it is working with Microsoft to make AI models more inclusive for the more than 340 million people around the world who are blind or have low vision. By incorporating accessibility data into training, AI models can better serve diverse user needs, making technology more usable and beneficial for everyone. This collaboration with Microsoft is the first of its kind for Be My Eyes, the company said.

Publicly available datasets used to train AI models often lack accessibility context and can fail to reflect the lived experience of people who are blind or have low vision, according to the announcement. This disability data desert risks an AI-prevalent future of inherent bias and inaccessibility, one that repeats the mistakes made during the evolution of the internet but with potentially even greater impact.

Earlier this year, Be My Eyes highlighted the concern that the blind and low-vision community is being left out of the development of AI models. Today, disability is often underrepresented or incorrectly categorized in datasets used to train AI, the company said, which can limit the utility of the technology or even magnify bias. In July, Be My Eyes announced its intention to provide video data to organizations so they can train their AI models in a more inclusive way.

Be My Eyes will provide video data collected through its platform to Microsoft for AI model training. No other data (such as images or responses from Be My AI) is being provided, the company stated. The video datasets represent the lived experience of the blind and low-vision community and will be used to improve the accuracy and precision of scene understanding and descriptions, with the goal of increasing the utility of AI for the blind and low-vision community. Be My Eyes said it will remove all user, account, and personally identifiable information (PII) from video metadata. There are also strict provisions in place that prohibit the video data from being used for marketing or any other non-training purposes.

“We’re excited to work with Microsoft to make AI models more inclusive,” said Mike Buckley, CEO of Be My Eyes. “As AI models continue to evolve, they are ingesting huge amounts of data at a breakneck speed, but some of that data can reflect bias, ableism and a lack of inclusion, creating the potential for an inaccessible AI future. That’s simply unacceptable and our mission demands that we attack the problem. Be My Eyes and our user community are in a unique position to do something about this and with a thoughtful collaborator like Microsoft, we have a chance to create scalable solutions.”

Microsoft said it is working to build AI models specifically designed to take accessibility into account and has been working to address the disability data desert for several years. By working with a global ecosystem of people with disabilities, partners and customers, the company is committed to increasing awareness and use of accessible technology, expanding skilling and hiring opportunities, and advocating for policies that advance accessibility as a fundamental right. In order to make the world more accessible for the 1 billion people who experience some form of disability worldwide, having disability-centric training data for AI models is key, Microsoft said.

‍"We live in a world that isn’t designed for disabled people, and this is reflected in the datasets used to train AI systems. Our collaboration with Be My Eyes helps us close the data gap and make AI more inclusive" said Jenny Lay-Flurrie, chief accessibility officer at Microsoft. “Accessibility isn’t just our commitment at Microsoft, it’s part of our company culture—from creating innovative and inclusive technology solutions, to adapting our hiring practices, to working with companies like Be My Eyes, to raise the bar for how technology can make the future more accessible and inclusive.”

More information about Be My Eyes' data privacy policy and its announcement to make video data available to train more inclusive AI models is available on the Be My Eyes website.