by Connor Courtien, RDPFS Intern
The Be My Eyes mobile app, which allows users who are blind or have low vision to get assistance from official operators as well as volunteers via live video, is adding a new feature: a first-ever digital visual assistant that leverages the latest developments in artificial intelligence (AI). Called the “Virtual Volunteer,” this feature allows users to take a picture of their surroundings and ask the visual assistant questions about the scene, providing help with a wide variety of tasks. For example, a user could open their refrigerator and photograph its contents; the visual assistant would then identify the items inside and suggest recipes that could be made from the available ingredients.

The feature is powered by GPT-4, the latest image-to-text model from OpenAI, an artificial intelligence research and deployment company. GPT-4 can provide context and nuance in its language generation beyond other similar tools.

Mike Buckley, CEO of Be My Eyes, said of this development: “We are entering the next wave of innovation for accessibility technology powered by AI. This new Be My Eyes feature will be transformative in providing people who are blind or have low vision with powerful tools to better navigate physical environments, address everyday needs, and gain more independence. We are thrilled to work with OpenAI to further our mission of improving accessibility for the 253 million people who are blind or have low vision, with safe and accessible applications of generative AI.”

The “Virtual Volunteer” feature will be made available to corporate customers of Be My Eyes in the coming weeks and will expand to the general public later this year. To learn more, read the Business Wire article Be My Eyes Announces New Tool Powered by OpenAI’s GPT-4 to Improve Accessibility for People Who are Blind or Have Low-Vision.