A popular app for blind and visually impaired people launches an AI assistant.


Large language models (LLMs) show real promise for assisting people with visual impairments. Be My Eyes, which has long connected blind people with sighted volunteers who describe what they see, recently introduced a virtual assistant, Be My AI, powered by OpenAI's GPT-4.

Andrew Leland, a writer with low vision, compares the descriptions produced by the iPhone's built-in image recognition with those from GPT-4:

[The iPhone] said, “Image may contain adults standing in front of a building.” Then GPT did it: “There are three adult men standing in front of Disney’s princess castle in Anaheim, California. All three of the men are wearing t-shirts that say blah blah.” And you can ask follow-up questions, like, “Did any of the men have mustaches?” or “Is there anything else in the background?”

Once you get a taste of GPT-4’s image-recognition capabilities, it’s easy to understand why blind people are so excited about it.

The whole interview is worth a read: Assistive Tech at the End of Sight (IEEE Spectrum). Léonie Watson, a blind accessibility expert, also wrote an entertaining post about her experience: Adventures with BeMyAI.

Link: Be My Eyes' website
