Smart Cap helps the visually impaired in their day-to-day activities by narrating a description of the scene in front of them: it converts the scene to text and describes the significant objects in it. This helps visually impaired users recognize objects without touching them, experience the beauty of nature as a narrative, and eases everyday difficulties, especially moving from one place to another.
The Smart Cap uses state-of-the-art deep learning techniques from Microsoft Cognitive Services for image classification and tagging. The experience is voiced by the assistant 'Alexa' through an Amazon Echo. It's a really amazing idea! This way, visually impaired people can experience the world the way we do.
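To make the flow concrete, here is a minimal sketch of the core step: sending a camera frame to the Microsoft Cognitive Services Computer Vision "analyze" REST endpoint and turning the returned description and tags into a sentence the voice assistant can speak. The endpoint URL and subscription key below are placeholders, and `to_narration` is a hypothetical helper, not part of the official SDK; the actual project may structure this differently.

```python
import json
import urllib.request

# Placeholders -- substitute the endpoint and key of your own
# Azure Cognitive Services Computer Vision resource.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
API_KEY = "<your-subscription-key>"

def analyze_image(image_bytes):
    """Send a camera frame to the Computer Vision 'analyze' endpoint,
    requesting a natural-language description plus tags."""
    url = ENDPOINT + "/vision/v3.2/analyze?visualFeatures=Description,Tags"
    req = urllib.request.Request(
        url,
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def to_narration(analysis):
    """Turn the JSON analysis into a sentence to be spoken aloud."""
    captions = analysis.get("description", {}).get("captions", [])
    if not captions:
        return "I could not describe the scene."
    # Pick the caption the service is most confident about.
    best = max(captions, key=lambda c: c.get("confidence", 0.0))
    tags = [t["name"] for t in analysis.get("tags", [])[:3]]
    sentence = "I see " + best["text"] + "."
    if tags:
        sentence += " Notable objects: " + ", ".join(tags) + "."
    return sentence
```

In the full system, the narration string would be handed to the Echo for playback rather than printed.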
This project is possible with the list of components below:
Software Apps and Services:
Below is a schematic representation of how the Smart Cap works.