Navigation Aid for Blind and Visually Impaired People using eSpeak and TensorFlow
Nishkala H M1, Anu S H2, Ashwini B V3, Kavya C M4, Monika B S5

1Nishkala H M, Assistant Professor, Department of CS&E, SJM Institute of Technology, Chitradurga, India.
2Anu S H, CS&E, SJM Institute of Technology, Chitradurga, India.
3Ashwini B V, CS&E, SJM Institute of Technology, Chitradurga, India.
4Kavya C M, CS&E, SJM Institute of Technology, Chitradurga, India.
5Monika B S, CS&E, SJM Institute of Technology, Chitradurga, India.
Manuscript received on March 12, 2020. | Revised Manuscript received on March 25, 2020. | Manuscript published on March 30, 2020. | PP: 2924-2927 | Volume-8 Issue-6, March 2020. | Retrieval Number: F8327038620/2020©BEIESP | DOI: 10.35940/ijrte.F8327.038620

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Applications of science and technology have made human life much easier. Vision plays a very important role in one’s life. People may lose their vision due to disease, accidents, or other causes, and navigation becomes a major problem for people with complete or partial blindness. This paper aims to provide navigation guidance for the visually impaired. We have designed a model that provides instructions for visionless people to navigate freely. A NoIR camera is used to capture the scene around the person and identify the objects, and a voice output describing those objects is delivered through earphones. The model consists of a Raspberry Pi 3 processor, which detects the objects in the surroundings and converts them into a voice message; a NoIR camera to detect the objects; a power bank to supply power; and earphones to deliver the output message. The TensorFlow API, an open-source software library, is used for object detection and classification, and with it multiple objects can be detected in a single frame. eSpeak, a text-to-speech (TTS) synthesizer, is used to convert the text (the detected objects) into speech. Hence, the video captured by the NoIR camera is converted into voice output that guides the user in detecting objects. Using the COCO model, 90 commonly used object classes are identified, such as person, table, and book.
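The pipeline described in the abstract (detected object labels turned into text, then spoken through eSpeak) could be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the label list stands in for the output of a TensorFlow detection model using the COCO label map, and `labels_to_sentence` and `speak` are hypothetical helper names. The `speak` function assumes the `espeak` binary is installed on the Raspberry Pi.

```python
import subprocess

def labels_to_sentence(labels):
    """Build the announcement text from detected COCO class labels.

    Duplicates are collapsed so repeated detections of the same class
    (e.g. two persons in one frame) are announced only once.
    """
    if not labels:
        return "No objects detected"
    return "Detected " + ", ".join(sorted(set(labels)))

def speak(text):
    """Hand the text to the eSpeak synthesizer for audio output.

    check=False so a missing espeak binary does not crash the loop.
    """
    subprocess.run(["espeak", text], check=False)

if __name__ == "__main__":
    # Example labels, as a TensorFlow COCO detector might return them.
    detections = ["person", "book", "person"]
    sentence = labels_to_sentence(detections)
    print(sentence)  # → Detected book, person
    speak(sentence)
```

In the full system this loop would run per camera frame, with the detection step filtering TensorFlow results by a confidence threshold before announcing them.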
Keywords: COCO model, eSpeak, NoIR camera, Raspberry Pi 3, TensorFlow API
Scope of the Article: Software safety systems.