Moreover, Asha is equipped with speech-to-text conversion, speech-driven synchronized lip movements, and emotion rendering. It has been programmed to differentiate between commands and voice responses, and it is engineered with separate API calls that implement simple commands such as ‘smile’, ‘pick up’ and ‘hand over’ (of a particular object).
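The article does not describe how Asha tells commands apart from speech to be relayed, so the following is only a minimal illustrative sketch; the keyword table and function names (`COMMAND_ACTIONS`, `interpret`) are assumptions, not Asha’s actual API.

```python
# Hypothetical sketch: map known command keywords to API-call names,
# and treat anything else as a voice response to be spoken aloud.
COMMAND_ACTIONS = {
    "smile": "render_smile",
    "pick up": "actuate_pick_up",
    "hand over": "actuate_hand_over",
}

def interpret(transcript: str) -> tuple[str, str]:
    """Classify an operator transcript as a command or a voice response.

    Returns ("command", api_call) when the transcript begins with a known
    command keyword, otherwise ("speech", transcript) for text-to-speech.
    """
    text = transcript.strip().lower()
    for keyword, api_call in COMMAND_ACTIONS.items():
        if text.startswith(keyword):
            return ("command", api_call)
    return ("speech", transcript)
```

Under this sketch, `interpret("Pick up the cup")` would trigger an actuation call, while `interpret("Good evening to you too")` would be routed to speech synthesis.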
Asha acts as the interface between the end user and the remote operator. A simple analogy is a video call between two people: here, the robot plays the role of the video-calling application, but one with a physical form that can perform physical actions. Depending on the user’s requests, the operator makes Asha offer a suitable response. For instance, if the user greets the robot, the operator says “Good evening to you too,” and the robot listens and repeats the same in its own voice to the user. When the operator instructs Asha to smile by saying “Asha is happy,” Asha speaks the phrase and smiles at the user. When the operator commands the robot to fetch objects for the user, it moves around to do so. While Asha is largely teleoperated, it also has ‘semi-autonomy’ across its several degrees of freedom, allowing it to move freely like a human being and to keep away from danger. In a specific care-giving use case, Asha checks the patient’s temperature with a touchless thermometer, inquires about the person’s wellbeing, and offers help by bringing water.
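The ‘semi-autonomy’ described above, where the robot follows operator commands but keeps itself away from danger, is not detailed in the article; one common way such a safety layer works is to attenuate teleoperated motion near obstacles. The sketch below illustrates that general idea only; the threshold value and names (`SAFE_DISTANCE_M`, `gate_motion`) are assumptions, not Asha’s documented behavior.

```python
# Assumed minimum obstacle clearance, in metres (illustrative value only).
SAFE_DISTANCE_M = 0.3

def gate_motion(operator_velocity: float, obstacle_distance_m: float) -> float:
    """Scale a teleoperated velocity command based on obstacle distance.

    The operator's command passes through unchanged in open space, is
    attenuated linearly as an obstacle approaches, and is overridden to
    zero inside the safety threshold (the autonomous "keep away" behavior).
    """
    if obstacle_distance_m <= SAFE_DISTANCE_M:
        return 0.0  # autonomous override: stop before a collision
    if obstacle_distance_m < 2 * SAFE_DISTANCE_M:
        # Linear ramp from 0 at the threshold to full speed at twice it.
        scale = (obstacle_distance_m - SAFE_DISTANCE_M) / SAFE_DISTANCE_M
        return operator_velocity * scale
    return operator_velocity
```

With these assumed numbers, a full-speed command passes through at 1 m clearance, is halved at 0.45 m, and is stopped entirely at 0.2 m.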
Despite the challenges imposed by the pandemic, the team worked dedicatedly for 18 months towards a prototype of Asha. New functionalities are being tested and added to improve the humanoid’s practical capabilities. Ultimately, the team aims to make Asha a viable solution for care-giving environments that are either remote or that don’t allow for close-proximity physical human interaction during high-risk times such as an epidemic or pandemic.