Building an email verification service that filters out invalid email addresses and domains. Also responsible for developing an email campaign system.
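A minimal sketch of the kind of two-stage check such a service might run (the actual filtering rules are not described here); the regex and the DNS resolution check are illustrative assumptions, not the production implementation:

```python
import re
import socket

# Illustrative syntax pattern; a production service would use a stricter
# validator or an established library rather than a hand-rolled regex.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_probably_valid(address: str) -> bool:
    """Cheap two-stage check: syntax first, then whether the domain resolves."""
    if not EMAIL_RE.match(address):
        return False
    domain = address.rsplit("@", 1)[1]
    try:
        # A resolving domain is only a weak signal; a real service would also
        # query MX records and possibly probe the SMTP server.
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    print(is_probably_valid("user@example.com"))   # passes both checks
    print(is_probably_valid("not-an-address"))     # fails the syntax check
```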
Explored the use of different modalities (text, audio) in few-shot and zero-shot action recognition tasks.
Researched techniques to generalize across three popular skeleton action datasets with different topologies and recording conditions (e.g., lab vs. real world), applied to skeleton action recognition on the NTU-60 and NTU-120 datasets.
Researched and developed state-of-the-art Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) models for skeleton activity recognition under the mentorship of Prof. Ravi Kiran Sarvadevabhatla.
Member, Model Porting Team (January 2020 – Present):
Implement custom layers, operations, and functions that are not yet supported so that models can be converted to their Intermediate Representation, boosting performance on edge devices such as the Intel Movidius Neural Compute Stick.
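As an illustration of this kind of porting work (a sketch under assumptions, not the team's actual code): an unsupported operation can be rewritten in terms of primitives the exporter already handles, after which the ONNX file can be converted to the Intermediate Representation with OpenVINO's Model Optimizer. The Mish activation and the input shape below are hypothetical examples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical case: a model uses the Mish activation, which the conversion
# toolchain does not yet support. Expressing it with supported primitives
# (softplus, tanh, multiply) lets the export and IR conversion go through.
class Mish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), Mish())
dummy = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting file can then be converted to OpenVINO's
# Intermediate Representation for deployment on the Neural Compute Stick.
torch.onnx.export(model, dummy, "model_with_custom_layer.onnx", opset_version=11)
```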
Member, Object Detection Team (August 2019 – December 2019):
Created and tested various architectures for object detection; also fine-tuned existing state-of-the-art networks for use in the MAVI vision module.
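One common fine-tuning pattern, sketched purely as an illustration (the actual MAVI architectures and class set are not given here): start from a COCO-pretrained torchvision detector and replace its classification head for the target classes.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 5  # background + 4 object classes; the count is illustrative

# Load a COCO-pretrained Faster R-CNN and swap in a new box predictor sized
# for the target classes; the pretrained backbone is kept for fine-tuning.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# The model can now be fine-tuned on a custom detection dataset with a
# standard training loop over model.parameters().
```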
Build and deploy 'VisionAir', an Android application that estimates the local Air Quality Index from an image clicked by the user while preserving the user's privacy, thereby deploying and testing the concept of Federated Learning, under the guidance of Dr. Aakanksha Chowdhery (Software Developer, Google Brain) and Prof. Brejesh Lall (IIT Delhi).
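The privacy-preserving idea rests on federated averaging: models are trained on-device and only weight updates are aggregated on the server. The sketch below shows the aggregation step in a generic form; it is a conceptual illustration, not the VisionAir code.

```python
from typing import Dict, List
import torch

def federated_average(client_states: List[Dict[str, torch.Tensor]],
                      client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """FedAvg-style aggregation: a weighted average of client model weights.

    Each client trains locally on its own images; only these weight tensors
    (never the raw photos) are sent to the server, preserving privacy.
    """
    total = float(sum(client_sizes))
    averaged: Dict[str, torch.Tensor] = {}
    for key in client_states[0]:
        averaged[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return averaged
```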
We introduce SynSE, a novel syntactically guided generative approach for Zero-Shot Learning (ZSL). Our end-to-end approach learns progressively refined generative embedding spaces constrained within and across the involved modalities (visual, language). The inter-modal constraints are defined between action sequence embedding and embeddings of Parts of Speech (PoS) tagged words in the corresponding action description. We deploy SynSE for the task of skeleton-based action sequence recognition.
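As a rough conceptual sketch of the inter-modal constraint described above (not the paper's actual architecture or training objective; the dimensions, projection networks, and cosine-based alignment term are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Conceptual illustration only: project skeleton-sequence features and
# PoS-tagged word embeddings (e.g., separate verb and noun embeddings of an
# action description) into a shared space and pull them together.
class ModalityProjector(nn.Module):
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

visual_proj = ModalityProjector(in_dim=256)   # skeleton-sequence features
verb_proj = ModalityProjector(in_dim=300)     # verb word embeddings
noun_proj = ModalityProjector(in_dim=300)     # noun word embeddings

def alignment_loss(vis_feat, verb_emb, noun_emb):
    z_v = visual_proj(vis_feat)
    # Inter-modal constraint: the visual embedding should stay close to the
    # embeddings of each PoS-tagged part of the action description.
    return (1 - F.cosine_similarity(z_v, verb_proj(verb_emb), dim=-1).mean()
            + 1 - F.cosine_similarity(z_v, noun_proj(noun_emb), dim=-1).mean())
```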