PROJECTS
Automatic Irula Speech Recognition Web Portal

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. P. Prabhusundhar, Assistant Professor, Dept of Computer Science, Gobi Arts College

Dr. R. Janani, Research Assistant, CMLI

Ms. Komalavalli R, II MCA, Gobi Arts College

Project Summary: The project methodology comprises several essential modules aimed at developing a proficient Automatic Speech Recognition (ASR) system tailored to the nuances of the Irula language. Initially, the data collection module gathers diverse audio recordings of spoken Irula from native speakers, ensuring a comprehensive dataset representative of various dialects and speech patterns. The data preprocessing phase then optimizes the collected data by reducing noise, normalizing signals, and segmenting audio files for efficient feature extraction. Feature extraction transforms the raw audio signals into a compact and informative feature space, enabling the acoustic model to discern speech patterns accurately. Leveraging Hidden Markov Models (HMM), the acoustic model processes the extracted features to identify and differentiate Irula speech sounds amid background noise. Complementing this, the language model, built on pre-trained GPT models and fine-tuned on Irula language data, provides crucial linguistic context for precise speech recognition. Finally, the Streamlit framework is used to build an intuitive web application interface, ensuring accessibility and ease of use for Irula speakers interacting with the ASR system. Through the seamless integration of these modules, the project aims to create a robust ASR solution that effectively bridges the language gap within the Irula community, facilitating improved communication and societal integration.
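As a rough sketch of how the feature-extraction and HMM acoustic-model stages described above could fit together, the snippet below uses librosa for MFCC features and hmmlearn for one Gaussian HMM per word class; the file names, class labels, and model sizes are illustrative assumptions rather than the project's actual configuration. A second sketch of the Streamlit interface follows the portal caption below.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(path, sr=16000, n_mfcc=13):
    # load a segmented Irula utterance and convert it to a (frames x n_mfcc) MFCC matrix
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# hypothetical training recordings, a few clips per Irula word class
train_sets = {
    "word_a": ["word_a_01.wav", "word_a_02.wav"],
    "word_b": ["word_b_01.wav", "word_b_02.wav"],
}

models = {}
for label, files in train_sets.items():
    feats = [mfcc_features(f) for f in files]
    X, lengths = np.vstack(feats), [len(f) for f in feats]
    # one Gaussian HMM per class, trained on all clips of that class
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    models[label] = model

def recognise(path):
    # pick the class whose HMM assigns the highest log-likelihood to the clip
    feats = mfcc_features(path)
    return max(models, key=lambda label: models[label].score(feats))
```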

Automatic Irula Speech Recognition web portal
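The summary above also mentions a Streamlit-based web interface. The sketch below shows one way such a portal could accept a recording and display the recognised text; the page title and the transcribe() call are placeholders standing in for the project's actual pipeline.

```python
import streamlit as st

st.title("Irula Automatic Speech Recognition")   # hypothetical portal title

audio_file = st.file_uploader("Upload an Irula speech recording", type=["wav"])
if audio_file is not None:
    st.audio(audio_file)                         # let the user play back the clip
    # transcript = transcribe(audio_file)        # placeholder for the ASR pipeline above
    transcript = "(recognised Irula text would appear here)"
    st.subheader("Recognised text")
    st.write(transcript)
```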

 

PROJECTS
Development of Mobile Application for Empowering Tribal Education in Irula Dialect

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Dr. R. Janani, Research Associate, DST-CURIE-AI, CMLI

Ms. Vasundra R S, I M.Sc, Dept of Computer Science

Project Summary: The project entitled “Development of Mobile Application for Empowering Tribal Education in Irula Dialect” has been developed using the Android Studio framework, with XML and Java as the front end and Firebase as the back end. The application is designed to address the unique needs of tribal children, who often have limited access to educational resources. It includes a range of educational content, including poems in their dialect and assessments. The application has a user-friendly interface featuring colourful graphics that appeal to children, along with interactive features such as audio and a media player. The main aim of the proposed system is to develop a mobile application for tribal children to learn an English poem in their own dialect. Users log in to the application with their username and password. After a successful login, the tribal child first takes a mental ability assessment and a pre-assessment; the application then gives a term-wise explanation of the poem in the Irula dialect and in English. After learning the poem, the child takes a post-assessment. All assessment scores and user authentication data are saved in Firebase. Firebase is a set of backend cloud computing services and application development platforms provided by Google; it hosts databases, services, authentication, and integration for an Android application. Overall, this mobile application for tribal education is an innovative solution that leverages mobile technology to improve access to education for tribal children. With its engaging content, user-friendly interface, and offline capabilities, it is an ideal tool for empowering tribal children with knowledge and skills for a better future.
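The application itself is written in Java on Android, but as a minimal sketch of the kind of data it stores, the snippet below uses Google's firebase-admin Python SDK to write one child's assessment scores to a Realtime Database; the service-account file, database URL, user ID, and field names are assumptions for illustration only.

```python
import firebase_admin
from firebase_admin import credentials, db

# hypothetical service-account key and database URL
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://example-irula-app.firebaseio.com/"
})

# store one child's assessment results under a per-user node
ref = db.reference("assessments/child_001")
ref.set({
    "mental_ability": 8,
    "pre_assessment": 5,
    "post_assessment": 9,
})

print(ref.get())   # read the scores back for verification
```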

 

                                            Irula_app_interface1

 

                                           Irula_app_interface2

 

PROJECTS
Tamil Voice-based Education Bot

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. T.T. Dhivyaprabha, Research Associate, DST-CURIE-AI

Ms. M. Mohana, Research Scholar, Dept of Computer Science

Ms. Divyasri S, II M.Sc, Dept of Computer Science

Project Summary: Mobile Learning (M-Learning) is a rapidly growing technology in the 21st century and plays a major role in educating children. Previous studies show that mobile applications effectively improve learners' engagement and motivation. The main aim of this proposed application is to develop a mobile application in the Tamil language that overcomes the language issues faced by native-language learners aged 8-10 years when learning the computer science subject. It incorporates adaptive learning, which customizes a student's learning by providing a flexible learning path, together with classical Q-learning, which adapts to the child's cognitive skills to improve the quality of learning through rewards. The proposed system follows CCI (Child Computer Interaction) standards because they set out base ideas for teaching children basic computer content: About the Computer, Uses of the Computer, Computer Hardware, and Computer Software. According to CCI standards, an educational application should be built around a child-centric concept to effectively engage children in learning. Thirteen multimodal preferences are derived from the various learning strategies; for example, the VA (Video and Audio) questionnaire is the bimodal combination of the visual and aural strategies. The proposed application is designed around this combination of two strategies, the VA questionnaire. First, children's basic knowledge of the computer science subject is identified through a pre-assessment, whose scores are analysed to recommend the learning content. The VA learning module then presents the content in visual and aural styles. Based on the learning-style selections, it offers three learning levels, (1) easy, (2) medium, and (3) hard, and finally shows the child's learning progress together with the post-assessment score. A pilot study was conducted with 65 randomly selected students from classes 3 and 5 of Sri Avinashilingam Aided Primary School. Validation was done in two ways, individual validation and group validation, along with feedback. The children were happy and interested to use the app and shared their feedback genuinely, which shows that the proposed application significantly increases children's interest and engagement in learning.
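As a minimal sketch of the classical Q-learning idea described above, the table-based example below treats the easy/medium/hard levels as states, the recommended next level as the action, and an assessment-derived score as the reward; the learning rate, discount factor, and reward values are assumptions, not the app's actual parameters.

```python
import numpy as np

levels = ["easy", "medium", "hard"]         # learning levels act as states
actions = [0, 1, 2]                         # action = index of the level recommended next
Q = np.zeros((len(levels), len(actions)))   # tabular Q-values
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount factor, exploration rate

def choose_level(state):
    # epsilon-greedy choice of the next learning level
    if np.random.rand() < epsilon:
        return int(np.random.choice(actions))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # classical Q-learning update rule
    Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])

# e.g. a child at the "easy" level does well on a VA questionnaire, so the
# (hypothetical) reward is positive and the policy drifts towards "medium"
state = levels.index("easy")
action = choose_level(state)
reward = 1.0                                # derived from the assessment score in practice
update(state, action, reward, next_state=action)
```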

                                                 App_Interface

PROJECTS

Artificial Intelligence (AI)-Internet of Things (IoT)-based Environmental Monitoring System for Mushroom Cultivation

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. M. K. Nisha, Assistant Professor, Department of Botany

Ms. E. Gaayathri Devi, Research Scholar, Department of Botany

Ms. V. Narmadha, Technical Assistant, DST-CURIE-AI

Project Summary: Mushroom cultivation can help reduce vulnerability to poverty and strengthen livelihoods by generating a fast-yielding, nutritious source of food and a reliable source of income. AI-based mushroom cultivation employs a wireless network system to monitor the farming process and thus reduce human intervention. Biosensors are used to monitor the temperature, humidity, carbon dioxide concentration, and light intensity in the mushroom farm. The data are collected to monitor the environmental conditions of the farm and are passed to the control unit through a server. The current status of the parameters is transmitted to the remote monitoring station via a pair of low-power ESP8266 modules acting as Wi-Fi modems. The code for the controller was written in the Arduino programming language, debugged, compiled, and burnt into the microcontroller using the Arduino integrated development environment (IDE). The collected sensor data for all parameters are stored on the Google cloud server. k-means clustering is used to develop a Decision Support System, and a Graphical User Interface tool will be developed using open-source technologies to find the optimum environmental conditions for mushroom cultivation. With the techniques used in this research, the environmental factors that affect cultivation can be balanced, so problems can be overcome to obtain a high yield of mushrooms.
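As an illustration of the k-means step mentioned above, the sketch below clusters a few hypothetical sensor readings (temperature, humidity, CO2, light) into two environmental states using scikit-learn; the numbers and the two-cluster choice are assumptions, not measurements from the actual farm.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical sensor log: temperature (deg C), humidity (%), CO2 (ppm), light (lux)
readings = np.array([
    [24.5, 85.0,  900.0, 120.0],
    [23.8, 88.0,  850.0, 100.0],
    [27.0, 70.0, 1500.0, 300.0],
    [28.2, 65.0, 1600.0, 350.0],
])

# group the readings into two environmental states for the decision support system
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
print(kmeans.cluster_centers_)   # cluster centres suggest the operating ranges to maintain
print(kmeans.labels_)            # which state each reading falls into
```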

                              Team visit at mushroom culture room                  Experimental study in mushroom culture room

 

 

PROJECTS
AI-based Intelligent Mosquito Trap to Control Vector-Borne Diseases

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Dr. T.T. Dhivyaprabha, Research Associate, DST-CURIE-AI

Ms. B. Gayathre (19PCA001), II MCA, Department of Computer Science

Project Summary: Vector-borne diseases are among the most harmful threats to human health, affecting nearly seven hundred million people every year and causing one million deaths annually. Information on mosquito species' population and spatial distribution is essential in identifying vector-borne diseases. Mosquito prevention and monitoring programs are established by public health departments using mosquito traps. Many monitoring systems have already been implemented in response to the worldwide spread of mosquitoes and mosquito-borne infections, although mosquito population monitoring remains inadequate and time-consuming when it comes to identifying mosquito species and diseases. Aedes aegypti, Aedes albopictus, Anopheles gambiae, Anopheles arabiensis, Culex pipiens, and Culex quinquefasciatus are the six primary mosquito species prevalent in India that inflict vector-borne diseases. This project aims to construct an IoT-based mosquito-borne disease identification system using machine learning algorithms. The proposed methodology is as follows. Mosquito wingbeat audio is collected from the Kaggle website, and noise is eliminated from the wingbeat audio files using a Butterworth pre-processing filter. After pre-processing, the wingbeat signal is subjected to frequency feature extraction using the Fast Fourier Transform, followed by classification with a Decision Tree algorithm to classify mosquito wingbeat signals. In the experimental findings and analysis, the accuracy of the constructed system is compared with and without pre-processing. The system enables monitoring of the mosquito population and epidemics through automation, delivering correct output in a defined time frame without human intervention.
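A rough sketch of the described pipeline (Butterworth noise filtering, FFT-based frequency features, Decision Tree classification) is given below; the synthetic clips, sampling rate, cutoff frequency, and tree depth are placeholders, and the real system would use the Kaggle wingbeat recordings instead.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.tree import DecisionTreeClassifier

fs = 8000                                        # assumed sampling rate of the wingbeat clips

def butter_lowpass(signal, cutoff_hz=1000, order=4):
    # Butterworth low-pass filter to suppress background noise
    b, a = butter(order, cutoff_hz / (0.5 * fs), btype="low")
    return filtfilt(b, a, signal)

def fft_features(signal, n_bins=64):
    # magnitude spectrum summarised into a fixed number of frequency bins
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

# placeholder data standing in for the Kaggle wingbeat recordings (six species)
rng = np.random.default_rng(0)
clips = [rng.standard_normal(fs) for _ in range(30)]
species = rng.integers(0, 6, size=30)

X = np.array([fft_features(butter_lowpass(c)) for c in clips])
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, species)
print(clf.predict(X[:3]))                        # classify a few wingbeat signals
```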

                              Experimental Testing Kit                                 Methodology of IoT Integrated with ML Phase

 

PROJECTS

Technology Enhanced Mulsemedia Learning in STEM Education for Enhancing the Learner's Quality of Experience (QoE)

Team Members: 

Dr. P. Subashini, Professor, Dept of Computer Science

Dr. N. Valliammal, Assistant Professor (SG), Dept of Computer Science

Ms. M. Mohana, Research Scholar, Dept of Computer Science

Ms. V. Suvetha, II MCA, Dept of Computer Science

Project Summary: Affective computing refers to the development of technologies that enable machines to recognize and respond to human emotions, essentially creating a form of artificial emotional intelligence. Mulsemedia combines multiple media formats such as audio, video, and interactive content to create an immersive learning experience. Multisensorial learning, on the other hand, engages multiple senses, such as sight, hearing, haptics, and olfaction, to enhance the learning experience. This research focuses on STEM education, which is an ideal field for the implementation of Mulsemedia owing to its emphasis on science, technology, engineering, and mathematics. Mulsemedia can help to overcome some of the limitations of e-learning by providing a more interactive and engaging learning experience, allowing students to explore complex concepts and theories in a more accessible manner. This project proposes a new perspective to achieve the model “Technology Enhanced Mulsemedia Learning for Enhancing Quality of Experience” by integrating devices such as an Arduino UNO microcontroller, exhaust fans, an ultrasonic humidifier for olfaction, and haptics. The project targets students between 20 and 25 years old to provide them with a better Quality of Experience (QoE) while learning. QoE is assessed through subjective measures, such as self-reported feedback from students, which is an important aspect of evaluating the effectiveness of Mulsemedia. The project will examine the impact of both subjective and objective measures: subjective measures rely on personal experiences and opinions, while objective measures use quantifiable data, such as GSR (galvanic skin response). When mulsemedia elements are incorporated into a learning experience, learners may experience higher levels of engagement and emotional response, which can lead to higher GSR readings and potentially better learning outcomes. Thus, the research aims to enhance e-learning by incorporating multisensory activities and integrating devices to provide an immersive and engaging learning experience.
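As a minimal sketch of how the objective GSR measure could be captured during a mulsemedia session, the snippet below reads values streamed by the Arduino UNO over a serial connection using pyserial; the port name, baud rate, and session length are assumptions for illustration.

```python
import time
import serial  # pyserial

# hypothetical serial port and baud rate for the Arduino UNO streaming GSR readings
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

gsr_log = []
start = time.time()
while time.time() - start < 60:                  # log one minute of a mulsemedia session
    line = port.readline().decode(errors="ignore").strip()
    if line.isdigit():
        gsr_log.append(int(line))                # raw GSR value read from the analog pin

port.close()
print("mean GSR during session:", sum(gsr_log) / max(len(gsr_log), 1))
```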

                                        Mulsemedia Kit                                                                            Mulsemedia Web portal

 
