PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science.
Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Ms. E. Rathipriya, Technical Assistant, CMLI.
Ms. S. Srina, II M.Sc. CS.

Project Summary:

The Real-Time Error Identification and Nutrient Analysis System is designed to enhance the Solwearth Organic Waste Converter by integrating video processing and sensor technology for automated monitoring and analysis. The project aims to eliminate the need for manual inspection by providing real-time updates on the system's status through an Android application. A web camera continuously captures video of the decomposition process, and HSV color space-based image processing is used to analyze the waste conversion status. The Arduino Uno serves as the central controller, collecting real-time data from sensors. Load cell sensors measure the waste weight before and after decomposition, while an NPK sensor evaluates the manure’s nutrient content by measuring Nitrogen (N), Phosphorus (P), and Potassium (K) levels. The collected data, including weight measurements, decomposition status, and nutrient values, is sent to a ThingSpeak dashboard for remote monitoring and storage. Users can track the system's real-time status via an Android application, ensuring efficient waste processing and nutrient analysis.
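As a rough illustration of the HSV-based status check, the sketch below uses OpenCV to threshold a captured frame inside an HSV colour band; the band limits, the 0.6 coverage threshold, and the camera index are illustrative assumptions, not values from the project:

```python
import cv2
import numpy as np

def decomposition_status(frame, lower=(10, 80, 60), upper=(30, 255, 200), coverage=0.6):
    """Classify the conversion status from the fraction of pixels that fall
    inside an HSV colour band (band and threshold are placeholder values)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    ratio = cv2.countNonZero(mask) / mask.size
    return "decomposed" if ratio >= coverage else "in progress"

cap = cv2.VideoCapture(0)          # web camera watching the converter
ok, frame = cap.read()
if ok:
    print(decomposition_status(frame))
cap.release()
```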

[Image: Nutrient analysis result]
PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science.
Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Dr. A. Dhanalakshmi, Associate Professor of Computer Science, Gobi Arts and Science College.
Dr. R. Janani, Research Assistant, CMLI.
Ms. Tamil Purani T, II MCA, Gobi Arts and Science College.

Project Summary:

The project entitled “QUESTION BANK GENERATION TOOL USING LARGE LANGUAGE MODEL” aims to develop an AI tool that generates customized questions based on syllabus content and reference materials, offering both MCQ and open-ended question formats with specified difficulty levels. Traditionally, creating question banks was a difficult and time-consuming process. Educators and subject matter specialists were responsible for examining the curriculum, textbooks, and reference materials to identify crucial ideas and significant themes. To obtain a comprehensive understanding of the subject, this process involved deliberately constructing questions with varying degrees of difficulty. Teachers had to craft these questions carefully to guarantee a suitable balance of types, such as essay-style questions, multiple-choice questions (MCQs), and short-answer questions. Ensuring that the questions were precise, understandable, and consistent with the learning objectives required a great deal of work and careful planning. Additionally, to guarantee the quality and applicability of the question bank, teachers frequently had to collaborate and conduct several reviews. Despite their best efforts, this manual approach frequently resulted in mistakes, inconsistencies, and gaps in topic coverage. The method became even more complicated when handling large amounts of content, especially from comprehensive reference materials. As educational demands increased, the need for a quicker, more effective way to produce high-quality question banks became apparent. The Question Bank Generation Tool overcomes these challenges by using technology to automate and improve the question creation process. With this tool, users can upload reference materials and syllabus documents, which are then examined to extract important details that help create thoughtful and organized questions. Users can customize their preferences by choosing the question type (MCQs, 5-marks, 10-marks), the desired level of difficulty (Easy, Medium, Hard), and the required number of questions. The tool evaluates the uploaded content intelligently, extracts important details, and creates insightful, well-structured questions that correspond with the reference and syllabus materials. To preserve quality and relevance, the algorithm ensures that the variety of question types and difficulty levels is balanced. Once generated, the questions appear on the web interface for review. Users can also choose to download the created questions in JSON format for convenient access and use in learning materials. By integrating automation with a user-friendly web interface, the Question Bank Generation Tool ensures that clear, correct, and syllabus-focused questions are created while reducing human effort. This solution is intended to help content creators, institutions, and educators create effective and efficient question banks. To ensure both accuracy and efficiency, the Question Bank Generation Tool is developed using a systematic method. The web platform is designed with a user-friendly interface that allows users to log in, upload syllabus and reference materials, and specify their preferences for question type, difficulty level, and quantity. After submission, the system preprocesses the content by removing unnecessary language and extracting important information.
Based on the inputs supplied, the refined model then produces insightful and well-structured questions. The output is refined using post-processing techniques to guarantee relevance, coherence, and clarity. The generated questions are then presented for review on the website, where users can also choose to download them in JSON format for convenience.
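A minimal sketch of the generation step, assuming a generic llm_generate(prompt) callable standing in for whatever LLM the tool wraps; the prompt wording, field names, and output file name are illustrative:

```python
import json

def build_prompt(syllabus_text, qtype, difficulty, count):
    # Illustrative instruction; the tool's actual prompt is not shown in the summary.
    return (
        f"Using only the syllabus below, generate {count} {qtype} questions "
        f"of {difficulty} difficulty. Return a JSON list of objects with "
        "'question', 'type', and 'difficulty' keys.\n\n"
        f"Syllabus:\n{syllabus_text}"
    )

def generate_question_bank(llm_generate, syllabus_text):
    raw = llm_generate(build_prompt(syllabus_text, "MCQ", "Medium", 10))
    questions = json.loads(raw)                      # expect JSON back from the model
    with open("question_bank.json", "w") as fh:      # downloadable JSON output
        json.dump(questions, fh, indent=2)
    return questions
```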

PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science.
Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Dr. G. T. Prabavathi, Associate Professor of Computer Science, Gobi Arts and Science College.
Ms. E. Rathipriya, Technical Assistant, CMLI.
Ms. M. Esaivani, Teaching Assistant, CMLI.
Mr. Muralitharan J, II MCA, Gobi Arts and Science College.

Project Summary:

Milk is a crucial component of daily nutrition, yet its quality is frequently compromised by adulteration. Existing detection methods are largely laboratory-based, involving complex procedures that are slow and inefficient. These methods lack real-time monitoring capabilities, making them ineffective for large-scale dairy industries and distribution chains. The delay in detecting adulteration increases the likelihood of contaminated milk being consumed, leading to serious health risks. To overcome these challenges, a real-time automated system is necessary. The proposed system integrates IoT sensors and machine learning to enable real-time milk quality monitoring. This approach eliminates the limitations of traditional chemical testing, offering an efficient, cost-effective, and accessible solution. The system continuously monitors milk properties using IoT-based sensors and classifies milk as pure or adulterated using a Decision Tree algorithm. The results are displayed instantly on an OLED screen, ensuring immediate feedback. If adulterants are detected, the system identifies the type of adulterant and determines the overall toxicity level. Additionally, an LED indicator provides a visual alert for adulterated milk, while the collected sensor data is stored and analyzed on the ThingSpeak cloud dashboard for remote monitoring. The primary objective of this project is to ensure milk quality and safety by implementing a real-time adulteration detection system using IoT sensors and machine learning. The system aims to replace traditional, labor-intensive chemical testing with an automated and instant detection method, providing more efficient and accurate results. By integrating advanced sensing technology, the system can detect specific adulterants such as sodium bicarbonate, detergent, excess water, urea, and formalin. Additionally, temperature sensors are employed to monitor deviations that may indicate contamination, further enhancing the accuracy of the detection process. To assess milk safety comprehensively, the system analyzes the concentration levels of adulterants and determines overall toxicity, enabling both consumers and regulatory authorities to take preventive action against contaminated milk. This feature is crucial in mitigating the potential health risks associated with milk adulteration. The system also offers real-time monitoring by displaying test results instantly on an OLED screen, allowing users to make informed decisions. Moreover, an LED indicator provides a visual alert when adulteration is detected, ensuring quick identification of unsafe milk. To enhance accessibility and usability, the system is integrated with ThingSpeak, a cloud-based platform that enables remote data storage and real-time monitoring. This ensures that dairy farms, milk collection centers, and processing units can efficiently track milk quality from any location. Additionally, the system is designed to be cost-effective, reducing dependency on expensive laboratory-based chemical testing while offering a scalable solution for large-scale dairy operations. By automating the detection process, the system significantly improves efficiency, eliminates human error, and ensures a reliable and consistent approach to milk adulteration detection. The system consists of multiple IoT sensors, including a pH sensor, a conductivity sensor, a temperature sensor, and a formaldehyde sensor, that continuously measure milk properties.
The collected data is processed using a Decision Tree machine learning algorithm, which classifies milk quality with high accuracy. The OLED display provides instant results, while the LED indicator serves as a quick alert mechanism. Integration with ThingSpeak enables remote monitoring, allowing stakeholders to track milk quality from any location. The automation of this process reduces human intervention, improving efficiency and reliability while eliminating dependency on complex laboratory testing. This advanced detection framework enhances milk quality assessment, facilitates real-time monitoring, and ensures better consumer protection by enabling early detection of contaminants. By leveraging IoT and machine learning, this approach provides a scalable and efficient solution for detecting adulteration and maintaining milk safety. The proposed system eliminates the need for extensive chemical testing, reducing time and cost while enhancing accuracy. As food safety regulations become more stringent, such smart detection systems will play a crucial role in maintaining milk purity and ensuring consumer safety.
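A minimal sketch of the classification step, assuming scikit-learn and a hypothetical CSV log of sensor readings with the column names shown:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical log exported from the IoT node: one row per milk sample.
df = pd.read_csv("milk_sensor_readings.csv")
X = df[["ph", "conductivity", "temperature", "formaldehyde"]]
y = df["label"]                       # e.g. "pure" or an adulterant name

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
print(clf.predict([[6.4, 5.2, 28.0, 0.1]]))   # classify one fresh reading
```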

PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science.
Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Dr. S. Meenakshi, Associate Professor of Computer Science, Gobi Arts and Science College.
Mrs. S. Bhuvaneswari, Research Scholar, Department of Computer Science.
Ms. Rumaiza Fathima RJ, II MCA, Gobi Arts and Science College.

Project Summary:

Crime detection and prevention have always been critical concerns for society. The increasing number of recorded crimes, including robbery, fighting, shooting, and accidents, highlights the need for advanced surveillance techniques to monitor and analyze crime-related activities effectively. Surveillance videos capture a variety of realistic anomalies, but compared to normal activities, abnormal events are rare. Identifying these abnormal events plays a crucial role in video surveillance, requiring AI-based models to enhance crime analysis and response. The project entitled “AUTOMATED FORENSIC CRIME ANALYSIS SYSTEM USING VISION LANGUAGE MODELS FOR CRIME DESCRIPTION AND SUMMARIZATION IN SURVEILLANCE VIDEOS” aims to develop a robust AI-powered forensic system capable of detecting, describing, and summarizing crime events in surveillance videos. The system integrates VLMs and LLMs to automate crime analysis, reducing the need for manual monitoring. The four important steps carried out in this project are preprocessing, anomaly detection, crime scene description, and crime scene summarization. Preprocessing: This involves extracting frames from videos, resizing them to a standard resolution, and normalizing pixel values to ensure uniformity and enhance model efficiency. Frame extraction converts video data into individual images, allowing for frame-wise analysis of crime events. Resizing ensures that all frames maintain consistency across different surveillance sources, making the model more robust to variations in video quality. Normalization scales pixel values within a predefined range, improving model convergence and reducing computational overhead. Anomaly Detection: Anomaly detection is the process of identifying whether a given video contains normal or anomalous events. This process is essential in surveillance systems for automatically detecting suspicious activities without manual monitoring. Crime Scene Description: The crime scene description method generates a detailed textual description of the crime event using a VLM. This description captures crucial details such as the nature of the crime, the actions of individuals involved, and the overall context of the scene. By processing key frames, the generated text follows a structured approach, making it easier to interpret crime events without manually reviewing extensive surveillance videos. Crime Scene Summarization: The crime scene summarization method utilizes a Large Language Model (LLM) to condense the extracted descriptions into a concise crime report. This step eliminates redundant details while retaining the most critical aspects of the crime. The LLM processes multiple descriptions from different frames, condensing them into a short, structured summary that provides an overview of the incident. This approach ensures that law enforcement agencies receive clear and actionable reports without having to analyze lengthy textual data.
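A minimal sketch of the preprocessing stage, assuming OpenCV; the frame size, sampling stride, and file name are illustrative choices:

```python
import cv2

def extract_frames(video_path, size=(224, 224), stride=30):
    """Sample every `stride`-th frame, resize to a standard resolution,
    and normalize pixel values to [0, 1] for the downstream models."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            frame = cv2.resize(frame, size)
            frames.append(frame.astype("float32") / 255.0)
        idx += 1
    cap.release()
    return frames

key_frames = extract_frames("surveillance_clip.mp4")   # assumed input file
```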

PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science.
Dr. S. Meenakshi, Associate Professor of Computer Science, Gobi Arts and Science College.
Dr. R. Janani, Research Assistant, CMLI.
Ms. Sathya Sree S, II MCA, Gobi Arts and Science College.

Project Summary:

The project, entitled “Prediction of Toxicity in Drug Discovery using Quantum Machine Learning,” aims to improve the identification of potentially toxic compounds by combining quantum computing with advanced machine learning techniques. The four important steps carried out in this project are: input data, feature extraction and selection, quantum encoding, and model training and evaluation. Input data: The raw dataset contains 1800 drugs represented as SMILES strings. The target variable indicates whether a drug exhibits toxicity (1) or not (0). Feature Extraction and Selection: RDKit is used to extract molecular descriptors from the SMILES strings. These descriptors serve as features that represent the chemical properties of the compounds. After extracting the features, the most relevant ones are selected, focusing on the top 10 features with the highest correlation to toxicity prediction. This helps reduce the dimensionality of the data while maintaining its predictive power. Quantum Encoding: This step involves converting the classical features (molecular descriptors) into quantum states using quantum gates and circuits. Quantum encoding techniques allow the representation of classical data in a quantum system, enabling the use of quantum algorithms to process the information. Once the data is encoded into quantum states, the fidelity between the quantum states representing the classical data is calculated. Based on these fidelity values, a fidelity-based quantum kernel is constructed, which can be used in quantum machine learning models. Model Training and Evaluation: The fidelity-based quantum kernel is applied to a Support Vector Machine, which is trained on the dataset. The performance of the QSVM is compared with classical machine learning models using metrics such as Accuracy, F1 Score, Recall, Precision, ROC curve, and Confusion Matrix.
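A minimal sketch of the fidelity-kernel QSVM, assuming Qiskit's qiskit-machine-learning package and synthetic stand-in data in place of the scaled molecular descriptors:

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 10))   # stand-in for the 10 selected descriptors
y = rng.integers(0, 2, size=40)            # stand-in toxicity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

kernel = FidelityQuantumKernel(feature_map=ZZFeatureMap(feature_dimension=10, reps=2))
K_train = kernel.evaluate(x_vec=X_tr)                # fidelities between training states
K_test = kernel.evaluate(x_vec=X_te, y_vec=X_tr)     # test-versus-train fidelities

qsvm = SVC(kernel="precomputed").fit(K_train, y_tr)
print("accuracy:", qsvm.score(K_test, y_te))
```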
 

PROJECTS

Team Members: Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Ms. E. Rathipriya, Technical Assistant, CMLI.
Ms. K. Deepika, III B.Sc. CS.
Ms. S. Vani, III B.Sc. CS.
Ms. V. Indhu Ilakkiya, III B.Sc. CS.

Project Summary:

Beverages used in day-to-day life differ in quality and chemical composition, and it is important to identify the harmful and beneficial substances in them. To address this, an artificial taste-bud system known as the electronic tongue (E-tongue) has been developed. The E-tongue is a sensory system that mimics the human sense of taste to detect and classify beverages. It consists of numerous sensors that help detect the chemical substances in beverages. It analyses the tastes (sweetness, bitterness, sourness) present in those beverages and also detects acid and base levels in beverage samples using a glass pH electrode sensor, with a conductivity sensor used to determine sweetness. Instead of taste sensory receptors, the conductivity and pH sensors are used here to determine the flavours of the beverages. The dataset collected using the sensors is then used to train a KNN model for better accuracy. This technology not only enhances beverage quality control but also supports food safety regulations by enabling real-time, accurate assessments of taste and chemical composition. By predicting the pH level, conductivity level, gas level, and temperature of beverages, it can be concluded whether a beverage is suitable for consumption or should be avoided, with ML techniques providing a suitable environment for monitoring beverage activity through the temperature and gas readings. The final outcome indicates whether the beverage is consumable or non-consumable.
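A minimal sketch of the classification step, assuming scikit-learn and a hypothetical CSV of E-tongue readings with the column names shown:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical sensor log: one row per beverage sample.
df = pd.read_csv("etongue_readings.csv")
X = df[["ph", "conductivity", "gas", "temperature"]]
y = df["consumable"]                 # e.g. "consumable" / "non-consumable"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", knn.score(X_te, y_te))
```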

[Image: E-tongue output]
PROJECTS

Team Members: Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Ms. Aanisa S, Technical Assistant, CMLI.
Ms. Sowndharya B, III B.Sc. CS.
Ms. Sruthi M, III B.Sc. CS.

Project Summary:

The integration of Augmented Reality (AR) into school chemistry practicals revolutionizes learning by addressing safety, accessibility, and engagement challenges. AR enables students to simulate hazardous experiments, reducing risks and providing access to activities limited by infrastructure or resources. Through 3D visualization, AR helps students understand complex chemical processes and reaction mechanisms. It fosters an interactive and immersive learning environment, enhancing engagement, motivation, and retention of knowledge while encouraging exploration beyond the traditional curriculum. AR applications offer instant feedback, allowing students to learn from mistakes in real time and practice experiments without material constraints. Collaborative features promote teamwork and critical thinking through group activities and discussions. Practical applications include virtual acid-base titrations, salt analysis, and molecular bond visualizations. Using smartphones, tablets, or AR goggles, students can interact with virtual lab equipment and observe chemical phenomena vividly and safely. This study explores the integration of augmented reality in chemistry labs using the Unity software, aiming to enhance interactive learning experiences by simulating laboratory experiments in an immersive virtual environment. By integrating AR, educators create a cost-effective, engaging, and safe learning environment, bridging the gap between theory and practice while inspiring interest in chemistry and STEM fields.

[Image: Chemylab output]
PROJECTS

Team Members: Dr. M. Krishnaveni, CMLI Co-coordinator, Assistant Professor (SG), Department of Computer Science.
Dr. D. Mathivadhani, Senior Technical Assistant, Department of Computer Science.
Dr. R. Janani, Research Assistant, CMLI.
Ms. S. Sindhuja, III B.Sc. CS.
Ms. S. Suganya, III B.Sc. CS.
Ms. M. Kalaivani, III B.Sc. CS.

Project Summary:
Speech recognition, also known as automatic speech recognition (ASR) or speech-to-text, is a capability that enables a program to process human speech into a written format. Indigenous languages often face the threat of extinction due to a lack of documentation, digital resources, and limited use in mainstream communication. The Badagas are the largest community in the Nilgiris district. This project aims to develop a speech-to-text translator for the Badaga language, which will serve as a bridge to preserve and promote this endangered language.

[Image: Badaga project output]
PROJECTS

Team Members: 

Prof. P. Subashini, Professor, Dept of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science 

Mrs. V. Narmadha, Technical Assistant, CMLI

Project Summary: Farm automation is often associated with smart farming. FarmBot is a robot that farms different crops within a particular area. The robot moves around using tracks on the sides of the box and works in three dimensions: it moves left, right, forward, backward, up, and down. FarmBot sows seeds, waters plants, and gets rid of weeds using different tools depending on the task, and it monitors the plants 24/7. FarmBot is deployed in our centre, where it helps us conduct training for students, research scholars, NGOs, and entrepreneurs to create awareness and impart knowledge about technology-aided farming. It nurtures interests and skills and motivates various stakeholders to establish new startups and pursue agri-related product development.

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. S. Meenakshi, Associate Professor & Head, Department of Computer Science, Gobi Arts and Science College

Ms. Jayashree Ganeshkumar, Research Scholar, Department of Computer Science

Ms. T. Bharathi, II MCA, Gobi Arts and Science College

Project Summary: The project entitled “STUDENT’S PERFORMANCE PREDICTION USING STACKED ENSEMBLE TECHNIQUE ON ONLINE PROGRAMMING COURSE” aims to identify students who are at risk of facing challenges in online programming courses and eventually dropping out. Once at-risk students are identified, the system aims to facilitate timely intervention strategies, such as additional support, counseling, or targeted learning resources, to help these students improve their programming performance.

To predict students’ final scores based on their programming submission data and elucidate the predictive model's decisions, a structured approach involving four key steps is followed. i) Data preprocessing is executed to refine the raw data, ensuring its cleanliness and suitability for analysis; this involves tasks such as handling missing values and encoding categorical variables. ii) Feature engineering is conducted to extract data-driven features from the programming submission data; this step aims to enhance the representation of patterns and relationships within the dataset, facilitating more accurate predictions. iii) Regression models are developed and used to forecast the final scores from the engineered features; regression techniques are chosen for their suitability for predicting continuous outcomes like numerical scores. iv) The model's decisions are elucidated using interpretability techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations); these methods provide valuable insights into the influential features driving the model's predictions, thereby enhancing transparency and understanding of the predictive process.
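A minimal sketch of step iii, assuming scikit-learn; the base learners and the synthetic data stand in for the engineered submission features:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for engineered submission features and final scores.
X, y = make_regression(n_samples=500, n_features=12, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print("R2:", stack.score(X_te, y_te))   # the project reports an R2 of 0.72
```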

This project predicts student performance using the stacked ensemble model with an R² of 0.72, and it has been developed using Python.

[Image: Students prediction project 1]
PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Ms. S. Aiswarya, Research Scholar, Department of Computer Science

Ms. Vasundra R. S., II M.Sc. CS, Department of Computer Science

Project Summary: A cataract is a cloudy area in the lens of the eye that leads to a decrease in vision. Cataracts often develop slowly and can affect one or both eyes. Cataract is a commonly observed cause of visual impairment and blindness worldwide, and it can lead to blindness if not detected and treated early. About 20 million people worldwide are blind due to cataracts. Traditional cataract examination tools and techniques can only be handled by skilled ophthalmologists, making it impractical to conduct mass screenings for early-stage cataract detection due to a shortage of ophthalmologists and the time-consuming nature of these procedures. This project serves as decision support for optometrists and ophthalmologists in identifying cataract types (Nuclear Sclerosis, Cortical Cataract, and Posterior Subcapsular Cataract) along with their respective grades (grade 1, grade 2, grade 3, and grade 4). The project involves a systematic approach to developing a deep learning model for cataract multiclassification and grading, aimed at supporting optometrists and ophthalmologists in clinical decision-making. Initially, image data are collected from the real world and from case studies to construct a comprehensive dataset. Subsequently, the lens of the eye is segmented using an image processing technique, image masking, to isolate and focus on the affected areas within the image. Data preprocessing techniques, such as augmentation (flipping, scaling, rotation), are then applied to enhance the diversity of the dataset, which ultimately improves the performance of the deep learning models. For the multiclassification and grading, deep learning models such as CNN, ResNet50, VGG16, and InceptionNet are employed. These models are trained to classify different types of cataracts (such as Nuclear Sclerosis, Cortical Cataract, and Posterior Subcapsular Cataract) and assign appropriate grades based on the severity of the condition. To further enhance accuracy and robustness, an ensemble learning approach is adopted, where predictions from multiple models (e.g., CNN, VGG16, InceptionV3, ResNet50) are combined using a majority voting classifier. This ensemble strategy leverages the strengths of each individual model to generate a final prediction for each image, leading to improved diagnostic accuracy and reliability compared to using individual models. The outcome of this project aims to deliver a sophisticated deep learning model capable of accurate cataract classification and grading, together with a user-friendly graphical interface to facilitate effective cataract diagnosis and grading by optometrists and ophthalmologists.
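A minimal sketch of the majority-voting step over per-model class predictions; the vote values below are toy data, not project results:

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_images) array of class ids predicted by
    CNN, VGG16, InceptionV3, and ResNet50; returns the per-image mode."""
    preds = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in preds.T])

votes = [[0, 2, 1],   # CNN
         [0, 2, 2],   # VGG16
         [1, 2, 1],   # InceptionV3
         [0, 1, 1]]   # ResNet50
print(majority_vote(votes))            # -> [0 2 1]
```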

PROJECTS

Team Members: Dr. D. Mathivadhani, Senior Technical Assistant, Department of Computer Science

Ms. V. Narmadha, Technical Assistant, CMLI

Ms. Aanisa S, III B.Sc. CS, Department of Computer Science

Ms. Haripriya K, III B.Sc. CS, Department of Computer Science

Ms. Pavithra K, III B.Sc. CS, Department of Computer Science

Project Summary: This project proposes an enhanced shopping assistant application designed to empower partially sighted individuals. Leveraging Augmented Reality (AR) technology, the application facilitates independent shopping by enabling users to access detailed product information through barcode scanning. Upon scanning a product barcode with their smartphone or tablet camera, users can view product details and a 3D model overlaid onto the real world through the AR interface. Developed using Unity and Vuforia, the application aims to improve accessibility and promote greater independence for partially sighted individuals by providing a richer and more informative shopping experience.

[Image: Shopping project 1]

[Image: Shopping project 2]
PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science

Dr. R. Janani, Research Assistant, CMLI

Ms. Logavathani R, II MCA, Department of Computer Science

Project Summary: The primary goal of this project is to detect and classify seizures and other types of harmful brain activity using EEG signals recorded from critically ill patients. The significance of the work is to improve electroencephalography pattern classification accuracy, which helps unlock transformative benefits for neurocritical care, epilepsy, and drug development. The classification covers six patterns: seizure, generalized periodic discharges, lateralized periodic discharges, lateralized rhythmic delta activity, generalized rhythmic delta activity, and others. Advancement in this area helps doctors and brain researchers detect seizures or other brain damage, enabling faster and more accurate treatment for ill patients. The methodology involves loading EEG data recorded from ill patients, preprocessing it to enhance quality, visualizing it to identify patterns, training deep learning models such as EfficientNetV2 on labeled data, and using these models to classify EEG signals into the patterns above, aiding neurocritical care, epilepsy treatment, and drug development. This project employs various deep learning algorithms, including EfficientNetV2, DenseNet, ResNet, and MobileNet, for classifying seizures and other brain activities based on EEG signals. Among these, EfficientNetV2 demonstrates exceptional efficiency in pattern classification, contributing to the project's advancements in neurocritical care, epilepsy treatment, and drug development.
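A minimal sketch of the classifier head, assuming TensorFlow/Keras and spectrogram-style image inputs; the input size and training details are illustrative:

```python
import tensorflow as tf

NUM_CLASSES = 6   # seizure, GPD, LPD, LRDA, GRDA, other

base = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds)  # EEG segments rendered as images
```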

[Image: HMS project 1: visualizing some samples of the dataset]

[Image: HMS project 2: modelling]
PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. A. Dhanalakshmi, Associate Professor, Department of Computer Science, Gobi Arts and Science College

Ms. V. Narmadha, Technical Assistant, CMLI

Dr. Jennyfer Susan M B, Assistant Professor, CMLI

Mr. Sesan D, II MCA, Gobi Arts and Science College

Project Summary: The global population includes a significant number of speech and hearing-impaired individuals who encounter unique communication challenges, particularly in public institutions like post offices. This study introduces the Postal Sign Recognition System for Indian Sign Language, designed to facilitate smoother interactions between post office staff and customers with hearing or speech impairments. The system's methodology involves crucial steps: data collection, preprocessing, object tracking, and recognition. Data collection utilizes a Raspberry Pi and a web camera set up to capture Indian Sign Language gestures. Preprocessing techniques, including frame differencing and contour analysis, enhance the quality of the collected data.

Object tracking employs algorithms such as Lucas-Kanade optical flow and sparse-flow algorithms for precise gesture localization within the video stream. Central to the system is the recognition phase, utilizing a 3D convolutional neural network (3DCNN) model. This model interprets gestures and translates them into textual or auditory outputs, enabling post office staff to comprehend and respond effectively to the communication needs of hearing and speech-impaired customers. By harnessing technology to recognize sign language and visual cues, the system addresses the communication barriers these communities face. Implementing the Postal Sign Recognition System facilitates smoother interactions at post offices and underscores technology's potential to promote inclusivity and accessibility. Future research may focus on refining recognition algorithms, expanding language support, and integrating user feedback to optimize usability in real-world scenarios. This innovative solution highlights collaborative efforts to create a more inclusive environment for individuals with hearing and speech impairments.
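A minimal sketch of the Lucas-Kanade tracking stage, assuming OpenCV; the clip name and feature-detector parameters are illustrative:

```python
import cv2

cap = cv2.VideoCapture("sign_clip.mp4")        # assumed input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.3, minDistance=7)

while pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)   # keep tracked points
    prev_gray = gray                                         # slide the window
cap.release()
```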

[Image: Postal project 2]
PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Ms. V. Narmadha, Technical Assistant, CMLI

Ms. M. Mythili, III B.Sc. CS, Department of Computer Science

Ms. D. Naveena, III B.Sc. CS, Department of Computer Science

Ms. G. Preethi, III B.Sc. CS, Department of Computer Science

Project Summary: Weather applications enable users to get instant alerts regarding weather conditions. Such a service informs users what kind of weather to expect in the coming hours, days, and weeks. Existing systems show the weather conditions, and how the weather is going to be in a few hours or so, in a text-based manner. The proposed app, “Augmented Reality based Weather Visualization App for South Coastal Regions”, is not just a simple weather app; rather, it is an AR-based weather app that enables digital information to be superimposed on and integrated into our physical environment. The idea is to display the selected area in 3D and quickly take in the weather conditions at a glance.

The suggested solution, unlike existing applications, does not display the weather only in text format; it also allows the user to see different weather conditions in a more realistic way. For example, if it is going to rain in some places, one can watch rain pouring over those places with clouds hovering in the sky, based on the weather forecast. The proposed software will also offer a plethora of simulations for various weather situations in order to provide the most accurate and thorough experience possible. These effects are achieved using augmented reality, an enhanced version of the real physical world created through digital visual elements delivered via technology.

[Image: AR project 2]

[Image: AR project 3]
PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. P. Narendran, Gobi Arts and Science College

Ms. S. Aiswarya, Research Scholar, Department of Computer Science

Ms. Aathi Obusre M, II MCA, Gobi Arts and Science College

Project Summary: Sickle Cell Disease (SCD) is a genetic blood disorder characterized by the presence of abnormal hemoglobin S (HbS), which leads to hemolysis and chronic organ damage. Previous research has primarily focused on classification, whereas this work proposes detection and cell-counting methodologies to determine the severity of sickle cell disease. This study effectively addresses the challenges of evaluating sickle cell disease severity through quantitative analysis of cell counts within images, thereby providing valuable data for understanding the condition. For deep learning-based object detection, single-stage detectors perform well in both detection accuracy and inference time, making them more effective. In this work, the erythrocytesIDB dataset is used for detection, and data augmentation techniques are employed to expand the training data. The object detection task in this study utilizes the YOLOv4, YOLOv5, and YOLOv8 models. Comparing these three models, YOLOv8 gives the best accuracy. Intersection over Union (IoU) and Non-Maximum Suppression (NMS) algorithms are applied to eliminate duplicate detections and prevent overlapping bounding boxes. The results show that YOLOv8 achieves a mean average precision (mAP) of 0.83. From the analysis, the proposed model successfully recognizes and counts the different types of cells present in the blood smear image.
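A minimal sketch of the detection-and-counting step, assuming the ultralytics package; the weight file, image name, class labels, and thresholds are illustrative:

```python
from collections import Counter
from ultralytics import YOLO

model = YOLO("best.pt")                    # weights fine-tuned on blood-smear data
result = model("blood_smear.jpg", conf=0.25, iou=0.5)[0]   # NMS uses the IoU threshold

counts = Counter(result.names[int(c)] for c in result.boxes.cls)
print(counts)                              # e.g. Counter({'normal': 87, 'sickle': 12})
```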

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. R. Janani, Research Assistant, CMLI

Ms. A. Hema Priya, II MCA, Department of Computer Science

Project Summary: Paniya, also known as Pania, Paniyan, or Panyah, is a tribal language spoken in India, particularly in the Malayalam-speaking regions. Paniya speakers currently face a communication barrier with other communities, which limits their interactions beyond their own group. To overcome this issue, a web application has been developed for Paniya speech-to-text conversion using deep learning techniques. This user-friendly web application aims to provide a convenient platform for individuals to translate Paniya speech into text. The proposed system not only deals with the linguistic complexities of the Paniya language but also ensures accessibility and usability for a wider audience. The project is developed in Python 3 within the collaborative environment Google Colab, and the web application is built using the Streamlit library. To initiate the methodology, a speech dataset consisting of recordings from Paniya speakers is collected for analysis and processing, followed by pre-processing using spectral subtraction. This technique enhances the signal-to-noise ratio by estimating and subtracting background noise from the audio signal, ensuring noise-free Paniya speech input for subsequent processing. Features are then extracted using Mel Frequency Cepstral Coefficients (MFCC), which transform the Paniya speech signal into a concise representation by capturing its spectral characteristics. This enables the Recurrent Neural Network (RNN) to more effectively analyze and comprehend the nuanced phonetic patterns of the language, resulting in more accurate transcription; a Convolutional Neural Network (CNN) is also used to compare accuracy and performance metrics. Moreover, a linguistic dictionary serves as a reference for mapping Paniya words to their corresponding textual representations. This aids the system in precisely transcribing spoken words and enhances the overall efficiency of the speech-to-text conversion process.
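A minimal sketch of the MFCC feature-extraction step, assuming librosa; the file name, sampling rate, and coefficient count are illustrative:

```python
import librosa

signal, sr = librosa.load("paniya_sample.wav", sr=16000)   # one Paniya utterance
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, n_frames): one 13-dim vector per frame, fed to the RNN
```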

 

[Image: Paniya project 1]
PROJECTS

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. P. Prabhusundhar, Assistant Professor, Dept of Computer Science, Gobi Arts and Science College

Dr. R. Janani, Research Assistant, CMLI

Ms. Komalavalli R, II MCA, Gobi Arts and Science College

Project Summary: The project methodology comprises several essential modules aimed at developing a proficient Automatic Speech Recognition (ASR) system tailored to the nuances of the Irula language. Initially, the data collection module gathers diverse audio recordings of spoken Irula from native speakers, ensuring a comprehensive dataset representative of various dialects and speech patterns. Subsequently, the data preprocessing phase optimizes the collected data by reducing noise, normalizing signals, and segmenting audio files for efficient feature extraction. Feature extraction transforms raw audio signals into a compact and informative feature space, enabling the acoustic model to discern speech patterns accurately. Leveraging Hidden Markov Models (HMM), the acoustic model processes the extracted features to identify and differentiate Irula speech sounds amid background noise. Complementing this, the language model, enhanced through pre-trained GPT models and fine-tuning on Irula language data, provides crucial linguistic context for precise speech recognition. Finally, the integration of the Streamlit framework facilitates the development of an intuitive web application interface, ensuring accessibility and ease of use for Irula speakers interacting with the ASR system. Through the seamless integration of these modules, the project aims to create a robust ASR solution that effectively bridges the language gap within the Irula community, facilitating improved communication and societal integration.
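A minimal sketch of the Streamlit front end; the transcribe() call is a placeholder for the HMM acoustic model plus language model pipeline described above:

```python
import streamlit as st

st.title("Irula Automatic Speech Recognition")
audio = st.file_uploader("Upload an Irula speech recording", type=["wav", "mp3"])

if audio is not None:
    st.audio(audio)
    # text = transcribe(audio.read())   # placeholder for the ASR pipeline
    # st.write(text)
```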

[Image: Automatic Irula Speech Recognition web portal]
PROJECTS
Development of Mobile Application for Empowering Tribal Education in Irula Dialect

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Dr. R. Janani, Research Associate, DST-CURIE-AI, CMLI

Ms. Vasundra R S, I M.Sc., Dept of Computer Science

Project Summary: The project entitled “Development of Mobile Application for Empowering Tribal Education in Irula Dialect” has been developed using the Android Studio framework. XML with Java is used for the front end and Firebase for the back end. The application is designed to address the unique needs of tribal children, who often have limited access to educational resources. It includes a range of educational content, including poems in their dialect and assessments. The application is designed with a user-friendly interface featuring colourful graphics that appeal to children, and it includes interactive features such as audio and a media player. The main aim of the proposed system is to develop a mobile application for tribal children to learn an English poem in their own dialect. Users log in to the application with their username and password. After a successful login, the child can take a mental ability assessment and a pre-assessment; the application then gives a term-wise explanation of the poem in the Irula dialect and in English. After learning the poem, the child can take a post-assessment. All assessment scores and user authentication data are saved in Firebase. Firebase is a set of backend cloud computing services and application development platforms provided by Google; it hosts databases, services, authentication, and integration for an Android application. Overall, this mobile application for tribal education is an innovative solution that leverages mobile technology to improve access to education for tribal children. With its engaging content, user-friendly interface, and offline capabilities, it is an ideal tool for empowering tribal children with knowledge and skills for a better future.

 

                                            

[Image: Irula app interface 1]

[Image: Irula app interface 2]
PROJECTS
Tamil Voice-based Education Bot

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. T. T. Dhivyaprabha, Research Associate, DST-CURIE-AI

Ms. M. Mohana, Research Scholar, Dept of Computer Science

Ms. Divyasri S, II M.Sc., Dept of Computer Science

Project Summary: Mobile Learning (M-Learning) applications are a rapidly growing technology in the 21st century and play a major role in educating children. Previous studies show that mobile applications effectively improve learner engagement and motivation. The main aim of this proposed application is to provide a mobile application in the Tamil language, to overcome the language issues faced by native-language learners aged 8 to 10 years, for teaching the computer science subject. It incorporates adaptive learning, which customizes students' learning by providing a flexible learning path, together with classical Q-learning, which adapts to children's cognitive skills to improve the quality of learning through rewards. The proposed system follows CCI (Child Computer Interaction) standards because they set out base ideas for teaching children basic computer content, covering topics such as: about the computer, uses of the computer, computer hardware, and computer software. According to CCI standards, an educational application should be developed around a child-centric concept to effectively engage children in learning. Thirteen multimodal preferences are derived from the various learning strategies; for example, the VA (Video and Audio) questionnaire is the bimodal combination of the visual and audio strategies. The proposed application is designed around this combination of two strategies, the VA questionnaires. Children's basic knowledge of the computer science subject is first identified through a pre-assessment, whose test scores are analysed to recommend learning content. After that, the VA learning module presents the visual and aural styles. In accordance with the learning style selections, it shows the learning levels, namely (1) easy, (2) medium, and (3) hard, and finally it shows the child's learning progress with a post-assessment score. A pilot study was conducted with 65 randomly selected students from classes 3 and 5 of Sri Avinashilingam Aided Primary School. Validation was done in two ways, individual validation and group validation, along with feedback. The children showed that they were happy and interested in using the app and shared their feedback genuinely. This shows that the proposed application significantly increases children's interest and engagement in learning.
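A minimal sketch of the classical Q-learning update used to adapt the learning path; the states, actions, and reward values are illustrative:

```python
import numpy as np

n_states, n_actions = 3, 3      # e.g. easy / medium / hard levels (illustrative)
alpha, gamma = 0.1, 0.9         # learning rate and discount factor
Q = np.zeros((n_states, n_actions))

def update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward reward + discounted best next value."""
    best_next = np.max(Q[next_state])
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])

update(state=1, action=1, reward=1.0, next_state=2)   # a correct answer earns +1
print(Q)
```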

                                                 

[Image: App interface]
PROJECTS

Artificial Intelligence (AI)-Internet of Things (IoT) based Environmental Monitoring System for Mushroom Cultivation

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. M. K. Nisha, Assistant Professor, Department of Botany

Ms. E. Gaayathri Devi, Research Scholar, Department of Botany

Ms. V. Narmadha, Technical Assistant, DST CURIE-AI

Project Summary: Mushroom cultivation can help reduce vulnerability to poverty and strengthen livelihoods through the generation of a fast-yielding and nutritious source of food and a reliable source of income. AI-based mushroom cultivation employs a wireless network system to monitor the farming process and thus reduce human intervention. Biosensors can be used to monitor the temperature, humidity, carbon dioxide concentration, and light intensity in a mushroom farm. The data is collected to monitor the environmental conditions of the farm and is connected to the control unit through a server. The current status of the parameters is transmitted to the remote monitoring station via a pair of low-power ESP8266 Wi-Fi modems. The code for the controller was written in the Arduino programming language, debugged, compiled, and burnt into the microcontroller using the Arduino integrated development environment (IDE). The collected sensor data for all parameters is stored on the Google cloud server. k-means clustering is used to develop a Decision Support System. A graphical user interface tool will be developed using open-source technologies to find the optimum environmental conditions for mushroom cultivation. Through the techniques used in this research, the environmental factors that affect cultivation can be balanced, so that problems can be overcome and a high yield of mushrooms obtained.
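A minimal sketch of the clustering step, assuming scikit-learn; the readings are made-up values for temperature, humidity, CO2, and light:

```python
import numpy as np
from sklearn.cluster import KMeans

# Columns: temperature (C), humidity (%), CO2 (ppm), light (lux); values are illustrative.
readings = np.array([
    [24.0, 85.0,  800.0, 150.0],
    [30.5, 60.0, 1500.0, 400.0],
    [23.5, 88.0,  750.0, 120.0],
    [31.0, 55.0, 1600.0, 420.0],
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
print(km.labels_)            # cluster id per reading, e.g. favourable vs unfavourable
print(km.cluster_centers_)   # centroids that inform the decision support thresholds
```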

[Images: Team visit at mushroom culture room; experimental study in mushroom culture room]

PROJECTS
AI based Intelligent Mosquito Trap to Control Vector Borne Diseases

Team Members: Dr. P. Subashini, Dept of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Dr. T.T. Dhivyaprabha, Research Associate, DST-CURIE-AI

Ms. B. Gayathre (19PCA001), II MCA, Department of Computer Science

Project Summary: Vector-borne diseases are among the most harmful threats to human health, affecting nearly seven hundred million people every year and causing one million deaths annually. Information on mosquito species' populations and spatial distribution is essential in identifying vector-borne diseases. Mosquito prevention and monitoring programs are established by public health departments using mosquito traps. Many monitoring systems have already been implemented to address the worldwide spread of mosquitoes and mosquito-borne infections, although mosquito population monitoring remains inadequate and time-consuming when it comes to identifying mosquito species and diseases. Aedes aegypti, Aedes albopictus, Anopheles gambiae, Anopheles arabiensis, Culex pipiens, and Culex quinquefasciatus are the six primary mosquito species prevalent in India that inflict vector-borne diseases. This project aims to construct an IoT-based mosquito-borne disease identification system using machine learning algorithms. The proposed methodology is as follows: it collects mosquito wingbeat audio from the Kaggle website, then eliminates noise from the wingbeat audio files using a Butterworth pre-processing filter. After pre-processing, the wingbeat audio is subjected to frequency feature extraction using the Fast Fourier Transform algorithm, followed by classification using the Decision Tree algorithm to classify mosquito wingbeat signals. In the experimental findings and analysis, the accuracy of the constructed system is compared with and without the pre-processing approaches. The system enables monitoring of the mosquito population and epidemics through automation, which delivers correct output in a defined time frame without human intervention.
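A minimal sketch of the Butterworth-plus-FFT feature step, assuming SciPy/NumPy; the band edges and the synthetic 480 Hz tone are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dominant_frequency(signal, sr):
    """Band-pass the wingbeat audio (Butterworth), then locate the FFT peak."""
    b, a = butter(4, [200, 1000], btype="band", fs=sr)   # illustrative band edges
    clean = filtfilt(b, a, signal)
    spectrum = np.abs(np.fft.rfft(clean))
    freqs = np.fft.rfftfreq(len(clean), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

sr = 8000
t = np.arange(sr) / sr
wingbeat = np.sin(2 * np.pi * 480 * t)     # synthetic 480 Hz wingbeat tone
print(dominant_frequency(wingbeat, sr))    # ~480 Hz, a feature for the Decision Tree
```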

[Images: Experimental testing kit; methodology of IoT integrated with ML phase]

PROJECTS

Technology Enhanced Mulsemedia Learning in STEM Education for Enhancing the Learner’s Quality of Experience (QoE)

Team Members: 

Dr. P. Subashini, Professor, Dept of Computer Science

Dr. N. Valliammal, Assistant Professor (SG), Dept of Computer Science

Ms. M. Mohana, Research Scholar, Dept of Computer Science

Ms. V. Suvetha, II MCA, Dept of Computer Science

Project Summary: Affective computing refers to the development of technologies that enable machines to recognize and respond to human emotions, essentially creating a form of artificial emotional intelligence. Mulsemedia combines multiple media formats, such as audio, video, and interactive content, to create an immersive learning experience. Multisensorial learning, on the other hand, engages multiple senses, such as sight, hearing, and touch (haptics), to enhance the learning experience. This research focuses on STEM education, an ideal field for the implementation of mulsemedia due to its emphasis on science, technology, engineering, and mathematics. Mulsemedia can help overcome some of the limitations of e-learning by providing a more interactive and engaging learning experience, allowing students to explore complex concepts and theories in a more accessible manner. This project proposes a new perspective to achieve the model “TECHNOLOGY ENHANCED MULSEMEDIA LEARNING FOR ENHANCING QUALITY OF EXPERIENCE” by integrating devices such as an Arduino UNO microcontroller, exhaust fans, an ultrasonic humidifier for olfaction, and haptics. The research project targets students between 20 and 25 years old to provide them with a better Quality of Experience (QoE) while learning. Assessing QoE through subjective measures, such as self-reported feedback from students, is an important aspect of evaluating the effectiveness of mulsemedia. This project will examine the impact of both subjective and objective measures: subjective measures rely on personal experiences and opinions, while objective measures use quantifiable data, such as GSR (galvanic skin response). When mulsemedia elements are incorporated into a learning experience, learners may experience higher levels of engagement and emotional response, which can lead to higher GSR readings and potentially better learning outcomes. Thus, the research aims to enhance e-learning by incorporating multisensory activities and integrated devices to provide an immersive and engaging learning experience.

[Images: Mulsemedia kit; Mulsemedia web portal]
