PROJECTS

Team Members:

Prof. P. Subashini, Professor, Dept of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Mrs. V. Narmadha, Technical Assistant, CMLI

Project Summary: Farm automation is often associated with smart farming. FarmBot is a robot that farms different crops within a fixed area. It moves along tracks on the sides of its bed and works in three dimensions: left, right, forward, backward, up, and down. FarmBot sows seeds, waters plants, and removes weeds using different tools depending on the task, and it monitors the plants 24/7. FarmBot is deployed in our centre, where it helps conduct training for students, research scholars, NGOs, and entrepreneurs to create awareness and build knowledge about technology-aided farming. It nurtures interest and skills, and motivates various stakeholders to establish new startups and develop agriculture-related products.

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. S. Meenakshi, Associate Professor & Head, Department of Computer Science, Gobi Arts and Science College

Ms. Jayashree Ganeshkumar, Research Scholar, Department of Computer Science

Ms. T. Bharathi, II MCA, Gobi Arts and Science College

Project Summary: The project entitled “Student Performance Prediction Using a Stacked Ensemble Technique on an Online Programming Course” identifies students who are at risk of facing challenges in online programming courses and eventually dropping out. Once at-risk students are identified, the system aims to facilitate timely intervention strategies, such as additional support, counselling, or targeted learning resources, to help these students improve their programming performance.

To predict students' final scores from their programming submission data and to elucidate the predictive model's decisions, a structured approach involving four key steps is followed. (i) Data preprocessing refines the raw data, ensuring its cleanliness and suitability for analysis; this involves tasks such as handling missing values and encoding categorical variables. (ii) Feature engineering extracts data-driven features from the programming submission data, enhancing the representation of patterns and relationships within the dataset and facilitating more accurate predictions. (iii) Regression models are developed to forecast the final scores from the engineered features; regression techniques are chosen because they suit continuous outcomes like numerical scores. (iv) The model's decisions are elucidated using interpretability techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), which reveal the influential features driving the model's predictions and thereby enhance the transparency of the predictive process.

The stacked ensemble model predicts student performance with an R² of 0.72. The project is implemented in Python.
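
The stacking step can be sketched in a few lines with scikit-learn. This is a minimal illustration of the technique, not the project's actual code; the base learners, meta-learner, and the file and column names (submissions.csv, final_score) are assumptions.

```python
# Minimal stacking sketch; dataset path and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("submissions.csv")              # preprocessed, feature-engineered data
X, y = df.drop(columns=["final_score"]), df["final_score"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=42)),
                ("gb", GradientBoostingRegressor(random_state=42))],
    final_estimator=Ridge(),                     # meta-learner combines base predictions
)
stack.fit(X_train, y_train)
print("R2:", r2_score(y_test, stack.predict(X_test)))
```

SHAP and LIME can then be applied to the fitted base learners (for example, shap.TreeExplainer on the random forest) to surface the features driving individual predictions.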

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Ms. S. Aiswarya, Research Scholar, Department of Computer Science

Ms. Vasundra R. S., II M.Sc. CS, Department of Computer Science

Project Summary: A cataract is a cloudy area in the lens of the eye that decreases vision. Cataracts often develop slowly, can affect one or both eyes, and are a leading cause of visual impairment and blindness worldwide: about 20 million people are blind due to cataracts, and the condition can lead to blindness if not detected and treated early. Traditional cataract examination tools and techniques can only be handled by skilled ophthalmologists, making mass screening for early-stage cataract detection impractical given the shortage of ophthalmologists and the time-consuming nature of these procedures.

This project serves as decision support for optometrists and ophthalmologists in identifying the cataract types Nuclear Sclerosis, Cortical Cataract, and Posterior Subcapsular Cataract, along with their respective grades (grade 1 to grade 4). It follows a systematic approach to developing a deep learning model for cataract multiclass classification and grading. Initially, image data are collected from the real world and from case studies to construct a comprehensive dataset. Subsequently, the lens of the eye is segmented using an image processing technique, image masking, to isolate and focus on the affected areas within the image. Data preprocessing techniques such as augmentation (flipping, scaling, rotation) are then applied to increase the diversity of the dataset, which ultimately improves the performance of the deep learning models.

For multiclass classification and grading, deep learning models such as CNN, ResNet50, VGG16, and InceptionV3 are employed. These models are trained to classify the different types of cataracts and assign appropriate grades based on the severity of the condition. To further enhance accuracy and robustness, an ensemble learning approach is adopted in which predictions from the individual models (CNN, VGG16, InceptionV3, ResNet50) are combined using a majority voting classifier. This strategy leverages the strengths of each model to generate a final prediction for each image, leading to improved diagnostic accuracy and reliability compared with any single model. The outcome of this project is a deep learning model capable of accurate cataract classification and grading, together with a user-friendly graphical interface to support effective cataract diagnosis by optometrists and ophthalmologists.
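
The majority-voting step can be sketched as follows. This is a minimal illustration assuming four already fine-tuned Keras models; the model file names and class list are placeholders, not the project's artifacts.

```python
# Majority-voting ensemble over fine-tuned models; file and class names are placeholders.
import numpy as np
import tensorflow as tf

CLASSES = ["nuclear_sclerosis", "cortical", "posterior_subcapsular"]

models = [tf.keras.models.load_model(path)       # hypothetical fine-tuned checkpoints
          for path in ["cnn.h5", "vgg16.h5", "inceptionv3.h5", "resnet50.h5"]]

def predict_majority(image_batch):
    """Each model votes with its argmax class; the most common vote wins."""
    votes = np.stack([np.argmax(m.predict(image_batch), axis=1) for m in models])
    return [CLASSES[np.bincount(votes[:, i]).argmax()] for i in range(votes.shape[1])]
```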

PROJECTS

Team Members: Dr. D. Mathivadhani, Senior Technical Assistant, Department of Computer Science

Ms. V. Narmadha, Technical Assistant, CMLI

Ms. Aanisa S, III B.Sc CS, Department of Computer Science

Ms. Haripriya K, III B.Sc CS, Department of Computer Science

Ms. Pavithra K, III B.Sc CS, Department of Computer Science

Project Summary: This project proposes an enhanced shopping assistant application designed to empower partially sighted individuals. Leveraging Augmented Reality (AR) technology, the application facilitates independent shopping by enabling users to access detailed product information through barcode scanning. Upon scanning a product barcode with their smartphone or tablet camera, users can view product details and a 3D model overlaid onto the real world through the AR interface. Developed using Unity and Vuforia, the application aims to improve accessibility and promote greater independence for partially sighted individuals by providing a richer and more informative shopping experience.
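
The app itself is built in Unity with Vuforia, so the sketch below only illustrates the core barcode-to-product-lookup idea in Python using the pyzbar library; the catalogue dictionary and image path are hypothetical stand-ins for the app's real product data source.

```python
# Barcode lookup sketch in Python (the real app uses Unity/Vuforia).
from PIL import Image
from pyzbar.pyzbar import decode

CATALOGUE = {"8901234567890": {"name": "Sample item", "price": "Rs. 45"}}  # hypothetical

def announce_products(image_path):
    # Decode every barcode visible in the captured frame.
    for barcode in decode(Image.open(image_path)):
        code = barcode.data.decode("utf-8")
        product = CATALOGUE.get(code)
        if product:
            print(code, "->", product["name"], "-", product["price"])

announce_products("shelf_photo.png")             # hypothetical captured frame
```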

PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science

Dr. R. Janani, Research Assistant, CMLI

Ms. Logavathani R, II MCA, Department of Computer Science

Project Summary: The primary goal of this project is to detect and classify seizures and other types of harmful brain activity using EEG signals recorded from critically ill patients. The significance of the work lies in improving electroencephalography pattern classification accuracy, which can unlock transformative benefits for neurocritical care, epilepsy treatment, and drug development. Classification covers six patterns: seizure, generalized periodic discharges, lateralized periodic discharges, lateralized rhythmic delta activity, generalized rhythmic delta activity, and others. Advances in this area help doctors and brain researchers detect seizures or other brain damage, enabling faster and more accurate treatment. The methodology involves loading the EEG data, preprocessing it to enhance quality, visualizing it to identify patterns, training deep learning models on labeled data, and using these models to classify EEG signals into the six patterns. The project employs several deep learning architectures, including EfficientNetV2, DenseNet, ResNet, and MobileNet; among these, EfficientNetV2 demonstrates exceptional efficiency in pattern classification, contributing to the project's advancements in neurocritical care, epilepsy treatment, and drug development.
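
A minimal sketch of the classification model in Keras, assuming the EEG recordings have already been converted to spectrogram images; the input size, training pipeline, and label abbreviations are illustrative assumptions.

```python
# EfficientNetV2 classification head over EEG spectrogram images; sizes are illustrative.
import tensorflow as tf

LABELS = ["seizure", "gpd", "lpd", "lrda", "grda", "other"]  # the six target patterns

base = tf.keras.applications.EfficientNetV2B0(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(len(LABELS), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(spectrogram_ds, epochs=10)           # spectrogram_ds: labeled tf.data pipeline
```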

Figure: Visualizing some samples of the dataset

Figure: Modelling

PROJECTS

Team Members: Dr. P. Subashini, CMLI Coordinator, Professor of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. A. Dhanalakshmi, Associate Professor, Department of Computer Science, Gobi Arts and Science College

Ms. V. Narmadha, Technical Assistant, CMLI

Dr. Jennyfer Susan M B, Assistant Professor, CMLI

Mr. Sesan D, II MCA, Gobi Arts and Science College

Project Summary: The global population includes a significant number of speech- and hearing-impaired individuals who encounter unique communication challenges, particularly in public institutions like post offices. This study introduces the Postal Sign Recognition System for Indian Sign Language, designed to facilitate smoother interactions between post office staff and customers with hearing or speech impairments. The system's methodology involves four crucial steps: data collection, preprocessing, object tracking, and recognition. Data collection uses a Raspberry Pi and web camera setup to capture Indian Sign Language gestures. Preprocessing techniques, including frame differencing and contour analysis, enhance the quality of the collected data.

Object tracking employs the Lucas–Kanade optical flow and sparse flow algorithms for precise gesture localization within the video stream. Central to the system is the recognition phase, which uses a 3D convolutional neural network (3DCNN) model. This model interprets gestures and translates them into textual or auditory outputs, enabling post office staff to understand and respond effectively to the communication needs of hearing- and speech-impaired customers. By harnessing technology to recognize sign language and visual cues, the system addresses the communication barriers these communities face. Implementing the Postal Sign Recognition System facilitates smoother interactions at post offices and underscores technology's potential to promote inclusivity and accessibility. Future research may focus on refining the recognition algorithms, expanding language support, and integrating user feedback to optimize usability in real-world scenarios. This solution highlights collaborative efforts to create a more inclusive environment for individuals with hearing and speech impairments.
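
The Lucas–Kanade tracking step can be sketched with OpenCV as follows; the video file name is a placeholder, and in the deployed system the frames would come from the Raspberry Pi camera.

```python
# Lucas-Kanade tracking of gesture feature points with OpenCV; video path is a placeholder.
import cv2

cap = cv2.VideoCapture("gesture.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or pts is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Propagate feature points from the previous frame into the current one.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = next_pts[status.flatten() == 1].reshape(-1, 1, 2)  # keep tracked points only
    prev_gray = gray
```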

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Ms. V. Narmadha, Technical Assistant, CMLI

Ms. M. Mythili, III B.Sc CS, Department of Computer Science

Ms. D. Naveena, III B.Sc CS, Department of Computer Science

Ms. G. Preethi, III B.Sc CS, Department of Computer Science

Project Summary: Weather applications give users instant alerts about weather conditions; they tell users what kind of weather to expect in the coming hours, days, and weeks. Existing systems present current and near-term weather conditions in a text-based manner. The proposed app, “Augmented Reality based Weather Visualization App for South Coastal Regions”, is not just a simple weather app; it is an AR-based weather app that superimposes digital information onto the physical environment. The idea is to display a selected area in 3D and see its weather conditions at a glance.

Unlike existing applications, the suggested solution does not display the weather only in text format; it also lets the user see different weather conditions in a more realistic way. For example, if it is going to rain in some places, one can watch rain pouring over those places with clouds hovering in the sky, based on the weather forecast. The proposed software will include a range of simulations for various weather situations to provide the most accurate and thorough experience possible. All of this is achieved using augmented reality, an enhanced version of the real physical world created through digital visual elements delivered via technology.
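
The AR rendering happens in the AR engine, but the forecast data that drives which simulation to play has to be fetched first. A minimal sketch of that step, assuming OpenWeatherMap as an example provider (the project's actual data source is not stated):

```python
# Forecast fetch sketch; OpenWeatherMap is an assumed example provider.
import requests

API_KEY = "YOUR_KEY"                             # placeholder
resp = requests.get(
    "https://api.openweathermap.org/data/2.5/weather",
    params={"q": "Kanyakumari,IN", "appid": API_KEY, "units": "metric"},
    timeout=10,
)
data = resp.json()
condition = data["weather"][0]["main"]           # e.g., "Rain", "Clouds", "Clear"
print(condition, data["main"]["temp"], "C")
# The app would map `condition` to the matching 3D weather simulation.
```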

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. P. Narendran, Gobi Arts and Science College

Ms. S. Aiswarya, Research Scholar, Department of Computer Science

Ms. Aathi Obusre M, II MCA, Gobi Arts and Science College

Project Summary: Sickle Cell Disease (SCD) is a genetic blood disorder characterized by the presence of abnormal hemoglobin S (HbS), which leads to hemolysis and chronic organ damage. Previous research has primarily focused on classification, whereas this work proposes detection and cell-counting methodologies to determine the severity of sickle cell disease. The study addresses the challenge of evaluating disease severity through quantitative analysis of cell counts within images, providing valuable data for understanding the condition. For deep-learning-based object detection, single-stage detectors deliver strong detection accuracy with low inference time, making them the more effective choice. The erythrocyteIDB dataset is used for detection, and data augmentation techniques are employed to expand the training data. The object detection task utilizes the YOLOv4, YOLOv5, and YOLOv8 models; comparing the three, YOLOv8 gives the best results. Intersection over Union (IoU) and Non-Maximum Suppression (NMS) are applied to eliminate duplicate detections and prevent overlapping bounding boxes. The results show that YOLOv8 achieves a mean average precision of 0.83. From the analysis, the proposed model successfully recognizes and counts the different types of cells present in a blood smear image.
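
With the ultralytics package, the detection-and-counting step can be sketched as below; the fine-tuned weight file and class names are assumptions, and the conf/iou parameters mirror the confidence and NMS IoU thresholds mentioned above.

```python
# Detection and per-class cell counting with ultralytics YOLOv8; weights/classes assumed.
from collections import Counter
from ultralytics import YOLO

model = YOLO("sickle_best.pt")                   # hypothetical fine-tuned YOLOv8 weights
results = model.predict("blood_smear.jpg", conf=0.25, iou=0.5)  # iou sets the NMS threshold

boxes = results[0].boxes
counts = Counter(model.names[int(c)] for c in boxes.cls)
print(counts)                                    # e.g., Counter({'normal': 41, 'sickle': 9})
```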

PROJECTS

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. R. Janani, Research Assistant, CMLI

Ms. A. Hema Priya, II MCA, Department of Computer Science

Project Summary: Paniya, also known as Pania, Paniyan, or Panyah, is a tribal language spoken in India, particularly in the Malayalam-speaking regions. Paniya speakers currently face a communication barrier with other communities, which limits their interactions largely to their own group. To address this, a web application for Paniya speech-to-text has been developed using deep learning techniques. This user-friendly web application provides a convenient platform for translating Paniya speech into text. The proposed system not only handles the linguistic complexities of the Paniya language but also ensures accessibility and usability for a wider audience. The project is developed in Python 3 within the collaborative environment Google Colab, and the web application is built with the Streamlit library.

The methodology begins with a speech dataset of recordings from Paniya speakers, collected for analysis and processing, followed by pre-processing using spectral subtraction. This technique improves the signal-to-noise ratio by estimating and subtracting background noise from the audio signal, ensuring noise-free Paniya speech input for subsequent processing. Features are then extracted using Mel Frequency Cepstral Coefficients (MFCC), which transform the Paniya speech signal into a concise representation that captures its spectral characteristics. This enables a Recurrent Neural Network (RNN) to analyze and comprehend the nuanced phonetic patterns of the language more effectively, resulting in more accurate transcription; a Convolutional Neural Network (CNN) is also used to compare accuracy and performance metrics. Moreover, a linguistic dictionary serves as a reference for mapping Paniya words to their corresponding textual representations, helping the system transcribe spoken words precisely and enhancing the overall efficiency of the speech-to-text conversion.
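
A minimal sketch of the MFCC extraction and RNN classifier, assuming spectral subtraction has already been applied to the clips; the vocabulary size, network shape, and file name are illustrative.

```python
# MFCC features feeding a small RNN word classifier; sizes and paths are illustrative.
import librosa
import numpy as np
import tensorflow as tf

y, sr = librosa.load("paniya_clip.wav", sr=16000)        # assumed denoised clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T     # shape: (time_steps, 13)

NUM_WORDS = 50                                   # illustrative vocabulary size
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 13)),     # variable-length MFCC sequences
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_WORDS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# After training, the predicted word index is mapped to text via the linguistic dictionary:
# word_index = int(model.predict(mfcc[np.newaxis]).argmax())
```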

 

PROJECTS

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. P. Prabhusundhar, Assistant Professor, Dept of Computer Science, Gobi Arts College

Dr. R. Janani, Research Assistant, CMLI

Ms. Komalavalli R, II MCA, Gobi Arts College

Project Summary: The project methodology comprises several essential modules aimed at developing a proficient Automatic Speech Recognition (ASR) system tailored to the nuances of the Irula language. Initially, the data collection module gathers diverse audio recordings of spoken Irula from native speakers, ensuring a comprehensive dataset representative of various dialects and speech patterns. The data preprocessing phase then optimizes the collected data by reducing noise, normalizing signals, and segmenting audio files for efficient feature extraction. Feature extraction transforms the raw audio signals into a compact and informative feature space, enabling the acoustic model to discern speech patterns accurately. Leveraging Hidden Markov Models (HMM), the acoustic model processes the extracted features to identify and differentiate Irula speech sounds amid background noise. Complementing this, the language model, built on pre-trained GPT models and fine-tuned on Irula language data, provides crucial linguistic context for precise speech recognition. Finally, the Streamlit framework is used to build an intuitive web application interface, ensuring accessibility and ease of use for Irula speakers interacting with the ASR system. Through the seamless integration of these modules, the project aims to create a robust ASR solution that bridges the language gap within the Irula community, facilitating improved communication and societal integration.
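
One common way to use HMMs for small-vocabulary ASR is to train one model per word on MFCC features and pick the word whose model scores the input highest. A minimal sketch of that framing with hmmlearn (the project's exact acoustic-model configuration is not stated, so the state count and features here are assumptions):

```python
# Per-word HMM scoring with hmmlearn; state count and features are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_word_model(mfcc_sequences):
    """mfcc_sequences: list of (time_steps, n_mfcc) arrays for one word."""
    X = np.concatenate(mfcc_sequences)
    lengths = [len(seq) for seq in mfcc_sequences]
    return GaussianHMM(n_components=5, covariance_type="diag", n_iter=100).fit(X, lengths)

def recognise(mfcc, word_models):
    # word_models: dict of word -> trained GaussianHMM; highest log-likelihood wins.
    return max(word_models, key=lambda w: word_models[w].score(mfcc))
```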

Figure: Automatic Irula Speech Recognition web portal

PROJECTS
Development of Mobile Application for Empowering Tribal Education in Irula Dialect

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Dr. R. Janani, Research Associate, DST-CURIE-AI, CMLI

Ms. Vasundra R S, I M.Sc, Dept of Computer Science

Project Summary: The project entitled “Development of Mobile Application for Empowering Tribal Education in Irula Dialect” has been developed using the Android Studio framework, with XML and Java as the front end and Firebase as the back end. The application is designed to address the unique needs of tribal children, who often have limited access to educational resources. It includes a range of educational content, including poems in their dialect and assessments, and is designed with a user-friendly interface featuring colourful graphics that appeal to children, along with interactive features such as an audio and media player. The main aim of the proposed system is to develop a mobile application through which tribal children can learn an English poem in their own dialect. Users log in to the application with a username and password. After a successful login, the child takes a mental ability assessment and a pre-assessment; the application then gives a term-wise explanation of the poem in the Irula dialect and in English. After learning the poem, the child takes a post-assessment. All assessment scores and user authentication data are saved in Firebase, a set of backend cloud computing services and application development platforms provided by Google that hosts databases, services, authentication, and integration for Android applications. Overall, this mobile application is an innovative solution that leverages mobile technology to improve access to education for tribal children. With its engaging content, user-friendly interface, and offline capabilities, it is an ideal tool for empowering tribal children with knowledge and skills for a better future.

 

                                            

PROJECTS
Tamil Voice-based Education Bot

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. T. T. Dhivyaprabha, Research Associate, DST-CURIE-AI

Ms. M. Mohana, Research Scholar, Dept of Computer Science

Ms. Divyasri S, II M.Sc, Dept of Computer Science

Project Summary: Mobile Learning (M-Learning) applications are a rapidly growing technology of the 21st century and play a major role in educating children. Previous studies show that mobile applications effectively improve learners' engagement and motivation. The main aim of this proposed application is to teach the computer science subject through a mobile application in the Tamil language, overcoming the language issues faced by native-language learners aged 8-10 years. The application incorporates adaptive learning, which customizes each student's learning by providing a flexible learning path, and classical Q-learning, which adapts to the child's cognitive skills to improve the quality of learning through rewards. The proposed system follows CCI (Child-Computer Interaction) standards, which set out base ideas for teaching children basic computer content: about the computer, uses of the computer, computer hardware, and computer software. According to CCI standards, an educational application should be developed around a child-centric concept to effectively engage children in learning.

Thirteen multimodal preferences are derived from the various learning strategies; for example, a VA (Visual and Audio) questionnaire is the bimodal combination of the visual and audio strategies. The proposed application is designed around this combination of two strategies, the VA questionnaires. Children's basic knowledge of the computer science subject is identified through a pre-assessment, whose scores are analysed to recommend learning content. The VA learning module then presents the visual and aural styles; in accordance with the learning style selections it offers three learning levels, (1) easy, (2) medium, and (3) hard, and finally shows each child's learning progress along with the post-assessment score. A pilot study was conducted with 65 randomly selected students from classes 3 and 5 of Sri Avinashilingam Aided Primary School. Validation was done in two ways, individual validation and group validation, along with feedback. The children were happy and interested in using the app and shared their feedback genuinely, showing that the proposed application significantly increases children's interest and engagement in learning.
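
The classical Q-learning update that drives the adaptive level selection can be sketched as follows; the state encoding, reward values, and level set here are illustrative assumptions.

```python
# Classical Q-learning update for choosing the next learning level; values illustrative.
import numpy as np

LEVELS = ["easy", "medium", "hard"]              # actions: which level to serve next
N_STATES = 4                                     # e.g., banded assessment scores
Q = np.zeros((N_STATES, len(LEVELS)))
alpha, gamma = 0.1, 0.9                          # learning rate, discount factor

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

update(state=1, action=1, reward=1.0, next_state=2)  # e.g., correct answer at medium level
```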

                                                 

PROJECTS

Artificial Intelligence (AI)-Internet of Things (IoT) based Environmental Monitoring System for Mushroom Cultivation

Team Members: Dr. M. Krishnaveni, Assistant Professor (SG), Department of Computer Science

Dr. M. K. Nisha, Assistant Professor, Department of Botany

Ms. E. Gaayathri Devi, Research Scholar, Department of Botany

Ms. V. Narmadha, Technical Assistant, DST CURIE-AI

Project Summary: Mushroom cultivation can help reduce vulnerability to poverty and strengthen livelihoods by generating a fast-yielding and nutritious source of food and a reliable source of income. AI-based mushroom cultivation employs a wireless network system to monitor the farming process and thus reduce human intervention. Biosensors monitor the temperature, humidity, carbon dioxide concentration, and light intensity in the mushroom farm. The collected data describing the farm's environmental conditions are linked to the control unit through a server, and the current status of the parameters is transmitted to the remote monitoring station via a pair of low-power ESP8266 modules acting as Wi-Fi modems. The code for the controller was written in the Arduino programming language, then debugged, compiled, and burnt into the microcontroller using the Arduino integrated development environment (IDE). The collected sensor data for all parameters are stored on a Google cloud server. A k-means clustering algorithm is used to develop a decision support system, and a graphical user interface tool will be developed using open-source technologies to find the optimum environmental conditions for mushroom cultivation. Using these techniques, the environmental factors that affect cultivation can be balanced, overcoming common problems and yielding a higher mushroom harvest.
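
The decision-support step can be sketched with scikit-learn's k-means; the CSV export and column names are assumptions about how the cloud-stored sensor log is organized.

```python
# k-means over logged sensor readings to find candidate optimal set-points; columns assumed.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("sensor_log.csv")               # hypothetical export from the cloud store
features = df[["temperature", "humidity", "co2", "light"]]
scaler = StandardScaler().fit(features)          # put sensors on a comparable scale

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(features))
df["condition_cluster"] = km.labels_
# Cluster centres, mapped back to original units, suggest candidate optimal conditions.
print(scaler.inverse_transform(km.cluster_centers_))
```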

Figure: Team visit at the mushroom culture room
Figure: Experimental study in the mushroom culture room

 

 

PROJECTS
AI-based Intelligent Mosquito Trap to Control Vector-Borne Diseases

Team Members: Dr. P. Subashini, Professor, Dept of Computer Science

Dr. M. Krishnaveni, Assistant Professor (SG), Dept of Computer Science

Dr. T. T. Dhivyaprabha, Research Associate, DST-CURIE-AI

Ms. B. Gayathre (19PCA001), II MCA, Department of Computer Science

Project Summary: Vector-borne diseases are among the most harmful threats to human health, affecting nearly seven hundred million people every year and causing one million deaths annually. Information on mosquito species' populations and spatial distribution is essential in identifying vector-borne diseases, so public health departments establish mosquito prevention and monitoring programs built around mosquito traps. Many monitoring systems have already been implemented to track the worldwide spread of mosquitoes and mosquito-borne infections, but mosquito population monitoring remains inadequate and time-consuming when it comes to identifying mosquito species and diseases. Aedes aegypti, Aedes albopictus, Anopheles gambiae, Anopheles arabiensis, Culex pipiens, and Culex quinquefasciatus are the six primary mosquito species prevalent in India that inflict vector-borne diseases. This project aims to construct an IoT-based mosquito disease identification system using machine learning algorithms. The proposed methodology is as follows: mosquito wingbeat audio is collected from the Kaggle website, and noise is eliminated from the wingbeat audio files using a Butterworth pre-processing algorithm. After pre-processing, the wingbeat signal undergoes frequency feature extraction using the Fast Fourier Transform, followed by classification with a Decision Tree algorithm to classify mosquito wingbeat signals. In the experimental findings and analysis, the accuracy of the constructed system is compared with and without the pre-processing approach. The system enables monitoring of the mosquito population and potential epidemics through automation, delivering correct output in a defined time frame without human intervention.
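
The signal pipeline maps directly onto SciPy and scikit-learn. A minimal sketch, in which the sampling rate, band edges, and feature length are illustrative assumptions:

```python
# Butterworth band-pass -> FFT features -> decision tree; rates and bands are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.tree import DecisionTreeClassifier

FS = 8000                                        # assumed sampling rate (Hz)

def wingbeat_features(signal):
    b, a = butter(4, [200, 1000], btype="bandpass", fs=FS)  # assumed wingbeat band
    clean = filtfilt(b, a, signal)               # zero-phase noise filtering
    spectrum = np.abs(np.fft.rfft(clean, n=2048))
    return spectrum[:512]                        # fixed-length frequency feature vector

# recordings: list of 1-D wingbeat arrays; labels: species names
# clf = DecisionTreeClassifier().fit([wingbeat_features(r) for r in recordings], labels)
```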

Figure: Experimental testing kit
Figure: Methodology of IoT integrated with the ML phase

 

PROJECTS

Technology Enhanced Mulsemedia Learning in STEM Education for Enhancing the Learner's Quality of Experience (QoE)

Team Members:

Dr. P. Subashini, Professor, Dept of Computer Science

Dr. N. Valliammal, Assistant Professor (SG), Dept of Computer Science

Ms. M. Mohana, Research Scholar, Dept of Computer Science

Ms. V. Suvetha, II MCA, Dept of Computer Science

Project Summary: Affective computing refers to the development of technologies that enable machines to recognize and respond to human emotions, essentially creating a form of artificial emotional intelligence. Mulsemedia combines multiple media formats, such as audio, video, and interactive content, to create an immersive learning experience, while multisensorial learning engages multiple senses, such as sight, hearing, and touch (haptics), to enhance learning. This research focuses on STEM education, an ideal field for implementing mulsemedia given its emphasis on science, technology, engineering, and mathematics. Mulsemedia can help overcome some of the limitations of e-learning by providing a more interactive and engaging experience, allowing students to explore complex concepts and theories in a more accessible manner. This project proposes a new perspective to achieve the model “Technology Enhanced Mulsemedia Learning for Enhancing Quality of Experience” by integrating devices such as an Arduino UNO microcontroller, exhaust fans, an ultrasonic humidifier for olfaction, and haptics. The project targets students between 20 and 25 years old to provide them with a better Quality of Experience (QoE) while learning. Subjective QoE measures, such as self-reported feedback from students, are an important aspect of assessing the effectiveness of mulsemedia; this project examines the impact of both subjective measures, which rely on personal experiences and opinions, and objective measures, which use quantifiable data such as GSR (galvanic skin response). When mulsemedia elements are incorporated into a learning experience, learners may show higher levels of engagement and emotional response, which can lead to higher GSR readings and potentially better learning outcomes. Thus, the research aims to enhance e-learning by incorporating multisensory activities and integrated devices to provide an immersive and engaging learning experience.
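
The objective-measure side can be sketched as a simple GSR logger: the Arduino streams readings over USB serial and a Python script records them. The port name, baud rate, and one-value-per-line format are assumptions about the Arduino sketch.

```python
# Logging GSR readings streamed by the Arduino over serial; port/format are assumptions.
import csv
import time

import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port, \
        open("gsr_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "gsr"])
    for _ in range(600):                         # ~10 minutes at one reading per second
        line = port.readline().decode(errors="ignore").strip()
        if line:                                 # one GSR value per line from the Arduino
            writer.writerow([time.time(), line])
```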

Figure: Mulsemedia kit
Figure: Mulsemedia web portal

 
