Detection of Careless Responses in Online Surveys Using Answering Behavior on Smartphone

Abstract:
Some respondents make careless responses due to "satisficing," which is an attempt to complete a questionnaire as quickly and easily as possible. To obtain results that reflect reality, satisficing must be detected and the affected responses excluded from analysis. One devised method detects satisficing by adding questions that check for violations of instructions and inconsistencies. However, this approach may cause respondents to lose their motivation and itself prompt them to satisfice. Additionally, a deep learning model that automatically answers such questions has been reported, which threatens the reliability of the conventional method. To detect careless responses without inserting such screening questions, a previous study attempted machine learning (ML) detection using data obtained from answer results, with a detection rate of 55.6%, which is insufficient for practical use. We therefore hypothesized that a supervised ML model with a higher detection rate could be constructed by using on-screen answering behavior as features. However, (1) no existing questionnaire system can record on-screen answering behavior, and (2) even if the answering behavior could be recorded, it is unclear which answering-behavior features are associated with satisficing. We developed an answering-behavior recording plug-in for LimeSurvey, an online questionnaire system used all over the world, and collected a large amount of data (from 5,692 people) in Japan. A variety of features were then examined and generated from the answering behavior, and we constructed ML models to detect careless responses. We call this detection method the ML-ABS (ML-based answering behavior scale). Evaluation by cross-validation demonstrated a detection rate of 85.9% for careless responses, which is much higher than the previous ML method. Among the various features we proposed, we found that reselecting on the Likert scale and scrolling particularly contributed to the detection of careless responses.


Introduction:
Some respondents make careless responses due to satisficing, which is an attempt to complete a questionnaire as quickly and easily as possible. To obtain results that reflect reality, satisficing must be detected and the affected responses excluded from analysis. To detect careless responses, we developed a survey platform that records answering behavior, built on an online questionnaire system used all over the world. A set of questions is added to the survey; when a user begins answering, gaze tracking (eye-ball tracking) captures eye movement during the survey, and a machine learning component monitors the respondent's reactions and movements. All responses and reactions are recorded.
To detect careless responses without inserting screening questions, machine learning (ML) detection using data obtained from the answer results was attempted. After the survey is completed, the answers, movements, and reactions are processed on a Python server. If the respondent completes the survey after analyzing each question and the completion time is plausible, the result is shown as "Attention"; if eye-ball tracking captures distracted movement and the completion time is short, the result is shown as "Not Attention."
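The decision rule just described can be sketched as follows; the threshold values and feature names are illustrative assumptions, not parameters from this study.

```python
# Minimal sketch of the Attention / Not Attention decision rule.
# The thresholds below are illustrative assumptions, not measured values.

MIN_COMPLETION_TIME_S = 120   # assumed minimum plausible reading time
MAX_OFFSCREEN_EVENTS = 5      # assumed tolerance for distracted-gaze events

def classify_response(completion_time_s: float, offscreen_gaze_events: int) -> str:
    """Label a survey session from completion time and gaze behavior."""
    if (completion_time_s < MIN_COMPLETION_TIME_S
            and offscreen_gaze_events > MAX_OFFSCREEN_EVENTS):
        return "Not Attention"   # fast completion combined with distracted gaze
    return "Attention"

print(classify_response(300, 2))   # careful respondent -> Attention
print(classify_response(45, 12))   # rushed, distracted respondent -> Not Attention
```

A production system would learn these thresholds from labeled data rather than hard-coding them.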

Review of literature:
Ulrich Schroeders and Christoph Schmidt [2] describe careless responding as a bias in survey responses given without regard to the actual item content, which constitutes a threat to the factor structure, reliability, and validity of psychological measures. Different approaches have been proposed to detect aberrant responses, such as probing questions that directly assess test-taking behavior (e.g., bogus items), auxiliary data or paradata (e.g., response times), and data-driven statistical techniques. A comparison between simulation results and an online study showed that simulations relying on prototypical patterns of careless responding tend to overestimate classification accuracy. Gradient-boosted trees outperform traditional detection mechanisms in flagging aberrant responses, especially when response times are included as paradata, but should not be viewed as a panacea for data cleaning.
Austin Lee Nichols and John E. Edlund [3] noted that although careless responders have plagued research for decades, their prevalence and implications have likely increased due to the many new methodological techniques currently in use. Across three studies, they examined the prevalence of careless responding, several means of predicting careless responders, and the implications of careless responders for data quality and recovery attempts. They also examined geographic differences in careless responding and provided psychometric validation for using embedded questions to detect these participants. In Study 1, over one third of participants showed some evidence of careless responding, and careless responders displayed certain personality and demographic characteristics; in particular, carelessness appeared more prevalent in Asian samples than in North American samples. In Study 2, nearly one quarter of participants showed evidence of careless responding, and conclusions based on data including versus excluding these participants differed in significant yet unpredictable ways. Finally, in Study 3, nearly two thirds of participants who signed up for the study did not meet the advertised conditions for participation, and including these participants changed the structure of the data obtained.
IJCRT24A5457 | International Journal of Creative Research Thoughts (IJCRT) | www.ijcrt.org
Wendy M. Rote and Melanie Olmo [4] note that helicopter parenting (HP) is associated with poorer adjustment and worse relationships with parents among emerging adults, but these associations may depend on interpretations of HP and the family environment in which it occurs. Their study examined within-family patterns of youth-felt overcontrol by mothers and fathers, and the associated adjustment, relational, and demographic correlates.
Kitti Koonsanit and Nobuyuki Nishiuchi [5] investigated user experience (UX) evaluation, which examines how people feel about using products or services and is considered an important factor in the design process. However, there is no comprehensive UX evaluation method for time-continuous situations during the use of products or services. Because user experience changes over time, it is difficult to discern the relationship between momentary UX and final user satisfaction. Their research aimed to predict final user satisfaction from momentary UX data using machine learning techniques. The participants were 50 and 25 university students who were asked to evaluate a service (Experiment I) or a product (Experiment II), respectively, during use by answering a satisfaction survey. The responses were used to draw a customized UX curve, and participants were also asked to complete a final satisfaction questionnaire about the product or service. Momentary UX data and participant satisfaction scores were used to build machine learning models, and the experimental results were compared across seven machine learning models. The study shows that momentary UX can be modeled using a support vector machine (SVM) with a polynomial kernel, and that momentary UX can be used to make more accurate predictions of final user satisfaction with products and services.
Diana Steger [6] observed that unproctored, web-based assessments are frequently compromised by a lack of control over participants' test-taking behavior; participants are likely to cheat if the personal consequences are high. Her meta-analysis summarizes findings on context effects in unproctored and proctored ability assessments and examines mean score differences and correlations between both assessment contexts. As potential moderators, it considers (a) the perceived consequences of the assessment, (b) countermeasures against cheating, (c) the measure's vulnerability to cheating, and (d) the use of different test media. For standardized mean differences, a three-level random-effects meta-analysis based on 108 effect sizes from 49 studies (total N = 100,434) identified a pooled effect of Δ = 0.20 (95% CI), indicating higher scores in unproctored assessments. Moderator analyses revealed significantly smaller effects for measures that are difficult to research on the Internet. Regarding rank-order stability, a small subsample of studies (n = 5) providing 15 effect sizes (total N = 1,280) indicated considerable rank-order changes (ρ = .58, 95% CI [.38, .78]). These results demonstrate that unproctored ability assessments are markedly biased by cheating, and that unproctored assessments may be most suitable for tasks that are difficult to search on the Internet.
Warwick, K. and Wei, H. [7] introduce an approach for user authentication using free-text keystroke dynamics that incorporates unconventional keystroke features. Semi-timing features along with editing features are extracted from the user's typing stream. Decision trees were used to classify each user's data; in comparison, Support Vector Machines (SVMs) were also used for classification in combination with an Ant Colony Optimization (ACO) feature-selection technique. The results are encouraging, as low False Accept Rates (FAR) and False Reject Rates (FRR) were achieved in the trial phase, signifying satisfactory overall system performance using the typing attributes of the proposed approach. The use of such unconventional typing features thus improves the understanding of human typing behavior and provides a significant contribution to authentication systems.
Valentina N. Burkova [8] reports that prior and ongoing COVID-19 pandemic restrictions resulted in substantial changes to everyday life, and that the pandemic and its control measures affect mental health negatively. Self-reported data from 15,375 participants from 23 countries were collected from May to August 2020, during the early phases of the COVID-19 pandemic. Two questionnaires measuring anxiety level were used: the Generalized Anxiety Disorder Scale (GAD-7) and the State Anxiety Inventory (SAI). Associations between a set of social indicators (e.g., sex, age, country, living alone) and anxiety during COVID-19 were tested as well. Self-reported anxiety during the first wave of the pandemic varied across countries, with the maximum levels reported for Brazil, Canada, Italy, and Iraq. Overall, the results demonstrated that self-reported symptoms of anxiety were higher than previously reported levels. The authors conclude that cultural dimensions such as individualism/collectivism, power distance, and looseness/tightness may function as protective adaptive mechanisms against the development of anxiety disorders in a pandemic situation.

Research methodology:
This research work uses Java and Python to build an Android app and a web application for the survey. The web application is used to add questions for the survey. Android is a Linux-based operating system designed primarily for touch-screen mobile devices such as smartphones and tablet computers.

Variables:
Independent variables: survey questions, number of respondents.
Dependent variables: perceived indifference (carelessness) of responses, respondent satisfaction.

Method of Data Collection:
The first stage is development: build the research platform with eye tracking and machine learning, and perform initial testing to ensure functionality and usability.
The second stage is data collection: invite participants to complete the survey through a variety of channels (e.g., social media, email), collect demographic information through survey responses along with eye-tracking data, and record the responses and reactions of the respondents after completing the survey.
The third stage is analysis: use machine-learning algorithms to analyze survey responses and eye-tracking data, and develop models to identify patterns of careless responding.
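As a sketch of the third stage, the following pure-Python example derives simple features from gaze samples and completion time and fits a minimal nearest-centroid classifier; the feature choices, toy data, and model are illustrative assumptions rather than the study's actual pipeline.

```python
# Sketch: deriving features from gaze samples and fitting a simple model.
# Data shapes, feature choices, and labels are illustrative assumptions.
from statistics import mean

def gaze_features(samples, total_time_s):
    """samples: list of (x, y) gaze points normalized to [0, 1] screen coords."""
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    # Gaze dispersion: careless respondents are assumed to fixate less on the text.
    spread = (max(xs) - min(xs)) + (max(ys) - min(ys))
    return [spread, total_time_s]

def fit_centroids(rows, labels):
    """Nearest-centroid classifier: average feature vector per class."""
    cents = {}
    for lab in set(labels):
        cls = [r for r, l in zip(rows, labels) if l == lab]
        cents[lab] = [mean(col) for col in zip(*cls)]
    return cents

def predict(cents, row):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], row))

# Toy training data: [gaze spread, completion time in seconds]
X = [[0.2, 300], [0.3, 280], [1.4, 40], [1.6, 55]]
y = ["careful", "careful", "careless", "careless"]
model = fit_centroids(X, y)
print(predict(model, [1.5, 50]))   # -> careless
```

In practice, a stronger model (e.g., gradient-boosted trees, as in the literature reviewed above) trained on many labeled sessions would replace this toy classifier.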

Gaze Tracking:
Gaze tracking, implemented here with a neural network, is the process of measuring and analyzing the movements of a person's eyes to determine where they are looking. Infrared LEDs are shone onto the eye, and the relative distance of the reflected light to the pupil is computed as a means of determining eye movement.
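The pupil-to-glint computation described above can be sketched as follows; the pixel coordinates and movement tolerance are hypothetical, and a real system would map this vector to screen coordinates through a calibration step.

```python
# Sketch of pupil-center-corneal-reflection (PCCR) gaze estimation.
# Coordinates are hypothetical pixel positions from an IR eye camera.
import math

def gaze_vector(pupil_xy, glint_xy):
    """Vector from the IR glint (corneal reflection) to the pupil center."""
    dx = pupil_xy[0] - glint_xy[0]
    dy = pupil_xy[1] - glint_xy[1]
    return dx, dy, math.hypot(dx, dy)

def moved(prev_vec, cur_vec, tol_px=3.0):
    """Flag an eye movement when the gaze vector shifts by more than tol_px."""
    return math.hypot(cur_vec[0] - prev_vec[0], cur_vec[1] - prev_vec[1]) > tol_px

v1 = gaze_vector((310, 242), (300, 240))   # eye looking at the question text
v2 = gaze_vector((322, 247), (300, 240))   # gaze shifted away
print(moved(v1, v2))   # large shift -> True
```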

User Authentication
The user registers through the client application by entering basic credentials and submitting the form. The data is sent to the backend business logic, which is written in Java and stores the user credentials in a MySQL database.
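The registration flow can be sketched as follows. The paper's backend is Java with MySQL; in this sketch, sqlite3 stands in for MySQL, and salted PBKDF2 hashing is an added assumption (credentials should not be stored in plain text).

```python
# Sketch of the registration/login step; sqlite3 stands in for MySQL here,
# and PBKDF2 password hashing is an added assumption.
import hashlib, os, sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

def register(name: str, password: str) -> None:
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    db.execute("INSERT INTO users VALUES (?, ?, ?)", (name, salt, pw_hash))

def authenticate(name: str, password: str) -> bool:
    row = db.execute("SELECT salt, pw_hash FROM users WHERE name = ?",
                     (name,)).fetchone()
    if row is None:
        return False
    salt, pw_hash = row
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == pw_hash

register("alice", "s3cret")
print(authenticate("alice", "s3cret"))   # True
print(authenticate("alice", "wrong"))    # False
```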

Participating Survey
Registered users can select an option from the survey types. The survey application then shows the relevant questions, and the user selects the desired answers. Once the survey is completed, the results are sent as inputs to sentiment analysis.

Carelessness Detection
The application result is classified as positive (satisfied/normal) or abnormal if there is eye-ball movement, and scores are generated based on the results. The survey session is recorded with live face capture, and the streaming video is sent to a Python server for processing. The generated score is stored in a database for analytics, and the Python response is evaluated for each user. The rate of carelessness is detected and stored in the database, which is used for analysis and yields the result "Attention" or "Not Attention."
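The scoring-and-storage flow can be sketched as follows; the 0-100 score formula, the 50-point cutoff, and the table layout are illustrative assumptions, not the system's exact design.

```python
# Sketch of carelessness scoring and storage. The score formula, cutoff,
# and table layout are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scores (user_id TEXT, score REAL, label TEXT)")

def carelessness_score(completion_time_s, expected_time_s, eye_movement_rate):
    """Higher score = more careless; combines answering speed and gaze instability."""
    speed = max(0.0, 1.0 - completion_time_s / expected_time_s)   # 0..1
    return round(100 * (0.5 * speed + 0.5 * min(eye_movement_rate, 1.0)), 1)

def record(user_id, completion_time_s, expected_time_s, eye_movement_rate):
    score = carelessness_score(completion_time_s, expected_time_s, eye_movement_rate)
    label = "Not Attention" if score >= 50 else "Attention"
    db.execute("INSERT INTO scores VALUES (?, ?, ?)", (user_id, score, label))
    return score, label

print(record("u1", 60, 300, 0.9))    # fast + unstable gaze -> Not Attention
print(record("u2", 280, 300, 0.1))   # careful respondent -> Attention
```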

Conclusion
In this proposal, the generated score is stored in a database for analytics, and the Python response is evaluated for each user.
The rate of carelessness is detected and stored in a database used for analysis. In the future, the method can be extended with advanced machine-learning techniques such as face recognition, so that the behavior patterns of online survey respondents are analyzed and the rate of carelessness is predicted for further analysis. The question of questionnaire reliability can be broadly divided into questionnaire content and responses. Finally, if the respondent completes the survey after analyzing each question and the completion time is plausible, the result is shown as "Attention"; if eye-ball tracking captures distracted movement and the completion time is short, the result is shown as "Not Attention."
An Android application is built for attending the survey; it comprises touch-based interaction, eye-ball tracking (a device that measures eye position and movement), and machine learning that captures video and analyzes the person's movements while attending the survey. A Python server records all responses from the survey and produces the result.
Quantitative analysis: calculate the detection of careless responses using machine-learning algorithms, with statistical tests (e.g., chi-square, t-tests) to assess the significance of detected patterns. Qualitative analysis: analyze recorded respondent reactions and movements to identify behavioral indicators of carelessness.
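The chi-square test mentioned above can be applied to a 2×2 contingency table of detection outcomes; the counts in this sketch are invented for illustration.

```python
# 2x2 chi-square test of independence, computed from first principles.
# The counts below are invented for illustration only.

def chi_square_2x2(a, b, c, d):
    """Table rows: [a, b] and [c, d]; returns the chi-square statistic."""
    n = a + b + c + d
    # Classic shortcut formula for a 2x2 table (no continuity correction).
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Careless vs. careful detections in two hypothetical respondent groups.
stat = chi_square_2x2(40, 60, 15, 85)
print(round(stat, 2))
# Compare against the critical value for df = 1 at alpha = 0.05.
print("significant" if stat > 3.841 else "not significant")
```

A library such as scipy (`scipy.stats.chi2_contingency`) would normally be used instead of the hand-rolled formula.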

Figure: system architecture. The user registers, logs in, selects a category, and attends the survey; the application captures video, the machine-learning module analyzes it, and results are stored in the database; the admin adds questions through the web application.