Profiling Users’ Behavior, and Identifying Important Features of Review “Helpfulness”

The increasing volume of online reviews and the use of review platforms leave tracks that can be used to explore interesting patterns. It is in the primary interest of businesses to retain and improve their reputation. Reviewers, on the other hand, tend to write reviews that can influence and attract people's attention, which often leads to deliberate deviations from past rating behavior. Until now, very limited studies have attempted to explore the impact of user rating behavior on review helpfulness, and there are more perspectives of user behavior in selecting and rating businesses that still need to be investigated. Moreover, previous studies gave more attention to the review features and reported inconsistent findings on their importance. To fill this gap, we introduce new business and reviewer features, modify existing ones, and propose a user-focused mechanism for review selection. This study aims to investigate and report changes in business reputation, user choice, and rating behavior through descriptive and comparative analysis. Furthermore, the relevance of various features for review helpfulness is identified by correlation, linear regression, and negative binomial regression. The analysis performed on the Yelp dataset shows that the reputation of businesses has changed slightly over time. Moreover, 46% of the users chose a business with a minimum of 4 stars. The majority of users give 4-star ratings, and 60% of reviewers adopt irregular rating behavior. Our results show a slight improvement from using the user rating behavior and choice features, whereas the significant increase in $R^{2}$ indicates the importance of the reviewer popularity and experience features. The overall results show that the most significant features of review helpfulness are average user helpfulness, number of user reviews, average business helpfulness, and review length.
The outcomes of this study provide important theoretical and practical implications for researchers, businesses, and reviewers.


I. INTRODUCTION
The growth and popularity of Web 2.0 in e-commerce have encouraged individuals to share their views on products and services in the form of online reviews [1]. These personal views are fundamental to most human activities and are therefore one of the key drivers of human behavior [2]. Review platforms, e.g., Yelp, Amazon, IMDB, etc., allow customers to share their views and opinions on different services and products, i.e., books, mobiles, hardware, and software [3], [4]. With the increasing importance of product reviews, they have become a core component of both electronic and traditional businesses [5], [6]. Product reviews serve as a source of information that helps consumers determine the quality of a product and make purchase decisions [7], [8].
(The associate editor coordinating the review of this manuscript and approving it for publication was Long Wang.)
Many studies have shown that sales and the image of products and services are significantly affected by online product reviews [9], [10]. Online customer reviews are also becoming a significant source of information in the tourism industry [11]. Travel reviews on various review platforms not only help individuals plan their travel but also influence their choice of accommodation [12]. Therefore, online reviews have introduced both opportunities and challenges for all businesses [13]. There is also an increasing interest in collecting, analyzing, summarizing, and interpreting online reviews using data analytics techniques to obtain useful insights related to management issues and for social profiling [14], [15].
(This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/)
Presently, TripAdvisor has over 795 million published reviews, and over 192 million reviews are available on Yelp [16], [17]. Online product reviews can be helpful, non-helpful, or even spam in the worst-case scenario [18]. The ever-increasing number of reviews has made it difficult for customers and businesses to go through all reviews and has introduced the problem of information overload [19]. It has also been stated that reading more online reviews results in information overload and confusion in the decision-making process [20]. There are many features related to online product reviews, but 'review helpfulness' is the most crucial, as it reflects the quality of the review's information as perceived by readers [21]. Online reviews are highly inconsistent in quality. A simple feedback question related to review helpfulness boosted Amazon's revenue by $2.7 billion [22]. Therefore, researchers and experts need to understand how online reviews come to be considered helpful [23]. It has been found that a significant percentage of reviews have little or no feedback on helpfulness, especially the most recent ones [24], as recently published reviews have not had sufficient time to collect helpful votes [25]. Hence, individual feedback is too scarce to evaluate the helpfulness of reviews [26]. Moreover, people's expectations vary: a review that appears helpful to one individual may not be considered helpful by another [27].

B. FEATURES FOR PREDICTING REVIEW HELPFULNESS
As an alternative, researchers and practitioners have explored various types of predictors, i.e., qualitative and quantitative, for predicting the helpfulness of reviews using statistical and machine learning techniques [28]. The features explored in previous studies for review helpfulness include reviewer behavioral consistency [29], order effect [30], review sentiment [31], distinct emotions extracted from review text [32], [33], effectiveness of review [34], and cognitive writing [35]. Moreover, researchers have also examined features such as review rating and review content, but achieved mixed results [36], [37]. The perception of online reviews has been greatly influenced by emotions [32], [33], [38], [39]. As online reviews often contain embedded photos, Ma et al. [40] analyzed the impact of user-provided photos on the helpfulness of hotel reviews by using deep learning. It was reported that pictures alone are not a good predictor of review helpfulness; however, the combination of photo and review text features gives the best predictive performance. Chen et al. [41] analyzed the impact of happy and angry avatar images on the helpfulness of online reviews. The perception of review helpfulness increased with the image of a happy avatar, while there was no difference in the case of an angry avatar image. The writing style of reviewers was seen as an important feature of textual reviews. A study used content and style features extracted from online hotel reviews to analyze their impact on helpfulness; textual features were reported as the key features for predicting the helpfulness of online hotel reviews [42]. The impact of numerical and textual review features on the helpfulness of three types of reviews was also analyzed. The results reported that the effect of numerical features on review helpfulness is significant for the regular type of reviews.
For suggestive and comparative reviews, in contrast, the sentiment of the text appeared to be more significant. Moreover, the length of the review was reported as the most important feature in predicting review helpfulness. It was concluded that the numerical review features were more important across all three types of reviews than the textual review features [43]. Wu et al. [44] highlighted the importance of temporal dimensions and proposed a temporal model for predicting review helpfulness. It was stated that old reviews would not be that helpful for a product where new reviews come in very often; therefore, the performance index for measuring the helpfulness of reviews should be based on time.
Mafael [45] examined the psychological processes behind what makes a review receive helpful votes. It was found that readers were more likely to vote a review as helpful when the review valence correlated with their personal beliefs. In addition, when reviewers expressed perceptions of mutual behavior among readers, the likelihood of a helpfulness vote decreased. It has also been revealed that readers take more time to assess the helpfulness of negative reviews than of positive reviews [46]. Recently, a study reported a negative relationship between review polarity and review helpfulness [47]. The sentiment of online reviews has been reported as a strong predictor of review helpfulness; in addition, the product type did not show any significant impact on the helpfulness of online reviews [48], [49]. A study examined the impact of different features, i.e., review rating and review length, on review helpfulness along with user-controlled filters; the length of the review and the review rating were identified as key features [50]. Moreover, it has been stated that title features did not have a significant effect on the usefulness of the review [51].
The literature on predicting review helpfulness has focused mainly on English-language reviews. To address this issue, a study developed a multilingual framework for review helpfulness prediction [52]. Lee et al. [53] found that the helpfulness of travel reviews depends primarily on reviewer features in combination with the sentiment and quality of the review. The language and writing style of reviewers vary significantly. A study examined the effect of writing style on review helpfulness using four linguistic features; the linguistic features appeared to be more significant than the social relationship features [54]. Moreover, studies have also highlighted important textual features, such as polarity, subjectivity, and readability, for predicting the helpfulness of reviews [55], [56]. The use of Recency, Frequency, and Monetary (RFM) features of reviews along with textual features improved the predictive accuracy of review helpfulness [57]. Several studies have proposed review helpfulness prediction models using different features and a number of machine learning techniques [58]-[61]. These techniques for review helpfulness prediction range from simple regression algorithms to complex neural networks [62]-[64]. In addition, feature extraction and selection techniques have been proposed to model review helpfulness for different products [65]-[67].
Namvar [68] used reviewer and time features to create review clusters; the review features were then incorporated to predict the helpfulness of Amazon reviews. A study proposed a new semantic measure to assess the helpfulness of online reviews; evaluations carried out using semantic measures reported a higher $R^{2}$ than the existing vote-based assessments [69]. Olatunji et al. [70] also argued that the ''X out of Y'' approach to assessing the quality of reviews did not work well for reviews with fewer total votes. Therefore, a context-aware approach using textual features was proposed to predict the helpfulness of reviews; evaluations performed using a human-annotated dataset show better predictive performance than existing models. It has been shown that the emotional tone of reviews significantly influences the helpfulness of reviews written by females, while no effect has been reported for male reviews [71]. A positive relationship between review helpfulness and the similarity between review content and title has been reported for Amazon reviews [72], [73]. A study suggested that the helpfulness of reviews varies depending on the hotel class [74]. It has been reported that reviews of low-class hotels receive more helpful votes if they include price quotes, as opposed to reviews written for high-class hotels [75]. In addition, linguistic features have a significant impact on review helpfulness in the presence of review features, i.e., review length and rating valence [76].
The review quality and star rating were identified as the most important features in predicting the helpfulness of reviews; reviews with higher ratings were found to be less helpful than those with lower ratings [77]. Sun et al. [78] have shown that the perceived helpfulness of online reviews is influenced by the type of product, i.e., experience or search; moreover, different classification thresholds are needed for the two types of products. A study captured the relationship between review rating and review content using deep neural networks and reported improved predictive performance for review helpfulness [79]. Liang et al. [80] reported that more comprehensive reviews with extreme ratings are seen as helpful, whereas reviews written by high-profile reviewers are always seen as helpful. A study proposed a convolutional neural network model, using textual features based on bag-of-words, to automatically predict the helpfulness of online reviews [81]. Zhu et al. [82] examined the impact of previous reviews on the helpfulness of subsequent reviews. It was found that if reviews come in very quickly, descriptive reviews are more helpful, whereas, in the case of extreme reviews, evaluative reviews are considered more helpful. Moreover, a positive relationship has been reported between review and reviewer features and review helpfulness, except for review length [83].

C. USERS' RATING BEHAVIOR AND BUSINESS CHOICE
Online review ratings can significantly influence review helpfulness. Due to the importance of star ratings, researchers and practitioners have been searching for predictors of the online rating behavior of reviewers. There are two schools of thought related to rating behavior: observed average rating and attention-grabbing. The observed average rating is based on social conformity theory, where later reviewers are influenced by previous business ratings and take the rating as a social norm [84]. Since observed business ratings are based on the consensus of the reviewers who previously reviewed the business, they serve as a source of social influence for new reviewers [85]. In attention-grabbing, reviewers purposefully deviate from the observed average rating of a business to gain attention. Such reviewers are likely to deviate when rating a popular business or a business with a high number of reviews [86]. The previous literature suggests that reviewer rating behavior can be inconsistent over time and greatly influenced by extrinsic factors. However, the intrinsic features of reviewers, i.e., culture, age, gender, experience, etc., have also shown a significant influence on rating behavior [84].
The helpfulness of reviews has important managerial and practical applications for both businesses and customers. The previous literature on the helpfulness of online reviews reported that more attention is paid to reviewers who post more extreme reviews to distinguish themselves from others [86]. Such extreme reviews (irregular behavior) receive more helpful votes than normal reviews that follow observed business rating patterns (regular behavior) [87].
Gao et al. [29] explored the consistency and predictability of reviewer rating behavior over time and its impact on the helpfulness of their reviews. The reviewer's rating behavior was reported to be consistent over time, and the deviation in future ratings can be explained by past rating behavior rather than by the observed average ratings. Furthermore, reviewers who published reviews with a higher rating difference in the past were reported to attract more helpful votes for their future reviews. In addition, it was reported that the helpfulness of reviews is more significantly influenced by the reviewer's past behavior (intrinsic reviewer features) than by the observed average rating (extrinsic social influence). Overall, in the context of online review helpfulness, the attention-grabbing strategy and observed average rating did not appear to be significant. A study examined consumer purchasing behavior in different countries by examining the importance of customer reviews. It stated that the different cultural factors of each country have a significant impact on consumer preferences, perceptions, and purchasing behavior [88].
Social media data is a meaningful way to get more information about a person or a business than other sources [89]. A business profile on online review portals helps businesses maintain their reputation and attract new customers. Currently, most consumers check online ratings and go through online reviews of a business before choosing it. According to Brasel [90], both extrinsic and intrinsic features can have a significant influence on the choice of an online business. Han et al. [91] examined the impact of different factors, such as review valence, trust, and severity of disease, on the choice of physician. It was reported that patients usually choose physicians with high ratings, especially for high-risk diseases. Similarly, in the case of hotel selection, a 5-star rating has a significant effect on hotel choice [92]. Ahani et al. [93] proposed a method based on machine learning algorithms for predicting user travel choices. BrightLocal [94] presented reviewer statistics for local businesses on websites such as Google, Facebook, TripAdvisor, Yelp, etc., using data collected from 1,000 US-based consumers. It was reported that in 2018, 57% of users chose a business only if it had a minimum 4-star rating, up from 48% in 2017. Moreover, 11% of users only chose businesses with an exact 5-star rating [95]. This suggests that a reviewer's choice of business is greatly influenced by the ''business star rating'', which can be referred to as the ''reputation of a business''. Years-old social media posts can make or break the reputation of individuals and businesses, and setting a good first impression and maintaining it over time can be a challenge for everyone [89]. In addition, a study suggested that the ''cumulative usefulness'' of the reviews received by a business, together with the business star rating, should be seen as a measure of business reputation [96].

D. RESEARCH GAPS AND OUR CONTRIBUTIONS
The enormously increasing volume of reviews makes it difficult for businesses to retain their reputation. This makes it important to analyze and explore how reviewers choose and rate businesses. Despite the importance of customer choice of business and its implications, very limited studies have tried to explore the effect of star rating on the customer's choice of business. Moreover, to the best of our knowledge, no study has explored changes in business reputation over time. Previously, researchers have examined the impact of different features on the helpfulness of online reviews to ease consumers in making purchase decisions. It is noted that, unlike business and reviewer features, more attention has been paid to the review features. Moreover, the findings of the previous literature on the helpfulness of online reviews and its dimensions are contradictory and need further investigation [97], [98]. In addition, the impact of business choice on the helpfulness of online reviews has not yet been analyzed. Likewise, limited literature exists on reviewer rating behavior and its influence on the helpfulness of their reviews [29]. However, there are more perspectives of user behavior that still need to be studied. To fill this gap and extend the literature, new features are introduced for business, user rating behavior, and user business choice, along with the modification of existing features. A user-focused review selection mechanism is also introduced, along with feature mapping to clean reviews and map features. The proposed features are used alongside existing features to determine their impact on the helpfulness of online reviews. This study aims to answer the following questions: (a) Can the reputation of a business change over time after the first impression? (b) How does a reviewer choose a business based on star ratings? (c) While reviewing a business, what is the rating behavior of reviewers?
(d) Do the reviewer's choice and rating behavior affect the helpfulness of their reviews? (e) What are the important features of the helpfulness of online reviews?
The rest of the paper is organized as follows. Section-II explains the data collection and preprocessing, features, and statistical methods adopted in this study. In Section-III, the results of the study are discussed in detail. Lastly, Section-IV presents the conclusion, implications, limitations, and future work.

II. RESEARCH METHODOLOGY
In this section, we present the proposed framework for profiling user features and modeling the helpfulness of reviews to analyze the importance of different features. A user-based review selection filter is introduced in the data collection and pre-processing stage. Afterward, the entire review history of users and businesses is processed to generate and operationalize new and existing features. The generated features are then fed into the feature mapping component to create the final dataset. The overall framework is presented in Figure 1.

A. DATA COLLECTION AND PRE-PROCESSING
The data from Yelp is used in this study. Yelp is a famous crowd-sourced review platform that was launched in 2004. The Yelp dataset runs from October 12, 2004 to November 14, 2018 and contains information on 1,673,138 users, 6,685,900 reviews, 1,223,094 tips, 200,000 photos, and 192,609 businesses [99]. The most popular business categories include Restaurants, Shopping, and Home and Local Services [17]. The preprocessing steps performed are presented in Figure 2. Firstly, we selected the 481,825 reviews of the Shopping category, whose review volume is substantial and whose reviews vary in nature. After review selection, we excluded 155 reviews written in a language other than English. We then grouped the reviews by user and selected the 94,909 reviews written by users with more than ten reviews to explore behavioral patterns.

FIGURE 2. Steps performed in data pre-processing.

Using the Automated Readability Index (ARI) (explained in the next section), we excluded a further 4,238 reviews. Finally, we selected the remaining 90,671 reviews, written by 4,086 users for 20,811 shopping businesses, to perform the analysis and conclude results. Figure 3 presents the distribution of selected reviews by year, from March 25, 2005 to November 14, 2018. To calculate features, we also make use of the full review history of the selected businesses and users and sort their reviews by date and time. After calculating the features, we mapped them onto our final dataset (DS-2005-18) of 90,671 reviews.
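The filtering steps in Figure 2 can be sketched as a small pipeline. This is a minimal illustration, not the authors' code: the `preprocess` helper and the field names (`categories`, `is_english`, `user_id`) are hypothetical, the language check is stood in for by a precomputed flag, and the ARI screen is omitted for brevity.

```python
def preprocess(reviews, min_user_reviews=10):
    """Keep English reviews of Shopping businesses written by users
    with more than `min_user_reviews` reviews."""
    # Step 1: keep only the Shopping category.
    shopping = [r for r in reviews if 'Shopping' in r['categories']]
    # Step 2: drop non-English reviews (flag assumed precomputed).
    english = [r for r in shopping if r['is_english']]
    # Step 3: group by user and keep prolific users only.
    by_user = {}
    for r in english:
        by_user.setdefault(r['user_id'], []).append(r)
    return [r for revs in by_user.values()
            if len(revs) > min_user_reviews
            for r in revs]
```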

B. FEATURES
In this section, the new and existing features related to review, business, and reviewer used in this study are explained in detail. The descriptive statistics, along with a complete list of features and their description, are given in Table 2.

1) REVIEW FEATURES
Review features have been extensively studied in the literature. In this study, five existing review content features are used to model the helpfulness of online reviews. The useful votes received by a review are used as the dependent variable (R_Helpfulness). Review age (R_Age) is calculated, as in Equation (1), as the number of days between the posting of review i and the data collection date. The length of the review is represented by R_Word_Count and the review rating by R_Stars. The readability of a review (Review_ARI) is calculated using the ARI formula, as in Equation (2), where NumChar is the character count, NumWord the word count, and NumSent the sentence count. The polarity (R_Polarity) and subjectivity (R_Subjectivity) of a review are calculated using TextBlob [100].
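Equations (1) and (2) can be made concrete with a short sketch. The function names and the hard-coded collection date are illustrative assumptions; the ARI constants (4.71, 0.5, −21.43) are the standard ones for the Automated Readability Index, which the text does not spell out.

```python
from datetime import date

def review_age(posted: date, collected: date = date(2018, 11, 14)) -> int:
    """Equation (1): days between posting review i and the data-collection
    date (assumed here to be the dataset's end date)."""
    return (collected - posted).days

def ari(num_char: int, num_word: int, num_sent: int) -> float:
    """Equation (2): Automated Readability Index from character, word,
    and sentence counts, using the standard ARI coefficients."""
    return 4.71 * (num_char / num_word) + 0.5 * (num_word / num_sent) - 21.43
```

A review of 500 characters, 100 words, and 10 sentences, for instance, scores 4.71·5 + 0.5·10 − 21.43 = 7.12.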

2) BUSINESS FEATURES
Previous studies have not given much importance to business and product features. In this study, we have used two existing features, namely the number of check-ins (B_Checkin_Count) and the review volume (B_Review_Count). The B_Review_Count_ij, as in Equation (3), is used with a slight variation: instead of taking the review volume as a constant, we calculated it as the number of reviews for business j posted before review i. Moreover, we have introduced two new features based on a recent study on the cumulative helpfulness of businesses [96]. Similar to the review volume, we calculated the useful votes (B_Helpfulness_Count_ij), as in Equation (4), and the average useful votes (B_Avg_Helpfulness_ij), as in Equation (5), received by business j before review i. In Equations (3), (4), and (5), i−1 represents the reviews for business j prior to review i, whereas N_{i−1} represents the number of reviews received by business j prior to review i.
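A minimal sketch of Equations (3)-(5), assuming the reviews arrive as date-sorted dicts with hypothetical `business_id` and `useful` keys; a single pass keeps running totals, so each review only sees the votes cast before it.

```python
from collections import defaultdict

def business_features(reviews):
    """Attach, to each review i of business j, the number of prior
    reviews (Eq. 3), their total useful votes (Eq. 4), and the average
    useful votes per prior review (Eq. 5). `reviews` must be sorted
    by date."""
    count = defaultdict(int)   # N_{i-1}: reviews seen so far per business
    votes = defaultdict(int)   # cumulative useful votes per business
    for r in reviews:
        j = r['business_id']
        r['B_Review_Count'] = count[j]
        r['B_Helpfulness_Count'] = votes[j]
        r['B_Avg_Helpfulness'] = votes[j] / count[j] if count[j] else 0.0
        count[j] += 1          # only now does review i enter the history
        votes[j] += r['useful']
    return reviews
```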

3) REVIEWER FEATURES
Researchers have studied a large number of reviewer features. In this study, we analyzed the effect of reviewer features on review helpfulness by categorizing them into two groups: (a) business choice and rating behavior, and (b) popularity and experience. The features related to each category are explained in detail in the respective sections below.

a: BUSINESS CHOICE AND RATING BEHAVIOR
As far as behavioral features are concerned, very limited literature exists that has studied these features from the perspective of review helpfulness. Rating deviation, the absolute value of the difference between the review rating and the observed average rating of the business, shows a significant impact on the helpfulness of online reviews. The absolute deviation of the review rating from the observed business star rating (Abs_Dev_R&B) in this study is defined following previous literature [29]. Abs_Dev_R&B_ik, as in Equation (7), is calculated as the absolute value of Equation (6), where Obs_Business_Rating_ij is the average star rating of business j over all reviews published before i. Moreover, following the previous literature, we have introduced two more absolute rating deviations. One is the absolute deviation of the review rating from the average user rating (Abs_Dev_R&U), as in Equation (9); in Equation (8), Avg_User_Rating_ik is the average star rating of user k over all reviews written before i. The second is the absolute deviation of the average user rating from the observed business rating for a user (Abs_Dev_U&B), as in Equation (11). This study defines the rating behavior as ''1'' (Regular) and ''0'' (Irregular) following the two schools of thought in the literature, namely observed average rating (social norm) [84] and attention-grabbing [85]. We took Avg_User_Rating_ik and Obs_Business_Rating_ij as two extremes: any review rating that falls outside these extremes is labeled as ''0'', while the rest of the cases are labeled as ''1''. The labeling of each review i from user k for business j is performed using Equation (12). In addition, to capture the effect of overall behavior, we label the overall behavior of user k using Equation (13).
To study the impact of a user's business choice, we label U_Business_Choice as ''0'' (the user chooses a new business), ''1'' (the observed average rating of the chosen business is lower than the user's average rating), ''2'' (the observed average rating of the chosen business is the same as the user's average rating), and ''3'' (the observed average rating of the chosen business is higher than the user's average rating). The labeling of U_Business_Choice_ik is done using Equation (14). To further explore the effect of the user's choice of business, the overall choice of the user (U_Overall_Choice) is labeled using Equation (15).
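The labeling rules of Equations (12) and (14) can be sketched as follows. The function names are illustrative, and treating a rating that lands exactly on one of the two extremes as regular is our reading of "falls outside these extremes".

```python
def rating_behavior(r_stars, avg_user, obs_business):
    """Equation (12): '1' (regular) when the review rating lies between
    the user's average rating and the observed business rating
    (boundaries inclusive), '0' (irregular) otherwise."""
    lo, hi = sorted((avg_user, obs_business))
    return 1 if lo <= r_stars <= hi else 0

def business_choice(avg_user, obs_business):
    """Equation (14): U_Business_Choice label -- 0 for a new business,
    and 1 / 2 / 3 when the observed business rating is lower than /
    equal to / higher than the user's average rating."""
    if obs_business is None:   # no prior reviews: a new business
        return 0
    if obs_business < avg_user:
        return 1
    if obs_business == avg_user:
        return 2
    return 3
```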

b: POPULARITY AND EXPERIENCE
There is a wide variety of features in this category; however, in the literature, their frequency of use is far lower than that of the review features. The features used in this study include the number of friends (U_Friend_Count), number of fans (U_Fan_Count), number of compliments (U_Compliment_Count), user credibility (U_Credibility), number of reviews (U_Review_Count), number of helpful votes (U_Helpfulness_Count), and average helpful votes (U_Avg_Helpfulness). Equation (16) is used to calculate the credibility of user k, where No of Elite Years is the number of years in which the user was declared elite by Yelp.com. U_Review_Count_ik is the total number of reviews the user has written before review i, calculated as in Equation (17). Similarly, U_Helpfulness_Count_ik is the sum of helpful votes received by reviews prior to the current review i, as in Equation (18). Moreover, the U_Avg_Helpfulness_ik for user k before review i is calculated using Equation (19).

c: FEATURES MAPPING
In the data, there is a set of businesses B = {b_1, b_2, ..., b_j}, a set of users U = {u_1, u_2, ..., u_k} who write the reviews, and a set of reviews R = {r_1, r_2, ..., r_i}. Algorithm 1 is used for feature mapping, where i denotes a review, j denotes a business, and k denotes a user. To calculate the business and reviewer features, we also make use of the full review history of the selected businesses and users and sort their reviews by date and time. After calculating the features, we mapped them to our final dataset (labeled as DS-2005-18 in Figure 1) of 90,671 reviews.
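The mapping loop of Algorithm 1 can be sketched as below: walk the reviews in date order, read each business's and user's running history, and emit one feature row per review. The dict layout and key names are assumptions for illustration, and only a few of the study's features are shown.

```python
def map_features(reviews, businesses, users):
    """Sketch of the feature-mapping pass: each review i sees only the
    history of its business j and user k accumulated before it."""
    rows = []
    for r in sorted(reviews, key=lambda r: r['date']):
        b = businesses[r['business_id']]   # history of business j
        u = users[r['user_id']]            # history of user k
        rows.append({
            'R_Helpfulness': r['useful'],
            'B_Review_Count': len(b['prior_reviews']),
            'U_Review_Count': len(u['prior_reviews']),
        })
        # Only after emitting the row does review i join the histories.
        b['prior_reviews'].append(r)
        u['prior_reviews'].append(r)
    return rows
```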

C. DATA ANALYSIS AND MODELING
In this study, we adopted different data analysis and regression modeling techniques. Descriptive analysis is performed to summarize the business choice and rating behavior of users, whereas comparative analysis is used to report changes in business reputation and user rating behavior over time. The changes in business reputation are observed by comparing review ratings, observed business ratings, and average user ratings for all reviews written for a business, whereas the changes in user behavior are examined by comparing review ratings, observed business ratings, and average user ratings for each review written by a reviewer. This study analyzes the impact of existing and proposed features on the helpfulness of online reviews, as the results of previous studies were contradictory and needed further investigation [98]. The dependent variable (review helpfulness) in this study is a non-negative integer and ranges from 0 to 179, as in Table 2. The literature reveals that better analysis and model estimations have been achieved by using the variable with a logarithmic transformation [101]. Due to the count nature of the dependent variable, count-data regression or standard linear models with data transformation can be used. Firstly, for linear regression, we applied a logarithmic transformation to R_Helpfulness, adding one to prevent logarithms of zero [52], [102], [103]. The regression model based on all features is given by Equation (20). Keeping the dependent variable in its original form, Poisson or Negative Binomial regression models can be used; since the variance and mean of the dependent variable are not equal, we used Negative Binomial regression, as in previous studies [29], [78], [82], [104]. Moreover, for both models, we applied a logarithmic transformation to the independent variables that are right-skewed. The overall effect of using the new, modified, and existing features is analyzed by regression analysis.
The evaluation metrics used for linear regression are the squared correlation ($R^{2}$) and the Root Mean Square Error (RMSE), whereas pseudo-$R^{2}$ and Akaike's Information Criterion (AIC) are used for negative binomial regression. The p-values of the features are used to analyze their significance for the helpfulness of online reviews. Moreover, correlation weights are determined to identify the important features of the helpfulness of reviews.
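The linear-regression arm, with its log(1 + y) transformation and the $R^{2}$ and RMSE metrics, can be sketched with plain NumPy. This is a generic least-squares illustration, not the study's exact estimation of Equation (20); the function name is ours.

```python
import numpy as np

def fit_log_linear(X, y):
    """Fit a linear model of log(1 + helpfulness) on the features and
    return the coefficients with the R^2 and RMSE of the fit."""
    ylog = np.log1p(y)                          # log-transform, avoiding log(0)
    A = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta, *_ = np.linalg.lstsq(A, ylog, rcond=None)
    pred = A @ beta
    ss_res = np.sum((ylog - pred) ** 2)
    ss_tot = np.sum((ylog - ylog.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((ylog - pred) ** 2)))
    return beta, r2, rmse
```

Right-skewed independent variables would be log-transformed in the same way before entering `X`; the negative binomial model keeps `y` untransformed instead.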

III. RESULTS AND DISCUSSION
A. BUSINESS REPUTATION
Two businesses, referred to as A and B, are selected to analyze changes in reputation over time. On Yelp, a business star rating ranges from 1 to 5 and is based on the average star rating of its reviews. The observed business star rating, the average business star rating, and the average user ratings are calculated for the reviews received by a business to explore its reputation. Figure 4 shows the change in the star rating of business A from April 18, 2009 to September 3, 2018. The ratings of 50 selected reviews for business A are mapped in Figure 4 to highlight how business ratings change over time. It is evident that the observed business star rating for business A remained constant over a long period of time and improved in the middle of 2015. The reputation of a business greatly depends on the number of reviews and the review ratings given by users. The reputation of business B is illustrated in Figure 5. One hundred and thirty reviews are selected for business B to highlight the changes in its reputation from February 26, 2007 to September 17, 2018. We can see that the start of business A is comparatively more stable (stable start) than that of business B, whose star rating fluctuates frequently (unstable start) in its early years. This shows that it is very difficult to maintain the reputation of a business in its early years: while the review volume is low, the reputation can change frequently, but it stabilizes afterward. The findings for business reputation are consistent with the results of the previous literature. Wiederhold [89] also reported that it is a challenging task for everyone to set a good first online impression and keep it thereafter. The improvement in the reputation of business A reflects that if a business maintains and improves customer satisfaction, it can improve its reputation over time.
Similarly, we can see that the rating of business B is decreased from 4 to 3.5 due to low star ratings of the last four to five years. Average business rating of business A continues to rise in contrast to the average business rating of business B.
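The observed business star rating described above can be sketched as a running average of the review stars received to date. This is a hypothetical reconstruction, not the paper's exact computation; the half-star snapping is an assumption based on how Yelp displays ratings.

```python
def observed_star_rating(review_stars):
    """Running average of review stars over time, snapped to a half-star grid.

    Assumed sketch of the 'observed business star rating': after each new
    review, the rating is the mean of all review stars so far, rounded to
    the nearest 0.5 (as Yelp displays it).
    """
    ratings = []
    total = 0.0
    for i, stars in enumerate(review_stars, start=1):
        total += stars
        avg = total / i
        ratings.append(round(avg * 2) / 2)  # snap to 0.5-star grid
    return ratings

# A "stable start": the observed rating hovers near 4 stars.
print(observed_star_rating([4, 4, 5, 4, 4]))  # [4.0, 4.0, 4.5, 4.0, 4.0]
```

An "unstable start" like business B's would show the early entries of this sequence jumping between whole- and half-star values as each new review shifts the small running average.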

B. REVIEWER CHOICE OF BUSINESS
Most potential customers these days choose a business based on its star rating on online review platforms [91], [92]. Using Equations (14) and (15), we label the reviewer choice of business. The reviewer choice of business for each review in the dataset is illustrated in Figure 6 (a), whereas Figure 6 (b) presents the overall reviewer choice of business. The per-review results indicate that, compared to their average user rating, 36% of the reviewers reviewed businesses with a higher observed average rating, 34% chose businesses with a lower observed average rating, and 18% chose businesses whose observed average rating matched their average user rating. Moreover, 12% of reviewers reviewed new businesses without online reviews. The overall user choice of business, based on the cumulative choice across all of a reviewer's reviews, shows that only one reviewer exclusively reviews businesses that have not been reviewed before; 70% chose businesses with lower observed average ratings, 2% chose businesses with the same observed average rating, and 28% chose businesses with a higher observed average rating than their user average rating.
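The four choice categories above (higher, lower, same, new) can be illustrated with a small labeling function. This is a hypothetical sketch of the labeling described by Equations (14) and (15), which are not reproduced in this section; the function name and label strings are our own.

```python
def label_business_choice(business_avg, user_avg, business_review_count):
    """Label a reviewer's choice of business relative to their own average rating.

    Assumed reading of the paper's choice labels: a business with no prior
    reviews is "new"; otherwise the observed average business rating is
    compared against the reviewer's average rating.
    """
    if business_review_count == 0:
        return "new"      # business not reviewed before
    if business_avg > user_avg:
        return "higher"   # business rated above the reviewer's own average
    if business_avg < user_avg:
        return "lower"
    return "same"

print(label_business_choice(4.5, 3.8, 120))  # higher
print(label_business_choice(0.0, 3.8, 0))    # new
```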
Since reviewers' patterns in choosing businesses based on star rating are of great interest and use to businesses and practitioners, we further explored the choice of business and specifically report the results based on the observed average business star rating in Figure 7 (a, b). The results show that 46% of the reviewers chose only businesses with a minimum 4-star rating, based on the trend of reviews from 2005 to 2018. However, when we consider the overall choice of users based on all of their reviews, 91% of the reviewers reviewed businesses with less than a 4-star rating. Moreover, we specifically explored reviewers' business choice for the years 2017 and 2018; the results are shown in Figure 8 (a, b). It is interesting to note that the trend of business choice for 2017 is similar to the overall trend from 2005 to 2018. However, for 2018, the number of reviewers who chose a business with less than a 4-star rating increased by 1%. The BrightLocal findings on minimum star rating for 2017 were that 48% of consumers chose a business with a minimum 4-star rating [95]. Similarly, for 2018, our finding for the share of reviewers who only chose a business with a minimum 4-star rating is 45%, compared to 57% reported by BrightLocal [95]. The findings of this study thus differ from the BrightLocal findings by 2% for 2017 and 12% for 2018. BrightLocal relies on survey data from 1,000 users, while the results of this study are based on 90,671 reviews written by 4,086 users. This study also reports the share of reviewers who review new businesses, which has not been explored by previous studies. The results show that 12% of reviewers preferred new businesses from 2005 to 2018, while 10% of reviewers only chose businesses with exactly a 5-star rating. A previous study reported that patients usually prefer to choose physicians with high ratings [91].
Similarly, in the selection of hotel rooms, it has been noted that a 5-star rating significantly affects the user's choice [92]. However, we found that user selection of shopping businesses differs significantly from that of physicians and hotel rooms. In particular, the difference in choice between shopping businesses and physicians is due to the critical nature of the healthcare domain.

C. REVIEWER RATING BEHAVIOR
The ratings given by reviewers are of great importance for a business to maintain its reputation, which attracts potential customers. On Yelp, a reviewer can give a business a star rating ranging from 1 to 5. The statistics of star ratings given by reviewers from 2005 to 2018 are presented in Figure 9 (a, b). The results in Figure 9 (a) show that the majority of reviewers give businesses a 4-star rating, followed by a 5-star rating. Moreover, looking at the overall ratings given by reviewers in Figure 9 (b), the majority of reviewers give a 4-star rating overall, followed by a 3-star rating. We calculated the difference of the review star rating from the observed average business star rating and from the average user rating, as in Equations (6) and (7), respectively. Compared with the observed average star rating of the business, 51% rate higher, 30% rate lower, and 19% give the same review rating. From the perspective of the average user rating, 48% rate higher, 34% rate lower, and 18% give the same review rating.
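The two rating deviations used above and later as features (Abs_Dev_R&B and Abs_Dev_R&U) are absolute differences. A minimal sketch, with the feature names taken from the paper but the function name our own:

```python
def rating_deviations(review_stars, business_avg, user_avg):
    """Absolute deviation of a review's stars from the observed business
    average (Abs_Dev_R&B) and from the reviewer's own average rating
    (Abs_Dev_R&U), as described in the text."""
    dev_rb = abs(review_stars - business_avg)  # Abs_Dev_R&B
    dev_ru = abs(review_stars - user_avg)      # Abs_Dev_R&U
    return dev_rb, dev_ru

# A 5-star review of a 3.5-star business by a reviewer who averages 4 stars:
dev_rb, dev_ru = rating_deviations(5, 3.5, 4.0)
print(dev_rb, dev_ru)  # 1.5 1.0
```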
Using Equations (12) and (13), we label reviewer rating behavior as ''0'' (irregular) or ''1'' (regular). The results of user rating behavior after labeling all reviews are shown in Figure 10 (a). The overall reviewer behavior based on all reviews written by each reviewer is presented in Figure 10 (b). The results show that 58% of reviews deviate from regular behavior and, overall, 60% of reviewers deviate from regular behavior. The overall user behavior thus differs by 2% from the per-review rating behavior. The percentage of reviewers who adopt the attention-grabbing strategy is 10% higher than the percentage who follow the social norm. The analysis of an individual's rating behavior is performed by selecting two reviewers, X (irregular) and Y (regular), labeled using Equation (13). The review rating history of reviewers X and Y, compared to the average user rating and the observed business rating, is plotted in Figures 11 and 12, respectively. The observed business rating and the average user rating are taken as the extremes for each review rating. Review ratings that fall outside both extremes are labeled as irregular behavior; otherwise, they are labeled as regular. For reviewer X, the majority of review ratings fall outside the extreme values, in contrast to the rating behavior of reviewer Y.
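The per-review labeling rule described above can be sketched directly: the observed business rating and the reviewer's average rating form the two extremes, and a rating outside that interval is irregular. This is our assumed reading of Equations (12) and (13), which are not shown in this section.

```python
def label_rating_behavior(review_stars, business_avg, user_avg):
    """Label a review as regular (1) if its star rating lies between the
    observed business rating and the reviewer's average rating (the two
    extremes), and irregular (0) otherwise.

    Hypothetical sketch of the paper's Equations (12)-(13)."""
    low, high = min(business_avg, user_avg), max(business_avg, user_avg)
    return 1 if low <= review_stars <= high else 0

print(label_rating_behavior(4, 3.5, 4.5))  # 1 -> regular: within the extremes
print(label_rating_behavior(5, 3.5, 4.5))  # 0 -> irregular: outside both extremes
```

A reviewer like X would produce mostly 0s over their review history, while a reviewer like Y would produce mostly 1s.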

D. IMPORTANT FEATURES OF REVIEW HELPFULNESS
The results of linear regression are given in Table 3, Columns (1), (2), and (3). The value of R² using review and business features to predict the helpfulness of reviews is 0.165. The value of R² increases to 0.177 when reviewer choice and rating behavior features are added. Finally, by adding reviewer popularity and experience features, R² is boosted to 0.450. The RMSE is reduced from 0.326 to 0.264 by using reviewer features along with review and business features. Table 3, Columns (4), (5), and (6) show the results for negative binomial regression. The results show that the pseudo R² significantly increases when reviewer features are added. Moreover, the decrease in the AIC value also highlights the importance of reviewer features in predicting review helpfulness. From the results of both linear regression and negative binomial regression, it is clear that R² increases only slightly when reviewer choice and behavior features are added, whereas the significant improvement in R² from adding reviewer popularity and experience features demonstrates the explanatory power of these features. The regression model proposed in this study performs better than existing models in terms of R² and RMSE: it achieves the highest R² of 0.450, compared to 0.406 and 0.293, and the lowest RMSE of 0.264, compared to 0.452 reported in previous studies [52], [80]. In addition, the pseudo R² of the proposed negative binomial regression model is also higher than that of a recent study [78]. Looking at the significance levels of the review, business, and reviewer features, all features appear to be significant except R_Subjectivity. Moreover, the significance level of a few features varies from model to model depending on the combination of features used, e.g., R_Polarity, B_Review_Count, U_Business_Choice, and U_Fan_Count, to highlight a few.
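The two linear-model metrics reported above, R² and RMSE, can be computed for any feature matrix with a few lines of least squares. The sketch below uses synthetic data rather than the Yelp features, so the numbers are illustrative only and do not reproduce Table 3.

```python
import numpy as np

def ols_r2_rmse(X, y):
    """Fit ordinary least squares and return R^2 and RMSE, the two metrics
    the study reports for its linear regression models."""
    X1 = np.column_stack([np.ones(len(X)), X])     # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # least-squares coefficients
    pred = X1 @ beta
    ss_res = np.sum((y - pred) ** 2)               # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
    r2 = 1 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    return r2, rmse

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in for the review/business/reviewer features
y = X @ np.array([0.5, -0.2, 0.8]) + rng.normal(scale=0.5, size=200)
r2, rmse = ols_r2_rmse(X, y)
print(round(r2, 2), round(rmse, 2))
```

Adding informative columns to `X` raises R² and lowers RMSE, which is the pattern the paper observes when reviewer popularity and experience features are added.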
Our findings on R_Word_Count are consistent with those reported in recent studies [69], [71], [73], [82], although one study reported a negative relationship between review length and the helpfulness of online reviews [83]. Previous studies reported inconsistent findings on the direction of the relationship for R_Age [29], [58], [68], [75]; our results show a negative relation between R_Age and review helpfulness. We find that the relationship between B_Checkin_Count and review helpfulness is negative, whereas previous literature reported it as positive [52]. The two proposed business features, B_Helpfulness_Count and B_Avg_Helpfulness, show a significant positive relation with review helpfulness. The significant positive relationship between Abs_Dev_R&B and review helpfulness is also consistent with the findings of the literature [29]. Moreover, the significant positive relationship between the proposed rating deviation Abs_Dev_R&U and review helpfulness shows that the more a reviewer deviates from past rating behavior, the more helpful votes the review attracts. Similarly, the proposed U_Rating_Behavior shows a positive relation, whereas U_Overall_Behavior negatively affects review helpfulness. The introduced features U_Business_Choice and U_Overall_Choice both show a significant negative relation with review helpfulness. This indicates that reviewers who write reviews for new businesses are likely to attract more helpful votes. The significance and direction of the relationship for most of the features related to reviewer popularity and experience are also consistent with the findings of previous studies [68], [71], [75], [80]. However, we find mixed results for U_Review_Count: a negative relation in the linear regression and a positive relation in the negative binomial regression.
Previous studies also reported inconsistent findings on the direction of the relationship between U_Review_Count and review helpfulness [29], [52], [80]. U_Helpfulness_Count also shows a mixed relationship, while U_Avg_Helpfulness has a positive relationship with review helpfulness. A recent study likewise reported a positive relationship between a reviewer's average helpfulness and review helpfulness [77]. The most important features that make a review more helpful include R_Word_Count, U_Review_Count, and U_Avg_Helpfulness. Moreover, the proposed Abs_Dev_R&U shows a greater impact on review helpfulness than the Abs_Dev_R&B used by a previous study [29].
Furthermore, we examine the correlation weights, which range from 0 to 1 for all features; a higher weight represents a stronger relationship with the dependent variable (review helpfulness). The weights for all features used in this study are presented in Figure 13. The average helpfulness of the reviewer (U_Avg_Helpfulness) is the feature most correlated with review helpfulness. In contrast to the review features, most of the reviewer features are highly correlated with review helpfulness. In addition, the correlation weights for all proposed features, such as the rating deviations, business features, and user choice, are comparable to the weights of existing features, except for the rating behavior features. The proposed business feature B_Avg_Helpfulness shows a stronger relationship with review helpfulness than most of the previously studied review features, e.g., R_Stars, R_Polarity, and R_Age.
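Reading a feature's correlation weight as the absolute Pearson correlation with helpfulness (our assumed interpretation of the 0-to-1 "correlation weights"; the paper does not spell out the formula), the computation is straightforward. The feature values below are toy data, not Yelp features.

```python
import numpy as np

def correlation_weights(features, target):
    """Absolute Pearson correlation of each feature column with the target,
    yielding a 0-to-1 weight per feature (assumed reading of the paper's
    'correlation weights')."""
    target = np.asarray(target, dtype=float)
    weights = {}
    for name, col in features.items():
        r = np.corrcoef(np.asarray(col, dtype=float), target)[0, 1]
        weights[name] = abs(r)  # direction is dropped; only strength remains
    return weights

helpfulness = [0, 1, 3, 2, 5, 4]  # toy helpful-vote counts
feats = {
    "U_Avg_Helpfulness": [0.1, 0.4, 0.9, 0.6, 1.5, 1.2],  # tracks helpfulness
    "R_Age":             [9.0, 7.0, 4.0, 6.0, 1.0, 2.0],  # moves opposite to it
}
w = correlation_weights(feats, helpfulness)
print({k: round(v, 2) for k, v in w.items()})
```

Note that the absolute value makes a strongly negative feature (like R_Age here) receive a weight as high as a strongly positive one; the direction of the relationship comes from the regression coefficients, not these weights.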

IV. CONCLUSION
This study examines changes in business reputation and explores the patterns of users' choice of business and their rating behavior. In addition, new business and reviewer features are introduced, and their impact on the helpfulness of reviews, along with that of existing features, is analyzed. The analysis is carried out using 90,671 shopping-category reviews from Yelp.com, selected using a user-focused filter. All the generated features are mapped to the respective reviews to create the final dataset. The results show that the reputation of a business may change frequently in its early years due to the lower volume of reviews, while over a longer period the change in reputation is slight. A business with a stable start is likely to maintain and improve its reputation later on, compared to a business that faces an unstable start. Almost half of the users opted for a business with a minimum 4-star rating, while only 12% of users visit new businesses. When rating a business, the majority of reviewers write 4-star reviews, and more than half of the reviewers' rating behavior is classified as irregular. The findings on the significance of most of the features are consistent with the literature, while some features, e.g., review polarity, show mixed behavior. The most important features that make a review more helpful include average user helpfulness, number of user reviews, average business helpfulness, and review length. Moreover, the newly introduced reviewer and business features also appear to be more significant than previously used features. Reviewer popularity and experience appear to be the most important, as adding these features significantly improves R² compared to the reviewer choice and behavior features.
Like other studies, this study is not without limitations. First, it focuses only on reviews from Yelp.com; future work will consider a combination of multi-platform reviews. Second, it focuses only on the shopping category; future studies should take advantage of these features and study their relevance to other business categories. Third, this study uses a dataset of long-term (2005-2018) reviews, so its findings should be tested against datasets of short-term reviews. Fourth, this study finds that review age has a negative relationship with the helpfulness of online reviews; future work should investigate a possible inverted U-shaped relationship between review age and review helpfulness. In addition, future work will also consider using the proposed features and machine learning algorithms to predict the helpfulness of online reviews. This study has both theoretical and practical implications. From a theoretical perspective, it helps researchers reconcile the inconsistent findings of the existing literature. From a practical point of view, it guides reviewers to write more helpful reviews, and businesses to maintain and improve their reputation by monitoring the business choice and rating behavior of users.
VOLUME 8, 2020