Introduction
In April 2021, the European Commission (EC) proposed a set of rules to regulate artificial intelligence (AI) systems operating across Europe [1], namely the AI Act. This was an important step in a long-term process in which the European Union developed its approach towards trustworthy AI [2], setting policy agendas [3] and ethics guidelines [4], among others. This effort to shape European AI policy has often been criticized by different sectors and for different reasons. One of the most controversial issues concerns the classification of AI-based systems into risk-based categories, as these might not be fully representative of the real capabilities and impact of such systems. Hence, trustworthy AI has been simplified into an enabler of users' acceptance of AI systems based on their risks [5]. There is also an ongoing debate centered on the balance between promoting innovation and ensuring adequate safeguards against the potential risks associated with AI technologies. The interests and perspectives of all stakeholders, including European citizens, need to be considered before regulation is implemented and enforced, since for now most of the responsibility rests with AI providers (i.e., developers) [6].
To build trust in AI, it is fundamental to raise awareness of the technology and to address key issues related to transparency, accountability, and the protection of fundamental rights in AI deployment, thereby improving the attitude of citizens and end-users. Before the release of the AI Act, the EC sought feedback from different stakeholders to ensure inclusive policy development, such as the consultation run from February to June 2020 to gather opinions on the White Paper on AI [7]. Usually, these consultations invite reflection on specific actions or policy proposals and reveal only partial information, if any, about what people think of AI and its impact on society.
Knowing people's views and perceptions is key to deploying effective governance mechanisms and integrating rules into society. In this article, we aim to fill this gap and report the results of a survey investigating the knowledge and perception of AI in Europe. To this end, we designed, developed, and validated a new questionnaire, the Perceptions on AI by the Citizens of Europe (PAICE) questionnaire, structured around three dimensions: awareness, attitude, and trust. Based on a computer-assisted web interview (CAWI) methodology, we collected and analyzed the opinions of 4006 European citizens from eight countries (France, Germany, Italy, Netherlands, Poland, Romania, Spain, and Sweden), stratified by age, gender, and geographical area. With this article, we contribute an instrument for investigating people's opinions on AI (the PAICE questionnaire) and outline key trends emerging from our data collection. Our contribution is complemented by policy implications based on the identified trends.
The collected responses show that respondents' self-assessed knowledge of AI is low, while their attitude is very positive and varies slightly depending on the context of use (e.g., approval is lower when AI is applied to human resources management). The measures considered most important for increasing trust in the AI ecosystem include the introduction of laws by national authorities, transparent communication by AI providers, and education activities. Among the entities trusted to ensure a beneficial use of AI, universities and research centers rank higher than other organizations (e.g., national governments and tech companies). The statistical analysis shows that the questionnaire has good internal consistency and adequate validity.
We analyze the results of the survey and identify a few contrasting perceptions that may reflect three broader social trends: 1) approval of a hyped, but poorly known, technology; 2) disconnect from public AI policies; and 3) poor engagement with AI education and training. We discuss how these trends may hinder the creation of a trustworthy AI culture and suggest a few recommendations. Our findings call for greater consideration of people's views and participation in AI policy-making, especially given the rapid transformations introduced by AI into society and the abundance of policy efforts by states and intergovernmental organizations [1], [8], [9], [10], [11], [12].
A. Related Work
AI and trust are the subject of a vast academic literature investigating shared principles among ethics guidelines [13], [14], [15], challenges and future directions [16], [17], [18], [19], [20], and factors influencing users' trust in AI [21]. More recently, specific scales have been developed to measure general attitudes toward AI [22], [23], [24]. In this section, we focus on previous surveys analyzing citizens' awareness, trust, and attitude towards AI from different perspectives.
In a global study surveying 10 000 citizens spanning eight countries across six continents [25], respondents reported a mix of positive and negative feelings about AI. In a similar study, respondents in the U.K. expressed a markedly negative view of AI while showing a reasonable understanding and awareness of the technology [26]. The U.S. population has been surveyed on a key dimension of trust: the perception of governance [27], [28]. While most people (especially older segments) find the issue very important, they report little trust in the actors who have the power to develop and manage AI (e.g., companies, universities, and U.S. agencies). Another U.S.-based study investigated the ethical preferences of different groups of people and found that AI practitioners' value priorities differ from those of the general public [29].
Studies focused on the perception of AI in Europe are not entirely new. In an EU-wide survey, the authors focused on a notion of AI centered around robotics, finding attitudes to be generally positive, with concerns related to job losses [30], later confirmed in a follow-up study [31]. These are generic studies of EU public opinion about science and technology, with only a marginal focus on AI. A subsequent survey on opinion about AI highlighted discrimination and lack of accountability as key concerns for European citizens, and a belief that public policy intervention is needed, shared by a majority of respondents [32].
Recently, Kerr et al. [33] analyzed the positive and negative expectations of 164 individuals visiting a science gallery exhibition in Dublin. The study found that awareness of AI is relatively good, that perceived opportunities relate to economic growth and social progress (e.g., a positive impact on medicine, science, and the environment), and that concerns center on automation, followed by privacy and surveillance. Sartori and Bocca [34] examined awareness of AI, emotional responses to narratives, and the perceived likelihood of future scenarios in Italy. The authors noted a positive correlation between the level of digital expertise and general knowledge of AI, and showed an important gender divide in the emotional response to narratives, with women more concerned than men across all scenarios. Kieslich et al. [35] investigated how German people prioritize different ethical principles (transparency, fairness, nonmaleficence, responsibility, beneficence, privacy, and machine autonomy) with regard to the application of AI to fraud detection. The study found that all ethical principles are equally important to respondents but that different preference profiles for ethically designed systems exist.
B. Research Questions
The present work departs from the existing literature in two fundamental ways. First, it takes AI as its main target, rather than treating it as part of broader investigations into science and technology [31], connecting different perspectives (such as awareness and trust) with specific use cases. Second, it aims to reach a large population spanning multiple European countries and demographic groups, unlike studies focused on a single country or demographic [33], [34], [35].
The questionnaire was developed in the context of a Horizon 2020 project by a multidisciplinary team of researchers. The research questions addressed by the team are the following.
RQ1: To what extent are EU citizens familiar with AI and the surrounding debate? This covers aspects concerning citizens’ awareness and competency such as: what people think they know about AI, where they think AI is applied, what is the perceived impact of AI, and which EU initiatives addressing ethical and legal concerns they are aware of.
RQ2: To what extent do EU citizens approve of AI? This research question connects to citizens’ attitude towards AI and its use in some specific sectors or contexts of application (such as job recruitment).
RQ3: What could contribute to increasing citizens' trust? This question investigates citizens' priorities for promoting the responsible development of AI in terms of actions, actors, and ethical requirements.
These questions guided the development of the questionnaire around the dimensions of awareness, attitude, and trust. The structure of the questionnaire was also explored in our analysis (i.e., validity and reliability). This allowed us to identify which items of the questionnaire can be used to validate the dimensions suggested by the team of experts who designed the research instrument.
The remainder of the article is organized as follows. First, we present the methodology guiding the survey design. Next, we report the results of the survey according to the dimensions of interest (i.e., awareness, attitude, and trust) and analyze the validity and reliability of the questionnaire. We then discuss the results, pointing out implicit tensions and potential barriers to the development of inclusive AI policy processes, and make recommendations to improve current efforts, especially at the European level. Finally, we summarize our findings and suggest future research directions.
Methods and Materials
A. Survey Method
This survey was conducted by the market research agency Marketing Problem Solving (MPS), based in Italy [36]. The survey was carried out through online interviews (CAWI) on the basis of a structured questionnaire, with an average completion time of 20 min. MPS programmed the questionnaire script, hosted it on a website on its own web server, and managed the data collection process.
The invitation to complete the questionnaire was sent by e-mail to members of an online panel who voluntarily agreed to share their opinions. To facilitate the task, panel members received the questionnaire in their own language. The respondents were free to drop out at any time and had the opportunity to go back to previous items and change their responses. Respondents’ information was recorded in compliance with the General Data Protection Regulation (GDPR) and the Italian legislation on data protection and privacy.
From the 1st to the 15th of June 2021, MPS completed a total of 4006 interviews in eight European countries: France, Germany, Italy, Netherlands, Poland, Romania, Spain, and Sweden. Countries were selected so as to cover different geographical areas of Europe (southern, central/eastern, northern, and western). Though our selection is not representative of the full European population, the selected countries differ in various respects, such as the quality of democratic processes [37], financial prosperity [38], and the level of digital skills [39]. The survey was completed by individuals aged between 18 and 75 years. Quotas were imposed to ensure the representativeness of the sample with respect to gender, age group (18–34, 35–54, 55–75), and geographical area of residence.
Before undertaking the survey, MPS tested the questionnaire with a sample of panel members to assess the clarity of instructions and the average completion time. MPS monitored the whole interview process to ensure the quality of responses, e.g., by removing participants who completed the survey too quickly or provided contradictory answers.
The original version of the survey was developed and revised in English; translations into the other languages (Italian, Spanish, German, Polish, French, Romanian, Dutch, and Swedish) were then produced by professional translators or native-speaker experts.
1) Population
To obtain a quota sample, members of the population were first divided into nonoverlapping subgroups of units called strata (by country); then, a sample was selected from each stratum based on city size, age group, and gender. The sample consisted of 4006 individuals with equal representation for each country (12.5%). Stratification choices aimed at creating a diverse sample in each country.
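A minimal sketch of this quota-sampling logic is given below, assuming a hypothetical panel DataFrame with columns country, city_size, age_group, and gender; the proportional allocation rule within each stratum is an illustrative assumption, not MPS's actual procedure.

```python
# Hypothetical sketch of quota sampling by stratum; the panel columns
# ("country", "city_size", "age_group", "gender") are assumptions.
import pandas as pd

def quota_sample(panel: pd.DataFrame, per_country: int, seed: int = 0) -> pd.DataFrame:
    samples = []
    for _, stratum in panel.groupby("country"):
        # Allocate the fixed country quota proportionally to each
        # city-size x age-group x gender cell within the stratum.
        for _, cell in stratum.groupby(["city_size", "age_group", "gender"]):
            n = round(per_country * len(cell) / len(stratum))
            samples.append(cell.sample(n=min(n, len(cell)), random_state=seed))
    return pd.concat(samples, ignore_index=True)

# Example: 4006 respondents split evenly across eight countries (~501 each).
# sample = quota_sample(panel, per_country=4006 // 8)
```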
The sample was composed of individuals in the 18–75 age range.
With reference to formal education, the descriptive analysis highlighted that 40% of the respondents had the highest level of formal education (bachelor, master, or doctoral degree). Note that this percentage is higher than the share of European citizens with tertiary education (i.e., also including trade schools and vocational education) which is estimated at 31% [40]. The choice of the survey methodology, based on online interviews, possibly facilitated the participation of subjects with higher levels of education.
To investigate confidence with information and communication technology, we presented respondents with an item assessing their level of competence in digital skills on a five-point ordinal scale from almost no knowledge to advanced knowledge. We observed that 44% of the respondents have an intermediate level of competence in digital skills. French and German respondents feel the least competent: those reporting low competence represent 31.7% and 34.5%, respectively, of the population surveyed in each country. The countries reporting the highest level of competence are Spain and Italy, where respondents with intermediate or advanced knowledge amount to 82.9% and 79.8%, respectively. For more details on digital skills and formal education, see tables "digital skills" and "education" in the Appendix section (digital skills).
2) Questionnaire Design
The PAICE questionnaire was created by a group of researchers from different backgrounds (AI & computer science, philosophy, engineering, psychology, and communication) including the authors of the present work.
The design of the questionnaire took six months, from January to June 2021, during which the group met on a monthly basis. In the early stages of the design process, the group collected and analyzed the existing literature and previous surveys at the European and worldwide level. Based on the literature review, the group identified the research questions and subsequently defined the questions for the research instrument. After a refinement process, the group agreed on a total of 14 items, including Likert scale, dichotomous, multiresponse, and ranking items. Items were organized according to the three dimensions introduced above (awareness, attitude, and trust) so as to address the starting research questions. An overview of the structure of the questionnaire, with question types and the topic of each item, is reported in Table II. Note that some questions, since they were applied to different sectors, policy measures, or entities, were split into subitems (e.g., Q7_1 to Q7_10).
In addition, the questionnaire included a control question about the perceived impact of AI, a question investigating interest in attending a free course on AI, and seven questions on sociodemographic aspects (i.e., age group, gender, geographical area, population size, job sector, level of education, and digital expertise). The control question, a repetition of item Q3 (see Table II), was added to assess possible changes in opinion after the completion of the questionnaire. The English version of the full questionnaire is available in the Appendix section (questionnaire).
Likert scale items ranged from 1 to 5, where 1 referred to negative or low values (e.g., "not at all", "never", "not important at all", and "strongly disapprove") and 5 to positive or high values (e.g., "a lot", "always", "very important", and "strongly approve"). For item Q5, we also added the option "I don't know" to accommodate respondents without a clear opinion on the topic (awareness of interaction). We chose the 5-point Likert scale because it is widely used in social science research to study human attitudes and perceptions [41]. Though the optimal number of choices in a Likert-type scale is disputed [42], we opted for a 5-point scale to ensure item simplicity and intelligibility [43], [44].
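As an illustration of how such responses can be prepared for analysis, the snippet below maps Likert labels to the 1–5 scale and treats "I don't know" (offered only for Q5) as missing, consistent with its later exclusion from the factor analyses; the intermediate labels are hypothetical, since the anchors quoted above only fix the scale endpoints.

```python
# Illustrative encoding of Likert responses; intermediate labels are
# assumptions, as the questionnaire text quoted above anchors only 1 and 5.
import numpy as np
import pandas as pd

LIKERT_MAP = {
    "not at all": 1, "a little": 2, "somewhat": 3, "quite a bit": 4, "a lot": 5,
    "never": 1, "seldom": 2, "sometimes": 3, "often": 4, "always": 5,
    "I don't know": np.nan,  # Q5 only; excluded from the factor analyses
}

def encode_likert(responses: pd.Series) -> pd.Series:
    """Map textual Likert labels to the 1-5 numeric scale."""
    return responses.map(LIKERT_MAP)
```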
To offer a common ground to all respondents, we introduced the following definition of AI at the beginning of the questionnaire: "AI refers to computer systems that can perform tasks that usually require intelligence (e.g., making decisions, achieving goals, planning, learning, reasoning, etc.). AI systems can perform these tasks based on objectives set by humans with a few explicit instructions." Given the heterogeneity of the consulted population, we chose a simple definition intelligible to a large audience.
3) Statistics
To explore the theoretical dimensions structuring the PAICE (awareness, attitude, and trust), an exploratory factor analysis (EFA) and a confirmatory factor analysis (CFA) were performed. The aim was to evaluate the robustness of the items in the questionnaire. To do this, we randomly split the sample into two subsamples, performing the EFA on the first and the CFA on the second.
The EFA was performed to determine the number of fundamental (latent) constructs underlying the set of items and to quantify the extent to which each item is associated with a construct [45]. In this context, the EFA allows us to study the strength of the relations between the dimensions identified by the team of experts who designed the questionnaire and the associated items. Before performing the EFA, two criteria were tested to determine whether factor analysis was appropriate: the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity [46], [47]. A KMO index close to 1 and a significant Bartlett's test indicate that the data are suitable for factor analysis.
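The following sketch shows how these adequacy checks and the EFA could be reproduced in Python with the factor_analyzer package. Note that factor_analyzer operates on Pearson correlations by default, whereas our analysis used a polychoric matrix (see the Appendix), so this is an approximation under stated assumptions.

```python
# Sketch of the adequacy checks and EFA using the factor_analyzer package.
# Caveat: factor_analyzer uses Pearson correlations by default, whereas the
# reported analysis used a polychoric matrix, so this is an approximation.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def run_efa(items: pd.DataFrame, n_factors: int = 3) -> FactorAnalyzer:
    data = items.dropna()
    chi2, p = calculate_bartlett_sphericity(data)  # sphericity assumption
    _, kmo_total = calculate_kmo(data)             # sampling adequacy
    print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.3g}), KMO = {kmo_total:.2f}")

    # Principal (axis) factoring with an oblique rotation, mirroring the
    # settings described in the Appendix.
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="oblimin")
    fa.fit(data)
    return fa  # fa.loadings_ holds the item-factor loadings
```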
To assess the internal consistency of the EFA solution, we calculated Cronbach's α.
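For reference, a minimal implementation of Cronbach's α from its definition, where `items` is a hypothetical respondents-by-items array for a single factor:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance);
# `items` is a hypothetical respondents-by-items array for one factor.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    items = items[~np.isnan(items).any(axis=1)]   # listwise deletion
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```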
Finally, the validity of the factor structure derived from the EFA was evaluated using a CFA. The CFA implementation was based on a polychoric matrix and the robust diagonally weighted least squares (RDWLS) extraction method, which is more suitable for ordinal data than other extraction methods [51].
We assessed the fit of the model using standard criteria, including the root mean squared error of approximation (RMSEA).
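A hedged sketch of how such a CFA could be run with the semopy package is given below; the factor-item assignments follow the EFA solution reported in the Results section, and the DWLS objective is used as a stand-in for the RDWLS extraction described above.

```python
# Sketch of the CFA with the semopy package; the measurement model follows
# the EFA solution reported in the Results section, and obj="DWLS" stands
# in for the RDWLS extraction described in the text.
import pandas as pd
import semopy

MODEL_DESC = """
awareness =~ Q7_2 + Q7_4 + Q7_5 + Q7_6 + Q7_7 + Q7_9 + Q7_10
attitude  =~ Q8_1 + Q8_2 + Q8_3 + Q8_4 + Q8_6 + Q8_7
trust     =~ Q14_2 + Q14_4 + Q14_6
"""

def run_cfa(data: pd.DataFrame) -> pd.DataFrame:
    model = semopy.Model(MODEL_DESC)
    model.fit(data.dropna(), obj="DWLS")  # diagonally weighted least squares
    stats = semopy.calc_stats(model)      # fit indices, one row per model
    return stats[["RMSEA", "CFI", "TLI"]]
```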
B. Limitations
This work is not without limitations. Although we tried to represent different European areas, the sample does not cover all European countries; thus, our analysis may not be representative of the opinions of all EU citizens. As we suggest in the conclusion, extending the questionnaire to other countries will give a more complete picture of European society. In addition, our questionnaire administration methodology (CAWI) assumes that the target population has access to the internet and is familiar with web navigation. This choice could have affected the selection of the population interviewed, which may be skewed toward people with higher education levels and/or wealthier socioeconomic status. We acknowledge that this is an important concern for the quality of a study; to overcome this limitation, scholars may consider using paper-based questionnaires for the segments of the population less familiar with digital technologies. Another limitation concerns the measurement of awareness. In this study, we focus on self-reported awareness, which may suffer from subjective and contextual factors. Objective knowledge about AI is another important dimension of awareness; its rigorous measurement would require the development of a specific methodology that goes beyond the scope of this work. However, for the purposes of this study, self-reported knowledge can be as informative as actual knowledge: people's perceived knowledge level, even if inaccurate, can influence their behavior (e.g., willingness to engage in educational opportunities) and provide valuable insights for policy making. Finally, since academia has not agreed on a common definition of AI in seven decades, we prioritized an accessible and understandable definition to gauge broad public perceptions. This approach aligns with widely accepted notions of AI in both scholarly and popular discourse [53], [54], while avoiding overly technical details that could bias perceptions through misunderstandings of its technological components.
Results
The responses to the questionnaire are presented with respect to the three dimensions: awareness, attitude, and trust. Aggregated responses to all items, with descriptive statistics, are reported in tables "Likert-scale items" and "non-Likert scales" in the Appendix section (responses). Responses across groups were compared using the Kruskal–Wallis test, where the p-value indicates whether differences between groups are statistically significant.
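As an illustration, a Kruskal–Wallis comparison of one item across countries could be run with scipy as follows; the DataFrame column names are assumptions.

```python
# Illustrative group comparison: a Kruskal-Wallis test of one Likert item
# across countries with scipy; the column names are assumptions.
import pandas as pd
from scipy.stats import kruskal

def kruskal_by_group(df: pd.DataFrame, item: str, group: str = "country") -> float:
    samples = [vals.dropna() for _, vals in df.groupby(group)[item]]
    _, p_value = kruskal(*samples)
    return p_value

# Example: p = kruskal_by_group(df, "Q2")  # general attitude across countries
```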
Finally, we report the results of the analysis performed to assess the questionnaire's validity and reliability.
A. Awareness
In Fig. 1, we represent the percentages of responses to the Likert scale items connected to awareness. Blue and red segments identify the two extreme positions: high and low levels of awareness, respectively. The largest red segment, including the lowest scale values (i.e., 1 and 2), regards self-assessed competence in AI (Q1). On this item, almost half of the respondents (49.5%) reported having low or no knowledge, while only 20.9% considered their knowledge to be advanced or expert level. Analyzing the results by country, Germany and the Netherlands have the highest percentages of respondents who feel less knowledgeable, at 66% and 63%, respectively. Looking at gender, the percentage of individuals who feel less competent is greater for males (55%) than for females (43%). With respect to age, the portion of individuals with low or no competence is higher for seniors (63%) and lower for young respondents (32%).
Fig. 1. Responses to Likert scale items associated with awareness. Low-scale values (1 and 2) are represented by red-like colors, whereas high-scale values (4 and 5) are represented by blue-like colors. Item Q7 is split into subitems regarding the perceived presence of AI in ten different sectors.
When asked about being aware of interacting with a product or service based on AI (Q5), only 26.5% reported being often or always aware, while 24.7% reported never or seldom being aware, and 12.6% chose the "I don't know" option. In Germany, the fraction of people who never or seldom feel aware rises to 32%. Male respondents declared a slightly higher rate of low or no awareness of interaction (25%) compared to females (23%). The group of senior respondents had the highest percentage of answers expressing unawareness during interaction (28%).
In relation to the impact of AI on their daily lives (Q3), half of the respondents (53.4%) felt that it has somewhat or a lot of impact, while 16.7% answered "not so much" or "not at all". The perception of (high) impact is greatest in Spain (73%) and lowest in Poland (33%); Poland is also the country with the highest fraction of answers reporting a low perceived impact of AI (29%).
Items Q7_1 to Q7_10 assessed to what extent respondents feel AI is used in distinct sectors across Europe. Military (67.9%) and manufacturing (66.5%) show the highest fractions of respondents perceiving AI as somewhat or very present. On the other hand, human resources (50.1%) and agriculture (51.4%) show the lowest perceived presence of AI.
Regarding respondents' familiarity with the European normative and ethical framework (Q4), two out of three respondents (65.6%) have heard about the GDPR, while only about one out of three were aware of the trustworthy AI guidelines or the AI Act (28.3% and 29.8%, respectively).
Participants were also presented with a list of applications and asked which ones may contain AI components (Q6). Facial recognition apps, content and product recommendations, search engines, traffic navigation apps, and car ride-sharing apps were the most identified applications, selected by half of the respondents. Options where AI plays a more limited role, such as calculators and text editors, were selected by 32.6% and 26.3% of participants, respectively. Finally, 7.2% of respondents selected "none of the above", hence did not identify any AI-based application.
B. Attitude
In Fig. 2, we report the percentages of responses to Likert scale items associated with attitude, where blue segments represent a (very) positive inclination and red segments a (very) negative one.
Fig. 2. Responses to Likert scale items associated with attitude. Low-scale values (1 and 2) are represented by red colors, while high-scale values (4 and 5) are represented by blue colors. Item Q8 is split into subitems regarding the attitude towards AI in ten different sectors.
Regarding their general attitude towards AI (Q2), 63.4% of the respondents reported approving or strongly approving of AI. The most receptive countries were Romania and Spain, with almost 80% approval, while in France fewer than 50% of participants declared approval of AI. With respect to gender, females were more positive than males, with approval or strong approval at 68% and 59%, respectively. When considering age, the youngest class of respondents reached the highest rate of approval (70%), while the group of seniors reported the lowest (58%).
Items Q8_1 to Q8_10 aimed to further understand how approval varies by sector of application. Law enforcement and the environment have the highest acceptance, with an average of 67% of participants opting for approval or strong approval, followed by manufacturing, healthcare, and agriculture. Human resources presents the lowest acceptance rate (47.3%) and the highest disapproval rate, with 21.2% of respondents disapproving or strongly disapproving.
We also considered two specific use case scenarios: Q9 presents an AI-based system that screens candidates' resumes and selects those who can access the interviewing stage [55]; Q10 introduces a smart meter to reduce energy consumption, inspired by demand-side management [56], that leverages AI to recommend more efficient usage and provide personalized offers from energy providers. While the proportion of neutral positions is approximately the same for both scenarios, approval is significantly higher for the smart meter, with 58.3% of the respondents feeling fairly or very comfortable, as opposed to 44.7% for the resume screening system. Again, we observed statistically significant differences among countries. Poland is the most receptive country, with about 67% of respondents feeling fairly or very comfortable in both scenarios. The trends for gender and age groups are similar to those found for the general attitude, with a preference for the smart meter scenario.
C. Trust
In Fig. 3, we represent the responses to Likert scale items referring to trust. As for the previous dimensions, colors indicate respondents' satisfaction with actions and entities intended to ensure trust. When asked to assess the importance of a set of policy measures to increase trust (Q12), 76% of the respondents rated as important or very important the deployment of a set of laws by a national authority that guarantees ethical standards and social responsibility in the application of AI. Romania and Germany are the countries where this percentage is largest, at 90% and 82%, respectively. Regarding age, a large proportion of senior respondents consider this measure important (81%), followed by young (71%) and middle-aged respondents (68%). The remaining measures were also highly supported (by more than 50%); the least valued was the creation of diverse design teams and the consultation of different stakeholders throughout the entire life cycle of the AI product (64.4%). Education as a remedy to improve citizens' trust (Q13) was also largely approved, with 71.4% agreement or strong agreement. Note that this percentage increases significantly in Romania and Spain, where agreement reaches 85% and 83%, respectively, while it falls to 59% in France.
Fig. 3. Responses to Likert scale items associated with trust. Low-scale values (1 and 2) are represented by red-like colors, while high-scale values (4 and 5) are represented by blue-like colors. Item Q12 is split into subitems regarding the perceived importance of six different policy measures. Item Q14 is split into subitems related to the perceived trust in six different entities.
With respect to the entities trusted to ensure a beneficial use of AI (Q14), two out of three participants (67%) rated universities and research centers as entities that can be trusted a lot or somewhat. This percentage varies across countries, with Romania reporting the highest value (77%) and France the lowest (55%). Social media companies are the least trusted entity, with only 35% of respondents trusting them. Across countries, this percentage is highest in Italy (47%) and lowest in the Netherlands (26%); across age groups, trust in social media is lowest for senior respondents (24%) and highest for young respondents (46%).
With Q11, we asked respondents to select the three most important aspects, out of seven, that an organization should consider when developing or using AI in relation to the previous scenarios (Q9 and Q10). Interestingly, there is a clear preference for technical aspects related to security, robustness, and human oversight, with privacy and data protection leading as a choice for 30.8% of respondents. On the other hand, the societal and environmental impact of AI applications was selected by only 5% of the respondents as a first or second choice.
D. Questionnaire Validity and Reliability
Among the 4006 participants, 501 ticked the response option "I don't know" for item Q5 (i.e., "How often are you aware of interacting with a product/service based on or including AI?"), corresponding to 12.6% of the sample. These responses were therefore excluded from the statistical analysis. A qualitative analysis was conducted to explore the content of each item and identify those with multicollinearity issues [57], [58].
The Kaiser–Meyer–Olkin (KMO) test and Bartlett's test of sphericity showed that the data are appropriate for the EFA, with a KMO index above the commonly accepted threshold and a statistically significant Bartlett's test.
The items loading on each factor suggest that factor 1 (26% of the total variance) refers to awareness and includes seven items (Q7_2, Q7_4, Q7_5, Q7_6, Q7_7, Q7_9, Q7_10); factor 2 (25% of the total variance) refers to attitude and includes six items (Q8_1, Q8_2, Q8_3, Q8_4, Q8_6, Q8_7); and factor 3 (10% of the total variance) refers to trust and includes only three items (Q14_2, Q14_4, Q14_6).
We also examined the factor correlation matrix of the final EFA to assess discriminant validity. The correlations between all three factors were positive: the largest was between factor 1 and factor 2 (0.52), and the smallest between factor 2 and factor 3 (0.37). Since no correlation coefficient exceeded 0.7, the factors derived from the EFA show adequate discriminant validity.
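One way to approximate this check, assuming the EFA was fitted as in the earlier sketch, is to correlate the estimated factor scores; note this only approximates the factor correlation matrix produced by the oblique rotation itself.

```python
# Approximate check of discriminant validity: correlate the factor scores
# estimated by the fitted EFA (see the earlier sketch). This approximates,
# but is not identical to, the rotation's factor correlation matrix.
import numpy as np

def factor_score_correlations(fa, items) -> np.ndarray:
    scores = fa.transform(items.dropna())   # respondents x factors
    corr = np.corrcoef(scores, rowvar=False)
    # Off-diagonal values below 0.7 suggest adequate discriminant validity.
    return corr
```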
For reliability, Cronbach's α computed for each factor indicated adequate internal consistency.
E. Key Trends
After validating the questionnaire, we report the key trends for awareness, attitude, and trust. Fig. 4 summarizes the results stratified by education, digital expertise, and age. Education has a positive influence on awareness, attitude, and trust; all dimensions increase with education, especially moving from secondary to tertiary education (the difference is statistically significant under a t-test).
Fig. 4. Effect of education, digital expertise, and age on AI awareness, attitude, and trust. Low digital expertise and high educational attainment are especially impactful, leading to a sizable decrease and increase, respectively, across all dimensions.
Among the three factors, attitude and trust have the highest correlation; Fig. 5 shows the average attitude and trust toward AI in each country.
Fig. 5. Average attitude and trust toward AI in each country. Respondents from France, Germany, Netherlands, and Sweden tend to have a more negative attitude and lower trust toward AI; respondents from Italy, Poland, Romania, and Spain are more approving and trusting of AI. Axes are scaled for ease of visualization.
Discussion
The collected responses reveal some contrasts that are worth in-depth analysis. These tensions may signal friction in current efforts towards trustworthy AI innovation and, in particular, call for reflection on the EU context, where the AI strategy aims to build an ecosystem of trust and the development of an AI regulation is underway. Note that these contrasts reflect implicit contradictions rather than openly expressed disagreements. Yet, pointing them out allows us to discuss critical social orientations that may constitute a barrier to the development of a trustworthy AI culture and, most importantly, to an inclusive approach to AI governance.
A. Implicit Contradictions
1) Knowledge About AI vs Approval of AI
The first remarkable result of this survey is that respondents' (self-assessed) knowledge of AI is much lower than their approval, which is, by contrast, quite high both for AI in general and for several domain applications. This tension is confirmed by other studies. For example, Eurostat, as part of the European Commission's Digital Decade programme, observed in 2021 a level of basic digital skills not yet aligned with the EU targets, which set the goal that at least 80% of citizens aged 16–74 should have basic digital skills by 2030. Based on the published results for 2021, only 54% of people in Europe aged 16–74 have (at least) basic overall digital skills. In particular, the Eurostat report shows that the Netherlands (79%) has the highest share of basic digital skills, followed by Sweden (67%), while Italy (46%), Poland (43%), and Romania (28%) have the lowest shares. Note that the gap between people's limited competence and their perceptions and expectations might be influenced by the narratives about the future progress of emerging technologies such as AI [59].
2) AI for the Environment vs the Environmental Impact of AI
AI approval often depends on the sector or context of application, such as education or healthcare. In this respect, the high acceptance rate of AI in law enforcement and the environment is rather striking. A plausible interpretation is that people consider these critical areas where the use of advanced technologies, like AI, could ensure greater progress than in other sectors. However, it is surprising that only a small portion of the respondents chose societal and environmental aspects as one of their ethical priorities. In other words, the intuition of the beneficial effect of AI on the important environmental challenges ahead does not seem to be matched by knowledge of the possible negative impacts that AI may have on society and the environment. This reading is in line with previous studies showing that people tend not to care about the environmental impact of AI solutions and pay more attention to transparency and explainability [60].
3) Perceived AI Impact vs Knowledge About EU Measures on Trustworthy AI
While the perceived impact of AI is high across the interviewed population, knowledge of the recent measures put forward by the EC to safeguard against the risks associated with the use of AI is significantly low. In particular, about 70% of the respondents claim no knowledge of two key recent actions by the EC, i.e., the ethics guidelines for trustworthy AI [4] and the proposal for an AI regulation [1], whereas most of them are familiar with the GDPR. Though this lack of knowledge can be partially explained by the novelty of these initiatives (April 2019 and April 2021, respectively), it seems that the public discussion of the impact of AI in the EU is still remote from citizens' experience. The lack of knowledge about the proposed EU regulation on AI is also somewhat at odds with respondents' policy preferences, which indicate the introduction of laws as a top priority.
4) Introduction of Laws by National Authorities vs Trust in National Governments
As anticipated, the introduction of laws by national authorities is acknowledged as (very) important by the largest portion of the respondents. However, national governments are the second least trusted entity among those that can be trusted a lot or somewhat. This opinion may reflect a larger discontent with democratic processes [61], challenged by global crises (e.g., climate, migration, and the economy) and more recently by the Covid-19 pandemic. The EU has taken a leading position in proposing global standards for the governance of AI and promoting a unified approach across all member states. However, the implementation of these policy and regulatory efforts might be undermined by the fragile relations between citizens and democratic institutions and by associated phenomena (e.g., anti-EU sentiments and populist movements).
5) AI Education as a Measure to Improve Citizens' Trust vs Interest in Engaging With AI Education
With respect to the role of education in fostering trust in AI, 71% of respondents are highly positive and express (strong) approval. Moreover, Fig. 4 highlights the importance of tertiary education for awareness, attitude, and trust in AI. The value of education and culture is also reflected in the choice of universities and research centers as the most trusted entities for ensuring the beneficial development of AI. To gain a better understanding of the value of education, we also asked participants whether they would be interested in attending a free course on AI to improve their knowledge (see the last question, Q16). Overall, 61% of participants answered positively, although, given their strong support for education-related initiatives, even higher percentages could have been expected. Moreover, only half of those who self-reported a low AI competence (Q1) said they would be interested in attending a free course (Q16). This limited interest in engaging with AI education might indicate a certain hesitancy to join the innovation process brought about by AI, in particular among individuals who feel less competent. A similar interpretation may also apply to the selection of inclusive design teams and consultation with stakeholders (Q12_6) as the least valued measure.
6) AI Across Countries: Attitude and Trust vs Socioeconomic Indicators
Considering the main trend across countries, we find two groups. Romania, Poland, Spain, and Italy tend to have a more positive attitude and higher trust towards AI than the second group of countries, comprising France, Germany, Netherlands, and Sweden. We interpret these results in light of important socioeconomic indicators. First, these groups reflect differences in wealth, since all countries in the second group have a higher GDP per capita than the countries in the first group [62]. Second, they also mirror differences in self-reported digital literacy: citizens of AI-optimist countries consider themselves less skilled in the use of digital technologies than citizens of more AI-skeptical countries [63]. We hypothesize that the public may view AI as an equalizing force, capable of lowering barriers to digital tools and increasing economic output. Countries where wealth and digital literacy are lower may therefore be more inclined to welcome AI applications, envisioning higher returns.
B. Potential Barriers and Recommendations
The combination of the implicit contradictions presented above suggests three interrelated social trends. These may affect the way in which AI innovations integrate into the fabric of social life and create a barrier to the human-centric approach that the EU wishes to achieve. For each trend, we discuss critical issues that policymakers could face and suggest a few recommendations. We recall that the European AI strategy pursues three fundamental goals: 1) boosting AI uptake across the economy by private and public sectors; 2) preparing for the socioeconomic changes brought by AI transformations; and 3) ensuring an appropriate ethical and legal framework to promote trustworthy AI [3].
1) Approval of a Hyped, But Poorly Known, Technology
The divide between knowledge and approval, regardless of its causes, calls for reflection on the meaning and implications of approving something that is not sufficiently known or understood. Over the last few years, we have witnessed an explosion of fictional and nonfictional AI-related communication and narratives. This large availability of information sources can contribute to creating big expectations, on the one hand, but can also increase confusion or even resistance and aversion [64], [65], on the other. For example, the authors of [66] analyzed trends in beliefs, interests, and sentiment in articles about AI over a 30-year period. Their results show a significant increase in content with a generally optimistic perspective since 2009, although certain topics regarding the ethical, technical, and social aspects of AI are also gaining relevance. Moreover, the language used to communicate is highly influential [59]; when mixed with fictional or utopian narratives, it can create confusion and lead the general public to overestimate the real capabilities and limitations of AI, increasing the disconnect from the real progress of the technology.
A manifest example of the risk of this poorly informed approval is the attraction created by the language model ChatGPT. It seems plausible that, in the imagination of a nonspecialist, an AI system of this kind, which writes poems, produces code, and answers complex questions in a credible manner, is likely to be credited with advanced cognitive abilities. The problem is that systems like ChatGPT "can fool us into thinking that they understand more than they do" [67], and this limitation is probably unknown to the majority of users. The language and terminology used are fundamental to avoid inaccurate and biased messages that create overhype and misinformation about the real capabilities, limitations, and risks of AI. Moreover, information needs to be clear and adapted to the audience. To improve media communication on AI and support more informed opinion, we recommend: 1) increasing the study of media communication on AI and of the social dynamics created by AI-related content; 2) fostering the training of science and tech journalists/communicators on AI applications, in particular on new AI breakthroughs; and 3) distributing high-quality information through institutional channels (e.g., curating the terminology and translating material into national languages).
2) Disconnect From Public AI Policies
In democratic societies, institutions play a crucial role in anticipating risks and taking preventive actions to protect citizens' rights when innovation processes take place. This is particularly important in times of global crisis or rapid change, and when parts of the population lack the expertise to face complex challenges [68]. In addition, policy and regulatory strategies can influence the complex interplay between trust and automation, e.g., by shaping power dynamics [69]. However, the development and implementation of public policies and laws are more effective when citizens participate in the public discussion and gain a better understanding of the issues at stake [70]. Indeed, increasing public awareness may affect people's values and priorities. Not surprisingly, privacy and data protection, the subject of the most well-known EU action among respondents (i.e., the GDPR), is one of their highest-rated ethical requirements. We recall that before the GDPR was released there were already an EU directive and corresponding national laws on data privacy. Moreover, the regulation was accompanied by a large information and awareness campaign, in addition to a two-year adoption period for companies (from 2016 to 2018).
The implicit contrasts observed in our results stress the need to support European citizens in gaining a greater understanding of the risks associated with AI, including harms that might be invisible to them. In particular, more effort should be made to raise awareness of AI's environmental costs in the public discourse, as suggested by [71]. A poor understanding of the societal harms associated with AI may contribute to exacerbating inequalities and eroding democratic processes. Moreover, if people have limited knowledge of the rules and initiatives introduced to protect them from potential AI-related risks, they will not know their rights or recognize when these are violated. Overall, reflecting on the gap between citizens and the EU policy efforts on AI stresses the importance of building a culture of trust on top of laws and policies. Educational and dissemination resources are needed to promote the latest key EU policy initiatives and to empower citizens to know and exercise their rights. Greater attention should also be directed to initiatives of inclusive governance to avoid the so-called paradox of participation [72], [73], i.e., inclusion processes failing to achieve structural reforms. To improve participation and make society a relevant stakeholder in AI policy-making, we recommend: 1) analyzing the effectiveness of EU initiatives and platforms aimed at stakeholders' participation, including the European AI Alliance [74]; 2) creating information material on AI-related risks and the associated EU measures targeted to different audiences (e.g., children and seniors); and 3) increasing local initiatives (including physical events) on AI and the EU efforts, aimed at reaching segments of society at the edge of current AI debates (e.g., because they lack technological or other cultural resources).
3) Poor Engagement With AI Education and Training
Education and lifelong learning play a central role in the European AI policy. These strategies aim at boosting economic growth but also at preparing society as a whole, to ensure that "none is left behind in the digital transformation" [3]. This preparation includes the introduction of AI from the early stages of education to increase the number of workers skilled in AI-related tasks, but also the promotion of conditions that make Europe able to attract and retain talent for AI research and industry. These steps connect with the need to preserve democracy and core values in a society increasingly shaped by AI, big data, and behavioral economics [75], [76]. In addition to the Digital Europe Programme (DIGITAL), the EU supports several projects to train AI experts and stimulate excellence (e.g., TAILOR, ELISE, and HumanE-AI-Net). European countries are also making efforts to improve AI education at the national level, as reported in their AI strategies [77] and the AI Watch investment dashboard; investments in talent, skills, and lifelong learning apparently represent about 60% of the total investments by private and public organizations [78].
While it is widely acknowledged that education and training are key to promoting citizens' participation, what such an education should look like is open to discussion. This problem regards the type of knowledge and skills that we will value in the future (see [79] for a systematic review of the key competencies of AI literacy). As the economy increasingly relies on AI, we expect that AI-related skills, such as algorithmic formalism [80], will take a greater role in education and culture. However, this change may favor critical processes such as the prioritization of algorithmic thinking over other forms of knowledge [81] and the subordination of education to business and economic interests. Note that the influence of economic drivers in shaping AI education could also damage the very field of AI, by increasing the role of techniques and approaches with higher economic and commercial impact while marginalizing others.
Another issue regards how to deliver AI education and training. Several resources supporting self-education on AI are available online, many offered for free [82]. However, if this becomes the default option, some people might be excluded from AI education, such as workers with a low level of formal education and digital skills; our survey shows that these groups have a more negative attitude towards AI and represent an important cohort for targeted initiatives. Moreover, our analysis suggests that the rollout of AI literacy resources may need to vary across countries depending on their levels of trust and attitude. For instance, countries that tend to be more optimistic and have lower levels of self-assessed competence, like Italy, Spain, and Romania, might require additional efforts to tackle AI risks in particular areas and establish realistic expectations. We should also consider to what extent people feel comfortable with the education offered, and whether they experience anxiety or social pressure. Further concerns regard courses offered by big tech companies and how these can influence the public discourse on AI as well as AI research [83].
To address the issues connected to the shaping of AI education, we recommend: 1) assessing to what extent people feel comfortable with existing educational resources on AI and identifying categories of the population that might be excluded; 2) increasing the integration of the humanities into computer science and AI curricula to help future technologists address broader sociotechnical challenges; and 3) reconsidering the incentives of research careers, now dictated by the dynamics and standards of individual disciplines, in light of multidisciplinary collaborations and the societal challenges raised by techno-science.
Conclusion
This article presents and discusses the results obtained from the PAICE questionnaire. The collected responses show that European citizens have low knowledge of AI capabilities in different applications and domains, as well as of the efforts aimed at building an ethical and regulatory framework for this technology. The analysis of our results suggests some tensions connected to broader social trends and invites reflection on aspects that may interfere with policy efforts towards trustworthy AI: 1) uninformed approval calls attention to the risks of misinformation and poor narratives about AI; 2) a disconnect from EU policy on AI highlights the need for high-quality communication campaigns on AI-related harms and current EU policy and regulatory efforts; and 3) poor engagement with AI education and training strategies points to the risks of growing social and cultural inequalities.
Through the analysis of the validity and reliability of the PAICE questionnaire, we assess the robustness of the theoretical structure identified by the working group during the design process and support the research community in reusing PAICE. The validation shows that, for a subset of items, PAICE can be used to measure awareness, attitude, and trust towards the AI ecosystem. In addition, PAICE proves useful in providing respondents with stimuli that make them reflect on their interaction with new technologies and their impact on society. At the end of the questionnaire, we repeated item Q3 investigating the perceived impact of AI and found that 62.2% of the respondents answered that AI has an impact on their daily life, an increase of about nine percentage points. In future work, we plan to extend the questionnaire to new countries and investigate country-specific differences against available data on the AI landscape [84]. Future work will also include multivariate analyses of awareness, attitude, and trust with respect to different levels of urbanization, education, and propensity to attend a course on AI across the surveyed countries.
NOTE
Open Access funding provided by Università Ca' Foscari Venezia within the CRUI CARE Agreement.
Appendix
In the following subsections, we provide links to the Appendix material of the present work.
Questionnaire
Full text of the questionnaire on the Perceptions of AI by the Citizens of Europe (PAICE), translated into English: https://github.com/EU-Survey/Material/blob/main/S1_quest.pdf
Responses
Table with all aggregated responses to Likert-scale, dichotomous, and multiresponse items and rankings: https://github.com/EU-Survey/Material/blob/main/S2_res.xlsx. For Likert-scale items, some descriptive statistics are also reported.
Digital skills
Table with aggregated responses related to digital skills, education, and population size, grouped by country: https://github.com/EU-Survey/Material/blob/main/S3_dem.xlsx
Statistics by Groups
Table with all responses to Likert-scale items aggregated by country/age group/gender with p-values: https://github.com/EU-Survey/Material/blob/main/S4_comp.xlsx. Responses are presented with respect to the dimension considered (awareness, attitude, and trust).
EFA: Table
Table with the results of the EFA: https://github.com/EU-Survey/Material/blob/main/S5_efa.xlsx. The EFA is based on the polychoric matrix and uses principal axis factoring with oblique rotation.
EFA: Figure
Plot of EFA: https://github.com/EU-Survey/Material/blob/main/S6_efa.pdf