Comparing the Ideation Quality of Humans With Generative Artificial Intelligence

Traditionally, ideating new product innovations is primarily the responsibility of marketers, engineers, and designers. However, a rapidly growing interest lies in leveraging generative artificial intelligence (AI) to brainstorm new product and service ideas. This study conducts a comparative analysis of ideas generated by human professionals and an AI system. The results of a blind expert evaluation show that AI-generated ideas score significantly higher in novelty and customer benefit, while their feasibility scores are similar to those of human ideas. Overall, AI-generated ideas comprise the majority of the top-performing ideas, while human-generated ideas scored lower than expected. The executive's emotional and cognitive reactions were measured during the evaluation to check for potential biases and showed no differences between the idea groups. These findings suggest that, under certain circumstances, companies can benefit from integrating generative AI into their traditional idea-generation processes.


I. INTRODUCTION
The innovation domain currently adopts large language models (LLMs) equipped with conversational interfaces, marking a transformative phase in which individuals and organizations increasingly turn to artificial intelligence (AI) systems as problem-solving agents. In particular, scholars and practitioners increasingly integrate generative AI into the ideation phase of the innovation process [1], [2].
Creativity is commonly defined as combining originality (novelty, uniqueness) and effectiveness (usefulness, fit, or appropriateness) [3], [4], [5]. A critical aspect of this definition is that "usefulness" is subjective. Thus, Amabile [6] defines creativity as "the production of novel, appropriate ideas in any realm of human activity, from science to the arts, to education, to business, and to everyday life"; the ideas have to be new and appropriate to the opportunity or problem presented [6, p. 40]. She defines innovation as "the successful implementation of creative ideas within an organization" [7, p. 126]. This article uses these definitions of creativity and innovation.
The innovation process involves identifying a problem, researching and gathering information, brainstorming ideas, prototyping and testing solutions, and implementing and scaling the solution. It is iterative and aligned with design thinking principles, emphasizing empathy, user experience, and understanding of user needs and context to drive innovative solutions [8].
Professionals with domain knowledge, such as in marketing, engineering, and design, are primarily responsible for creating innovative product and service ideas [9]. This traditional approach is valuable but may inadvertently limit idea diversity and leave blind spots unexplored. Additionally, human creativity, although essential to innovation, often demands substantial time and effort [10].
Businesses often face several challenges when it comes to generating new ideas through human ideation. These include the need to constantly innovate due to global competition, which shortens innovation cycle times. Additionally, there is a greater risk of encountering errors, setbacks, and failures due to increasing customer expectations [11], [12].
The introduction of AI has the potential to revolutionize innovation management, offering a range of new tools and methods for businesses [13]. Rather than simply enhancing existing products, AI facilitates the entire process of innovation [14]. With AI, companies can quickly gain insights and make informed decisions about where to focus their innovation efforts [15]. Additionally, natural language processing, a rapidly expanding nonhuman intermediary, can identify innovative opportunities and trends [16], while AI can manage and evaluate crowdsourced ideas for creativity and novelty [17]. Moreover, AI can augment and participate in design thinking processes, providing valuable insights and support to businesses within the innovation process [18].
With the introduction of generative AI, popular models, such as OpenAI's generative pretrained transformer (GPT) models and Google's pathways language model, together with their widely popular chat interfaces ChatGPT and Bard, respectively, are piloting a paradigm shift in idea generation. While AI has previously been employed predominantly for analytical innovation tasks [19], generative AI also offers potential use in the creative process through its ability to create content, such as text, images, video, and audio. As innovation techniques intentionally try to manipulate and divert thinking patterns to develop unconventional, out-of-the-box ideas [20], the probabilistic nature of LLMs appears conducive to this type of content generation that does not require fact checking [2].
Recent research has shed light on the multifaceted impact of AI and the resulting debates surrounding its use, opportunities, and challenges. For instance, Pan and Nishant [21] have emphasized AI's potential role in digital sustainability in achieving sustainable development goals, building on the systematic review of AI for sustainability research by Schoormann et al. [22]. Meanwhile, Dwivedi et al. [23] have highlighted the opportunities of AI, such as high intelligence, as well as challenges, such as risks of bias and deceptive intelligence. They propose that, although ChatGPT does not make decisions in matters pertaining to business and society, it serves as a valuable tool that can inspire fresh ideas among humans. By offering comprehensive summaries from different perspectives, AI tools can function as both a supporter and a challenger in the process of creating and ideating. The natural language capabilities of generative AI tools, such as ChatGPT, can substantially impact business and society, surpassing previous technologies [23]. Additionally, Eloundou et al. [24] explore the transformative potential of LLMs in making the labor market more efficient. Finally, Nishant et al. [25] have drawn attention to the nuanced interplay of substantive and rhetorical signals in market reactions, which can be positive or negative, in response to AI adoption announcements. Together, these studies underscore the complex dynamics of AI and emphasize the need for collaborative exploration and consideration of ethical, economic, and societal implications.
Bilgram and Laarmann [2] suggest that innovation methods, such as design thinking, must be updated to make use of the new opportunities brought about by generative AI. Furthermore, the advancements in AI have led Amabile [26] to define AI creativity as "the production of highly novel, yet appropriate, ideas, problem solutions, or other outputs by autonomous machines" [26, p. 351].
This study delves into the field of generative AI and innovation, seeking to address a fundamental research question through empirical investigation: How do new product ideas generated by human professionals compare to those generated by ChatGPT in terms of novelty, customer benefit, feasibility, and overall quality? This approach closely aligns with previous research that compared different idea-generation sources, such as professional versus crowdsourced ideas [27]. This study augments this body of knowledge by comparing ideas produced by professionals and a specific LLM, namely GPT-3.5 as used in ChatGPT.
To compare ideation performance, professionals and ChatGPT were assigned identical tasks of generating innovative ideas for a European supplier of highly specialized packaging solutions. A total of 95 ideas were generated, 43 by humans and 52 by ChatGPT. The company's managing director, an innovation expert with over two decades of experience across multiple industries, evaluated the ideas in a blind review process. This process ensured an unbiased assessment, as the evaluator remained unaware of the source of the ideas.
The results of this study provide new insights into integrating generative AI into creative processes, especially during the ideation phase. By shedding light on the potential of AI usage within the ideation phase, this study empowers organizations to draw on a greater variety and quality of ideas to augment their innovation strategies.
Specifically, this study shows that AI-generated ideas exhibit significantly greater novelty and customer benefit than human-generated ideas, whereas feasibility scores are similar across the two groups. On the composite overall quality score, AI-generated ideas ranked significantly higher than human-generated ideas.
This study offers several contributions to businesses striving to optimize their innovation strategies and processes.

II. BACKGROUND AND RELATED WORK
A. Company Innovation and Creativity

Historically, companies have relied on human creativity to fuel new product innovations [7], [28]. The innovation process is multifaceted, and the idea-generation task is a pivotal early development phase, attempting to bridge the gap between the problem domain and the desired solutions [7]. Professionals are primarily responsible for generating new ideas for innovation [29].
Scholars and researchers have developed various models to research the factors influencing creativity and innovation within companies [7], [9]. They have also developed methods and frameworks to enhance creativity and innovation, such as workshops with clearly defined phases [20].
New sources of innovation ideas were later identified: the users and consumers of products and services [29], [30]. This shift toward external sources of innovation gained further momentum with the introduction of crowdsourcing, crowdfunding, and the lead user method, which all harness the creativity of external contributors, such as end users [31], [32], [33], [34].
Open innovation is defined as "the use of purposive inflows and outflows of knowledge to accelerate internal innovation and expand the markets for external use of innovation" [35, p. 1]. This concept involves integrating external ideas and technologies within a firm's boundaries for its innovation process (outside-in aspect) and leveraging unused or underutilized internal knowledge beyond the firm's boundaries (inside-out aspect). West and Bogers [36] substantiate the rationale for using external knowledge sources effectively. They highlight the importance of acquiring and integrating external innovations into existing systems and processes.
A pivotal study [27] revealed that both professional and user-generated ideas could be innovative. Interestingly, user-generated ideas were significantly more novel and beneficial to customers, although they tended to be less feasible. A similar study corroborated these findings, showing that user-generated product ideas may also be more successful in the market, as they generate significantly higher revenues [37].
Despite these research findings on the superiority of crowdsourced or user-generated ideas, this is still not the standard procedure in most companies. In high-tech industries, innovation is highly dependent on the expertise of skilled professionals, and internal talent remains a primary source of innovation for companies [38], [39].
With the continuous evolution of generative AI, a novel source of innovation potential for companies emerges, raising the question of how these AI systems can contribute to ideation compared with human professionals.

B. Generative AI and Creativity
As the field of generative AI faces increasing interest and substantial developments, researchers have begun to assess the creative potential of AI systems. Research shows that, in 2021, innovation managers held a firm view of AI's role in idea generation: AI was considered less important, relative to humans, for idea generation than for other domains, such as data analytics. Furthermore, these managers believed that humans would continue outperforming AI in creativity over the next 5-10 years [19].
In 2023, researchers tested knowledge extraction and idea generation with the GPT-3 algorithm [1]. They focused on integrating AI directly into human ideation sessions through prompt-by-prompt engineering. Their findings identified limitations and suggested the potential benefits of hybrid intelligence in innovation teams, wherein AI systems and human professionals collaborate in synergy [1]. In contrast, this study has the LLM ideate independently.
LLMs represent a significant milestone in generative AI, with newer iterations, such as GPT-3.5 and GPT-4, showcasing exceptional capabilities. These models have demonstrated proficiency in various tasks, including passing academic and professional exams, such as the Uniform Bar Exam and the SAT Math [40]. The growing capabilities of LLMs raise intriguing questions about their creative potential in innovative ideation compared with human professionals.

III. STUDY METHOD
A. Overview

This section outlines the methodology used to address the primary research question of this study: How do the new product ideas generated by a company's professionals compare to those generated by ChatGPT in terms of their novelty, customer benefit, and feasibility? To thoroughly assess this question, a cross-group blind evaluation was conducted, where an executive expert in innovation rated each idea's novelty, customer benefit, and feasibility in a recorded session in randomized order. The ratings were recorded using Typeform and augmented with iMotions Online for webcam eye tracking and facial coding analysis to check for unconscious biases during the evaluation process. The ideas for assessment are described in Section III-B and the executive responsible for the evaluation in Section III-C. Subsequently, the analysis strategy is elaborated in Section III-D. Additionally, it is essential to note that informed consent was obtained from all participants in compliance with the general data protection regulation (GDPR) before data collection.
This study explores the comparative quality of human-generated ideas versus those generated by generative AI in a blind evaluation setting. To acquire human-generated ideas, the authors sought out a suitable company that met specific criteria: 1) an intention to innovate within their core business innovation segment; 2) the use of internal professionals for generating new product ideas; 3) a willingness to provide their human-generated ideas for this experiment to compare with AI-generated ideas in their specific segment; and 4) a commitment to a blind evaluation of all ideas by their executive innovation expert based on the three dimensions to assess idea quality [27]. Before the evaluation, the executive received comprehensive training on the evaluation criteria, including their definitions and correct application. Following established research procedures [27], the assessment of idea quality involved measuring three key variables: 1) novelty, evaluating the idea's distinctiveness in comparison with existing market norms; 2) customer benefit, assessing the idea's capacity to address underlying problems effectively; and 3) feasibility, determining the ease of transforming the idea into an actual commercial product.
All three variables were measured using a five-point Likert scale (1 = very low; 5 = very high).
The composite quality measure was incorporated to comprehensively compare idea quality between the two sample groups [27]. This overall quality dimension considers the combined score of novelty, customer benefit, and feasibility, replicating the formula by Poetz and Schreier [27] (quality = novelty × customer benefit × feasibility), enabling a holistic assessment of each idea's overall quality.
Furthermore, three binary variables were calculated to compare the best ideas. Ideas with a rating of 4-5 were categorized as "top ideas" for each dimension, while those with a rating of 3 or lower were grouped as "other ideas." This classification differentiated the top-performing ideas from the remaining ones per quality dimension.
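The composite measure and the binary split described above can be sketched in a few lines of Python; the function names are illustrative rather than taken from the study:

```python
# Composite quality score and top-idea split, replicating the scheme
# described above (quality = novelty x customer benefit x feasibility,
# each rated on a 1-5 Likert scale; ratings of 4-5 count as "top ideas").

def overall_quality(novelty: int, customer_benefit: int, feasibility: int) -> int:
    """Multiplicative composite quality score, ranging from 1 to 125."""
    for rating in (novelty, customer_benefit, feasibility):
        if not 1 <= rating <= 5:
            raise ValueError("Likert ratings must lie between 1 and 5")
    return novelty * customer_benefit * feasibility

def is_top_idea(rating: int) -> bool:
    """Binary split per dimension: ratings of 4 or 5 are 'top ideas'."""
    return rating >= 4
```

Because the composite is multiplicative, a single low dimension sharply caps the overall score: an idea rated 5 on novelty and 5 on customer benefit but 1 on feasibility scores only 25 of a possible 125.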

D. Idea Evaluation Setup
During the recorded sessions, the evaluator viewed the ideas in randomized order, ensuring unawareness of each idea's source. The authors employed the Typeform platform to facilitate this process, providing a seamless interface for reading and evaluating each idea.
Additionally, the executive's evaluations were recorded via a webcam and screen recordings to facilitate an ex-post analysis of emotions using facial coding (affective computing) and attention using eye tracking. These measures aim to detect cognitive or emotional biases during the evaluation process. Advancements in computing capabilities now enable the detection of human attention and emotions through cognitive algorithms [41].
Affective computing technologies encompass the analysis of facial expressions utilizing the facial action coding system based on the emotion model by the authors in [42] and [43]. This study employs the AFFECTIVA algorithm for facial coding and the

IV. ANALYSIS AND RESULTS
A. Evaluation Findings

First, the Mann-Whitney U test results in Table 1 show that human ideas scored significantly lower in novelty (M = 2.65) than those generated by ChatGPT (M = 3.42; U = 740, and p = 0.0039).
Second, human ideas scored significantly lower on customer benefit (M = 3.07) than ideas generated by ChatGPT (M = 3.60; U = 741, and p = 0.0028). Third, no significant difference exists between the human ideas' rated feasibility (M = 2.49) and that of ChatGPT ideas (M = 2.33; U = 1250, and p = 0.2922). The relatively low means in comparison with novelty and customer benefit indicate that idea realization is a bottleneck for both groups.
Finally, the composite measure of quality shows that human ideas' overall ratings (M = 18.95) are significantly lower than ChatGPT ideas' overall ratings (M = 26.75; U = 734, and p = 0.0039).
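As a hedged illustration of the group comparisons reported in Table 1, the following sketch runs SciPy's Mann-Whitney U test on two hypothetical rating vectors; the study's raw per-idea ratings are not published here, so the numbers below are stand-ins, not the study's data:

```python
# Mann-Whitney U test comparing two independent groups of Likert ratings.
# The rating vectors are hypothetical examples, not the study's data.
from scipy.stats import mannwhitneyu

human_novelty   = [2, 3, 2, 4, 1, 3, 2, 3, 3, 2]  # hypothetical human ratings
chatgpt_novelty = [4, 3, 5, 4, 3, 4, 2, 5, 3, 4]  # hypothetical ChatGPT ratings

# Two-sided test, as in the reported comparisons
u_stat, p_value = mannwhitneyu(human_novelty, chatgpt_novelty,
                               alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")
```

A nonparametric test is appropriate here because, as the note to Table 1 states, the rating distributions are not normal.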
In a typical corporate innovation funnel, the advanced stages of the innovation process focus on the highest performing ideas, which undergo subsequent in-depth evaluation and potential testing. Therefore, in the next step of the analysis, ideas are split into two groups: "top ideas," which includes all ideas scoring four or five on the five-point Likert scale per respective dimension, and "other ideas" (see Tables 2-4).
First, 36 of the 95 ideas (38%) are considered highly novel compared with existing market norms. Of these, ChatGPT generated 25, while company professionals contributed only 11. Table 2 portrays the results of a chi-squared test (χ²(1) = 4.15 and p = 0.0416), showing that significantly more ChatGPT ideas and fewer human ideas were rated four or five than expected in terms of novelty.
Second, 40 of the 95 ideas (42%) are categorized as highly customer beneficial. Of these, more were again generated by ChatGPT, with 29 ideas compared to 11. Table 3 portrays the results of a chi-squared test (χ²(1) = 7.6 and p = 0.0058), showing that ChatGPT scored significantly higher than expected, while human ideas scored worse than expected in terms of customer benefit.
Third, 11 of the 95 ideas (only 12% of all) qualified as feasible, which is very low compared with the other two dimensions. Humans are responsible for six of these ideas and ChatGPT for five, with no significant differences between the observed and expected frequencies (χ²(1) = 0.11 and p = 0.7371) in Table 4.
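Because the chi-squared tests above operate on frequencies alone, they can be reproduced from the reported counts. The sketch below rebuilds the 2×2 table behind Table 2 (novelty): 25 of 52 ChatGPT ideas and 11 of 43 human ideas were rated four or five. SciPy's default Yates continuity correction reproduces the reported values:

```python
# Chi-squared test of top-idea frequency (novelty), rebuilt from the
# counts reported in the text: 25/52 ChatGPT vs. 11/43 human top ideas.
from scipy.stats import chi2_contingency

#                  top  other
table = [[25, 52 - 25],   # ChatGPT ideas
         [11, 43 - 11]]   # human ideas

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")   # chi2(1) = 4.15, p = 0.0416
```

The same call with the counts from Tables 3 and 4 reproduces the customer benefit and feasibility tests.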
Finally, it is worth emphasizing that no idea achieved a "top" rating in all three dimensions, which aligns with the observed negative correlation of the feasibility dimension with novelty and customer benefit.

B. Correlation Analysis

Table 5 presents the correlations of the quality dimensions rated in the evaluation process for human-generated ideas and Table 6 for ChatGPT-generated ideas. The correlation tables compare the relationships between the dependent variables of the human idea group and the ChatGPT idea group.
The analyses indicate that novelty in both human and ChatGPT ideas positively correlates with customer benefit (p < 0.05) and negatively with feasibility (p < 0.05). Notably, the negative correlation between customer benefit and feasibility is significant for ChatGPT ideas (p < 0.001). In contrast, human ideas show a weaker negative relationship between customer benefit and feasibility (p > 0.05). This suggests that ChatGPT ideas were potentially more "out-of-the-box" and less conventional than human ideas.
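The study does not report which correlation coefficient was used, so the following sketch assumes Spearman's rank correlation, a common choice for ordinal Likert data, applied to hypothetical per-idea ratings for one group:

```python
# Pairwise rank correlations between quality dimensions for one idea
# group. The rating vectors are hypothetical; they merely illustrate the
# reported pattern (novelty-benefit positive, novelty-feasibility negative).
from scipy.stats import spearmanr

novelty     = [4, 5, 3, 4, 5, 2, 4, 3]
benefit     = [4, 4, 3, 5, 4, 2, 3, 3]
feasibility = [2, 1, 3, 2, 1, 4, 2, 3]

rho_nb, p_nb = spearmanr(novelty, benefit)      # positive in this example
rho_nf, p_nf = spearmanr(novelty, feasibility)  # negative in this example
print(f"novelty-benefit rho = {rho_nb:.2f}, novelty-feasibility rho = {rho_nf:.2f}")
```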

C. Emotion and Eye-Tracking Data

Recognizing the potential for unconscious biases to affect the expert's evaluations, the authors employed several measures to assess underlying emotional and cognitive reactions.
Facial coding was used to analyze the evaluator's valence and arousal reactions for each idea, and eye tracking was employed to compare attention between idea areas of interest (AOIs) and idea rating AOIs. This addressed four key biases: 1) attentional bias [45], where longer or repeated gazes at certain ideas suggest subconscious preferences; 2) emotion-based decision-making bias [46], highlighting the impact of emotions on decision making; 3) confirmation bias [47], comparing attention to idea information between groups; and 4) social desirability bias [48], comparing attention to the rating Likert scales between groups.
The evaluator's attention allocation is consistently distributed, regardless of whether humans or AI generated the ideas.
Additionally, the results of the facial coding analysis reveal no statistically significant differences in emotional responses, such as valence or arousal, when the evaluator assessed ideas from the two groups. These findings suggest no noticeable emotional or cognitive differences related to the source of the idea during the evaluation process. This supports the notion that the evaluation was conducted impartially and underscores the robustness of this study's evaluation procedures.

V. DISCUSSION
A. Summary

Who can generate better ideas for new innovative products: the professionals employed for the task or an AI system based on engineered prompts? This research question was approached by comparing the novelty, customer benefit, feasibility, and overall quality of ideas generated by professionals in a real-world scenario and ChatGPT-generated ideas.
The findings of this study demonstrate that ChatGPT can generate significantly better ideas in terms of novelty and customer benefit. However, the bottleneck is the relatively low feasibility score for both groups. It is important to recognize that the feasibility of very early ideas, which do not yet provide detailed solution information, might be unclear. Humans may struggle to demonstrate superiority in integrating feasibility aspects into these initial ideas, as the ideas offer only limited solution information that could be evaluated for feasibility. Also, generic LLMs may be fine-tuned to better reflect specific market, product, or company requirements. Such specialized LLMs may be capable of generating more feasible ideas that consider the technologies a company owns, the competence it has accrued, or the brand under which the product is supposed to be launched.
The underlying reason for ChatGPT's superior performance may be the vast informational database that the AI tool draws upon to generate ideas quickly. Furthermore, ChatGPT and similar LLMs can use their growing intelligence to foster ideas that humans have not yet thought about. Finally, ChatGPT does not fall victim to cognitive bias when ideating but may risk ethical bias based on potentially biased training data [23].
While this study provides initial proof that AI-based idea generation may already outrival human idea generation regarding overall idea quality, the authors also found the AI-based ideation process to be more efficient. The AI idea-generation process required 30 min based on the simple prompting strategy. Additionally, the authors spent 240 min screening, adapting, and selecting AI-generated ideas based on the predetermined criteria.
In comparison, even brief ideation sessions consume at least half a day with a group of five to ten managers and require professional preparation. The facilitation and postprocessing of such a workshop are cost intensive.
Further findings demonstrate that ChatGPT can generate significantly more "top ideas" in terms of novelty and customer benefit than expected (p < 0.05). These results not only challenge the existing literature on the potential of generative AI in the future [19] but also mark a shift in the perceived scope of AI's applicability.
In the pregenerative AI era, AI was primarily associated with analytical tasks. However, the introduction of generative AI shows that AI's capabilities extend into the creative domain, broadening its potential fields of application.
However, a bottleneck exists in the realization of these ideas, as indicated by the lower than anticipated feasibility scores. This emphasizes company professionals' irreplaceable role in assessing the possibilities and boundaries within the available resources.
A strategic approach may be to bolster the ideation phases through collaboration with generative AI to efficiently produce various exceptionally intriguing ideas. Subsequently, in a secondary review stage, company professionals can analyze the generated ideas, focusing on refining the top-performing concepts. This iterative process aims to uncover common ground, fostering groundbreaking advancements in novel, customer-centric, feasible, and impactful innovations.
A hybrid approach is crucial throughout the entire innovation process, not just in the early idea phase. This study does not cover the evaluation and further development of ideas. A possible approach is to employ AI as a stimulus and inspiration for ideation, followed by human review and elaboration for more effective innovation management.

B. Theoretical Contributions
This study offers a systematic and data-driven comparison between professional and AI-generated ideas, comprehensively assessing their novelty, customer benefit, feasibility, and overall quality. This contributes to the academic understanding of AI's potential role in innovation, building on the study by Poetz and Schreier [27], which compared professional and user-generated ideas.
Furthermore, this study analyzes the strengths and weaknesses of AI-generated and human-generated ideas, shedding light on how AI can complement ideation processes and enhance innovation outcomes. This analysis adds depth to the academic discourse on AI and innovation.
Also, by comparing the strengths and weaknesses of both idea generator groups, this study highlights the areas where human creativity and judgment remain essential. It also emphasizes the value of collaborative human-AI approaches in innovation processes.
Finally, this research serves as a blueprint for systematically evaluating the potential of AI in specific ideation tasks within the innovation process.
It offers a framework for future research and experimentation in this transformative field.

C. Practical Implications
This study contributes valuable insights for businesses aiming to optimize their innovation strategies and processes.
1) The systematic quantitative evaluation compares professional and AI-generated ideas, comprehensively assessing their novelty, customer benefit, feasibility, and overall quality. This enables organizations to make data-driven decisions about future idea selection.

VI. LIMITATIONS AND FUTURE RESEARCH
This study has identified several potential internal and external factors that may impact the validity of the research findings. As the study is based on the analysis of correlational data, it is essential to refrain from making causal inferences about the observed relationships.
Furthermore,the evaluation of ideas was conducted by a single executive.
To enhance the robustness of future research, it is recommended that multiple evaluators be involved. This approach ensures consistent ratings, supports interrater reliability, and minimizes score variance. Incorporating a broader range of evaluators can improve the reliability and validity of the ratings.
While this study focuses on assessing professional ideas relative to AI-generated ideas, it is noteworthy that prior research, as demonstrated by Poetz and Schreier [27], has highlighted the capability of user-generated ideas to compete with professional ideas. Future research may explore the comparative analysis between user-generated and AI-generated ideas. Furthermore, a three-way comparison may be of research interest, comparing user-generated, professional-generated, and AI-generated ideas. This would further advance the understanding of idea-generation dynamics.
It is important to note that this research setting pertains explicitly to a "first-round" ideation scenario. This scenario involves initiating a creative task to stimulate group brainstorming but does not encompass a full-fledged ideation process, as typically undertaken in creative workshops or design thinking approaches. Here, AI serves as a potent "front-loading" tool. It rapidly generates numerous ideas with minimal time and resource investment, allowing humans to expedite their creative processes. Future research may delve deeper into how AI can assist in subsequent stages of the creative process, including clustering, challenging, enriching, and combining ideas. Furthermore, the integration can involve both ideation and iterative cycles of prototyping and improving in a design thinking approach. This exploration can provide a more comprehensive understanding of AI's role in innovation.
The authors encourage further research into the tasks that generative AI may be able to execute besides idea generation. Specifically, the study of AI for idea evaluation, without relying on primary data collected through market research, has the potential to transform user-centric innovation approaches. As suggested by Bilgram and Laarmann [2], generative AI may play a pivotal role in bridging the gap between idea generation and prototyping, particularly in digital innovation. Text-to-code generators, for instance, can enable teams without software engineering skills to rapidly develop initial prototypes, opening new avenues for innovation and efficiency.
Furthermore, besides the quality perspective, researchers could investigate the cost dimension of AI-assisted ideation by comparing the costs that accrue for conventional ideation formats, such as workshops or design sprints, with the efforts of AI-based processes. The necessary time and financial investment could serve as relevant dimensions for comparison. Thus, additional dimensions, such as cost, efficiency, and time effectiveness, may be included in future studies for rating ideas.
Finally, expanding on the quantitative examination of the ideation outcomes of generative AI, the authors highlight the importance of human ideation formats from an interpersonal, social perspective. While the quality of ideas is certainly an important outcome dimension of ideation sessions, the joint development of ideas in a group of managers and the intensive discussions over an extended period may help avoid the not-invented-here syndrome [50] and build momentum that helps in the following steps of the innovation process. Future research could investigate these "soft" benefits of human ideation and explore how AI-generated ideas, lacking the human origination process, perform in the subsequent phases of the innovation process.

ACKNOWLEDGMENT
The authors would like to thank the collaborating company for their willingness to participate in the ideation phase of this study and the participating executive for evaluating the ideas in a real-world scenario.

Table 1. Average Novelty, Customer Benefit, and Feasibility of Professional Human Versus ChatGPT Ideas
* Mann-Whitney U tests were conducted because the dependent variables have no normal distribution.
** Three-way dimension (Novelty × Customer Benefit × Feasibility).

Table 2. Top Ideas Versus the Rest in Terms of Novelty
* Top ideas scored four or five per respective dimension.

Table 3. Top Ideas Versus the Rest in Terms of Customer Benefit
* Top ideas scored four or five per respective dimension.

Table 4. Top Ideas Versus the Rest in Terms of Feasibility
* Top ideas scored four or five per respective dimension.

Table 6. Correlation of ChatGPT Idea Ratings
N = 52; all correlations are significant at p < 0.001 level.
This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by (Name of Review Board or Committee) (IF PROVIDED) under Application No. xx, and performed in line with the (Name of Specific Declaration).