Researchers are increasingly focused on understanding the impact of artificial intelligence within financial innovation, specifically how transparency around AI use affects investor behaviour. Ning Wang and Chen Liang, both from the University of Connecticut, investigated the strategic implications of disclosing AI involvement in crowdfunding campaigns. Their work examines how mandatory disclosure policies, such as the one implemented by Kickstarter, influence funding success, and how the way AI involvement is communicated, through substantive details and rhetorical approaches, can either worsen or alleviate negative perceptions. The research is significant because it documents a substantial decline in funds raised and backer numbers following mandatory AI disclosure policies.
Crowdfunding ventures openly using artificial intelligence face a surprising penalty from potential backers. Mandatory disclosure of AI involvement actually reduces funding and investor numbers, according to new evidence. However, careful communication about how AI is used can help creators regain trust and secure necessary investment. Businesses are increasingly integrating artificial intelligence (AI) into their operations, intensifying concerns regarding potential risks and prompting stricter disclosure requirements.
Legislative initiatives such as California’s SB 1047 and the EU AI Act mandate transparency in AI deployment, aiming to improve oversight and ensure responsible AI use. Concurrent research investigates how AI transparency shapes perceptions and decisions, highlighting effects on trust, consumer engagement, and employee performance. These dynamics are particularly important in crowdfunding, where traditional informational anchors are absent, making creator disclosure the primary channel for assessing competence and project quality.
When AI is involved, disclosure becomes especially consequential, requiring backers to interpret AI’s contribution and its implications for human involvement. Consequently, how AI usage is framed can lead to markedly different interpretations, making crowdfunding a natural setting for examining how disclosure strategies shape economic behaviour. Despite growing regulatory and scholarly attention, existing research focuses primarily on the binary decision of whether or not to disclose AI.
Much less is known about the potential role of AI disclosure strategies in shaping consequential stakeholder decisions, particularly in high-stakes economic contexts like crowdfunding. Understanding how specific strategies encourage or deter investment is therefore essential for both theory and practice. Given that both the content and the style of disclosure may shape backer inferences, the researchers examine two complementary forms of signalling: substantive signals and rhetorical signals.
Substantive signals convey fact-based information about capabilities or the project development process, offering cues about expected product quality. Applied to AI disclosures, these signals may be particularly impactful.
AI disclosure negatively impacts crowdfunding success but rhetorical strategy moderates effects
Mandatory disclosure of artificial intelligence (AI) involvement in crowdfunding campaigns reduces funds raised by 39.8% and backer counts by 23.9%. This substantial decline highlights initial investor scepticism towards projects utilising AI technologies. Yet the impact of disclosure isn’t uniform; it varies depending on how creators communicate about their AI use.
Detailed analysis of 1,220 Kickstarter campaigns revealed these figures, establishing a clear negative effect of mandated AI disclosure on campaign success. Further investigation showed that greater AI involvement amplifies the penalty, suggesting investors are more wary when AI is central to a project’s creation. Conversely, high authenticity and high explicitness in disclosures mitigate the adverse outcomes.
Specifically, campaigns scoring high on these rhetorical signals, demonstrating genuine creator credibility and clear technical explanations, experienced less decline in funding. However, an excessively positive emotional tone within the disclosure actually worsened results, indicating that overly enthusiastic messaging can raise suspicion. The research identified two key mechanisms driving these effects: perceived creator competence and concerns about ‘AI washing’.
Substantive signals, such as the extent of AI integration, primarily influence judgements of the creator’s skill. Rhetorical signals, meanwhile, affect both competence perceptions and AI-washing concerns, operating through one mechanism or both. GPT-4o-mini was used to classify AI involvement, assigning a value of 1 if AI was central to the project’s output and a value of 0 if it was merely supportive.
Classifying campaigns as AI-related involved a three-part keyword dictionary, capturing 97% overlap with self-reported AI adoption, ensuring a transparent and reproducible methodology. LogTotalPledge, the natural logarithm of total pledges, and LogTotalBackers, the natural logarithm of total backers, served as primary dependent variables, measuring funding performance and campaign reach respectively. These findings offer practical guidance for entrepreneurs, platforms, and policymakers navigating the complexities of AI transparency in investment contexts.
Identifying AI projects and assessing Kickstarter’s disclosure policy impact
A detailed analysis of Kickstarter campaigns underpinned the work, beginning with a systematic identification of projects incorporating artificial intelligence. Researchers established a keyword codebook, encompassing terms like “Mistral” and other contemporary generative-AI tools, to define AI-related campaigns. This codebook captured approximately 97% of campaigns explicitly disclosing AI usage, demonstrating close alignment between the operational definition and self-reported adoption.
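As a rough illustration of this classification step, the sketch below flags AI-related campaigns using a small, hypothetical excerpt of such a codebook; the keywords shown are illustrative assumptions, not the paper’s actual dictionary:

```python
import re

# Hypothetical excerpt of a keyword codebook; the study's actual
# three-part dictionary is substantially larger.
AI_KEYWORDS = [
    "artificial intelligence", "machine learning", "generative ai",
    "gpt", "chatgpt", "stable diffusion", "midjourney", "mistral",
]

# One case-insensitive pattern with word boundaries, so a short term
# like "gpt" does not match inside an unrelated word.
AI_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in AI_KEYWORDS) + r")\b",
    flags=re.IGNORECASE,
)

def is_ai_related(campaign_text: str) -> bool:
    """Flag a campaign as AI-related if any codebook term appears."""
    return AI_PATTERN.search(campaign_text) is not None

print(is_ai_related("Our card game uses Mistral to write item lore."))  # True
```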
All keywords were meticulously documented in Appendix A for transparency and reproducibility of the AI-related campaign classification. To assess the impact of a mandatory AI disclosure policy implemented by Kickstarter, a difference-in-differences (DID) approach was central to the research design. This involved comparing crowdfunding performance before and after the policy change, focusing on projects disclosing AI involvement.
LogTotalPledge, the natural logarithm of total funds pledged, and LogTotalBackers, the natural logarithm of the total number of backers, served as the primary dependent variables, mirroring established practice in the field. Beyond simply identifying AI use, the study examined how creators disclosed this information. Four moderating variables (AI involvement, explicitness, authenticity, and emotional tone) were constructed to capture variations in disclosure strategy.
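Before turning to how those disclosure dimensions were measured, here is a minimal sketch of the difference-in-differences estimation on synthetic data; the column names, the log1p transform, and the bare two-way specification are illustrative assumptions rather than the paper’s exact model, which would also include controls:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; in the study, each row would be one campaign.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "TotalPledge": rng.lognormal(8, 1, n),   # hypothetical raw pledges
    "TotalBackers": rng.poisson(50, n),      # hypothetical backer counts
    "AIDisclosed": rng.integers(0, 2, n),    # 1 = campaign disclosed AI use
    "PostPolicy": rng.integers(0, 2, n),     # 1 = launched after the mandate
})

# Dependent variables as natural logs (log1p guards against zeros).
df["LogTotalPledge"] = np.log1p(df["TotalPledge"])
df["LogTotalBackers"] = np.log1p(df["TotalBackers"])

# Difference-in-differences: the interaction term captures the change in
# AI-disclosing campaigns' outcomes after the policy, relative to controls.
model = smf.ols("LogTotalPledge ~ AIDisclosed * PostPolicy", data=df).fit()
beta = model.params["AIDisclosed:PostPolicy"]

# A log-point coefficient converts to a percentage effect via exp(beta) - 1;
# the reported 39.8% decline corresponds to beta = ln(0.602), about -0.507.
print(f"Implied percentage change: {np.exp(beta) - 1:.1%}")
```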
These four disclosure dimensions were assessed using GPT-4o-mini, a large language model chosen for its ability to perform complex textual analysis with accuracy comparable to human coders, while offering scalability and cost efficiency. The temperature parameter was set to zero, ensuring focused and deterministic classifications. Specifically, GPT-4o-mini classified the extent of AI involvement, determining whether AI was central to the project’s output or merely a supporting tool.
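A minimal sketch of what such a classification call could look like, using the OpenAI Python client; the prompt wording is hypothetical, since the paper’s exact instructions are not reproduced here:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; the study's actual instructions may differ.
SYSTEM_PROMPT = (
    "You will see a crowdfunding campaign's AI disclosure. Reply with a "
    "single character: 1 if AI is central to producing the project's "
    "output, or 0 if AI is merely a supporting tool."
)

def classify_ai_involvement(disclosure: str) -> int:
    """Return 1 for central AI involvement, 0 for supportive use."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # as in the study: focused, deterministic output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": disclosure},
        ],
    )
    return int(response.choices[0].message.content.strip())

print(classify_ai_involvement(
    "Every illustration in this deck is generated by our fine-tuned model."
))
```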
HighExplicitness and HighAuthenticity were determined by scoring disclosures against the sample median, assessing clarity and trustworthiness respectively. Finally, HighPosEmotion was quantified using the Valence Aware Dictionary and sEntiment Reasoner (VADER) package, identifying disclosures with excessively positive emotional tones. All classifications relied on text-based assessment of AI disclosures, with detailed definitions and summary statistics presented in Table 1.
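A short sketch of this scoring step follows; the median-split cutoff shown for HighPosEmotion is an assumption mirroring the other binary variables, not necessarily the paper’s exact rule:

```python
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Two invented example disclosures for illustration.
disclosures = pd.Series([
    "We used AI to draft concept art, then refined every piece by hand.",
    "Our revolutionary AI makes this the most amazing project ever!!!",
])

# VADER's compound score runs from -1 (most negative) to +1 (most positive).
scores = disclosures.map(lambda t: analyzer.polarity_scores(t)["compound"])

# Hypothetical operationalisation: flag disclosures above the sample median,
# mirroring the median splits used for explicitness and authenticity.
high_pos_emotion = (scores > scores.median()).astype(int)
print(pd.DataFrame({"CompoundScore": scores, "HighPosEmotion": high_pos_emotion}))
```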
AI disclosure in crowdfunding impacts backing through perceived creator signals
Researchers have uncovered a counterintuitive truth about artificial intelligence and crowdfunding: simply admitting a project uses AI can actively deter backers. A near-40% drop in funds raised is a substantial penalty, suggesting deep-seated anxieties are at play beyond simple technological scepticism. This isn’t about people fearing AI itself, but rather a complex reaction to what AI involvement signals about the project and its creators.
For years, platforms have pushed for greater transparency, assuming honesty builds trust, yet this work demonstrates that full disclosure can backfire when dealing with a technology still viewed with suspicion by many. But the story isn’t one of inevitable failure for AI-backed projects. Instead, the research shows that how you disclose matters as much as the disclosure itself.
Authenticity and clear explanation appear to soften the blow, while attempts to generate excitement through overly positive language actually worsen outcomes. This suggests backers aren’t necessarily opposed to AI, but are sensitive to perceived attempts at manipulation or a lack of genuine human input. Once creators understand this, they can begin to address the underlying concerns.
Still, significant questions remain. The study focuses on Kickstarter, a platform with a specific user base and disclosure policy; results may vary elsewhere. Moreover, the long-term effects of AI disclosure are unknown: will acceptance grow as the technology becomes more commonplace, or will these initial negative reactions persist? Beyond crowdfunding, these findings have implications for any context where AI is used to augment human effort, from content creation to customer service. Future work should explore whether similar dynamics apply in these areas, and investigate how best to build trust in a world increasingly shaped by algorithms.
👉 More information
🗞 How to Disclose? Strategic AI Disclosure in Crowdfunding
🧠 ArXiv: https://arxiv.org/abs/2602.15698
