Olga Kononykhina
2021-07-03
EU AI Act: Can Regulation and Innovation Coexist?
Are we over-regulating AI in Europe? Could this stifle startups or help create a more ethical and trustworthy AI ecosystem?
Key insights from research (before and after adoption):
A 2022 study (AI Act Impact Survey) of ~100 European startups found that 51% anticipated a slowdown in AI development, with 12% considering relocating or halting AI work, while 16% expected a positive impact of the AI Act on their business.
A September 2024 study (Transatlantic Privacy Perceptions (TAPP)) of 66 privacy experts found that 18% of European experts believed the EU AI Act enables innovation, 30% saw it as a barrier, and 36% expected no impact on innovation.
Why might the AI Act hinder innovation? TAPP experts say:
🟥 High Compliance Costs: Meeting requirements like conformity assessments is expensive.
🟥 Slower AI Development: Resources are diverted to compliance instead of innovation.
🟥 SME Challenges: Small companies face disproportionate burdens, from reporting rules to accessing training data.
🟥 Favoring Big Players: Established companies can better navigate and influence regulations, leaving newcomers at a disadvantage.
🟥 Legal Ambiguity: Vague rules around data use and copyright create risks and deter investment.
Why might the AI Act enable innovation? TAPP experts say:
🟩 Better Governance: Rules can help tackle deepfakes, mass surveillance, and election interference, building public trust in AI safety.
🟩 Stronger Data Practices: Encourages high-quality data governance to meet compliance standards.
🟩 Protection of Rights: Promotes ethical AI by safeguarding individual rights and setting industry standards.
🟩 Legislative Clarity: Provides clear enforcement mechanisms and resists undue influence from large corporations.
View the original post on LinkedIn: Link