Description
I need you to find research on the negative consequences of AI for User Experience and Engagement. You can only use research from these sources: McKinsey, Gartner, Fast Company, or BCG.
Consequences of AI for User Experience and Engagement
(CONTENT CREATORS) – alternative platforms, user attrition, missing the target audience
(COMPANY) – user attrition, missed opportunities, competitive disadvantage, user satisfaction, revenue
(CONSUMERS) – disinterest, better platforms elsewhere
Here's an example that is already done:
Consequences of Data Privacy
(CONTENT CREATORS) – discrimination, content sent to the wrong audience, out of a job, issues with deepfakes
(COMPANY) – legal ramifications, financial and operational risks plus costs, negative public perception
(CONSUMERS) – discrimination, don’t get to experience authenticity
https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
Inaccuracy and cybersecurity are the most relevant gen-AI risks
https://www.mckinsey.com/featured-insights/mckinsey-explainers/whats-the-future-of-generative-ai-an-early-view-in-15-charts
Charts 2, 4, 13: Inaccuracy, cybersecurity, and intellectual property infringement are the most cited risks of generative AI adoption
“For one thing, gen AI has been known to produce content that’s biased, factually wrong, or illegally scraped from a copyrighted source”
Will AI have access to user data, and how will it be used?
Have to ensure that it’s following regulations like the GDPR or face legal and financial consequences
deepfakes
Platforms like HeyGen can leave content creators out of a job
Cybersecurity: 53 percent of organizations acknowledge cybersecurity as a gen AI-related risk, but only 38 percent are working to mitigate that risk
AI and privacy – tie this to the interview where Kaitlin said privacy is up to the consumer
The discussion of AI in the context of the privacy debate often brings up the limitations and failures of AI systems, such as predictive policing that could disproportionately affect minorities or Amazon’s failed experiment with a hiring algorithm that replicated the company’s existing disproportionately male workforce
https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence
Data difficulties: “it’s easy to fall prey to pitfalls such as inadvertently using or revealing sensitive information hidden among anonymized data”
Security snags: “potential for fraudsters to exploit seemingly nonsensitive marketing, health, and financial data that companies collect to fuel AI systems”
Models misbehaving: “potential for AI models to discriminate unintentionally against protected classes and other groups by weaving together zip code and income data to create targeted offerings”
Interaction issues: “Behind the scenes, in the data-analytics organization, scripting errors, lapses in data management, and misjudgments in model-training data easily can compromise fairness, privacy, security, and compliance”