The rise of Generative AI
Generative AI had its breakout moment with OpenAI’s launch of ChatGPT in late 2022. Within five days, it reached 1 million users, a feat that took Netflix 3.5 years. By early 2023, GPT-4 was on the scene, outperforming most humans on standardized exams and surfacing in almost every industry blog, boardroom, and LinkedIn thread. Roy Steunebrink explains:
“At first, it felt like a digital party trick, but when it gained access to real-time data and integration capabilities, we realized we weren’t just looking at a tool. We were looking at a paradigm shift.”
Roy Steunebrink
Head of Development and Implementation, Friday
Accessibility played a huge role in that shift. By putting a powerful AI model in a simple chat interface, ChatGPT democratized experimentation. Suddenly, anyone, from marketers and managers to junior developers, could prototype ideas with AI. But the same ease that made it popular also made it misleading.
AI implementation challenges and how to avoid them
A major reason why so many AI projects stalled, and still do, is the assumption that AI is plug-and-play. Spoiler: it’s not. In fact, most AI projects fail to deliver business value largely due to poor data practices and a lack of clearly defined use cases. Roy argues that the real problem isn’t AI; it’s the lack of structure, vision, and readiness in most organizations.
To avoid this fate, companies ideally need four foundational elements:
1. Essential AI skills
The success of AI doesn’t hinge solely on the tools you use, but on the people who know how to use them. Companies must either develop AI capabilities in-house or collaborate with trusted partners to bridge gaps in expertise.
Prompt engineering, once a fringe concept, is now a core competency. It requires understanding how to interact effectively with large language models (LLMs) to achieve consistent, accurate, and safe outputs. Similarly, agentic system design, where AI agents autonomously complete tasks across multiple applications, demands fluency in orchestration frameworks, tool integrations, and responsible autonomy.
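To make prompt engineering a little more concrete, here is a minimal sketch of the kind of discipline it involves, assuming the OpenAI Python SDK; the model name and the ticket-classifier use case are purely illustrative:

```python
# Minimal prompt-engineering sketch: an explicit role plus a constrained output
# contract tends to give far more consistent results than a free-form question.
# Assumes the OpenAI Python SDK; the model name is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a support-ticket classifier. "
    "Respond with exactly one word: 'billing', 'technical', or 'other'."
)

def classify_ticket(ticket_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model name
        temperature=0,    # keep the output as deterministic as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_ticket("I was charged twice for my subscription this month."))
```

The specific API matters less than the habits it demonstrates: an explicit role, a constrained output format, and a low temperature are the basics that make model outputs consistent enough to build a workflow on.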
According to the World Economic Forum’s Future of Jobs Report (2023), skills in AI, machine learning, and big data are among the top five most in-demand by employers. However, AI talent shortages remain a bottleneck: McKinsey estimates that only 10% of companies have the AI talent they need to scale solutions effectively.
In short, to stay competitive, organizations must upskill existing teams, attract AI-savvy talent, or work with specialist vendors who can accelerate adoption while ensuring best practices.
2. Ethical guidelines for safe AI use
AI implementation without oversight is a compliance and reputational risk waiting to happen. The upcoming EU AI Act, expected to begin enforcement in 2025, will be the world’s first comprehensive regulatory framework for AI. It classifies AI systems into four risk categories (unacceptable, high, limited, and minimal) and places strict requirements on transparency, human oversight, and data governance.
Under the General Data Protection Regulation (GDPR), AI that processes personal data must also adhere to principles of fairness, accountability, and explainability. This includes offering users the right to understand and challenge automated decisions, a growing concern in high-risk applications like credit scoring, hiring, or facial recognition.
Companies must act now to:
- Define acceptable use policies for generative AI tools.
- Implement data minimization and anonymization protocols (a minimal sketch follows this list).
- Set internal guardrails for model access, auditing, and decision accountability.
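As a small illustration of the data minimization point, the sketch below strips obvious personal data from a prompt before it leaves the organization, using only Python’s standard library. The regexes and the redact helper are hypothetical and nowhere near a complete PII filter:

```python
# Hypothetical guardrail: strip obvious personal data from text before it is
# sent to an external generative AI service. Real deployments would rely on a
# dedicated PII-detection service; this only illustrates the principle.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Customer Jane (jane.doe@example.com, +31 6 1234 5678) reports an outage."
print(redact(prompt))
# -> Customer Jane ([EMAIL REDACTED], [PHONE REDACTED]) reports an outage.
```

A real guardrail would sit in front of every external model call, but the principle is the same: minimize what you send before you send it.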
3. A solid data strategy
Data is the fuel behind any AI system, but not all data is created equal. If your data is inconsistent, poorly labeled, or locked away in organizational silos, AI won’t just underperform; it might fail entirely.
Recent studies have shown that up to 80% of AI project time is spent on data preparation: cleaning, deduplication, annotation, and normalization. Worse, nearly 55% of the data businesses collect is never used, suggesting widespread inefficiencies in how data is stored and surfaced.
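To give a feel for what that preparation work looks like, here is a minimal cleaning and deduplication pass, assuming pandas and a hypothetical tickets.csv file with a free-text description column:

```python
# Illustrative data-prep pass: drop missing rows, normalize a text field, and
# remove exact duplicates. Real pipelines add schema validation, annotation,
# and versioning on top of these basics.
import pandas as pd

df = pd.read_csv("tickets.csv")  # hypothetical input file

# Cleaning: drop rows with a missing free-text description.
df = df.dropna(subset=["description"])

# Normalization: consistent casing and whitespace in the text column.
df["description"] = df["description"].str.strip().str.lower()

# Deduplication: exact duplicates are a common source of leakage between
# training and evaluation splits.
df = df.drop_duplicates(subset="description")

df.to_csv("tickets_clean.csv", index=False)
print(f"{len(df)} rows remain after cleaning")
```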
A successful AI data strategy should include:
- Data sourcing best practices: selecting relevant, representative, and domain-specific datasets.
- Smart annotation workflows: combining ML-assisted labeling with human review to ensure accuracy (see the sketch after this list).
- Cloud-based data sharing: enabling secure, real-time access across teams and systems.
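The annotation workflow point is worth spelling out. Below is a hypothetical sketch of ML-assisted labeling with human review: a model proposes labels, confident predictions are accepted automatically, and uncertain ones are routed to annotators. The model_predict function and the 0.90 threshold are stand-ins for illustration:

```python
# Sketch of ML-assisted labeling with human review: auto-accept confident
# model predictions, queue uncertain ones for annotators. The threshold and
# the model_predict stand-in are illustrative placeholders.

CONFIDENCE_THRESHOLD = 0.90

def model_predict(text: str) -> tuple[str, float]:
    """Stand-in for a real classifier; returns (label, confidence)."""
    return ("invoice", 0.97) if "invoice" in text.lower() else ("unknown", 0.40)

def triage(documents: list[str]) -> tuple[dict[str, str], list[str]]:
    auto_labeled, review_queue = {}, []
    for doc in documents:
        label, confidence = model_predict(doc)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_labeled[doc] = label        # accepted without human effort
        else:
            review_queue.append(doc)         # routed to a human annotator
    return auto_labeled, review_queue

labeled, queue = triage(["Invoice #123 for May", "Scanned note, unclear content"])
print(f"auto-labeled: {len(labeled)}, needs human review: {len(queue)}")
```

Tuning that threshold is the real design choice: set it too low and unreviewed model errors leak into the dataset; set it too high and annotators end up reviewing cases the model already gets right.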
4. An AI investor mindset
AI is not a one-time purchase; it’s a strategic capability that compounds over time. Yet many organizations treat it like a quick fix. This short-term mindset is why most AI initiatives stall after the pilot phase. Roy Steunebrink advocates for a more sustainable view:
“When it comes to AI, you need patience, not panic.”
Roy Steunebrink
Head of Development and Implementation, Friday
Meaningful AI ROI tends to materialize over 3–5 years, especially in complex sectors like manufacturing, healthcare, and finance.
To succeed, leaders must:
- Embrace experimentation and iteration, knowing not all pilots will work.
- Allocate budgets for continuous model improvement and retraining.
- Measure success not just in immediate cost savings, but in long-term process transformation, customer experience, and innovation capacity.
Essentially, building AI maturity is like building a flywheel: it starts slowly, then accelerates with each smart investment.

