As AI moves from experimentation to enterprise impact, technology leaders face increasing pressure to drive innovation responsibly. Dr Roland Schmid offers a strategic and grounded perspective shaped by two decades of executive experience at the intersection of technology, business, and ethics. As former Global Head of Corporate IT at Syngenta Group, he led major digital transformation initiatives, unlocked productivity through advanced analytics and AI capabilities, and spearheaded the development of Syngenta’s publicly available AI Manifesto. Today, Dr Schmid remains deeply engaged in the evolving AI landscape, with a focus on value realisation, trust, and responsible adoption at scale. In this Executive Insight, he shares his perspective on what it will take for leaders to navigate AI’s next chapter responsibly and why accountability, transparency, and practical execution matter more than ever.
How are organisations you have worked with ensuring that AI systems are ethical and fair?
First, organisations must be crystal clear on what ethical and fair actually mean in their specific context. Most have a code of conduct or ethical framework in place, which serves as the right starting point. At Syngenta, I led the development of an AI Manifesto that made these principles explicit for both internal and external audiences, demonstrating how they align with existing policies, governance structures, and practical approaches to fairness. The manifesto includes commitments such as fostering diverse perspectives within teams and ensuring transparency through community collaboration. It became the foundation for operationalising responsible AI, informing policies and procedures, system design principles, and supplier contracts.
Importantly, awareness was treated as a shared responsibility: Syngenta rolled out training to the entire workforce, recognising that with GenAI now broadly accessible, every employee must act responsibly. I firmly believe that a clear and visible approach prevents responsible AI from becoming a constraint and instead turns it into a strategic differentiator rooted in trust.
What approaches have you seen work well to prevent bias in AI models?
It is important to distinguish between classic machine learning and generative AI. With traditional machine learning, you typically have full control over the dataset. When developing such solutions, I always ensure a structured approach, focusing on careful data selection, transparent documentation, testing for different types of bias, and ongoing monitoring of outcomes after deployment.
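To make that structured approach tangible, the following is a minimal sketch of the kind of post-deployment bias check described above, assuming a pandas DataFrame of held-out predictions. The column names, the 0.05 threshold, and the overall framing are illustrative assumptions, not a description of any specific production pipeline.

```python
# Illustrative sketch only: a simple demographic-parity check on a held-out set.
# Column names, the 0.05 threshold, and the escalation rule are assumptions for
# illustration, not a description of any particular production pipeline.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the gap between the highest and lowest positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

def check_bias(df: pd.DataFrame, group_col: str = "region", pred_col: str = "prediction",
               threshold: float = 0.05) -> bool:
    """Flag the model for review if prediction rates diverge too much across groups."""
    gap = demographic_parity_gap(df, group_col, pred_col)
    print(f"Demographic parity gap across '{group_col}': {gap:.3f}")
    return gap <= threshold  # False -> escalate for human review before (re)deployment

if __name__ == "__main__":
    # Toy held-out set for demonstration purposes only.
    holdout = pd.DataFrame({
        "region": ["A", "A", "B", "B", "B"],
        "prediction": [1, 0, 1, 1, 1],
    })
    if not check_bias(holdout):
        print("Gap above threshold: route to domain experts for review.")
```

A check this simple is obviously not sufficient on its own, but running it on a schedule after deployment is one concrete way to turn "ongoing monitoring of outcomes" into a repeatable control.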
With generative AI, the situation is different. You are working with large language models where the training data is not fully transparent or controllable. This places greater emphasis on how the model is used, requiring careful curation of data for fine-tuning or retrieval-augmented generation, as well as thoughtful prompt design strategies. Ultimately, testing and monitoring for bias become even more critical with GenAI.
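As an illustration of what bias testing can look like for generative models, the sketch below runs counterfactual prompt pairs that differ in a single attribute and collects the outputs for side-by-side human review. The prompt template, name pairs, and the generate() stand-in are hypothetical; they are not a reference to any particular model, vendor API, or Syngenta system.

```python
# Illustrative sketch only: probing a generative model with counterfactual prompt
# pairs that differ in a single attribute. generate() is a hypothetical stand-in
# for whatever model endpoint is actually in use; outputs go to a human reviewer.
from typing import Callable

PROMPT_TEMPLATE = "Write a short hiring recommendation for a candidate named {name}."
PAIRED_NAMES = [("Anna", "Ahmed"), ("John", "Maria")]  # illustrative pairs

def probe_pairs(generate: Callable[[str], str]) -> list[dict]:
    """Run counterfactual prompt pairs and collect outputs for side-by-side review."""
    results = []
    for name_a, name_b in PAIRED_NAMES:
        out_a = generate(PROMPT_TEMPLATE.format(name=name_a))
        out_b = generate(PROMPT_TEMPLATE.format(name=name_b))
        results.append({"pair": (name_a, name_b), "outputs": (out_a, out_b)})
    return results

if __name__ == "__main__":
    # Stub model for demonstration; replace with a real client call.
    def fake_generate(prompt: str) -> str:
        return f"[model output for: {prompt}]"

    for record in probe_pairs(fake_generate):
        print(record["pair"], "->", record["outputs"])
```

The point of a probe like this is not to automate the judgement away: the paired outputs are exactly the kind of material a human reviewer with bias awareness should assess, which leads directly to the oversight principle below.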
In both cases, human oversight remains the most important safeguard—from design and development through to the model’s use. At Syngenta, we defined this as the first principle in our AI Manifesto: AI must always include a human in the loop who possesses the expertise and bias awareness to critically assess AI outputs.
Finally, not all use cases carry the same level of risk. A biased result in weed detection is very different from one in hiring. Taking a risk-based approach ensures the level of scrutiny matches the potential impact.
How do you handle transparency and accountability in AI decision-making?
Transparency starts with clearly defining the purpose of an AI system—what problem it is solving, how it works, what its limitations are, and the intended outcomes. This includes documenting assumptions, data sources, and key design decisions in a way that both technical and non-technical audiences can understand. It is also important to clarify what the system can and cannot do, and to make trade-offs visible so people can make informed decisions. When people understand the system, they are more likely to trust and adopt it.
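One lightweight way to keep that documentation consistent is a model-card-style record maintained alongside the system. The sketch below is purely illustrative, assuming field names and example values that are not a prescribed standard or an actual Syngenta artefact.

```python
# Illustrative sketch only: a lightweight, model-card-style record capturing
# purpose, data sources, assumptions, limitations, and key design decisions.
# Field names and example values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemCard:
    name: str
    purpose: str                       # the problem the system is meant to solve
    intended_outcomes: list[str]
    data_sources: list[str]
    assumptions: list[str]
    limitations: list[str]             # what the system cannot do
    key_design_decisions: list[str] = field(default_factory=list)
    accountable_owner: str = ""        # the business stakeholder who owns decisions

    def to_json(self) -> str:
        """Serialise so both technical and non-technical audiences can review it."""
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    card = AISystemCard(
        name="Indirect-spend optimiser (illustrative)",
        purpose="Surface savings opportunities in long-tail indirect spend",
        intended_outcomes=["Ranked opportunities for review by category managers"],
        data_sources=["Purchase-order history (illustrative)"],
        assumptions=["Spend categories are mapped consistently across regions"],
        limitations=["Recommendations only; it does not execute decisions"],
        key_design_decisions=["A category manager approves every action"],
        accountable_owner="Procurement category lead",
    )
    print(card.to_json())
```

Keeping a record like this in version control alongside the system makes the trade-offs visible and gives non-technical stakeholders a single document to challenge and sign off.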
Accountability goes beyond simply assigning responsibilities—it is about ownership. The people who will ultimately use the solution need to be involved from the beginning. That means engaging business stakeholders early, rather than delivering a finished tool without their input. When stakeholders contribute to shaping the logic and understanding the outputs, they take real ownership.
I also believe transparency and accountability are directly tied to value realisation. When people trust the solution, they use it. When they use it, it delivers value.
Can you share an example of how responsible AI principles have been applied in your work?
One example is an AI-based procurement tool we developed, which unlocked significant optimisation potential in the long tail of indirect spend. The value came from working closely with experienced category managers who helped shape the logic, test outputs, and drive adoption. They were involved from the start, truly co-creating the solution. The tool was designed to provide clear, transparent insights, fully empowering them to own decisions and actions. In other words, the human remained in the loop by design.
More broadly, responsible AI has been a core element of my work on identifying opportunities and defining strategies for AI-enabled digital transformation. I placed strong emphasis on running workshops and hackathons to engage a broad range of stakeholders, not only to generate ideas, but also to raise risk awareness and build sensitivity around responsible AI. Responsible use is not just about designing systems and controlling AI models; it is also about how people understand, apply, and trust the technology in practice.
What do you see as the biggest ethical challenge for AI in the future?
The biggest ethical challenge is overarching: what role do we want AI to play in the world? As a society, we have not yet answered that question. And it will not be easy to resolve, given differences in value systems across global regions and the rapid pace of AI evolution; the emergence of autonomous, agentic AI, for example, raises new ethical questions. Like any major technological shift, it will take time to balance opportunity and risk.
One specific issue is the impact on jobs. I believe AI will create more new jobs than it eliminates, but many professions will be reshaped. The transition will be uneven, and both companies and governments will need to manage it with care, clarity, and a strong ethical foundation to minimise disruption.
For tech leaders, ethical questions already appear in daily decisions. Which use cases are acceptable? How should the EU AI Act be applied and aligned with emerging regulations across regions? How should rules be set for vendors and how can they be held accountable? There are still significant grey zones to navigate.
AI will remain in flux for some time. What matters most is that leaders stay aware of the risks and act responsibly within their context. In practice, that means asking tough questions early, involving diverse perspectives, being honest about limitations, and focusing on outcomes that genuinely matter to people, businesses, and society.