1. Regulatory
We discuss the EU Artificial Intelligence Act (AI Act), its potential implications for Switzerland and the possible Swiss approach to the incoming legislation. Given the EU's significance as a major trading partner for Switzerland, it is essential for organizations to understand what the EU AI Act requires of them and to ensure compliance.
EY's whitepaper helps stakeholders, including traditional biopharma players and tech companies entering the industry, prepare for the upcoming regulations. Investing in this topic now will enable organizations to ensure compliance while maximizing the benefits of AI for their business.
2. Risk
The EU AI Act classifies AI systems by risk level and proposes mechanisms for governing each tier: prohibiting unacceptable-risk applications, permitting high-risk systems subject to strict compliance requirements, enforcing transparency obligations for limited-risk AI, and allowing minimal-risk AI without restrictions.
For the life sciences industry, which deals with safety-critical applications, compliance with the new regulations can be challenging and costly. Companies must integrate AI governance and risk assessments into their organizational structures, develop ethical commitments, ensure strategic vision, assess AI impacts consistently and manage third-party risks.
In this environment, life sciences stakeholders will be keen to connect AI risks to trustworthiness principles along the end-to-end AI lifecycle. Successful operationalization of AI risk management requires alignment with enterprise risk management programs across domains such as governance, culture, methodology, processes and technology.
3. Technology
Finding the right balance between transparency and complexity in AI modeling is crucial for organizations seeking to leverage the benefits of AI. Incorporating approaches that enhance interpretability and identify potential bias helps mitigate the challenges associated with black box models and fosters understanding and trust in AI systems.
Developers will need to prioritize interpretable features and apply post-hoc analysis techniques to evaluate black box model behavior. They should also document algorithms appropriately to increase transparency and build stakeholder confidence.
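To illustrate what such a post-hoc analysis can look like in practice, the sketch below applies permutation importance, one common model-agnostic technique, to a black box classifier. It is a minimal illustration only, assuming a Python/scikit-learn environment; the synthetic dataset, the choice of GradientBoostingClassifier and the feature names are hypothetical and not drawn from this publication.

```python
# Minimal sketch: post-hoc interpretability check on a black-box classifier
# using permutation importance (model-agnostic). Dataset and model are
# illustrative placeholders, not taken from this publication.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular dataset (hypothetical).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box" model: accurate, but not directly interpretable.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc analysis: permute each feature on held-out data and measure how
# much the score drops; larger drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)

for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Documenting both the model and the outputs of such checks is one way to support the transparency obligations discussed above.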
Summary
As AI comes of age and regulation catches up, organizations must make informed decisions that strike a balance between transparency, complexity and predictive power. Getting this balance right enables them to enhance the reliability of their AI systems and optimize performance while avoiding the far-reaching risks of non-compliance.
Acknowledgements
We would like to thank Sharon Kaufman, Michael Imhof, Michael Graf, Iuliia Metitieri, David Sütterlin, Marco Pizziol, Oliver Mohajeri, Aljoscha Gruler and Esther van Laarhoven-Smits for their significant contributions to preparing and shaping this publication.