Weekly digest: ethical AI and the future of product information

Sophie Nobes

This week, we highlight an upcoming webinar from the Pistoia Alliance about the ethical use of AI by pharma, and we watch a recording of an ISMPP University webinar about the use of AI in scientific communication. We explore the future of medical product information in the latest Open Pharma guest blog and read about updates to Taylor & Francis’s AI guidance. Finally, we read an article that questions whether LLMs are as open as they claim to be.

Ethical AI via Pistoia Alliance

How can we ensure that the use of artificial intelligence (AI) by pharma is ethical? Join Christopher Waller (Vice President and Chief Scientist at EPAM Systems), Karin Schneider (Associate Director at Johnson & Johnson Innovative Medicine) and Adarsh Srivastava (Head of Data & Analytics Quality Assurance at Roche) in this webinar on 11 July to find out.

AI in scientific communication via ISMPP | 90-minute watch

In this recording from the International Society for Medical Publication Professionals (ISMPP) University webinar series, Matt Lewis (Global Chief Artificial and Augmented Intelligence Officer at Inizio), Yahya Anvar (Chief of AI Science and Insights at OKRA.ai), Hélène Dassule (Head of Strategy and Excellence GHEOR, GMC, Training/Field Capabilities at Alexion) and Simon Foulcer (Director BPM Evidence Publications at AstraZeneca) explore how to integrate AI tools into existing pharma workflows. An ISMPP account is required to watch this recording.

Transforming medical product information via Open Pharma | 7-minute read

Medical product information is evolving. Read our latest guest blog by Shimon Yoshida (Executive Director, Head of International Labeling Group at Pfizer) to explore how product information is being reimagined both to support clinicians and regulators and to educate, engage and empower patients. The post also announces the formation of a new working group – anyone interested in participating should send an expression of interest to us at oxford.project@pharmagenesis.com.

Taylor & Francis updates AI guidance via Taylor & Francis | 2-minute read

Open Pharma Supporter Taylor & Francis has issued a new policy to guide the ethical and transparent use of generative AI in scholarly publishing, with the aims of ensuring accuracy in scientific publications and protecting intellectual property. The policy expands on previous guidance for authors and for editors and reviewers, and outlines prohibited uses of AI. The policy will be updated continuously as AI technology and ethical practices evolve.a

Are large language models truly open source? via Nature | 6-minute read

Researchers have criticized major tech companies for claiming their AI models are open source while restricting access to code and training data. This study, which rated 40 large language models (LLMs) on their openness, found that many models allow researchers to use trained LLMs but not to inspect or customize them. The authors conclude that until true open source is achieved, AI will remain unaccountable.a


Enjoy our content? Read last week’s digest and check out our latest guest blog!

Don’t forget to follow us on Twitter/X and LinkedIn for regular updates!


aPaige – a generative AI tool created by Oxford PharmaGenesis – was used to create an early draft of this summary. Paige uses OpenAI’s GPT Large Language Models, securely and privately accessed from within Microsoft’s Azure platform. The AI-generated output was reviewed, modified or rewritten, and checked for accuracy by at least one member of the Open Pharma team. The news pieces included in the weekly digest are curated by the Open Pharma team without the use of AI.