Recently, thanks to advances in single-cell sequencing technology, scientists have identified a new protective glial subtype, disease-associated microglia (DAM), and deciphered their dynamics over the course of Alzheimer's disease (AD) progression [iii]. In that study, staining of mouse and human brain sections revealed phagocytosed Aβ particles inside DAM cells. This unique microglial type may slow neurodegeneration, which could have important implications for future treatment of AD and other neurodegenerative diseases. It remains to be determined how targeting microglia-specific inhibitory checkpoints could induce early DAM activation, and how this could serve as a therapeutic (or preventive) target to trigger microglial responses against AD plaque pathology, aging, and other neurodegenerative conditions.
Dr. Lurong Pan, founder and CEO of Ainnocence (圆壹智慧), has a multidisciplinary educational and industrial background spanning computational chemistry, structural biology, and artificial intelligence. From purely physics-based process simulation to machine-learning methodology, she has spent more than 14 years developing and applying computational techniques for biological research and drug design, covering neurodegenerative diseases, cardiovascular diseases, cancer, rare diseases, and infectious diseases. Meanwhile, spurred by the COVID-19 pandemic, the volume of openly shared data worldwide and the field of computational biology have advanced enormously. Reflecting on her accumulated experience in AI-driven drug discovery, from academia to industry, Pan felt she was "Prepared for This Moment," and in April 2021 Ainnocence was born.
Lurong Pan is the founder and Chief Executive Officer of Ainnocence, a firm seeking to accelerate drug discovery through comprehensive AI models. We caught up with Lurong Pan at our Medicinal Chemistry Strategy Meeting in Boston in 2022, where we explored her motivations in the industry and the impact of Artificial Intelligence (AI) in the drug discovery space.
PF(PharmaFeatures): It’s a pleasure to have you here with us, Lurong. You have had an interesting journey in the life sciences world so far, leading up to your founding of Ainnocence. Would you like to tell us a bit more about yourself, your personal background and what motivates and drives you?
LP(Lurong Pan): Of course. I would say my journey began with my undergraduate degree in Applied Chemistry. Afterwards, I developed my interest in the field further while doing my PhD in Chemistry, specializing in Computational Chemistry, which led to further academic endeavors: a post-doc in Structural Biology while studying for a master's degree in Artificial Intelligence. What I would say has been the common driver throughout my ten-year academic career is the desire to find the best way to leverage computational methods to predict biological events – such as molecular properties and how interactions between different molecules play out – particularly with regard to disease pathophysiologies.
PF: How would you say these experiences shaped you and your approach to modern bioinformatics?
LP: Among all the computational methods I came across during my time in academia, I kept noticing countless limitations – which kept pushing me forward to look for the next best solution. I moved from approaches such as molecular-mechanics docking, molecular dynamics, and quantum mechanics to more informatics- and machine-learning-based methods, to fine-tune predictions of biological properties down to the microscale. But it is also crucial to integrate all these methods, and the properties they predict, into a comprehensive predictive platform. This would enable estimates for valuable applications, such as the behavior of drugs across different layers of biology. My academic experience gave me the background to ask these questions; my industry experience integrating data, which is often a major bottleneck, gave me the tools to start answering them.
PF: And how would you say you went about answering these questions – any formative experiences you could recall in applying this expertise?
LP: I would say one of my most formative experiences was when I joined a group of scientists to build the Global Health Drug Discovery Institute, a non-profit institute founded by the Bill & Melinda Gates Foundation, Tsinghua University, and the Beijing Municipal government, aiming to leverage Chinese expertise to solve global health problems. We built our own team of computational AI scientists, creating a holistic, open-source, free AI platform for drug discovery to support projects working on malaria and other diseases that cause unnecessary burdens in the poorest parts of the world. We also contributed our expertise and data to COVID-19 research, being one of the first groups to release COVID-related AI models and technologies – regardless of whether it would lead to publication; a lot of our work was actually published on GitHub!
PF: Is that one of the main reasons you then went on to found Ainnocence?
LP: Yes, after these experiences I felt truly equipped to build a comprehensive, data-driven AI model. We think that leveraging AI and big data is a new way of doing research, on a global, much more collaborative scale. We named the company Ainnocence to hint at the ethical dilemmas presented by Artificial Intelligence. Our view and belief is that AI need not raise ethical concerns when constructed and managed appropriately – hence the innocence. We focused on creating a self-evolving, high-throughput system, but with low energy consumption and high portability through the cloud. Our aim is to translate our lived and accumulated experience into successes for the many people seeking to produce life-extending and life-saving therapies, accelerating pipeline development while reducing costs.
PF: What would you say is the unique brand of Ainnocence – with regards to values, and perhaps the impact you seek to have on the world?
LP: I have always held a deep appreciation for medical practice in general – perhaps because I come from a family of doctors to begin with. I think that decreasing the global burden of disease is a noble goal, and one of the most direct routes towards having an indubitably positive impact on society. And I am not merely referring to easily preventable disease – although it is a tragedy that we live in a world where such conditions claim so many lives and years. I am also referring to mental illness, which has long suffered from a lack of formal investigation. We have seen how collaboration can bring about rapid change. And that is the goal we seek to achieve with Ainnocence: translating our own expertise in bioinformatics into improved health outcomes.
PF: You spoke at length about collaboration – but how do we balance collaboration with competition, so that the two can coexist?
LP: That is a good, and open, question, I would say. I do not think the industry has yet reached an ideal equilibrium between collaboration and competition – but they can definitely coexist, when balanced appropriately. AI, for example, is mostly algorithms and mathematical formulae: it is easy for it to be open. But anything data-related raises many more questions regarding collaboration: data ownership, data security, data usage. Data is an asset, and it must be balanced not only against market concerns but also against regulatory, privacy, and ethical concerns. Yet high-quality data can also lead to significantly improved AI models, and I think that is where collaboration should truly come in: bringing different data owners together to construct more powerful AI models that everyone can benefit from – without compromising any single party's data. To reach a point where that happens regularly, however, we need to overcome many challenges, such as data fragmentation and industrial and academic silos. Doing so will require work on an industry-wide scale, but I think it is the best option for the future of healthcare and pharma.
PF: You also spoke about the capacity for AI to be innocent, as you put it, or perhaps a bit more malevolent. But do you think we're quite there yet – where AI is actually an artificial intelligence that can make its own decisions and bring good or harm to the world?
LP: I do think we could be there if we wanted; whether AI can have decision-making impact is a function of whether we have delegated such capabilities to it. Obviously, I do not think society has reached a consensus on whether we want to do that yet – and there should be a strong regulatory framework and consensus before we do so. Currently, AI is most often employed as a sort of "optimizer". In this application, it acts more like an intelligent agent, optimizing and iterating on itself to achieve the goals it was tasked with. In this sense, I do not think that AI can be malevolent; but there can be malevolent goals. AI can definitely be more efficient than humans in a multitude of applications, but keeping a human involved in processes that can impact decision-making should remain a necessity for the time being.
PF: We saw something similar in multiple papers exploring bias in clinical studies or lack of diversity. And many people are afraid that if we introduce our own biases to models, the models might amplify them, right?
LP: Absolutely. In most implementations, AI currently acts as an accelerator rather than exercising its own judgment. If bias is introduced to the system, it will likely amplify it. This is why I think it is crucial to be very considerate about how our data is constructed and how our models are trained – because this can lead to a butterfly effect of sorts, where originally small problems are later magnified.
PF: Are there any more specific applications of AI that you are excited to see over the coming years?
LP: Quite a few. I think, for example, in biologic drug discovery, AI can analyze more complicated and heterogeneous datasets, through multiomics and better data integration – particularly for phenotypic drug discovery approaches. Obviously, a lot of other areas beyond that are being touched by AI – like diagnostics. I feel that AI will make its way into a majority of industrial spaces, at least in the beginning, before it settles into high-impact, cost-effective niches. I firmly believe drug discovery will be one of those, if only because of the sheer complexity of the process and the predictive mechanisms needed.