Summary

In this session, Morgan Levine, Assistant Professor at Yale, gave a sneak peek into a new epigenetic clock her lab is developing that can probe multiple organ systems, as well as a new, much more reliable approach to calculating clocks, which makes it possible to generate insights from methylation clocks with far smaller sample sizes. The second talk was given by Jamie Justice, Assistant Professor at Wake Forest, who covered how the longevity field is working to validate biomarkers of aging through clinical trials, illustrated with examples from several senolytic trials her group has run. She closed by explaining how the TAME trial, which she helps coordinate, should serve as a vehicle to move the field forward and provide a flagship trial against which new aging biomarkers can be validated in the future.

 

This meeting is part of the Biotech & Health Extension Group and accompanying book draft.

 

Presentation: Morgan Levine

I’m really excited to be back talking to this group. And actually, the two things I’m going to talk about today are brand new, one of them I’ve never even shown and it’s not published, so I’m giving you guys kind of a sneak peek into it and the other one is under review.

For me, being able to quantify aging is one of the most important endeavours in the field. There is a lot of emphasis on intervening in aging, to either reverse or slow it, but I would argue that there's no way to definitively test whether you did that if you can't directly measure the thing you're trying to intervene in.

 

When we say aging, most people think of chronological aging, because it is correlated with this process of biological aging. But we know that the rate of biological aging is malleable, if we can quantify and distinguish it from chronological age. And this insight can provide clinical trial endpoints to actually measure whether we’ve slowed or reversed the aging process, since lifespan isn’t really feasible in human populations. It also will inform us on basic biology, and might give us understanding about the mechanisms that either drive the ticking of biological aging or modulate it. And then finally we can use it for risk stratification either on a personal level for people who like to do kind of quantified self and actually understand their own risks and how to change those, but also on a population level, to inform things like policy or secondary prevention.

Just to give a kind of a brief rundown of what we actually mean when we talk about epigenetic clocks. The epigenetic pattern in a cell is really what’s dictating the cellular state for cellular functioning. Most of your cells, aside from somatic mutations, have the same DNA. But what makes the stem cell a stem cell or skin cell a skin cell, really is the epigenetic pattern. This is personified in Conrad Hal Waddington’s landscape where he talks about how with epigenetic changes, you can get an undifferentiated cell that will basically traverse this landscape and differentiate into different cell types that have different epigenetic patterns.

 

But aside from differentiation of cell state and cell identity, it has been shown that the epigenetic pattern can also distinguish young cells from old cells. So this is really what we're interested in. And for this, we measure DNA methylation, which is just one form of epigenetic modification. It takes place at CpG dinucleotides, where the cytosines can become methylated. And what we see in aging is not just a change – just increases or decreases of methylation – but really a change in the pattern.

 

And we are really measuring the proportion of cells in the sample that have methylation at specific sites. So for instance, you might have a sample from a 20 year old where you have 90% methylation at a site, and when you look at 80 year olds, you'll have on average 60%. You'll have other sites that actually increase, or become what we call hypermethylated, with aging – perhaps 5% in a young individual, going up to 45% with age. These increases and decreases towards the mean (50%) with age are what we consider epigenetic drift. But we also see sites that change away from the mean, where you might start with relatively low methylation, or relatively high (90%), and it moves further towards the extremes with age (99%).
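To make the drift idea concrete, here is a tiny sketch using the hypothetical percentages from above (not real data), distinguishing sites that drift toward 50% methylation from sites that move toward the extremes:

```python
import numpy as np

# Hypothetical beta values (fraction of cells methylated) at four CpG
# sites, matching the illustrative examples above -- not real data.
young = np.array([0.90, 0.05, 0.10, 0.90])
old = np.array([0.60, 0.45, 0.01, 0.99])

# Epigenetic drift = moving toward the 50% mean with age; other sites
# move away from the mean, toward the 0%/100% extremes.
toward_mean = np.abs(old - 0.5) < np.abs(young - 0.5)
print(toward_mean.tolist())  # [True, True, False, False]
```

The first two sites (90%→60%, 5%→45%) show drift toward the mean; the last two (10%→1%, 90%→99%) move toward the extremes.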

Because this has been so striking, people have developed epigenetic clocks. They enable us to look at a sample and, using algorithms based on the methylation pattern at specific sites, predict the age of the sample. The first clock was developed in 2011; I won't go through all the clocks for the sake of time. What is worth noting is that all the clocks up until our PhenoAge clock in 2018 were developed as predictors of chronological age, so we usually refer to them as first generation clocks. However, we and then others showed that clocks trained on aging correlates, rather than chronological age itself, are actually much more robust at predicting things like mortality or disease incidence. That's what has been done more recently, and where the field is going.
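A minimal sketch of how first-generation clocks of this kind are trained: penalized (elastic-net) regression of chronological age on CpG beta values. The data here are simulated; real clocks are trained on large cohorts with hundreds of thousands of array CpGs.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_cpgs = 200, 500
ages = rng.uniform(20, 80, n_samples)

# Simulate beta values where the first 50 CpGs change linearly with age;
# the rest are uninformative noise.
slopes = np.zeros(n_cpgs)
slopes[:50] = rng.normal(0, 0.004, 50)
betas = np.clip(0.5 + (ages[:, None] - 50) * slopes[None, :]
                + rng.normal(0, 0.02, (n_samples, n_cpgs)), 0, 1)

# The elastic net selects a sparse, weighted set of CpGs -- the "clock".
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(betas, ages)
predicted = clock.predict(betas)
print(round(float(np.corrcoef(predicted, ages)[0, 1]), 2))
```

Second-generation clocks like PhenoAge follow the same recipe but swap the training target: instead of chronological age, the regression predicts a composite aging phenotype.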

 

What should a good aging biomarker do? It should provide good prediction. That includes prediction of chronological age, because of course you need a biomarker that tracks with age to call it an aging biomarker. But it shouldn't be a perfect prediction of chronological age, because, again, chronological age is an imperfect proxy for the biological aging process we're trying to capture. We also want to predict outcomes above and beyond chronological age: will it differentiate people of the same chronological age who are at greater risk of mortality, or greater risk of disease incidence, etc.?

 

It should have high precision. If I measure the same sample twice, am I going to get the same answer? That’s really important for people who are measuring those and tracking them in themselves, and really important for clinical trials, maybe less so for kind of large population data or epidemiological data. But in terms of what we’re mostly working on, precision becomes really important.

 

Obviously, it should be non-invasive. This is a problem when we're talking about measuring aging in tissues that are not easily accessible.

 

We preferably want something that's modifiable, because if it's not modifiable, why do you need to know the epigenetic age of something?

 

And then finally, working on things that make it more affordable.

Talking about precision and reliability, what I mean is that if I split a single blood draw and then assess age twice, will I get the same answer? Or will one say 50 and the other 55?

This is a question that came up when I started working at Elysium, because obviously, if you're going to offer direct-to-consumer tests, you want to make sure you're giving people an answer you can stand behind. And I discovered pretty quickly that the existing epigenetic clocks are fairly bad when it comes to this. Shown are samples not from different batches, but literally measured on the exact same array, from the same blood sample, and you can still get differences – in some cases up to nine years in the exact same sample. GrimAge is actually one of the better ones. This is partly because it includes chronological age and gender, which obviously are constant across replicates, but you still get deviations of about two and a half years. So before the Elysium test went out I did a lot of work on this, and since then my postdoc Albert Higgins-Chen has been working on it. I'm going to talk about what he's done for a paper that's currently under review.

The reason this happens is that a lot of the CpGs we measure on these arrays are very noisy. In the charts you can see that CpGs that tend to be fully methylated or fully hypomethylated across all the cells in your sample also tend to be noisy. It's partly because these CpGs don't vary that much, so a given amount of noise looks like a big deal at CpGs with low variance.

One thing people may suggest is to just exclude the noisy CpGs. But excluding noisy CpGs doesn't help: you lose your ability to predict things, but not the deviations. This suggests that these noisy CpGs are actually biologically meaningful.

The solution we came up with is to capture higher-order patterns across the genome. Clocks are usually picking out one CpG to represent some signal that a lot of CpGs share; they pick a specific pattern from a good number of CpGs. But what if we look at more of them? Do we get a more reliable signal? Maybe one of the CpGs we're looking at has some error in it, but if we look at the group, we might do a much better job – the group shouldn't be picking up the random noise.
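A hedged sketch of this idea with simulated data: train the clock on principal components computed across many CpGs rather than on individual CpGs, so independent technical noise at single sites averages out inside the components. (The actual PC-clock method is described in the paper; this is only an illustration of the principle.)

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n, p = 300, 1000
ages = rng.uniform(20, 80, n)
signal = (ages - 50) / 30                       # shared aging signal

# Every CpG carries a bit of the shared signal plus heavy site-level noise.
loadings = rng.normal(0, 0.05, p)
true_betas = 0.5 + signal[:, None] * loadings[None, :]
betas = true_betas + rng.normal(0, 0.05, (n, p))       # measurement 1
replicates = true_betas + rng.normal(0, 0.05, (n, p))  # same draw, re-run

# Train the clock on principal components instead of raw CpGs.
pc_clock = make_pipeline(PCA(n_components=20), Ridge(alpha=1.0))
pc_clock.fit(betas, ages)

# Technical replicates now get nearly the same predicted age, because
# the site-level noise largely cancels inside the components.
dev = np.abs(pc_clock.predict(betas) - pc_clock.predict(replicates))
print(round(float(dev.mean()), 1))
```

In this toy setup the replicate-to-replicate deviation lands around a year or two, even though any single simulated CpG is mostly noise.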

So that's what we did in the paper now on bioRxiv, which will hopefully be out soon. You get a really good, almost perfect correlation between replicates after you apply this method. With the older clocks you can have deviations as large as nine years; we bring all of them under two, and most under one. And this happens across the board for all the clocks we tested. So you're getting the same clocks, just much more precise.

Then the important question is whether we did anything to actually hurt our ability to predict outcomes. Here I’m just showing mortality, but we looked at tons of different outcomes. What you can see again is that the new method improves it, even for GrimAge which already was highly predictive. So we don’t actually lose our ability to predict anything, we actually gain some predictive ability.

This shows longitudinal tracking of the original and corrected clocks. And as you can see, there’s a lot less jumping around. Some of these are really clean in terms of longitudinal tracking, so they should help with that kind of data.

We're also interested in the impact of using clocks for testing interventions. The idea is that you have a placebo arm and an intervention arm of people who are chronologically 50 years old, you follow them for a year, and maybe the arms now deviate by only one year epigenetically. Can you pick that up using the original, noisy clocks, or using these new clocks? So we did power calculations showing, for a given effect size – how much you think your intervention will change epigenetic aging – what sample size you would need to differentiate cases from controls using the original clock versus the new clocks. For all these clocks, the blue line is much lower, so you hugely increase your power: you need far fewer samples than you would have needed with the original clocks.
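A back-of-the-envelope version of this kind of power calculation: per-arm sample size for a two-sample t-test to detect a one-year shift in epigenetic age. The noise SDs below are illustrative placeholders, not the values from the paper.

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
# Assumed total SD of the epigenetic-age estimate for each clock
# (placeholder numbers: noisier original clock vs. precise PC clock).
for clock, sd in [("original clock", 4.0), ("PC clock", 1.0)]:
    d = 1.0 / sd  # standardized effect size (Cohen's d) for a 1-year shift
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"{clock}: ~{n:.0f} participants per arm")
```

Because required sample size scales roughly with the square of the measurement SD, cutting the noise by a factor of four cuts the required n by roughly sixteen-fold.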

We're also really interested in in-vitro studies of epigenetic aging, and we're doing a lot of those in house. So we just took some of our data. These are human astrocytes that we serially passage. Using the original clocks – the top row – you get lots of jumping around and not a very clear pattern. Using the new clocks, you get a really precise pattern; you can hardly tell the difference between the technical or the biological replicates until about passage eight, when they actually start diverging. We think this divergence is biologically real, not just something to do with noise. Again, we did the power analysis for drug screenings and in vitro experiments, and again we find that you need far fewer samples with the new clocks.

And then this is where I’m going to briefly just talk about a brand new clock that is not yet completed, but I wanted to share it with you guys. It’s not named yet, open to suggestions, right now we’re calling it a Systems clock. Still in development.

 

So in an ideal world, you would be able to non-invasively get measurements of aging in different physiological systems or organs – a brain age, a kidney age, a liver age. In reality, most of these samples require biopsy, and often they're collected postmortem, so you're not going to be able to track them easily. So this is what we tried to do. The nice thing is that our PhenoAge demonstrated that we can actually capture multi-system composite aging measures using DNA methylation in blood. So we think the signal is there; you just have to pull it out. Our goal here was to build system-specific measures that, when combined, give a robust overall aging signal.

So what we did is: we first grouped the clinical biomarker training variables into different system or organ groups, used unsupervised machine learning to find patterns within each group, and then used supervised learning to train a methylation predictor of those patterns. So we got scores for what we call brain or cognitive, inflammation and cytokines, immune leukocyte measures, metabolic, hormonal, kidney, liver, cardiovascular, and a red blood cell and platelet measure.
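A rough sketch of that two-stage recipe (the panel names and groupings here are illustrative assumptions, not the actual training variables): stage 1 summarizes each organ system's clinical biomarkers into one composite score with unsupervised learning; stage 2 trains a DNA-methylation surrogate of that score with supervised, penalized regression.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
n_people, n_cpgs = 400, 800
methylation = rng.uniform(0.2, 0.8, (n_people, n_cpgs))  # placeholder betas

# Stage 1 (unsupervised): e.g. a hypothetical kidney panel
# (creatinine, cystatin C, BUN) collapsed to a single composite score.
kidney_panel = rng.normal(0, 1, (n_people, 3))
kidney_score = PCA(n_components=1).fit_transform(kidney_panel).ravel()

# Stage 2 (supervised): a methylation predictor of the kidney composite.
kidney_clock = ElasticNet(alpha=0.1).fit(methylation, kidney_score)
dnam_kidney_age = kidney_clock.predict(methylation)

# Repeating this per system yields the system scores that are then
# combined into the overall Systems-clock measure.
print(dnam_kidney_age.shape)
```

With random placeholder data the fit is meaningless, of course; the point is the pipeline shape: clinical panel → composite score → methylation surrogate, once per system.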

And I'll just briefly show how the full system measure compares. I'm not going to go too much into the individual measures, but you can pull them out and look at them individually. We compared it against three of the existing epigenetic clocks. You can see it's a substantially better predictor of all cause mortality than the existing clocks, and a much better predictor of cognitive functioning than any of them – we think that's really driven by the brain/cognitive component. It's also a much better predictor of physical functioning and of comorbidity counts, and we have other outcomes too. The nice thing is that you can pull apart the systems and look at profiles of people. You might have two people with the exact same full system age, but one gets there because they have accelerated liver aging, and the other because of accelerated aging in some other system. So we can define different subtypes, or ageotypes, or whatever term you want to use. We think this will potentially help in intervention trials, because you might not want the same intervention for everyone, depending on their subtype.

And finally, I just want to acknowledge all the people in my lab at Yale. I couldn’t list all my collaborators, so here are some of my collaborators elsewhere, and my funding from NIA and Glenn Foundation.

Q&A

How can we discover the mechanisms underlying epigenetic aging?

  • Epigenetic clocks have been remarkable tools for quantifying biological aging. They are extraordinary in that the same measure can be used across diverse tissues and even diverse mammalian species. The problem is that we have no fundamental understanding of what drives DNA methylation changes with aging or how they directly connect to manifestations of aging at the tissue or organismal level. I feel that developing this understanding will require a three pronged approach—1) we need to utilize reductionistic and/or in vitro experiments to link DNA methylation changes to various aging hallmarks, gene regulatory processes, and/or other epigenetic phenomena; 2) we need to develop better single-cell data in order to link DNAm changes to cellular phenotypes and heterogeneity; 3) we need to employ computational approaches to deconstruct epigenetic clocks given that they likely capture diverse types of DNA methylation changes.

 

So how many samples do you need for the hypothetical study using new PC-GrimAge—looked like very few.

  • Yes, according to our calculation one would only need ~100 samples to see an effect size of 1 year in PCGrimAge.

 

Given that different organs and tissues often have uncorrelated age acceleration, how good are the clocks that we extract from blood / saliva in predicting age acceleration in, say, the brain or bones?

  • One issue is that (in humans) we don’t have a lot of methylation data measured in multiple organs from the same people. We have done some work looking at a given tissue with paired blood and it does seem that there is a lot of tissue/blood discordance. That being said, part of this disagreement may be due to which aging measures we are using. Some signals may be shared and some may diverge and it depends what you are looking at.

 

So do the low-noise versions of the clocks require more CpG sites to be queried?

  • They use ~78k CpGs, but all are measured on the Illumina arrays.

 

How do you think about balancing age prediction and functional/phenotype prediction in aging clocks? It seems to me that there are tradeoffs between accuracy at predicting chronological age and accuracy at predicting specific aging phenotypes like cognitive decline.

  • I don’t see much point in trying to optimize age prediction. Obviously you want a measure that correlates with age, but I don’t think we need to keep trying to reduce the error in age prediction. In my opinion, if you can’t predict anything above and beyond age, then why spend $$$ to get a measure (we already know age). I think the biomarker field needs a paradigm shift in which we stop focusing so much on age prediction.

 

Another question I have is whether the PCA technique can be used with a new dataset to create a clock that is specifically applicable to that dataset, processed in that lab, under particular conditions, with particular chips.

  • For most of these clocks, the PCA and penalized regression was done in a specific dataset and then just applied to all the new validation datasets. We don’t rerun PCA in new data. Long way of saying, yes, they can be applied to new data with no issues. One can also use the method to create a brand new clock.

 

Will the coefficients for the PCA versions of various methylation clocks be available to researchers, or are these proprietary?

  • We are working on GitHub distributions of all the PC clocks. Should be ready by the time the paper comes out.

 

Will the data used to derive & validate the PCA be published as well? I’ve worked with similar methods for GWAS & immune repertoire analysis, so it might be interesting to try some of them.

  • We did not generate any of the data used to train the PCA clocks. They are all cohort studies that can already be applied for.

 

To investigate causal biology, will you be doing more non-human methylome work? Also, will you start investigating transcriptomes, proteomes, metabolomes, etc. using similar techniques but correlating between them all?

  • We have a funded project to do thousands of mouse samples. Also doing a lot of intervention work in mice. We have been doing multi-omics in the brain and plan to do this in the mouse work. Once there is more human multi-omics data, that’s the direction we need to go.

 

How easy/feasible is it to either utilize multiple clocks from one set of samples? Or if the samples are no longer available, how easy/feasible is it to plug a set of methylation analysis into multiple clocks? Ultimately, what needs to be considered when you are designing a clinical trial so that it is easy to do? Example: If the TRIIM trial only used one clock analysis, could you do a post-hoc analysis to plug that data into more clocks to increase confidence?

  • Because the human clocks are all built off arrays, one should be able to estimate all of them in the same study. One can also go back to old data and calculate new clocks that come along.

Presentation: Jamie Justice

My talk will be focused more around clinical trials – I'm interested in applications and use cases for biomarkers in clinical trials.

Nothing to disclose.

Hopefully most of you have some familiarity with geroscience, and the pillars or hallmarks of aging. And you might also be familiar that there are a number of promising interventions that can target those processes. These are just a flavor of various behavioral dietary and pharmacologic interventions that are now being advanced to clinical trials. And as we are moving into trials, we have to develop trials in order to test them and create frameworks for how we should do that.

This is a translational research pipeline for drug development. And it’s a really arduous process. The timeline is well over a decade from discovery to FDA approval. And very few drugs make it through the gauntlet of rigorous testing. So the estimates that are shown here are for relatively traditional linear disease pathways that have generally accepted outcomes to weigh success versus failure. As most of you know, that’s not the case for aging.

For aging there are two different paradigms for moving treatments into clinical trials. Trials focused on various diseases first, and then larger prevention trials.

 

In the first one, we might take a single investigational agent or treatment and begin testing it in different disease cases on an individual basis. Investigators test these in different disease silos, and then apply to the FDA for an indication extension for each disease individually. It's not ideal, but it is still a good application of geroscience – it provides new treatment options for hard to treat diseases. One of the greater hopes is that if we really harmonize some of the approaches and look collectively across these diseases, then what we observe may converge into a mosaic that looks a lot like aging. Having a harmonized set of biomarkers and a standardized process to implement them becomes critically important.

 

There's also an alternative approach that we've been working on: consider what an aging outcome should really be, and design a trial with sufficient power to test it. What that aging outcome is has to balance our collective scientific understanding of what aging is, but it also has to communicate back to the FDA, here in the US. So we need to find an intervention that changes the way an organism feels, functions, and survives. I would argue that both of these approaches are critical to push progress forward.

There are key clinical trial design elements that are common to both. We’re testing interventions, we have different populations, and of course the outcomes and biomarkers share a certain flavor.

Within the interventions, the critical component is that we're looking at interventions that target critical aging pathways. There are some differences: treatment trials will tolerate higher risk for newer or repurposed agents, whereas prevention trials tend to require a more established safety record.

Populations would obviously differ depending on the existence or absence of disease.

For the outcomes and the biomarkers, I think they can be considered along the same continuum, and I'm going to come back to this continuum idea in a little bit. When we're thinking about this, I think of them like Russian nesting dolls, especially for the treatment trials. You might be doing the trial under a disease condition, but ultimately, what we're really interested in – whether we're talking about biomarkers or outcomes – is the next layer: the geriatric or aging-facing outcomes and biomarkers that can really harmonize across trials.

The biggest underlying thing is that there is no perfect biomarker. Some are useful and it really depends on the context of use.

So, taking another step back, we recently had one of our FDA officials talk about biomarker qualification – a really fantastic talk for anybody who might have missed it. Starting with a centering definition: biomarkers are objective measurements that reflect the interaction between a biological system and a potential hazard. In FDA language, a biomarker might be an indicator of a normal or pathogenic process, or a measure of response to an intervention. But the way a lot of us think about trials emphasizes the interaction between the two. It's not good enough to have one or the other; it's really how they play together – testing the biology, but also looking at the response to an intervention.

Going back to some of the FDA language, these are some glossary terms for different biomarker classes or contexts of use that are really critical. For trials, things that I am looking for the most are monitoring, safety, and the holy grail of pharmacodynamic or response to intervention. It could be on target, specifically what is targeted with the agent, or more collectively looking at larger aging phenotypes and on what kind of metrics might be useful there.

A couple of examples after all that information. First, the trials on senotherapeutics. This is a really great example of the mosaic approach I talked about before: testing one agent in a number of different conditions. This figure shows the number of different conditions that senotherapeutics and senolytics are being tested in. It was published in 2020 and is already grossly outdated; there are now probably 20, maybe upwards of 27, different interventions being tested that are listed on clinicaltrials.gov.

The example I will use is the one for idiopathic pulmonary fibrosis, which is a use case of the Translational Geroscience Network that I was fortunate enough to lead. And I want to highlight a couple of things with this trial. IPF is a quintessential disease of aging. It’s ultimately fatal, and it has an estimated survival of only three to four years after diagnosis, and there are very few available treatments. And although the etiology of this disease is really poorly understood, converging evidence suggests that the aging process – cellular senescence – may be at the nexus, a central contributing mechanism in this disease. Some of the foundational data in this regard was shown here by a couple of rockstars of our field, Nathan K. LeBrasseur and Marissa Schafer.

They did a really great job looking at markers of senescence in this disease. They found not only that senescent cells, identified as p16+, were present within the lung, fibroblasts, and honeycomb structures, but also that expression of certain senescence markers and of senescence associated secretory products was elevated with increasing IPF disease severity.

That was important, along with some of their preclinical work in disease models showing possible use of agents like dasatinib and quercetin in that disease context. That was the foundational evidence we used to launch the first trial of senolytics. We did it in persons who had stable IPF, with an age criterion. I'm highlighting this because we did not have a standardized, validated biomarker available to say whether the people enrolling in this trial had elevated senescent cell burden at the time of trial entry. We didn't have it because it doesn't exist. So this is one of those contexts of use where we go for diseases that are known to have some kind of senescence associated feature based on foundational evidence, and we move into trials without actually screening or gating for entry, simply because such a biomarker doesn't exist yet.

And so we enrolled these folks, and what was published is our first open label trial with dasatinib and quercetin – a short, three week, intermittent dosing trial. This was followed up by work run exclusively by our colleagues in Texas, who did a randomized controlled trial. These were really feasibility and tolerability studies: looking at the different assessments we could possibly do in these people, dosing strategies, and, really importantly, what kind of functional measures they can tolerate and what patient reported outcomes they can contribute. But central to this effort was the question of whether there are biomarkers we can use in the context of a trial. This was sort of the wild west of doing work in senescence, and there simply weren't any that were accessible.

Long story short, we found some interesting things in the trial, and we're using those insights to drive the next stage – we're planning an efficacy trial now. Mainly because we saw clinically meaningful improvements in mobility, in the absence of pulmonary specific measures and, really importantly, in the absence of really solid biomarkers.

And so the take home point is that there were no withdrawals, even though there were some anticipated side effects. There were some potential improvements in functional measures that are common to geriatric assessment, but not in disease specific measures. And the biggest point for me is that, at the time of that trial, we didn't have anything that could really comment on the pharmacodynamic response. We didn't know exactly what the senolytic effect was – did we reduce the burden? We had a few insights from circulating markers, but standardized, validated biomarkers are absolutely essential to driving next stage trials.

 

And the other thing that we were still working on is how to integrate those target specific biomarkers with more general biomarkers of aging – to really dig into those ageotypes that Morgan did a great job at describing. So the efficacy trials are warranted, but we really need aging outcomes and biomarkers. And there’s also another caveat here – when doing trial design, we have to accept the risk of a potentially failed trial on some disease specific endpoint. Even if it means that you might have a meaningful change in some kind of aging outcome and vice versa.

Back to biomarkers of senescence. There have now been some efforts to show that there might be some movement in biomarkers of senescence with senolytics. This was a follow up trial, looking at p16+ and p21+ in whole adipose tissue before and after a single three day course of senolytics. And there does seem to be some movement there. Unfortunately this requires invasive use of tissue, and there’s still a lot of bugs to work out.

And that's why this related concept is really important. This is not a biomarker project, but an effort within the NIH Common Fund to fund mapping centers – the SenNet program. It aims to get to the single cell level and characterize what senescent cells look like in healthy human tissue. And I'm really careful to say that this is not a biomarker project – it's about building the foundational evidence that's necessary for us to build, validate, and standardize biomarkers of senescence for trials.

That's important at the tissue level, and then there are a lot of other folks looking at biomarkers that might be accessible in circulating cells. For example, this work, again from Nathan K. LeBrasseur and Marissa Schafer, first induced senescence, then measured secreted factors, and then began testing these under different clinical conditions. That all came down to a panel of SASP factors that they found were secreted and could be measured fairly reliably in circulating plasma or serum.

 

What I found really interesting is that we’re talking about many of these factors as if they’re sort of senescence associated. But a number of these factors aren’t necessarily senescence associated only, there is this larger aging connection that we’re looking at – more of a general milieu – that also encompasses senescence.

This carries us into the alternative strategy – the one where it's absolutely imperative that we make progress if we're going to move any of these areas forward – and that's actually developing an aging outcome that can be recognized by regulatory officials. This has been work I was brought in on about six years ago, to help develop the trial called TAME – Targeting Aging with Metformin. I now have the opportunity to serve on the executive committee for this trial, and I'm in charge of coordinating the biomarker and biorepository strategy. For those who don't know, TAME is a six year, double blind, randomized, placebo controlled trial that will be conducted in over 3000 non-diabetic adults at 14 clinical sites. Most importantly, it's the first trial that was designed – really crowdsource designed – by a group of scientists across the US to create a regulatory path for clinical trials to target age related multimorbidity.

When we think about what that aging outcome would be, it becomes critically important to recognize that all of these outcomes exist on a continuum, and they’re not exclusive. On one end of the continuum we have biomarkers and other proof-of-concept measures. On the other end, we have things like median lifespan, life extension, and delayed frailty and age-related diseases, and the time and expense required to see those outcomes increases as we move along the continuum.

Really important: for the FDA, if we’re looking at something that could be properly adjudicated for a trial, then we’re talking about outcomes on the further end of that continuum. So it’s not that biomarkers are unimportant; it’s simply that for an aging outcomes trial that reads to the FDA, we first have to change how a person functions, feels, or survives. Then we have something we can go back to and begin to validate and standardize biomarkers against.

Okay, so if we have an FDA-facing outcome, then if the drug has an effect on aging, we should be able to reduce the incidence of multiple diseases or geriatric syndromes. And those diseases should share few risk factors other than chronological age. The outcome then becomes time to incidence of the first of a collection of possible diseases, endpoints, and age-related conditions.
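
To make that outcome concrete, here is a minimal sketch of how a composite time-to-first-event endpoint works: a participant reaches the endpoint when the first of several monitored conditions occurs, and is censored if none occur during follow-up. The condition names and times below are hypothetical illustrations, not TAME’s actual endpoint list.

```python
# Composite time-to-first-event endpoint, sketched for illustration.
# event_times maps each monitored condition to its onset time in years,
# or None if the condition never occurred during follow-up.

def time_to_first_event(event_times):
    """Return (time of first event, True) if any event occurred,
    or (None, False) if the participant is censored."""
    observed = [t for t in event_times.values() if t is not None]
    if observed:
        return min(observed), True   # earliest onset across conditions
    return None, False               # censored: no event during follow-up

# Hypothetical participant record
participant = {
    "cardiovascular_event": 4.2,  # years into the trial
    "cancer_diagnosis": None,     # never occurred
    "dementia_onset": 5.1,
}
print(time_to_first_event(participant))  # -> (4.2, True)
```

The point of the composite is exactly what the talk describes: any one of the diseases counts, so the analysis reads on the shared upstream driver (aging) rather than on a single disease.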

And then we have this Russian-nesting-doll structure again. We have a part that talks to the FDA, and then we have a part that talks to those of us actually doing this work. The second expectation is that if the drug has an effect on aging, it should improve, or attenuate declines in, functional outcomes and clinical phenotypes of aging – ones that matter to persons who are aging and that also communicate well to the clinical community. We also need the drug to affect and improve biomarkers of aging in aggregate, ones that really reflect the underlying biology, and to create a platform that gives back to the scientific community to drive next-stage discovery.

And so that was really the concept that went into the design of the TAME trial: looking at a clinical outcome that talks to the FDA, alongside these other outcomes.

Using the FDA language, we look at three biomarker roles that are really critically important: monitoring, safety, and – the holy grail – pharmacodynamic response. We need to see the drug’s effect on the processes we expect it to target, really reflecting a change in the underlying biology. And very important to all of this, especially in these really large-scale, first-of-their-kind trials, is that we create a harmonized resource for the scientific community that more people can use to build on.

We can look at this collectively across different platforms. We could have biomarkers of aging that really reflect some of those pillars and hallmarks; a lot of those aren’t validated or standardized yet. Morgan gave you a great explanation of what’s happening with methylation-based biomarkers, which are becoming some of our most advanced biomarkers for trials. We can’t validate them yet, because validation and qualification require a trial. But these are at least things for which we would have the platform to begin validating many of these methylation biomarkers.

And of course, there’s going to be a lot of work looking at deficit and damage accumulation, whether that is deficit accumulation based just on clinical, functional, and safety measures, or whether we look more broadly at proteomics or other larger-scale platforms, so that we can begin to marry some of these ideas back together. And because this trial is starting now but might not end for another six to ten years, the most critical thing we can do is focus on coordinating these ideas – the data repository, emerging science, and ancillary engagement – so that we can make the biggest impact going forward.
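
The deficit-accumulation idea mentioned above can be sketched very simply, in the spirit of the classic Rockwood–Mitnitski frailty index: each measured deficit is scored in [0, 1], and the index is the mean score across all deficits measured for a person. The deficit names and scores below are hypothetical examples, not a proposed TAME instrument.

```python
# Minimal deficit-accumulation (frailty) index sketch.
# deficits maps each measured deficit to a score in [0, 1],
# where 0 = absent, 1 = fully present, fractions = partial.

def frailty_index(deficits):
    """Return the mean deficit burden across measured deficits."""
    if not deficits:
        raise ValueError("need at least one measured deficit")
    return sum(deficits.values()) / len(deficits)

# Hypothetical person
person = {
    "slow_gait": 1.0,        # deficit present
    "grip_weakness": 0.5,    # partial deficit
    "polypharmacy": 0.0,     # deficit absent
    "memory_complaint": 1.0,
}
print(frailty_index(person))  # -> 0.625
```

Because the index is an average over whatever deficits were measured, it naturally combines clinical, functional, and safety measures into one aggregate score – which is why composite scoring of this kind keeps coming up in the endpoint discussion.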

It’s really critical, because none of these can be validated without the trial. We have to have an aging outcome trial for validation and qualification as surrogate endpoints; otherwise we end up in a really circular conversation, unless we actually pull together and move this forward.

Another important thing is that we have to have the data – just like those that Morgan has shown – that support the analytic performance of these biomarkers. Showing that they’re reliable, showing that they’re standardized, showing that they actually mean what they say they mean. And it’s going to take a really critical and coordinated effort across investigators to make that happen.

In addition to that, we have to have an outcome that we accept as aging. And so I think that is inherent in what we’ve been trying to do on the TAME trial.

And really critically, we have to have an effective intervention in order to do all of that together. So it’s a tall order, and there’s a lot to do.

We’ve been trying to develop criteria for a perfect biomarker. This is not me telling you; this is meant to be a conversation that I would love to have with people who are working on biomarkers. So if you are thinking about a biomarker for a trial, this is a wishlist of what it needs to do, so that we know when we see a perfect biomarker in front of us.

Okay, and that’s all I’ve got. There has been a lot of work, and I think one of the biggest points is that aging reflects a collection of disease conditions, functional measures, and biomarkers. So we’re really trying to capture this larger, broader accumulated deficit and look at improvements. And that, of course, makes composite measures and aggregate scoring important.

And I have an army to thank – only some of them are represented here.

Seminar summary by Bolek Kerous.