If ever there was a cautionary tale illustrating the importance of getting allometric scaling of drugs right, it’s the story of Tusko the elephant. In 1962, Tusko was living at the Lincoln Park Zoo in Oklahoma City when he received a dose of the psychotomimetic drug lysergic acid diethylamide, better known as LSD.
The dose of LSD that Tusko received was estimated from a dose previously used successfully in cats. Within five minutes of administration, he went into status epilepticus, and despite efforts to control the seizures, he died a little more than an hour later. A dose that produced only marginal effects in cats was clearly toxic in elephants. This dramatic difference in pharmacological response reflects the fact that physiological processes are slower in elephants than in cats. A dose that was fine for cats therefore produced a toxic level of drug exposure in elephants, with both an elevated peak plasma concentration and a prolonged duration of effect.
Clearly, we must be very careful when extrapolating doses used in pre-clinical model organisms (rats, dogs, and monkeys) to first-in-man (FIM) drug trials. Allometric scaling is commonly used to predict human pharmacokinetic (PK) parameters from animal data. Although allometric scaling has been used in drug development for many years, it remains a complicated and laborious process. In this blog post, I’ll present a new solution that streamlines the design of FIM clinical studies based on pre-clinical data.
Allometric scaling of drug doses
Humans have a distinct biochemistry, anatomy, and physiology compared to other animals. Predictions of a drug’s PK profile in humans based on animal PK data must account for these differences. Allometric scaling predicts the differences in PK parameters that are attributable to size alone.
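In its simplest form, allometric scaling is a power law, Y = a · BW^b, relating a PK parameter Y to body weight BW. Solving for the coefficient from the animal data gives a one-line scaling rule. The sketch below is a minimal illustration, not the tool’s actual equation; the exponents shown (about 0.75 for clearance, about 1.0 for volume of distribution) are commonly cited defaults, and the dog parameter values are made up for the example:

```python
def scale_parameter(animal_value, animal_bw_kg, human_bw_kg=70.0, exponent=0.75):
    """Scale a PK parameter from animal to human using the power law
    Y = a * BW**b. Eliminating the coefficient a between the two species
    gives: Y_human = Y_animal * (BW_human / BW_animal)**b.
    """
    return animal_value * (human_bw_kg / animal_bw_kg) ** exponent

# Hypothetical example: clearance 0.5 L/h and volume 4 L in a 10 kg dog
human_cl = scale_parameter(0.5, 10.0, exponent=0.75)  # clearance exponent ~0.75
human_v = scale_parameter(4.0, 10.0, exponent=1.0)    # volume scales ~linearly
```

Because the exponent for clearance is less than 1, a small animal clears drug faster per kilogram than a large one, which is exactly why a per-kilogram cat dose overwhelmed Tusko.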
Predicting human dosing with the Phoenix Automation System
Getting the dose right for FIM trials is critical. In addition, this calculation is performed frequently in pre-clinical groups as they study various drugs in different pre-clinical model organisms. To help accelerate the pace of drug development, we have developed an application that automates FIM allometric scaling using pre-clinical PK data.
To illustrate how this user-friendly tool can help you save time and money, I will briefly discuss the following items:
- The assumptions regarding the model parameters
- The tool’s modeling strategy
- The optimal dosing program
- The tool’s outputs and architecture
Assumptions of the model
Like any model, this tool is built on several assumptions. For example, the system is assumed to exhibit linear kinetics. Because different species have unique pharmacokinetic profiles, we also assumed that the volume of distribution, the clearance from plasma, and the transfer rates between the plasma and peripheral compartments (for two-compartment models) differ between species.
The input to the tool is either oral or IV PK data from rats, dogs, or monkeys. First, the algorithm determines the average values for the PK parameters and finds the best model (one- or two-compartment, with or without a time lag) to fit the pre-clinical data. These parameters are then passed through a pre-defined allometric scaling equation, specific to each animal species, that extrapolates from that species to humans. All other model parameters (Ka, bioavailability, etc.) are assumed to be shared across species, including humans.
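The model-selection step can be illustrated with an information-criterion comparison across the four candidate structures. This is a generic sketch, not Phoenix’s actual fitting engine or selection criterion, and the residual sums of squares below are invented for the example:

```python
import math

def aic(n_obs, rss, n_params):
    """AIC for a least-squares fit: n * ln(RSS/n) + 2k.
    Lower is better; the 2k term penalizes extra parameters."""
    return n_obs * math.log(rss / n_obs) + 2 * n_params

# Hypothetical fit results for one dataset of 12 concentration observations
candidates = {
    "1-compartment":        aic(12, rss=4.1, n_params=2),
    "1-compartment + tlag": aic(12, rss=3.9, n_params=3),
    "2-compartment":        aic(12, rss=2.2, n_params=4),
    "2-compartment + tlag": aic(12, rss=2.1, n_params=5),
}
best = min(candidates, key=candidates.get)
```

With these numbers the two-compartment model without a lag wins: the lag term barely improves the fit, so its extra parameter is not worth the penalty.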
A population PK modeling approach is used to fit the model to the combined species input data. The advantage of a population PK modeling approach is that it enables both rich and sparse individual data to be combined. The end product of the model is both average and individual PK parameters for each species. The average animal PK model parameters are then used to extrapolate to humans using a pre-defined allometric scaling equation.
Optimal dosing regimen
Based on user-defined threshold values for the drug concentration-time area under the curve (AUC), the maximum concentration (Cmax), or the minimum concentration (Cmin), the program automatically estimates the dose needed to reach each of these threshold values. Finally, the user defines the desired dosing interval, and the program calculates the optimal dosing scenario to achieve steady state with IV or oral dosing. The output of the program is a Phoenix project that includes capabilities for performing data manipulations, non-compartmental analysis (NCA), and descriptive statistics, and for generating accompanying graphics.
The allometric scaling plugin is connected to Phoenix via an API, and both the plugin and Phoenix can access data stored externally. The plugin maps input data to Phoenix objects, executes and compares models, and retrieves results from executed objects to set up the fixed and random effects for the next non-linear mixed effects (NLME) model.
Streamline your allometric scaling from pre-clinical to FIM dosing
This unique application supports automating allometric scaling using Phoenix. It can also be easily customized. The same automation concept can be expanded to include individual Bayesian based model predictions, population PK/PD models, and combined PK/PD with categorical response models.
This automation procedure opens the door to new opportunities in the pharmacometrics community. Once an algorithm that automatically calculates initial estimates for model parameters has been defined, it can be implemented in an application linked to Phoenix. Companies running this application will save time and money because it is so user-friendly that it can be operated by any scientist involved in pre-clinical DMPK projects.
Finally, the automation procedure delivers as output a Phoenix project containing all the model templates as well as the entire workflow. Users can edit these templates, run any of the models, and decide how to achieve optimal dosing in manual mode rather than automated mode, while still taking advantage of all the templates generated during the automation procedure (models, NCA, descriptive statistics, graphics, etc.).
I recently gave a webinar on this topic. I hope that you’ll watch the recording and let me know what you think in the comments section!