
What Can We Learn from Dose Normalization?

Dose normalization is a common calculation performed with pharmacokinetic (PK) parameters. The general process is to divide a PK parameter by the administered dose. This is done for each individual or treatment group in a study, and the dose-normalized parameters can then be compared. But why would we want to dose-normalize PK parameters? What will it help us understand?

In my opinion, dose-normalization is a poor way of assessing whether a drug exhibits constant clearance. Here's the basic principle: if a drug follows linear kinetics (see previous post), then exposure parameters should increase in proportion to the dose. Thus, if the dose is doubled, the exposure should also double.

The principle noted above should tell us a few things…

  • Dose normalization should only be performed on exposure parameters (Cmax, AUC, Cmin, etc.)
  • Dose normalization does not apply to CL, t1/2, V, or other non-exposure-related parameters (see the note below)
  • Dose normalization does not provide “new” information
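
A quick note on why only exposure parameters qualify: a standard PK identity states that AUC = F × Dose / CL, where F is bioavailability. Dividing through by dose gives AUC/Dose = F/CL, so dose-normalized AUC is constant across dose levels exactly when bioavailability and clearance are constant, which is what linear kinetics means. Parameters like CL, t1/2, and V are already dose-independent in a linear system, so dividing them by dose yields a quantity with no useful interpretation.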

If you have linear kinetics, then exposure increases in proportion to dose along the line of unity (y = x). By dividing each exposure parameter by dose, you change the relationship between exposure and dose to a horizontal line (y = 1, with a slope of zero). Thus, nothing new is learned from dose-normalizing PK parameters, but it may help some people review exposure parameters and understand linear kinetics. If you dose-normalize Cmax and all of the dose-normalized values are the same, you can reasonably conclude that the drug follows linear kinetics. The same result can be achieved by regressing log(Cmax) against log(Dose) and checking whether the slope is close to 1 (the power model for dose proportionality). Both methods give the same information, and both are relatively simple, as the short sketch below illustrates.
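
As a concrete illustration, here is a minimal sketch in Python of the two equivalent checks. The dose levels and Cmax values are made up for a hypothetical, approximately dose-proportional drug; they do not come from the post or any study.

import numpy as np

# Illustrative doses and observed Cmax values (hypothetical, roughly dose-proportional)
doses = np.array([10.0, 20.0, 40.0, 80.0])   # mg
cmax = np.array([1.1, 2.0, 4.2, 7.9])        # ng/mL

# Check 1: dose-normalized Cmax should be roughly constant (a flat line)
print("Dose-normalized Cmax:", cmax / doses)

# Check 2: power model -- the slope of log(Cmax) vs log(Dose) should be close to 1
slope, intercept = np.polyfit(np.log(doses), np.log(cmax), 1)
print("Power-model slope:", round(float(slope), 2))

In practice one would fit the power model formally and examine the confidence interval around the slope, but both views carry the same information.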

Precision dosing (the right dose, for the right patient, at the right time) is crucial to providing patients with the most efficacious medications with a minimal probability of adverse events. One key step toward delivering individualized dosing is testing potential dosing regimens in a patient's 'virtual twin.' The other is having as much drug information as possible. Achieving these steps requires generating a large amount of data, and a computer modeling and simulation platform is needed to bring these data together and study their interactions. Watch this webinar to learn more!

About the author

By: Nathan Teuscher
Dr. Teuscher has been involved in clinical pharmacology and pharmacometrics work since 2002. He holds a PhD in Pharmaceutical Sciences from the University of Michigan and has held leadership roles at biotechnology companies, contract research organizations, and mid-sized pharmaceutical companies. Prior to joining Certara, Dr. Teuscher was an active consultant for companies and authored the Learn PKPD blog for many years. At Certara, Dr. Teuscher developed the software training department, led the software development of Phoenix, and now works as a pharmacometrics consultant. He specializes in developing fit-for-purpose models to support drug development efforts at all stages of clinical development. He has worked in multiple therapeutic areas including immunology, oncology, metabolic disorders, neurology, pulmonary, and more. Dr. Teuscher is passionate about helping scientists leverage data to aid in establishing the safety and efficacy of therapeutics.
