A case-based approach to therapeutic drug monitoring: increasing the likelihood of success (Proceedings)

The success of any fixed dosing regimen most often is based on the patient's clinical response to the drug. Fixed dosing regimens are designed to generate plasma drug concentrations (PDC) within a therapeutic range, ie, to achieve the desired effect while avoiding toxicity. However, a therapeutic range (Cmin to Cmax) is a population parameter that describes the range within which 95% of animals might be expected to respond. For antibiotics, the target peak (Cmax; eg, for aminoglycosides [AMG] and fluorinated quinolones) depends on the MIC of the infecting organism (target at least 10X the MIC), and the target trough depends on the type of antibiotic (time versus concentration dependent); for AMG, Cmin is targeted to avoid toxicity. Monitoring determines the individual patient's therapeutic range. Response at concentrations below the therapeutic range does not necessarily indicate that therapy is not needed; likewise, therapeutic failure should not be assumed unless concentrations approach or exceed the maximum of the range. For some patients, the maximum of the range will need to be exceeded, and this should be considered if the drug is safe and the response is inadequate. Thus, the absence of seizures in a dog with subtherapeutic concentrations is not justification for discontinuing the drug. On the other hand, only a very small proportion of animals respond at concentrations higher than the recommended maximum, and risk-benefit considerations should determine the need to add a second drug.
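As a concrete illustration of the concentration-dependent target described above, the short sketch below checks whether a measured peak meets a 10X MIC goal. It is a minimal sketch only; the function name and the example values are hypothetical, not drawn from this article.

```python
def meets_peak_target(peak_ug_ml: float, mic_ug_ml: float, multiple: float = 10.0) -> bool:
    """Return True if the measured peak (Cmax) reaches the desired multiple of the pathogen's MIC."""
    return peak_ug_ml >= multiple * mic_ug_ml

# Hypothetical example: a measured aminoglycoside peak of 18 ug/ml against an isolate with an
# MIC of 2 ug/ml falls short of the 10X MIC target (a peak of at least 20 ug/ml would be needed).
print(meets_peak_target(18.0, 2.0))  # False
```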

Marked inter-individual variability in physiology, response to disease and response to drugs results in variability in dose-response relationships. The factors that determine drug disposition (absorption, distribution, metabolism and excretion) are amenable to change. Physiologic, pathologic and pharmacologic factors can profoundly alter the disposition of a drug such that therapeutic failure or adverse reactions occur. Recent examples include Collie-type breeds with the MDR1 gene mutation and drug interactions involving CYP3A4 or P-glycoprotein. Changes in drug metabolism and excretion induced by age, sex, disease or drug interactions are among the more important factors that can cause PDC to become higher or lower than expected. Recommended dosing regimens are sometimes designed to compensate for the effects of some of these factors. Examples include many feline dosing regimens (eg, aspirin and selected antimicrobials), the use of body surface area rather than body weight for drugs with a high potential for toxicity (eg, anticancer drugs), and allometric scaling for exotic species. Unfortunately, the effects of many factors are unpredictable and cannot be anticipated in the individual patient, despite innovative dosing calculations.
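As a worked example of the body-surface-area compensation mentioned above, the sketch below uses the commonly cited allometric conversion BSA (m^2) = K x weight(g)^(2/3) / 10^4, with K of approximately 10.1 for dogs and 10.0 for cats. The formula and constants are standard reference values assumed here; they are not stated in this article.

```python
def body_surface_area_m2(weight_kg: float, k: float = 10.1) -> float:
    """Estimate body surface area (m^2) from body weight as BSA = K * grams^(2/3) / 10^4.

    K ~10.1 for dogs and ~10.0 for cats (standard reference constants, assumed here).
    """
    weight_g = weight_kg * 1000.0
    return k * weight_g ** (2.0 / 3.0) / 10_000.0

# Example: many anticancer drugs are dosed per m^2 rather than per kg of body weight.
print(round(body_surface_area_m2(25.0), 2))  # ~0.86 m^2 for a 25 kg dog
```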

Therapeutic drug monitoring (TDM) replaces a trial-and-error approach to dosing regimen design that may prove costly both financially and to patient health. Monitoring is indicated in clinical situations in which an expected therapeutic effect of a drug has not been observed, or in which drug toxicity due to high PDC is suspected. In addition, TDM can be used to establish whether optimum therapeutic drug concentrations have been achieved for drugs characterized by a response that is difficult to detect, or in which the manifestations of disease are life threatening and a trial-and-error approach to modifying the dosing regimen is unacceptable. In situations in which chronic drug administration is expected, TDM can be used to define the effective target PDC in the patient. The target PDC can then be used if pharmacokinetics change in the patient over the course of chronic drug administration due to disease, environmental changes, age, or drug [or diet] interactions. Drug monitoring has also been useful in identifying owner noncompliance as a cause of therapeutic failure or adverse reactions. As such, drugs for which TDM is most useful are characterized by one or more of the following: 1) serious toxicity coupled with a poorly defined or difficult to detect clinical endpoint (eg, antimicrobials, anticonvulsants and cyclosporine); 2) a steep dose-response curve, for which a small increase in dose can result in a marked increase in desired or undesired response (eg, theophylline [TPH], or phenobarbital [PB] in cats); 3) a narrow therapeutic range (eg, digoxin); 4) marked inter-individual pharmacokinetic variability, which increases the variability in the relationship between dose and PDC (eg, PB); 5) non-linear pharmacokinetics, which may lead to rapid accumulation of drugs to toxic concentrations (eg, phenytoin or, in cats, PB); and 6) unexpected toxicity due to drug interactions (eg, enrofloxacin-induced TPH toxicity, or chloramphenicol- or clorazepate-induced PB toxicity). In addition, TDM is indicated when a drug is used chronically, and thus is more likely to induce toxicity or changes in pharmacokinetics (eg, anticonvulsants), or in life-threatening situations in which a timely response is critical to the patient (eg, epilepsy or bacterial sepsis). Drugs for which TDM might not be indicated include those characterized by a wide therapeutic index, which are seldom toxic even if PDC are higher than recommended, and those for which response can be easily monitored by clinical signs.

Not all drugs can be monitored by TDM; certain criteria must be met. Patient response to the drug must correlate with (ie, parallel) PDC. Drugs for which active metabolites (eg, diazepam) or one of two enantiomers account for a large proportion of the desired pharmacologic response cannot be effectively monitored by measuring only the parent drug; rather, all active metabolites and/or the parent drug should be measured. For cyclosporine (CsA), for which the parent drug and some metabolites are active, HPLC measures only the parent, whereas immunoassays measure the parent and some metabolites; species vary in their production of these metabolites. An effective therapeutic range must have been identified for the drug in the species and for the disease being treated. For many drugs, recommended therapeutic ranges in animals have been extrapolated from those determined in humans, but care must be taken with this approach (eg, bromide and procainamide). The drug must be detectable in a relatively small serum sample, and analytical methods must be available to rapidly and accurately detect the drug in plasma. The cost of the analytical method must be reasonable.

Implementation of and response to TDM requires an understanding of the relationships between PDC, dosing interval (T) and drug elimination half-life (t½). In general, TDM should not be implemented until PDC have reached steady state in the patient. Steady-state PDC occur at the point when drug input and drug elimination (ie, distribution, metabolism and/or excretion) are equilibrated. Although PDC change to some degree during the dosing interval, they remain constant between intervals at steady state (note that steady state is not actually reached with drugs whose half-life [t½] is substantially shorter than the dosing interval). With multiple dosing at the same regimen, PDC will reach 50, 75 and 87.5% of the steady-state concentration at one, two and three half-lives, respectively (and so on), regardless of the drug. The same time period (ie, 3-5 drug half-lives) must elapse prior to monitoring if any portion of the original dosing regimen (ie, dose, frequency or route) is changed. For drugs with a long t½ compared to the dosing interval, drug accumulation can be dramatic (ie, drug concentrations following the first dose [PDCfirst] are much lower than drug concentrations at steady state [PDCss]). The dosing regimen of such drugs is designed so that drug concentrations will be within the therapeutic range, but only once steady-state concentrations have been achieved. The extent of accumulation depends on how much shorter the interval is than the t½: a t½:T ratio of 1 results in an accumulation ratio (PDCss/PDCfirst) of 2, and ratios of 2, 3 and 4 result in accumulation of approximately 3.4-, 5- and 6.3-fold, respectively (for bromide [t½ = 21 d], a 12 h interval results in an accumulation ratio of approximately 61). For drugs characterized by a long t½, TDM can be implemented by measuring concentrations at approximately one drug t½, at which time PDC will be approximately 50% of PDCss. A third alternative is available for patients in whom therapeutic concentrations must be reached immediately: a loading dose can be administered to rapidly achieve therapeutic PDC. The loading dose needed to achieve a known therapeutic concentration of a drug depends upon the Vd of that drug in the patient and the target (ie, the therapeutic) concentration. If the drug is orally administered, the bioavailability (F) must also be taken into account when the dose is determined. After a loading dose is administered, the maintenance dose must be "just right" to maintain the PDC achieved by loading. If it is not, the problem may not become obvious until steady state occurs (ie, 3 to 5 t½; for bromide, this would be about 3 months). However, monitoring can be used to proactively evaluate whether the maintenance dose is appropriate. When using a loading dose, TDM might be performed three times. Using bromide as an example, the first sample is collected after oral absorption of the last dose of the load is complete, to establish a baseline (eg, day 6). The second sample is collected one drug t½ later (eg, 21 days), to ensure that the maintenance dose is able to maintain the concentrations achieved by loading. One drug t½ is recommended because most of the change in drug concentration that will occur if the maintenance dose is not correct will be present by that time. If the second sample (collected at one drug t½) does not approximate the first (collected immediately after the load), the maintenance dose can be modified at that point rather than waiting for steady state and risking therapeutic failure or toxicity. The third sample, collected once steady state has been reached, confirms the final regimen.
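The arithmetic described above fits in a few lines; the sketch below (illustrative only) computes the fraction of steady state reached after a given number of half-lives, the accumulation ratio for a given interval and half-life, and a loading dose from a target concentration, volume of distribution, and oral bioavailability. The bromide values used in the loading-dose example (target concentration, Vd, F) are assumed for illustration and are not recommendations from this article.

```python
def fraction_of_steady_state(n_half_lives: float) -> float:
    """Fraction of steady-state PDC reached after n elimination half-lives (0.5, 0.75, 0.875, ...)."""
    return 1.0 - 0.5 ** n_half_lives

def accumulation_ratio(half_life: float, interval: float) -> float:
    """Accumulation ratio PDCss/PDCfirst for a dosing interval T and elimination half-life t1/2."""
    return 1.0 / (1.0 - 0.5 ** (interval / half_life))

def loading_dose_mg_per_kg(target_conc_mg_l: float, vd_l_per_kg: float, bioavailability: float = 1.0) -> float:
    """Loading dose (mg/kg) = target concentration (mg/L) x Vd (L/kg) / F."""
    return target_conc_mg_l * vd_l_per_kg / bioavailability

print(round(fraction_of_steady_state(3), 3))       # 0.875 (87.5% of steady state after 3 half-lives)
print(round(accumulation_ratio(21 * 24, 12), 1))   # ~61-fold accumulation for bromide (t1/2 ~21 d, q12h)
# Hypothetical bromide load: target 1.5 mg/ml (1500 mg/L), Vd ~0.3 L/kg, oral F ~1 (assumed values)
print(round(loading_dose_mg_per_kg(1500, 0.3), 0)) # 450.0 mg/kg total load (illustrative only)
```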
In general, monitoring of a drug with a long half-life requires only one sample. For consistency's sake, we suggest collection of a trough sample (just before the next dose). The trough PDC is the lowest drug concentration that develops during a dosing interval, and theoretically it should not drop below Cmin (the AMG are an exception; toxicity is avoided by allowing the trough to decline below a defined minimum).

Many drugs are characterized by half-lives that are much shorter than the dosing interval. For these drugs, little to no accumulation occurs, the concept of "steady state" is perhaps irrelevant, and response can be evaluated with the first dose (or as soon as the disease has had time to respond). The amount by which PDC declines during a dosing interval (ie, the fluctuation between Cmax and Cmin) depends, again, on the relationship between t½ and interval: if the interval is 1, 2, 3 or 4 times the t½, PDC will decrease by 50, 75, 87.5 and 93.75%, respectively, during the dosing interval. This fluctuation may be unacceptable (eg, antiepileptics, some cardiac drugs, potentially cyclosporine) or acceptable (eg, AMG, drugs which act irreversibly). Detection of this fluctuation requires collection of both a peak and a trough sample. The peak PDC (Cmax) is the maximum concentration achieved after a dose is administered, and presumably it should not exceed the recommended Cmax, particularly if the drug is not safe. Timing of peak sample collection can be difficult to predict; ideally, absorption and distribution should be complete. The route of drug administration influences the time at which peak PDC occur, which will vary among drugs. For orally administered drugs, absorption is slower (1-2 hours) and distribution is often complete by the time peak PDC have been achieved. However, the absorption rate can vary widely due to factors such as product preparation, the effect of food or patient variability. Because food can slow the absorption of many drugs, fasting is generally indicated (if safe) prior to therapeutic drug monitoring; exceptions are noted for some drugs (eg, imidazole antifungals). Generally, peak PDC occur 2-4 hours after oral administration. Some drugs are simply absorbed more slowly than others (eg, PB), and the time of peak sample collection is correspondingly later (eg, 2 to 5 hours for PB). For drugs administered intravenously, absorption is not a concern but distribution is. For some IM and SC administrations, absorption occurs rapidly (ie, 30-60 minutes), but, again, drug distribution may take longer. Thus, peak PDC generally are measured 1-2 hours after parenteral drug administration. Exceptions must be made for drugs, such as digoxin, for which distribution may take 6-8 hours; samples should not be collected for these drugs until distribution is complete (Table 1).
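The peak-to-trough fluctuation described above follows directly from first-order elimination. The minimal sketch below predicts a trough from a measured peak; the function name and the example concentrations are illustrative assumptions.

```python
import math

def trough_from_peak(peak: float, half_life_h: float, interval_h: float) -> float:
    """Predict the trough concentration from a peak, assuming first-order elimination
    and that absorption and distribution are complete at the time of the peak sample."""
    kel = math.log(2) / half_life_h
    return peak * math.exp(-kel * interval_h)

# Example: when the dosing interval equals two half-lives, 75% of the peak is lost by the trough.
print(round(trough_from_peak(20.0, half_life_h=6.0, interval_h=12.0), 2))  # 5.0
```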

Collection of peak and trough samples is particularly important for drugs characterized by a narrow therapeutic range and a short half-life. For such drugs, calculating the t½ can be useful for determining an appropriate dosing interval, and both a peak and a trough sample should be collected (t½ = 0.693/kel, where kel = ln(C1/C2)/(t2 - t1), and C1, t1 and C2, t2 are the concentration and collection time of the first [peak] and second [trough] sample, respectively; Table 2). This can be easily calculated using Microsoft Excel. In contrast to drugs with a short t½, peak and trough concentrations will not differ substantially for drugs whose t½ is much longer than the dosing interval (eg, bromide and, for some patients, PB), and a single sample is generally sufficient for such drugs. Single samples might also be indicated for slow-release products (eg, TPH), since constant drug absorption minimizes any detectable difference between peak and trough concentrations. Single samples also can be collected following a loading dose (eg, bromide) or at the first t½ (ie, 3 to 4 weeks) in a patient that has just begun bromide therapy. Finally, if the question to be answered by TDM is one of toxicity, a single peak sample (eg, digoxin or PB) or trough sample (AMG) may answer the question. The impact of drug interactions (eg, induction [eg, PB] or inhibition [eg, cyclosporine and ketoconazole]) or disease (eg, cardiac disease and beta blockers or digoxin) on drug clearance may cause a drug to shift from a short half-life to a long half-life or vice versa. For example, we have measured half-lives as short as 12 hr for PB and 9 hr for digoxin, and as long as 150 hr for cyclosporine. For such drugs, a prudent approach is to monitor at baseline and then at the proposed steady state once the second drug has been started or the disease has changed. Peak and trough samples might be collected before and after the change has been implemented so that the time to steady state can be predicted from the half-life (eg, if ketoconazole is added to cyclosporine). Peak and trough samples might be collected in any patient that is not responding well to therapy with any drug that may have a short t½ compared to the dosing interval. For example, peak and trough digoxin samples should be collected when previously stable disease decompensates, and should be considered when drug therapy is changed, particularly if the patient responds: as renal clearance changes, so will digoxin concentrations. If a kinetic profile of the patient is the reason for TDM, the two samples preferably are collected at the peak and trough times, unless the interval is so long that drug may not be detectable at the trough (eg, AMG administered at 12 or 24 hour dosing intervals). The most accurate kinetic information is generated from patients receiving an IV dose, since the volume of distribution can be estimated along with the elimination t½; for oral doses, only the elimination rate and drug t½ can be obtained.
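For the half-life calculation given above (t½ = 0.693/kel, with kel estimated from the peak and trough samples), a minimal sketch follows; the example numbers are illustrative, not patient data.

```python
import math

def elimination_half_life(c1: float, t1_h: float, c2: float, t2_h: float) -> float:
    """Estimate elimination half-life (h) from a peak (c1 at t1) and trough (c2 at t2):
    kel = ln(c1/c2) / (t2 - t1); t1/2 = 0.693 / kel."""
    kel = math.log(c1 / c2) / (t2_h - t1_h)
    return math.log(2) / kel

# Illustrative example: peak 24 ug/ml at 1 h post dose, trough 3 ug/ml at 13 h post dose.
print(round(elimination_half_life(24, 1, 3, 13), 1))  # 4.0 h
```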

Table 1.

The minimum information necessary for interpretation of PDC includes the following: 1) The total daily dose of drug, which will be correlated with the patient's measured PDC. 2) The time intervals of drug administration, which are particularly important for drugs with short half-lives (eg, AMG). Providing this information assures the clinical pharmacologist that blood samples reflect the actual trough and/or peak drug concentrations. From these data, a drug t½ can be calculated and a proper dosing interval determined. 3) The patient's clinical status, which is important because both acute and chronic diseases can dramatically alter drug disposition patterns. This is particularly true for patients with renal, liver or cardiac disease. If this information is lacking, disease-induced changes in drug disposition cannot be distinguished from other causes such as noncompliance or drug interactions. 4) Concurrently administered drugs, which may alter drug disposition patterns and thus contribute to individual differences in drug disposition. The frequency, dose, amount and actual times of all drugs given to the patient must be known in order to recognize or predict potential drug interactions. 5) Physiologic characteristics such as species, breed and age, which are often important to the interpretation of PDC because of known or predictable differences they may induce in drug disposition, or because of known differences in pharmacodynamic response. Weight must be provided in order to determine Vd. 6) The reason for TDM, ie, has the patient failed therapy or is the patient exhibiting signs of toxicity?

Once results are received, either the dose or the interval of a drug might be modified (or, if all is well, left alone; Table 2). If the patient's PDC is too high or too low, and particularly if the t½ is long, the dose can be changed in proportion to the desired change in PDC. Thus, if the measured PB concentration is 20 µg/ml and the target is 25 µg/ml, the old dose should be increased (or, if concentrations were too high, decreased) by (25 - 20)/20, or 25%. This approach can be repeated until the maximum (or minimum) end of the therapeutic range is reached. If the t½ is short, decreasing the dosing interval may be more cost effective. Note that for each t½ added to the dosing interval, the dose must be doubled (to add 2 t½, the dose must be quadrupled; for 3 t½, the dose must be increased 8-fold, etc).
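A minimal sketch of this proportional adjustment is below; the function name and the phenobarbital dose in the example are illustrative assumptions.

```python
def adjusted_dose(current_dose: float, measured_conc: float, target_conc: float) -> float:
    """Scale the dose in proportion to the desired change in PDC (reasonable when t1/2 is long
    relative to the dosing interval, so concentrations scale roughly linearly with dose)."""
    return current_dose * target_conc / measured_conc

# Example from the text: measured phenobarbital 20 ug/ml, target 25 ug/ml -> a 25% increase.
print(adjusted_dose(2.5, measured_conc=20, target_conc=25))  # 3.125 (mg/kg, illustrative dose)
```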

NR, not relevant; HL, half-life (elimination or disappearance); D, dog; C, cat; BND, just before next dose.

*See also Chapter 27, Anticonvulsants. Serum separator tubes generally should not be used for therapeutic drug monitoring. Some human therapeutic ranges have not been validated in animals.

When to Implement Monitoring

For the controlled patient at steady state:

1. "Start-up": (a) Maintenance dose. Monitoring at baseline in a responding animal to establish the therapeutic range for the patient. b) Loading dosemaintenance dose combination (including a "mini" loading dose): A sample should be collected after a loading dose has distributed (the day after loading bromide or within 2 hr after loading with phenobarbital) and at one half-life into the maintenance dose. If the two samples do not match, the maintenance dose should be proportionately adjusted. A steady state sample should be collected once steady state has been reached.

2. "Check up": Rechecks: to proactively ensure that effective concentrations are maintained and safe concentrations are not exceeded. The frequency varies with the seriousness of therapeutic failure or the risk of toxicity. Intervals of 6 to 12 months are generally recommended for the well-controlled patient and 3 to 6 months for the poorly controlled patient. For immunomodulators, 3 to 6 month intervals for life-threatening disease.

3. "What's up": (a) Establish a cause for therapeutic failure or to confirm toxicity. For patients that have not responded well to a new drug or a new dose, despite doses at the mid to high end of the recommended dosing range; or in previously well-controlled patients that fail therapy or develop signs of adversity. (b) Respond to changes in patient factors: Progression or improvement of cardiac, renal, or hepatic disease: changes in clearance and, to a lesser degree, volume of distribution may change elimination half-life and thus peak or trough concentrations.(c) Detect drug-drug or drug-diet interactions. Changes in diet or addition of drug which may interact: baseline concentrations should be reestablished before and after a change if there is a risk that the change in diet or drug therapies may alter the disposition of the drug of interest. For example, bromide should be measured before and at steady state after the administration of a new diet; phenobarbital should be monitored before and after beginning chloramphenicol or an imidazole antifungal, cyclosporine should be monitored before and after ketaconazole (or any imidazole) or azithromycin therapy is implemented, etc.
