Therapeutic drug monitoring (Proceedings)



The success of any fixed dosing regimen most often is based on the patient's clinical response to the drug. Fixed dosing regimens are designed to generate plasma drug concentrations (PDC) within a therapeutic range, ie, achieve the desired effect while avoiding toxicity. However, a therapeutic range (Cmin and Cmax) is a population parameter that describes the range within which 95% of animals might respond. Response at concentrations below the therapeutic range does not necessarily indicate that therapy is not needed; likewise, therapeutic failure should not be concluded only because the drug is below the maximum of the range. For some patients, the maximum of the range will need to be exceeded; this should be considered if the drug is safe and the response is inadequate. Thus, the absence of seizures in a dog with subtherapeutic concentrations is not justification for discontinuing the drug. On the other hand, a very small proportion of animals respond only at concentrations higher than the recommended maximum, and risk-benefit considerations should determine the need to add a second drug.

Marked inter-individual variability in physiology, response to disease and response to drugs results in variability in dose-response relationships. The most recent examples are Collie breeds with the MDR1 gene mutation, and drug interactions involving CYP3A4 or P-glycoprotein. Changes in drug metabolism and excretion induced by age, sex, disease or drug interactions are among the more important factors that can cause PDC to become higher or lower than expected. Therapeutic drug monitoring (TDM) replaces the trial-and-error approach to dosing regimen design, an approach that may prove costly both financially and to patient health. Monitoring is indicated in clinical situations in which an expected therapeutic effect of a drug has not been observed, or in which drug toxicity due to high PDC is suspected. In addition, TDM can be used to establish whether optimum therapeutic drug concentrations have been achieved for drugs characterized by a response that is difficult to detect, or in which the manifestations of disease are life threatening and the trial-and-error approach to modification of the dosing regimen is unacceptable. In situations in which chronic drug administration is expected, TDM can be used to define the effective target PDC in the patient. The target PDC can then be used if pharmacokinetics change in the patient over the course of chronic drug administration due to disease, environmental changes, age or drug (or diet) interactions. Drug monitoring has also been useful in identifying owner noncompliance as a cause of therapeutic failure or adverse reactions.

Drugs for which TDM is most useful are characterized by one or more of the following: 1) serious toxicity coupled with a poorly defined or difficult to detect clinical endpoint (eg, antimicrobials, anticonvulsants and cyclosporine); 2) a steep dose-response curve, for which a small increase in dose can result in a marked increase in desired or undesired response (eg, theophylline [TPH], or phenobarbital [PB] in cats); 3) a narrow therapeutic range (eg, digoxin); 4) marked inter-individual pharmacokinetic variability, which increases the variability in the relationship between dose and PDC (eg, PB); 5) non-linear pharmacokinetics, which may lead to rapid accumulation of drugs to toxic concentrations (eg, phenytoin or, in cats, PB); and 6) unexpected toxicity due to drug interactions (eg, enrofloxacin-induced TPH toxicity or chloramphenicol- or clorazepate-induced PB toxicity). In addition, TDM is indicated when a drug is used chronically, and thus is more likely to induce toxicity or changes in pharmacokinetics (eg, anticonvulsants), or in life-threatening situations in which a timely response is critical to the patient (eg, epilepsy or bacterial sepsis). Drugs for which TDM might not be indicated include those characterized by a wide therapeutic index, which are seldom toxic even if PDC are higher than recommended, and those for which response can be easily monitored by clinical signs.

Not all drugs can be monitored by TDM; certain criteria must be met. Patient response to the drug must correlate with (ie, parallel) PDC. Drugs for which active metabolites (eg, diazepam) or one of two enantiomers comprise a large proportion of the desired pharmacologic response cannot be effectively monitored by measuring the parent drug alone; rather, all active metabolites and/or the parent drug should be measured. For cyclosporine (CsA), for which the parent and some metabolites are active, HPLC measures only the parent, whereas immunoassays measure the parent and some metabolites. For many drugs, recommended therapeutic ranges in animals have been extrapolated from those determined in humans, but care must be taken with this approach (eg, bromide and procainamide). The drug must be detectable in a relatively small serum sample, and analytical methods must be available to rapidly and accurately detect the drug in plasma. The cost of the analytical method must be reasonable.

Implementation of and response to TDM requires an understanding of the relationships between PDC, dosing interval (T) and drug elimination half-life (t½). In general, TDM should not be implemented until PDC have reached steady state in the patient. Steady-state PDC occur at the point when drug input and drug elimination (ie, distribution, metabolism and/or excretion) are equilibrated. Although PDC change to some degree during the dosing interval, they remain constant between intervals at steady state (note that steady state is not actually reached with drugs whose half-life is substantially shorter than the dosing interval). With multiple dosing at the same regimen, PDC will reach 50, 75 and 87.5% of the steady-state concentration at one, two and three half-lives, respectively (and so on), regardless of the drug. The same time period (ie, 3-5 drug half-lives) must elapse prior to monitoring if any portion of the original dosing regimen (ie, dose, frequency or route) is changed.

For drugs with a t½ that is long compared to the dosing interval, drug accumulation can be very dramatic (ie, the drug concentrations following the first dose [PDCfirst] are much lower than drug concentrations at steady state [PDCss]). The dosing regimen of such drugs is designed such that drug concentrations will be in the therapeutic range, but only once steady-state concentrations have been achieved. The amount that the drug accumulates depends on how much shorter the interval is compared to the t½ (the ratio of T:t½). For drugs characterized by a long t½, TDM can be implemented proactively by measuring concentrations at approximately one drug t½, at which time PDC will be approximately 50% of PDCss. A third alternative is available for patients in whom therapeutic concentrations must be reached immediately: a loading dose can be administered to rapidly achieve therapeutic PDC.
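The accumulation rule described above (PDC reach 50, 75 and 87.5% of steady state at one, two and three half-lives) is simple first-order arithmetic. A minimal sketch follows; the function name is illustrative, not from these proceedings:

```python
def fraction_of_steady_state(half_lives_elapsed):
    """Fraction of the steady-state PDC reached after a given number of
    drug half-lives of fixed dosing (assumes first-order elimination)."""
    return 1 - 0.5 ** half_lives_elapsed

# One, two and three half-lives: 50%, 75% and 87.5% of steady state;
# by 3-5 half-lives the patient is, for practical purposes, at steady state.
```

The same arithmetic supports proactive monitoring: a concentration measured at one drug t½ is about 50% of the eventual plateau, so doubling it approximates PDCss.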
After a loading dose is administered, the maintenance dose should be "just right" to maintain the PDC achieved by loading. If not, a problem may not become obvious until steady state occurs (ie, 3 to 5 t½; for bromide, this would be 3 months). However, monitoring can be used to proactively evaluate the maintenance dose. When using a loading dose, TDM might be performed three times. Using bromide as an example, the first sample is collected after oral absorption of the last dose of the load is complete, to establish a baseline (eg, day 6). The second is collected one drug t½ later (eg, 21 days), to confirm that the maintenance dose is able to maintain the concentrations achieved by loading; one drug t½ is recommended because most of the change in drug concentrations that will occur if the maintenance dose is not correct will be present by this time. If the second sample does not approximate the first, the maintenance dose can be modified at that point rather than waiting for steady state and risking therapeutic failure or toxicity. The third sample is collected at steady state to establish the new baseline. In general, monitoring of a drug with a long half-life requires only one sample; for consistency's sake, we suggest collection of a trough (just before the next dose).

Many drugs are characterized by half-lives that are much shorter than the dosing interval. For these drugs, little to no accumulation occurs, the concept of steady state is perhaps irrelevant, and response can be evaluated with the first dose (or as soon as the disease has had time to respond). The amount PDC decline during a dosing interval (ie, the fluctuation between Cmax and Cmin) depends, again, on the relationship between t½ and interval. If the interval is 1, 2, 3 or 4 times the t½, PDC will decrease 50, 75, 87.5 and 93.75%, respectively, during the dosing interval. This fluctuation may be unacceptable (eg, antiepileptics, some cardiac drugs, potentially cyclosporine) or acceptable (eg, aminoglycoside drugs, which act irreversibly). Detection of this fluctuation requires collection of both a peak and a trough sample. The peak PDC (Cmax) is the maximum concentration achieved after a dose is administered; it should not exceed the recommended Cmax, particularly if the drug is not safe. Timing of peak sample collection can be difficult to predict; ideally, absorption and distribution should be complete. The route of drug administration can influence the time at which peak PDC occur, which will vary among drugs. For orally administered drugs, absorption is slower (1-2 hours), and distribution is often complete by the time peak PDC have been achieved. However, the absorption rate can vary widely due to factors such as product preparation, the effect of food or patient variability. Because food can slow the absorption of many drugs, fasting is generally indicated (if safe) prior to therapeutic drug monitoring; however, exceptions are noted for some drugs (ie, imidazole antifungals). Generally, peak PDC occur 2-4 hours after oral administration. Some drugs are simply absorbed more slowly than others (eg, PB), and the time of peak PDC sample collection is correspondingly later (eg, 2 to 5 hours for PB).
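The fluctuation rule above can be expressed numerically. A minimal sketch, assuming simple first-order elimination and no ongoing absorption during the interval (the function name is illustrative):

```python
def percent_decline(interval_hr, half_life_hr):
    """Percent by which PDC fall across one dosing interval,
    assuming first-order elimination with absorption complete."""
    return 100 * (1 - 0.5 ** (interval_hr / half_life_hr))

# An interval equal to 2 half-lives gives a 75% decline (Cmax:Cmin of 4:1).
```

For example, a drug with a 6-hr half-life dosed every 12 hours declines 75% between peak and trough, a fluctuation that may be unacceptable for an antiepileptic but acceptable for an aminoglycoside.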
For drugs administered intravenously, absorption is not a concern, but distribution is. For some IM and SC administrations, absorption occurs rapidly (ie, 30-60 minutes), but, again, drug distribution may take longer. Thus, PDC generally are measured 1-2 hours after parenteral drug administration. Exceptions must be made for drugs, such as digoxin, for which distribution may take 6-8 hours; samples should not be collected for these drugs until distribution is complete (Table 1).

Table 1. Therapeutic drug monitoring data for drugs monitored in small animals

Collection of peak and trough samples is particularly important for drugs characterized by a narrow therapeutic range and a short half-life. For such drugs, calculating the t½ might be useful for determining an appropriate dosing interval, although both a peak and a trough sample must be collected (t½ = 0.693/kel, where kel = ln(C1/C2)/(t2 - t1), and C and t are the concentration and time point of the 1st [peak] and 2nd [trough] samples, respectively [Table 1]; this can be easily calculated using Microsoft Excel). In contrast to drugs with a short t½, peak and trough concentrations will not differ substantially for drugs whose t½ is much longer than the dosing interval (eg, bromide and, for some patients, PB), and a single sample is generally sufficient for such drugs. Single samples might also be indicated for slow-release products (eg, TPH), since constant drug absorption mitigates a detectable difference between peak and trough concentrations. If the question to be answered by TDM is one of toxicity, a single peak sample (eg, digoxin or PB) or trough sample (eg, AMG) may answer the question; if efficacy, a single trough sample (eg, for antiepileptics). The impact on drug clearance of drug interactions (eg, induction [eg, PB] or inhibition [eg, cyclosporine and ketoconazole]) or of disease (eg, cardiac disease and beta blockers or digoxin) may cause a drug to shift from a short half-life to a long half-life or vice versa. For example, we have measured half-lives as short as 12 hrs for PB and 9 hrs for digoxin, or as long as 150 hrs for cyclosporine. For such drugs, a prudent approach would be monitoring at baseline and then at the proposed steady state once the second drug has begun or the disease changes. Peak and trough samples might be collected before and after the change has been implemented such that time to steady state might actually be predicted based on half-life (eg, if ketoconazole is added to cyclosporine).
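The half-life calculation given parenthetically above can be scripted rather than worked in a spreadsheet. A minimal sketch of the same two-sample arithmetic (the function name is illustrative):

```python
import math

def half_life_from_samples(c_peak, t_peak, c_trough, t_trough):
    """Elimination half-life from a peak sample (c_peak at t_peak) and a
    trough sample (c_trough at t_trough):
    kel = ln(C1/C2)/(t2 - t1); t1/2 = 0.693/kel."""
    kel = math.log(c_peak / c_trough) / (t_trough - t_peak)
    return math.log(2) / kel
```

For example, a hypothetical peak of 10 mcg/ml at 1 hr and trough of 2.5 mcg/ml at 9 hr (two half-lives of decline over 8 hours) gives a 4-hr half-life, from which an appropriate dosing interval can be chosen.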
Peak and trough samples might be collected in any patient that is not responding well to therapy with any drug that may have a short t½ compared to the dosing interval. For example, peak and trough digoxin samples should be collected when disease is stable and again if it decompensates, and should be considered whenever drug therapy is changed, particularly if the patient responds: as renal clearance changes, so will digoxin concentrations. If a kinetic profile of the patient is the reason for TDM, the two samples preferably are collected at the peak and trough times unless the interval is so long that drug may not be detectable at the trough time (eg, AMG administered at 12- or 24-hour dosing intervals). The most accurate kinetic information is generated from patients receiving an IV dose, since the volume of distribution can be estimated along with drug elimination t½; for oral doses, only the rate of elimination and drug t½ can be obtained.

We have used monitoring of amikacin as a means of assessing the impact of aminoglycoside therapy on renal function. (See Table: a 3-yr-old, 40-kg canine cross with deep pyoderma associated with methicillin-resistant Staph intermedius, whose MIC for amikacin was less than 4 mcg/ml. Rifampin [7.5 mg/kg bid po] was added to therapy, and peak and trough amikacin concentrations were measured sequentially across time after IV administration such that clearance could be calculated [amikacin dosed at 15 mg/kg IV once daily].)

The minimum information necessary for interpretation of PDC includes the following: 1) The total daily dose of drug, which will be correlated with the patient's measured PDC. 2) The time intervals of drug administration, which are particularly important for drugs with short half-lives (eg, AMG). Provision of this information assures the clinical pharmacologist that blood samples contain the actual trough and/or peak drug concentrations. From these data, a drug t½ can be calculated and a proper dosing interval can be determined. 3) The patient's clinical status, which is important because both acute and chronic diseases can dramatically alter drug disposition patterns. This is particularly true for patients with renal, liver or cardiac disease. If this information is lacking, disease-induced changes in drug disposition cannot be distinguished from other causes such as non-compliance or drug interactions. 4) Concurrently administered drugs, which may alter drug disposition patterns and thus contribute to individual differences in drug disposition. The dose, frequency and actual times of administration of all drugs given to the patient must be known in order to recognize or predict potential drug interactions. 5) Physiologic characteristics such as patient species, breed and age, which are often important to the interpretation of PDC because of known or predictable differences they may induce in drug disposition, or because of known differences in pharmacodynamic responses. Weight must be provided in order to determine Vd. 6) The reason for TDM, ie, has the patient failed therapy or is the patient exhibiting signs of toxicity?

Once results are received, either the dose or the interval of a drug might be modified (or, if all is well, left alone). If the patient's PDC is too high or too low, and particularly if the t½ is long, the dose can be changed in proportion to the desired change in PDC. Thus, if the PB concentration is 20 µg/ml and the target is 25 µg/ml, the old dose should be increased by (25 - 20)/20, or 25%. This approach can be repeated until the maximum (or minimum) end of the therapeutic range is reached. If the t½ is short, decreasing the dosing interval may be more cost effective. Note that for each t½ added to the dosing interval, the dose must be doubled (to add 2 t½, the dose must be quadrupled; for 3 t½, the dose must be increased 8-fold, etc).
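The two adjustment rules above are simple arithmetic. A minimal sketch (function names and the example dose are illustrative, not from these proceedings):

```python
def proportional_dose(current_dose, measured_pdc, target_pdc):
    """Adjust the dose in proportion to the desired change in PDC;
    most appropriate when the drug's half-life is long."""
    return current_dose * target_pdc / measured_pdc

def dose_for_added_half_lives(current_dose, half_lives_added):
    """Each half-life added to the dosing interval doubles the required dose
    (2 added half-lives quadruple it, 3 increase it 8-fold, etc)."""
    return current_dose * 2 ** half_lives_added

# PB measured at 20 ug/ml with a 25 ug/ml target: a 25% increase, eg a
# hypothetical 2.5 mg/kg dose becomes 3.125 mg/kg.
```

Note the trade-off the text describes: the proportional rule keeps the interval fixed, whereas lengthening the interval demands exponentially larger doses, which is why shortening the interval is often preferable for short half-life drugs.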

Cyclosporine monitoring

Our laboratory offers the following recommendations for monitoring of patients treated every 12 hours: a 2-hr peak sample and a trough sample just before the next dose are recommended within 3 to 5 days of initiating therapy; the more life threatening the target disease, the more important a peak and trough sample may be. For less serious situations, or as treatment shifts from induction to maintenance, a single 2-hr peak sample may be sufficient for establishing and maintaining a target. If therapy is initiated such that CsA disposition might change (whether intentionally, such as the addition of ketoconazole, or inadvertently, such as co-treatment with diltiazem, azithromycin or others), a peak and trough sample prior to and 1 week after the new therapy is initiated is suggested. In situations in which alternative generic preparations are initiated, a single 2-hr peak concentration before and 3 to 5 days after the switch is recommended. For atopy (24-hr and beyond dosing), a single peak sample should be collected. Likewise, if toxicity is a concern, a single peak sample is indicated; however, in patients with a long half-life, both a peak and a trough sample may be necessary to fully assess the risk of toxicity. Monitoring is recommended weekly to biweekly in critical patients, then monthly for the first several months of therapy or until concentrations are stable. For long-term maintenance, the frequency of sampling may vary with the stability of the patient but should range from every 3 to 6 months. Target concentrations vary with the condition, but generally, based on a 12-hr dosing interval (using FPIA, a monoclonal-based assay), a peak concentration of 800 to 1400 ng/ml and a trough concentration of 400 to 600 ng/ml are recommended for immune-mediated diseases. For renal transplantation, trough concentrations of 750 ng/ml are suggested for the first month and 350 to 400 ng/ml thereafter.
For chronic allergic inflammatory disorders, lower concentrations are recommended: 250 ng/ml trough concentrations for chronic inflammatory bowel disorders, and for perianal fistulae, 12 hour trough concentrations at 100 to 600 ng/ml (the higher for induction, the lower for maintenance).

Therapeutic Drug Monitoring Cases

Anticonvulsant therapeutic drug monitoring


Generally, a single trough sample should be sufficient for TDM. However, if induction of drug-metabolizing enzymes has occurred, the elimination half-life may be sufficiently short to allow excessive fluctuation in PDC during the dosing interval. This short half-life can be detected only if both peak and trough samples are measured. In a phenobarbital-naïve dog, or when phenobarbital doses are changed, baseline samples should be determined at steady state, 9 to 14 days after beginning therapy. A recheck trough sample 1 to 3 months later would be prudent to detect induction. Many of our patients respond to phenobarbital at concentrations below the minimum therapeutic range of 15 µg/mL, which suggests that a lower therapeutic range may be indicated in dogs.


Because the elimination half-life of bromide is so long, manipulating the dose before steady state is reached may be necessary for some patients. Collection of a sample at one half-life after the start of therapy (i.e., 3-4 weeks) can be performed to proactively assess the dose; doubling the 3-week concentration should approximate the steady-state concentration. Baseline should be established at 2.5 to 3 months. If the patient is loaded, a sample should be collected the day after loading is complete, and then at one half-life. The former sample is indicated to determine what the loading dose achieved and the latter to ensure that the maintenance dose is maintaining what the loading dose achieved; the two samples should be within 15% of each other. If not, the maintenance dose can be adjusted proportionately. Note that a 3-week sample in a patient that received a loading dose is minimally useful without the post-load monitoring sample for comparison: concentrations may increase or decrease depending on the accuracy of the maintenance dose. In all patients, regardless of the method of dosing, a final sample should be collected at steady state to establish a baseline. Finally, if the maintenance dose is altered, a concentration might be measured at one half-life to proactively assess the impact of the change, but the minimum re-assessment should occur at the new steady state (i.e., 2.5 to 3 months after the dose change). Additionally, bromide might be checked before and after any change in diet or medication that impacts chloride excretion.
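The two bromide checks above reduce to simple arithmetic. A minimal sketch, assuming first-order kinetics; the function names and tolerance handling are illustrative:

```python
def predicted_steady_state(conc_at_one_half_life):
    """PDC at one half-life is ~50% of the plateau, so doubling the 3- to
    4-week bromide concentration approximates the steady-state value."""
    return 2 * conc_at_one_half_life

def maintenance_holding_load(post_load_conc, conc_one_half_life_later,
                             tolerance=0.15):
    """True if the post-load sample and the sample one half-life later agree
    within ~15%, i.e., the maintenance dose is holding what the load achieved."""
    change = abs(conc_one_half_life_later - post_load_conc) / post_load_conc
    return change <= tolerance
```

For example, a hypothetical 3-week concentration of 1.2 mg/ml predicts a steady-state concentration near 2.4 mg/ml; if a loaded patient drifts from 2.0 to 1.6 mg/ml over one half-life (a 20% change), the maintenance dose should be adjusted proportionately rather than waiting for steady state.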


The half-life of zonisamide is generally longer than 24 hours; therefore concentrations should not fluctuate sufficiently during a 12-hour dosing interval to routinely justify a peak and trough sample. Because toxicity is not likely to be as great a concern as therapeutic failure, a trough sample is recommended for routine monitoring. In problematic patients a peak and a trough may be justified to rule out a short half-life as a contributing cause of difficult control. Currently, zonisamide is among the drugs for which the maximum therapeutic range, which has been established in humans, can be exceeded with minimal adverse effects in dogs.


The half-life of levetiracetam (standard release) can be as short as 1 to 2 hours. However, the half-life can also be longer than 8 to 10 hours, and longer half-lives should be anticipated if the slow-release preparation is used. Because the duration of the half-life is not known in advance, peak and trough samples are recommended at the beginning of therapy to determine the half-life in the patient. Control is much more likely to be accomplished with an 8-hour dosing interval in a patient with a longer half-life. Once the half-life is established, a trough sample is recommended if only single samples are to be collected. A mid-interval sample has little to offer, particularly given that drug concentrations may drop 50% or more from mid-interval concentrations; thus it is prudent to identify the lowest concentration occurring during the interval. The recommended therapeutic range should be targeted by the trough, rather than the peak, concentration. Note that in a drug with a very short half-life (e.g., 2 hours), peak concentrations in a patient may be as much as 8 times as high as trough concentrations. Levetiracetam is sufficiently safe that a high peak concentration is likely to be tolerated. Because drug concentrations do not accumulate when a drug is administered at an interval substantially longer than its half-life, steady state does not occur. Therefore levetiracetam (or another drug with a short half-life) might be monitored in the first 3 to 5 days of therapy. Waiting one seizure interval to ensure that seizures are adequately controlled is reasonable. The approach for monitoring levetiracetam can be followed for other anticonvulsants with a half-life that is short compared to the dosing interval (e.g., gabapentin), unless the drug is potentially toxic; in such situations, routinely monitoring peak and trough concentrations may be prudent.


The half-life of gabapentin also is short, suggesting that both peak and trough samples be collected. Recommendations for collection are similar to those for levetiracetam.

When to Implement Monitoring:

© 2024 MJH Life Sciences

All rights reserved.