Dynamic treatment regime

From Wikipedia, the free encyclopedia

In medical research, a dynamic treatment regime (DTR), adaptive intervention, or adaptive treatment strategy is a set of rules for choosing effective treatments for individual patients (Lei et al. 2012). The treatment choices made for a particular patient are based on that individual's characteristics and history, with the goal of optimizing his or her long-term clinical outcome. A dynamic treatment regime is analogous to a policy in the field of reinforcement learning, and analogous to a controller in control theory. While most work on dynamic treatment regimes has been done in the context of medicine, the same ideas apply to time-varying policies in other fields, such as education, marketing, and economics.


Historically, medical research and the practice of medicine tended to rely on an acute care model for the treatment of all medical problems, including chronic illness.[1] More recently, the medical field has begun to look at long-term care plans to treat patients with a chronic illness. This shift in ideology, coupled with increased demand for evidence-based medicine and individualized care, has led to the application of sequential decision making research to medical problems and the formulation of dynamic treatment regimes.


The figure below illustrates a hypothetical dynamic treatment regime for Attention Deficit Hyperactivity Disorder (ADHD). There are two decision points in this DTR. The initial treatment decision depends on the patient's baseline disease severity. The second treatment decision is a "responder/non-responder" decision: At some time after receiving the first treatment, the patient is assessed for response, i.e. whether or not the initial treatment has been effective. If so, that treatment is continued. If not, the patient receives a different treatment. In this example, for those who did not respond to initial medication, the second "treatment" is a package of treatments—it is the initial treatment plus behavior modification therapy. "Treatments" can be defined as whatever interventions are appropriate, whether they take the form of medications or other therapies.

Figure: Hypothetical dynamic treatment regime for the treatment of ADHD (example of a dynamic treatment regime).

Optimal Dynamic Treatment Regimes

The decisions of a dynamic treatment regime are made in the service of producing favorable clinical outcomes in patients who follow it. To make this more precise, the following mathematical framework is used:

Mathematical Formulation

For a series of decision time points, t = 1, \ldots, T, define  A_t to be the treatment ("action") chosen at time point t, and define  O_t to be all clinical observations made at time t, immediately prior to treatment  A_t . A dynamic treatment regime,  \pi = (\pi_1,...,\pi_T) , consists of a set of rules, one for each time point t, for choosing treatment  A_t based on the clinical observations available at that time. Thus  \pi_t(o_1, a_1, ..., a_{t-1}, o_t) is a function of the past and current observations,  (o_1, ..., o_t) , and the past treatments,  (a_1, ..., a_{t-1}) , which returns a choice of the current treatment,  a_t .

Also observed at each time point is a measure of success called a reward, R_t. The goal of a dynamic treatment regime is to make decisions that result in the largest possible expected sum of rewards, R = \sum_{t=1}^{T} R_t. A dynamic treatment regime,  \pi^* , is optimal if it satisfies

 \pi^* = \arg\max_{\pi}{E \big[R |\text{ treatments are chosen according to }\pi \big]} \!

where E is an expectation over possible observations and rewards. The quantity  E[R | \text{ treatments are chosen according to }\pi] is often referred to as the value of  \pi .

In the example above, the possible first treatments for  A_1 are "Low-Dose B-mod" and "Low-Dose Medication". The possible second treatments for  A_2 are "Increase B-mod Dose", "Continue Treatment", and "Augment w/B-mod". The observations  O_1 and  O_2 are the labels on the arrows: The possible O_1 are "Less Severe" and "More Severe", and the possible O_2 are "Non-Response" and "Response". The rewards R_1, R_2 are not shown; one reasonable possibility for reward would be to set R_1  = 0 and set R_2 to a measure of classroom performance after a fixed amount of time.
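The two decision rules in this example can be written down directly as functions. The following minimal Python sketch (treatment and observation labels taken from the figure; the function names are illustrative) applies the regime to one hypothetical patient:

```python
# A minimal sketch of the hypothetical ADHD regime as two decision rules.
# The labels ("Less Severe", "Low-Dose Medication", ...) come from the
# figure; the function names pi_1 and pi_2 are illustrative.

def pi_1(o1):
    """First decision rule: choose initial treatment from baseline severity."""
    return "Low-Dose B-mod" if o1 == "Less Severe" else "Low-Dose Medication"

def pi_2(o1, a1, o2):
    """Second decision rule: continue responders, intensify/augment non-responders."""
    if o2 == "Response":
        return "Continue Treatment"
    # Non-responders: intensify or augment depending on the first treatment.
    return "Increase B-mod Dose" if a1 == "Low-Dose B-mod" else "Augment w/B-mod"

# Following the regime for one hypothetical patient:
a1 = pi_1("More Severe")                       # "Low-Dose Medication"
a2 = pi_2("More Severe", a1, "Non-Response")   # "Augment w/B-mod"
```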

Delayed Effects

To find an optimal dynamic treatment regime, it might seem reasonable to find the optimal treatment that maximizes the immediate reward at each time point and then patch these treatment steps together to create a dynamic treatment regime. However, this approach is shortsighted and can result in an inferior dynamic treatment regime, because it ignores the potential for the current treatment action to influence the reward obtained at more distant time points.

For example, a treatment may be desirable as a first treatment even if it does not achieve a high immediate reward. When treating some kinds of cancer, a particular medication may not result in the best immediate reward (best acute effect) among initial treatments. However, this medication may impose sufficiently low side effects that some non-responders are able to become responders with further treatment. Similarly, a treatment that is less effective acutely may lead to better overall rewards if it encourages or enables non-responders to adhere more closely to subsequent treatments.
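The shortsightedness problem can be made concrete with a toy two-stage example (all numbers invented): the treatment with the better immediate reward need not be the one with the better total reward.

```python
# Toy two-stage example (hypothetical numbers) showing why maximizing the
# immediate reward at each stage can be shortsighted. Treatment "B" has the
# better acute effect, but "A" leaves more patients able to respond later.

# Stage-1 immediate reward and the expected stage-2 reward that follows it:
regimes = {
    "A": {"r1": 1.0, "expected_r2": 4.0},  # milder acutely, better follow-up
    "B": {"r1": 2.0, "expected_r2": 1.0},  # stronger acutely, worse follow-up
}

myopic = max(regimes, key=lambda a: regimes[a]["r1"])
optimal = max(regimes, key=lambda a: regimes[a]["r1"] + regimes[a]["expected_r2"])

print(myopic)   # "B": best immediate reward
print(optimal)  # "A": best total reward
```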

Estimating Optimal Dynamic Treatment Regimes

Dynamic treatment regimes can be developed in the framework of evidence-based medicine, where clinical decision making is informed by data on how patients respond to different treatments. The data used to find optimal dynamic treatment regimes consist of the sequence of observations and treatments (o_1,a_1,o_2,a_2,..,o_T,a_T)_i for multiple patients i=1,...,n along with those patients' rewards  R_t . A central difficulty is that intermediate outcomes both depend on previous treatments and determine subsequent treatment. However, if treatment assignment is independent of potential outcomes conditional on past observations—i.e., treatment is sequentially unconfounded—a number of algorithms exist to estimate the causal effect of time-varying treatments or dynamic treatment regimes.

While this type of data can be obtained through careful observation, it is often preferable to collect data through experimentation if possible. The use of experimental data, where treatments have been randomly assigned, is preferred because it helps eliminate bias caused by unobserved confounding variables that influence both the choice of the treatment and the clinical outcome. This is especially important when dealing with sequential treatments, since these biases can compound over time. Given an experimental data set, an optimal dynamic treatment regime can be estimated from the data using a number of different algorithms. Inference can also be done to determine whether the estimated optimal dynamic treatment regime results in significant improvements in expected reward over an alternative dynamic treatment regime.

Experimental design

Experimental designs of clinical trials that generate data for estimating optimal dynamic treatment regimes involve an initial randomization of patients to treatments, followed by re-randomizations to subsequent treatments at each later time point. The re-randomizations at each subsequent time point may depend on information collected after previous treatments, but prior to assigning the new treatment, such as how successful the previous treatment was. These types of trials were introduced and developed in Lavori & Dawson (2000), Lavori (2003) and Murphy (2005) and are often referred to as SMART trials (Sequential Multiple Assignment Randomized Trial). Some examples of SMART trials are the CATIE trial for treatment of Alzheimer's disease (Schneider et al. 2001) and the STAR*D trial for treatment of major depressive disorder (Lavori et al. 2001, Rush, Trivedi & Fava 2003).

SMART trials attempt to mimic the decision-making that occurs in clinical practice, but still retain the advantages of experimentation over observation. They can be more involved than single-stage randomized trials; however, they produce the data trajectories necessary for estimating optimal policies that take delayed effects into account. Several suggestions have been made to reduce the complexity and resources needed. One can combine data over identical treatment sequences that appear within different treatment regimes. One may also wish to split up a large trial into screening, refining, and confirmatory trials (Collins et al. 2005). One can also use fractional factorial designs rather than a full factorial design (Nair et al. 2008), or target primary analyses to simple regime comparisons (Murphy 2005).

Reward construction

A critical part of finding the best dynamic treatment regime is the construction of a meaningful and comprehensive reward variable,  R_t . To construct a useful reward, the goals of the treatment need to be well defined and quantifiable. The goals of the treatment can include multiple aspects of a patient's health and welfare, such as degree of symptoms, severity of side effects, time until treatment response, quality of life and cost. However, quantifying the various aspects of a successful treatment with a single function can be difficult, and work on providing useful decision making support that analyzes multiple outcomes is ongoing (Lizotte 2010). Ideally, the outcome variable should reflect how successful the treatment regime was in achieving the overall goals for each patient.

Variable selection and feature construction

Analysis is often improved by the collection of any variables that might be related to the illness or the treatment. This is especially important when data is collected by observation, to avoid bias in the analysis due to unmeasured confounders. As a result, more observational variables are often collected than are actually needed to estimate optimal dynamic treatment regimes. Thus, variable selection is often required as a preprocessing step on the data before the algorithms used to find the best dynamic treatment regime are employed.

Algorithms and Inference

Several algorithms exist for estimating optimal dynamic treatment regimes from data. Many of these algorithms were developed in the field of computer science to help robots and computers make optimal decisions in an interactive environment. These types of algorithms are often referred to as reinforcement learning methods (Sutton & Barto 1998). The most popular of these methods used to estimate dynamic treatment regimes is called Q-learning (Watkins 1989). In Q-learning, models are fit sequentially to estimate the value of the treatment regime used to collect the data, and then the models are optimized with respect to the treatments to find the best dynamic treatment regime. Many variations of this algorithm exist, including modeling only portions of the value of the treatment regime (Murphy 2003, Robins 2004). Using model-based Bayesian methods, the optimal treatment regime can also be calculated directly from posterior predictive inferences on the effect of dynamic policies (Zajonc 2010).
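The backward, stage-by-stage fitting in Q-learning can be sketched on simulated two-stage trial data. In the sketch below the data-generating model is invented for illustration, and plain NumPy least squares stands in for whatever regression models an analyst would actually use:

```python
# A sketch of two-stage Q-learning on simulated randomized-trial data.
# The data-generating model is invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
o1 = rng.normal(size=n)                          # baseline observation
a1 = rng.choice([-1.0, 1.0], size=n)             # randomized stage-1 treatment
o2 = 0.5 * o1 + 0.4 * a1 + rng.normal(size=n)    # intermediate observation
a2 = rng.choice([-1.0, 1.0], size=n)             # randomized stage-2 treatment
# Final reward: the stage-2 treatment effect depends on o2,
# which is itself a delayed effect of a1.
r = o2 + a2 * (1.0 + o2) + rng.normal(size=n)

def fit(X, y):
    """Ordinary least squares via NumPy."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2: model Q2(o2, a2) = b0 + b1*o2 + a2*(b2 + b3*o2).
X2 = np.column_stack([np.ones(n), o2, a2, a2 * o2])
coef2 = fit(X2, r)
# Optimal stage-2 rule: pick the sign of the treatment contrast.
contrast2 = coef2[2] + coef2[3] * o2
v2 = X2[:, :2] @ coef2[:2] + np.abs(contrast2)   # value under the optimal a2

# Stage 1: regress the pseudo-outcome v2 on stage-1 variables.
X1 = np.column_stack([np.ones(n), o1, a1, a1 * o1])
coef1 = fit(X1, v2)

# The estimated stage-2 rule chooses a2 = sign(coef2[2] + coef2[3]*o2),
# recovering the true rule a2 = sign(1 + o2).
print(coef2[2], coef2[3])  # both close to 1
```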

An approach based on random-effects linear models

An alternative approach to developing dynamic treatment regimes is based on random-effects linear models and is grounded in decision theory rather than machine learning (Diaz et al. 2007, 2012).[2][3] There is theoretical and empirical evidence, supported by some empirical studies and recent developments in pharmacokinetic theory, that random-effects linear models can describe not only patient populations but also individual patients simultaneously, and therefore that these models are suitable for designing dynamic treatment regimes.[4] Because of this characteristic, random-effects linear models are promising tools for investigating drug dosage individualization in chronic diseases and for designing effective treatments for individual patients based on each patient's characteristics and needs. The following is a theoretical framework for drug dosage individualization.[2][3] A useful starting point is the random-effects linear model

\ln(Y_D)=\alpha+\beta^T X+d\ln(D)+\epsilon \qquad (1)

where α is a patient-specific random intercept that varies from patient to patient, Y_D is the steady-state drug plasma concentration in response to drug dosage D, X is a vector of covariates (clinical, demographic, environmental or genetic), β is a vector of regression coefficients treated as population constants, d is a population constant, and ϵ is an intra-individual random error. Model (1) is therefore called a random-intercept linear model, and it can be used to design a clinical algorithm for finding the optimal drug dosage D for a particular patient. An appropriate drug dosage D is chosen by maximizing the probability that the plasma concentration response Y_D takes a value in the therapeutic window, that is, a value between two pre-specified values l_1 and l_2. There is empirical evidence supporting model (1) and some of its generalizations, at least for some drugs. The model can be further generalized to include covariates with random effects. The more general model is

\ln(Y_D)=\psi+\eta^T Z+\beta^T X+d\ln(D)+\epsilon \qquad (2)

where ϵ is defined as in model (1), ψ and η are patient-specific constants that vary from patient to patient, and Z is a vector of covariates. To produce a better personalized dosage, Diaz et al.[2] proposed a clinical algorithm for drug dosage individualization based on this more general model (2) and on the concept of Bayesian feedback. The algorithm assumes that model (2) adequately describes a population of patients. The population parameters \mu_\alpha\,\!, β, d, \sigma_\alpha^2\,\! and \sigma_\epsilon^2\,\! must be estimated from a sample of patients before applying the algorithm, so that the estimated model can serve as empirical prior information. Next, the dosage regime must first be adapted to the patient's characteristics and comedication; this initial adaptation constitutes a prior individualization. Diaz et al.'s clinical algorithm is not a computer algorithm but a series of steps for finding an optimal dosage. In the first step of the algorithm, the clinician uses the estimated parameters and the information from the patient's covariates to compute the initial dosage


where C_0^*=\sqrt{l_1 l_2}, as defined by Diaz et al.[2]
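The role of C_0^* can be seen in a small numerical sketch (all parameter values invented): under model (1), \ln(Y_D) is normal with mean \alpha + \beta^T x + d\ln(D) and fixed variance, so the probability of landing in the therapeutic window is maximized when that mean equals \ln C_0^*, the midpoint of the log window.

```python
# A numerical sketch of dosage selection under model (1), with invented
# parameter values. Since ln(Y_D) ~ Normal(alpha + beta'x + d*ln(D), sigma^2),
# P(l1 < Y_D < l2) is maximized when the mean log-concentration hits the
# midpoint of the log therapeutic window, ln(sqrt(l1*l2)) = ln(C_0^*).
import math

alpha, d = 0.8, 1.0    # hypothetical patient intercept and dose exponent
beta_x = 0.3           # hypothetical covariate contribution beta'x
l1, l2 = 5.0, 20.0     # therapeutic window for Y_D

target = math.log(math.sqrt(l1 * l2))        # ln of the window's geometric mean
D_opt = math.exp((target - alpha - beta_x) / d)

# Check: the mean log-concentration at D_opt sits at the window midpoint.
mean_log = alpha + beta_x + d * math.log(D_opt)
print(abs(mean_log - target) < 1e-12)  # True
```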

This dosage is administered to the patient for an appropriate time period, and once the steady-state response is reached, the new response Y_D is measured. Step i, for i ≥ 2, is as follows: using the dosage-response pairs (D_j, Y_{D_j}) obtained in the previous i-1 steps, compute the ith dosage


where \hat{\alpha}'_i is an empirical Bayes predictor of α given by

\hat{\alpha}'_i=\left(1-\lambda_i\sqrt{\rho^{-1}-1}\right)\left(\frac{1}{i-1}\sum_{j=1}^{i-1} \ln\left(\frac{Y_{D_j}}{D_j^d}\right)-\beta^T X\right)+\lambda_i\sqrt{\rho^{-1}-1}\,\mu_\alpha(Z)

with \rho=\rho({Z})=\frac{\sigma_\alpha^2({Z})}{\sigma_\alpha^2({Z})+\sigma_\epsilon^2}

and \lambda_i, i ≥ 1, is defined by Diaz et al.[2] If model (2) holds, Diaz et al.'s[2] algorithm is optimal in the sense that the obtained dosages minimize a Bayes risk. Diaz et al.[2] also introduced the concept of the omega-optimum dosage, which is defined as a dosage D that satisfies

P(l_1<Y_D<l_2 |\gamma)\geqslant w\big\{ \sup_{i\geqslant 1} P(l_1<Y_{D_i}<l_2|\gamma)\big\}

where w is a number between 0 and 1. The concept of omega-optimum dosage allows determining how many algorithm steps are necessary to obtain the optimal dosage for the patient, and allows developing a theory of drug dosage individualization.
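The empirical Bayes predictor above can be computed directly once the population quantities are available. In the sketch below, \lambda_i and \mu_\alpha(Z), which come from Diaz et al.'s development, are treated as given inputs, and all numeric values are invented:

```python
# A numerical sketch of the empirical Bayes predictor of alpha. lambda_i
# and mu_alpha(Z) are defined in Diaz et al.'s development and are treated
# here as given inputs; all numbers are invented for illustration.
import math

def alpha_hat(pairs, d, beta_x, mu_alpha, rho, lam):
    """Empirical Bayes predictor of alpha after i-1 dose-response pairs.

    pairs    : list of (D_j, Y_Dj) pairs observed in previous steps
    d        : dose exponent from model (2)
    beta_x   : fixed-effects contribution beta'X
    mu_alpha : prior mean of alpha given covariates Z
    rho      : sigma_alpha^2(Z) / (sigma_alpha^2(Z) + sigma_eps^2)
    lam      : shrinkage weight lambda_i (defined in Diaz et al.)
    """
    k = lam * math.sqrt(1.0 / rho - 1.0)
    mean_resid = sum(math.log(y / D ** d) for D, y in pairs) / len(pairs)
    return (1.0 - k) * (mean_resid - beta_x) + k * mu_alpha

# With lam = 0 (no shrinkage toward the prior), the predictor reduces to
# the average log residual minus the covariate term:
est = alpha_hat([(2.0, 6.0), (3.0, 9.0)], d=1.0, beta_x=0.3,
                mu_alpha=0.8, rho=0.5, lam=0.0)
print(est)  # ln(3) - 0.3
```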

Diaz et al.[2] showed through simulations and theoretical arguments that their proposed approach to drug dosage individualization in chronic diseases may produce better pharmacokinetic or pharmacodynamic responses than traditional approaches used in therapeutic drug monitoring.


  1. ^ Wagner, E. H.; Austin, B. T.; Davis, C.; Hindmarsh, M.; Schaefer, J.; Bonomi, A. (2001), "Improving Chronic Illness Care: Translating Evidence Into Action", Health Affairs 20 (6): 64–78, PMID 11816692
  2. ^ a b c d e f g h Diaz, Francisco J.; Cogollo, Myladis R.; Spina, Edoardo; Santoro, Vincenza; Rendon, Diego M.; Leon, Jose de (2012), "Drug Dosage Individualization Based on a Random-Effects Linear Model", Journal of Biopharmaceutical Statistics 22 (3): 463–484
  3. ^ a b Diaz, Francisco J.; Yeh, Hung-Weh; Leon, Jose de (2012), "Role of Statistical Random-Effects Linear Models in Personalized Medicine", Current Pharmacogenomics and Personalized Medicine 10 (1): 22–32
  4. ^ Statistics in Medicine


References


  • Diaz, Francisco J.; Cogollo, Myladis R.; Spina, Edoardo; Santoro, Vincenza; Rendon, Diego M.; Leon, Jose de (2012), "Drug Dosage Individualization Based on a Random-Effects Linear Model", Journal of Biopharmaceutical Statistics 22 (3): 463–484, doi:10.1080/10543406.2010.547264, PMID 22416835 
  • Diaz, Francisco J.; Yeh, Hung-Weh; Leon, Jose de (2012), "Role of Statistical Random-Effects Linear Models in Personalized Medicine", Current Pharmacogenomics and Personalized Medicine 10 (1): 22–32, doi:10.2174/1875692111201010022 (inactive 2015-01-12), PMC 3580802, PMID 23467392 
  • Banerjee, A.; Tsiatis, A. A. (2006), "Adaptive two-stage designs in phase II clinical trials", Statistics in Medicine 25 (19): 3382–3395, doi:10.1002/sim.2501, PMID 16479547 
  • Collins, L. M.; Murphy, S. A.; Nair, V.; Strecher, V. (2005), "A strategy for optimizing and evaluating behavioral interventions", Annals of Behavioral Medicine 30: 65–73, doi:10.1207/s15324796abm3001_8 (inactive 2015-01-12) 
  • Guo, X.; Tsiatis, A. A. (2005), "Estimation of survival distributions in two-stage randomization designs with censored data", International Journal of Biostatistics 1 (1) 
  • Hernán, Miguel A.; Lanoy, Emilie; Costagliola, Dominique; Robins, James M. (2006), "Comparison of Dynamic Treatment Regimes via Inverse Probability Weighting", Basic & Clinical Pharmacology & Toxicology 98 (3): 237–242, doi:10.1111/j.1742-7843.2006.pto_329.x 
  • Lavori, P. W.; Dawson, R. (2000), "A design for testing clinical strategies: biased adaptive within-subject randomization", Journal of the Royal Statistical Society, Series A 163: 29–38, doi:10.1111/1467-985x.00154 
  • Lavori, P.W.; Rush, A.J.; Wisniewski, S.R.; Alpert, J.; Fava, M.; Kupfer, D.J.; Nierenberg, A.; Quitkin, F.M.; Sacheim, H.A.; Thase, M.E.; Trivedi, M (2001), "Strengthening clinical effectiveness trials: Equipoise-stratified randomization", Biological Psychiatry 50 (10): 792–801, doi:10.1016/s0006-3223(01)01223-9, PMID 11720698 
  • Lizotte, D. L.; Bowling, M.; Murphy, S. A. (2010), "Efficient Reinforcement Learning with Multiple Reward Functions for Randomized Clinical Trial Analysis", Twenty-Seventh Annual International Conference on Machine Learning 
  • Lokhnygina, Y; Tsiatis, A. A. (2008), "Optimal two-stage group sequential designs", Journal of Statistical Planning and Inference 138 (2): 489–499, doi:10.1016/j.jspi.2007.06.011 
  • Lunceford, J. K.; Davidian, M.; Tsiatis, A. A. (2002), "Estimation of survival distributions of treatment policies in two-stage randomization designs in clinical trials", Biometrics 58 (1): 48–57, doi:10.1111/j.0006-341x.2002.00048.x, PMID 11890326 
  • Murphy, Susan A. (2003), "Optimal Dynamic Treatment Regimes", Journal of the Royal Statistical Society, Series B 65 (2): 331–366, doi:10.1111/1467-9868.00389 
  • Murphy, Susan A. (2005), "An Experimental Design for the Development of Adaptive Treatment Strategies", Statistics in Medicine 24 (10): 1455–1481, doi:10.1002/sim.2022, PMID 15586395 
  • Murphy, Susan A.; Daniel Almiral (2008), "Dynamic Treatment Regimes", Encyclopedia of Medical Decision Making: #–# 
  • Nair, V.; Strecher, V.; Fagerlin, A.; Ubel, P.; Resnicow, K.; Murphy, S.; Little, R.; Chakraborty, B.; Zhang, A. (2008), "Screening Experiments and Fractional Factorial Designs in Behavioral Intervention Research", The American Journal of Public Health 98 (8): 1534–1539 
  • Robins, James M. (2004), "Optimal structural nested models for optimal sequential decisions", in Lin, D. Y.; Heagerty, P. J., Proceedings of the Second Seattle Symposium on Biostatistics, Springer, New York, pp. 189–326 
  • Robins, James M. (1986), "A new approach to causal inference in mortality studies with sustained exposure periods-application to control of the healthy worker survivor effect", Computers and Mathematics with Applications 14: 1393–1512 
  • Robins, James M. (1987), "Addendum to 'A new approach to causal inference in mortality studies with sustained exposure periods-application to control of the healthy worker survivor effect'", Computers and Mathematics with Applications 14 (9–12): 923–945, doi:10.1016/0898-1221(87)90238-0 
  • Schneider, L.S.; Tariot, P.N.; Lyketsos, C.G.; Dagerman, K.S.; Davis, K.L.; Davis, S.; Hsiao, J.K.; Jeste, D.V.; Katz, I.R.; Olin, J.T.; Pollock, B.G.; Rabins, P.V.; Rosenheck, R.A.; Small, G.W.; Lebowitz, B.; Lieberman, J.A. (2001), "National Institute of Mental Health clinical antipsychotic trials of intervention effectiveness (CATIE) Alzheimer disease trial methodology", American Journal of Geriatric Psychiatry 9 (4): 346–360, PMID 11739062 
  • Wagner, E. H.; Austin, B. T.; Davis, C.; Hindmarsh, M.; Schaefer, J.; Bonomi, A. (2001), "Improving Chronic Illness Care: Translating Evidence Into Action", Health Affairs 20 (6): 64–78, doi:10.1377/hlthaff.20.6.64, PMID 11816692 
  • Wahed, A.. S.; Tsiatis, A. A. (2004), "Optimal estimator for the survival distribution and related quantities for treatment policies in two-stage randomization designs in clinical trials", Biometrics 60 (1): 124–133, doi:10.1111/j.0006-341X.2004.00160.x, PMID 15032782 
  • Watkins, C. J. C. H. (1989), "Learning from Delayed Rewards", PhD thesis, Cambridge University, Cambridge, England 
  • Zajonc, T. (2010), Bayesian Inference for Dynamic Treatment Regimes: Mobility, Equity, and Efficiency in Student Tracking, SSRN 1689707