Multivariable regression models are widely used in the medical literature for diagnosis or prediction. Conventionally, the adequacy of these models is assessed with metrics of diagnostic performance such as sensitivity and specificity, which do not account for the clinical utility of a given model. Decision curve analysis (DCA) is a widely used method for measuring this utility. In this framework, a clinical judgment is made about the relative value of the benefit of treating a true-positive case and the harm of treating a false-positive case under a given prediction model. This judgment, which reflects the preferences of patients or policy-makers, is expressed as a threshold probability. A decision-analytic measure called net benefit, which puts benefits and harms on the same scale, is then calculated across the range of threshold probabilities. This article is a technical note on how to perform DCA in the R environment. The decision curve is plotted with the ggplot2 system. Correction for overfitting is performed with either the bootstrap or cross-validation. Confidence intervals and P values for the comparison of two models are calculated with the bootstrap method. Furthermore, we describe a method for computing the area under the net benefit curve for the comparison of two models. The average deviation about the probability threshold (ADAPT), a more recently developed index of the utility of a prediction model, is also introduced.
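
For orientation, the sketch below shows the core net-benefit computation in R: at a threshold probability pt, net benefit is TP/n − (FP/n) × pt/(1 − pt). The simulated data, the variable names (y, x1, x2), and the plain glm()/ggplot2 code are illustrative assumptions for this note's summary only; they omit the overfitting correction, confidence intervals, area under the net benefit curve, and ADAPT described above.

## Minimal decision-curve sketch (illustrative; not the full workflow of this article).
library(ggplot2)

set.seed(1)
n  <- 500
df <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
df$y <- rbinom(n, 1, plogis(-1 + 0.8 * df$x1 + 0.5 * df$x2))

## Fit a multivariable logistic model and obtain predicted probabilities.
fit <- glm(y ~ x1 + x2, data = df, family = binomial)
p   <- predict(fit, type = "response")

## Net benefit at threshold pt: TP/n - (FP/n) * pt / (1 - pt).
net_benefit <- function(pred, outcome, pt) {
  tp <- sum(pred >= pt & outcome == 1)
  fp <- sum(pred >= pt & outcome == 0)
  (tp - fp * pt / (1 - pt)) / length(outcome)
}

thresholds <- seq(0.01, 0.60, by = 0.01)
dc <- data.frame(
  threshold = thresholds,
  model     = sapply(thresholds, net_benefit, pred = p, outcome = df$y),
  all       = sapply(thresholds, net_benefit, pred = rep(1, n), outcome = df$y),
  none      = 0
)

## Decision curve: the model versus the "treat all" and "treat none" strategies.
ggplot(dc, aes(threshold)) +
  geom_line(aes(y = model, colour = "Model")) +
  geom_line(aes(y = all,   colour = "Treat all")) +
  geom_line(aes(y = none,  colour = "Treat none")) +
  coord_cartesian(ylim = c(-0.05, max(dc$model))) +
  labs(x = "Threshold probability", y = "Net benefit", colour = NULL)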