Details

Medical Statistics from Scratch

An Introduction for Health Professionals
4th edition

by: David Bowers

CHF 45.00

Publisher: Wiley
Format: EPUB
Published: 16.08.2019
ISBN/EAN: 9781119523949
Language: English
Number of pages: 495

DRM-protected eBook. To read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

Correctly understanding and using medical statistics is a key skill for all medical students and health professionals.

In an informal and friendly style, Medical Statistics from Scratch provides a practical foundation for everyone whose first interest is probably not medical statistics. Keeping the level of mathematics to a minimum, it clearly illustrates statistical concepts and practice with numerous real-world examples and cases drawn from current medical literature.

Medical Statistics from Scratch is an ideal learning partner for all medical students and health professionals needing an accessible introduction, or a friendly refresher, to the fundamentals of medical statistics.
Table of contents

Preface to the 4th Edition xix
Preface to the 3rd Edition xxi
Preface to the 2nd Edition xxiii
Preface to the 1st Edition xxv
Introduction xxvii

I Some Fundamental Stuff 1

1 First things first – the nature of data 3
Variables and data 3
Where are we going …? 5
The good, the bad, and the ugly – types of variables 5
Categorical data 6
Nominal categorical data 6
Ordinal categorical data 7
Metric data 8
Discrete metric data 8
Continuous metric data 9
How can I tell what type of variable I am dealing with? 10
The baseline table 11

II Descriptive Statistics 15

2 Describing data with tables 17
Descriptive statistics. What can we do with raw data? 18
Frequency tables – nominal data 18
The frequency distribution 19
Relative frequency 20
Frequency tables – ordinal data 20
Frequency tables – metric data 22
Frequency tables with discrete metric data 22
Cumulative frequency 24
Frequency tables with continuous metric data – grouping the raw data 25
Open-ended groups 27
Cross-tabulation – contingency tables 28
Ranking data 30

3 Every picture tells a story – describing data with charts 31
Picture it! 32
Charting nominal and ordinal data 32
The pie chart 32
The simple bar chart 34
The clustered bar chart 35
The stacked bar chart 37
Charting discrete metric data 39
Charting continuous metric data 39
The histogram 39
The box (and whisker) plot 42
Charting cumulative data 44
The cumulative frequency curve with discrete metric data 44
The cumulative frequency curve with continuous metric data 44
Charting time-based data – the time series chart 47
The scatterplot 48
The bubbleplot 49

4 Describing data from its shape 51
The shape of things to come 51
Skewness and kurtosis as measures of shape 52
Kurtosis 55
Symmetric or mound-shaped distributions 56
Normalness – the Normal distribution 56
Bimodal distributions 58
Determining skew from a box plot 59

5 Measures of location – Numbers R us 62
Numbers, percentages, and proportions 62
Preamble 63
Numbers, percentages, and proportions 64
Handling percentages – for those of us who might need a reminder 65
Summary measures of location 67
The mode 68
The median 69
The mean 70
Percentiles 71
Calculating a percentile value 72
What is the most appropriate measure of location? 73

6 Measures of spread – Numbers R us – (again) 75
Preamble 76
The range 76
The interquartile range (IQR) 76
Estimating the median and interquartile range from the cumulative frequency curve 77
The boxplot (also known as the box and whisker plot) 79
Standard deviation 82
Standard deviation and the Normal distribution 84
Testing for Normality 86
Using SPSS 86
Using Minitab 87
Transforming data 88

7 Incidence, prevalence, and standardisation 92
Preamble 93
The incidence rate and the incidence rate ratio (IRR) 93
The incidence rate ratio 94
Prevalence 94
A couple of difficulties with measuring incidence and prevalence 97
Some other useful rates 97
Crude mortality rate 97
Case fatality rate 98
Crude maternal mortality rate 99
Crude birth rate 99
Attack rate 99
Age-specific mortality rate 99
Standardisation – the age-standardised mortality rate 101
The direct method 102
The standard population and the comparative mortality ratio (CMR) 103
The indirect method 106
The standardised mortality rate 107

III The Confounding Problem 111

8 Confounding – like the poor, (nearly) always with us 113
Preamble 114
What is confounding? 114
Confounding by indication 117
Residual confounding 119
Detecting confounding 119
Dealing with confounding – if confounding is such a problem, what can we do about it? 120
Using restriction 120
Using matching 121
Frequency matching 121
One-to-one matching 121
Using stratification 122
Using adjustment 122
Using randomisation 122

IV Design and Data 125

9 Research design – Part I: Observational study designs 127
Preamble 128
Hey ho! Hey ho! it's off to work we go 129
Types of study 129
Observational studies 130
Case reports 130
Case series studies 131
Cross-sectional studies 131
Descriptive cross-sectional studies 132
Confounding in descriptive cross-sectional studies 132
Analytic cross-sectional studies 133
Confounding in analytic cross-sectional studies 134
From here to eternity – cohort studies 135
Confounding in the cohort study design 139
Back to the future – case–control studies 139
Confounding in the case–control study design 141
Another example of a case–control study 142
Comparing cohort and case–control designs 143
Ecological studies 144
The ecological fallacy 145

10 Research design – Part II: getting stuck in – experimental studies 146
Clinical trials 147
Randomisation and the randomised controlled trial (RCT) 148
Block randomisation 149
Stratification 149
Blinding 149
The crossover RCT 150
Selection of participants for an RCT 153
Intention to treat analysis (ITT) 154

11 Getting the participants for your study: ways of sampling 156
From populations to samples – statistical inference 157
Collecting the data – types of sample 158
The simple random sample and its offspring 159
The systematic random sample 159
The stratified random sample 160
The cluster sample 160
Consecutive and convenience samples 161
How many participants should we have? Sample size 162
Inclusion and exclusion criteria 162
Getting the data 163

V Chance Would Be a Fine Thing 165

12 The idea of probability 167
Preamble 167
Calculating probability – proportional frequency 168
Two useful rules for simple probability 169
Rule 1. The multiplication rule for independent events 169
Rule 2. The addition rule for mutually exclusive events 170
Conditional and Bayesian statistics 171
Probability distributions 171
Discrete versus continuous probability distributions 172
The binomial probability distribution 172
The Poisson probability distribution 173
The Normal probability distribution 174

13 Risk and odds 175
Absolute risk and the absolute risk reduction (ARR) 176
The risk ratio 178
The reduction in the risk ratio (or relative risk reduction (RRR)) 178
A general formula for the risk ratio 179
Reference value 179
Number needed to treat (NNT) 180
What happens if the initial risk is small? 181
Confounding with the risk ratio 182
Odds 183
Why you can't calculate risk in a case–control study 185
The link between probability and odds 186
The odds ratio 186
Confounding with the odds ratio 189
Approximating the risk ratio from the odds ratio 189

VI The Informed Guess – An Introduction to Confidence Intervals 191

14 Estimating the value of a single population parameter – the idea of confidence intervals 193
Confidence interval estimation for a population mean 194
The standard error of the mean 195
How we use the standard error of the mean to calculate a confidence interval for a population mean 197
Confidence interval for a population proportion 200
Estimating a confidence interval for the median of a single population 203

15 Using confidence intervals to compare two population parameters 206
What's the difference? 207
Comparing two independent population means 207
An example using birthweights 208
Assessing the evidence using the confidence interval 211
Comparing two paired population means 215
Within-subject and between-subject variations 215
Comparing two independent population proportions 217
Comparing two independent population medians – the Mann–Whitney rank sums method 219
Comparing two matched population medians – the Wilcoxon signed-ranks method 220

16 Confidence intervals for the ratio of two population parameters 224
Getting a confidence interval for the ratio of two independent population means 225
Confidence interval for a population risk ratio 226
Confidence intervals for a population odds ratio 229
Confidence intervals for hazard ratios 232

VII Putting it to the Test 235

17 Testing hypotheses about the difference between two population parameters 237
Answering the question 238
The hypothesis 238
The null hypothesis 239
The hypothesis testing process 240
The p-value and the decision rule 241
A brief summary of a few of the commonest tests 242
Using the p-value to compare the means of two independent populations 244
Interpreting computer hypothesis test results for the difference in two independent population means – the two-sample t test 245
Output from Minitab – two-sample t test of difference in mean birthweights of babies born to white mothers and to non-white mothers 245
Output from SPSS: two-sample t test of difference in mean birthweights of babies born to white mothers and to non-white mothers 246
Comparing the means of two paired populations – the matched-pairs t test 248
Using p-values to compare the medians of two independent populations: the Mann–Whitney rank-sums test 248
How the Mann–Whitney test works 249
Correction for multiple comparisons 250
The Bonferroni correction for multiple testing 250
Interpreting computer output for the Mann–Whitney test 252
With Minitab 252
With SPSS 252
Two matched medians – the Wilcoxon signed-ranks test 254
Confidence intervals versus hypothesis testing 254
What could possibly go wrong? 255
Types of error 256
The power of a test 257
Maximising power – calculating sample size 258
Rule of thumb 1. Comparing the means of two independent populations (metric data) 258
Rule of thumb 2. Comparing the proportions of two independent populations (binary data) 259

18 The Chi-squared (χ²) test – what, why, and how? 261
Of all the tests in all the world – you had to walk into my hypothesis testing procedure 262
Using chi-squared to test for relatedness or for the equality of proportions 262
Calculating the chi-squared statistic 265
Using the chi-squared statistic 267
Yates's correction (continuity correction) 268
Fisher's exact test 268
The chi-squared test with Minitab 269
The chi-squared test with SPSS 270
The chi-squared test for trend 272
SPSS output for chi-squared trend test 274

19 Testing hypotheses about the ratio of two population parameters 276
Preamble 276
The chi-squared test with the risk ratio 277
The chi-squared test with odds ratios 279
The chi-squared test with hazard ratios 281

VIII Becoming Acquainted 283

20 Measuring the association between two variables 285
Preamble – plotting data 286
Association 287
The scatterplot 287
The correlation coefficient 290
Pearson's correlation coefficient 290
Is the correlation coefficient statistically significant in the population? 292
Spearman's rank correlation coefficient 294

21 Measuring agreement 298
To agree or not agree: that is the question 298
Cohen's kappa (κ) 300
Some shortcomings of kappa 303
Weighted kappa 303
Measuring the agreement between two metric continuous variables, the Bland–Altman plot 303

IX Getting into a Relationship 307

22 Straight line models: linear regression 309
Health warning! 310
Relationship and association 310
A causal relationship – explaining variation 312
Refresher – finding the equation of a straight line from a graph 313
The linear regression model 314
First, is the relationship linear? 315
Estimating the regression parameters – the method of ordinary least squares (OLS) 316
Basic assumptions of the ordinary least squares procedure 317
Back to the example – is the relationship statistically significant? 318
Using SPSS to regress birthweight on mother's weight 318
Using Minitab 319
Interpreting the regression coefficients 320
Goodness-of-fit, R² 320
Multiple linear regression 322
Adjusted goodness-of-fit: R̄² 324
Including nominal covariates in the regression model: design variables and coding 326
Building your model. Which variables to include? 327
Automated variable selection methods 328
Manual variable selection methods 329
Adjustment and confounding 330
Diagnostics – checking the basic assumptions of the multiple linear regression model 332
Analysis of variance 333

23 Curvy models: logistic regression 334
A second health warning! 335
The binary outcome variable 335
Finding an appropriate model when the outcome variable is binary 335
The logistic regression model 337
Estimating the parameter values 338
Interpreting the regression coefficients 338
Have we got a significant result? Statistical inference in the logistic regression model 340
The odds ratio 341
The multiple logistic regression model 343
Building the model 344
Goodness-of-fit 346

24 Counting models: Poisson regression 349
Preamble 350
Poisson regression 350
The Poisson regression equation 351
Estimating β0 and β1 with the estimators b0 and b1 352
Interpreting the estimated coefficients of a Poisson regression, b0 and b1 352
Model building – variable selection 355
Goodness-of-fit 357
Zero-inflated Poisson regression 358
Negative binomial regression 359
Zero-inflated negative binomial regression 361

X Four More Chapters 363

25 Measuring survival 365
Preamble 366
Censored data 366
A simple example of survival in a single group 366
Calculating survival probabilities and the proportion surviving: the Kaplan–Meier table 368
The Kaplan–Meier curve 369
Determining median survival time 369
Comparing survival with two groups 370
The log-rank test 371
An example of the log-rank test in practice 372
The hazard ratio 372
The proportional hazards (Cox's) regression model – introduction 373
The proportional hazards (Cox's) regression model – the detail 376
Checking the assumptions of the proportional hazards model 377
An example of proportional hazards regression 377

26 Systematic review and meta-analysis 380
Introduction 381
Systematic review 381
The forest plot 383
Publication and other biases 384
The funnel plot 386
Significance tests for bias – Begg's and Egger's tests 387
Combining the studies: meta-analysis 389
The problem of heterogeneity – the Q and I² tests 389

27 Diagnostic testing 393
Preamble 393
The measures – sensitivity and specificity 394
The positive prediction and negative prediction values (PPV and NPV) 395
The sensitivity–specificity trade-off 396
Using the ROC curve to find the optimal sensitivity versus specificity trade-off 397

28 Missing data 400
The missing data problem 400
Types of missing data 403
Missing completely at random (MCAR) 403
Missing at random (MAR) 403
Missing not at random (MNAR) 404
Consequences of missing data 405
Dealing with missing data 405
Do nothing – the wing and prayer approach 406
List-wise deletion 406
Pair-wise deletion 407
Imputation methods – simple imputation 408
Replacement by the mean 408
Last observation carried forward 409
Regression-based imputation 410
Multiple imputation 411
Full information maximum likelihood (FIML) and other methods 412

Appendix: Table of random numbers 414
References 415
Solutions to Exercises 424
Index 457
DAVID BOWERS, Leeds Institute of Health Sciences, School of Medicine, University of Leeds, Leeds, UK
FOURTH EDITION
Medical Statistics from Scratch
An Introduction for Health Professionals

Medical Statistics from Scratch is the ideal learning partner for all medical students and health professionals needing an accessible introduction, or a friendly refresher, to the fundamentals of medical statistics. This new fourth edition has been completely revised, the examples from current research updated, and new material added.

Praise for previous editions

"I love this book. It lays out the problem of how to approach statistics in a digestible, understandable, and rather complete way. The book actually follows my biostatistics class very nicely even though the class is using a different and more difficult text. I wish I was in class with the writer of this book. He is really a great teacher. This is now one of my favorite books, and I carry it with me all the time."

"After years of trying and failing, this is the only book on medical statistics that I have managed to read and understand. I would certainly recommend this to anyone, especially medical professionals who need to have a good grasp of statistics in order to take up postgraduate exams or to understand peer-reviewed publications. I especially found the exercises quite useful. I only wish I had come across this book earlier."

"I thought this was an outstanding book. It is organised in a way that logically walks you through the rationale behind picking the appropriate statistical tool for your type of data. It is comprehensive in covering the different situations you'll encounter whether you're designing your own study or reading someone else's. The mathematics are presented in an easy-to-understand format, striking just the right balance of providing the important concepts without getting bogged down in minute details. It utilises practical examples and references from the medical literature that you'll be comfortable applying day one to that journal lying on your desk. Whether you're starting out as a student or have been in practice for years and want a refresher, this text should be on your shelf."

"This book will help the average healthcare worker understand the essentials of statistics to prepare for a board or be involved in medical research. It is a great vantage point to understand the concept and go from there."

"Starts with very basic information and lays the information out clearly in a logical sequence that builds up at an easy pace. Plenty of practice exercises to help cement the concepts taught."

"Medical Statistics from Scratch is an excellent introduction which I frequently recommend to students and colleagues with little or no knowledge of statistics!"

"My work involves much analysis and evaluation of medical studies. This book helps me, a 'non-scientist', make certain that my lack of statistical training does not lead me astray. I found this very helpful."

"I used this book while I was doing a medical statistics module for my degree. I was new to statistics and found this book a very good introduction for a complete beginner. The language is very simple, chatty, and easy to understand. There are worked examples and questions and answers. It covers the basics of statistics first, like standard deviations, averages, etc., and then progresses onto the medical statistics such as the log-rank test, survival curves, etc."
<p>"I've been wanting to improve my ability to critically read articles from the medical literature and have found your book to be the perfect tool for that purpose. It's easy to read, understandable, and concise. What has been most valuable to me is how well you explain the concepts and rationale behind a method rather than just the mechanics of the method itself. Thank you for a job well done."

You may also be interested in these products:

Small-Animal SPECT Imaging
by: Matthew A. Kupinski, Harrison H. Barrett
PDF ebook
CHF 260.00
Frontiers in Biochip Technology
by: Wan-Li Xing, Jing Cheng
PDF ebook
CHF 177.00
BioMEMS and Biomedical Nanotechnology
by: Mihrimah Ozkan, Mauro Ferrari, Michael Heller
PDF ebook
CHF 236.00