Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments, and the interpretation of the results.
Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have used statistical concepts since their beginning to understand observed experimental results, and some geneticists even contributed statistical advances through the development of new methods and tools. Gregor Mendel started genetics studies by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's work on Mendelian inheritance, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model, with fractions of the heredity coming from each ancestor composing an infinite series. He called this the "Law of Ancestral Heredity". His ideas were strongly opposed by William Bateson, who followed Mendel's conclusion that genetic inheritance came exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis.
Resolving these differences also made it possible to define the concept of population genetics and brought together genetics and evolution. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.
J. B. S. Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. He also developed the theory of primordial soup.
In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study.
Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."[3]
Any research in the life sciences is proposed to answer a scientific question. To answer this question with high certainty, we need accurate results. The correct definition of the main hypothesis and the research plan will reduce errors when making decisions about the phenomenon under study. The research plan might include the research question, the hypothesis to be tested, the experimental design, data collection methods, data analysis perspectives and costs involved. It is essential to conduct the study based on the three basic principles of experimental statistics: randomization, replication, and local control.
The research question will define the objective of a study. The research will be guided by the question, so it needs to be concise while remaining focused on interesting and novel topics that may improve science and knowledge in that field. To define the way the scientific question should be asked, an exhaustive literature review might be necessary, so that the research adds value to the scientific community.[4]
Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposition is called the null hypothesis (H0) and is usually based on established knowledge about the topic or an obvious occurrence of the phenomenon, supported by a deep literature review. We can say it is the standard expected answer for the data under the situation in test. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis is the denial of H0. It assumes some degree of association between the treatment and the outcome. Both hypotheses are framed by the research question and its expected and unexpected answers.[4]
As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mice metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects on the animals' metabolism (H1: μ1 ≠ μ2).
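As a minimal sketch of how such a hypothesis could be examined, the following Python example compares two simulated groups with a two-sample (Welch's) t-test; the data, group sizes and significance level are illustrative assumptions, not results from any real diet experiment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical metabolic measurements (e.g., weight gain in grams) for two diets
diet_a = rng.normal(loc=20.0, scale=3.0, size=30)  # group fed diet A
diet_b = rng.normal(loc=22.5, scale=3.0, size=30)  # group fed diet B

# Welch's two-sample t-test: H0: mu1 = mu2 vs H1: mu1 != mu2
t_stat, p_value = stats.ttest_ind(diet_a, diet_b, equal_var=False)

alpha = 0.05  # predefined significance level
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```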
The hypothesis is defined by the researcher, according to his or her interest in answering the main question. Besides that, there can be more than one alternative hypothesis. An alternative hypothesis can specify not only differences across observed parameters, but also their direction or magnitude (i.e. greater or smaller).
Usually, a study aims to understand the effect of a phenomenon on a population. In biology, a population is defined as all the individuals of a given species, in a specific area at a given time. In biostatistics, this concept is extended to a variety of possible collections under study. Thus, in biostatistics, a population is not only the individuals, but the total of one specific component of their organisms, such as the whole genome, all the sperm cells of an animal, or the total leaf area of a plant, for example.
It is not possible to take measurements from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population in order to make posterior inferences about the population. Thus, the sample should capture most of the variability across the population.[5] The sample size is determined by several factors, from the scope of the research to the resources available. In clinical research, the trial type, such as inferiority, equivalence, or superiority, is key in determining the sample size.[4]
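As an illustration of how sample size can be planned, the sketch below uses a power calculation for a two-sample t-test; the assumed effect size, significance level and target power are arbitrary values chosen for the example.

```python
from statsmodels.stats.power import TTestIndPower

# Assumptions (illustrative): standardized effect size (Cohen's d), alpha, desired power
effect_size = 0.5   # medium effect
alpha = 0.05        # significance level
power = 0.80        # probability of detecting the effect if it exists

# Required sample size per group for a two-sided two-sample t-test
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                   power=power, alternative='two-sided')
print(f"Approximately {n_per_group:.0f} subjects per group are needed.")
```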
Data collection methods must be considered in research planning, because they strongly influence the sample size and experimental design.
Data collection varies according to the type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering the presence or intensity of disease and using score criteria to categorize levels of occurrence.[7] For quantitative data, collection is done by measuring numerical information using instruments.
In agriculture and biology studies, yield data and its components can be obtained by metric measures. However, pest and disease injuries in plants are obtained by observation, considering score scales for levels of damage. Especially in genetic studies, modern methods for data collection in the field and laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments and make it possible to evaluate many plots in less time than a human-only method of data collection. Finally, all collected data of interest must be stored in an organized data frame for further analysis.
Data can be represented through tables or graphical representations, such as line charts, bar charts, histograms, or scatter plots. Also, measures of central tendency and variability can be very useful to describe an overview of the data. Some examples follow:
One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of the data. Frequency can be:[8]
Absolute: represents the number of times that a given value appears;
Relative: obtained by the division of the absolute frequency by the total number;
In the next example, we have the number of genes in ten operons of the same organism.
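A minimal sketch of how absolute and relative frequencies could be tabulated for such data, assuming hypothetical gene counts for ten operons (the numbers below are invented for illustration):

```python
from collections import Counter

# Hypothetical number of genes observed in each of ten operons
genes_per_operon = [2, 3, 3, 4, 2, 3, 5, 3, 4, 2]

absolute = Counter(genes_per_operon)          # absolute frequencies
total = sum(absolute.values())

print("genes  absolute  relative")
for value in sorted(absolute):
    abs_freq = absolute[value]
    rel_freq = abs_freq / total               # relative frequency
    print(f"{value:5d}  {abs_freq:8d}  {rel_freq:8.2f}")
```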
Figure A: Line graph example, the birth rate in Brazil (2010–2016);[9] Figure B: Bar chart example, the birth rate in Brazil for the December months from 2010 to 2016; Figure C: Box plot example, the number of glycines in the proteome of eight different organisms (A–H); Figure D: Scatter plot example.
Line graphs represent the variation of a value over another metric, such as time. In general, values are represented in the vertical axis, while the time variation is represented in the horizontal axis.[10]
A bar chart is a graph that shows categorical data as bars whose heights (vertical bars) or lengths (horizontal bars) are proportional to the values they represent. Bar charts provide an image that could also be represented in a tabular format.[10]
In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016.[9] The sharp fall in December 2016 reflects the impact of the Zika virus outbreak on the birth rate in Brazil.
The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson.[11]
A scatter plot is a mathematical diagram that uses Cartesian coordinates to display values of a dataset. A scatter plot shows the data as a set of points, each one presenting the value of one variable determining the position on the horizontal axis and another variable on the vertical axis.[12] They are also called scatter graph, scatter chart, scattergram, or scatter diagram.[13]
A box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the whisker lines, and the box represents the interquartile range (IQR), i.e. the middle 25–75% of the data. Outliers may be plotted as circles.
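The sketch below shows how such descriptive plots could be drawn with matplotlib for randomly generated data; the dataset and styling choices are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
x = rng.normal(size=200)                              # hypothetical measurements
y = 2.0 * x + rng.normal(scale=0.5, size=200)         # a second, related variable
groups = [rng.normal(loc=m, size=50) for m in (0, 1, 2)]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(x, bins=20)                              # histogram (frequency distribution)
axes[0].set_title("Histogram")
axes[1].scatter(x, y, s=10)                           # scatter plot of two variables
axes[1].set_title("Scatter plot")
axes[2].boxplot(groups)                               # box plots for three groups
axes[2].set_title("Box plot")
plt.tight_layout()
plt.show()
```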
Although correlations between two different kinds of data can be suggested by graphs, such as a scatter plot, it is necessary to validate this through numerical information. For this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association.[10]
Scatter diagrams demonstrating the Pearson correlation for different values of ρ.
The Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by ρ (rho) for the population and r for the sample, assumes values between −1 and 1, where ρ = 1 represents a perfect positive correlation, ρ = −1 represents a perfect negative correlation, and ρ = 0 indicates no linear correlation.[10]
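A minimal sketch of estimating r from a sample, assuming two hypothetical simulated variables:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
x = rng.normal(size=100)                        # hypothetical variable X
y = 0.8 * x + rng.normal(scale=0.6, size=100)   # Y, positively related to X

r, p_value = stats.pearsonr(x, y)               # sample correlation and its p-value
print(f"r = {r:.3f}, p = {p_value:.4g}")
```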
Statistical inference is used to draw conclusions[15] about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters to describe the population of interest, but, since the data are limited, it is necessary to make use of a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial for making inferences.[5]
Hypothesis testing is essential for making inferences about populations, aiming to answer research questions, as outlined in the "Research planning" section. Authors have defined four steps to be set:[5]
The hypothesis to be tested: as stated earlier, we have to work with the definition of a null hypothesis (H0), which is going to be tested, and an alternative hypothesis. But they must be defined before the experiment is implemented.
Significance level and decision rule: A decision rule depends on the level of significance, or, in other words, the acceptable error rate (α). It is easier to think of it as a critical value that determines statistical significance when a test statistic is compared with it. So, α also has to be predefined before the experiment.
Experiment and statistical analysis: This is when the experiment is actually implemented following the appropriate experimental design, the data are collected and the most suitable statistical tests are applied.
Inference: This is made when the null hypothesis is rejected or not rejected, based on the evidence that the comparison of p-values and α brings. Note that failure to reject H0 just means that there is not enough evidence to support its rejection, not that this hypothesis is true.
A confidence interval is a range of values that can contain the true parameter value at a given level of confidence. The first step is to compute the best unbiased estimate of the population parameter. The upper limit of the interval is obtained by adding to this estimate the product of the standard error of the mean and a multiplier determined by the confidence level. The calculation of the lower limit is similar, but a subtraction must be applied instead of an addition.[5]
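A minimal sketch of a confidence interval for a mean, assuming hypothetical normally distributed measurements and a 95% confidence level (the multiplier here is a Student's t quantile):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
sample = rng.normal(loc=10.0, scale=2.0, size=25)  # hypothetical measurements

mean = sample.mean()                  # best unbiased estimate of the population mean
sem = stats.sem(sample)               # standard error of the mean
confidence = 0.95
t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=len(sample) - 1)

lower = mean - t_crit * sem
upper = mean + t_crit * sem
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```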
The significance level, denoted by α, is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β, and the statistical power of the test is 1 − β.
The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for calling results significant. If p is less than α, the null hypothesis (H0) is rejected.[16]
In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and a strategy is needed to account for this. This is commonly achieved by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with a value of α = α*/m. This ensures that the familywise error rate in all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). This procedure ensures that, for independent tests, the false discovery rate is at most q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives.[17]
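As a sketch of both approaches, the example below applies the Bonferroni and Benjamini–Hochberg (FDR) adjustments to a hypothetical vector of p-values using statsmodels; the p-values themselves are invented for illustration.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from m = 8 independent tests
p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]

# Familywise error control (Bonferroni): each test effectively compared with alpha*/m
reject_bonf, p_bonf, _, _ = multipletests(p_values, alpha=0.05, method='bonferroni')

# False discovery rate control (Benjamini-Hochberg)
reject_fdr, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method='fdr_bh')

print("Bonferroni rejections:", reject_bonf.sum())
print("FDR (BH) rejections:  ", reject_fdr.sum())
```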
The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, then the null may be frequently rejected even if the main hypothesis is true. Such rejections are said to be due to model mis-specification.[18] Verifying whether the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification.
Recent developments have made a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale, and the ability to perform much more complex analyses using computational techniques. This comes from developments in areas such as sequencing technologies, bioinformatics and machine learning (machine learning in bioinformatics).
New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously.[19] Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells. However, only a fraction of genes will be differentially expressed.[20]
Multicollinearity often occurs in high-throughput biostatistical settings. Due to high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one. It could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high dimensional data (i.e. when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R2-values despite very low predictive power of the statistical model. These classical statistical techniques (especially least squares linear regression) were developed for low dimensional data (i.e. where the number of observations n is much larger than the number of predictors p: n >> p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R2 of the validation test set, not those of the training set.
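A minimal sketch of dimension reduction followed by validation on held-out data, using simulated high-dimensional predictors (n < p); the data, the number of components and the train/test split are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n, p = 80, 500                                  # fewer observations than predictors
X = rng.normal(size=(n, p))                     # hypothetical expression levels
y = X[:, :5].sum(axis=1) + rng.normal(size=n)   # response driven by a few predictors

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Dimension reduction: project predictors onto a few principal components
pca = PCA(n_components=10).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

model = LinearRegression().fit(Z_train, y_train)
print("Training R2:  ", r2_score(y_train, model.predict(Z_train)))
print("Validation R2:", r2_score(y_test, model.predict(Z_test)))  # the figure of merit
```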
Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes.[21] These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: it is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach.
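GSEA itself uses a ranked enrichment score assessed by permutation; as a simplified illustration of the same idea of testing gene sets rather than single genes, the sketch below runs an over-representation test of one hypothetical pathway with the hypergeometric distribution. All counts are invented.

```python
from scipy.stats import hypergeom

# Hypothetical counts for a simple over-representation test of one gene set
M = 20000   # genes measured in total
n = 150     # genes belonging to the pathway of interest
N = 500     # genes found differentially expressed
k = 15      # differentially expressed genes that fall inside the pathway

# Probability of observing k or more pathway genes among the N discoveries by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"Over-representation p-value: {p_value:.3g}")
```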
Bioinformatics advances in databases, data mining, and biological interpretation
The development of biological databases enables storage and management of biological data, with the possibility of ensuring access for users around the world. They are useful for researchers depositing data, retrieving information and files (raw or processed) originating from other experiments, or indexing scientific articles, as in PubMed. Another possibility is to search for a desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), to the characterization of genes and their pathways (KEGG) and to the description of gene function, classified by cellular component, molecular function and biological process (Gene Ontology).[22] In addition to databases that contain specific molecular information, there are others that are ample in the sense that they store information about an organism or group of organisms. As an example of a database directed towards just one organism, but that contains much data about it, there is the Arabidopsis thaliana genetic and molecular database – TAIR.[23] Phytozome,[24] in turn, stores the assemblies and annotation files of dozens of plant genomes, also containing visualization and analysis tools. Moreover, there is an interconnection between some databases for information exchange/sharing, and a major initiative was the International Nucleotide Sequence Database Collaboration (INSDC),[25] which relates data from DDBJ,[26] EMBL-EBI,[27] and NCBI.[28]
Nowadays, the increase in size and complexity of molecular datasets leads to the use of powerful statistical methods provided by computer science algorithms developed in the machine learning area. Therefore, data mining and machine learning allow detection of patterns in data with a complex structure, such as biological data, by using methods of supervised and unsupervised learning, regression, detection of clusters and association rule mining, among others.[22] To indicate some of them, self-organizing maps and k-means are examples of cluster algorithms; neural network implementations and support vector machine models are examples of common machine learning algorithms.
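As a minimal sketch of unsupervised learning on such data, the example below clusters simulated expression profiles with k-means; the data and the choice of three clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=5)
# Hypothetical expression profiles: 60 samples x 10 genes, drawn from three groups
profiles = np.vstack([rng.normal(loc=m, size=(20, 10)) for m in (0.0, 2.0, 4.0)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(profiles)    # cluster assignment for each sample
print("Cluster sizes:", np.bincount(labels))
```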
Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, passing through data generation and analysis, and ending with biological interpretation of the results.[22]
On the other hand, the advent of modern computer technology and relatively cheap computing resources have enabled computer-intensive biostatistical methods like bootstrapping and re-sampling methods.
In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees. Decision trees have the advantage that they can be drawn and interpreted (even with a basic understanding of mathematics and statistics). Random forests have thus been used for clinical decision support systems.[citation needed]
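A minimal sketch of a random forest classifier on a simulated two-class dataset; the data, the number of trees and the train/test split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=2)
# Hypothetical data: 200 patients x 20 biomarkers, two diagnostic classes
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Test accuracy:", forest.score(X_test, y_test))
print("Most informative biomarker index:", forest.feature_importances_.argmax())
```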
With new technologies and genetics knowledge, biostatistics is now also used for systems medicine, which consists of a more personalized medicine. For this, an integration of data from different sources is made, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new omics technologies.[29]
Population genetics and statistical genetics are studied in order to link variation in genotype with variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region that is responsible for a continuous trait is called a quantitative trait locus (QTL). The study of QTLs became feasible by using molecular markers and measuring traits in populations, but their mapping requires a population obtained from an experimental crossing, like an F2 or recombinant inbred strains/lines (RILs). To scan for QTL regions in a genome, a gene map based on linkage has to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.[30]
However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large numbers of offspring. Furthermore, allele diversity is restricted to individuals originating from contrasting parents, which limits studies of allele diversity when we have a panel of individuals representing a natural population.[31] For this reason, the genome-wide association study (GWAS) was proposed in order to identify QTLs based on linkage disequilibrium, that is, the non-random association between traits and molecular markers. It was leveraged by the development of high-throughput SNP genotyping.[32]
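As a sketch of the basic building block of such association analyses, the example below regresses a simulated quantitative trait on the allele dosage (0, 1 or 2) of a single hypothetical SNP; real GWAS pipelines additionally adjust for population structure and correct for the many markers tested.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=11)
n = 300
genotype = rng.integers(0, 3, size=n)                    # SNP dosage: 0, 1 or 2 copies
trait = 0.4 * genotype + rng.normal(scale=1.0, size=n)   # hypothetical quantitative trait

# Single-marker test: linear regression of the trait on allele dosage
result = stats.linregress(genotype, trait)
print(f"effect = {result.slope:.3f}, p = {result.pvalue:.3g}")
```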
In animal and plant breeding, the use of markers in selection aiming for breeding, mainly molecular markers, contributed to the development of marker-assisted selection. While QTL mapping is limited by its resolution, GWAS does not have enough power for rare variants of small effect that are also influenced by the environment. So, the concept of genomic selection (GS) arose in order to use all molecular markers in the selection and allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population and develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals belonging to a genotyped but not phenotyped population, called the testing population.[33] This kind of study can also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype predictions, which is used to check the accuracy of the model.
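A minimal sketch of the genomic selection idea, assuming simulated marker matrices and phenotypes; ridge regression is used here as a simple stand-in for RR-BLUP-type models that shrink all marker effects jointly.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(seed=8)
n_train, n_test, n_markers = 200, 50, 1000

# Hypothetical marker matrices (allele dosages 0/1/2) and small true marker effects
Z_train = rng.integers(0, 3, size=(n_train, n_markers)).astype(float)
Z_test = rng.integers(0, 3, size=(n_test, n_markers)).astype(float)
effects = rng.normal(scale=0.05, size=n_markers)
y_train = Z_train @ effects + rng.normal(size=n_train)   # phenotyped training population

# Shrinkage regression over all markers (a stand-in for RR-BLUP)
model = Ridge(alpha=100.0).fit(Z_train, y_train)
gebv = model.predict(Z_test)                              # GEBVs of the testing population
print("Predicted GEBVs (first five):", np.round(gebv[:5], 2))
```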
As a summary, some points about the application of quantitative genetics are:
Studies of differential expression of genes from RNA-Seq data, as for RT-qPCR and microarrays, demand comparison of conditions. The goal is to identify genes which have a significant change in abundance between different conditions. Then, experiments are designed appropriately, with replicates for each condition/treatment, randomization and blocking, when necessary. In RNA-Seq, the quantification of expression uses the information of mapped reads that are summarized in some genetic unit, such as exons that are part of a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better explained by other distributions. The first distribution used was the Poisson, but it underestimates the sample error, leading to false positives. Currently, biological variation is considered by methods that estimate a dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance, and, as the number of genes is high, multiple-testing correction has to be considered.[34] Some examples of other analyses on genomics data come from microarray or proteomics experiments,[35][36] often concerning diseases or disease stages.[37]
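A minimal sketch of testing one gene for differential expression with a negative binomial GLM, using statsmodels; the counts are simulated, the dispersion is fixed arbitrarily, and dedicated tools such as DESeq2 or edgeR instead estimate dispersion and normalization factors from the full dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=4)

# Hypothetical read counts for one gene: 5 control and 5 treated replicates
counts = np.concatenate([rng.poisson(100, size=5), rng.poisson(160, size=5)])
condition = np.repeat([0, 1], 5)                       # 0 = control, 1 = treatment

# Negative binomial GLM with a log link; dispersion (alpha) fixed for illustration
design = sm.add_constant(condition)
model = sm.GLM(counts, design, family=sm.families.NegativeBinomial(alpha=0.1)).fit()

print("log fold-change estimate:", round(model.params[1], 3))
print("p-value for the condition effect:", round(model.pvalues[1], 4))
```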
There are many tools that can be used to perform statistical analysis on biological data. Most of them are useful in other areas of knowledge, covering a large number of applications (listed alphabetically). Here are brief descriptions of some of them:
ASReml: Another software package developed by VSNi[40] that can also be used in the R environment as a package. It is developed to estimate variance components under a general linear mixed model using restricted maximum likelihood (REML). Models with fixed and random effects, nested or crossed, are allowed. It gives the possibility to investigate different variance-covariance matrix structures.
CycDesigN:[41] A computer package developed by VSNi[40] that helps researchers create experimental designs and analyze data coming from a design in one of three classes handled by CycDesigN. These classes are resolvable, non-resolvable, partially replicated and crossover designs. It also includes less commonly used designs, such as the Latinized ones, e.g. t-Latinized designs.[42]
Orange: A programming interface for high-level data processing, data mining and data visualization. It includes tools for gene expression and genomics.[22]
R: An open source environment and programming language dedicated to statistical computing and graphics. It is an implementation of the S language maintained by CRAN.[43] In addition to its functions to read data tables, compute descriptive statistics, and develop and evaluate models, its repository contains packages developed by researchers around the world. This allows the development of functions written to deal with the statistical analysis of data that comes from specific applications.[44] In the case of bioinformatics, for example, there are packages located in the main repository (CRAN) and in others, such as Bioconductor. It is also possible to use packages under development that are shared in hosting services such as GitHub.
SAS: A widely used data analysis software, found in universities, services and industry. Developed by a company with the same name (SAS Institute), it uses the SAS language for programming.
PLA 3.0:[45] A biostatistical analysis software for regulated environments (e.g. drug testing) which supports quantitative response assays (parallel-line, parallel-logistics, slope-ratio) and dichotomous assays (quantal response, binary assays). It also supports weighting methods for combination calculations and the automatic data aggregation of independent assay data.
Weka: A Java software for machine learning and data mining, including tools and methods for visualization, clustering, regression, association rules, and classification. There are tools for cross-validation, bootstrapping and a module for algorithm comparison. Weka can also be run from other programming languages such as Perl or R.[22]
Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics.
In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, will have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments will often host theoretical/methodological research which is less common in biostatistics programs, and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics, and biological areas other than medicine.
Nizamuddin, Sarah L.; Nizamuddin, Junaid; Mueller, Ariel; Ramakrishna, Harish; Shahul, Sajid S. (October 2017). "Developing a Hypothesis and Statistical Planning". Journal of Cardiothoracic and Vascular Anesthesia. 31 (5): 1878–1882. doi:10.1053/j.jvca.2017.04.020. PMID 28778775.
Szczech, Lynda Anne; Coladonato, Joseph A.; Owen, William F. (4 October 2002). "Key Concepts in Biostatistics: Using Statistics to Answer the Question "Is There a Difference?"". Seminars in Dialysis. 15 (5): 347–351. doi:10.1046/j.1525-139X.2002.00085.x. PMID 12358639. S2CID 30875225.
Forthofer, Ronald N.; Lee, Eun Sul (1995). Introduction to Biostatistics. A Guide to Design, Analysis, and Discovery. Academic Press. ISBN 978-0-12-262270-0.
Benjamini, Y.; Hochberg, Y. (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing". Journal of the Royal Statistical Society. Series B (Methodological). 57: 289–300.
Helen Causton; John Quackenbush; Alvis Brazma (2003). Statistical Analysis of Gene Expression Microarray Data. Wiley-Blackwell.
Terry Speed (2003). Microarray Gene Expression Data Analysis: A Beginner's Guide. Chapman & Hall/CRC.
Frank Emmert-Streib; Matthias Dehmer (2010). Medical Biostatistics for Complex Diseases. Wiley-Blackwell. ISBN 978-3-527-32585-6.
Warren J. Ewens; Gregory R. Grant (2004). Statistical Methods in Bioinformatics: An Introduction. Springer.
Matthias Dehmer; Frank Emmert-Streib; Armin Graber; Armindo Salvador (2011). Applied Statistics for Network Biology: Methods in Systems Biology. Wiley-Blackwell. ISBN 978-3-527-32750-8.
Piepho, Hans-Peter; Williams, Emlyn R; Michel, Volker (2015). "Beyond Latin Squares: A Brief Tour of Row-Column Designs". Agronomy Journal. 107 (6): 2263. Bibcode:2015AgrJ..107.2263P. doi:10.2134/agronj15.0144.