# Distribution

## Chapter 1

1. Population and sample:
    - (a) Population: a collection of all units of interest.
    - (b) Subject: an individual unit of a population.
    - (c) Sample: an observed subset of the units of a population.
2. Parameter and statistic:
    - (a) Parameter: a number that describes a population; it is usually unknown.
    - (b) Statistic: a number that describes a sample. It must be computable from the sample, and is therefore known.
3. Statistical inference is the procedure of using a sample to learn about the population.

## Chapter 2

4. Types of variables:
    - (a) Categorical
    - (b) Quantitative
        - i. Discrete
        - ii. Continuous
5. Graphical summaries for categorical variables:
    - (a) Pie chart
    - (b) Bar plot
6. Graphical summaries for quantitative variables:
    - (a) Stem-and-leaf plot
    - (b) Histogram
    - (c) Box plot
7. Finding the center and spread of quantitative data from its graphical summaries:
    - (a) Stem-and-leaf plot and histogram:
        - i. The center is located at the longest stem(s) in the stem-and-leaf plot or the tallest bar(s) in the histogram.
        - ii. The range begins at the lowest-valued stem with nonzero length (or the leftmost bar with nonzero height) and ends at the highest-valued stem with nonzero length (or the rightmost bar with nonzero height).
    - (b) Box plot:
        - i. The center is located at the thick line in the box.
        - ii. The range begins at the bottom of the lower whisker and ends at the top of the upper whisker.
        - iii. The interquartile range (IQR) is the height of the box.
8. Outliers are observations that fall far from the bulk of the data.
9. Shape of a distribution:
    - (a) Number of modes:
        - i. If the histogram contains one peak, the distribution is unimodal.
        - ii. If the histogram contains two peaks, the distribution is bimodal.
        - iii. If the histogram contains no obvious peaks, the distribution is uniform.
    - (b) Skewness:
        - i. If the histogram is symmetric about the center, the distribution is symmetric.
        - ii. If the maximum is farther from the center than the minimum, i.e. the upper tail is longer, the distribution is right-skewed.
        - iii. If the minimum is farther from the center than the maximum, i.e. the lower tail is longer, the distribution is left-skewed.
10. Computing and interpreting the numerical summaries of quantitative data:
    - (a) Measures of center:
        - i. Mean: $\bar{x} = \frac{\sum x_i}{n}$.
        - ii. Median: $M$ = the middle value (when the number of observations is odd), or the average of the middle two values (when the number of observations is even).
        - Note: for a right-skewed distribution the mean is usually greater than the median; for a left-skewed distribution the mean is usually less than the median.
    - (b) Measures of spread:
        - i. Standard deviation: $s = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n-1}}$.
        - ii. Range: maximum − minimum.
        - iii. Interquartile range (IQR): 3rd quartile − 1st quartile.
11. A numerical summary of the data is resistant if extreme observations have little influence on its value. Examples: the median and IQR are resistant, but the mean, range, and standard deviation are not.
12. 3-standard-deviations rule: for approximately bell-shaped distributions, most observations fall within 3 standard deviations of the mean (see also the 68-95-99.7 rule in Chapter 6).

## Chapter 4

13. Experiments and observational studies:
    - (a) In an experiment, the researcher assigns the subjects to various experimental conditions (i.e. treatments) and then observes the response variable.
    - (b) In an observational study, the researcher observes both the explanatory variable(s) and the response variable; there is no assignment of treatments.
14. Causal relationships can only be established through experiments.
15. The sampling frame is the list of subjects in the population from which we draw the sample.
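The numerical summaries of Chapter 2 (items 10–11), and the resistance of the median and IQR, can be checked on a small data set. A minimal sketch using Python's standard library; the data values here are invented for illustration:

```python
import statistics

# Hypothetical data set; 120 is a deliberate extreme observation.
data = [2, 3, 3, 4, 5, 6, 7, 8, 9, 120]

mean = statistics.mean(data)      # not resistant: pulled up by 120
median = statistics.median(data)  # resistant
stdev = statistics.stdev(data)    # sample sd, divides by n - 1

# IQR = 3rd quartile - 1st quartile (quartile conventions vary by software)
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1

print(median)  # 5.5
print(mean)    # 16.7 -- far above the median because of the outlier
```

Dropping the value 120 barely moves the median or IQR, but changes the mean and standard deviation dramatically, which is exactly what "resistant" means in item 11.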
16. Simple random sampling is a method of sampling in which every possible sample of size $n$ from the population is equally likely to be selected. It is the best way to obtain a sample that is representative of the population.
17. Types of biases:
    - (a) Sampling bias: using a sample that is not randomly drawn from the population.
    - (b) Nonresponse bias: subjects in the sample are unavailable or (intentionally or unintentionally) do not respond.
    - (c) Response bias: subjects in the sample give inaccurate responses, either because they are untruthful or because the questions are misleading.
18. A convenience sample is a sample that is easy to obtain. For example, a volunteer sample is a common form of convenience sample. Convenience samples are usually unrepresentative.

## Chapter 5

19. A random phenomenon is an event in which the individual outcomes are uncertain but the long-run behavior is regular.
20. The probability of an event can be interpreted as its long-run proportion of occurring.
21. A probability model contains a list of all possible outcomes and the probability of each outcome.
22. The sample space of a random phenomenon is the set of all possible outcomes.
23. The basic probability rules:
    - (a) $0 \le P(A) \le 1$.
    - (b) Law of total probability: $P(S) = 1$, where $S$ is the sample space.
    - (c) Complement rule: $P(A^c) = 1 - P(A)$, where $A^c$ is the complement of $A$.
    - (d) General addition rule: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$.
    - (e) Partition of probability: $P(A) = P(A \cap B) + P(A \cap B^c)$.
24. Disjoint events:
    - (a) Events $A$ and $B$ are disjoint if $A \cap B$ is empty.
    - (b) When $A$ and $B$ are disjoint, $P(A \cup B) = P(A) + P(B)$ and $P(A \cap B) = 0$.
25. Independent events: events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$, that is, knowing that one occurs does not affect the probability that the other occurs.
26. The conditional probability of event $A$ given event $B$ is the probability that $A$ occurs given that $B$ has occurred, denoted $P(A \mid B)$. Provided that $P(B) > 0$, we have $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$.
27. General multiplication rule: $P(A \cap B) = P(A \mid B)P(B) = P(B \mid A)P(A)$.
28. Two events $A$ and $B$ are independent if any of the following equivalent conditions holds:
    - (a) $P(A \cap B) = P(A)P(B)$.
    - (b) $P(A \mid B) = P(A)$.
    - (c) $P(B \mid A) = P(B)$.

## Chapter 6

29. A random variable is the numerical outcome of a random phenomenon.
30. Types of random variables:
    - (a) Discrete: the possible values can be listed.
    - (b) Continuous: the possible values come from an interval.
31. Properties of discrete random variables:
    - (a) Each possible outcome has a non-negative probability.
    - (b) The probabilities add up to 1.
32. Center and spread of a discrete random variable:
    - (a) Mean: $\mu = \sum x P(x)$, which can be interpreted as the long-run average outcome, i.e. the expected average value of the outcomes if a very large sample is drawn.
    - (b) Standard deviation: $\sigma = \sqrt{\sum (x - \mu)^2 P(x)}$.
33. The probability distributions of continuous random variables are described by density curves, which have the following properties:
    - (a) Non-negative everywhere.
    - (b) For all numbers $a$ and $b$ with $a \le b$, the probability $P(a < X < b)$ equals the area under the density curve between $X = a$ and $X = b$.
    - (c) The total area under the density curve is 1.
34. The normal distribution:
    - (a) The normal distributions are a family of distributions controlled by the parameters $\mu$ and $\sigma$, denoted $N(\mu, \sigma)$.
    - (b) The density curves of all normal distributions have the same overall shape: bell-shaped, symmetric, and centered at $\mu$.
    - (c) The $N(\mu, \sigma)$ distribution has mean and median equal to $\mu$, and standard deviation equal to $\sigma$.
    - (d) Changing $\mu$ (while fixing $\sigma$) shifts the normal density curve horizontally without stretching it; changing $\sigma$ (while fixing $\mu$) stretches the curve while the center remains unchanged.
    - (e) The $N(0, 1)$ distribution is known as the standard normal distribution, and a standard normal random variable is denoted $Z$.
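The mean and standard deviation formulas for a discrete random variable (item 32) can be applied directly to any probability model. A short sketch; the fair six-sided die used here is an invented example, not taken from the notes:

```python
import math

# Hypothetical probability model: a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# Item 32(a): mu = sum of x * P(x)
mu = sum(x * p for x, p in zip(outcomes, probs))

# Item 32(b): sigma = sqrt( sum of (x - mu)^2 * P(x) )
sigma = math.sqrt(sum((x - mu) ** 2 * p for x, p in zip(outcomes, probs)))

print(mu)     # 3.5
print(sigma)  # ≈ 1.7078
```

The mean 3.5 is not a possible outcome of a single roll; it is the long-run average over many rolls, exactly as item 32(a) interprets it.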
35. The 68-95-99.7 rule: for any normal distribution,
    - (a) approximately 68% of the distribution falls within $1\sigma$ of $\mu$;
    - (b) approximately 95% of the distribution falls within $2\sigma$ of $\mu$;
    - (c) approximately 99.7% of the distribution falls within $3\sigma$ of $\mu$.
36. Using the normal probability table:
    - (a) The normal probability table gives the probability $P(Z < a)$ for any number $a$.
    - (b) To compute $P(Z > a)$, rewrite it as $1 - P(Z < a)$ and use the table to find $P(Z < a)$.
    - (c) To compute $P(a < Z < b)$, rewrite it as $P(Z < b) - P(Z < a)$ and use the table to find $P(Z < b)$ and $P(Z < a)$.
    - (d) To compute the $p$-th percentile of $Z$, search the table for the number $a$ such that $P(Z < a)$ is as close to $\frac{p}{100}$ as possible.
37. Standardizing and unstandardizing:
    - (a) Any normal distribution $X \sim N(\mu, \sigma)$ can be transformed into the standard normal distribution through the transformation $Z = \frac{X - \mu}{\sigma}$. Under this transformation, an observation $x$ drawn from the $N(\mu, \sigma)$ distribution corresponds to the value $z = \frac{x - \mu}{\sigma}$ in the standard normal distribution; $z$ is known as the z-score of $x$.
    - (b) Let $X \sim N(\mu, \sigma)$. Probabilities involving $X$ can be computed by first standardizing $X$. For example, $P(X < a)$ can be rewritten as $P(Z < \frac{a - \mu}{\sigma})$ and evaluated from the normal probability table.
    - (c) Let $X \sim N(\mu, \sigma)$. To find the $p$-th percentile $x^*$ of $X$, first find the $p$-th percentile $z^*$ of $Z$ from the normal probability table, then unstandardize $z^*$ into $x^*$ using the formula $x^* = \mu + z^* \sigma$.
38. Normality assessment:
    - (a) Histogram and/or box plot: check for unimodality, skewness, and outliers.
    - (b) Normal Q-Q plot: a Q-Q plot is constructed by plotting the quantiles of one distribution against those of another. In a normal Q-Q plot, the quantiles of the data are plotted against the quantiles of the standard normal distribution, which lets us check for heavy-tailedness of the data. If the points approximately form a straight diagonal line, it is reasonable to conclude that the data come from a normal distribution; otherwise, the data are likely to come from some other distribution.

## Chapter 7

39. The sampling distribution is the probability distribution of the values of a sample statistic (e.g. sample mean, sample proportion) among all possible samples of the same size from the population.
40. The standard error is the standard deviation of a sample statistic, with all unknown parameters involved replaced by their estimates. (Note: the exact definition of standard error depends on the context. In Chapter 7 we assume that we know the population parameters, so the standard error is just the standard deviation of a sample statistic; in Chapters 8 and 9, we shall see that the standard error for the sample proportion differs between confidence intervals and hypothesis tests.)

### Section 7.1 – Sampling distribution of a sample mean

41. Law of large numbers: let $x_1, x_2, \ldots, x_n$ be independent observations drawn from a population with mean $\mu$. As $n$ increases towards infinity, $\bar{x} \to \mu$.
42. Mean and standard deviation of the sampling distribution of $\bar{x}$: let $x_1, x_2, \ldots, x_n$ be a random sample drawn from a (not necessarily normal) population with mean $\mu$ and standard deviation $\sigma$, and let $\bar{x}$ be the sample mean.
    - (a) The mean of $\bar{x}$ is $\mu_{\bar{x}} = \mu$ (the same as the population mean).
    - (b) The standard deviation of $\bar{x}$ is $\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$ (the variation of $\bar{x}$ becomes smaller as $\sigma$ decreases, or as $n$ increases).
43. Distribution of $\bar{x}$ when the population is normal: if the population follows a normal distribution $N(\mu, \sigma)$, then $\bar{x}$ follows the normal distribution $N(\mu, \frac{\sigma}{\sqrt{n}})$ for any sample size $n$.
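The table lookups in items 36–37 can be replaced by the identity $P(Z < a) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{a}{\sqrt{2}}\right)\right)$. A minimal sketch of standardizing and evaluating a normal probability; the numbers $\mu = 100$, $\sigma = 15$ are an invented example:

```python
import math

def phi(a):
    """Standard normal CDF P(Z < a), item 36(a), via the error function."""
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

def normal_cdf(a, mu, sigma):
    """Item 37(b): P(X < a) for X ~ N(mu, sigma), by standardizing."""
    z = (a - mu) / sigma  # z-score of a, item 37(a)
    return phi(z)

# Hypothetical example: X ~ N(100, 15), so P(X < 130) = P(Z < 2).
p = normal_cdf(130, 100, 15)
print(round(p, 4))  # ≈ 0.9772, consistent with the 68-95-99.7 rule
```

Note that $P(Z < 2) \approx 0.977$ agrees with item 35(b): about 95% of the distribution lies within $2\sigma$ of $\mu$, leaving about 2.5% in each tail.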
44. Central Limit Theorem for $\bar{x}$: if the population follows any distribution with mean $\mu$ and standard deviation $\sigma$, then $\bar{x}$ approximately follows the normal distribution $N(\mu, \frac{\sigma}{\sqrt{n}})$ when $n$ is large (rule of thumb: $n \ge 30$).

### Section 7.2 – Sampling distribution of a sample proportion

45. Law of large numbers: let $\hat{p}$ be the sample proportion of successes in a random sample of size $n$ drawn from a population with proportion of successes $p$. As $n$ increases towards infinity, $\hat{p} \to p$.
46. Mean and standard deviation of the sampling distribution of $\hat{p}$: let $\hat{p}$ be the sample proportion of successes in a random sample of size $n$ drawn from a population with proportion of successes $p$.
    - (a) The mean of $\hat{p}$ is $\mu_{\hat{p}} = p$ (the same as the population proportion of successes).
    - (b) The standard deviation of $\hat{p}$ is $\sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}$ (the variation of $\hat{p}$ becomes smaller as $p$ moves away from 0.5, or as $n$ increases).
47. Central Limit Theorem for $\hat{p}$: the sample proportion $\hat{p}$ approximately follows the normal distribution $N\left(p, \sqrt{\frac{p(1-p)}{n}}\right)$ when $n$ is sufficiently large (rule of thumb: $np \ge 15$ and $n(1-p) \ge 15$).
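The sampling-distribution facts for $\hat{p}$ (items 46–47) can be verified by simulation: draw many samples, compute $\hat{p}$ for each, and compare the spread of the $\hat{p}$ values with $\sqrt{p(1-p)/n}$. A sketch; the values $p = 0.3$, $n = 200$ are an invented example:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible
p, n, reps = 0.3, 200, 5000

# Simulate 'reps' samples of size n; record the sample proportion of each.
phats = []
for _ in range(reps):
    successes = sum(1 for _ in range(n) if random.random() < p)
    phats.append(successes / n)

mean_phat = sum(phats) / reps
sd_phat = math.sqrt(sum((x - mean_phat) ** 2 for x in phats) / (reps - 1))

print(round(mean_phat, 3))                   # close to p = 0.3, item 46(a)
print(round(sd_phat, 4))                     # close to the theoretical value
print(round(math.sqrt(p * (1 - p) / n), 4))  # 0.0324, item 46(b)
```

Here $np = 60$ and $n(1-p) = 140$, so the rule of thumb in item 47 is satisfied and a histogram of the simulated $\hat{p}$ values would look approximately normal.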