ECON61001: Econometric Methods

University of Manchester


Final Exam

January 2021

Release date/time: 28/01/21, 14.00hrs GMT

Submission deadline: 30/01/21, 14.00hrs GMT

Instructions:

  • You must answer all five questions in Section A and two out of the four questions in Section B. If you answer more questions than are required and do not indicate which answers should be ignored, we will mark the requisite number of answers in the order in which they appear in your answer submission: answers beyond that number will not be considered.
  • Your answers may be typed, hand-written (and scanned to a single PDF file that can be submitted), or a combination of typed text with embedded images of algebra or figures.
  • Where relevant, questions include word limits. These are limits, not targets. Excellent answers can be shorter than the word limit. If you go beyond the word limit the additional text will be ignored. Where a question includes a word limit you HAVE to include a word count for your answer (excluding formulae). You could use https://wordcounter.net to obtain word counts.
  • Candidates are advised that the examiners attach considerable importance to the clarity with which answers are expressed.
  • You must correctly enter your registration number and the course code on your answer.

© The University of Manchester, 2021.

SECTION A

  1. Suppose a researcher is interested in the following linear regression model

$$y_i = x_i'\beta_0 + u_i, \qquad i = 1,2,\ldots,N,$$

where $x_i = (1, x_{2,i}, x_{3,i}, x_{4,i})'$ and $\beta_0 = (\beta_{0,1}, \beta_{0,2}, \beta_{0,3}, \beta_{0,4})'$. Given the context, the researcher is able to assume that $\{u_i, x_i'\}_{i=1}^{N}$ form a sequence of independently and identically distributed random vectors with $E[x_i x_i'] = Q$, a finite, positive definite matrix of constants, $E[u_i | x_i] = 0$ and $Var[u_i | x_i] = \sigma_0^2$. Therefore, she estimates the model via Ordinary Least Squares (OLS), obtaining the following estimated equation

$$\hat{y}_i = \underset{(0.1416)}{1.0213} \;-\; \underset{(0.0005)}{0.0020}\,x_{2,i} \;-\; \underset{(0.0080)}{0.0208}\,x_{3,i} \;+\; \underset{(0.0088)}{0.0095}\,x_{4,i},$$

where the number in parentheses beneath each coefficient is its conventional OLS standard error. The OLS estimator of $\sigma_0^2$ in this model is $\hat{\sigma}_N^2 = 0.0081$.

Given these results, the researcher concludes that $\beta_{0,4} = 0$ and so decides to estimate the model via OLS with $x_{4,i}$ excluded, obtaining the following estimated equation

$$\hat{y}_i = 1.1485 - 0.0020\,x_{2,i} - 0.0211\,x_{3,i}. \qquad (1)$$

Given that the sample size is $N = 108$ in both estimations, calculate the OLS estimator of the error variance $\sigma_0^2$ from the estimation in (1). Be sure to carefully explain your calculations. Hint: consider the F-statistic for testing $\beta_{0,4} = 0$. [8 marks]
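
A minimal numerical sketch (not a model answer) of one way to organise the calculation the hint points to is given below. It assumes that the F-statistic for the single restriction $\beta_{0,4} = 0$ equals the squared t-ratio, and that $\hat{\sigma}_N^2$ is the degrees-of-freedom-corrected estimator $RSS/(N-k)$; if the course defines $\hat{\sigma}_N^2$ as $RSS/N$, the divisors below change accordingly.

```python
# Hedged sketch of the Question 1 calculation; divisor conventions are an assumption.
N, k = 108, 4                       # sample size and number of regressors in the full model
beta4_hat, se_beta4 = 0.0095, 0.0088
sigma2_unrestricted = 0.0081        # reported OLS estimate of sigma_0^2

F = (beta4_hat / se_beta4) ** 2     # F-statistic for H0: beta_{0,4} = 0 (one restriction)

rss_u = sigma2_unrestricted * (N - k)      # unrestricted residual sum of squares
rss_r = rss_u * (1 + F / (N - k))          # from F = (RSS_R - RSS_U) / (RSS_U / (N - k))
sigma2_restricted = rss_r / (N - (k - 1))  # restricted model has k - 1 = 3 regressors

print(F, rss_r, sigma2_restricted)
```
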
  2. Suppose it is desired to predict $z_t$ using $\hat{z}_t = w_t'\gamma$ where $w_t$ is a vector of observable variables and $\gamma$ is a vector of constants that needs to be specified. The choice of $\gamma$ associated with the linear projection of $z_t$ on $w_t$ is $\gamma_0$ where $E[(z_t - w_t'\gamma_0)w_t] = 0$.
  (a) What optimality property does $\hat{z}_t^o = w_t'\gamma_0$ possess? (Word limit: 50) [1 mark]
  (b) Now consider the regression model $z_t = w_t'\gamma_0 + v_t$. Is $w_t$ contemporaneously exogenous or strictly exogenous for estimation of $\gamma_0$ in this model? Justify your answer. (Word limit: 150) [4 marks]

  (c) Suppose the model in part (b) is dynamically complete. What optimality property does $\hat{z}_t^o$ possess? Briefly justify your answer. (Word limit: 150) [3 marks]
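
For reference, the orthogonality condition stated in the question pins down $\gamma_0$ explicitly. Assuming $E[w_t w_t']$ is nonsingular (an assumption not stated above), solving $E[(z_t - w_t'\gamma_0)w_t] = 0$ gives

$$\gamma_0 = \left(E[w_t w_t']\right)^{-1} E[w_t z_t].$$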

Continued over

SECTION A continued

3.(a) Let $A$ be an $n \times n$ nonsingular symmetric matrix. Show that if $A$ is positive definite then $A^{-1}$ is positive definite. Hint: $AA^{-1} = I_n$. [4 marks]

3.(b) Consider the classical linear regression model

$$y = X\beta_0 + u, \qquad (2)$$

where $X$ is the $T \times k$ observable data matrix that is fixed in repeated samples with $rank(X) = k$, and $u$ is a $T \times 1$ vector with $E[u] = 0$ and $Var[u] = \sigma_0^2 I_T$ where $\sigma_0^2$ is an unknown positive finite constant. Let $\hat{\beta}_T$ be the OLS estimator of $\beta_0$ based on (2) and $\hat{\beta}_{R,T}$ be the Restricted Least Squares (RLS) estimator of $\beta_0$ based on (2) subject to the restrictions $R\beta_0 = r$, where $R$ is an $n_r \times k$ matrix of specified constants with $rank(R) = n_r$ and $r$ is a specified $n_r \times 1$ vector of constants. Assuming the restrictions are correct, prove that $\hat{\beta}_{R,T}$ is at least as efficient as $\hat{\beta}_T$. Hint: you may quote the formula for the variance-covariance matrix of the OLS and RLS estimators without proof; you may also take advantage of the stated result in part (a). [4 marks]
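
For convenience, the variance-covariance matrices that the hint allows you to quote take their standard textbook forms under the stated assumptions (including $R\beta_0 = r$):

$$Var[\hat{\beta}_T] = \sigma_0^2 (X'X)^{-1}, \qquad Var[\hat{\beta}_{R,T}] = \sigma_0^2\left[(X'X)^{-1} - (X'X)^{-1}R'\left(R(X'X)^{-1}R'\right)^{-1}R(X'X)^{-1}\right].$$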

  4. Consider the linear regression model

$$y = X\beta_0 + u,$$

where $y$ and $u$ are $T \times 1$ vectors, $X$ is a $T \times k$ matrix, and $\beta_0$ is the $k \times 1$ vector of unknown regression coefficients. Assume that $X$ is fixed in repeated samples with $rank(X) = k$, and $u \sim N(0, \sigma_0^2 I_T)$ where $\sigma_0^2$ is an unknown positive constant. Let $\hat{\theta}_T$ denote the maximum likelihood estimator of the unknown parameter vector $\theta_0 = (\beta_0', \sigma_0^2)'$. Derive the information matrix for this model. Hint: you may state the form of the log likelihood function and score function for this model without proof. [8 marks]
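
For convenience, the log likelihood and score that the hint allows you to state without proof take their standard Gaussian forms:

$$\ell_T(\beta, \sigma^2) = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln \sigma^2 - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta),$$

$$\frac{\partial \ell_T}{\partial \beta} = \frac{1}{\sigma^2}X'(y - X\beta), \qquad \frac{\partial \ell_T}{\partial \sigma^2} = -\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4}(y - X\beta)'(y - X\beta).$$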

  5. Consider the model

$$y_i = x_i'\beta_0 + u_i, \qquad i = 1,2,\ldots,N,$$

where $\beta_0$ is the $k \times 1$ vector of unknown regression coefficients, $\{(x_i', u_i)\}_{i=1}^{N}$ is a sequence of independently and identically distributed random vectors with $E[u_i|x_i] = 0$, $Var[u_i|x_i] = \sigma_0^2$, an unknown finite positive constant, and $E[x_i x_i'] = Q$, a finite positive definite matrix of constants. Let $\hat{\sigma}_N^2$ be the OLS estimator of $\sigma_0^2$. Show that $N^{1/2}(\hat{\sigma}_N^2 - \sigma_0^2) \to_d N(0, \mu_4 - \sigma_0^4)$ where $\mu_4 = E[u_i^4]$.

Hint: You may assume that: (i) $N^{-1}\sum_{i=1}^{N} x_i x_i' \to_p Q$; (ii) $N^{-1/2}\sum_{i=1}^{N} v_i \to_d N(0, \Omega)$ where $v_i = (u_i^2 - \sigma_0^2,\; u_i x_i')'$, and $\Omega = Var[v_i]$ is a finite, positive definite $(k+1) \times (k+1)$ matrix whose elements you must specify as needed to develop your answer. [8 marks]

Continued over

 

SECTION B

  6. Consider the regression model

$$y_i = x_i'\beta_0 + u_i, \qquad i = 1,2,\ldots,N,$$

where $\beta_0$ is the $k \times 1$ vector of unknown regression coefficients, $\{(x_i', u_i)\}_{i=1}^{N}$ is a sequence of independently and identically distributed random vectors with $E[u_i|x_i] = 0$, $Var[u_i|x_i] = \sigma_0^2$, an unknown finite positive constant, and $E[x_i x_i'] = Q$, a finite positive definite matrix of constants. You may further assume that: (i) $N^{-1}\sum_{i=1}^{N} x_i x_i' \to_p Q$; (ii) $N^{-1/2}\sum_{i=1}^{N} x_i u_i \to_d N(0, \sigma_0^2 Q)$.

Let $\hat{\beta}_{R,N}$ denote the RLS estimator based on the linear restrictions $R\beta = r$, where $R$ is an $n_r \times k$ matrix of pre-specified constants with rank equal to $n_r$ and $r$ is an $n_r \times 1$ vector of pre-specified constants, and let $\hat{\lambda}_N$ be the vector of Lagrange Multipliers associated with this RLS estimation. Assuming $R\beta_0 = r$, answer the following questions.

  (a) Show that $N^{1/2}(\hat{\beta}_{R,N} - \beta_0) \to_d N(0, V_R)$ where

$$V_R = \sigma_0^2 \left[ Q^{-1} - Q^{-1}R'(RQ^{-1}R')^{-1}RQ^{-1} \right].$$

Hint: you may quote the formulae for $\hat{\beta}_{R,N}$ and $\hat{\beta}_N$, the Ordinary Least Squares estimator of $\beta_0$, without proof. [10 marks]

  (b) A colleague proposes testing $H_0: R\beta_0 = r$ versus $H_1: R\beta_0 \neq r$ using a decision rule of the form: reject $H_0$ at the (approximate) $100\alpha\%$ significance level if $\hat{\lambda}_N' M_N \hat{\lambda}_N > c_{n_r}(1-\alpha)$, where $c_{n_r}(1-\alpha)$ is the $100(1-\alpha)$th percentile of the $\chi^2_{n_r}$ distribution. However, your colleague is unsure what the matrix $M_N$ should be in order that this decision rule has the properties implied by the stated significance level. Provide a suitable choice of $M_N$, being sure to justify your choice carefully. Hint: you may quote without proof: (i) the formulae for $\hat{\lambda}_N$ and $\hat{\beta}_N$; (ii) that both the OLS and RLS estimators of $\sigma_0^2$ are consistent under the conditions of the question. [20 marks]
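
For reference, the RLS estimator that the hint allows you to quote has the standard form

$$\hat{\beta}_{R,N} = \hat{\beta}_N - (X'X)^{-1}R'\left[R(X'X)^{-1}R'\right]^{-1}(R\hat{\beta}_N - r), \qquad \hat{\beta}_N = (X'X)^{-1}X'y.$$

The corresponding expression for $\hat{\lambda}_N$ depends on how the Lagrangian is normalised, so it is not reproduced here.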

Continued over

7.(a) Let $\{v_t\}_{t=-3}^{T}$ be a weakly stationary time series process. Consider the following statistic,

$$\hat{\rho}_{4,T} = \frac{\sum_{t=1}^{T} v_t v_{t-4}}{\sum_{t=1}^{T} v_t^2}.$$

Let $\{\varepsilon_t\}$ denote a sequence of independently and identically distributed (i.i.d.) random variables with mean zero and variance $\sigma_\varepsilon^2$.

  (i) Assume that $v_t = \varepsilon_t$. Show that $T^{1/2}\hat{\rho}_{4,T} \to_d N(0,1)$. [6 marks]
  (ii) Assume that $v_t = \theta_4 v_{t-4} + \varepsilon_t$, where $|\theta_4| < 1$. What is the probability limit of $\hat{\rho}_{4,T}$ as $T \to \infty$? Be sure to justify your answer carefully. Hint: $v_t$ has the following representation, $v_t = \sum_{j=0}^{\infty} \theta_4^j \varepsilon_{t-4j}$. [9 marks]
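
The short simulation sketch below is purely illustrative for part (ii): it generates data from $v_t = \theta_4 v_{t-4} + \varepsilon_t$ and evaluates $\hat{\rho}_{4,T}$ for a large $T$, so the output can be compared with whatever probability limit you derive. The values of $T$, $\theta_4$ and $\sigma_\varepsilon$ are arbitrary choices, not part of the question.

```python
# Hedged simulation sketch for 7(a)(ii); parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
T, theta4, sigma_eps = 100_000, 0.6, 1.0

eps = rng.normal(0.0, sigma_eps, T)
v = np.zeros(T)
for t in range(4, T):
    v[t] = theta4 * v[t - 4] + eps[t]        # v_t = theta_4 * v_{t-4} + eps_t

# rho-hat_{4,T} = sum_t v_t * v_{t-4} / sum_t v_t^2 (sums over the usable range)
rho4_hat = np.sum(v[4:] * v[:-4]) / np.sum(v[4:] ** 2)
print(rho4_hat)   # for large T this should be close to the probability limit
```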

7.(b) A researcher wishes to test the simple efficient-markets hypothesis in the foreign exchange market. Let $s_t = \ln(S_t)$ and $f_{t,n} = \ln(F_{t,n})$, where $S_t$ and $F_{t,n}$ are the levels of the spot exchange rate at time $t$ and the $n$-period forward exchange rate at time $t$. The simple efficient-markets hypothesis is that $f_{t,n} = E[s_{t+n} | I_t]$, where $I_t$ is the information set at time $t$, which for the purposes of this question can be taken to be $I_t = \{s_t, f_{t,n}, s_{t-1}, f_{t-1,n}, s_{t-2}, f_{t-2,n}, \ldots\}$. Using daily spot and thirty-day forward exchange rate data for the US dollar–UK pound exchange rate, the researcher estimates the model,

$$y_{t+n} = x_t'\beta_0 + u_{t,n}, \qquad (3)$$

where $y_{t+n} = s_{t+n} - f_{t,n}$, $x_t$ is the $3 \times 1$ vector given by

$$x_t' = (1,\; s_t - f_{t-n,n},\; s_{t-1} - f_{t-1-n,n}),$$

$n = 30$ and $u_{t,n}$ is the error term. If the simple efficient-markets hypothesis holds in this foreign exchange market then $E[u_{t,n} | I_t] = 0$ and the regression coefficients in (3) satisfy a set of restrictions denoted here by $g(\beta_0) = 0$ where $g(\cdot)$ is an $n_g \times 1$ vector.

  (i) What is $g(\beta_0)$? Briefly justify your answer. (Word limit: 75) [4 marks]

Continued over

7.(b) continued

  (ii) The researcher tests $H_0: g(\beta_0) = 0$ versus $H_1: g(\beta_0) \neq 0$ using the test statistic

$$S_T = T\, g(\hat{\beta}_T)' \left[ G(\hat{\beta}_T)\, \hat{V}_\beta\, G(\hat{\beta}_T)' \right]^{-1} g(\hat{\beta}_T), \qquad (4)$$

where $G(\beta) = \partial g(\beta)/\partial \beta'$. Assuming $T^{1/2}(\hat{\beta}_T - \beta_0) \to_d N(0, V_\beta)$ and $\hat{V}_\beta \to_p V_\beta$, write down a suitable decision rule for this test. If $S_T = 8.2$ then what is the outcome of the test? [3 marks]

  (iii) Since $y_{t+n}$ is a financial variable, the researcher is concerned that the errors may exhibit autoregressive conditional heteroscedasticity and so has calculated $\hat{V}_\beta$ in (4) using White's heteroscedasticity-robust estimator. Given this information, do you have any concerns about the test in part (ii)? If so, explain your concerns and how you would modify the test to address them. (Word limit: 350) [8 marks]

Continued over

  8. Consider the linear regression model

$$y_{1,i} = \gamma_0 y_{2,i} + z_{1,i}'\delta_0 + u_{1,i} = x_i'\beta_0 + u_{1,i},$$

where $x_i' = (y_{2,i}, z_{1,i}')$, $\beta_0 = (\gamma_0, \delta_0')'$ and assume that

$$y_{2,i} = z_i'\eta_0 + u_{2,i}, \qquad (5)$$

where $y_{1,i}$ and $y_{2,i}$ are observable random variables, $z_i = (z_{1,i}', z_{2,i}')'$ is a random vector of observable variables, $u_{1,i}$ and $u_{2,i}$ are the error terms (unobservable scalar random variables), $\gamma_0$ is an unknown scalar parameter, and $\delta_0$ and $\eta_0$ are vectors of unknown parameters. Suppose there is a sample of $N$ observations, and let $\hat{y}_{2,i}$ denote the predicted value of $y_{2,i}$ based on Ordinary Least Squares (OLS) estimation of (5). Define $\hat{x}_i = (\hat{y}_{2,i}, z_{1,i}')'$. Let $X$ be the $N \times k$ matrix with $i$th row $x_i'$, $\hat{X}$ be the $N \times k$ matrix with $i$th row $\hat{x}_i'$, $Z$ be the $N \times q$ matrix with $i$th row $z_i'$, and $y_1$ be the $N \times 1$ vector with $i$th element $y_{1,i}$. Consider the following three estimators of $\beta_0$:

  • $\hat{\beta}_1 = (\hat{X}'\hat{X})^{-1}\hat{X}'y_1$;
  • $\hat{\beta}_2 = (\hat{X}'X)^{-1}\hat{X}'y_1$;
  • $\hat{\beta}_3 = \{X'Z(Z'Z)^{-1}Z'X\}^{-1}X'Z(Z'Z)^{-1}Z'y_1$.

  (a) Show that $\hat{\beta}_1 = \hat{\beta}_2 = \hat{\beta}_3$. [15 marks]
  (b) Let $\hat{\phi} = (\hat{\beta}', \hat{\theta})'$ be the OLS estimator of $\phi_0 = (\beta_0', \theta_0)'$ based on the model

$$y_{1,i} = x_i'\beta_0 + \theta_0 \hat{u}_{2,i} + \text{``error''}$$

where $\hat{u}_{2,i}$ is the $i$th element of $\hat{u}_2$, the $N \times 1$ vector of residuals from OLS estimation of (5). Via an application of the Frisch–Waugh–Lovell Theorem or otherwise, show that $\hat{\beta} = \hat{\beta}_1$. [15 marks]
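
The following numerical sketch illustrates (but does not prove) the algebraic result in part (a): on simulated data satisfying the question's structure, the three estimators coincide up to floating-point error. All data-generating choices below are arbitrary and purely for illustration.

```python
# Hedged numerical check for Question 8(a); the DGP below is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 500
z1 = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])   # included exogenous variables
z2 = rng.normal(size=(N, 2))                                   # excluded instruments
Z = np.column_stack([z1, z2])                                  # z_i = (z1_i', z2_i')'

eta = rng.normal(size=Z.shape[1])
u2 = rng.normal(size=N)
y2 = Z @ eta + u2                                              # first-stage equation (5)
u1 = 0.5 * u2 + rng.normal(size=N)                             # makes y2 endogenous
y1 = 1.0 * y2 + z1 @ np.array([0.5, -1.0, 2.0]) + u1           # structural equation

X = np.column_stack([y2, z1])
y2_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ y2)                # OLS fitted values from (5)
X_hat = np.column_stack([y2_hat, z1])

b1 = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y1)            # beta-hat_1
b2 = np.linalg.solve(X_hat.T @ X, X_hat.T @ y1)                # beta-hat_2
PZX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)                    # P_Z X
b3 = np.linalg.solve(X.T @ PZX, PZX.T @ y1)                    # beta-hat_3

print(np.allclose(b1, b2), np.allclose(b1, b3))                # both True
```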

Continued over

9.(a) Let $\{(y_i, x_i')\}_{i=1}^{N}$ be a sequence of independently and identically distributed (i.i.d.) random vectors. Suppose that $y_i$ is a dummy variable and so has a sample space of $\{0, 1\}$. Consider the model

$$y_i = x_i'\beta_0 + u_i.$$

  (i) Assume that $E[u_i|x_i] = 0$. Derive the Generalized Least Squares (GLS) estimator of $\beta_0$ in this model. Hint: you may quote the generic formula for the GLS estimator, that is, $\hat{\beta}_{GLS} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$, but you must derive $\Sigma$ for this model. [6 marks]

  (ii) Is your answer to part (i) a feasible or an infeasible GLS estimator? If infeasible, suggest a feasible GLS estimator. Do you foresee any potential problems in implementing your proposed feasible GLS estimator? [4 marks]
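
The sketch below outlines one possible feasible GLS implementation for this model, relevant to part (ii). It treats the conditional variance of a $\{0,1\}$ outcome as $p_i(1-p_i)$ with $p_i = x_i'\beta_0$, estimates $p_i$ by OLS fitted values, and clips those fitted values into $(0,1)$; the clipping is an ad hoc device, and the possibility of fitted values outside $(0,1)$ is exactly the practical problem the question asks about. Treat this as an illustration, not the model answer.

```python
# Hedged sketch of a two-step feasible GLS (weighted least squares) for 9(a)(ii).
import numpy as np

def feasible_gls_lpm(y, X, eps=1e-3):
    """Two-step FGLS for y_i = x_i'beta + u_i with a binary y_i."""
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # step 1: OLS
    p_hat = X @ beta_ols                           # estimated conditional means
    p_hat = np.clip(p_hat, eps, 1 - eps)           # ad hoc guard: keep p_hat inside (0,1)
    w = 1.0 / (p_hat * (1.0 - p_hat))              # inverse of estimated conditional variances
    XtWX = X.T @ (w[:, None] * X)
    XtWy = X.T @ (w * y)
    return np.linalg.solve(XtWX, XtWy)             # step 2: weighted least squares
```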

(b) Let $\{V_i\}_{i=1}^{N}$ be a sequence of i.i.d. Bernoulli random variables with $P(V_i = 1) = \theta_0$. We assume here that $\theta_0 \in (0,1)$ and that our sample size is large enough for both outcomes to occur.

  (i) Derive the Wald, Likelihood Ratio and Lagrange Multiplier statistics for testing $H_0: \theta_0 = \theta^*$ against $H_1: \theta_0 \neq \theta^*$. Hint: you may quote the form of the log likelihood function, the score equation and the formula for the maximum likelihood estimator for this model without proof. [16 marks]
  (ii) Given that $N = 100$ and the sample contains 55 outcomes that are one, use your statistics in part (i) to test the hypothesis $H_0: \theta_0 = 0.5$ against $H_1: \theta_0 \neq 0.5$ at the 5% significance level. [4 marks]
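
The following numerical sketch evaluates one standard parameterisation of the three classical statistics for part (ii); the exact algebraic forms expected in the course may differ by asymptotically negligible terms, so treat the output as a cross-check rather than the model answer.

```python
# Hedged numerical sketch for 9(b)(ii), using standard Bernoulli-likelihood formulas.
import math

N, ones, theta_star = 100, 55, 0.5
theta_hat = ones / N                                   # maximum likelihood estimator

def loglik(theta):
    return ones * math.log(theta) + (N - ones) * math.log(1 - theta)

wald = N * (theta_hat - theta_star) ** 2 / (theta_hat * (1 - theta_hat))
lr = 2 * (loglik(theta_hat) - loglik(theta_star))
score = ones / theta_star - (N - ones) / (1 - theta_star)   # score evaluated at theta*
info = N / (theta_star * (1 - theta_star))                  # information evaluated at theta*
lm = score ** 2 / info

print(wald, lr, lm)   # compare each with the chi-squared(1) 5% critical value, 3.841
```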

END OF EXAMINATION

 

Table 1: Percentage Points for the t Distribution
Student's t Distribution Function for Selected Probabilities
The table provides values of t_{α,ν} where Pr(T ≤ t_{α,ν}) = α and T ∼ t_ν

α 0.750 0.800 0.900 0.950 0.975 0.990 0.995 0.9975 0.999 0.9995
ν  Values of t_{α,ν}
1 1.000 1.376 3.078 6.314 12.706 31.821 63.657
2 0.816 1.061 1.886 2.920 4.303 6.965 9.925
3 0.765 0.978 1.638 2.353 3.182 4.541 5.841
4 0.741 0.941 1.533 2.132 2.776 3.747 4.604
5 0.727 0.920 1.476 2.015 2.571 3.365 4.032 4.773
6 0.718 0.906 1.440 1.943 2.447 3.143 3.707 4.317 5.208
7 0.711 0.896 1.415 1.895 2.365 2.998 3.499 4.029 4.785 5.408
8 0.706 0.889 1.397 1.860 2.306 2.896 3.355 3.833 4.501 5.041
9 0.703 0.883 1.383 1.833 2.262 2.821 3.250 3.690 4.297 4.781
10 0.700 0.879 1.372 1.812 2.228 2.764 3.169 3.581 4.144 4.587
11 0.697 0.876 1.363 1.796 2.201 2.718 3.106 3.497 4.025 4.437
12 0.695 0.873 1.356 1.782 2.179 2.681 3.055 3.428 3.930 4.318
13 0.694 0.870 1.350 1.771 2.160 2.650 3.012 3.372 3.852 4.221
14 0.692 0.868 1.345 1.761 2.145 2.624 2.977 3.326 3.787 4.140
15 0.691 0.866 1.341 1.753 2.131 2.602 2.947 3.286 3.733 4.073
16 0.690 0.865 1.337 1.746 2.120 2.583 2.921 3.252 3.686 4.015
17 0.689 0.863 1.333 1.740 2.110 2.567 2.898 3.222 3.646 3.965
18 0.688 0.862 1.330 1.734 2.101 2.552 2.878 3.197 3.610 3.922
19 0.688 0.861 1.328 1.729 2.093 2.539 2.861 3.174 3.579 3.883
20 0.687 0.860 1.325 1.725 2.086 2.528 2.845 3.153 3.552 3.850
21 0.686 0.859 1.323 1.721 2.080 2.518 2.831 3.135 3.527 3.819
22 0.686 0.858 1.321 1.717 2.074 2.508 2.819 3.119 3.505 3.792
23 0.685 0.858 1.319 1.714 2.069 2.500 2.807 3.104 3.485 3.768
24 0.685 0.857 1.318 1.711 2.064 2.492 2.797 3.091 3.467 3.745
25 0.684 0.856 1.316 1.708 2.060 2.485 2.787 3.078 3.450 3.725
26 0.684 0.856 1.315 1.706 2.056 2.479 2.779 3.067 3.435 3.707
27 0.684 0.855 1.314 1.703 2.052 2.473 2.771 3.057 3.421 3.690
28 0.683 0.855 1.313 1.701 2.048 2.467 2.763 3.047 3.408 3.674
29 0.683 0.854 1.311 1.699 2.045 2.462 2.756 3.038 3.396 3.659
30 0.683 0.854 1.310 1.697 2.042 2.457 2.750 3.030 3.385 3.646
40 0.681 0.851 1.303 1.684 2.021 2.423 2.704 2.971 3.307 3.551
50 0.679 0.849 1.299 1.676 2.009 2.403 2.678 2.937 3.261 3.496
60 0.679 0.848 1.296 1.671 2.000 2.390 2.660 2.915 3.232 3.460
70 0.678 0.847 1.294 1.667 1.994 2.381 2.648 2.899 3.211 3.435
80 0.678 0.846 1.292 1.664 1.990 2.374 2.639 2.887 3.195 3.416
90 0.677 0.846 1.291 1.662 1.987 2.368 2.632 2.878 3.183 3.402
100 0.677 0.845 1.290 1.660 1.984 2.364 2.626 2.871 3.174 3.390
110 0.677 0.845 1.289 1.659 1.982 2.361 2.621 2.865 3.166 3.381
120 0.677 0.845 1.289 1.658 1.980 2.358 2.617 2.860 3.160 3.373
∞ 0.674 0.842 1.282 1.645 1.960 2.326 2.576 2.808 3.090 3.297
Table 2: Percentage Points for the χ² Distribution
The χ² Distribution Function for Selected Probabilities
The table provides values of χ²_{α,v} where Pr(X ≤ χ²_{α,v}) = α and X ∼ χ²_v
α 0.005 0.01 0.025 0.05 0.1 0.5 0.9 0.95 0.975 0.99 0.995
v  Values of χ²_{α,v}
1 0.000 0.000 0.001 0.004 0.016 0.455 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 1.386 4.605 5.991 7.378 9.210 10.60
3 0.072 0.115 0.216 0.352 0.584 2.366 6.251 7.815 9.348 11.34 12.84
4 0.207 0.297 0.484 0.711 1.064 3.357 7.779 9.488 11.14 13.28 14.86
5 0.412 0.554 0.831 1.145 1.610 4.351 9.236 11.07 12.83 15.09 16.75
6 0.676 0.872 1.237 1.635 2.204 5.348 10.64 12.59 14.45 16.81 18.55
7 0.989 1.239 1.690 2.167 2.833 6.346 12.02 14.07 16.01 18.48 20.28
8 1.344 1.646 2.180 2.733 3.490 7.344 13.36 15.51 17.53 20.09 21.95
9 1.735 2.088 2.700 3.325 4.168 8.343 14.68 16.92 19.02 21.67 23.59
10 2.156 2.558 3.247 3.940 4.865 9.342 15.99 18.31 20.48 23.21 25.19
11 2.603 3.053 3.816 4.575 5.578 10.34 17.28 19.68 21.92 24.72 26.76
12 3.074 3.571 4.404 5.226 6.304 11.34 18.55 21.03 23.34 26.22 28.30
13 3.565 4.107 5.009 5.892 7.042 12.34 19.81 22.36 24.74 27.69 29.82
14 4.075 4.660 5.629 6.571 7.790 13.34 21.06 23.68 26.12 29.14 31.32
15 4.601 5.229 6.262 7.261 8.547 14.34 22.31 25.00 27.49 30.58 32.80
16 5.142 5.812 6.908 7.962 9.312 15.34 23.54 26.30 28.85 32.00 34.27
17 5.697 6.408 7.564 8.672 10.09 16.34 24.77 27.59 30.19 33.41 35.72
18 6.265 7.015 8.231 9.390 10.86 17.34 25.99 28.87 31.53 34.81 37.16
19 6.844 7.633 8.907 10.12 11.65 18.34 27.20 30.14 32.85 36.19 38.58
20 7.434 8.260 9.591 10.85 12.44 19.34 28.41 31.41 34.17 37.57 40.00
21 8.034 8.897 10.28 11.59 13.24 20.34 29.62 32.67 35.48 38.93 41.40
22 8.643 9.542 10.98 12.34 14.04 21.34 30.81 33.92 36.78 40.29 42.80
23 9.260 10.20 11.69 13.09 14.85 22.34 32.01 35.17 38.08 41.64 44.18
24 9.886 10.86 12.40 13.85 15.66 23.34 33.20 36.42 39.36 42.98 45.56
25 10.52 11.52 13.12 14.61 16.47 24.34 34.38 37.65 40.65 44.31 46.93
26 11.16 12.20 13.84 15.38 17.29 25.34 35.56 38.89 41.92 45.64 48.29
27 11.81 12.88 14.57 16.15 18.11 26.34 36.74 40.11 43.19 46.96 49.64
28 12.46 13.56 15.31 16.93 18.94 27.34 37.92 41.34 44.46 48.28 50.99
29 13.12 14.26 16.05 17.71 19.77 28.34 39.09 42.56 45.72 49.59 52.34
30 13.79 14.95 16.79 18.49 20.60 29.34 40.26 43.77 46.98 50.89 53.67
35 17.19 18.51 20.57 22.47 24.80 34.34 46.06 49.80 53.20 57.34 60.27
40 20.71 22.16 24.43 26.51 29.05 39.34 51.81 55.76 59.34 63.69 66.77
45 24.31 25.90 28.37 30.61 33.35 44.34 57.51 61.66 65.41 69.96 73.17
50 27.99 29.71 32.36 34.76 37.69 49.33 63.17 67.50 71.42 76.15 79.49
60 35.53 37.48 40.48 43.19 46.46 59.33 74.40 79.08 83.30 88.30 91.95
70 43.28 45.44 48.76 51.74 55.33 69.33 85.53 90.53 95.02 100.4 104.2
80 51.17 53.54 57.15 60.39 64.28 79.33 96.58 101.9 106.6 112.3 116.3
90 59.20 61.75 65.65 69.13 73.29 89.33 107.6 113.1 118.1 124.1 128.3
100 67.33 70.06 74.22 77.93 82.36 99.33 118.5 124.3 129.6 135.8 140.2
150 109.1 112.7 118.0 122.7 128.3 149.3 172.6 179.6 185.8 193.2 198.4
200 152.2 156.4 162.7 168.3 174.8 199.3 226.0 234.0 241.1 249.4 255.3
Table 3: Upper 5% Percentage Points for the F Distribution
The F Distribution Function for α = 0.05
The table provides values of F_{α,v1,v2} where Pr(F ≥ F_{α,v1,v2}) = 0.05 and F ∼ F(v1, v2)
v2 \ v1 1 2 3 4 5 6 7 8 9 10 12 15
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.35
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.31
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.27
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.23
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20
21 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 2.32 2.25 2.18
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.23 2.15
23 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 2.27 2.20 2.13
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.11
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.16 2.09
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01
35 4.12 3.27 2.87 2.64 2.49 2.37 2.29 2.22 2.16 2.11 2.04 1.96
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92
45 4.06 3.20 2.81 2.58 2.42 2.31 2.22 2.15 2.10 2.05 1.97 1.89
50 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07 2.03 1.95 1.87
55 4.02 3.16 2.77 2.54 2.38 2.27 2.18 2.11 2.06 2.01 1.93 1.85
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84
70 3.98 3.13 2.74 2.50 2.35 2.23 2.14 2.07 2.02 1.97 1.89 1.81
80 3.96 3.11 2.72 2.49 2.33 2.21 2.13 2.06 2.00 1.95 1.88 1.79
90 3.95 3.10 2.71 2.47 2.32 2.20 2.11 2.04 1.99 1.94 1.86 1.78
100 3.94 3.09 2.70 2.46 2.31 2.19 2.10 2.03 1.97 1.93 1.85 1.77
110 3.93 3.08 2.69 2.45 2.30 2.18 2.09 2.02 1.97 1.92 1.84 1.76
120 3.92 3.07 2.68 2.45 2.29 2.18 2.09 2.02 1.96 1.91 1.83 1.75
150 3.90 3.06 2.66 2.43 2.27 2.16 2.07 2.00 1.94 1.89 1.82 1.73
Table 4: Upper 1% Percentage Points for the F Distribution
The F Distribution Function for α = 0.01
The table provides values of F_{α,v1,v2} where Pr(F ≥ F_{α,v1,v2}) = 0.01 and F ∼ F(v1, v2)
v2 \ v1 1 2 3 4 5 6 7 8 9 10 12 15
5 16.3 13.3 12.1 11.4 11.0 10.7 10.5 10.3 10.2 10.1 9.89 9.72
6 13.7 10.9 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.72 7.56
7 12.2 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.47 6.31
8 11.3 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.67 5.52
9 10.6 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.11 4.96
10 10.0 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.71 4.56
11 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 4.54 4.40 4.25
12 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.16 4.01
13 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 4.10 3.96 3.82
14 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 3.94 3.80 3.66
15 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.67 3.52
16 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 3.69 3.55 3.41
17 8.40 6.11 5.18 4.67 4.34 4.10 3.93 3.79 3.68 3.59 3.46 3.31
18 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 3.51 3.37 3.23
19 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52 3.43 3.30 3.15
20 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.23 3.09
21 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40 3.31 3.17 3.03
22 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 3.26 3.12 2.98
23 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30 3.21 3.07 2.93
24 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.17 3.03 2.89
25 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22 3.13 2.99 2.85
30 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.84 2.70
35 7.42 5.27 4.40 3.91 3.59 3.37 3.20 3.07 2.96 2.88 2.74 2.60
40 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.66 2.52
45 7.23 5.11 4.25 3.77 3.45 3.23 3.07 2.94 2.83 2.74 2.61 2.46
50 7.17 5.06 4.20 3.72 3.41 3.19 3.02 2.89 2.78 2.70 2.56 2.42
55 7.12 5.01 4.16 3.68 3.37 3.15 2.98 2.85 2.75 2.66 2.53 2.38
60 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.50 2.35
70 7.01 4.92 4.07 3.60 3.29 3.07 2.91 2.78 2.67 2.59 2.45 2.31
80 6.96 4.88 4.04 3.56 3.26 3.04 2.87 2.74 2.64 2.55 2.42 2.27
90 6.93 4.85 4.01 3.53 3.23 3.01 2.84 2.72 2.61 2.52 2.39 2.24
100 6.90 4.82 3.98 3.51 3.21 2.99 2.82 2.69 2.59 2.50 2.37 2.22
110 6.87 4.80 3.96 3.49 3.19 2.97 2.81 2.68 2.57 2.49 2.35 2.21
120 6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.34 2.19
150 6.81 4.75 3.91 3.45 3.14 2.92 2.76 2.63 2.53 2.44 2.31 2.16

 
