Shifted Exponential Distribution: Method of Moments

In statistics, the method of moments is a method of estimation of population parameters. Recall that the first few moments tell us a lot about a distribution; the same principle underlies higher-moment summaries such as skewness and kurtosis. First, let \[ \mu^{(j)}(\bs{\theta}) = \E\left(X^j\right), \quad j \in \N_+ \] so that \(\mu^{(j)}(\bs{\theta})\) is the \(j\)th moment of \(X\) about 0. Next, let \[ M^{(j)}(\bs{X}) = \frac{1}{n} \sum_{i=1}^n X_i^j, \quad j \in \N_+ \] so that \(M^{(j)}(\bs{X})\) is the \(j\)th sample moment about 0; note that we are emphasizing the dependence of the sample moments on the sample \(\bs{X}\). Our basic assumption is that for each \( n \in \N_+ \), \( \bs X_n = (X_1, X_2, \ldots, X_n) \) is a random sample of size \( n \) from the distribution of \( X \). From our previous work, we know that \(M^{(j)}(\bs{X})\) is an unbiased and consistent estimator of \(\mu^{(j)}(\bs{\theta})\) for each \(j\). The basic idea of the method is to equate sample moments to the corresponding distribution moments and solve: the equations for \( j \in \{1, 2, \ldots, k\} \) give \(k\) equations in \(k\) unknowns, so there is hope (but no guarantee) that they can be solved for \( (W_1, W_2, \ldots, W_k) \) in terms of \( (M^{(1)}, M^{(2)}, \ldots, M^{(k)}) \). The resulting values are called method of moments estimators. When one of the parameters is known, the method of moments estimator for the other parameter is simpler; one would think that the estimators when one of the parameters is known should work better than the corresponding estimators when both parameters are unknown, but this question is worth investigating empirically.

As a simple first example, suppose \( X \) is uniformly distributed on \( [a, a + h] \), so that \( \E(X) = a + h/2 \). If \( h \) is known, the method of moments estimator of \( a \) is \( U_h = M - h/2 \); then \( \var(U_h) = \frac{h^2}{12 n} \), so \( U_h \) is consistent. Suppose instead that \( a \) is known and \( h \) is unknown, and let \( V_a \) denote the method of moments estimator of \( h \); solving \( a + V_a/2 = M \) gives \( V_a = 2(M - a) \).

Our main example is the two-parameter exponential distribution, also called the shifted exponential distribution. An engineering component has a lifetime \( Y \) which follows a shifted exponential distribution; in particular, the probability density function (pdf) of \( Y \) is \[ f_Y(y; \theta) = e^{-(y - \theta)}, \quad y > \theta \] The unknown parameter \( \theta > 0 \) measures the magnitude of the shift. Since there is a single unknown parameter, we need just one equation.

Recall first the ordinary exponential distribution with rate \( \lambda \). Its mean is \[ \E(Y) = \lambda \int_{0}^{\infty} y e^{-\lambda y} \, dy = \Big[-y e^{-\lambda y}\Big]_0^\infty + \int_0^\infty e^{-\lambda y} \, dy = \bigg[-\frac{e^{-\lambda y}}{\lambda}\bigg]\bigg\rvert_{0}^{\infty} = \frac{1}{\lambda} \] Equating \( \E(Y) = 1/\lambda \) to the sample mean \( M \) and solving gives the method of moments estimator \( \hat\lambda = 1/M \). We will not attempt to determine the bias and mean square error of such estimators analytically, but you will have an opportunity to explore them empirically through a simulation: run the simulation 1000 times and compare the empirical density function to the probability density function, as in the sketch below.
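To make the simulation suggestion concrete, here is a minimal sketch in Python with NumPy. The true rate \( \lambda = 2 \), the sample size \( n = 50 \), the seed, and the 1000 runs are assumptions chosen for the demo, not values fixed by the text. It applies \( \hat\lambda = 1/M \) to each simulated sample and reports the empirical bias and mean square error.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam = 2.0            # true rate (assumed for the demo)
n, runs = 50, 1000   # sample size and number of simulation runs

# Method of moments: E(X) = 1/lambda, so lambda_hat = 1 / sample mean.
estimates = np.array([1.0 / rng.exponential(scale=1.0 / lam, size=n).mean()
                      for _ in range(runs)])

print("empirical bias:", estimates.mean() - lam)
print("empirical MSE: ", np.mean((estimates - lam) ** 2))
```

The small positive bias you should observe reflects the fact that \( 1/M \) is a convex function of \( M \), so by Jensen's inequality \( \E(1/M) \gt 1/\E(M) = \lambda \).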
The geometric distribution on \(\N_+\) with success parameter \(p \in (0, 1)\) has probability density function \[ g(x) = p (1 - p)^{x-1}, \quad x \in \N_+ \] It governs the number of trials needed to get the first success in a sequence of Bernoulli trials with success parameter \( p \), and it is considered a discrete version of the exponential distribution. Although very simple, the Bernoulli trials model is an important application, since Bernoulli trials are found embedded in all sorts of estimation problems, such as empirical probability density functions and empirical distribution functions. Matching the distribution mean \( 1/p \) to the sample mean gives the method of moments estimator of \(p\): \[ U = \frac{1}{M} \]

More generally, the negative binomial distribution on \( \N \) with shape parameter \( k \in (0, \infty) \) and success parameter \( p \in (0, 1) \) has probability density function \[ g(x) = \binom{x + k - 1}{k - 1} p^k (1 - p)^x, \quad x \in \N \] If \( k \) is a positive integer, this distribution governs the number of failures before the \( k \)th success in a sequence of Bernoulli trials with success parameter \( p \); the geometric distribution on \( \N \) with unknown parameter \( p \) is the special case \( k = 1 \). Suppose that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from this distribution. If \( k \) and \( p \) are unknown, matching the distribution mean and variance to the sample mean and variance gives the equations \[ U \frac{1 - V}{V} = M, \quad U \frac{1 - V}{V^2} = T^2 \] and solving yields the method of moments estimators \[ U = \frac{M^2}{T^2 - M}, \quad V = \frac{M}{T^2} \] If \( k \) is known, the method of moments estimator \( V_k \) of \( p \) comes from the single equation \( k (1 - V_k)/V_k = M \), which gives \[ V_k = \frac{k}{M + k} \] If instead \( p \) is known, the corresponding estimator \( U_p \) of \( k \) satisfies \( U_p (1 - p)/p = M \), and \( \var(U_p) = \frac{k}{n (1 - p)} \), so \( U_p \) is consistent. The negative binomial distribution is studied in more detail in the chapter on Bernoulli trials.

Now return to the shifted exponential distribution, this time with both parameters unknown: find the method of moments estimators for \( \theta \) and \( \lambda \) when the distribution is given by \[ f_Y(y; \theta, \lambda) = \lambda e^{-\lambda(y - \theta)}, \quad y > \theta \] (a) Find the mean and variance of the above pdf. Writing \( Y = \theta + Z \) with \( Z \) exponential with rate \( \lambda \) gives \( \E(Y) = \theta + 1/\lambda \) and \( \var(Y) = 1/\lambda^2 \). (Incidentally, in case it's not obvious, the second moment can then be derived by manipulating the shortcut formula for the variance: \( \E(Y^2) = \var(Y) + [\E(Y)]^2 \).) (b) Assume \( \lambda \) is unknown and \( \theta = 3 \). Only one equation is needed: \( M = 3 + 1/\hat\lambda \), so \( \hat\lambda = 1/(M - 3) \). (c) Assume \( \lambda = 2 \) and \( \theta \) is unknown. Then \( M = \hat\theta + 1/2 \), so \( \hat\theta = M - 1/2 \). If both parameters are unknown, matching the mean and variance gives \( M = \hat\theta + 1/\hat\lambda \) and \( T^2 = 1/\hat\lambda^2 \), so \[ \hat\lambda = \frac{1}{T}, \qquad \hat\theta = M - T \] where \( T \) is the square root of the biased sample variance \( T^2 \). For contrast, note that maximum likelihood works quite differently here: the values of \( y \) for this pdf are restricted by the value of \( \theta \), and the likelihood \( L(\theta) = \prod_{i=1}^n \lambda e^{-\lambda(y_i - \theta)} \) is nonzero only for \( \theta \lt \min_i y_i \) and is increasing in \( \theta \) on that range, so the maximum likelihood estimator of \( \theta \) is \( \min_i Y_i \), not a function of the sample moments. A sketch of the moment estimators appears below.
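Here is a minimal sketch of the two-parameter estimators \( \hat\lambda = 1/T \), \( \hat\theta = M - T \) derived above, again in Python with NumPy. The true values \( \theta = 3 \), \( \lambda = 2 \), the seed, and the sample size are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
theta, lam, n = 3.0, 2.0, 200   # true shift, true rate, sample size (assumed)

# Shifted exponential sample: Y = theta + Exponential(rate = lam).
y = theta + rng.exponential(scale=1.0 / lam, size=n)

m = y.mean()   # sample mean M
t = y.std()    # T = sqrt(T^2), the biased sample standard deviation (ddof=0)

lam_hat = 1.0 / t     # from sd(Y) = 1/lambda
theta_hat = m - t     # from E(Y) = theta + 1/lambda
print(f"theta_hat = {theta_hat:.3f}, lambda_hat = {lam_hat:.3f}")
```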
Example 4: The Pareto distribution has been used in economics as a model for a density function with a slowly decaying tail: \[ f(x \mid x_0, \alpha) = \alpha x_0^\alpha x^{-(\alpha + 1)}, \quad x \ge x_0 \] Writing \( a = \alpha \) for the shape and \( b = x_0 \) for the scale, the mean is \( ab/(a-1) \) for \( a > 1 \) and the second moment is \( ab^2/(a-2) \) for \( a > 2 \). If both parameters are unknown, the method of moments equations for \(U\) and \(V\) are \begin{align} \frac{U V}{U - 1} & = M \\ \frac{U V^2}{U - 2} & = M^{(2)} \end{align} and solving for \(U\) and \(V\) gives the estimators. If \(b\) is known, the method of moments equation for \(U_b\) as an estimator of \(a\) is \(b U_b \big/ (U_b - 1) = M\); if \(a\) is known, the equation for \(V_a\) as an estimator of \(b\) is \(a V_a \big/ (a - 1) = M\), and \( \E(V_a) = b \), so \(V_a\) is unbiased.

Next we consider estimating the mean \( \mu \) and variance \( \sigma^2 \) of a distribution. The method of moments estimator of \( \mu \) based on \( \bs X_n \) is the sample mean \[ M_n = \frac{1}{n} \sum_{i=1}^n X_i \] The facts that \( \E(M_n) = \mu \) and \( \var(M_n) = \sigma^2 / n \) for \( n \in \N_+ \) are properties that we have seen several times before, so \( M_n \) is unbiased and consistent. (In particular, if \( \sigma \) is known, the method of moments estimator of \( \mu \) is still simply the sample mean.) Estimating the variance, on the other hand, depends on whether the distribution mean \( \mu \) is known or unknown. First, assume that \( \mu \) is known, so that \[ W_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2 \] is the method of moments estimator of \( \sigma^2 \). Then \( \E(W_n^2) = \sigma^2 \), so \( W_n^2 \) is unbiased for \( n \in \N_+ \); these results follow since \( W_n^2 \) is the sample mean corresponding to a random sample of size \( n \) from the distribution of \( (X - \mu)^2 \). If \( \mu \) is unknown, then for \( n \in \N_+ \) the method of moments estimator of \( \sigma^2 \) based on \( \bs X_n \) is \[ T_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - M_n)^2 \] Now \( \bias(T_n^2) = -\sigma^2 / n \) for \( n \in \N_+ \), so \( T_n^2 \) is negatively biased and on average underestimates \( \sigma^2 \), although \( \bs T^2 = (T_1^2, T_2^2, \ldots) \) is asymptotically unbiased. Note also that \( T_n = \sqrt{\frac{n - 1}{n}} S_n \), where \( S_n^2 \) is the standard unbiased sample variance, so results for \( T_n^2 \) follow easily from results for \( S_n^2 \). Moreover \( \mse(T_n^2) \to 0 \) as \( n \to \infty \); this limit is simple, since the coefficients of \( \sigma_4 \) (the fourth central moment) and \( \sigma^4 \) in \( \mse(T_n^2) \) are asymptotically \( 1 / n \) as \( n \to \infty \). Which estimator is better in terms of bias? Clearly \( W^2 \) and \( S^2 \), which are unbiased. Which estimator is better in terms of mean square error? Surprisingly, \( T^2 \) has smaller mean square error even than \( W^2 \). For the normal distribution the moment equations are already solved for \( \mu \) and \( \sigma^2 \) (a standard normal distribution has mean equal to 0 and variance equal to 1), and, again, for this example the method of moments estimators are the same as the maximum likelihood estimators.

Next we consider estimators of the standard deviation \( \sigma \). The following sequence, defined in terms of the gamma function, turns out to be important in the analysis of these estimators: \[ a_n = \sqrt{\frac{2}{n}} \, \frac{\Gamma[(n + 1)/2]}{\Gamma(n/2)}, \quad n \in \N_+ \] Suppose that the sampling distribution is normal and that \( \mu \) is known, and let \( W = \sqrt{W_n^2} \). Then \( U = \sqrt{n} \, W / \sigma \) has the chi distribution with \( n \) degrees of freedom. Solving gives \[ W = \frac{\sigma}{\sqrt{n}} U \] From the formulas for the mean and variance of the chi distribution we have \begin{align*} \E(W) & = \frac{\sigma}{\sqrt{n}} \E(U) = \frac{\sigma}{\sqrt{n}} \sqrt{2} \frac{\Gamma[(n + 1) / 2]}{\Gamma(n / 2)} = \sigma a_n \\ \var(W) & = \frac{\sigma^2}{n} \var(U) = \frac{\sigma^2}{n}\left\{n - [\E(U)]^2\right\} = \sigma^2\left(1 - a_n^2\right) \end{align*} Thus \( W \) is negatively biased as an estimator of \( \sigma \) but asymptotically unbiased and consistent. Since \( a_{n - 1}\) involves no unknown parameters, the statistic \( S / a_{n-1} \) is an unbiased estimator of \( \sigma \). Beyond these analytic results, we can judge the quality of the estimators empirically, through simulations: compare the empirical bias and mean square error of \( S^2 \) and of \( T^2 \) to their theoretical values, for instance with samples of size \( n = 10 \) (a sample of size \( n = 10 \) from the Laplace distribution with location 0 makes an instructive non-normal comparison). A sketch follows below.
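The empirical comparison of \( T^2 \), \( S^2 \), and \( W^2 \) suggested above takes only a few lines of Python; the normal sampling distribution, \( n = 10 \), the seed, and 1000 runs are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma2, n, runs = 0.0, 1.0, 10, 1000

t2 = np.empty(runs); s2 = np.empty(runs); w2 = np.empty(runs)
for i in range(runs):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    t2[i] = x.var(ddof=0)            # T^2: divisor n (method of moments)
    s2[i] = x.var(ddof=1)            # S^2: divisor n - 1 (unbiased)
    w2[i] = np.mean((x - mu) ** 2)   # W^2: mu treated as known

for name, est in (("T^2", t2), ("S^2", s2), ("W^2", w2)):
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)
    print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```

With these settings the output typically shows \( T^2 \) with the most negative bias but the smallest mean square error, matching the surprising comparison quoted above.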
Because of these results, the biased sample variance \( T_n^2 \) will appear in many of the estimation problems for special distributions that we consider below. As an example, let's go back to our exponential distribution: given a collection of data that may fit the exponential distribution, we would like to estimate the parameter which best fits the data. Suppose \( X_i \), \( i = 1, 2, \ldots, n \), are iid exponential, with pdf \( f(x; \lambda) = \lambda e^{-\lambda x} \mathbf{1}(x > 0) \). The first moment is \( \mu_1(\lambda) = 1/\lambda \); alternatively, the moment generating function \( M_X(t) = \lambda/(\lambda - t) \) for \( t \lt \lambda \) can be used to obtain an expression for the first moment, since \( M_X'(0) = 1/\lambda \). The basic idea behind this form of the method is to equate the first sample moment about the origin, \( M_1 = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X} \), to the first theoretical moment \( \E(X) \); solving \( \bar{X} = 1/\hat\lambda \) gives \( \hat\lambda = 1/\bar{X} \). Note also that the exponential density belongs to an exponential family, and applying the factorization criterion shows that \( \sum_{i=1}^n X_i \) is a sufficient statistic for \( \lambda \). As a related exercise, one can find an estimator for \( \lambda \) with \( X \sim \operatorname{Poisson}(\lambda) \) using the second moment: since \( \E(X^2) = \lambda + \lambda^2 \), matching gives the equation \( \hat\lambda + \hat\lambda^2 = M^{(2)} \).

Exercise 6. Let \( X_1, X_2, \ldots, X_n \) be a random sample of size \( n \) from a distribution with probability density function \[ f(x, \theta) = \theta^{-2} x e^{-x/\theta}, \quad x > 0, \; \theta > 0 \] (a) Find the method of moments estimator of \( \theta \). This is a gamma density with shape 2 and scale \( \theta \), so \( \E(X) = 2\theta \) and the estimator is \( \hat\theta = M/2 \).

More generally, consider the gamma distribution with shape parameter \( k \) and scale parameter \( b \), so that \( \E(X) = k b \) and \( \var(X) = k b^2 \). In this case, we have two parameters for which we are trying to derive method of moments estimators, so we need two equations. The first equation gives \( k = \bar{X}/b \); substituting this into the second equation gives \[ k b^2 = \left(\frac{\bar{X}}{b}\right) b^2 = \bar{X} b = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2 \] so the estimators of the shape and scale are \[ U = \frac{M^2}{T^2}, \quad V = \frac{T^2}{M} \] If the shape \( k \) is known, the method of moments equation for \( V_k \) is \( k V_k = M \), so \( V_k = M/k \); then \( \E(V_k) = b \), so \( V_k \) is unbiased, and \( \var(V_k) = \var(M) / k^2 = k b^2 / (n k^2) = b^2 / k n \), so \( V_k \) is consistent. Suppose instead that \( k \) is unknown but \( b \) is known; then the single moment equation \( U_b \, b = M \) gives \( U_b = M/b \). Run the gamma estimation experiment 1000 times for several different values of the sample size \( n \) and the parameters \( k \) and \( b \); a sketch of such an experiment follows.
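A version of the gamma estimation experiment in Python; the true shape \( k = 3 \), scale \( b = 2 \), the seed, and the run counts are assumptions for the demo. It checks the estimators \( U = M^2/T^2 \) and \( V = T^2/M \) over 1000 runs.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
k, b = 3.0, 2.0        # true shape and scale (assumed for the demo)
n, runs = 500, 1000

u = np.empty(runs); v = np.empty(runs)
for i in range(runs):
    x = rng.gamma(shape=k, scale=b, size=n)
    m, t2 = x.mean(), x.var(ddof=0)   # sample mean M and biased variance T^2
    u[i] = m * m / t2                 # shape estimate U = M^2 / T^2
    v[i] = t2 / m                     # scale estimate V = T^2 / M

print(f"mean U = {u.mean():.3f} (k = {k}), mean V = {v.mean():.3f} (b = {b})")
```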
Suppose now that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the beta distribution with left parameter \(a\) and right parameter \(b\). When both parameters are unknown, the method of moments estimators of \(a\) and \(b\) are complicated, nonlinear functions of the sample moments \(M\) and \(M^{(2)}\). Matters are simpler when one parameter is known. Suppose that \(a\) is unknown, but \(b\) is known, and let \(U_b\) be the method of moments estimator of \(a\); matching the mean \( a/(a + b) \) to \( M \) and solving gives \[ U_b = b \frac{M}{1 - M} \]

Our basic assumption in the method of moments is that the sequence of observed random variables \( \bs{X} = (X_1, X_2, \ldots, X_n) \) is a random sample from a distribution; the hypergeometric model below is an example where the variables are dependent. Suppose that a population of \( N \) animals contains \( r \) tagged individuals, and that a sample of size \( n \) is drawn without replacement, with \( X_i \) indicating whether the \( i \)th animal drawn is tagged. The variables are identically distributed indicator variables, with \( \P(X_i = 1) = r / N \) for each \( i \in \{1, 2, \ldots, n\} \), but are dependent since the sampling is without replacement. Still, if the population size \( N \) is large compared to the sample size \( n \), the hypergeometric model is well approximated by the Bernoulli trials model. In the wildlife example, we would typically know \( r \) and would be interested in estimating \( N \) (though one can also assume both parameters unknown): matching \( r/N \) to the sample proportion \( M \) gives the method of moments estimator \( \hat N = r/M \). For Bernoulli trials with unknown success parameter, the method of moments estimator is the same as the maximum likelihood estimator, namely, the sample proportion; a sketch of the wildlife estimate follows.
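A sketch of the wildlife estimate \( \hat N = r/M \); the population size, number of tagged animals, sample size, and seed below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
N, r, n = 1000, 100, 80   # true population, tagged animals, sample size (assumed)

# Number of tagged animals in a sample drawn without replacement is
# hypergeometric(ngood = r, nbad = N - r, nsample = n).
tagged = rng.hypergeometric(ngood=r, nbad=N - r, nsample=n)

m = tagged / n                              # sample proportion of tagged animals
n_hat = r / m if m > 0 else float("inf")    # method of moments: E(X_i) = r / N
print(f"N_hat = {n_hat:.1f} (true N = {N})")
```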
Several closing remarks. Recall that \( \var(W_n^2) \lt \var(S_n^2) \) for \( n \in \{2, 3, \ldots\} \), but \( \var(S_n^2) / \var(W_n^2) \to 1 \) as \( n \to \infty \): knowing the mean helps, but the advantage disappears in large samples. A natural question is whether the generalized method of moments, and therefore the moment estimator of \( \lambda \), is simply obtained from the sample mean; for the exponential distribution it is, since \( \hat\lambda = 1/M \). There are several important special distributions with two parameters, and some of these are included in the computational exercises; for example, the standard Gumbel distribution (type I extreme value distribution) has distribution function \( F(x) = e^{-e^{-x}} \). Finally, moment matching is also used for distribution fitting in applied work: one fits a distribution with a third parameter for squared coefficient of variation \( c^2 \gt 1 \) (matching the first three moments, if possible), and the shifted exponential distribution or a convolution of exponential distributions for \( c^2 \lt 1 \); as an alternative, and for comparisons, the gamma distribution can be used for all \( c^2 \gt 0 \). A final empirical check of the variance comparison appears below.

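As a final empirical check of the closing remark that \( \var(S_n^2)/\var(W_n^2) \to 1 \), here is a short Python sketch; the normal data, seed, and the listed sample sizes are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
mu, sigma2, runs = 0.0, 1.0, 20000

for n in (2, 5, 10, 50, 200):
    x = rng.normal(mu, np.sqrt(sigma2), size=(runs, n))
    s2 = x.var(axis=1, ddof=1)            # S^2 (mean unknown)
    w2 = ((x - mu) ** 2).mean(axis=1)     # W^2 (mean known)
    print(f"n = {n:3d}: var(S^2)/var(W^2) = {s2.var() / w2.var():.3f}")
```

For normal data the exact ratio is \( n/(n-1) \), so the printed values should fall from about 2 at \( n = 2 \) toward 1 as \( n \) grows.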