We propose and study a method for partial covariate selection, which selects only the covariate values that fall in their effective ranges. The coefficient estimates based on the resulting data are more interpretable, as they are based on the effective covariates. This is in contrast to existing variable selection methods, in which each variable is selected or deleted as a whole. To test the validity of the partial variable selection, we extend the Wilks theorem to handle this case. Simulation studies are conducted to evaluate the performance of the proposed method, and it is applied to a real data analysis as an illustration.
Covariate, Effective range, Partial variable selection, Linear model, Likelihood ratio test
Variable selection is a common practice in biostatistics, and there is a vast literature on the topic. Commonly used methods include the likelihood ratio test [1], the Akaike information criterion (AIC) [2], the Bayesian information criterion (BIC) [3], the minimum description length [4,5], stepwise regression, and the Lasso [6]. Principal components analysis models linear combinations of the original covariates and reduces a large number of covariates to a handful of major principal components, but the result is not easy to interpret in terms of the original covariates. Stepwise regression starts from the full model and deletes covariates one by one according to some measure of statistical significance. May, et al. [7] addressed variable selection in artificial neural network models; Mehmood, et al. [8] gave a review of variable selection with partial least squares models; Wang, et al. [9] addressed variable selection in generalized additive partial linear models; and Liu, et al. [10] addressed variable selection in semiparametric additive partial linear models. The Lasso [6,11] and its variants [12,13] are used to select a few significant variables in the presence of a large number of covariates.
However, existing methods select only whole variables to enter the model, which may not be the most desirable in some biomedical practice. For example, in two heart disease studies [14,15], more than ten risk factors were identified by medical researchers over long-term investigations. With existing variable selection methods, some of these risk factors would be deleted wholly from the investigation. This is undesirable, since risk factors are really risky only when they fall into certain risk ranges. Deleting the whole variable in this case is therefore unreasonable; a more sensible approach is to find the risk ranges of these variables and delete the variable values in the non-risky ranges. In other studies, some covariate values may be just random errors that do not contribute to the response, and removing these values makes the model interpretation more accurate. In this sense we select a variable only where its values fall within some range. To our knowledge, a method for this kind of partial variable selection has not appeared in the literature, and developing one is the goal of our study. Note that in existing variable selection, whole variables are selected or deleted, while in our method, variables are partially selected or deleted, i.e., only some proportion of a variable's observations is selected or deleted. The latter is very different from the existing methods. In summary, with traditional variable selection methods, such as stepwise regression or the Lasso, each covariate is either removed wholly from the analysis or kept wholly in it. This is not always reasonable: some of the removed covariates may be partially effective, and removing all their values may yield misleading results, or at least cause a loss of information, while for the variables remaining in the model, not all their values are necessarily effective for the analysis.
With the proposed method, only the non-effective values of the covariates are removed, and the effective values are kept in the analysis. This is more reasonable than the all-or-nothing removal of the existing methods.
With the existing method of deleting whole variables, the validity of the selection can be justified by the Wilks result: under the null hypothesis that the deleted variables have no effect, twice the log-likelihood ratio is asymptotically chi-squared distributed. We extend the Wilks theorem to the case of the proposed partial variable deletion and use it to justify the partial deletion procedure. Simulation studies are conducted to evaluate the performance of the proposed method, and it is applied to a real data set as an illustration.
The observed data are $({y}_{i},{x}_{i})$ $(i=1,\dots,n)$, where ${y}_{i}$ is the response and ${x}_{i}\in {R}^{d}$ is the covariate vector of the $i$-th subject. Denote ${y}_{n}=({y}_{1},\dots,{y}_{n})'$ and ${X}_{n}=({x}_{1},\dots,{x}_{n})'$. Consider the linear model
$${y}_{n}={X}_{n}\beta+{\epsilon}_{n},\qquad(1)$$
where $\beta=({\beta}_{1},\dots,{\beta}_{d})'$ is the vector of regression parameters and ${\epsilon}_{n}=({\epsilon}_{1},\dots,{\epsilon}_{n})'$ is the vector of random errors, or residual departures from the linear model assumption. Without loss of generality we consider the case in which the ${\epsilon}_{i}$'s are independent and identically distributed (i.i.d.), i.e., with variance matrix $Var(\epsilon)={\sigma}^{2}{I}_{n}$, where ${I}_{n}$ is the $n$-dimensional identity matrix. When the ${\epsilon}_{i}$'s are not i.i.d., it is often assumed that $Var(\epsilon)=\Omega$ for some known positive-definite $\Omega$; the transformation ${\tilde{y}}_{n}={\Omega}^{-1/2}{y}_{n}$, ${\tilde{X}}_{n}={\Omega}^{-1/2}{X}_{n}$ and $\tilde{\epsilon}={\Omega}^{-1/2}\epsilon$ then gives the model ${\tilde{y}}_{n}={\tilde{X}}_{n}\beta+\tilde{\epsilon}$, in which the ${\tilde{\epsilon}}_{i}$'s are i.i.d. with $Var(\tilde{\epsilon})={I}_{n}$. When $\Omega$ is unknown, it can be estimated in various ways. So below we need only discuss the case in which the ${\epsilon}_{i}$'s are i.i.d.
We first give a brief review of the existing method of variable selection. Assume the model residual $\epsilon=y-{x}'\beta$ has some known density function $f(\cdot)$ (such as the normal), possibly with some unknown parameters. For simplicity of discussion we assume there are no unknown parameters. Then the log-likelihood is
$${\mathcal{l}}_{n}(\beta)=\sum_{i=1}^{n}\log f({y}_{i}-{x}_{i}'\beta).$$
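As a minimal numerical sketch of this log-likelihood (all data and names below are illustrative assumptions, with standard normal errors so that the MLE coincides with the least squares estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (the sizes and coefficients are assumptions, not from the paper).
n, d = 200, 3
X = rng.normal(size=(n, d))
beta_true = np.array([0.5, -1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

def log_likelihood(beta, y, X):
    """l_n(beta) = sum_i log f(y_i - x_i' beta), with f the standard normal density."""
    r = y - X @ beta
    return -0.5 * np.sum(r ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

# With standard normal errors the MLE of beta is the least-squares estimate.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

By construction, `log_likelihood` is maximized (over $\beta$) at `beta_hat`.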
Let $\widehat{\beta}$ be the maximum likelihood estimate (MLE) of $\beta$ (when $f(\cdot)$ is the standard normal density, $\widehat{\beta}$ is just the least squares estimate). Suppose we delete $k$ $(\le d)$ columns of ${X}_{n}$ and the corresponding components of $\beta$; denote the remaining covariate matrix by ${X}_{n}^{-}$, the resulting parameter vector by ${\beta}^{-}$, and the corresponding MLE by ${\widehat{\beta}}^{-}$. Then under the hypothesis ${H}_{0}$ that the deleted columns of ${X}_{n}$ have no effect, or equivalently that the deleted components of $\beta$ are all zero, asymptotically [1]
$2\left[{\mathcal{l}}_{n}\left(\widehat{\beta}\right)-{\mathcal{l}}_{n}\left({\widehat{\beta}}^{-}\right)\right]\stackrel{D}{\to}{\chi}_{k}^{2}$
where ${\chi}_{k}^{2}$ is the chi-squared distribution with $k$ degrees of freedom. For a given nominal level $\alpha$, let ${\chi}_{k}^{2}(1-\alpha)$ be the $(1-\alpha)$-th upper quantile of the ${\chi}_{k}^{2}$ distribution. If $2[{\mathcal{l}}_{n}(\widehat{\beta})-{\mathcal{l}}_{n}({\widehat{\beta}}^{-})]\ge {\chi}_{k}^{2}(1-\alpha)$, then ${H}_{0}$ is rejected at significance level $\alpha$, and it is not appropriate to delete these columns of ${X}_{n}$; otherwise we accept ${H}_{0}$ and delete these columns of ${X}_{n}$.
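The whole-column deletion test can be sketched as follows; the design, sample size, and the choice $k=2$ are illustrative assumptions, and the chi-squared critical value is obtained by Monte Carlo rather than from a table:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (assumed, not from the paper): the last k = 2 coefficients are zero.
n, d, k = 300, 4, 2
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

def max_loglik(y, X):
    """Gaussian log-likelihood maximized over beta (least-squares fit, sigma = 1)."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta_hat
    return -0.5 * np.sum(r ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

# Twice the log-likelihood ratio for deleting the last k columns.
lam = 2 * (max_loglik(y, X) - max_loglik(y, X[:, :d - k]))

# chi^2_k(0.95) quantile by Monte Carlo (scipy.stats.chi2.ppf would also do).
crit = np.quantile(rng.chisquare(df=k, size=200_000), 0.95)
deletion_valid = lam < crit   # accepting H0 means the two columns may be deleted
```

Since the reduced model is nested in the full one, `lam` is nonnegative by construction.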
There are other methods to select columns of ${X}_{n}$, such as AIC, BIC and their variants, as in the model selection field. In these methods, the optimal deletion of columns of ${X}_{n}$ corresponds to the best model selection, which optimizes the AIC or BIC. These methods are not as solid as the likelihood ratio test above, as they may sometimes depend on eye inspection to choose the model that optimizes the AIC or BIC.
All the above methods require the models under consideration to be nested within each other, i.e., one is a sub-model of the other. A more general model selection criterion is the minimum description length (MDL), a measure of complexity developed by Kolmogorov [4], Wallace and Boulton [16], and others. The Kolmogorov complexity is closely related to the entropy: for the output of a Markov information source, normalized by the length of the output, it converges almost surely (as the length of the output goes to infinity) to the entropy of the source. Let $\mathcal{G}=\{g(\cdot,\cdot)\}$ be a finite set of candidate models under consideration, and $\Theta=\{{\theta}_{j}:j=1,\dots,h\}$ be the set of parameters of interest. ${\theta}_{i}$ may or may not be nested within some other ${\theta}_{j}$, and ${\theta}_{i}$ and ${\theta}_{j}$ in $\Theta$ may have the same dimension but different parametrizations. Next consider a fixed density $f(\cdot|{\theta}_{j})$, with parameter ${\theta}_{j}$ running through a subset ${\Gamma}_{j}\subset {R}^{{k}_{j}}$, where ${k}_{j}$ is the dimension of ${\theta}_{j}$. To emphasize the index of the parameter, we denote the MLE of ${\theta}_{j}$ under model $f(\cdot|\cdot)$ by ${\widehat{\theta}}_{j}$ (instead of ${\widehat{\theta}}_{n}$, which would emphasize the dependence on the sample size), $I({\theta}_{j})$ the Fisher information for ${\theta}_{j}$ under $f(\cdot|\cdot)$, and $|I({\theta}_{j})|$ its determinant. The MDL criterion (for example, Rissanen [17], the review paper by Hansen and Yu [5], and references therein) chooses ${\theta}_{j}$ to minimize
$$-\sum_{i=1}^{n}\log f({Y}_{i}|{\widehat{\theta}}_{j})+\frac{{k}_{j}}{2}\log\frac{n}{2\pi}+\log{\int}_{{\Gamma}_{j}}\sqrt{|I({\theta}_{j})|}\,d{\theta}_{j},\qquad(j=1,\dots,h).\qquad(3)$$
This method does not require the models to be nested, but it still selects or deletes whole columns. The other existing methods for variable selection, such as stepwise regression and the Lasso, likewise delete or keep whole variables, and so do not apply to our problem.
Now we come to our question, which is non-standard; we are not aware of a formal method addressing it, but we believe the following formulation is of practical meaning. Consider deleting some of the components within fixed $k$ $(k\le d)$ columns of ${X}_{n}$, with deleted proportions ${\gamma}_{1},\dots,{\gamma}_{k}$ $(0<{\gamma}_{j}<1)$ for these columns. Denote by ${X}_{n}^{-}$ the remaining covariate matrix, which is ${X}_{n}$ with the deleted entries replaced by 0's. Before the partial deletion, the model is
${y}_{n}={X}_{n}\beta+{\epsilon}_{n}$
After the partial deletion of covariates, the model becomes
${y}_{n}={X}_{n}^{-}{\beta}^{-}+{\epsilon}_{n}$
Note that here $\beta$ and ${\beta}^{-}$ have the same dimension, as no covariate is completely deleted. $\beta$ is the vector of effects of the original covariates; ${\beta}^{-}$ is the vector of effects of the covariates after some possible partial deletion, i.e., the effects of the effective covariates. As an oversimplified example, suppose we have $n=5$ individuals, with responses ${y}_{n}=({y}_{1},{y}_{2},{y}_{3},{y}_{4},{y}_{5})$ and covariate vectors ${x}_{1}=(1.3,0.2,-1.5)'$, ${x}_{2}=(-0.1,0.9,-1.3)'$, ${x}_{3}=(1.1,1.4,-0.3)'$, ${x}_{4}=(0.8,1.2,-1.7)'$, ${x}_{5}=(1.0,2.1,-1.1)'$, and ${X}_{n}=({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5})$. Then $\beta$ is the vector of effects in the regression of ${y}_{n}$ on ${X}_{n}$. If we remove some seemingly insignificant covariate components, for example ${x}_{1}^{-}=(1.3,0,-1.5)'$, ${x}_{2}^{-}=(-0.1,0.9,0)'$, ${x}_{3}^{-}=(1.1,1.4,0)'$, ${x}_{4}^{-}=(0.8,1.2,-1.7)'$, ${x}_{5}^{-}=(1.0,2.1,-1.1)'$, and ${X}_{n}^{-}=({x}_{1}^{-},{x}_{2}^{-},{x}_{3}^{-},{x}_{4}^{-},{x}_{5}^{-})$, then ${\beta}^{-}$ is the vector of effects in the regression of ${y}_{n}$ on ${X}_{n}^{-}$. Thus, though $\beta$ and ${\beta}^{-}$ have the same structure, they have different interpretations. The problem can be formulated as testing the hypothesis:
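The toy example can be written out directly: partial deletion replaces the flagged entries of the design matrix by zeros rather than dropping whole columns (a sketch; the rows hold the subjects' covariate vectors):

```python
import numpy as np

# Rows are the subjects' covariate vectors x_1', ..., x_5' from the toy example.
X = np.array([[ 1.3, 0.2, -1.5],
              [-0.1, 0.9, -1.3],
              [ 1.1, 1.4, -0.3],
              [ 0.8, 1.2, -1.7],
              [ 1.0, 2.1, -1.1]])

# Partial deletion replaces selected entries by 0 instead of dropping whole columns.
X_minus = X.copy()
X_minus[0, 1] = 0.0   # second component of x_1 deleted
X_minus[1, 2] = 0.0   # third component of x_2 deleted
X_minus[2, 2] = 0.0   # third component of x_3 deleted
```

The two design matrices have the same shape, which is why $\beta$ and ${\beta}^{-}$ have the same dimension.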
${H}_{0}:\beta={\beta}^{-}\quad vs.\quad {H}_{1}:\beta\ne{\beta}^{-}$
If ${H}_{0}$ is accepted, the partial deletion is valid.
Note that, unlike the standard null hypothesis that some components of the parameter vector are zero, the above null hypothesis is not nested: ${\beta}^{-}$ is not a sub-vector of $\beta$, so the existing Wilks theorem for the likelihood ratio statistic does not directly apply here.
Let ${\mathcal{l}}_{n}^{-}(\beta)$ denote the corresponding log-likelihood based on the data $({y}_{n},{X}_{n}^{-})$, and ${\widehat{\beta}}^{-}$ the corresponding MLE. Since after the partial deletion ${\widehat{\beta}}^{-}$ is the MLE of $\beta$ under a constrained log-likelihood, while $\widehat{\beta}$ is the MLE under the full likelihood, we have ${\mathcal{l}}_{n}^{-}({\widehat{\beta}}^{-})\le {\mathcal{l}}_{n}(\widehat{\beta})$. Parallel to the log-likelihood ratio statistic for whole-variable deletion, for our case let
${\text{\Lambda}}_{n}=2\left[{\mathcal{l}}_{n}\left(\widehat{\beta}\right)-{\mathcal{l}}_{n}^{-}\left({\widehat{\beta}}^{-}\right)\right]$
Let $({j}_{1},\dots,{j}_{k})$ be the columns with partial deletions, and let ${C}_{{j}_{r}}=\{i:{x}_{{j}_{r},i}\ \text{is deleted},\ 1\le i\le n\}$ be the index set of the deleted covariate values in the ${j}_{r}$-th column $(r=1,\dots,k)$. Let $|{C}_{{j}_{r}}|$ be the cardinality of ${C}_{{j}_{r}}$, so that ${\gamma}_{r}=|{C}_{{j}_{r}}|/n$ $(r=1,\dots,k)$. For different ${j}_{r}$ and ${j}_{s}$, ${C}_{{j}_{r}}$ and ${C}_{{j}_{s}}$ may or may not share common elements. We first give Theorem 1 below for the simple case in which the index sets ${C}_{{j}_{r}}$ are mutually exclusive. Then in Corollary 1 we give the result for the more general case in which the ${C}_{{j}_{r}}$ need not be mutually exclusive.
For a given ${X}_{n}$, there are many different ways of partial column deletion, and we may use Theorem 1 to test each of them. Given a significance level $\alpha$, a deletion is valid at level $\alpha$ if ${\Lambda}_{n}<{\chi}^{2}(1-\alpha)$, where ${\chi}^{2}(1-\alpha)$ is the $(1-\alpha)$-th upper quantile of the ${\sum}_{j=1}^{k}{\gamma}_{j}{\chi}_{j}^{2}$ distribution, which can be computed by simulation for given $({\gamma}_{1},\dots,{\gamma}_{k})$.
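The simulation of this cut-off point can be sketched as follows, using the weights of Example 1 below as an illustration (the function name is ours):

```python
import numpy as np

def mixture_quantile(gammas, level=0.95, reps=200_000, seed=0):
    """Upper `level` quantile of sum_j gamma_j * chi2_j, with the chi2_j
    independent 1-df chi-squared variables (the limit law in Theorem 1),
    computed by Monte Carlo."""
    rng = np.random.default_rng(seed)
    draws = sum(g * rng.chisquare(df=1, size=reps) for g in gammas)
    return float(np.quantile(draws, level))

# Deletion proportions gamma = (1/10, 1/5, 1/4), as in Example 1.
q = mixture_quantile([0.1, 0.2, 0.25])
```

With a single weight of 1, the function recovers the ordinary ${\chi}_{1}^{2}(1-\alpha)$ quantile.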
The following theorem is a generalization of the Wilks theorem [1]. Deleting whole columns of ${X}_{n}$ corresponds to ${\gamma}_{j}=1$ $(j=1,\dots,k)$ in the theorem, which recovers the existing Wilks theorem.
Theorem 1: Under ${H}_{0}$, suppose ${C}_{{j}_{r}}\cap {C}_{{j}_{s}}=\varphi $, the empty set, for all $1\le r\ne s\le k$, then we have
${\text{\Lambda}}_{n}\stackrel{D}{\to}{\displaystyle \sum}_{j=1}^{k}{\gamma}_{j}{\chi}_{j}^{2}\text{}.$
where ${\chi}_{1}^{2},\dots,{\chi}_{k}^{2}$ are i.i.d. chi-squared random variables with 1 degree of freedom.
Note that in the Wilks problem the null hypothesis is that the coefficients corresponding to some variables are zero, so the null hypothesis is nested within the alternative; in our problem the null hypothesis concerns the coefficients corresponding to partially deleted variables, and it is not nested within the alternative. So the results of the two methods are not really comparable.
The case in which the ${C}_{{j}_{r}}$'s are not mutually exclusive is a bit more complicated. We first rewrite the sets ${C}_{{j}_{r}}$ so that
$${\cup}_{r=1}^{k}{C}_{{j}_{r}}={\cup}_{r=1}^{k}{\cup}_{{j}_{1},\dots,{j}_{r}}{D}_{{j}_{1},\dots,{j}_{r}}$$
where the ${D}_{{j}_{1},\dots,{j}_{r}}$'s are mutually exclusive: ${D}_{{j}_{1}},\dots,{D}_{{j}_{k}}$ are index sets belonging to one column of ${X}_{n}$ only; the ${D}_{{j}_{1},{j}_{2}}$'s are index sets common to columns ${j}_{1}$ and ${j}_{2}$ only; the ${D}_{{j}_{1},{j}_{2},{j}_{3}}$'s are index sets common to columns ${j}_{1}$, ${j}_{2}$ and ${j}_{3}$ only, and so on. In general some of the ${D}_{{j}_{1},\dots,{j}_{r}}$'s are empty. Let $|{D}_{{j}_{1},\dots,{j}_{r}}|$ be the cardinality of ${D}_{{j}_{1},\dots,{j}_{r}}$ and ${\gamma}_{{j}_{1},\dots,{j}_{r}}=|{D}_{{j}_{1},\dots,{j}_{r}}|/n$ $(r=1,\dots,k)$.
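This decomposition into mutually exclusive $D$-sets can be computed mechanically from the $C$-sets; the following is a sketch (the function name and the tiny example are ours, not from the text):

```python
from itertools import combinations

def disjoint_blocks(C):
    """Decompose possibly overlapping deletion index sets C[j] into the mutually
    exclusive sets D_{j1,...,jr} of Corollary 1: each block collects the indices
    deleted in exactly the columns listed in its key."""
    cols = sorted(C)
    D = {}
    for r in range(1, len(cols) + 1):
        for combo in combinations(cols, r):
            inside = set.intersection(*(C[j] for j in combo))
            outside = set().union(*(C[j] for j in cols if j not in combo))
            block = inside - outside        # deleted in exactly these columns
            if block:
                D[combo] = block
    return D

# Tiny hypothetical example: index 3 is deleted in both columns 1 and 2.
D = disjoint_blocks({1: {1, 2, 3}, 2: {3, 4}})
```

Here `D[(1,)] = {1, 2}`, `D[(2,)] = {4}` and `D[(1, 2)] = {3}`, and empty blocks are omitted.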
By examining the proof of Theorem 1, we get the following corollary which gives the result in the more general case.
Corollary 1: Under ${H}_{0}$, we have
${\text{\Lambda}}_{n}=2\left[{\mathcal{l}}_{n}\left(\widehat{\beta}\right)-{\mathcal{l}}_{n}^{-}\left({\widehat{\beta}}^{-}\right)\right]\stackrel{D}{\to}{\displaystyle \sum}_{r=1}^{k}{\displaystyle \sum}_{{j}_{1},\dots ,{j}_{r}}{\gamma}_{{j}_{1},\dots ,{j}_{r}}{\chi}_{{j}_{1},\dots ,{j}_{r}}^{2}$
where the ${\chi}_{{j}_{1},\mathrm{...},{j}_{r}}^{2}$'s are all independent chi-squared random variables with r-degrees of freedom $\left(r=1,\mathrm{...},k\right)$.
Below we give two examples to illustrate the use of Theorem 1 and Corollary 1.
Example 1: $n=1000$, $d=5$, $k=3$. Columns $(1,2,4)$ have partial deletions with ${C}_{1}=\{201,202,\dots,300\}$, ${C}_{2}=\{351,352,\dots,550\}$, ${C}_{3}=\{601,602,\dots,850\}$; the ${C}_{j}$'s have no overlap, and ${\gamma}_{1}=1/10$, ${\gamma}_{2}=1/5$, ${\gamma}_{3}=1/4$. So by Theorem 1, under ${H}_{0}$ we have
$2\left[{\mathcal{l}}_{n}\left(\widehat{\beta}\right)-{\mathcal{l}}_{n}^{-}\left({\widehat{\beta}}^{-}\right)\right]\stackrel{D}{\to}\frac{1}{10}{\chi}_{1}^{2}+\frac{1}{5}{\chi}_{2}^{2}+\frac{1}{4}{\chi}_{3}^{2}$
where all the chi-squared random variables are independent, each has 1 degree of freedom.
Example 2: $n=1000$, $d=5$, $k=3$. Columns $(1,2,4)$ have partial deletions with ${C}_{1}=\{101,102,\dots,300\}\cup\{651,652,\dots,750\}$, ${C}_{2}=\{201,202,\dots,350\}$, ${C}_{3}=\{251,252,\dots,300\}\cup\{701,702,\dots,800\}$. In this case the ${C}_{j}$'s overlap, so Theorem 1 cannot be used directly and we use Corollary 1. Then ${D}_{1}=\{101,\dots,200\}\cup\{651,\dots,700\}$, ${D}_{2}=\{301,\dots,350\}$, ${D}_{3}=\{751,\dots,800\}$, ${D}_{1,2}=\{201,\dots,250\}$, ${D}_{1,3}=\{701,\dots,750\}$, ${D}_{2,3}=\varphi$, ${D}_{1,2,3}=\{251,\dots,300\}$; ${\gamma}_{1}=3/20$, ${\gamma}_{2}=1/20$, ${\gamma}_{3}=1/20$, ${\gamma}_{1,2}=1/20$, ${\gamma}_{1,3}=1/20$, ${\gamma}_{2,3}=0$, ${\gamma}_{1,2,3}=1/20$. So by Corollary 1, under ${H}_{0}$ we have
$$2\left[{\mathcal{l}}_{n}\left(\widehat{\beta}\right)-{\mathcal{l}}_{n}^{-}\left({\widehat{\beta}}^{-}\right)\right]\stackrel{D}{\to}\frac{3}{20}{\chi}_{1}^{2}+\frac{1}{20}{\chi}_{2}^{2}+\frac{1}{20}{\chi}_{3}^{2}+\frac{1}{20}{\chi}_{1,2}^{2}+\frac{1}{20}{\chi}_{1,3}^{2}+\frac{1}{20}{\chi}_{1,2,3}^{2}$$
where all the chi-squared random variables are independent, with ${\chi}_{1}^{2}$, ${\chi}_{2}^{2}$ and ${\chi}_{3}^{2}$ each of 1 degree of freedom, ${\chi}_{1,2}^{2}$ and ${\chi}_{1,3}^{2}$ each of 2 degrees of freedom, and ${\chi}_{1,2,3}^{2}$ of 3 degrees of freedom.
Next, we discuss the consistency of the estimator ${\widehat{\beta}}^{-}$ under the null hypothesis ${H}_{0}$. Let ${x}^{-}={x}_{r}^{-}$ with probability ${\gamma}_{r}$ $(r=0,1,\dots,k)$, where ${x}_{r}^{-}$ is an i.i.d. copy of the ${x}_{i,r}^{-}$'s whose components have index in ${C}_{{j}_{r}}$; in particular ${C}_{{j}_{0}}$ is the index set of the covariates without partial deletion.
Theorem 2: Under conditions of Theorem 1,
$i){\widehat{\beta}}^{-}\to {\text{\beta}}_{\text{0}}\left(a.s.\right).$
$ii)\sqrt{n}\left({\widehat{\beta}}^{-}-{\text{\beta}}_{0}\right)\stackrel{D}{\to}N\left(0,\text{\Omega}\right)$
where
$\Omega={E}_{{\beta}_{0}}\left[\dot{\mathcal{l}}\left({\beta}_{0}\right)\dot{\mathcal{l}}'\left({\beta}_{0}\right)\right]=E\left[\left({x}^{-}-{\mu}^{-}\right)\left({x}^{-}-{\mu}^{-}\right)'\right]\int\frac{{\dot{f}}^{2}(\epsilon)}{f(\epsilon)}d\epsilon.$
To extend the results of Theorem 2 to the general case, we need some more notation. Let ${x}_{{j}_{1},\dots,{j}_{r}}^{-}$ be an i.i.d. copy of the data in the set ${D}_{{j}_{1},\dots,{j}_{r}}$, and let ${x}^{-}={x}_{{j}_{1},\dots,{j}_{r}}^{-}$ with probability ${\gamma}_{{j}_{1},\dots,{j}_{r}}$ $(r=0,1,\dots,k)$, where ${x}_{{j}_{1},\dots,{j}_{r}}^{-}$ is an i.i.d. copy of the ${x}_{i,{j}_{1},\dots,{j}_{r}}^{-}$'s whose components have index in ${C}_{{j}_{1},\dots,{j}_{r}}$.
Corollary 2: Under conditions of Corollary 1, results of Theorem 2 hold with ${x}^{-}$ given above.
Computationally $E\left[\left({x}^{-}-{\mu}^{-}\right)\left({x}^{-}-{\mu}^{-}\right)\text{'}\right]$ is well approximated by
$$E\left[\left({x}^{-}-{\mu}^{-}\right)\left({x}^{-}-{\mu}^{-}\right)\text{'}\right]\approx {\displaystyle \sum}_{r=0}^{k}\frac{\left|{D}_{{j}_{1},\dots ,{j}_{r}}\right|}{n}\frac{1}{\left|{D}_{{j}_{1},\dots ,{j}_{r}}\right|}{\displaystyle \sum}_{\left(i,j\right)\in {D}_{{j}_{1},\dots ,{j}_{r}}}\left({x}_{i,j}^{-}-{\widehat{\mu}}_{{j}_{1},\dots ,{j}_{r}}^{-}\right){\left({x}_{i,j}^{-}-{\widehat{\mu}}_{{j}_{1},\dots ,{j}_{r}}^{-}\right)}^{\text{'}},$$
where the notation ${\Sigma}_{(i,j)\in {D}_{{j}_{1},\dots,{j}_{r}}}$ denotes summation over those ${x}_{i,j}^{-}$'s with deletion index in ${D}_{{j}_{1},\dots,{j}_{r}}$, and ${\widehat{\mu}}_{{j}_{1},\dots,{j}_{r}}^{-}=\frac{1}{\left|{D}_{{j}_{1},\dots,{j}_{r}}\right|}{\Sigma}_{(i,j)\in {D}_{{j}_{1},\dots,{j}_{r}}}{x}_{i,j}^{-}$.
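The grouped approximation above can be sketched as follows, under the simplifying assumption that each observation is assigned to the deletion pattern of its row (the function name is ours):

```python
import numpy as np

def grouped_covariance(X_minus, deleted_mask):
    """Empirical version of E[(x^- - mu^-)(x^- - mu^-)']: group rows by their
    deletion pattern, center each group at its own mean, and weight the group's
    average outer product by its sample proportion."""
    n, d = X_minus.shape
    patterns = [tuple(row) for row in deleted_mask]
    total = np.zeros((d, d))
    for pat in set(patterns):
        idx = [i for i, p in enumerate(patterns) if p == pat]
        Z = X_minus[idx] - X_minus[idx].mean(axis=0)
        total += (len(idx) / n) * (Z.T @ Z) / len(idx)
    return total

# Sanity demo on hypothetical data: with no deletions this reduces to the
# ordinary (biased) sample covariance.
rng = np.random.default_rng(4)
Xd = rng.normal(size=(50, 3))
S = grouped_covariance(Xd, np.zeros((50, 3), dtype=bool))
```

With a single (empty) deletion pattern, the weighted sum collapses to the usual centered second-moment matrix.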
We illustrate the proposed method with two examples, Examples 3 and 4 below. The former rejects the null hypothesis ${H}_{0}$ while the latter accepts it. In each case we simulate $n=1000$ i.i.d. observations with response ${y}_{i}$ and covariates ${x}_{i}=({x}_{i1},{x}_{i2},{x}_{i3},{x}_{i4},{x}_{i5})$ $(i=1,\dots,n)$. We first generate the covariates, sampling the ${x}_{i}$'s from the 5-dimensional normal distribution with mean vector $\mu=(3.1,1.8,-0.5,0.7,1.5)'$ and a given covariance matrix $\Gamma$.
Then, given the covariates, the responses ${y}_{i}$ are generated as
${y}_{i}={x}_{i}'{\beta}_{0}+{\epsilon}_{i},\quad(i=1,\dots,n)$
where ${\beta}_{0}=(0.42,0.11,0.65,0.83,0.72)'$ and the ${\epsilon}_{i}$'s are i.i.d. $N(0,1)$.
A hypothesis test is conducted to examine whether the partial deletion is valid. The significance level is set at $\alpha=0.05$. The experiment is repeated 1000 times, and $Prop$ denotes the proportion of replications with ${\Lambda}_{n}>Q(1-\alpha)$, where $Q(1-\alpha)$ is the $(1-\alpha)$-th upper quantile of the distribution ${\sum}_{j=1}^{k}{\gamma}_{j}{\chi}_{j}^{2}$ given in Theorem 1, computed via simulation.
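One replication of this simulation can be sketched end to end as follows; we assume an identity covariance matrix $\Gamma$ for simplicity (the paper uses a given $\Gamma$) and apply the deletion rule of Example 3:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulation setup from the text; identity Gamma is a simplifying assumption.
n, d = 1000, 5
mu = np.array([3.1, 1.8, -0.5, 0.7, 1.5])
X = mu + rng.normal(size=(n, d))
beta0 = np.array([0.42, 0.11, 0.65, 0.83, 0.72])
y = X @ beta0 + rng.normal(size=n)

def max_loglik(y, X):
    """Gaussian log-likelihood at the least-squares fit (sigma = 1)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return -0.5 * np.sum(r ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

# Partial deletion: zero out entries with |x_ij| < 1/10.
mask = np.abs(X) < 0.1
X_minus = np.where(mask, 0.0, X)
gammas = mask.mean(axis=0)          # observed deletion proportion per column

lam = 2 * (max_loglik(y, X) - max_loglik(y, X_minus))

# Null quantile Q(1 - alpha) of sum_j gamma_j chi2_1 by simulation
# (using the non-overlapping form of Theorem 1 as a sketch).
draws = sum(g * rng.chisquare(df=1, size=100_000) for g in gammas if g > 0)
Q = float(np.quantile(draws, 0.95))
reject = lam > Q                    # rejection means the deletion is not valid
```

Repeating this over many replications and averaging the indicator `reject` gives the reported proportion $Prop$.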
Example 3: In this example, five data sets are generated as described above, with five different values of ${\beta}_{0}$. We wish to know whether covariate values with $\left|{x}_{ij}\right|<\frac{1}{10}$ can be deleted. The proportions $\gamma=({\gamma}_{1},\dots,{\gamma}_{k})$ of ${x}_{ij}$'s with $\left|{x}_{ij}\right|<\frac{1}{10}$ are shown for each data set, and the results are given in Table 1, whose five rows correspond to the five data sets. For each data set, the parameter $\beta$ is estimated, the test is conducted using the given $\gamma$, ${\Lambda}_{n}$ is computed, $Q(1-\alpha)$ is given, and the corresponding p-value is provided. Note that for our problem, a p-value smaller than $\alpha$ means a significant value of ${\Lambda}_{n}$, i.e., a significant difference between the regression coefficients of the original covariates and those of the covariates after partial deletion, which in turn implies that the null hypothesis should be rejected, i.e., the partial deletion should not be conducted (Table 1).
Table 1: The simulation result of $\gamma $, ${\text{\Lambda}}_{n}$, $Q\left(1-\alpha \right)$ and p-value according to ${\beta}_{0}$. View Table 1
We see that the p-values for rejecting ${H}_{0}$ are all smaller than 0.05 for the five sets of ${\beta}_{0}$. This suggests that covariate values with $\left|{x}_{ij}\right|<\frac{1}{10}$ should not be deleted at significance level $\alpha=0.05$.
Example 4: In this example the original $X$ is as in Example 3, but we now replace the entries in the first 100 rows and first three columns by noise $\epsilon$, where $\epsilon\sim N\left(0,\frac{1}{9}\right)$. The deletion proportion $\gamma=(0.1,0.1,0.1)$ is fixed, with the ${x}_{ij}$'s whose absolute values fall below the lower 0.1 quantile being deleted. We wish to see whether these noise values can be deleted, i.e., whether ${H}_{0}$ is accepted. The results are shown in Table 2.
Table 2: The simulation result of $\gamma $, ${\text{\Lambda}}_{n}$, $Q\left(1-\alpha \right)$ and p-value according to ${\beta}_{0}$. View Table 2
We see that the p-values for rejecting ${H}_{0}$ are all greater than 0.95 for the five sets of ${\beta}_{0}$. This suggests that the data provide strong evidence, at the 0.05 significance level, that the deleted values are noise and are not needed in the data set.
We analyze a data set from the Deprenyl and Tocopherol Antioxidative Therapy of Parkinsonism study, obtained from the National Institutes of Health (NIH) (for a detailed description and data link, see https://www.ncbi.nlm.nih.gov/pubmed/2515723). It was a multi-center, placebo-controlled clinical trial that aimed to determine a treatment for early Parkinson's disease patients to prolong the time until levodopa therapy is required. The number of patients enrolled was 800. The selected subjects were patients with untreated Parkinson's disease (stage I or II) of less than five years' duration who met other eligibility criteria. They were randomly assigned, according to a two-by-two factorial design, to one of four treatment groups: 1) Placebo; 2) Active tocopherol; 3) Active deprenyl; 4) Active deprenyl and tocopherol. Observation continued for $14\pm 6$ months, with re-evaluation every 3 months. At each visit, the Unified Parkinson's Disease Rating Scale (UPDRS), including its motor, mental and activities of daily living components, was evaluated. The statistical analysis was based on the 800 subjects. The results revealed no beneficial effect of tocopherol, while deprenyl was found to significantly prolong the time until levodopa therapy was required, reducing the risk of disability by 50 percent as measured by the UPDRS.
Our goal is to examine whether some of the covariates can be partially deleted. If traditional variable selection methods such as stepwise regression or the Lasso were used, some covariates would be removed wholly from the analysis. This is not very reasonable, since some of the removed covariates may be partially effective, and removing all their values may yield misleading results, or at least cause a loss of information. We use the proposed method to examine three response variables, PDRS, TREMOR and PIGD, and three covariates, Age, Motor and ADL, for each of these responses. The deleted covariate values are those below the $\gamma$-th data quantile, with $\gamma=0.01,0.02,0.03$ and 0.05. We examine each response and covariate one by one. The results are shown in Table 3, Table 4 and Table 5 below.
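The quantile-based deletion rule can be sketched as follows on synthetic placeholder data (the real analysis applies it to the study's Age, Motor and ADL columns; the function name is ours):

```python
import numpy as np

def delete_below_quantile(X, col, gamma):
    """Zero out the values of column `col` lying below the gamma-th sample
    quantile, returning the partially deleted design and the deleted index set."""
    cutoff = np.quantile(X[:, col], gamma)
    deleted = np.flatnonzero(X[:, col] < cutoff)
    X_minus = X.copy()
    X_minus[deleted, col] = 0.0
    return X_minus, deleted

# Placeholder covariates standing in for the study data.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
X_minus, idx = delete_below_quantile(X, col=1, gamma=0.05)
```

The resulting `X_minus` is then used in place of ${X}_{n}^{-}$ when computing ${\Lambda}_{n}$ for the corresponding covariate and proportion.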
Table 3: Response TREMOR: ${\text{\Lambda}}_{n}$ values and estimated regression coefficients. View Table 3
Table 4: Response PIGD: ${\text{\Lambda}}_{n}$ values and estimated regression coefficients. View Table 4
Table 5: Response PDRS: ${\text{\Lambda}}_{n}$ values and estimated regression coefficients. View Table 5
In Table 3, the response TREMOR is examined. For the covariate Age, the likelihood ratio ${\Lambda}_{n}$ is larger than the cut-off point $Q(1-\alpha)$ at all the deletion proportions, suggesting that for Age no partial deletion at these proportions should be performed. For the covariate Motor, ${\Lambda}_{n}$ is smaller than the cut-off point $Q(1-\alpha)$ at the 0.01 proportion, so this covariate can be partially deleted at this proportion. In other words, the values of Motor below its 1% quantile have no impact on the analysis, can be treated as noise, and should be removed from the analysis. For the covariate ADL, with deletion proportions 0.01-0.1, the likelihood ratio ${\Lambda}_{n}$ is smaller than $Q(1-\alpha)$, which suggests that the lower 1%-10% of this covariate's values have no impact on the analysis and should be deleted. After removing the corresponding proportions of Motor and ADL, the model is re-fitted to obtain the parameter estimates shown there. These estimates are more meaningful than the ones based on the whole covariate data, since the noise values of the covariates have been removed and only the effective covariate values enter the analysis. However, if traditional variable selection methods such as stepwise regression or the Lasso were used, the whole covariate Motor, ADL, or both might be removed, leading to loss of information or even misleading results.
In Table 4, the response PIGD is investigated. For the covariate Age, ${\Lambda}_{n}$ is larger than the cut-off point $Q(1-\alpha)$ at the 0.02, 0.03 and 0.05 proportions, suggesting that partial deletion at these proportions is not appropriate. For the covariate Motor, ${\Lambda}_{n}$ is smaller than the cut-off point $Q(1-\alpha)$ at the deletion proportions 0.02 and 0.03, suggesting that the lower 2-3% of its values should be deleted from the analysis. For the covariate ADL, ${\Lambda}_{n}$ is larger than the cut-off point $Q(1-\alpha)$ at the deletion proportions 0.02, 0.03 and 0.05, hence partial deletion at these proportions is not valid. After deleting the 3% smallest values of Motor, the model is re-fitted to obtain the parameter estimates shown in Table 4. The new estimates are more meaningful since the non-effective values of the covariate Motor are removed from the analysis.
In Table 5, the response is PDRS. The likelihood ratios ${\Lambda}_{n}$ for Age, Motor and ADL are all larger than ${\chi}^{2}(1-\alpha)$ at the deletion proportions 0.01, 0.02, 0.03 and 0.05. Thus the null hypotheses are rejected at all these proportions, i.e., no deletion is valid at these proportions, and the analysis should be based on the original full data, with the parameter estimates shown in the tables (Table 3, Table 4, and Table 5).
Note that the coefficient for Age is insignificant, and hence the corresponding ${\Lambda}_{n}$ values at the deleted proportions are not meaningful.
We have proposed a method for partial variable deletion, in which only some proportion of a covariate's values is deleted. This is in contrast to the existing methods, which either select or delete entire variables. The method is thus new and is a generalization of existing variable selection. The question is motivated by practical problems. The method can be used to find the effective ranges of the covariates, or to remove possible noise in the covariates, so that the corresponding estimated effects are more interpretable. The proposed test statistic is a generalization of the Wilks likelihood ratio statistic; its asymptotic distribution is in general a chi-squared mixture, whose cut-off point can be computed by simulation. Simulation studies are conducted to evaluate the performance of the method, and it is applied to a real Parkinson's disease data set as an illustration. A drawback of the current version of the method is that it requires specifying the proportions of possible deletion for the variables, which makes the optimal proportions hard to find. In our next step of research we will try to implement an algorithm that finds the optimal proportions automatically and is easier to use. As suggested by a reviewer, simulation studies should be performed to test the statistical significance of differences between the proposed method and existing variable selection methods, to quantify the contribution of the proposed method. This is a potential direction for our future research (Appendix).
This research was supported by the Intramural Research Program of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.