One of the goals of RNA-seq is to detect differential gene/transcript expression (for the sake of simplicity, we use the term "gene" to represent gene or transcript in this document). We recommend performing differential expression analysis on normalized, rather than raw, data.
Tools for differential expression are available on Gene/Transcript counts and Normalized counts data nodes, under the RNA-Seq analysis section of the context-sensitive menu. The available options, described in the sections below, are GSA, ANOVA, and Cuffdiff.
GSA stands for gene-specific analysis; its goal is to identify the statistical model that fits best for a specific gene among all the selected models, and then use that best model to calculate the p-value and fold change.
The first step of GSA is to choose which attributes to include in the test (Figure 1). All sample attributes, both numeric and categorical, are displayed in the dialog; use the check boxes to select among them. An experiment with two attributes, Cell type (with groups A and B) and Time (time points 0, 5, and 10), is used as an example in this section.
Click Next to display the levels of each attribute to be selected for sub-group comparisons (contrasts).
To compare A vs. B, select A for Cell type on the top, B for Cell type on the bottom and click Add comparison. The specified comparison is added to the table below (Figure 2).
To compare Time point 5 vs. 0, select 5 for Time on the top, 0 for Time on the bottom, and click Add comparison (Figure 3).
To compare cell types at a certain time point, e.g. time point 5, select A and 5 on the top, and B and 5 on the bottom. Thereafter click Add comparison (Figure 4).
Multiple comparisons can be computed in one GSA run; Figure 5 shows all three comparisons above added to the computation.
In terms of the design pool, i.e. the choices of model designs to select from, the two factors in this example lead to seven possibilities:

Cell type
Time
Cell type * Time
Cell type, Time
Cell type, Cell type * Time
Time, Cell type * Time
Cell type, Time, Cell type * Time
In GSA, if a 2nd order interaction term is present in the design, then all of its first order terms must also be present, which means that if the Cell type * Time interaction is present, both factors must be included in the model. In other words, the following designs are not considered:

Cell type * Time
Cell type, Cell type * Time
Time, Cell type * Time
If a comparison is added, models that do not contain the compared factors are also eliminated. E.g. if a comparison of Cell type A vs. B is added, only designs that include the Cell type factor remain in the computation. These are:

Cell type
Cell type, Time
Cell type, Time, Cell type * Time
The more comparisons on different terms are added, the fewer models are included in the computation. If comparisons on both Cell type (A vs. B) and Time (5 vs. 0) are added in one GSA run, only the following two models will be computed:

Cell type, Time
Cell type, Time, Cell type * Time
If comparisons on all three terms (Cell type, Time, and Cell type * Time) are added in one GSA run, then only one model will be computed:

Cell type, Time, Cell type * Time
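The pruning logic above can be summarized in a short standalone sketch (plain Python, not Partek Flow code) that enumerates the candidate designs, applies the hierarchy rule, and keeps only designs that contain every compared term:

```python
from itertools import combinations

# Candidate model terms for the two-factor example (Cell type, Time).
MAIN_EFFECTS = ["Cell type", "Time"]
INTERACTION = "Cell type * Time"
ALL_TERMS = MAIN_EFFECTS + [INTERACTION]

def candidate_designs():
    """All non-empty subsets of the three terms: the 7-design pool."""
    for r in range(1, len(ALL_TERMS) + 1):
        for design in combinations(ALL_TERMS, r):
            yield set(design)

def is_hierarchical(design):
    """If the interaction is present, both main effects must also be present."""
    return INTERACTION not in design or all(m in design for m in MAIN_EFFECTS)

def design_pool(comparison_terms):
    """Hierarchical designs that contain every term being compared."""
    return [d for d in candidate_designs()
            if is_hierarchical(d) and set(comparison_terms) <= d]

print(len(list(candidate_designs())))                        # 7 candidate designs
print(len(design_pool({"Cell type"})))                       # 3 designs remain
print(len(design_pool({"Cell type", "Time"})))               # 2 designs remain
print(len(design_pool({"Cell type", "Time", INTERACTION})))  # 1 design remains
```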
If GSA is invoked from a quantification output data node directly, you will have the option to use the default normalization methods before performing differential expression detection (Figure 6).
If advanced normalization needs to be applied, perform the Normalize counts task on a quantification data node before doing differential expression detection (GSA or ANOVA).
Click on Configure to customize Advanced options (Figure 7).
The Low-expression feature section allows you to specify criteria for excluding features that do not meet the requirements for the calculation.
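As an illustration of this kind of filter, the hypothetical sketch below removes features whose maximum value across samples falls below a chosen threshold; the actual criteria and thresholds offered in the dialog may differ:

```python
import numpy as np

def filter_low_expression(counts, min_max_coverage=1.0):
    """Keep features whose maximum value across samples reaches the threshold.

    counts: 2-D array with rows = features and columns = samples.
    Returns the filtered matrix and a boolean mask of the kept rows.
    """
    keep = counts.max(axis=1) >= min_max_coverage
    return counts[keep], keep

# Toy example: 4 features x 3 samples; the all-zero feature is excluded.
counts = np.array([[10.0, 12.0,  9.0],
                   [ 0.0,  1.0,  2.0],
                   [ 5.0,  0.0,  7.0],
                   [ 0.0,  0.0,  0.0]])
filtered, kept = filter_low_expression(counts, min_max_coverage=1.0)
print(kept)   # [ True  True  True False]
```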
Multiple test correction can be performed on the p-values of each comparison, with FDR step-up being the default (1). If you check the Storey q-value (2), an extra column with q-values will be added to the report.
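For reference, the FDR step-up correction is the Benjamini-Hochberg procedure; a minimal sketch of how adjusted p-values are obtained from a vector of raw p-values is shown below (the Storey q-value additionally estimates the proportion of true null hypotheses and is not reproduced here):

```python
import numpy as np

def fdr_step_up(pvalues):
    """Benjamini-Hochberg step-up adjusted p-values for one comparison."""
    p = np.asarray(pvalues, dtype=float)
    n = p.size
    order = np.argsort(p)                              # ascending raw p-values
    scaled = p[order] * n / np.arange(1, n + 1)        # p * n / rank
    # enforce monotonicity, stepping up from the largest p-value
    adjusted_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(adjusted_sorted, 0.0, 1.0)
    return adjusted

print(fdr_step_up([0.001, 0.01, 0.03, 0.04, 0.2]))
# [0.005 0.025 0.05  0.05  0.2  ]
```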
This section configures how to select the best model for a feature. There are two options for Model selection criterion: AICc (Akaike Information Criterion corrected) and AIC (Akaike Information Criterion). AICc is recommended for small sample sizes, while AIC is recommended for medium and large sample sizes (3). Note that as the sample size grows from small to medium, AICc converges to AIC. Taking the AICc/AIC value into account, GSA considers the model with the lowest information criterion as the best choice.
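As a reminder of how these criteria are defined, the sketch below computes AIC and AICc from a model's log-likelihood, number of parameters k, and sample size n, and picks the model with the lowest score (the log-likelihood values are hypothetical):

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

def aicc(log_likelihood, k, n):
    """Small-sample corrected AIC; converges to AIC as n grows."""
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

# Two candidate models for one gene (hypothetical log-likelihoods and k).
models = {"Cell type": (-52.3, 3), "Cell type, Time": (-50.1, 5)}
n_samples = 12
scores = {name: aicc(ll, k, n_samples) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)    # lowest AICc wins
print(scores, best)
```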
In the results, the best model's Akaike weight is also reported. The weight is interpreted as the probability that the model would be picked as the best if the study were reproduced. The Akaike weight ranges from 0 to 1, where a value close to 1 means the best model is clearly superior to the other candidates in the model pool; if the best model's Akaike weight is close to 0.5, on the other hand, the best model would likely be replaced by another candidate if the study were reproduced. The best model is still used in that case, but the confidence in it is fairly low.
The default value for Enable multimodel approach is Yes. This means that the estimation utilizes all models in the pool by assigning weights to them based on AIC or AICc. If No is selected instead, the estimation is based only on the single best model, i.e. the one with the smallest AIC or AICc. The output p-value differs depending on the selected option, but the fold change is the same. The multimodel approach is recommended when the best model's Akaike weight is not close to 1, meaning that the best model is not compelling.
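A minimal sketch of how Akaike weights are computed from AIC/AICc values, and how they could be used to weight per-model estimates in a multimodel average, is shown below (the numbers are hypothetical and the exact averaging performed by GSA may differ):

```python
import numpy as np

def akaike_weights(criteria):
    """Akaike weights: w_i = exp(-(c_i - c_min)/2) / sum_j exp(-(c_j - c_min)/2)."""
    c = np.asarray(criteria, dtype=float)
    rel_likelihood = np.exp(-0.5 * (c - c.min()))
    return rel_likelihood / rel_likelihood.sum()

# Hypothetical AICc values for three candidate models of one gene.
weights = akaike_weights([110.4, 111.0, 115.9])
print(weights.round(3))   # roughly [0.55, 0.41, 0.04]: the best model is not compelling

# Multimodel idea: weight per-model estimates instead of trusting only the best model.
per_model_estimates = np.array([1.8, 2.1, 2.6])     # hypothetical effect estimates
print(float(np.dot(weights, per_model_estimates)))  # model-averaged estimate
```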
There are situations when a model estimation procedure does not fail outright, but still encounters some difficulties. In this case, it can still generate p-values and fold changes for the comparisons, but those values are not reliable and can be misleading. It is recommended to use only reliable estimation results, so the default option for Use only reliable estimation results is set to Yes.
Partek® Flow® provides five response distribution types for each design model in the pool, namely: Lognormal, Lognormal with shrinkage, Negative binomial, Normal, and Poisson.
We recommend using the lognormal with shrinkage distribution (the default); an experienced user may click on Custom to configure the model type and p-value type (Figure 8).
If multiple distribution types are selected, then the total number of models evaluated for each feature is the product of the number of design models and the number of distribution types. In the above example, suppose we have only compared A vs. B in Cell type, as in Figure 2; then the design model pool will contain the following three models:

Cell type
Cell type, Time
Cell type, Time, Cell type * Time
If we select Lognormal with shrinkage and Negative binomial, i.e. two distribution types, the best model fit for each feature will be selected from 3 * 2 = 6 models using AIC or AICc.
The design pool can also be restricted by Min error degrees of freedom. The minimal error degrees of freedom is set to the largest k (where k represents the error degrees of freedom of a model) in the design model pool, with 0 <= k <= 6, for which admissible models exist. An admissible model is one that can be estimated given the specified contrasts. In the above example, when we compare A vs. B in Cell type, there are three possible design models. The error degrees of freedom of the model Cell type is the largest and that of the model Cell type, Time, Cell type * Time is the smallest:
k(Cell type) > k(Cell type, Time) > k(Cell type, Time, Cell type * Time)
If the sample size is large enough that k >= 6 for all three models, all of them will be evaluated and the best model will be selected for each feature. However, if the sample size is so small that none of the models have k >= 6, only the model with the maximal k will be used in the calculation.
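As a rough illustration, assuming the error degrees of freedom of a fixed-effects design equal the number of samples minus the number of estimated parameters, the sketch below computes k for the three candidate designs with two replicates per Cell type x Time combination:

```python
# Error degrees of freedom k = number of samples minus number of estimated
# parameters, for a balanced fixed-effects layout (a simplification).
N_CELL_TYPES, N_TIME_POINTS = 2, 3
PARAMETERS = {
    "Cell type":                         N_CELL_TYPES,                      # intercept + 1
    "Cell type, Time":                   N_CELL_TYPES + N_TIME_POINTS - 1,  # intercept + 1 + 2
    "Cell type, Time, Cell type * Time": N_CELL_TYPES * N_TIME_POINTS,      # all 6 cell means
}

n_samples = 12   # two replicates per Cell type x Time combination
for design, p in PARAMETERS.items():
    print(design, "k =", n_samples - p)
# Prints k = 10, 8 and 6; with 12 samples every design reaches k >= 6,
# so all three would pass the minimum error degrees of freedom cutoff.
```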
There are two types of p-value: F and Wald. Poisson, negative binomial, and normal models can generate p-values using either Wald or F statistics, while lognormal models always employ the F statistic. The more replicates in the study, the smaller the difference between the two options. When there are no replicates, only the Poisson distribution can be used, and the p-value is generated using the Wald statistic.
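The sketch below illustrates the general distinction for a simple two-group comparison under a normal model: the same statistic is referred either to a chi-square distribution (Wald) or to an F distribution with finite error degrees of freedom, and the two p-values converge as the number of replicates grows. This is a generic illustration, not GSA's exact implementation:

```python
import numpy as np
from scipy import stats

def wald_and_f_pvalues(x, y):
    """Two-group mean comparison under a normal model: the same statistic is
    referred to a chi-square (Wald) or an F distribution (finite error df)."""
    n1, n2 = len(x), len(y)
    df_err = n1 + n2 - 2
    diff = np.mean(x) - np.mean(y)
    pooled_var = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / df_err
    stat = diff ** 2 / (pooled_var * (1 / n1 + 1 / n2))
    return stats.chi2.sf(stat, df=1), stats.f.sf(stat, dfn=1, dfd=df_err)

rng = np.random.default_rng(0)
for n in (3, 30, 300):                     # replicates per group
    x = rng.normal(1.0, 1.0, n)
    y = rng.normal(0.0, 1.0, n)
    print(n, wald_and_f_pvalues(x, y))     # the two p-values converge as n grows
```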
Partek Flow keeps track of the log status of the data; whether or not GSA is performed on logged data, the fold change is always calculated in linear scale.
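For example, if the input data are log2-transformed, the difference of group means is converted back to a linear-scale ratio. The sketch below also applies the common signed fold-change convention (negative values for down-regulation), which may differ in detail from the report's exact convention:

```python
import numpy as np

def linear_fold_change(group1_log2, group2_log2):
    """Signed linear-scale fold change from log2-transformed values.

    The difference of group means is converted back to a ratio with 2**diff;
    ratios below 1 are reported as negative fold changes (down-regulation).
    """
    ratio = 2.0 ** (np.mean(group1_log2) - np.mean(group2_log2))
    return ratio if ratio >= 1 else -1.0 / ratio

# log2-normalized expression of one gene in Cell type A vs. Cell type B
a = [7.1, 7.4, 7.2]
b = [8.3, 8.1, 8.2]
print(linear_fold_change(a, b))   # about -2, i.e. roughly 2-fold lower in A than in B
```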
If there are multiple design models and multiple distribution types included in the calculation, the fraction of genes using each model and type will be displayed as pie charts in the task result (Figure 9).
The feature list, with the p-value and fold change generated from the selected best model, is displayed in a table together with other statistical information (Figure 10).
The following information is included in the table by default:
By clicking on Optional columns, you can retrieve additional annotation, such as genomic location and strand information, if it is available in the annotation model you specified for quantification.
To the right of each contrast header there is a volcano plot icon. Select it to display the volcano plot for the chosen contrast (Figure 11).
The feature list filter panel is on the left of the table (Figure 12). Click on the black triangle to collapse or expand the panel.
Select the check box of a field, specify the cutoff, and press Enter to apply it. After the filter has been applied, the total number of included features is updated at the top of the panel (Result).
If the lognormal with shrinkage method was selected for GSA, a shrinkage plot is generated in the report (Figure 13). The X-axis shows the log2 value of the average coverage. The plot helps to determine the threshold for low-expression features: if there is an initial increase before the monotone decreasing trend on the left side of the plot, you should set a higher threshold on the low-expression filter. Detailed information on how to set the threshold can be found in the GSA white paper.
The ANOVA method applies a single specified lognormal model to all features.
To set up the ANOVA model, select factors from the sample attributes; factors can be categorical or numeric. Click a check box to select an attribute and click the Add factors button to add it to the model (Figure 14).
When more than one factor is selected, the Add interaction button is enabled to allow you to specify interaction terms.
Once a factor is added to the model (Figure 14), you can specify whether the factor is a random effect (check Random check box) or not.
Most factors in an analysis of variance are fixed factors, i.e. the levels of that factor represent all the levels of interest. Examples of fixed factors include gender, race, strain, etc. However, in experiments that are more complex, a factor can be a random effect, meaning the levels of the factor only represent a random sample of all of the levels of interest. Examples of random effects include subject and batch.
Consider the example where one factor is type (with levels normal and diseased), and another factor is subject (the subjects selected for the experiment). In this example,
“type” is a fixed factor since the levels normal and diseased represent all conditions of interest. “Subject”, on the other hand, is a random effect since the subjects are only a random sample of all the levels of that factor. When a model has both fixed and random effects, it is called a mixed model.
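As an illustration of a mixed model with a fixed type effect and a random subject effect, here is a minimal sketch using the statsmodels Python package on hypothetical data (this is not the model fitting code used by Partek Flow):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data for one gene: each subject is measured under
# both conditions, so "type" is a fixed effect and "subject" a random effect.
data = pd.DataFrame({
    "expression": [7.1, 7.9, 6.8, 7.6, 7.3, 8.2, 6.9, 7.7, 7.0, 7.8, 7.2, 8.0],
    "type":       ["normal", "diseased"] * 6,
    "subject":    ["s1", "s1", "s2", "s2", "s3", "s3",
                   "s4", "s4", "s5", "s5", "s6", "s6"],
})

# Mixed model: fixed effect for type, random intercept per subject.
model = smf.mixedlm("expression ~ type", data, groups=data["subject"])
result = model.fit()
print(result.summary())   # fixed-effect estimate for diseased vs. normal
```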
When more than one factor is added to the model, click on the Cross tabulation link at the bottom to view the relationship between the factors (Figure 15).
Once the model is set, click the Next button to set up comparisons (contrasts) (Figure 16).
Start by choosing a factor or interaction from the Factor drop-down list. The subgroups of the factor or interaction will be displayed in the left panel; click to select a subgroup name and move it to one of the panels on the right. The fold change calculation for the comparison uses the group in the top panel as the numerator and the group in the bottom panel as the denominator. Click the Add comparison button to add a comparison to the comparisons table. Note that multiple comparisons can be added to the specified model.
Click on Configure to customize Advanced options (Figure 17).
The Low-expression feature and Multiple test correction sections are the same as the matching GSA advanced options described above.
Report option
Use only reliable estimation results: There are situations when a model estimation procedure does not fail outright, but still encounters some difficulties. In this case, it can still generate p-values and fold changes for the comparisons, but they are not reliable and can be misleading. Therefore, the default of Use only reliable estimation results is set to Yes.
Display p-value for effects: If set to No, only the p-values of the comparisons are displayed in the report; the p-values of the factors and interaction terms are not shown in the report table. If set to Yes, type III p-values are displayed for all non-random terms in the model, in addition to the comparisons' p-values.
Report partial correlations: If the model has numeric factor(s), choosing Yes displays the partial correlation coefficient(s) of the numeric factor(s) in the result table; choosing No hides them.
Data has been log transformed with base: shows the current scale of the input data for this task.
Since there is only one model for all features, there are no pie charts with design model and response distribution information. The gene list table format is the same as in the GSA report.
This option is only available when a Cufflinks quantification data node is selected. Detailed implementation information can be found in the Cuffdiff manual [5].
When the task is selected, the dialog displays all categorical attributes with more than one subgroup (Figure 18).
When an attribute is selected, pairwise comparisons of all the levels will be performed independently.
Click the Configure button in the Advanced options to configure the normalization method and library type (Figure 19).
There are three library normalization methods:
Geometric: FPKM values are scaled via the median of the geometric means of the fragment counts across all libraries [6]. This is the default option and is identical to the method used by DESeq; a sketch of this computation follows below.
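A minimal sketch of this median-of-geometric-means (DESeq-style) size factor computation on a toy count matrix is shown below; Cuffdiff applies an analogous scaling to its FPKM values:

```python
import numpy as np

def geometric_size_factors(counts):
    """DESeq-style size factors: for each library, the median ratio of its
    counts to the per-feature geometric mean across libraries.

    counts: 2-D array with rows = features and columns = libraries.
    Features containing any zero count are ignored, as in DESeq.
    """
    positive = np.all(counts > 0, axis=1)
    log_counts = np.log(counts[positive])
    log_geo_mean = log_counts.mean(axis=1)          # per-feature geometric mean (log)
    ratios = log_counts - log_geo_mean[:, None]     # log(count / geometric mean)
    return np.exp(np.median(ratios, axis=0))

counts = np.array([[100.0, 200.0, 150.0],
                   [ 50.0, 100.0,  80.0],
                   [ 10.0,  20.0,  15.0],
                   [  0.0,   5.0,   2.0]])
print(geometric_size_factors(counts))   # roughly [0.69, 1.39, 1.04]
```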
The library types have three options:
The report of the Cuffdiff task is a feature list table with p-value, q-value, and log2 fold-change information for all the comparisons (Figure 20).
In the p-value column, besides an actual p-value, which means the test was performed successfully, there are also the following flags, which indicate that the test was not successful:
The table can be downloaded as a text file by clicking the Download button in the lower-right corner of the table.