comparison transit_gumbel.xml @ 2:c980be2c002c draft
"planemo upload for repository https://github.com/galaxyproject/tools-iuc/tree/master/tools/transit/ commit f63413d629e4de3c69984b3a96ad8ccfe0d47ada"
author | iuc |
---|---|
date | Tue, 08 Oct 2019 08:24:12 -0400 |
parents | 92496521fd39 |
children | ecce0cbe659f |
1:202a99525669 | 2:c980be2c002c |
---|---|
24 <expand macro="outputs" /> | 24 <expand macro="outputs" /> |
25 </outputs> | 25 </outputs> |
26 <tests> | 26 <tests> |
27 <test> | 27 <test> |
28 <param name="inputs" ftype="wig" value="transit-in1-rep1.wig,transit-in1-rep2.wig" /> | 28 <param name="inputs" ftype="wig" value="transit-in1-rep1.wig,transit-in1-rep2.wig" /> |
29 <param name="controls" ftype="wig" value="transit-co1-rep1.wig,transit-co1-rep2.wig,transit-co1-rep3.wig" /> | 29 <param name="annotation" ftype="tabular" value="transit-in1.prot" /> |
30 <param name="annotation" ftype="gff3" value="transit-in1.gff3" /> | |
31 <param name="samples" value="1000" /> | 30 <param name="samples" value="1000" /> |
32 <param name="burnin" value="100" /> | 31 <param name="burnin" value="100" /> |
33 <param name="replicates" value="Replicates" /> | 32 <param name="replicates" value="Replicates" /> |
34 <output name="sites" file="gumbel-sites1.txt" ftype="tabular" compare="sim_size" /> | 33 <output name="sites" file="gumbel-sites1.txt" ftype="tabular" compare="sim_size" /> |
35 </test> | 34 </test> |
50 | 49 |
51 **Inputs** | 50 **Inputs** |
52 | 51 |
53 ------------------- | 52 ------------------- |
54 | 53 |
55 Input files for HMM need to be: | 54 Input files for Gumbel need to be: |
56 | 55 |
57 - .wig files: Tabulated files containing one column with the TA site coordinate and one column with the read count at this site. | 56 - .wig files: Tabulated files containing one column with the TA site coordinate and one column with the read count at this site. |
58 - annotation .prot_table: Annotation file generated by the `Convert Gff3 to prot_table for TRANSIT` tool. | 57 - annotation .prot_table: Annotation file generated by the `Convert Gff3 to prot_table for TRANSIT` tool. |
59 | 58 |
60 | 59 |
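The .wig format described above is simple enough to parse directly. A minimal Python sketch, using made-up file contents (not the test files referenced above):

```python
# Minimal sketch of reading a TRANSIT-style .wig file:
# one line per TA site, "<coordinate> <read count>",
# with optional "variableStep"/comment header lines.
def read_wig(path):
    sites = {}
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            # skip headers such as "variableStep chrom=..." or comments
            if len(fields) != 2 or not fields[0].isdigit():
                continue
            sites[int(fields[0])] = float(fields[1])
    return sites

# Hypothetical example data, for illustration only
with open("example.wig", "w") as fh:
    fh.write("variableStep chrom=genome\n60 0\n137 25\n254 3\n")

counts = read_wig("example.wig")
print(counts)  # {60: 0.0, 137: 25.0, 254: 3.0}
```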
70 -m <integer> := Smallest read-count to consider. Default: -m 1 | 69 -m <integer> := Smallest read-count to consider. Default: -m 1 |
71 -t <integer> := Trims all but every t-th value. Default: -t 1 | 70 -t <integer> := Trims all but every t-th value. Default: -t 1 |
72 -r <string> := How to handle replicates. Sum or Mean. Default: -r Sum | 71 -r <string> := How to handle replicates. Sum or Mean. Default: -r Sum |
73 --iN <float> := Ignore TAs occurring at the given fraction of the N terminus. Default: --iN 0.0 | 72 --iN <float> := Ignore TAs occurring at the given fraction of the N terminus. Default: --iN 0.0 |
74 --iC <float> := Ignore TAs occurring at the given fraction of the C terminus. Default: --iC 0.0 | 73 --iC <float> := Ignore TAs occurring at the given fraction of the C terminus. Default: --iC 0.0 |
74 -n <string> := Determines which normalization method to use. Default: -n TTR | |
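The pre-processing options above (-m, -t, --iN, --iC) amount to simple filters over a gene's TA sites. A hedged Python sketch of their combined effect, with made-up coordinates and counts; TRANSIT's real implementation differs in detail:

```python
# Illustration of what the pre-processing flags do to one gene's TA sites.
# All numbers below are invented; this is not TRANSIT's actual code.

def filter_sites(sites, counts, gene_start, gene_end,
                 min_read=1, trim=1, iN=0.0, iC=0.0):
    length = gene_end - gene_start
    lo = gene_start + iN * length   # --iN: drop the N-terminal fraction
    hi = gene_end - iC * length     # --iC: drop the C-terminal fraction
    kept = []
    for pos, cnt in zip(sites, counts):
        if not (lo <= pos <= hi):
            continue
        # -m: counts below the threshold are treated as no insertion
        kept.append((pos, cnt if cnt >= min_read else 0))
    # -t: keep every t-th TA site
    return kept[::trim]

sites = [100, 150, 200, 250, 300, 350, 400]
counts = [1, 0, 5, 2, 0, 9, 1]
kept = filter_sites(sites, counts, 100, 400, min_read=2, iN=0.1, iC=0.1)
print(kept)  # [(150, 0), (200, 5), (250, 2), (300, 0), (350, 9)]
```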
75 | 75 |
76 | 76 |
77 - Samples: Gumbel uses Metropolis-Hastings (MH) to generate samples of posterior distributions. The default setting is to run the simulation for 10,000 iterations. This is usually enough to ensure convergence of the sampler and to provide accurate estimates of posterior probabilities. Fewer iterations may work, but at the risk of lower accuracy. | 77 - Samples: Gumbel uses Metropolis-Hastings (MH) to generate samples of posterior distributions. The default setting is to run the simulation for 10,000 iterations. This is usually enough to ensure convergence of the sampler and to provide accurate estimates of posterior probabilities. Fewer iterations may work, but at the risk of lower accuracy. |
78 - Burn-In: Because the MH sampler may not have stabilized in the first few iterations, a “burn-in” period is defined. Samples obtained during this “burn-in” period are discarded and do not count towards estimates. | 78 - Burn-In: Because the MH sampler may not have stabilized in the first few iterations, a “burn-in” period is defined. Samples obtained during this “burn-in” period are discarded and do not count towards estimates. |
79 - Trim: The MH sampler produces Markov samples that are correlated. This parameter dictates how many samples must be attempted for every sample obtained. Increasing this parameter will decrease the auto-correlation, at the cost of dramatically increasing the run-time. For most situations, this parameter should be left at the default of “1”. | 79 - Trim: The MH sampler produces Markov samples that are correlated. This parameter dictates how many samples must be attempted for every sample obtained. Increasing this parameter will decrease the auto-correlation, at the cost of dramatically increasing the run-time. For most situations, this parameter should be left at the default of “1”. |
80 - Minimum Read: The minimum read count that is considered a true read. Because the Gumbel method depends on determining gaps of TA sites lacking insertions, it may be susceptible to spurious reads (e.g. errors). The default value of 1 will consider all reads as true reads. A value of 2, for example, will ignore read counts of 1. | 80 - Minimum Read: The minimum read count that is considered a true read. Because the Gumbel method depends on determining gaps of TA sites lacking insertions, it may be susceptible to spurious reads (e.g. errors). The default value of 1 will consider all reads as true reads. A value of 2, for example, will ignore read counts of 1. |
81 - Replicates: Determines how to deal with replicates, by averaging or summing read-counts across datasets. This should not have an effect for the Gumbel method, aside from potentially affecting spurious reads. | 81 - Replicates: Determines how to deal with replicates, by averaging or summing read-counts across datasets. This should not have an effect for the Gumbel method, aside from potentially affecting spurious reads. |
82 - Normalisation: | |
83 - TTR (Default) : Trimmed Total Reads (TTR), normalized by the total read-counts (like totreads), but trims the top and bottom 5% of read-counts. This is the recommended normalization method for most cases, as it has the benefit of normalizing for differences in saturation in the context of resampling. | |
84 - nzmean : Normalizes datasets to have the same mean over the non-zero sites. | |
85 - totreads : Normalizes datasets by total read-counts, and scales them to have the same mean over all counts. | |
86 - zinfnb : Fits a zero-inflated negative binomial model, and then divides read-counts by the mean. The zero-inflated negative binomial model will treat some empty sites as belonging to the “true” negative binomial distribution responsible for read-counts while treating the others as “essential” (and thus not influencing its parameters). | |
87 - quantile : Normalizes datasets using the quantile normalization method described by Bolstad et al. (2003). In this normalization procedure, datasets are sorted, an empirical distribution is estimated as the mean across the sorted datasets at each site, and then the original (unsorted) datasets are assigned values from the empirical distribution based on their quantiles. | |
88 - betageom : Normalizes the datasets to fit an “ideal” Geometric distribution with a variable probability parameter p. Especially useful for datasets that contain a large skew. See Beta-Geometric Correction. | |
89 - nonorm : No normalization is performed. | |
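As an illustration of the simpler methods above, totreads and nzmean can be sketched in a few lines of Python (assuming numpy is available; this is not TRANSIT's actual code, and TTR additionally trims the top and bottom 5% of read-counts before totalling):

```python
import numpy as np

def totreads(datasets):
    # scale each dataset by its total read-count so all share the same mean
    data = np.array(datasets, dtype=float)
    totals = data.sum(axis=1)
    return data * (totals.mean() / totals)[:, None]

def nzmean(datasets):
    # scale each dataset so the mean over its non-zero sites is the same
    data = np.array(datasets, dtype=float)
    nzmeans = np.array([d[d > 0].mean() for d in data])
    return data * (nzmeans.mean() / nzmeans)[:, None]

# toy read-counts for two datasets over the same TA sites
a = [0, 10, 20, 0, 30]
b = [0, 40, 80, 0, 120]
norm = totreads([a, b])
print(norm.sum(axis=1))  # both datasets now have equal totals: [150. 150.]
```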
82 | 90 |
83 | 91 |
84 ------------------- | 92 ------------------- |
85 | 93 |
86 **Outputs** | 94 **Outputs** |