# HG changeset patch
# User iuc
# Date 1510776927 18000
# Node ID ff11d442feedce38938c803f83f92a24c8a641a3
# Parent  b5c5470d7c0936706bf5713f5b282b7b3528ad58
planemo upload for repository https://github.com/galaxyproject/tools-iuc/tree/master/tools/jbrowse commit 908f16ea4eb082227437dc93e06e8cb742f5a257

diff -r b5c5470d7c09 -r ff11d442feed all_fasta.loc.sample
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/all_fasta.loc.sample	Wed Nov 15 15:15:27 2017 -0500
@@ -0,0 +1,18 @@
+#This file lists the locations and dbkeys of all the fasta files
+#under the "genome" directory (a directory that contains a directory
+#for each build). The script extract_fasta.py will generate the file
+#all_fasta.loc. This file has the format (white space characters are
+#TAB characters):
+#
+#<unique_build_id>	<dbkey>	<display_name>	<file_path>
+#
+#So, all_fasta.loc could look something like this:
+#
+#apiMel3	apiMel3	Honeybee (Apis mellifera): apiMel3	/path/to/genome/apiMel3/apiMel3.fa
+#hg19canon	hg19	Human (Homo sapiens): hg19 Canonical	/path/to/genome/hg19/hg19canon.fa
+#hg19full	hg19	Human (Homo sapiens): hg19 Full	/path/to/genome/hg19/hg19full.fa
+#
+#Your all_fasta.loc file should contain an entry for each individual
+#fasta file. So there will be multiple fasta files for each build,
+#such as with hg19 above.
+#
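For reference, a filled-in all_fasta.loc entry for the Merlin test genome used elsewhere in this patch might look like the following (a hypothetical example; the four fields are TAB-separated and the path should be absolute on the Galaxy server):

    merlin	merlin	Merlin: merlin	/path/to/genome/merlin/merlin.fa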
""" -def blastxml2gff3(blastxml, min_gap=3, trim=False, trim_end=False): +def blastxml2gff3(blastxml, min_gap=3, trim=False, trim_end=False, include_seq=False): from Bio.Blast import NCBIXML from Bio.Seq import Seq from Bio.SeqRecord import SeqRecord from Bio.SeqFeature import SeqFeature, FeatureLocation blast_records = NCBIXML.parse(blastxml) - records = [] - for record in blast_records: + for idx_record, record in enumerate(blast_records): # http://www.sequenceontology.org/browser/release_2.4/term/SO:0000343 match_type = { # Currently we can only handle BLASTN, BLASTP 'BLASTN': 'nucleotide_match', 'BLASTP': 'protein_match', }.get(record.application, 'match') - rec = SeqRecord(Seq("ACTG"), id=record.query) - for hit in record.alignments: - for hsp in hit.hsps: + recid = record.query + if ' ' in recid: + recid = recid[0:recid.index(' ')] + + rec = SeqRecord(Seq("ACTG"), id=recid) + for idx_hit, hit in enumerate(record.alignments): + for idx_hsp, hsp in enumerate(hit.hsps): qualifiers = { + "ID": 'b2g.%s.%s.%s' % (idx_record, idx_hit, idx_hsp), "source": "blast", "score": hsp.expect, "accession": hit.accession, "hit_id": hit.hit_id, "length": hit.length, - "hit_titles": hit.title.split(' >') + "hit_titles": hit.title.split(' >'), } + if include_seq: + qualifiers.update({ + 'blast_qseq': hsp.query, + 'blast_sseq': hsp.sbjct, + 'blast_mseq': hsp.match, + }) + + for prop in ('score', 'bits', 'identities', 'positives', + 'gaps', 'align_length', 'strand', 'frame', + 'query_start', 'query_end', 'sbjct_start', + 'sbjct_end'): + qualifiers['blast_' + prop] = getattr(hsp, prop, None) + desc = hit.title.split(' >')[0] qualifiers['description'] = desc[desc.index(' '):] @@ -62,14 +73,11 @@ # protein. parent_match_end = hsp.query_start + hit.length + hsp.query.count('-') - # However, if the user requests that we trim the feature, then - # we need to cut the ``match`` start to 0 to match the parent feature. - # We'll also need to cut the end to match the query's end. It (maybe) - # should be the feature end? But we don't have access to that data, so - # We settle for this. + # If we trim the left end, we need to trim without losing information. 
+ used_parent_match_start = parent_match_start if trim: if parent_match_start < 1: - parent_match_start = 0 + used_parent_match_start = 0 if trim or trim_end: if parent_match_end > hsp.query_end: @@ -77,7 +85,7 @@ # The ``match`` feature will hold one or more ``match_part``s top_feature = SeqFeature( - FeatureLocation(parent_match_start, parent_match_end), + FeatureLocation(used_parent_match_start, parent_match_end), type=match_type, strand=0, qualifiers=qualifiers ) @@ -87,19 +95,15 @@ "source": "blast", } top_feature.sub_features = [] - for start, end, cigar in generate_parts(hsp.query, hsp.match, - hsp.sbjct, - ignore_under=min_gap): + for idx_part, (start, end, cigar) in \ + enumerate(generate_parts(hsp.query, hsp.match, + hsp.sbjct, + ignore_under=min_gap)): part_qualifiers['Gap'] = cigar - part_qualifiers['ID'] = hit.hit_id + part_qualifiers['ID'] = qualifiers['ID'] + ('.%s' % idx_part) - if trim: - # If trimming, then we start relative to the - # match's start - match_part_start = parent_match_start + start - else: - # Otherwise, we have to account for the subject start's location - match_part_start = parent_match_start + hsp.sbjct_start + start - 1 + # Otherwise, we have to account for the subject start's location + match_part_start = parent_match_start + hsp.sbjct_start + start - 1 # We used to use hsp.align_length here, but that includes # gaps in the parent sequence @@ -117,8 +121,7 @@ rec.features.append(top_feature) rec.annotations = {} - records.append(rec) - return records + yield rec def __remove_query_gaps(query, match, subject): @@ -253,11 +256,13 @@ if __name__ == '__main__': parser = argparse.ArgumentParser(description='Convert Blast XML to gapped GFF3', epilog='') - parser.add_argument('blastxml', type=open, help='Blast XML Output') + parser.add_argument('blastxml', type=argparse.FileType("r"), help='Blast XML Output') parser.add_argument('--min_gap', type=int, help='Maximum gap size before generating a new match_part', default=3) parser.add_argument('--trim', action='store_true', help='Trim blast hits to be only as long as the parent feature') parser.add_argument('--trim_end', action='store_true', help='Cut blast results off at end of gene') + parser.add_argument('--include_seq', action='store_true', help='Include sequence') args = parser.parse_args() - result = blastxml2gff3(**vars(args)) - GFF.write(result, sys.stdout) + for rec in blastxml2gff3(**vars(args)): + if len(rec.features): + GFF.write([rec], sys.stdout) diff -r b5c5470d7c09 -r ff11d442feed gff3_rebase.py --- a/gff3_rebase.py Wed Sep 13 13:07:20 2017 -0400 +++ b/gff3_rebase.py Wed Nov 15 15:15:27 2017 -0500 @@ -83,18 +83,25 @@ def __get_features(child, interpro=False): child_features = {} for rec in GFF.parse(child): + # Only top level for feature in rec.features: + # Get the record id as parent_feature_id (since this is how it will be during remapping) parent_feature_id = rec.id + # If it's an interpro specific gff3 file if interpro: + # Then we ignore polypeptide features as they're useless if feature.type == 'polypeptide': continue - if '_' in parent_feature_id: - parent_feature_id = parent_feature_id[parent_feature_id.index('_') + 1:] + # If there's an underscore, we strip up to that underscore? + # I do not know the rationale for this, removing. 
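Because blastxml2gff3() is now a generator that yields one SeqRecord per BLAST query instead of building a full list, records can be streamed to GFF3 as they are parsed. A minimal sketch of driving the converter from Python, assuming a local file named blast.xml (the filename is illustrative):

    import sys
    from BCBio import GFF
    from blastxml_to_gapped_gff3 import blastxml2gff3

    with open('blast.xml') as handle:
        # Each yielded record holds one query's match/match_part features;
        # skip queries with no hits so empty records are not written.
        for rec in blastxml2gff3(handle, min_gap=3, include_seq=True):
            if len(rec.features):
                GFF.write([rec], sys.stdout)

The equivalent invocation through the new argparse options would be:

    python blastxml_to_gapped_gff3.py --min_gap 3 --include_seq blast.xml > blast.gff3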
diff -r b5c5470d7c09 -r ff11d442feed gff3_rebase.py
--- a/gff3_rebase.py	Wed Sep 13 13:07:20 2017 -0400
+++ b/gff3_rebase.py	Wed Nov 15 15:15:27 2017 -0500
@@ -83,18 +83,25 @@
 def __get_features(child, interpro=False):
     child_features = {}
     for rec in GFF.parse(child):
+        # Only top level
         for feature in rec.features:
+            # Get the record id as parent_feature_id (since this is how it will be during remapping)
             parent_feature_id = rec.id
+            # If it's an interpro specific gff3 file
             if interpro:
+                # Then we ignore polypeptide features as they're useless
                 if feature.type == 'polypeptide':
                     continue
-                if '_' in parent_feature_id:
-                    parent_feature_id = parent_feature_id[parent_feature_id.index('_') + 1:]
+                # If there's an underscore, we strip up to that underscore?
+                # I do not know the rationale for this, removing.
+                # if '_' in parent_feature_id:
+                #     parent_feature_id = parent_feature_id[parent_feature_id.index('_') + 1:]
 
             try:
                 child_features[parent_feature_id].append(feature)
             except KeyError:
                 child_features[parent_feature_id] = [feature]
+    # Keep a list of feature objects keyed by parent record id
     return child_features
@@ -132,23 +139,29 @@
             __update_feature_location(subfeature, parent, protein2dna)
 
 
-def rebase(parent, child, interpro=False, protein2dna=False):
+def rebase(parent, child, interpro=False, protein2dna=False, map_by='ID'):
+    # get all of the features we will be re-mapping in a dictionary, keyed by parent feature ID
     child_features = __get_features(child, interpro=interpro)
 
     for rec in GFF.parse(parent):
         replacement_features = []
         for feature in feature_lambda(
                 rec.features,
+                # Filter features in the parent genome by those that are
+                # "interesting", i.e. have results in child_features array.
+                # Probably an unnecessary optimisation.
                 feature_test_qual_value,
                 {
-                    'qualifier': 'ID',
+                    'qualifier': map_by,
                    'attribute_list': child_features.keys(),
                 },
                 subfeatures=False):
 
-            new_subfeatures = child_features[feature.id]
-            fixed_subfeatures = []
-            for x in new_subfeatures:
+            # Features which will be re-mapped
+            to_remap = child_features[feature.id]
+            # TODO: update starts
+            fixed_features = []
+            for x in to_remap:
                 # Then update the location of the actual feature
                 __update_feature_location(x, feature, protein2dna)
 
@@ -156,11 +169,11 @@
                 for y in ('status', 'Target'):
                     try:
                         del x.qualifiers[y]
-                    except:
+                    except Exception:
                         pass
 
-                fixed_subfeatures.append(x)
-            replacement_features.extend(fixed_subfeatures)
+                fixed_features.append(x)
+            replacement_features.extend(fixed_features)
         # We do this so we don't include the original set of features that we
         # were rebasing against in our result.
         rec.features = replacement_features
@@ -176,5 +189,6 @@
                         help='Interpro specific modifications')
     parser.add_argument('--protein2dna', action='store_true',
                         help='Map protein translated results to original DNA data')
+    parser.add_argument('--map_by', help='Map by key', default='ID')
     args = parser.parse_args()
     rebase(**vars(args))
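The new --map_by option generalises the join between parent and child annotations: child features are attached to whichever parent feature carries a matching value in the named qualifier, instead of always matching on ID. A hedged usage sketch, assuming the script's positional arguments are the parent and child GFF3 files as suggested by the rebase(parent, child, ...) signature:

    python gff3_rebase.py parent_genes.gff3 interpro_results.gff3 \
        --interpro --protein2dna --map_by ID > rebased.gff3

With --protein2dna, child coordinates expressed against a protein are rescaled onto the parent DNA inside __update_feature_location (amino-acid offsets become nucleotide offsets, i.e. roughly multiplied by three).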
diff -r b5c5470d7c09 -r ff11d442feed jbrowse.py
--- a/jbrowse.py	Wed Sep 13 13:07:20 2017 -0400
+++ b/jbrowse.py	Wed Nov 15 15:15:27 2017 -0500
@@ -1,7 +1,8 @@
 #!/usr/bin/env python
 import argparse
-import codecs
+import binascii
 import copy
+import datetime
 import hashlib
 import json
 import logging
@@ -14,9 +15,10 @@
 from collections import defaultdict
 
 from Bio.Data import CodonTable
-
 logging.basicConfig(level=logging.INFO)
 log = logging.getLogger('jbrowse')
+TODAY = datetime.datetime.now().strftime("%Y-%m-%d")
+GALAXY_INFRASTRUCTURE_URL = None
 
 
 class ColorScaling(object):
@@ -63,6 +65,7 @@
         var color = ({user_spec_color} || search_up(feature, 'color') || search_down(feature, 'color') || {auto_gen_color});
         var score = (search_up(feature, 'score') || search_down(feature, 'score'));
         {opacity}
+        if(score === undefined){{ opacity = 1; }}
         var result = /^#?([a-f\d]{{2}})([a-f\d]{{2}})([a-f\d]{{2}})$/i.exec(color);
         var red = parseInt(result[1], 16);
         var green = parseInt(result[2], 16);
@@ -82,11 +85,11 @@
         """,
         'blast': """
             var opacity = 0;
-            if(score == 0.0) {
+            if(score == 0.0) {{
                 opacity = 1;
-            } else{
+            }} else {{
                 opacity = (20 - Math.log10(score)) / 180;
-            }
+            }}
         """
     }
@@ -128,7 +131,7 @@
     def rgb_from_hex(self, hexstr):
         # http://stackoverflow.com/questions/4296249/how-do-i-convert-a-hex-triplet-to-an-rgb-tuple-and-back
-        return struct.unpack('BBB', codecs.decode(hexstr, 'hex'))
+        return struct.unpack('BBB', binascii.unhexlify(hexstr))
 
     def min_max_gff(self, gff_file):
         min_val = None
@@ -285,6 +288,44 @@
 INSTALLED_TO = os.path.dirname(os.path.realpath(__file__))
 
 
+def metadata_from_node(node):
+    metadata = {}
+    try:
+        if len(node.findall('dataset')) != 1:
+            # exit early
+            return metadata
+    except Exception:
+        return {}
+
+    for (key, value) in node.findall('dataset')[0].attrib.items():
+        metadata['dataset_%s' % key] = value
+
+    for (key, value) in node.findall('history')[0].attrib.items():
+        metadata['history_%s' % key] = value
+
+    for (key, value) in node.findall('metadata')[0].attrib.items():
+        metadata['metadata_%s' % key] = value
+
+    for (key, value) in node.findall('tool')[0].attrib.items():
+        metadata['tool_%s' % key] = value
+
+    # Additional Mappings applied:
+    metadata['dataset_edam_format'] = '<a target="_blank" href="https://edamontology.github.io/edam-browser/#{0}">{1}</a>'.format(metadata['dataset_edam_format'], metadata['dataset_file_ext'])
+    metadata['history_user_email'] = '<a href="mailto:{0}">{0}</a>'.format(metadata['history_user_email'])
+    metadata['history_display_name'] = '<a target="_blank" href="{galaxy}/history/view/{encoded_hist_id}">{hist_name}</a>'.format(
+        galaxy=GALAXY_INFRASTRUCTURE_URL,
+        encoded_hist_id=metadata['history_id'],
+        hist_name=metadata['history_display_name']
+    )
+    metadata['tool_tool'] = '<a target="_blank" href="{galaxy}/datasets/{encoded_id}/show_params">{tool_id}</a>'.format(
+        galaxy=GALAXY_INFRASTRUCTURE_URL,
+        encoded_id=metadata['dataset_id'],
+        tool_id=metadata['tool_tool_id'],
+        tool_version=metadata['tool_tool_version'],
+    )
+    return metadata
+
+
 class JbrowseConnector(object):
 
     def __init__(self, jbrowse, outdir, genomes, standalone=False, gencode=1):
@@ -312,6 +353,12 @@
             # Ignore if the folder exists
             pass
 
+        try:
+            os.makedirs(os.path.join(self.outdir, 'data', 'raw'))
+        except OSError:
+            # Ignore if the folder exists
+            pass
+
         self.process_genomes()
         self.update_gencode()
@@ -338,21 +385,20 @@
         return os.path.realpath(os.path.join(self.jbrowse, 'bin', command))
 
     def process_genomes(self):
-        for genome_path in self.genome_paths:
+        for genome_node in self.genome_paths:
+            # TODO: Waiting on https://github.com/GMOD/jbrowse/pull/884
             self.subprocess_check_call([
                 'perl', self._jbrowse_bin('prepare-refseqs.pl'),
-                '--fasta', genome_path])
+                '--fasta', genome_node['path']])
 
     def generate_names(self):
         # Generate names
-
         args = [
             'perl', self._jbrowse_bin('generate-names.pl'),
             '--hashBits', '16'
         ]
 
         tracks = ','.join(self.tracksToIndex)
-
         if tracks:
             args += ['--tracks', tracks]
         else:
@@ -362,7 +408,6 @@
         self.subprocess_check_call(args)
 
     def _add_json(self, json_data):
-
         cmd = [
             'perl', self._jbrowse_bin('add-json.pl'),
             json.dumps(json_data),
@@ -421,7 +466,7 @@
             '--key', trackData['key'],
             '--clientConfig', json.dumps(clientConfig),
             '--config', json.dumps(config),
-            '--trackType', 'JBrowse/View/Track/CanvasFeatures'
+            '--trackType', 'BlastView/View/Track/CanvasFeatures'
         ]
 
         # className in --clientConfig is ignored, it needs to be set with --className
@@ -455,6 +500,8 @@
         else:
             trackData['autoscale'] = wiggleOpts.get('autoscale', 'local')
 
+        trackData['scale'] = wiggleOpts['scale']
+
         self._add_track_json(trackData)
 
     def add_bam(self, data, trackData, bamOpts, bam_index=None, **kwargs):
@@ -506,7 +553,7 @@
         })
         self._add_track_json(trackData)
 
-    def add_features(self, data, format, trackData, gffOpts, **kwargs):
+    def add_features(self, data, format, trackData, gffOpts, metadata=None, **kwargs):
         cmd = [
             'perl', self._jbrowse_bin('flatfile-to-json.pl'),
             self.TN_TABLE.get(format, 'gff'),
@@ -549,6 +596,8 @@
                 '--trackType', gffOpts['trackType']
             ]
 
+        if metadata:
+            config.update({'metadata': metadata})
         cmd.extend(['--config', json.dumps(config)])
 
         self.subprocess_check_call(cmd)
@@ -556,14 +605,32 @@
         if gffOpts.get('index', 'false') == 'true':
             self.tracksToIndex.append("%s" % trackData['label'])
 
+    def add_rest(self, url, trackData, **kwargs):
+        data = {
+            "label": trackData['label'],
+            "key": trackData['key'],
+            "category": trackData['category'],
+            "type": "JBrowse/View/Track/HTMLFeatures",
+            "storeClass": "JBrowse/Store/SeqFeature/REST",
+            "baseUrl": url,
+            "query": {
+                "organism": "tyrannosaurus"
+            }
+        }
+        self._add_track_json(data)
+
     def process_annotations(self, track):
+        category = track['category'].replace('__pd__date__pd__', TODAY)
         outputTrackConfig = {
             'style': {
                 'label': track['style'].get('label', 'description'),
                 'className': track['style'].get('className', 'feature'),
                 'description': track['style'].get('description', ''),
             },
-            'category': track['category'],
+            'overridePlugins': track['style'].get('overridePlugins', False) == 'True',
+            'overrideDraggable': track['style'].get('overrideDraggable', False) == 'True',
+            'maxHeight': track['style'].get('maxHeight', '600'),
+            'category': category,
         }
 
         mapped_chars = {
@@ -579,15 +646,26 @@
             '#': '__pd__'
         }
 
-        for i, (dataset_path, dataset_ext, track_human_label) in enumerate(track['trackfiles']):
+        for i, (dataset_path, dataset_ext, track_human_label, extra_metadata) in enumerate(track['trackfiles']):
             # Unsanitize labels (element_identifiers are always sanitized by Galaxy)
             for key, value in mapped_chars.items():
                 track_human_label = track_human_label.replace(value, key)
 
-            log.info('Processing %s / %s', track['category'], track_human_label)
+            log.info('Processing %s / %s', category, track_human_label)
             outputTrackConfig['key'] = track_human_label
-            hashData = [dataset_path, track_human_label, track['category']]
-            outputTrackConfig['label'] = hashlib.md5('|'.join(hashData).encode('utf-8')).hexdigest() + '_%s' % i
+            # We add extra data to hash for the case of REST + SPARQL.
+            try:
+                rest_url = track['conf']['options']['url']
+            except KeyError:
+                rest_url = ''
+
+            # I chose to use track['category'] instead of 'category' here. This
+            # is intentional. This way re-running the tool on a different date
+            # will not generate different hashes and make comparison of outputs
+            # much simpler.
+            hashData = [dataset_path, track_human_label, track['category'], rest_url]
+            hashData = '|'.join(hashData).encode('utf-8')
+            outputTrackConfig['label'] = hashlib.md5(hashData).hexdigest() + '_%s' % i
 
             # Colour parsing is complex due to different track types having
             # different colour options.
@@ -608,10 +686,10 @@
             # import sys; sys.exit()
             if dataset_ext in ('gff', 'gff3', 'bed'):
                 self.add_features(dataset_path, dataset_ext, outputTrackConfig,
-                                  track['conf']['options']['gff'])
+                                  track['conf']['options']['gff'], metadata=extra_metadata)
             elif dataset_ext == 'bigwig':
                 self.add_bigwig(dataset_path, outputTrackConfig,
-                                track['conf']['options']['wiggle'])
+                                track['conf']['options']['wiggle'], metadata=extra_metadata)
             elif dataset_ext == 'bam':
                 real_indexes = track['conf']['options']['pileup']['bam_indices']['bam_index']
                 if not isinstance(real_indexes, list):
@@ -626,11 +704,15 @@
 
                 self.add_bam(dataset_path, outputTrackConfig,
                              track['conf']['options']['pileup'],
-                             bam_index=real_indexes[i])
+                             bam_index=real_indexes[i], metadata=extra_metadata)
             elif dataset_ext == 'blastxml':
-                self.add_blastxml(dataset_path, outputTrackConfig, track['conf']['options']['blast'])
+                self.add_blastxml(dataset_path, outputTrackConfig, track['conf']['options']['blast'], metadata=extra_metadata)
             elif dataset_ext == 'vcf':
-                self.add_vcf(dataset_path, outputTrackConfig)
+                self.add_vcf(dataset_path, outputTrackConfig, metadata=extra_metadata)
+            elif dataset_ext == 'rest':
+                self.add_rest(track['conf']['options']['url'], outputTrackConfig, metadata=extra_metadata)
+            else:
+                log.warn('Do not know how to handle %s', dataset_ext)
 
             # Return non-human label for use in other fields
             yield outputTrackConfig['label']
@@ -659,10 +741,65 @@
         generalData['show_overview'] = (data['general']['show_overview'] == 'true')
         generalData['show_menu'] = (data['general']['show_menu'] == 'true')
         generalData['hideGenomeOptions'] = (data['general']['hideGenomeOptions'] == 'true')
+        generalData['plugins'] = data['plugins']
 
         viz_data.update(generalData)
         self._add_json(viz_data)
 
+        if 'GCContent' in data['plugins_python']:
+            self._add_track_json({
+                "storeClass": "JBrowse/Store/SeqFeature/SequenceChunks",
+                "type": "GCContent/View/Track/GCContentXY",
+                "label": "GCContentXY",
+                "urlTemplate": "seq/{refseq_dirpath}/{refseq}-",
+                "bicolor_pivot": 0.5
+                # TODO: Expose params for everyone.
+            })
+
+        if 'ComboTrackSelector' in data['plugins_python']:
+            with open(os.path.join(self.outdir, 'data', 'trackList.json'), 'r') as handle:
+                trackListJson = json.load(handle)
+                trackListJson.update({
+                    "trackSelector": {
+                        "renameFacets": {
+                            "tool_tool": "Tool ID",
+                            "tool_tool_id": "Tool ID",
+                            "tool_tool_version": "Tool Version",
+                            "dataset_edam_format": "EDAM",
+                            "dataset_size": "Size",
+                            "history_display_name": "History Name",
+                            "history_user_email": "Owner",
+                            "metadata_dbkey": "Dbkey",
+                        },
+                        "displayColumns": [
+                            "key",
+                            "tool_tool",
+                            "tool_tool_version",
+                            "dataset_edam_format",
+                            "dataset_size",
+                            "history_display_name",
+                            "history_user_email",
+                            "metadata_dbkey",
+                        ],
+                        "type": "Faceted",
+                        "title": ["Galaxy Metadata"],
+                        "escapeHTMLInData": False
+                    },
+                    "trackMetadata": {
+                        "indexFacets": [
+                            "category",
+                            "key",
+                            "tool_tool_id",
+                            "tool_tool_version",
+                            "dataset_edam_format",
+                            "history_user_email",
+                            "history_display_name"
+                        ]
+                    }
+                })
+            with open(os.path.join(self.outdir, 'data', 'trackList2.json'), 'w') as handle:
+                json.dump(trackListJson, handle)
+
     def clone_jbrowse(self, jbrowse_dir, destination):
         """Clone a JBrowse directory into a destination directory.
         """
@@ -677,9 +814,14 @@
 
         # http://unix.stackexchange.com/a/38691/22785
         # JBrowse releases come with some broken symlinks
-        cmd = ['find', destination, '-type', 'l', '-xtype', 'l', '-exec', 'rm', "'{}'", '+']
+        cmd = ['find', destination, '-type', 'l', '-xtype', 'l']
         log.debug(' '.join(cmd))
-        subprocess.check_call(cmd)
+        symlinks = subprocess.check_output(cmd)
+        for i in symlinks.decode('utf-8').splitlines():
+            try:
+                os.unlink(i)
+            except OSError:
+                pass
 
 
 if __name__ == '__main__':
@@ -689,6 +831,7 @@
     parser.add_argument('--jbrowse', help='Folder containing a jbrowse release')
     parser.add_argument('--outdir', help='Output directory', default='out')
     parser.add_argument('--standalone', help='Standalone mode includes a copy of JBrowse', action='store_true')
+    parser.add_argument('--version', '-V', action='version', version="%(prog)s 0.7.0")
     args = parser.parse_args()
 
     tree = ET.parse(args.xml.name)
@@ -697,7 +840,13 @@
     jc = JbrowseConnector(
         jbrowse=args.jbrowse,
         outdir=args.outdir,
-        genomes=[os.path.realpath(x.text) for x in root.findall('metadata/genomes/genome')],
+        genomes=[
+            {
+                'path': os.path.realpath(x.attrib['path']),
+                'meta': metadata_from_node(x.find('metadata'))
+            }
+            for x in root.findall('metadata/genomes/genome')
+        ],
         standalone=args.standalone,
         gencode=root.find('metadata/gencode').text
     )
@@ -719,21 +868,74 @@
         'show_overview': root.find('metadata/general/show_overview').text,
         'show_menu': root.find('metadata/general/show_menu').text,
         'hideGenomeOptions': root.find('metadata/general/hideGenomeOptions').text,
-        }
+        },
+        'plugins': [{
+            'location': 'https://cdn.rawgit.com/TAMU-CPT/blastview/97572a21b7f011c2b4d9a0b5af40e292d694cbef/',
+            'name': 'BlastView'
+        }],
+        'plugins_python': ['BlastView'],
     }
+
+    plugins = root.find('plugins').attrib
+    if plugins['GCContent'] == 'True':
+        extra_data['plugins_python'].append('GCContent')
+        extra_data['plugins'].append({
+            'location': 'https://cdn.rawgit.com/elsiklab/gccontent/5c8b0582ecebf9edf684c76af8075fb3d30ec3fa/',
+            'name': 'GCContent'
+        })
+
+    if plugins['Bookmarks'] == 'True':
+        extra_data['plugins'].append({
+            'location': 'https://cdn.rawgit.com/TAMU-CPT/bookmarks-jbrowse/5242694120274c86e1ccd5cb0e5e943e78f82393/',
+            'name': 'Bookmarks'
+        })
+
+    if plugins['ComboTrackSelector'] == 'True':
+        extra_data['plugins_python'].append('ComboTrackSelector')
+        extra_data['plugins'].append({
+            'location': 'https://cdn.rawgit.com/Arabidopsis-Information-Portal/ComboTrackSelector/52403928d5ccbe2e3a86b0fa5eb8e61c0f2e2f57',
+            'icon': 'https://galaxyproject.org/images/logos/galaxy-icon-square.png',
+            'name': 'ComboTrackSelector'
+        })
+
+    if plugins['theme'] == 'Minimalist':
+        extra_data['plugins'].append({
+            'location': 'https://cdn.rawgit.com/erasche/jbrowse-minimalist-theme/d698718442da306cf87f033c72ddb745f3077775/',
+            'name': 'MinimalistTheme'
+        })
+    elif plugins['theme'] == 'Dark':
+        extra_data['plugins'].append({
+            'location': 'https://cdn.rawgit.com/erasche/jbrowse-dark-theme/689eceb7e33bbc1b9b15518d45a5a79b2e5d0a26/',
+            'name': 'DarkTheme'
+        })
+
+    GALAXY_INFRASTRUCTURE_URL = root.find('metadata/galaxyUrl').text
+    # Sometimes this comes as `localhost` without a protocol
+    if not GALAXY_INFRASTRUCTURE_URL.startswith('http'):
+        # so we'll prepend `http://` and hope for the best. Requests *should*
+        # be GET and not POST so it should redirect OK
+        GALAXY_INFRASTRUCTURE_URL = 'http://' + GALAXY_INFRASTRUCTURE_URL
+
     for track in root.findall('tracks/track'):
         track_conf = {}
-        track_conf['trackfiles'] = [
-            (os.path.realpath(x.attrib['path']), x.attrib['ext'], x.attrib['label'])
-            for x in track.findall('files/trackFile')
-        ]
+        track_conf['trackfiles'] = []
+
+        for x in track.findall('files/trackFile'):
+            metadata = metadata_from_node(x.find('metadata'))
+
+            track_conf['trackfiles'].append((
+                os.path.realpath(x.attrib['path']),
+                x.attrib['ext'],
+                x.attrib['label'],
+                metadata
+            ))
 
         track_conf['category'] = track.attrib['cat']
         track_conf['format'] = track.attrib['format']
         try:
             # Only pertains to gff3 + blastxml. TODO?
             track_conf['style'] = {t.tag: t.text for t in track.find('options/style')}
-        except TypeError:
+        except TypeError as te:
             track_conf['style'] = {}
             pass
         track_conf['conf'] = etree_to_dict(track.find('options'))
@@ -743,4 +945,3 @@
         extra_data['visibility'][track.attrib.get('visibility', 'default_off')].append(key)
 
     jc.add_final_data(extra_data)
-    jc.generate_names()
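To make the metadata plumbing concrete: given a <metadata> element of the shape below (a hypothetical node; attribute names are chosen to match the keys that metadata_from_node() and the ComboTrackSelector facets above expect):

    <metadata>
      <dataset id="f2db41e1fa331b3e" hid="1" size="1.4 MB"
               edam_format="format_1975" file_ext="gff3" />
      <history id="df7a1f0c02a5b08e" display_name="Analysis history"
               user_email="jane@example.org" />
      <metadata dbkey="merlin" />
      <tool tool_id="example_tool" tool_version="1.0.0" />
    </metadata>

the function flattens every attribute into prefixed keys (dataset_id, history_user_email, metadata_dbkey, tool_tool_version, ...) and then rewrites dataset_edam_format, history_display_name and tool_tool into HTML links that point back into the Galaxy instance via GALAXY_INFRASTRUCTURE_URL.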
diff -r b5c5470d7c09 -r ff11d442feed jbrowse.xml
--- a/jbrowse.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/jbrowse.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -5,6 +5,7 @@
+ python jbrowse.py --version
 ${reference_genome.genomes.fields.path}
 #else
 #for $genome in $reference_genome.genomes:
- $genome
+ + + + + + + +
 #end for
 #end if
@@ -91,24 +114,55 @@
 ${jbgen.show_menu}
 ${jbgen.hideGenomeOptions}
+ ${__app__.config.galaxy_infrastructure_url}
 #for $tg in $track_groups:
 #for $track in $tg.data_tracks:
+ #if $track.data_format.data_format_select == "rest":
+ ${track.data_format.url}
+ #else:
 #for $dataset in $track.data_format.annotation:
- + + + + + + + +
 #end for
 #if str($track.data_format.data_format_select) == "gene_calls" or str($track.data_format.data_format_select) == "blast":
 #if str($track.data_format.jbcolor_scale.color_score.color_score_select) == "none":
@@ -177,6 +231,7 @@
 ${track.data_format.scaling.minimum}
 ${track.data_format.scaling.maximum}
 #end if
+ ${track.data_format.scale_select2}
 ## Wiggle tracks need special color config
 #if str($track.data_format.jbcolor.color.color_select) != "automatic":
@@ -247,11 +302,17 @@
 #end if
+ #end if
 #end for
 #end for
- -]]> + +]]>
@@ -284,7 +345,6 @@
 -
@@ -325,15 +385,16 @@
 name="category"
 type="text"
 value="Default"
- help="Organise your tracks into Categories for a nicer end-user experience" optional="False"/>
+ help="Organise your tracks into Categories for a nicer end-user experience. You can use #date# and it will be replaced with the current date in 'yyyy-mm-dd' format, which is very useful for repeatedly updating a JBrowse instance when member databases / underlying tool versions are updated." optional="False"/>
- + +
@@ -377,7 +438,7 @@
- + +
@@ -466,7 +528,7 @@
 truevalue="true"
 falsevalue="false" />
- +
@@ -481,9 +543,16 @@
 type="integer"
 value="100" />
+ + + + + + + - - - + + +
@@ -219,9 +264,10 @@
+ token_height="10px"
+ token_maxheight="600">
+
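For orientation, the galaxy XML document that jbrowse.py consumes has roughly the following shape (element and attribute names are taken from the parsing calls in jbrowse.py above, e.g. root.findall('metadata/genomes/genome'), root.find('plugins').attrib and track.findall('files/trackFile'), while the root tag and the attribute values here are illustrative assumptions):

    <root>
      <metadata>
        <genomes>
          <genome path="/path/to/merlin.fa">
            <metadata>...</metadata>
          </genome>
        </genomes>
        <gencode>11</gencode>
        <general>...</general>
        <galaxyUrl>http://localhost:8080</galaxyUrl>
      </metadata>
      <plugins GCContent="False" Bookmarks="False" ComboTrackSelector="False" theme="Default"/>
      <tracks>
        <track cat="Default" format="gff3" visibility="default_on">
          <files>
            <trackFile path="test-data/gff3/1.gff" ext="gff3" label="GFF3 Track">
              <metadata>...</metadata>
            </trackFile>
          </files>
          <options>
            <style>...</style>
          </options>
        </track>
      </tracks>
    </root>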
diff -r b5c5470d7c09 -r ff11d442feed test-data/bam/test.xml
--- a/test-data/bam/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/bam/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -5,6 +5,17 @@
 test-data/merlin.fa
+ + + 40 + true + + true + true + false + true + false +
diff -r b5c5470d7c09 -r ff11d442feed test-data/blastxml/test.xml
--- a/test-data/blastxml/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/blastxml/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -5,6 +5,17 @@
 test-data/merlin.fa
+ + + 40 + true + + true + true + false + true + false +
diff -r b5c5470d7c09 -r ff11d442feed test-data/bw/data.bw
Binary file test-data/bw/data.bw has changed
diff -r b5c5470d7c09 -r ff11d442feed test-data/bw/test.xml
--- a/test-data/bw/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/bw/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -5,6 +5,17 @@
 test-data/merlin.fa
+ + + 40 + true + + true + true + false + true + false +
@@ -20,6 +31,7 @@
 __auto__
 __auto__
 zero
+ linear
@@ -36,6 +48,7 @@
 __auto__
 __auto__
 zero
+ linear
@@ -54,6 +67,7 @@
 __auto__
 __auto__
 zero
+ linear
@@ -74,6 +88,7 @@
 __auto__
 __auto__
 mean
+ linear
@@ -91,6 +106,7 @@
 #0000ff
 #ff0000
 mean
+ linear
@@ -108,6 +124,7 @@
 #ff0000
 #0000ff
 mean
+ log
@@ -125,6 +142,7 @@
 #0000ff
 #ff0000
 100
+ linear
diff -r b5c5470d7c09 -r ff11d442feed test-data/gencode/test-1.xml
--- a/test-data/gencode/test-1.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/gencode/test-1.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -3,7 +3,27 @@
 1
- /tmp/tmpPJZIQf/files/000/dataset_1.dat
+ + + + + + + +
@@ -17,7 +37,14 @@
 true
 false
+ http://localhost:8080
+
diff -r b5c5470d7c09 -r ff11d442feed test-data/gencode/test.xml
--- a/test-data/gencode/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/gencode/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -3,7 +3,27 @@
 11
- /tmp/tmps5cL_a/files/000/dataset_3.dat
+ + + + + + + +
@@ -17,7 +37,14 @@
 true
 false
+ http://localhost:8080
+
diff -r b5c5470d7c09 -r ff11d442feed test-data/gff3/test.xml
--- a/test-data/gff3/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/gff3/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -3,7 +3,27 @@
 11
- test-data/merlin.fa
+ + + + + + + +
@@ -17,22 +37,122 @@
 true
 false
+ http://localhost:8080
- - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
 ignore
@@ -51,15 +171,42 @@
- + + + + + + + +
 ignore
@@ -78,15 +225,42 @@
- + + + + + + + +
 score
@@ -111,15 +285,42 @@
- + + + + + + + +
 score
@@ -144,15 +345,42 @@
- + + + + + + + +
 score
@@ -179,15 +407,42 @@
- + + + + + + + +
 score
@@ -214,15 +469,42 @@
- + + + + + + + +
 ignore
@@ -241,15 +523,42 @@
- + + + + + + + +
 ignore
@@ -268,4 +577,10 @@
+
diff -r b5c5470d7c09 -r ff11d442feed test-data/menus/test.xml
--- a/test-data/menus/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/menus/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -3,7 +3,27 @@
 11
- /tmp/tmplFZ5li/files/000/dataset_14.dat
+ + + + + + + +
@@ -17,19 +37,47 @@
 true
 false
+ http://localhost:8080
- + + + + + + + +
 ignore
@@ -62,15 +110,42 @@
- + + + + + + + +
 ignore
@@ -88,4 +163,10 @@
+
diff -r b5c5470d7c09 -r ff11d442feed test-data/track_config/test.xml
--- a/test-data/track_config/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/track_config/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -3,7 +3,27 @@
 11
- /tmp/tmplFZ5li/files/000/dataset_14.dat
+ + + + + + + +
@@ -17,19 +37,47 @@
 true
 false
+ http://localhost:8080
- + + + + + + + +
 ignore
@@ -47,4 +95,10 @@
+
diff -r b5c5470d7c09 -r ff11d442feed test-data/vcf/test.xml
--- a/test-data/vcf/test.xml	Wed Sep 13 13:07:20 2017 -0400
+++ b/test-data/vcf/test.xml	Wed Nov 15 15:15:27 2017 -0500
@@ -5,6 +5,17 @@
 test-data/merlin.fa
+ + + 40 + true + + true + true + false + true + false +