comparison README.rst @ 0:818896cd2213 draft

planemo upload for repository https://github.com/bgruening/galaxytools/tree/master/tools/sklearn commit 60f0fbc0eafd7c11bc60fb6c77f2937782efd8a9-dirty
author bgruening
date Fri, 09 Aug 2019 07:13:13 -0400
Galaxy wrapper for scikit-learn library
***************************************

Contents
========

- `What is scikit-learn?`_
- `Scikit-learn main package groups`_
- `Tools offered by this wrapper`_

  - `Machine learning workflows`_
  - `Supervised learning workflows`_
  - `Unsupervised learning workflows`_


____________________________

.. _What is scikit-learn?:

What is scikit-learn?
=====================

Scikit-learn is an open-source machine learning library for the Python programming language. It offers various algorithms for supervised and unsupervised learning, as well as data preprocessing and transformation, model selection and evaluation, and dataset utilities. It is built upon the SciPy (Scientific Python) library.

The scikit-learn source code can be accessed at https://github.com/scikit-learn/scikit-learn.
Detailed installation instructions can be found at http://scikit-learn.org/stable/install.html.

.. _Scikit-learn main package groups:

Scikit-learn main package groups
================================

Scikit-learn provides users with several main groups of related operations.
These are:

- Classification
    Identifying to which category an object belongs.
- Regression
    Predicting a continuous-valued attribute associated with an object.
- Clustering
    Automatic grouping of similar objects into sets.
- Preprocessing
    Feature extraction and normalization.
- Model selection and evaluation
    Comparing, validating, and choosing parameters and models.
- Dimensionality reduction
    Reducing the number of random variables to consider.

Each group consists of a number of well-known algorithms from that category. For example, one can find hierarchical, spectral, k-means, and other clustering methods in the ``sklearn.cluster`` package.

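As an illustration of one such group, the ``sklearn.cluster`` package can be exercised in a few lines. This is a minimal sketch, assuming scikit-learn and NumPy are installed; the six hand-made points are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Six 2-D points forming two visually obvious groups.
X = np.array([[1, 1], [1, 2], [2, 1],
              [8, 8], [8, 9], [9, 8]])

# n_clusters is the main choice to make; random_state makes the run repeatable.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(X)

# Points in the same visual group receive the same cluster label.
print(labels)
```

The same `fit`/`fit_predict` pattern applies to the other clustering estimators in the package, such as the hierarchical and spectral methods.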
.. _Tools offered by this wrapper:

Available tools in the current wrapper
======================================

The current release of the wrapper offers a subset of the packages from the scikit-learn library. You can find:

- A subset of the classification metric functions
- Linear and quadratic discriminant classifiers
- Random forest and AdaBoost classifiers and regressors
- All the clustering methods
- All support vector machine classifiers
- A subset of the data preprocessing estimator classes
- Pairwise metric measurement functions

In addition, several tools for performing matrix operations, generating problem-specific datasets, and encoding text and extracting features have been prepared to help the user with more advanced operations.

.. _Machine learning workflows:

Machine learning workflows
==========================

Machine learning is about processes. No matter which machine learning algorithm we use, we can apply typical workflows and dataflows to produce more robust models and better predictions.
Here we discuss supervised and unsupervised learning workflows.

.. _Supervised learning workflows:

Supervised machine learning workflows
=====================================

**What is supervised learning?**

In this machine learning task, given sample data which are labeled, the aim is to build a model which can predict the labels of new observations.
In practice, there are five steps which we can go through to start from raw input data and end up with reasonable predictions for new samples:

1. Preprocess the data::

    * Change the collected data into the proper format and datatype.
    * Adjust the data quality by filling in missing values, performing
      required scaling and normalizations, etc.
    * Extract the features which are the most meaningful for the learning task.
    * Split the ready dataset into training and test samples.

2. Choose an algorithm::

    * These factors help one to choose a learning algorithm:
        - Nature of the data (e.g. linear vs. nonlinear data)
        - Structure of the predicted output (e.g. binary vs. multilabel classification)
        - Memory and time usage of the training
        - Predictive accuracy on new data
        - Interpretability of the predictions

3. Choose a validation method

    Every machine learning model should be evaluated before being put into practical use.
    There are numerous performance metrics for evaluating machine learning models.
    For supervised learning, usually classification or regression metrics are used.

    A validation method helps to evaluate the performance metrics of a trained model in order
    to optimize its performance or ultimately switch to a more efficient model.
    Cross-validation is a well-known validation method.

4. Fit a model

    Given the learning algorithm, validation method, and performance metric(s),
    repeat the following steps::

        * Train the model.
        * Evaluate based on the metrics.
        * Optimize until satisfied.

5. Use the fitted model for prediction

    This is a final evaluation in which the optimized model is used to make predictions
    on unseen (here, test) samples. After this, the model is put into production.

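The five steps above can be sketched end to end with scikit-learn. This is a minimal illustration, assuming scikit-learn is installed; the bundled iris dataset stands in for raw input data, and a random forest is just one possible algorithm choice.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import StandardScaler

# 1. Preprocess: scale the features and split into training and test samples.
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# 2. Choose an algorithm: a random forest handles nonlinear data well.
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 3. Choose a validation method: 5-fold cross-validation on the training set.
cv_scores = cross_val_score(clf, X_train, y_train, cv=5)

# 4. Fit the model on the full training set.
clf.fit(X_train, y_train)

# 5. Use the fitted model to predict the unseen test samples.
acc = accuracy_score(y_test, clf.predict(X_test))
```

In a real project the cross-validation scores from step 3 would drive the optimization loop of step 4 (tuning hyperparameters or switching algorithms) before the final test-set evaluation in step 5.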
.. _Unsupervised learning workflows:

Unsupervised machine learning workflows
=======================================

**What is unsupervised learning?**

Unlike in supervised learning, and more likely in real life, the initial data is not labeled.
The task is to extract the structure from the data and group the samples based on their similarities.
Clustering and dimensionality reduction are two famous examples of unsupervised learning tasks.

In this case, the workflow is as follows::

    * Preprocess the data (without splitting into training and test sets).
    * Train a model.
    * Evaluate and tune the parameters.
    * Analyse the model and test it on real data.
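
The workflow above can be sketched with scikit-learn. This is a minimal illustration, assuming scikit-learn is installed; the synthetic blobs dataset and the silhouette score stand in for real data and a problem-specific evaluation.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Preprocess the data (no train/test split for clustering); the true
# labels returned by make_blobs are discarded, as in real unlabeled data.
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.6, random_state=0)
X = StandardScaler().fit_transform(X)

# Train a model.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Evaluate and tune: a silhouette score close to 1 indicates well-separated
# clusters; repeating this with different n_clusters is one common way
# to tune the parameters.
score = silhouette_score(X, labels)
```

With no labels available, internal measures such as the silhouette score replace the classification and regression metrics used in the supervised workflow.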