
Merge branch 'rtd' of github.com:tripal/tripal into rtd

Stephen Ficklin 6 years ago
parent
commit
9c08b0225c
82 changed files with 1104 additions and 12 deletions
  1. 15 0
      docs/_static/theme_overrides.css
  2. 6 0
      docs/conf.py
  3. BIN
      docs/user_guide/bulk_loader.1.png
  4. BIN
      docs/user_guide/bulk_loader.10.png
  5. BIN
      docs/user_guide/bulk_loader.11.png
  6. BIN
      docs/user_guide/bulk_loader.12.png
  7. BIN
      docs/user_guide/bulk_loader.13.png
  8. BIN
      docs/user_guide/bulk_loader.2.png
  9. BIN
      docs/user_guide/bulk_loader.3.png
  10. BIN
      docs/user_guide/bulk_loader.4.png
  11. BIN
      docs/user_guide/bulk_loader.5.png
  12. BIN
      docs/user_guide/bulk_loader.6.png
  13. BIN
      docs/user_guide/bulk_loader.7.png
  14. BIN
      docs/user_guide/bulk_loader.8.png
  15. BIN
      docs/user_guide/bulk_loader.9.png
  16. 323 1
      docs/user_guide/bulk_loader.rst
  17. BIN
      docs/user_guide/configuring_page_display.1.png
  18. BIN
      docs/user_guide/configuring_page_display.2.png
  19. BIN
      docs/user_guide/configuring_page_display.3.rearrange.png
  20. BIN
      docs/user_guide/configuring_page_display.4.png
  21. BIN
      docs/user_guide/configuring_page_display.5.png
  22. 53 2
      docs/user_guide/configuring_page_display.rst
  23. 1 1
      docs/user_guide/customize_site.rst
  24. 21 1
      docs/user_guide/galaxy_integration.rst
  25. BIN
      docs/user_guide/galaxy_integration/install.1.png
  26. BIN
      docs/user_guide/galaxy_integration/install.2.png
  27. 50 0
      docs/user_guide/galaxy_integration/install.rst
  28. 2 0
      docs/user_guide/galaxy_integration/job_queue.rst
  29. BIN
      docs/user_guide/galaxy_integration/remote_servers.1.png
  30. BIN
      docs/user_guide/galaxy_integration/remote_servers.2.png
  31. BIN
      docs/user_guide/galaxy_integration/remote_servers.3.png
  32. 34 0
      docs/user_guide/galaxy_integration/remote_servers.rst
  33. BIN
      docs/user_guide/galaxy_integration/site_wide_files.1.png
  34. BIN
      docs/user_guide/galaxy_integration/site_wide_files.2.png
  35. BIN
      docs/user_guide/galaxy_integration/site_wide_files.3.png
  36. 20 0
      docs/user_guide/galaxy_integration/site_wide_files.rst
  37. BIN
      docs/user_guide/galaxy_integration/user_quotas.1.png
  38. BIN
      docs/user_guide/galaxy_integration/user_quotas.2.png
  39. BIN
      docs/user_guide/galaxy_integration/user_quotas.3.png
  40. 19 0
      docs/user_guide/galaxy_integration/user_quotas.rst
  41. 4 0
      docs/user_guide/galaxy_integration/viewing_usage_data.rst
  42. 2 0
      docs/user_guide/galaxy_integration/workflows.rst
  43. 4 0
      docs/user_guide/galaxy_integration/workflows_and_collections.rst
  44. 37 1
      docs/user_guide/job_management.rst
  45. BIN
      docs/user_guide/mviews.1.png
  46. 17 1
      docs/user_guide/mviews.rst
  47. 30 1
      docs/user_guide/searching.rst
  48. 5 0
      docs/user_guide/searching/default_pages.rst
  49. 7 0
      docs/user_guide/searching/elasticsearch_module.rst
  50. BIN
      docs/user_guide/searching/search_api.1.modules.png
  51. BIN
      docs/user_guide/searching/search_api.10.png
  52. BIN
      docs/user_guide/searching/search_api.11.png
  53. BIN
      docs/user_guide/searching/search_api.12.png
  54. BIN
      docs/user_guide/searching/search_api.13.png
  55. BIN
      docs/user_guide/searching/search_api.14.add_fields.png
  56. BIN
      docs/user_guide/searching/search_api.15.configure_title.png
  57. BIN
      docs/user_guide/searching/search_api.16.png
  58. BIN
      docs/user_guide/searching/search_api.17.png
  59. BIN
      docs/user_guide/searching/search_api.18.png
  60. BIN
      docs/user_guide/searching/search_api.19.view.png
  61. BIN
      docs/user_guide/searching/search_api.2.png
  62. BIN
      docs/user_guide/searching/search_api.20.png
  63. BIN
      docs/user_guide/searching/search_api.21.click_search.png
  64. BIN
      docs/user_guide/searching/search_api.22.done.png
  65. BIN
      docs/user_guide/searching/search_api.3.png
  66. BIN
      docs/user_guide/searching/search_api.4.config3_server.png
  67. BIN
      docs/user_guide/searching/search_api.5.png
  68. BIN
      docs/user_guide/searching/search_api.6.png
  69. BIN
      docs/user_guide/searching/search_api.7.png
  70. BIN
      docs/user_guide/searching/search_api.8.png
  71. BIN
      docs/user_guide/searching/search_api.9.png
  72. 160 0
      docs/user_guide/searching/search_api.rst
  73. BIN
      docs/user_guide/setting_page_urls.1.png
  74. BIN
      docs/user_guide/setting_page_urls.2.png
  75. 30 3
      docs/user_guide/setting_page_urls.rst
  76. BIN
      docs/user_guide/web_services.1.png
  77. BIN
      docs/user_guide/web_services.2.png
  78. BIN
      docs/user_guide/web_services.3.png
  79. BIN
      docs/user_guide/web_services.4.png
  80. BIN
      docs/user_guide/web_services.5.png
  81. BIN
      docs/user_guide/web_services.6.png
  82. 264 1
      docs/user_guide/web_services.rst

+ 15 - 0
docs/_static/theme_overrides.css

@@ -0,0 +1,15 @@
+/* override table width restrictions
+See: https://rackerlabs.github.io/docs-rackspace/tools/rtd-tables.html
+ */
+@media screen and (min-width: 767px) {
+
+   .wy-table-responsive table td {
+      /* !important prevents the common CSS stylesheets from overriding
+         this as on RTD they are loaded after this stylesheet */
+      white-space: normal !important;
+   }
+
+   .wy-table-responsive {
+      overflow: visible !important;
+   }
+}

+ 6 - 0
docs/conf.py

@@ -87,6 +87,12 @@ html_theme = "sphinx_rtd_theme"
 # so a file named "default.css" will overwrite the builtin "default.css".
 html_static_path = ['_static']
 
+html_context = {
+    'css_files': [
+        '_static/theme_overrides.css',  # override wide tables in RTD theme
+    ],
+}
+
 # Custom sidebar templates, must be a dictionary that maps document names
 # to template names.
 #

BIN
docs/user_guide/bulk_loader.1.png


BIN
docs/user_guide/bulk_loader.10.png


BIN
docs/user_guide/bulk_loader.11.png


BIN
docs/user_guide/bulk_loader.12.png


BIN
docs/user_guide/bulk_loader.13.png


BIN
docs/user_guide/bulk_loader.2.png


BIN
docs/user_guide/bulk_loader.3.png


BIN
docs/user_guide/bulk_loader.4.png


BIN
docs/user_guide/bulk_loader.5.png


BIN
docs/user_guide/bulk_loader.6.png


BIN
docs/user_guide/bulk_loader.7.png


BIN
docs/user_guide/bulk_loader.8.png


BIN
docs/user_guide/bulk_loader.9.png


+ 323 - 1
docs/user_guide/bulk_loader.rst

@@ -2,4 +2,326 @@
 Bulk Loader
 ===============
 
-Lorem ipsum
+The bulk loader is a tool that Tripal provides for loading data contained in tab-delimited files. Tripal supports loading of files in standard formats (e.g. ``FASTA``, ``GFF``, ``OBO``), but Chado can support a variety of different biological data types and there are often no community-standard file formats for loading these data. For example, there is no file format for importing genotype and phenotype data; those data can be stored in the feature, stock and natural diversity tables of Chado. The Bulk Loader was introduced in Tripal v1.1 and provides a web interface for building custom data loaders. In short, the site developer creates a bulk loader "template". This template can then be used and re-used for any tab-delimited file that follows the format described by the template. Additionally, bulk loading templates can be exported, allowing Tripal sites to share loaders with one another.  Loading templates that have been shared are available on the Tripal website here: http://tripal.info/extensions/bulk-loader-templates.
+
+The following commands can be executed to install the Tripal Bulk Loader using Drush:
+
+.. code-block:: bash
+
+  cd /var/www/
+  drush pm-enable tripal_bulk_loader
+
+Plan How to Store Data
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To demonstrate use of the Bulk Loader, a brief example that imports a list of organisms and associates them with their NCBI taxonomy IDs will be performed. The input tab-delimited file contains the list of all *Fragaria* (strawberry) species in NCBI at the time of the writing of this document.
+
+.. code-block:: bash
+
+  cd /var/www/html/sites/default/files
+  wget http://tripal.info/sites/default/files/book_pages/Fragaria_0.txt
+
+
+This file has three columns: NCBI taxonomy ID, genus and species:
+
+.. code-block:: text
+  :caption: Fragaria sample file
+
+  3747    "Fragaria"        "x ananassa"
+  57918   "Fragaria"        "vesca"
+  60188   "Fragaria"        "nubicola"
+  64939   "Fragaria"        "iinumae"
+  64940   "Fragaria"        "moschata"
+  64941   "Fragaria"        "nilgerrensis"
+  64942   "Fragaria"        "viridis"
+
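+If you wish to verify that the download succeeded and that the file is tab delimited, a quick check such as the following can be used (paths as in the download step above):
+
+.. code-block:: bash
+
+  head -3 Fragaria_0.txt
+  # Print the number of tab-separated columns per line (should be 3 for every line).
+  awk -F'\t' '{print NF}' Fragaria_0.txt | sort -u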
+
+To use the bulk loader you must be familiar with the Chado database schema and have an idea for where data should be stored. It is best practice to consult the GMOD website or consult the Chado community (via the `gmod-schema mailing list <https://lists.sourceforge.net/lists/listinfo/gmod-schema>`_) when deciding how to store data. For this example, we want to add the species to Chado, and we want to associate the NCBI taxonomy ID with these organisms. The first step, therefore, is to decide where in Chado these data should go. In Chado, organisms are stored in the **organism** table. This table has the following fields:
+
+.. csv-table:: Chado organism table
+  :header: "Name", "Type", "Description"
+
+  "organism_id", "serial", "PRIMARY KEY"
+  "abbreviation", "character varying(255)", ""
+  "genus", "character varying(255)", "UNIQUE#1 NOT NULL"
+  "species", "character varying(255)", "UNIQUE#1 NOT NULL. A type of organism is always uniquely identified by genus and species. When mapping from the NCBI taxonomy names.dmp file, this column must be used where it is present, as the common_name column is not always unique (e.g. environmental samples). If a particular strain or subspecies is to be represented, this is appended onto the species name. Follows standard NCBI taxonomy pattern."
+  "common_name", "character varying(255)", ""
+  "comment", "text", ""
+
+
+We can therefore store the second and third columns of the tab-delimited input file in the **genus** and **species** columns of the organism table.
+
+In order to store a database external reference (such as for the NCBI Taxonomy ID) we need to use the following tables: **db**, **dbxref**, and **organism_dbxref**. The **db** table will house the entry for the NCBI Taxonomy; the **dbxref** table will house the entry for the taxonomy ID; and the **organism_dbxref** table will link the taxonomy ID stored in the **dbxref** table with the organism housed in the **organism** table. For reference, the fields of these tables are as follows:
+
+
+
+.. csv-table:: chado.db structure
+  :header: "Name", "Type", "Description"
+
+  "db_id", "serial", "PRIMARY KEY"
+  "name", "character varying(255)", "UNIQUE NOT NULL"
+  "description", "character varying(255)", ""
+  "urlprefix", "character varying(255)", ""
+  "url", "character varying(255)", ""
+
+
+.. csv-table:: chado.dbxref structure
+  :header: "Name", "Type", "Description"
+
+  "dbxref_id", "serial", "PRIMARY KEY"
+  "db_id", "integer", "Foreign key to db. UNIQUE#1 NOT NULL"
+  "accession", "character varying(255)", "UNIQUE#1 NOT NULL. The local part of the identifier. Guaranteed by the db authority to be unique for that db."
+  "version", "character varying(255)", "UNIQUE#1 NOT NULL DEFAULT ''"
+  "description", "text", ""
+
+
+.. csv-table:: chado.organism_dbxref structure
+  :header: "Name",	"Type",	"Description"
+
+  "organism_dbxref_id", "serial", "PRIMARY KEY"
+  "organism_id",	"integer",	"Foreign key organism. UNIQUE#1 NOT NULL"
+  "dbxref_id",	"integer",	"Foreign key dbxref.  UNIQUE#1 NOT NULL"
+
+
+For our bulk loader template, we will therefore need to insert values into the **organism**, **db**, **dbxref** and **organism_dbxref** tables. In our input file we have the genus and species and taxonomy ID so we can import these with a bulk loader template. However, we do not have information that will go into the db table (e.g. "NCBI Taxonomy"). This is not a problem as the bulk loader can use existing data to help with import. We simply need to use the "NCBI Taxonomy" database that is currently in the Chado instance of Tripal v3.
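+
+If you would like to confirm that this database entry is already present before building the template, a quick check such as the following can be used (a sketch, assuming a local PostgreSQL database named ``drupal`` with Chado installed in the ``chado`` schema; adjust the names for your site):
+
+.. code-block:: bash
+
+  psql -d drupal -c "SELECT db_id, name FROM chado.db WHERE name ILIKE '%taxon%';"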
+
+Creating a New Bulk Loader Template
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Now that we know where all of the data in the input file will go and we have the necessary dependencies in the database (i.e. the NCBI Taxonomy database), we can create a new bulk loader template. Navigate to ``Tripal → Data Loaders → Chado Bulk Loader``, click the **Templates** tab in the top right corner, and finally click the link **Add Template**. The following page appears:
+
+.. image:: ./bulk_loader.1.png
+
+
+We need to first provide a name for our template. Try to name templates in a way that is meaningful to others. Currently only site administrators can load files using the bulk loader, but future versions of Tripal will provide functionality to allow other privileged users the ability to use the bulk loader templates. Thus, it is important to name the templates so that others can easily identify the purpose of the template. For this example, enter the name **NCBI Taxonomy Importer (taxid, genus, species)**. The following page appears:
+
+.. image:: ./bulk_loader.2.png
+
+Notice that the page is divided into two sections: **Current Records** and **Current Fields**. Before we continue with the template we need a bit of explanation as to the terminology used by the bulk loader. A **record** refers to a Chado table and an action on that table. For example, to insert the data from the input file we will need to select the NCBI Taxonomy database from the **db** table and insert entries into the **dbxref**, **organism** and **organism_dbxref** tables. Therefore, we will have four records:
+
+* An insert into the organism table
+* A select from the db table (to get the database id (db_id) of the "NCBI Taxonomy" database needed for the insert into the dbxref table)
+* An insert into the dbxref table
+* An insert into the organism_dbxref table.
+
+Each record contains a set of fields on which the action is performed. Thus, when we insert an entry into the organism table we will insert into two fields: **genus** and **species**.
+
+To create the first record for inserting an organism, click the button **New Record/Field**. The following page appears:
+
+.. image:: ./bulk_loader.3.png
+
+By default, when adding a new record, the bulk loader also provides the form elements for adding the first field of the record as well. We are adding a new record, so we can leave the **Record** drop-down as **New Record**. Next, give this record a unique record name. Because we are inserting into the organism table, enter the name **Organism** into the **Unique Record Name** box.
+
+We also have the opportunity with this form to add our first field to the record.  Because we are adding the organism record we will first add the field for the **genus**. In the **Field** section we specify the source of the field. Because the genus value comes from the input file, select the first radio button titled **Data**. Next we need a human-readable name for the field. This field is the **genus** field so we will enter Genus into the **Human-readable Title for Field** box.  Next, we need to specify the **Chado table** for this record. In the Chado table drop down box, choose the **organism** table, and in the **Chado Field/Column** drop down box select **genus**.
+
+In the next section, titled **Data File Column**, we need to indicate the column in the tab-delimited file where the genus is found. For the example file this is column 2 (columns are ordered beginning with number 1). Therefore, enter the number **2** in the **Column** box. There are additional options to expose the field to the user, but for now we can ignore those options. Click the **Save Changes** button at the bottom. We now see that the organism record and the first field have been added to our bulk loader template.
+
+.. image:: ./bulk_loader.4.png
+
+We also see that the **Mode** (or action) for this record has been set to insert by default. Before continuing we should edit the settings for the record so that it is more fault tolerant. Click the **Edit** link to the left of the new organism record. On the resulting page we see the record details we already provided, but now there is a section titled **Action to take when Loading Record**. By default, the **INSERT** option is selected, which is correct: we want to perform an insert. However, notice the **SELECT if duplicate (no insert)** option in the **Additional Insert Options** section; check this box. This is a good option to add because it prevents the bulk loader from failing if the record already exists in the table.
+
+Click the **Save Record** button to save these settings. You will see that the **Mode** is now set to **insert or select if duplicate**; previously it was just **insert**.
+
+Next, we need to add the **species** field to the record. Click the **Add Field** link to the left of the organism record name. Here we are presented with the same form we used when first adding the organism record. However, this time, the **Record** section is collapsed. If we open that section, the drop-down is already set to the **Organism** record since we are not creating a new record. To add the **Species** field, provide the following values and click the **Save Changes** button:
+
+* Type of field: Data
+* Human-readable Title for Field: Species
+* Chado table: organism (should already be set)
+* Chado Field/Column: species
+* Column: 3
+
+
+We now have two fields for our organism record:
+
+.. image:: ./bulk_loader.5.png
+
+At this point our organism record is complete; however, there are still a few fields in the organism table of Chado that are not present in our record. These include the **organism_id**, **abbreviation**, **common_name** and **comment** fields. We do not have values in our input file for any of these fields. Fortunately, the **organism_id** field is a primary key field and is auto-generated when a record is submitted, so we do not need to provide a value for it. The other fields are not part of the unique constraint of the table; therefore, those fields are optional and we do not need to specify them. Ideally, if we did have values for those non-required fields we would add them as well.
+
+To this point, we have built the loader such that it can load two of the three columns in our input file. We have one remaining column: the NCBI taxonomy ID. In order to associate an organism with the taxonomy ID we must first insert the taxonomy ID into the **dbxref** table. Examining the dbxref table, we see that a **db_id** field is a required value in a foreign key relationship. We must first retrieve the **db_id** from the **db** table of Chado before we can add the entry to the **dbxref** table. Therefore, we will create a second record that will do just that. On the **Edit Template** page click the button **New Record/Field**. Here we see the same form we used for adding the first organism record. Provide the following values:
+
+* For the record:
+   * Record: New Record
+   * Unique Record Name: NCBI Taxonomy DB
+   * Record Type/Action: SELECT ONCE: Select the record only once for each constant set.
+* For the field:
+   * Type of field: Constant
+   * Human-readable Title for Field: DB name
+   * Chado table: db
+   * Chado field/column: name
+* Within the Constant section:
+   * Constant Value:  NCBITaxon
+   * Check "Ensure the value is in the table"
+
+
+Here we use a field type of **Constant** rather than **Data** because we are providing the value to be used in the record ourselves rather than reading it from the input file. The goal is to match this constant (``NCBITaxon`` in this example) with the name of the NCBI Taxonomy database entry that already exists in the **db** table. Click the **Save Changes** button.
+
+We now see a second record on the **Edit Template** page. However, the mode for this record is **insert**. We do not want to insert this value into the table; we want to select it, because we need the corresponding **db_id** for the **dbxref** record. To change this, click the **Edit** link to the left of the **NCBI Taxonomy DB** record and select only the option **SELECT ONCE**. We choose this option because the database entry returned by the record applies to the entire input file, so the bulk loader only needs to perform the select one time for the whole import. Otherwise, the select statement would execute for each row in the input file, causing excess queries. Because the field is a constant, the record need only be selected once; it then remains available for use throughout the entire import process. Finally, click **Save Record**. The **NCBI Taxonomy DB** record now has a mode of **select once**.
+
+Now that we have a record that selects the **db_id**, we can create the **dbxref** record. For the **dbxref** record there is a unique constraint that requires the **accession**, **db_id** and **version**. The version field has a default value, so we only need to create two fields for this new record: the db_id and the accession. We will use the **db_id** from the **NCBI Taxonomy DB** record, and the accession is the first column of the input file. First, we will add the **db_id** field. Click the **New Record/Field** button and set the following:
+
+* For the record:
+   * Record: New Record
+   * Unique Record Name: Taxonomy ID
+   * Record Type/Action:  INSERT: insert the record
+* For the field:
+   * Type of field: Record referral
+   * Human-readable Title for Field: NCBI Taxonomy DB ID
+   * Chado table: dbxref
+   * Chado Field/Column: db_id
+* In the Record Referral Section:
+   * Record to refer to: NCBI Taxonomy DB
+   * Field to refer to: db_id
+
+
+Click the Save Changes button. The Edit Template page appears.
+
+
+.. image:: ./bulk_loader.6.png
+
+
+
+Again, we need to edit the record to make the loader more fault tolerant. Click the Edit link to the left of the Taxonomy ID record. Select the following:
+
+* Insert
+* Select if duplicate
+
+To complete this record, we need to add the accession field. Click the Add field link to the left of the Taxonomy ID record name. Provide the following values:
+
+* For the field:
+   * Type of Field: Data
+   * Human-readable Title for Field: Accession
+   * Chado table: dbxref
+   * Chado field/column: accession
+* In the Data File Column section:
+   * Column: 1
+
+At this stage, we should have three records: Organism, NCBI Taxonomy DB, and Taxonomy ID. We can now add the final record that will insert a record into the **organism_dbxref** table. Create this new record with the following details:
+
+
+* For the record:
+   * Record: New Record
+   * Unique Record Name: Taxonomy/Organism Linker
+   * Check: Insert: insert the record
+* For the field:
+   * Type of Field: Record Referral
+   * Human-readable Title for Field: Accession Ref
+   * Chado table: organism_dbxref
+   * Chado field/column: dbxref_id
+* In the Record Referral section:
+   * Record to refer to: Taxonomy ID
+   * Field to refer to: dbxref_id
+
+Create the second field:
+
+* For the field:
+   * Type of Field: Record Referral
+   * Human-readable Title for Field: Organism ID
+   * Chado table: organism_dbxref
+   * Chado field/column: organism_id
+* In the Record Referral section:
+   * Record to refer to: Organism
+   * Field to refer to: organism_id
+
+After saving the field, edit the record and set the following:
+
+* Change the record mode to: insert or select if duplicate
+
+We are now done! We have created a bulk loader template that reads in a file with three columns containing an NCBI taxonomy ID, a genus and a species. The loader places the genus and species in the **organism** table, adds the NCBI Taxonomy ID to the **dbxref** table, links it to the NCBI Taxonomy entry in the **db** table, and then adds an entry to the **organism_dbxref** table that links the organism to the NCBI taxonomy ID. The following screenshots show how the template should appear:
+
+
+.. image:: ./bulk_loader.7.png
+
+
+To save the template, click the **Save Template** link at the bottom of the page.
+
+Creating a Bulk Loader Job (importing a file)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Now that we have created a bulk loader template we can use it to import a file. We will import the ``Fragaria_0.txt`` file downloaded previously. To import a file using a bulk loader template, click the **Add Content** link in the administrative menu and click **Bulk Loading Job**. A bulk loading job is required each time we want to load a file. Below is a screen shot of the page used for creating a bulk loading job.
+
+
+
+Provide the following values:
+
+* Job Name: Import of Fragaria species
+* Template: NCBI Taxonomy Importer (taxid, genus, species)
+* Data File: /var/www/html/sites/default/files/Fragaria_0.txt
+* Keep track of inserted IDs: No
+* File has a header: No
+
+Click **Save**. The page then appears as follows:
+
+
+.. image:: ./bulk_loader.8.png
+
+
+You can see details about constants that are used by the template and where the fields from the input file will be stored by clicking the **Data Fields** tab in the table of contents on the left sidebar.
+
+
+.. image:: ./bulk_loader.9.png
+
+
+Now that we have created a job, we can submit it for execution by clicking the **Submit Job** button. This adds a job to the Tripal Jobs system, and we can launch the job as we have previously in this tutorial:
+
+.. code-block:: shell
+
+  cd /var/www
+  drush trp-run-jobs --username=admin --root=/var/www/html
+
+After execution of the job you should see output similar to the following in the terminal window:
+
+.. code-block:: shell
+
+
+  Tripal Job Launcher
+  Running as user 'admin'
+  -------------------
+  There are 1 jobs queued.
+  Calling: tripal_bulk_loader_load_data(2, 7)
+  Template: NCBI Taxonomy Importer (taxid, genus, species) (1)
+  File: /var/www/html/sites/default/files/Fragaria_0.txt (46 lines)
+
+  Preparing to load...
+  Loading...
+      Preparing to load the current constant set...
+          Open File...
+          Start Transaction...
+          Defer Constraints...
+          Acquiring Table Locks...
+              ROW EXCLUSIVE for organism
+              ROW EXCLUSIVE for dbxref
+              ROW EXCLUSIVE for organism_dbxref
+      Loading the current constant set...
+  Progress:
+  [|||||||||||||||||||||||||||||||||||||||||||||||||||] 100.00%. (46 of 46) Memory: 33962080
+
+Our *Fragaria* species should now be loaded, and we can return to the Tripal site to see them. Click on the **Organisms** link in the **Search Data** menu. In the search form that appears, type "Fragaria" in the **Genus** text box and click the **Filter** button. We should see the list of newly added *Fragaria* species.
+
+.. image:: ./bulk_loader.10.png
+
+
+Before the organisms will have Tripal pages, the Chado records need to be **Published**. You can publish them by navigating to ``admin → Tripal Content → Publish Tripal Content``. Select the **organism** table from the dropdown and run the job.
+
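+The publish step submits a Tripal job, so launch the job queue again as before:
+
+.. code-block:: shell
+
+  cd /var/www
+  drush trp-run-jobs --username=admin --root=/var/www/html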
+
+
+.. note::
+
+  In Tripal 2, records were synced by navigating to ``Tripal → Chado Modules → Organisms``.
+
+
+
+Once complete, return to the search form, find a *Fragaria* species that has been published and view its page. You should see a Cross References link in the left table of contents. If you click that link you should see the NCBI Taxonomy ID with a link to the page:
+
+.. image:: ./bulk_loader.11.png
+
+
+Sharing Your Templates with Others
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Now that our template for loading organisms with NCBI Taxonomy IDs is complete, we can share it with anyone else who has a Tripal-based site. To do this we simply export the template in text format, place it in a text file or directly in an email, and send it to a collaborator for import into their site. Navigate to ``Tripal → Chado Data Loaders → Bulk Loader`` and click the **Templates** tab at the top. Here we find a table of all the templates we have created. We should see our template named **NCBI Taxonomy Importer (taxid, genus, species)**. In the far right column is a link to export that template. Clicking that link will redirect you to a page where the template is provided as a serialized PHP array.
+
+.. image:: ./bulk_loader.12.png
+
+
+Simply cut-and-paste all of the text in the **Export** field and send it to a collaborator.
+
+To import a template that may have been created by someone else, navigate to ``Tripal → Chado Data Loaders → Bulk Loader`` and click the **Templates** tab. A link titled **Import Template** appears above the table of existing importers. The page that appears when that link is clicked will allow you to import any template shared with you.

BIN
docs/user_guide/configuring_page_display.1.png


BIN
docs/user_guide/configuring_page_display.2.png


BIN
docs/user_guide/configuring_page_display.3.rearrange.png


BIN
docs/user_guide/configuring_page_display.4.png


BIN
docs/user_guide/configuring_page_display.5.png


+ 53 - 2
docs/user_guide/configuring_page_display.rst

@@ -1,5 +1,56 @@
 
 Configuring Page Display
-===============
+=========================
 
-Lorem ipsum
+
+This is one of the many new and exciting features of Tripal v3.x. In this version of Tripal we have taken integration with Drupal Fields to a whole new level, representing each piece of content (in Chado or otherwise) as a Drupal Field. What this means for site builders is unprecedented control over content display and arrangement through the administrative user interface -- no more editing PHP template files to change the order, grouping or wording of content!
+
+You can configure the display of a given Tripal Content Type by navigating to ``Structure → Tripal Content Types`` and then selecting the "Manage Display" link beside the content type you would like to configure.
+
+.. image:: ./configuring_page_display.1.png
+
+
+
+The Manage Display user interface lists each Drupal Field in the order in which it will be displayed on the page. Fields are grouped into Tripal Panes by the Tripal DS module and the page is automatically divided into a right and left column. By default the left column contains the table of contents, which lists the Tripal Panes available to the user in the order they are listed in this UI. The following screenshots use the Analysis content type for demonstration.
+
+.. image:: configuring_page_display.2.png
+
+
+Rearranging Fields
+~~~~~~~~~~~~~~~~~~~
+
+To rearrange the fields within a Tripal Pane, simply drag them into the order you would like them to appear. For example, the description is currently within the Summary table -- it makes much more sense for it to be below the table but still within the summary. To do this, simply drag the description field to the bottom of the summary table and then move it in one level as shown in the following screenshot. Then click the **Save** button at the bottom to save the changes.
+
+.. image:: configuring_page_display.3.rearrange.png
+
+
+Removing Fields and/or Field Labels
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Now say we don't want the label "Description" in front of the description content since it's pretty self-explanatory. We can do that by changing the drop-down beside "Description", which currently says "Above", to "Hidden". This removes the label for the field, assuming it's not within a table.
+
+There may also be data you want to collect from your users but don't want to display on the page. This can be accomplished by disabling the field in the Manage Display UI. For example, we might not feel the need to tell users that this is an analysis page and thus want to hide the Resource Type field. This is done by changing the drop-down beside the Resource Type field from "Right" to "Disabled".
+
+.. warning::
+
+  Don't forget to save the configuration often as you are changing it. You will not see changes to the page unless the **Save** button at the bottom of the Manage Display UI is clicked.
+
+Changing Tripal Pane Names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The name of a Tripal Pane is displayed both in the header of the pane itself and in the table of contents. To change this name, click the gear button to the far right of the Tripal Pane you would like to change. This will bring up a blue pane of settings. Changing the Field Group Label will change the display name of the pane. For example, the following screenshot shows how you would change the "Cross References" Tripal Pane to be labeled "External Resources" instead, if that is what you prefer. Then just click the **Update** button to see your changes take effect.
+
+.. image:: ./configuring_page_display.4.png
+
+
+Display/Hide Tripal Panes on Page Load
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can also easily control which Tripal Panes you would like displayed to the user on initial page load. By default the Summary pane is the only one configured to show by default. However, if you would prefer all panes, or a specific subset of panes, to show by default, simply click the gear button to the far right of each Tripal Pane you want displayed and uncheck the "Hide panel on page load" checkbox. This gives you complete control over which panes you want your users to see first. If more than one pane is displayed by default, they will be shown in the order they are listed on the Manage Display UI.
+
+Display/Hide Empty Fields
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default Tripal v3 hides all empty fields from the user. However, like most behaviour in Tripal, this can be configured. If you would prefer to show all fields to the user regardless of whether there is content for that particular page, navigate to ``Structure → Tripal Content Types`` and then click on the edit link beside the Tripal content type you would like to show empty fields for. Near the bottom of this form is a **Field Display** drop-down. Just change this drop-down to "show empty fields" and then click **Save Content Type**. As an example, we have changed this setting for the organism content type and, as you can see below, all fields (including empty fields such as cross references and relationships) available to the organism content type are now shown.
+
+.. image:: ./configuring_page_display.5.png

+ 1 - 1
docs/user_guide/customize_site.rst

@@ -1,4 +1,4 @@
 Customizing Your Site
 =====================
 
-Lorem ipsum
+Through the combination of Drupal, Chado and the Tripal API, it is possible to fully customize your site and to add new functionality.  Tripal does not store or display all data out-of-the box, and every site has its own different look-and-feel.  If you want greater functionality beyond what comes with Tripal please see the :doc:`Developer's Handbook </dev_guide>` for instructions to use the Drupal and Tripal APIs to develop your own extensions to Tripal.

+ 21 - 1
docs/user_guide/galaxy_integration.rst

@@ -1,4 +1,24 @@
 Galaxy Integration
 ===============
 
-Lorem ipsum
+The Tripal Galaxy module is designed to support integration of Tripal with Galaxy. In the past, community databases have often provided analytical tools that come prepared with site-specific data; examples include BLAST, CAP3, and InterProScan servers. These tools eased the informatics burden for some researchers by providing site-specific data in easy-to-use web interfaces. With larger datasets and more complicated analytical workflows, creating online tools becomes a more challenging task.
+
+By integrating Tripal with Galaxy, a Tripal-based community database can offer more complicated analytical tools that support larger data sets using Galaxy as a backend. To this end, analytical workflows are created by site developers or bioinformaticists inside of the Galaxy user interface. Once tested and ready, the Tripal Galaxy module communicates with the Galaxy server to provide a web front-end for that workflow. Users can execute the workflow within the Tripal site, using an interface that site users are familiar and comfortable with. Users need not know how to use Galaxy to execute the workflow, although attribution is appropriately provided to the Galaxy server that provides the computation.
+
+The Tripal Galaxy module provides more than just a "wrapper" for Galaxy. Site administrators can provide files to help end-users easily integrate data from the site within workflows. On Tripal v3 sites, users can create data collections containing data gleaned from the site, which in turn can be used in Galaxy workflows. Quotas are provided to prevent users from overrunning the storage space of the server, and usage statistics help a site admin learn which workflows are most used and who the biggest users are.
+
+
+Development of the Tripal Galaxy module and accompanying starter workflows was funded by the `National Science Foundation award #1443040 <https://nsf.gov/awardsearch/showAward?AWD_ID=1443040>`_ and is part of the `Tripal Gateway Project <http://tripal.info/projects/tripal-gateway>`_.
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Galaxy Integration Overview
+
+   ./galaxy_integration/install.rst
+   ./galaxy_integration/user_quotas.rst
+   ./galaxy_integration/site_wide_files.rst
+   ./galaxy_integration/remote_servers.rst
+   ./galaxy_integration/workflows.rst
+   ./galaxy_integration/job_queue.rst
+   ./galaxy_integration/viewing_usage_data.rst
+   ./galaxy_integration/workflows_and_collections.rst

BIN
docs/user_guide/galaxy_integration/install.1.png


BIN
docs/user_guide/galaxy_integration/install.2.png


+ 50 - 0
docs/user_guide/galaxy_integration/install.rst

@@ -0,0 +1,50 @@
+Install Tripal Galaxy
+=================================
+
+
+Installation
+~~~~~~~~~~~~~
+
+The Tripal Galaxy module is available as a full Drupal module. Therefore, it can be installed following the typical Drupal module installation process, either via the GUI or via Drush. However, it requires a few dependencies:
+
+* curl
+* php-curl
+* blend4php
+
+Follow the typical method for installing tools on your operating system to install curl and php-curl. For example, on Ubuntu systems you can install both curl and php-curl with the following command:
+
+.. code-block:: bash
+
+  sudo apt-get install curl php-curl
+
+
+Additionally, this module requires the presence of the `blend4php library <https://github.com/galaxyproject/blend4php>`_. The blend4php library was developed as part of the Tripal Gateway Project as stand-alone PHP bindings to the Galaxy RESTful API. It allows any PHP-based web application to communicate with a remote Galaxy instance. To install blend4php, navigate to the ``sites/all/libraries`` directory of your Drupal installation and issue the following command:
+
+.. code-block:: bash
+
+  git clone https://github.com/galaxyproject/blend4php.git
+
+
+That's it!  The library is now available for Drupal to find.  Now, we can install the Tripal Galaxy module.  We'll do so here using a Drush command:
+
+.. code-block:: bash
+
+  drush pm-enable tripal_galaxy
+
+You will be asked if you would like to download and then enable the module.  Now that the Tripal Galaxy module is installed, navigate to the ``Administration > Tripal > Extensions > Galaxy`` page via the administrative menu.  At this location is the administrative interface for interacting with remote Galaxy servers.  Near the top of the page are several tabs including **Workflows, Job Queue, Usage, Quotas** and **Files**.
+
+
+.. image:: ./install.1.png
+
+
+Set Permissions
+~~~~~~~~~~~~~~~~
+
+
+Now that the Tripal Galaxy module is installed, we must set permissions so that users can submit workflows and administrators can manage them. To set permissions, navigate to the **People** page and click the **Permissions** tab in the top right. Search through the list of permissions until you find the section titled **Tripal Galaxy**. In this section you will find two types of permissions: one for providing access to execute published workflows and one for administering the Galaxy module. By default Drupal provides three types of roles: Anonymous, Authenticated and Administrator users. The screenshot below shows these permissions for these three roles:
+
+.. image:: ./install.2.png
+
+.. warning::
+
+  If your site is not new you may have additional roles.  As a rule you should **never** give the anonymous user either permission.  Anonymous users should **never** be allowed to execute workflows. Set permissions as appropriate for your site's configuration.  Here we allow authenticated users the ability to execute workflows.
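+
+Permissions can also be granted from the command line with Drush if you prefer (the permission string below is a placeholder; copy the exact permission name shown on the Permissions page):
+
+.. code-block:: bash
+
+  drush role-add-perm 'authenticated user' 'PERMISSION NAME FROM THE PERMISSIONS PAGE'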

+ 2 - 0
docs/user_guide/galaxy_integration/job_queue.rst

@@ -0,0 +1,2 @@
+The Galaxy Job Queue
+========================

BIN
docs/user_guide/galaxy_integration/remote_servers.1.png


BIN
docs/user_guide/galaxy_integration/remote_servers.2.png


BIN
docs/user_guide/galaxy_integration/remote_servers.3.png


+ 34 - 0
docs/user_guide/galaxy_integration/remote_servers.rst

@@ -0,0 +1,34 @@
+Adding a Remote Galaxy Server
+===============================
+
+Before any workflows can be run, at least one remote Galaxy server must be integrated with Tripal.  This can be done on the Tripal Galaxy administrative page.  This page is available by navigating to  **Administration > Tripal > Extensions > Galaxy**. The following page is presented:
+
+
+.. image:: ./remote_servers.1.png
+
+
+We can add a new remote Galaxy server by clicking the Add Galaxy Instance link.  The following page appears:
+
+.. image:: ./remote_servers.2.png
+
+
+Here we provide the details for the remote Galaxy instance. This can be a Galaxy instance that was set up specifically for and dedicated to your site, a shared Galaxy instance at your institution, or one of several `public Galaxy instances <https://galaxyproject.org/public-galaxy-servers/>`_. For this tutorial we will use the Galaxy Project's `"Use Galaxy" <https://usegalaxy.org/>`_ public instance. The following values are therefore provided to the fields:
+
+.. csv-table:: Example Galaxy Field Values
+  :header: "Field Name",  "Value"
+
+  "Galaxy Server Name", "Use Galaxy Public Server"
+  "Description", "The Galaxy Project's public server.  Does not provide all tools that may be needed, size and responsiveness is limited."
+  "URL", "https://usegalaxy.org/"
+  "User Name", "*[The name of a user account on the Galaxy server]*"
+  "API Key", "*[The API Key of the user]*"
+
+The fields described above provide everything needed to connect to a remote Galaxy instance: the URL, the user name of a user on the remote Galaxy server, and the API Key for that user. The **Galaxy Server Name** field lets you and the site's users know where computation is happening, while the **Description** field is for the site admin.
+
+The user name provided will be the account under which all Galaxy workflows will be executed, but the Tripal Galaxy module will ensure that each workflow submission runs independently of all others within this user account. It is recommended to create a **service** account on the remote Galaxy instance that is independent of a real person. This service account can be specific to your website only. Every user on Galaxy has the ability to create an API Key. It is via this key that Galaxy will allow remote connections from a client application like Tripal. To retrieve the API key for your service account user, log on to the remote Galaxy instance and navigate to **User > Preferences > Manage API Key**. By default, users do not have an API key. You can generate a key by clicking the link to generate a new key. Once the key is generated you must cut-and-paste the key into the **API Key** field of the Tripal Galaxy Instance setup form (shown above). Click the **Submit** button when done.
+
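+If you would like to double-check the credentials outside of Tripal first, the Galaxy API can be queried directly with curl (a quick sanity check; replace the key with your service account's API key):
+
+.. code-block:: bash
+
+  # No key required; confirms the server is reachable and reports its version.
+  curl https://usegalaxy.org/api/version
+  # Requires a valid key; lists the account's histories if the key is accepted.
+  curl "https://usegalaxy.org/api/histories?key=YOUR_API_KEY"
+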
+Now we can see that we have a Galaxy server available:
+
+.. image:: ./remote_servers.3.png
+
+Clicking the test link beside the server details will check to ensure that we have entered all information correctly and we can successfully connect to the remote Galaxy server. We see in the screenshot above that the test was successful and that the Galaxy server version is currently 17.09.

BIN
docs/user_guide/galaxy_integration/site_wide_files.1.png


BIN
docs/user_guide/galaxy_integration/site_wide_files.2.png


BIN
docs/user_guide/galaxy_integration/site_wide_files.3.png


+ 20 - 0
docs/user_guide/galaxy_integration/site_wide_files.rst

@@ -0,0 +1,20 @@
+Site-wide Files
+========================
+
+Galaxy workflows allow users to provide their own files, use Tripal data collections (i.e. data baskets) that they have saved, or use site-wide files provided by the site administrators. These files can be used as input to the workflows provided to the user. As an administrator, you can provide site-wide files for anyone to use in a Galaxy workflow by navigating to the Galaxy administrative page and clicking the **Files** tab near the top. The following page appears:
+
+.. image:: ./site_wide_files.1.png
+
+
+Initially, there are no site-wide files available.  You can add a file (such as a whole genome assembly FASTA file) for use in workflows by clicking the **Add a New File** link. The following page appears.
+
+.. image:: ./site_wide_files.2.png
+
+Here you have two options for providing a file: either upload the file using the File Upload form, or add the URL of the file if it is available from a remote source. Typically files from remote sources are available via an HTTP or FTP URL. For example, if your site provides whole genome data and the FASTA file for the genome assembly is available for retrieval from an FTP site, you should enter the full FTP URL for the file into the **Remote Path** field. Next, you must provide the type of file. Examples of file types include ``fasta``, ``fastq``, ``CSV``, etc. You must indicate the file type because Galaxy workflows sometimes require specific file types as input (if an input requires a FASTA file, then a FASTA file must be provided), and remote files do not always have an appropriate extension from which the type can be determined. Finally, the site's users should understand what the file is, so enter a very brief description of the file to help users recognize what it contains. When completed, click the **Add File** button.
+
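+Before adding a remote URL it can be worth confirming that the file is actually reachable from your web server. A quick check such as the following works for HTTP (and, with minor changes, FTP) sources; the URL below is purely illustrative:
+
+.. code-block:: bash
+
+  curl -sIL https://example.org/data/assembly_v1.1.fasta | head
+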
+In keeping with the example provided earlier in this User's Guide for Setup of an Example Genomics Site, the following screen shot shows a site-wide file for the JGI Citrus sinensis v1.1 assembly FASTA file that has been added.  Any workflow that expects a FASTA file as input will now be able to use this file in a workflow:
+
+.. image:: ./site_wide_files.3.png
+
+
+If you no longer need a specific file available for users you can either disable the file by clicking the **Disable** link in the Actions column, or you can delete the file by clicking the **Delete** link. Disabling the file will allow the file to remain as a site-wide file but exclude it from appearing for use in future workflow submissions.

BIN
docs/user_guide/galaxy_integration/user_quotas.1.png


BIN
docs/user_guide/galaxy_integration/user_quotas.2.png


BIN
docs/user_guide/galaxy_integration/user_quotas.3.png


+ 19 - 0
docs/user_guide/galaxy_integration/user_quotas.rst

@@ -0,0 +1,19 @@
+User Quotas
+========================
+
+
+One of the most important settings to configure is the quota system, which can be found on the **Quota** tab of the Galaxy Administrative page. Typically, next-gen workflows require multiple large files. Users will be able to select data available from this site for their workflows, but they can also upload their own data. To ensure that users do not exceed the limits of the server, a quota system is available. Click the **Quota** tab to reveal the following page:
+
+.. image:: ./user_quotas.1.png
+
+First, the total amount of space consumed by files uploaded for use in Galaxy workflows is shown at the top of the page. Initially this will indicate 0 B (zero bytes); as users upload files this statistic will change. You may return to this page in the future to check how much space is currently used by user uploads. Here you can also specify the default system-wide quota that all users receive. By default this is set to 64 megabytes with an expiration of 60 days. Once a file has existed on the site for 60 days it is marked for deletion and will be removed when the Drupal cron is executed. The default of 64 MB per user is most likely too small for your site. Adjust this setting and the days to expire as appropriate for your site's expected number of users and storage limitations, and click the **Save** button to preserve any changes you have made.
+
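+Because expired files are only removed when Drupal's cron runs, make sure cron is executing regularly on your site. It can also be triggered manually if needed:
+
+.. code-block:: bash
+
+  drush cron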
+
+In addition to the default settings for all users, you may want to allow specific users to have a larger (or perhaps smaller) quota.  You can set user-specific quotas by clicking the **Add Custom User Quota** link near the bottom of the page.   The following page appears:
+
+.. image:: ./user_quotas.2.png
+
+
+Here you must specify the Drupal user name of the user who should be granted a custom quota. This field will auto-populate suggestions as you type to help you find the correct username. Enter the desired quota size and expiration days and click the **Submit** button. You will then see the user-specific quota listed in the table at the bottom of the page:
+
+.. image:: ./user_quotas.3.png

+ 4 - 0
docs/user_guide/galaxy_integration/viewing_usage_data.rst

@@ -0,0 +1,4 @@
+Viewing Usage Data
+========================
+
+Coming soon.

+ 2 - 0
docs/user_guide/galaxy_integration/workflows.rst

@@ -0,0 +1,2 @@
+Adding a New Workflow
+======================

+ 4 - 0
docs/user_guide/galaxy_integration/workflows_and_collections.rst

@@ -0,0 +1,4 @@
+Workflows and Tripal Data Collections
+=====================================
+
+Coming soon.

+ 37 - 1
docs/user_guide/job_management.rst

@@ -1,4 +1,40 @@
 Job Management
 ===============
 
-Lorem ipsum
+The Tripal Daemon module is meant to provide a simple means of creating a robust, command-line-driven, fully bootstrapped PHP daemon. It uses the PHP-Daemon library (https://github.com/shaneharter/PHP-Daemon) to create the daemon (via the Libraries API) in order to not re-invent the wheel ;-).
+
+Features
+~~~~~~~~~
+
+* Provides a Drush interface to start/stop your Daemon.
+* Your daemon starts in the background and is detached from the current terminal.
+* The daemon will run any submitted Tripal Jobs within 20 seconds of submission.
+* A log that records the number of jobs executed, their identifiers and their results.
+* Lock files, automatic restart (8 hrs by default), and built-in signal handling & event logging are only a few of the features provided by the Daemon API, making this a fully featured & robust daemon.
+
+
+Requirements
+~~~~~~~~~~~~~
+
+* Libraries API (https://www.drupal.org/project/libraries)
+* PHP-Daemon Library version 2.0 (https://github.com/shaneharter/PHP-Daemon)
+    * Download the PHP-Daemon Library and extract it in your ``sites/all/libraries`` directory (one way to do this is sketched below). The folder must be named "PHP-Daemon".
+* Drush 5.x (https://github.com/drush-ops/drush)
+* Drush Daemon API (https://www.drupal.org/project/drushd)
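+
+One way to put these dependencies in place from the command line is sketched below (the module machine names and the cloned branch are assumptions; check each project page for the release that matches your site):
+
+.. code-block:: shell
+
+  # Fetch the PHP-Daemon library into the folder name the module expects.
+  cd /var/www/html/sites/all/libraries
+  git clone https://github.com/shaneharter/PHP-Daemon.git PHP-Daemon
+  # Enable the Libraries API and Drush Daemon API modules.
+  drush pm-enable libraries drushd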
+
+Tripal Daemon Usage
+~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: shell
+
+  # Start the daemon
+  drush trpjob-daemon start
+  # Stop the daemon
+  drush trpjob-daemon stop
+  # Check the status
+  drush trpjob-daemon status
+  # Show the log (the last 10 lines of the log file)
+  drush trpjob-daemon show-log
+  # Show the last N lines of the log file
+  drush trpjob-daemon show-log --num_lines=N

BIN
docs/user_guide/mviews.1.png


+ 17 - 1
docs/user_guide/mviews.rst

@@ -1,4 +1,20 @@
 Materialized Views
 ==================
 
-Lorem ipsum
+Chado is efficient as a data warehouse, but queries can become slow depending on the type of query. To help simplify and speed up these queries, materialized views can be employed. For a materialized view, a new database table is created and then populated with the results of a pre-defined SQL query. This allows you to execute a much simpler and faster query on the materialized view when producing user pages. A side effect, however, is redundant data, with the materialized view becoming stale if it is not updated regularly.
+
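+For example, once a view has been populated you can query its backing table directly instead of running the equivalent multi-table join (a sketch; ``organism_feature_count`` is one of the views Tripal provides, but the views available on your site may differ):
+
+.. code-block:: shell
+
+  drush sql-query "SELECT * FROM chado.organism_feature_count LIMIT 5;"
+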
+Tripal provides a mechanism for populating and updating these materialized views. These can be found on the ``Tripal → Data Storage → Chado → Materialized Views`` page.
+
+.. image:: ./mviews.1.png
+
+Here we see several materialized views. These were installed automatically by the Tripal Chado module. To update these views, click the **Populate** link for each one.
+
+This will submit jobs to populate the views with data. Now, run the jobs:
+
+.. code-block:: shell
+
+  cd /var/www/html
+  drush trp-run-jobs --user=administrator
+
+
+You can now see that all views are up-to-date on the **Materialized Views Page**. The number of rows in the view table is shown.

+ 30 - 1
docs/user_guide/searching.rst

@@ -1,4 +1,33 @@
 Searching
 ===============
 
-Lorem ipsum
+Drupal and Tripal offer a variety of methods for searching biological content on your site. Each has its own advantages and meets different needs. This section provides a description of several different ways to add searching. The two primary categories of search tools are content-specific and site-wide searching. Site-wide search tools typically provide a single text box that allows a user to provide a set of keywords, and the results of the search span multiple content types. Site-wide searches therefore allow users to quickly find content regardless of the content type, but they sometimes lack fine-grained control for filtering. Content-specific search tools provide more fine-grained filtering for a single content type. Therefore, it is often necessary to provide multiple types of search tools for different content types.
+
+There are several options for adding both site-wide and content-specific search tools, including:
+
+**For site-wide searching you can:**
+
+* Use the Default Drupal Search
+* Use the Search API Module
+* Use an independent search tool. Two popular tools that integrate with Drupal include:
+
+  * Elasticsearch
+  * Apache Solr
+
+**For content-specific searching you can:**
+
+* Use the search tools that Tripal provides
+* Develop your own search tools using Drupal Views
+* Write your own custom search tools using PHP and Tripal's API functions.
+
+You may want to consider using multiple search tools, such as a site-wide tool combined with content-specific tools. The following sections describe the use and setup of some of these different options.
+
+
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Search Guide
+
+   ./searching/default_pages
+   ./searching/search_api
+   ./searching/elasticsearch_module

+ 5 - 0
docs/user_guide/searching/default_pages.rst

@@ -0,0 +1,5 @@
+Tripal Content-Specific Search Tools
+========================================
+
+
+By default, Tripal will provide a search tool for every Tripal content type. When a new content type is created, a new search tool is automatically created for that content type.

+ 7 - 0
docs/user_guide/searching/elasticsearch_module.rst

@@ -0,0 +1,7 @@
+Tripal Elasticsearch Module
+========================================
+
+
+Tripal Elasticsearch is a module that integrates the `Elasticsearch <https://www.elastic.co/products/elasticsearch>`_ search engine with Tripal sites. By enabling the module, you are provided with a set of tools to index and search your Tripal 3 or Tripal 2 site. It provides easy-to-set-up site-wide search and gene search blocks out of the box. Tripal Elasticsearch also provides support for cross-site querying, which allows users to search multiple websites in a fast, asynchronous manner with one query.
+
+To enable Tripal Elasticsearch on your site, you must install an Elasticsearch instance, enable the Tripal Elasticsearch module, and use it to index your site’s data. For documentation on how to install and configure Tripal Elasticsearch, please `visit the Github repository <https://github.com/tripal/tripal_elasticsearch>`_.
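+
+For local testing, one common way to stand up an Elasticsearch instance is with Docker (a sketch only; the image tag below is an example, so check the module's documentation for the versions it supports):
+
+.. code-block:: bash
+
+  docker run -d -p 9200:9200 -e "discovery.type=single-node" \
+    docker.elastic.co/elasticsearch/elasticsearch:7.17.0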

BIN
docs/user_guide/searching/search_api.1.modules.png


BIN
docs/user_guide/searching/search_api.10.png


BIN
docs/user_guide/searching/search_api.11.png


BIN
docs/user_guide/searching/search_api.12.png


BIN
docs/user_guide/searching/search_api.13.png


BIN
docs/user_guide/searching/search_api.14.add_fields.png


BIN
docs/user_guide/searching/search_api.15.configure_title.png


BIN
docs/user_guide/searching/search_api.16.png


BIN
docs/user_guide/searching/search_api.17.png


BIN
docs/user_guide/searching/search_api.18.png


BIN
docs/user_guide/searching/search_api.19.view.png


BIN
docs/user_guide/searching/search_api.2.png


BIN
docs/user_guide/searching/search_api.20.png


BIN
docs/user_guide/searching/search_api.21.click_search.png


BIN
docs/user_guide/searching/search_api.22.done.png


BIN
docs/user_guide/searching/search_api.3.png


BIN
docs/user_guide/searching/search_api.4.config3_server.png


BIN
docs/user_guide/searching/search_api.5.png


BIN
docs/user_guide/searching/search_api.6.png


BIN
docs/user_guide/searching/search_api.7.png


BIN
docs/user_guide/searching/search_api.8.png


BIN
docs/user_guide/searching/search_api.9.png


+ 160 - 0
docs/user_guide/searching/search_api.rst

@@ -0,0 +1,160 @@
+Search API Module: Site-Wide Searching
+========================================
+
+Installing Drupal Search API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+* **`Search API <https://www.drupal.org/project/search_api>`_**: This module provides an interface for much more powerful, efficient searching than the Drupal core search module. Specifically, it allows you to use more powerful engines such as `Elastic Search <https://www.drupal.org/project/search_api_elasticsearch>`_ and `Apache Solr <https://www.drupal.org/project/search_api_solr>`_, as well as advanced features such as facets (for narrowing down search results based on fields or entity type), fuzzy search, etc.
+* **`Search API Database Service <https://www.drupal.org/project/search_api_db>`_**: This module provides a search backend/server that defines how your search index is stored. Specifically, it stores the index in your current Drupal database.
+
+Install the **Search API** and **Database search** modules as you would any other Drupal module. This can be done using Drupal's module installation page as shown in the screenshot below. For installation instructions, refer to the `Drupal.org Tutorial <https://www.drupal.org/documentation/install/modules-themes/modules-7>`_.
+
+.. figure:: ./search_api.1.modules.png
+
+  Screenshot of the Modules page with Database Search, Search API and Search Views enabled.
+
+Alternatively, installation can be accomplished on the command-line by executing the following drush commands inside of the Drupal directory:
+
+.. code-block:: shell
+
+  drush pm-enable search_api
+  drush pm-enable search_api_db
+  drush pm-enable search_api_views
+
+
+
+Define your Search Backend/Server
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This tutorial covers using a basic Drupal database storage backend for your search. For large sites, it is recommended to use Elasticsearch or Apache Solr. First, we need to tell the Search API where we want our index stored.  Navigate to the configuration page for the Search API: either click on the **Configure** link shown in the above screenshot or navigate to ``Configuration > Search API`` through the administrative toolbar. You should see the following screen:
+
+
+.. figure:: ./search_api.2.png
+
+
+Before proceeding, consider deleting the "Default node index"; we don't need it.  Next, click the **Add Server** link.  Because we are configuring a basic Drupal database search server, we do not need to install any third-party software or set up an external server. Instead, fill out the configuration form to tell the Search API to use its own database to store the search index.  Give this server the name "Drupal Database" and select "Database service" from the **Service Class** drop down.  In the **Database Service** section, select "Search on parts of a word." If searching becomes slow due to this feature, that is an indicator that your site should use a different service class (i.e. Elasticsearch or Apache Solr). Click "Create Server" to finish configuring the search backend.
+
+
+.. figure:: ./search_api.3.png
+  :alt:  Add Server Config form
+
+Assuming all went well, you should see the following screen. Click the Search API link at the top of the screen (circled in the screenshot below) to return to the **Search API** configuration screen.
+
+.. figure:: ./search_api.4.config3_server.png
+
+Define a Search Index
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Now that we have created a server where the Search API will store our index, we have to define the index itself.  On the Search API configuration page click on the **Add index** link. The resulting page appears in the following screenshot. Name your index something descriptive. Consider including the word "search" somewhere in the name, as this name will be used when setting up the search form/listing (view). For example, enter the name "Tripal Content Search." Next, select "Tripal Content" as the **Item Type**. The item type defines which content types should be indexed. Keep in mind that the Search API currently does not support multi-entity search (i.e. both Tripal and node content) in the same index without the `Search API Multi-index Search <https://www.drupal.org/project/search_api_multi>`_ extension module. Notice that we didn't check any of the **Bundles**; by not selecting any bundles, all Tripal content will be indexed. Finally, select the server created in the previous step from the **Server** drop-down and click the **Create Index** button.
+
+
+.. figure:: ./search_api.5.png
+
+
+Next we need to configure which fields should be indexed. You will be presented with a very long list of fields (the length depends on how many Tripal content types you have). First, scroll to the bottom of the list and expand the **Add Related Fields** fieldset. If you want any related fields, add them first, before checking the boxes of the fields above; otherwise you may lose selections you have already made.  Next, check the box beside each field you would like to be searched.  Unfortunately, the interface does not indicate which fields are used by which content type.  Save your selection.
+
+The first few fields will be present for all Tripal content types (i.e. Content Id, Type, Bundle, etc., as shown in the blue box of the screenshot below). Notice that some checked fields have a **boost** drop-down. The boost setting influences the "relevance" a search result will have: increasing the boost for the title indicates that if the user's keywords appear in the title, it is more likely this content is the one they are looking for. Because we want titles and content types (i.e. bundles) to be highly searchable, set the boost to 5 for these (see screenshot below).
+
+.. figure:: ./search_api.6.png
+
+
+After the first set of general fields, we see the list of content-type-specific fields.  Select fields that are appropriate for your own site and content. For full searchability, select most (if not all) of these fields. Keep in mind that the number of fields selected affects the size of your index. If you know there is no useful information in a given field, do not select it. You can return and edit the selected fields later (although doing so requires re-indexing your site). The most important consideration at this point is what boost to apply to the various fields. As a rule of thumb, give a modest boost (but not as high as the title; e.g. a boost of 3) to name fields and leave the default boost otherwise. You may want to apply a negative boost to fields users are extremely unlikely to search (but that you may want to use in facets) or that are likely to produce false positives (e.g. analysis program version). Once you are done, click on "Save Changes".
+
+.. figure:: ./search_api.7.png
+
+
+After saving our fields we are redirected to the **Filters** tab.  Finally (the last step for creating the index!), pick the extra features you would like supported.  For now we will ignore the **Data Alterations** section.  However, we will set some items in the **Processors** section. Keep in mind that the order in which you select processors is important (e.g. if the HTML filter runs after highlighting, it will remove your highlighting). We would like to provide case-insensitive searching with the searched words highlighted.  To do this, select **Ignore case**, **HTML Filter** and **Highlighting**, in that order.  You may want to add **Tokenizer** if you are indexing any long text fields, because the default tokenizer can sometimes fail with long words.  Click "Save Configuration".
+
+.. figure:: ./search_api.8.png
+
+
+Your index is now scheduled for building! Depending upon the amount of content you have, this could take a while, as it will only index 50 pages of Tripal content per `Drupal Cron <http://tripal.info/tutorials/v3.x/installation/job-automation>`_ run. If you click the **View** tab you can see the progress of the indexing process. You can return to this screen in the future from the main Search API configuration page by clicking on the name of the index.
+
+.. figure:: ./search_api.9.png
+
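+If you would rather not wait for the scheduled cron runs, you can trigger cron manually from the command line. This is a minimal sketch that assumes Drush is installed and that your Drupal root is ``/var/www/html``; each run indexes the next batch of content.
+
+.. code-block:: shell
+
+  cd /var/www/html
+  # Each cron run indexes another batch of Tripal content
+  drush core-cron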
+
+Creating a Search Interface for your users
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At this point you should have an index for your Tripal content. However, you still have not created any functionality for end users: the data might be indexed, but they can't search it yet. To create the search page we are going to use Views. Start by going to the Views administration UI (``Structure > Views``) and click on **Add new view**.
+
+.. figure:: ./search_api.10.png
+
+
+Name it something descriptive (e.g. Search Biological Data), as this name will show up in the administrative listing. For the view type (the drop-down beside **Show**) select the name of the index you created in the last step (e.g. Tripal Content Search). Name the page something helpful to the user (avoid the word Tripal and describe the data instead; e.g. "Search Biological Data") and then change the path (e.g. ``search/biological-data``). Click **Continue & edit**.
+
+.. figure:: ./search_api.11.png
+
+Next, the Edit Views UI will appear, which can be intimidating even if you've been introduced to it before. With that in mind, the following screenshot attempts to orient you to the parts of the UI we will use for a search form and its results.  This tutorial will address **Fields**, **Filters** and **Sort Criteria**; you only need to focus on the left side of the UI when looking for the sections discussed below.
+
+.. figure:: ./search_api.12.png
+
+.. note::
+
+  Make sure to save your view periodically by clicking on the "Save" button at the top of the page.
+
+
+Configuring What is displayed for each Search Result
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+First, we are going to change what is displayed for each result. By default just the unique identifier is displayed, which of course is not useful to the user. We want to hide that field: click on its name, **Indexed Tripal Content: Tripal content id**, to open the configuration pop-up, then check **Exclude from display**. Since we will be using this field to create our link, we also want to change the **Thousands marker** to **-None-**.  Click **Apply (all displays)** to save these changes.
+
+.. figure:: ./search_api.13.png
+
+
+Next, click on the **Add** button beside the Fields title to open the **Add Fields** pop-up shown in the next screenshot. For this tutorial our search results are going to include the title linked to the content and the highlighted "context" of the search. To add the title, scroll through the fields and click the checkbox beside **Indexed Tripal Content: Title**.  Be sure the item description reads **Tripal content "title" property**, as there may be other title fields (e.g. for publications).  Click **Apply (all displays)** to add this field to the view.
+
+.. figure:: ./search_api.14.add_fields.png
+
+
+Next, the configuration form for the field is shown. We do not want a label, so uncheck the box titled **Create a label**. We want our title to appear as a link to the content, so expand the **Rewrite Results** field set, check **Output this field as a link** and set the link path to ``bio_data/[id]``. This uses tokens to fill in the unique identifier and create the path to the entity for each search result.  Click the **Apply (all displays)** button to save these settings.
+
+.. figure:: ./search_api.15.configure_title.png
+
+
+Next, we want to add the highlighted search context. To do this, click on the **Add** button again, but this time set the **Fields** drop-down to **Search** and check **Search: Excerpt**.  Click the **Apply (all displays)** button to continue to the configuration pane, then remove the label and apply the settings.
+
+.. figure:: ./search_api.16.png
+
+
+On the resulting page, be sure to uncheck the **Create a label** box just as you did for the title.
+
+Now that we have a title and excerpt in our Fields section, if you click on the **Update Preview** button you will see a list of titles for your content with empty space underneath each title; no keyword has been entered yet, so an excerpt could not be generated.
+
+Adding the Keywords Search Box
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Click on the **Add** button beside **Filter Criteria** and in the resulting pop-up, select **Search** for the filter and then check **Search: Fulltext Search**. Click the **Apply (all displays)** button to add the filter.
+
+.. figure:: ./search_api.17.png
+
+
+In order to let users see a field for searching, we need to expose this filter. We do that by clicking the checkbox beside **Expose this filter to visitors...** on the filter configuration form. We also want to change the **Label** to **Keywords**. Other than those two changes, the defaults will work, so click **Apply (all displays)**.
+
+.. figure:: ./search_api.18.png
+
+
+Save your view and navigate to the new search page you created with this view.  The page will be accessible at the URL ``http://[your-site-url]/search/biological-data``. You will see a text box titled **Keywords**; if you enter a valid keyword and click **Apply**, filtered results with context highlighting will appear!
+
+.. figure:: ./search_api.19.view.png
+
+Sort by "Relevance"
+~~~~~~~~~~~~~~~~~~~~
+
+Next, we want to sort our results.  To do this, return to the view configuration page.  Click on the **Add** button beside **Sort Criteria** and in the pop-up select **Search** in the **Filter** drop-down.  Next, check the **Search: Relevance** field. Apply it and configure it to **Sort descending** so that higher-scoring results are shown first.  Apply the configuration settings.
+
+.. figure:: ./search_api.20.png
+
+
+Only Show results when user clicks Search
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Finally, we do not want search results to appear automatically; we want the user to click the **Apply** button on the search form first. To do this, use the right side of the Views UI to expand the **Advanced** field set and, under **Exposed Form**, click on **Exposed form Style: Basic**. Change the setting to **Input Required** and click **Apply**. On the following configuration page, change the **Submit button** text to "Search" and uncheck **Expose Sort order**.
+
+.. figure:: ./search_api.21.click_search.png
+
+
+Now save your view. You're done!
+
+
+.. figure:: ./search_api.22.done.png

BIN
docs/user_guide/setting_page_urls.1.png


BIN
docs/user_guide/setting_page_urls.2.png


+ 30 - 3
docs/user_guide/setting_page_urls.rst

@@ -1,4 +1,31 @@
-Setting Page URLs
-==================
+Setting Page Titles and URLs
+=============================
 
-Lorem ipsum
+Tripal allows page titles and URLs to be set within the Tripal content type editing interface. This provides the ability to construct consistent URL patterns and titles across your site.
+
+Setting Page Titles
+=====================
+
+Page titles can be set when editing a Tripal content type. Navigate to ``Structure → Tripal Content Types → <specific content type>`` and scroll to the bottom of the page to the "Page Title options" tab.
+
+.. image:: ./setting_page_urls.1.png
+
+
+Page Title Settings
+~~~~~~~~~~~~~~~~~~~~
+
+The page title pattern can be generated using a combination of tokens. The available tokens can be found under the 'Available Tokens' link. Keep in mind that it might be confusing to users if more than one page has the same title.
+
+.. note::
+
+	We recommend you choose a combination of tokens that will uniquely identify your content.
+
+If you already have content on your site and need to update all page titles, you can choose 'Bulk update all titles'. This will update all existing titles for the content type in question. If your title is used to build your alias, you will also need to 'Bulk update all aliases'.
+
+Setting URLs
+=============
+
+URLs, also known as aliases, can be found just below the Page Title options tab. The URL pattern can be generated using a combination of tokens, which can be found under the 'Available Tokens' link. If you already have content on your site and need to update all URLs, you can choose 'Bulk update all aliases'. This will update all existing URLs for the content type in question. It will also create redirects from the old URL to the new URL to ensure 404s and broken links are not created.
+
+
+.. image:: ./setting_page_urls.2.png

BIN
docs/user_guide/web_services.1.png


BIN
docs/user_guide/web_services.2.png


BIN
docs/user_guide/web_services.3.png


BIN
docs/user_guide/web_services.4.png


BIN
docs/user_guide/web_services.5.png


BIN
docs/user_guide/web_services.6.png


+ 264 - 1
docs/user_guide/web_services.rst

@@ -1,4 +1,267 @@
 Web Services
 ===============
 
-Lorem ipsum
+Overview
+~~~~~~~~~
+
+New in Tripal v3 are `RESTful web services <https://en.wikipedia.org/wiki/Representational_state_transfer>`_.  These web services are designed to support the following:
+
+1. Allow end-users to access data programmatically using any language of their choice.
+2. Allow Tripal sites to share data among themselves.
+
+Tripal v3 now comes with a ``tripal_ws`` module that provides web services.  Once enabled, any Tripal v3 site instantly provides a RESTful web service that can access all publicly available data.  Additionally, web services for Tripal are meant to:
+
+1. Be fully discoverable.
+2. Be searchable.
+3. Provide access to the same data that appears on the visible content pages.
+4. Be adjustable via Drupal's content management interface (i.e. no programming required for existing content).
+5. Provide a programmer's API for easy addition of new content.
+6. Share data the way scientists expect to see it.
+7. Use controlled vocabularies to ensure maximal interoperability.
+
+Within the current v3.0 release of Tripal, web services are limited to read-only access of publicly available content.  The following is planned for future releases:
+
+1. Authenticated data management: creation, deletion and editing of resources via the API.
+2. Full implementation of the `Hydra Core Vocabulary <https://www.hydra-cg.com/spec/latest/core/>`_ specification to provide full discoverability.
+
+Enable Web Services
+~~~~~~~~~~~~~~~~~~~~
+
+To enable web services, simply install the ``tripal_ws`` module, either using the module installation interface within Drupal's administrative pages, or on the command-line using Drush:
+
+.. code-block:: shell
+
+  cd /var/www/html
+  drush pm-enable tripal_ws
+
+Exploring Web Services
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Once enabled, web services are immediately available on your site at the URL ``http://[your.site.name]/web-services/`` (replace [your.site.name] with the address and path of your Tripal site). Web services are meant to be accessed programmatically, but they can be easily explored using a web browser such as `Firefox <https://www.mozilla.org/en-US/firefox/>`_ with the `JSONView <https://jsonview.com/>`_ extension enabled.  For example, the following screenshot shows an example Tripal site with the data loaded by following the Setup of a Genomics Site instructions of this tutorial.
+
+
+
+This initial resource "home page" of the web services returns results in `JSON format <http://www.json.org/>`_.  When using the JSONView extension within Firefox you can explore web services by clicking the links that are present in the browser.
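+
+You can also retrieve the same response from the command line. The following is a minimal sketch using ``curl``; replace ``your.site.name`` with your own site address (the ``-s`` flag simply silences the progress output).
+
+.. code-block:: shell
+
+  # Request the web services entry point; the response is JSON-LD
+  curl -s http://your.site.name/web-services/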
+
+Structure of a Web Service Response
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+The initial response is in the `JSON-LD format <https://json-ld.org/>`_, where the LD means Linked Data.  For example:
+
+.. code-block:: JSON
+
+  {
+    "@context": {
+      "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
+      "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
+      "hydra": "http://www.w3.org/ns/hydra/core#",
+      "dc": "http://purl.org/dc/dcmitype/",
+      "schema": "https://schema.org/",
+      "local": "http://localhost/cv/lookup/local/",
+      "vocab": "http://localhost/web-services/vocab/v0.1#",
+      "EntryPoint": "vocab:EntryPoint",
+      "content": {
+        "@id": "vocab:EntryPoint/content",
+        "@type": "@id"
+      }
+    },
+    "@id": "http://localhost/web-services",
+    "@type": "EntryPoint",
+    "content": "http://localhost/web-services/content/v0.1"
+  }
+
+
+A notable component of JSON-LD is the `@context` sub-array.  It is within this `@context` section that the "meaning" of the data becomes apparent and where the Linked Data is implemented.  Tripal uses linked data to take advantage of controlled vocabularies and ontologies to unambiguously identify the meaning of each element of the response.  By using controlled vocabularies to qualify all data, it becomes possible to exchange data between sites or within a client program while limiting confusion as to the meaning of the data.  Therefore, the words used as the keys of key/value pairs will always be defined in the `@context` section.  For example, all of the vocabularies whose terms are used to qualify data on the site are identified in the `@context` section with URLs.  Some of these vocabularies include rdf, rdfs, hydra, schema, etc.
+
+
+There are two other special keys used in the JSON-LD response.  These are the `@id` and `@type` keys.  The `@id` indicates the unique URL for this resource and becomes a unique name for the resource.  In the example screenshot above, the `@id` of the initial page of web services is   http://localhost/web-services.  This URL will always refer to the initial page for Tripal web services on the given site.  The `@type` identifier specifies what type of resource this URL provides.  In this example, the type is **EntryPoint**.  If a client program is unsure as to what an **EntryPoint** is, then that information is provided in the @context section.  The following line indicates that the term **EntryPoint** expands to the vocabulary term:  vocab:EntryPoint
+
+.. code-block:: JSON
+
+  "EntryPoint": "vocab:EntryPoint",
+
+Here we learn that the term **EntryPoint** belongs to the vocab resource.  If we look at the vocab entry in the `@context` array, we can find a link to that resource.  A human can follow that link to examine the vocab resource and find the term that it provides.  A client program can use that information to uniquely identify the term.  By default, Tripal provides the **vocab** vocabulary, which uses an implementation of the `Hydra Vocabulary <https://www.hydra-cg.com/spec/latest/core/>`_.
+
+Finally, all other entries in the JSON-LD response are key/value pairs that provide **properties** about the resource.  In the example above, only the **content** property is available.  A property can be a scalar value (i.e. number, character string) or a link to another resource.
+
+Primary Services
+~~~~~~~~~~~~~~~~~
+
+
+By default, the only resource that Tripal provides at the initial response level is the content resource.  Any resource at this level is hereafter referred to as a primary service.  Tripal is designed to allow new web services to be added to it; these will be more formally described in the Tripal v3 Developer's Handbook.  In short, a primary service provides a variety of data and services for related content and function.  Each primary resource has a version number to help ensure backwards compatibility as new web services are developed and updated.  For example, the default content service currently has a version of v0.1:
+
+.. code-block:: JSON
+
+  "content": "http://localhost/web-services/content/v0.1"
+
+
+The Content Service
+~~~~~~~~~~~~~~~~~~~~
+
+The content service provided by Tripal shares all publicly available content.  The content that appears on a page is the same content that appears in web services.  A major change in the design of Tripal from v2 to v3 is that all data is organized via controlled vocabularies.  The following diagram demonstrates how this is implemented.  For example, the mRNA term comes from the `Sequence Ontology <http://www.sequenceontology.org/>`_; its unique term accession is SO:0000234.  Every content type in Tripal consists solely of a type (e.g. mRNA or SO:0000234), its associated label (e.g. mRNA) and a numeric ID unique to each Tripal site.  The ID is what uniquely identifies every piece of content in Tripal.  Each unique piece of content with these three attributes is referred to as an **Entity**.  All other data associated with a given entity are called **Fields**.  Example fields for an mRNA content type may be the name of the mRNA, a unique name, the coding sequence, the coordinates on the reference genome, etc.  In the diagram below, these fields are the rectangular boxes that jut out of the mRNA entity.  These fields can be "attached" to an entity by Tripal, and the data can come from any storage backend.  The data that appears on a page and the data in the content service are taken from the same entity, and therefore end-users and clients have access to the same data.
+
+.. image:: ./web_services.2.png
+
+
+Content Type Listing
+~~~~~~~~~~~~~~~~~~~~~
+
+When the content service is accessed, the response is always a listing of all available content types on the site.  Site administrators can create new content types by following the Create Content Types section of this tutorial.  By default, Tripal is installed with several commonly used content types, but new ones can be created as needed for the site.  Whenever a new content type is created it is immediately available via the content service. These content types can be found at the path `/web-services/content/v0.1`.  Below is an example screenshot of the resulting JSON from an example site:
+
+.. image:: ./web_services.3.png
+
+Note that the `@type` for this listing is a Collection and the label is **Content Types**.  Each content type has a unique `@id`, a `@type` indicating the term that describes it, and a brief description.  The `@id` serves as a URL to obtain further details about that content type.  Also, notice in the above screenshot that the `@context` section is minimized but, as usual, each of the terms used in the key/value pairs is fully qualified in that section.  This JSON-LD response also indicates the total number of content types available.
+
+Content Type Members (Entities)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+The members or entities that belong to a content type are found at the path `/web-services/content/v0.1/{name}`, where {name} is the name of the content type.  The {name} field must be identical to the label field from the content type listing shown previously.  For example, the mRNA content type path would be `/web-services/content/v0.1/mRNA`.  This resource provides a listing of all members of that content type.  The following shows the response for an mRNA listing:
+
+.. image:: ./web_services.4.png
+
+
+Note that the `@type` is also a Collection but the label is 'mRNA collection'.  To keep the response small, the results of content member listings are usually paged such that only a subset of members is shown.  In this example, there are 8032 mRNA entities available, but only 25 are shown.  Notice the view term: it contains several sub-elements named first, last and next. These provide navigation links that a client application can use to iterate through all entities.  The structure of these links is as follows:
+
+``/web-services/content/v0.1/{name}?page={page}&limit={limit}``
+
+A client application can therefore navigate through the list of entities by substituting the {name} of the content type, the desired {page} to show (the first page is always 1) and the number of records to show as specified by {limit}.  If a client wanted to retrieve the IDs of all 8032 mRNA of this example, then the following path could be used:
+
+``/web-services/content/v0.1/mRNA?page=1&limit=8032``
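+
+As a minimal command-line sketch (assuming a site running at ``http://localhost`` and ``curl`` installed), a client could walk through the members one page at a time. Note that the URL is quoted so the shell does not interpret the ampersand.
+
+.. code-block:: shell
+
+  # Fetch the first 25 mRNA members, then the next 25
+  curl -s "http://localhost/web-services/content/v0.1/mRNA?page=1&limit=25"
+  curl -s "http://localhost/web-services/content/v0.1/mRNA?page=2&limit=25"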
+
+Content (Entity) Resource
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each entity is accessible via the path: `/web-services/content/v0.1/{name}/{id}`.   Here {name} continues to refer to the name of the content type (e.g. mRNA) and {id} refers to the unique numeric ID for the entity.  In this example an mRNA entity would result in a JSON-LD response like the following:
+
+.. image:: ./web_services.5.png
+
+In the JSON shown above, note that all of the key/value pairs used are referenced in the `@context` section.  Also, notice that some key/value pairs contain data while others contain URLs.  Tripal is optimized not to load every attribute.  For example, sequence data for some content types may be large.  By providing a URL for the data, the response stays small while still allowing clients to access that information via the provided URL.  For example, if the URL for the **sequence_coordinates** attribute were followed, the following response could be seen:
+
+.. image:: ./web_services.6.png
+
+
+Here the client can obtain the necessary information about the coordinates on the genome for this particular mRNA entity.
+
+Searching for Content
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently, Tripal provides the ability to search for content via web services by crafting URLs on content type members pages.   By default, the path for content type listings is:
+
+``/web-services/content/v0.1/{name}``
+
+where {name} is the label assigned to the content type (see the Content Type Members section above).  Using this path, clients filter content to a specific content type, but further refinement is possible.  As a reminder, each member (or entity) in the content type members collection appears similar to the following:
+
+.. code-block:: JSON
+
+  {
+    "@id": "http://localhost/web-services/content/v0.1/mRNA/691468",
+    "@type": "mRNA",
+    "label": "LOC_Os01g01010.1",
+    "ItemPage": "http://localhost/bio_data/691468"
+  },
+
+
+When retrieving the data for a specific entity something similar to the following (for our mRNA example) may be seen:
+
+.. code-block:: JSON
+
+  "label": "LOC_Os01g01010.1",
+  "ItemPage": "http://localhost/bio_data/691468",
+  "type": "mRNA",
+  "organism": {
+      "label": "<i>Oryza sativa</i>",
+      "genus": "Oryza",
+      "species": "sativa"
+  },
+  "name": "LOC_Os01g01010.1",
+  "sequence": "http://localhost/web-services/content/v0.1/mRNA/691468/Sequence",
+  "sequence_length": "3017",
+  "sequence_checksum": "019338bdd66c9fcf951439e9368046f9",
+  "time_accessioned": "2017-05-08 23:31:39.792073",
+  "time_last_modified": "2017-05-08 23:31:39.792073",
+  "protein_sequence": "http://localhost/web-services/content/v0.1/mRNA/691468/Protein+sequence",
+  "sequence_coordinates": "http://localhost/web-services/content/v0.1/mRNA/691468/Sequence+coordinates",
+  "relationship": "http://localhost/web-services/content/v0.1/mRNA/691468/relationship",
+  "identifier": "LOC_Os01g01010.1"
+
+As another reminder, when any of these attributes has a URL, further information about that attribute is obtained by following the URL. In the example below, the relationship term yields results similar to the following:
+
+.. code-block:: JSON
+
+  {
+      "@id": "http://localhost/web-services/content/v0.1/mRNA/691468/relationship/0",
+      "@type": "relationship",
+      "clause_subject": {
+          "type": "mRNA",
+          "name": "LOC_Os01g01010.1",
+          "identifier": "LOC_Os01g01010.1"
+      },
+      "relationship_type": "part_of",
+      "clause_predicate": {
+          "type": "gene",
+          "name": "LOC_Os01g01010",
+          "identifier": "LOC_Os01g01010"
+      },
+      "clause": "The mRNA, LOC_Os01g01010.1, is a part of the gene, LOC_Os01g01010."
+  },
+
+Here we see information that describes the relationship of the mRNA with its parent gene.  Almost all of the key/value pairs shown in the responses above can be used to filter results, but attention must be paid to the level at which each attribute appears.  For example, in the initial entity response above, the organism attribute has several sub-terms that include genus, species and label.  The organism term appears as a first-level term, and genus, species and label appear as second-level terms.  For relationships, the relationship is the first-level term, but that term has a URL!  Tripal does not support filtering by URLs.  However, we can use the terms from the results of that URL in our filter.  Thus, clause_subject, relationship_type and clause_predicate become second-level terms, and within the clause_subject, the type, name and identifier become third-level terms.
+
+You can easily search for specific entities by knowing these first-, second-, third-, and lower-level terms.  The path for searching is as follows:
+
+``/web-services/content/v0.1/{name}?{first-level}[,{second-level},...,{n-th level}]={value}[;{operator}]``
+
+Here, {name} again refers to the content type name (e.g. mRNA).  The {first-level} placeholder refers to any term that appears at the first level.  Refinement can occur if a term has multiple sublevels by separating those terms with a comma.  The {value} placeholder should contain the search word.  The {operator} placeholder lets you specify the operator to use (e.g. greater than, less than, starts with, etc.).  The {operator} is optional; if it is not included, all searches default to exact matching.  As an example, the organism term has several second-level terms.  If we wanted to filter all mRNA to include only those from the genus Oryza, we could construct the following URL:
+
+``/web-services/content/v0.1/mRNA?organism,genus=Oryza``
+
+Multiple search criteria can be provided at one time by repeating the search construct as many times as needed and separating each with an ampersand character (&).  For example, to filter the mRNA to only those from Oryza sativa, the following URL would be constructed:
+
+``/web-services/content/v0.1/mRNA?organism,genus=Oryza&organism,species=sativa``
+
+The examples provided thus far expect that you are searching for exact values.  However, you can specify different search operators such as the following:
+
+* Numeric Values
+    * equals: eq
+    * greater than: gt
+    * greater than or equal to:  gte
+    * less than: lt
+    * less than or equal to: lte
+    * not equal to: ne
+* Text values
+    * equals: eq
+    * contains: contains
+    * starts with:  starts
+
+Following the path format specified above, we set the operator. For example, we can use the label as our second-level search term and require that it start with Oryza to find all of the mRNA that belong to the genus Oryza:
+
+``/web-services/content/v0.1/mRNA?organism,label=Oryza;starts``
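+
+When issuing such a request from a shell, quote the URL so that the semicolon (and any ampersands) are not interpreted by the shell. A minimal sketch, again assuming a site at ``http://localhost``:
+
+.. code-block:: shell
+
+  # Find mRNA whose organism label starts with "Oryza"
+  curl -s "http://localhost/web-services/content/v0.1/mRNA?organism,label=Oryza;starts"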
+
+Finally, you can control the ordering of the results by constructing a path following this format:
+
+``/web-services/content/{name}?{search filter}&order={term}|{dir}[;{term}|{dir}...]``
+
+Here {search filter} represents the filtering criteria as defined above (abbreviated to keep the path format in this document from getting extremely long), {term} represents the full term "path" (if a term has multiple levels, those levels are separated by commas, e.g. organism,genus), and {dir} represents the direction of the order and can be **ASC** or **DESC** for ascending and descending sorting, respectively.  You may order results by as many terms as needed by separating them with a semicolon.  The order in which the terms are provided dictates which term gets sorted first.  For example, suppose we want to search for all mRNA within the genus Oryza but order them by the species name.  The following URL construct would suffice:
+
+``/web-services/content/mRNA?organism,genus=Oryza&order=organism,species|ASC``
+
+To demonstrate multi-term sorting, we could also sort by genus.  Because we already filtered by genus this sort has no effect, but it demonstrates the use of multiple sort criteria:
+
+``/web-services/content/mRNA?organism,genus=Oryza&order=organism,genus|ASC;organism,species|ASC``
+
+
+Searching Limitations
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ability to search by any term for any content type creates a powerful mechanism to find almost any entity.  However, there are two primary limitations:
+
+1. Not all fields attached to an entity are conducive to searching. For example, images or references to other websites can be attached to entities; in these cases the search functionality for those fields has not been implemented.  Unfortunately, Tripal v3 does not yet provide a listing of which fields are not searchable; that support will be coming in the future.
+2. The format for constructing a search URL is equivalent to an AND operation.  For example, you can filter by genus and species but not by genus or species.  Each additional search criterion adds another AND filter.
+
+Hiding or Adding Content
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+It is relatively easy to hide or add content to web services.  The 'Configuring Page Display' tutorial walks the user through the steps for adding fields to a page for display, removing them, and organizing the placement of those fields on the entity's page.  That same interface is used to indicate which fields are present in web services.  When a field is hidden from a page it is likewise hidden from web services; when a new field is added to a page it is added to web services.  Developers who create new fields for custom modules and share them with others should ensure that their field implementations follow the design specifications.  If followed correctly, all fields will behave in this way.