
Finished remaining User's Guide pages

Stephen Ficklin committed 6 years ago
commit 5ccbe093c7
29 changed files with 572 additions and 137 deletions
  1. 28 40
      docs/user_guide/bulk_loader.rst
  2. 8 7
      docs/user_guide/configuring_page_display.rst
  3. BIN
      docs/user_guide/example_genomics/analyses.1.png
  4. BIN
      docs/user_guide/example_genomics/analyses.2.png
  5. 33 0
      docs/user_guide/example_genomics/analyses.rst
  6. BIN
      docs/user_guide/example_genomics/controlled_vocabs.1.png
  7. 17 0
      docs/user_guide/example_genomics/controlled_vocabs.rst
  8. BIN
      docs/user_guide/example_genomics/cross_refs.1.png
  9. 22 0
      docs/user_guide/example_genomics/cross_refs.rst
  10. BIN
      docs/user_guide/example_genomics/genomes_genes.1.png
  11. BIN
      docs/user_guide/example_genomics/genomes_genes.2.png
  12. BIN
      docs/user_guide/example_genomics/genomes_genes.3.png
  13. BIN
      docs/user_guide/example_genomics/genomes_genes.4.png
  14. BIN
      docs/user_guide/example_genomics/genomes_genes.5.png
  15. 219 0
      docs/user_guide/example_genomics/genomes_genes.rst
  16. BIN
      docs/user_guide/example_genomics/organisms.citrus_sinensis.jpg
  17. BIN
      docs/user_guide/example_genomics/pub_import.1.png
  18. BIN
      docs/user_guide/example_genomics/pub_import.2.png
  19. BIN
      docs/user_guide/example_genomics/pub_import.3.png
  20. BIN
      docs/user_guide/example_genomics/pub_import.4.png
  21. BIN
      docs/user_guide/example_genomics/pub_import.5.png
  22. 127 0
      docs/user_guide/example_genomics/pub_import.rst
  23. 61 20
      docs/user_guide/job_management.rst
  24. 1 2
      docs/user_guide/mviews.rst
  25. 11 13
      docs/user_guide/searching.rst
  26. 1 1
      docs/user_guide/searching/elasticsearch_module.rst
  27. 11 11
      docs/user_guide/searching/search_api.rst
  28. 6 10
      docs/user_guide/setting_page_urls.rst
  29. 27 33
      docs/user_guide/web_services.rst

+ 28 - 40
docs/user_guide/bulk_loader.rst

@@ -1,6 +1,6 @@
 
 Bulk Loader
-===============
+===========
 
 The bulk loader is a tool that Tripal provides for loading data contained in tab-delimited files. Tripal supports loading of files in standard formats (e.g. ``FASTA``, ``GFF``, ``OBO``), but Chado can support a variety of different biological data types and there are often no community standard file formats for loading these data. For example, there is no file format for importing genotype and phenotype data. Those data can be stored in the feature, stock and natural diversity tables of Chado. The Bulk Loader was introduced in Tripal v1.1 and provides a web interface for building custom data loaders. In short, the site developer creates the bulk loader "template". This template can then be used and re-used for any tab-delimited file that follows the format described by the template. Additionally, bulk loading templates can be exported allowing Tripal sites to share loaders with one another.  Loading templates that have been shared are available on the Tripal website here: http://tripal.info/extensions/bulk-loader-templates.
 
@@ -12,7 +12,7 @@ The following commands can be executed to install the Tripal Bulk Loader using D
   drush pm-enable tripal_bulk_loader
 
 Plan How to Store Data
-~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------
 
 To demonstrate use of the Bulk Loader, a brief example that imports a list of organisms and associates them with their NCBI taxonomy IDs will be performed. The input tab-delimited file contains the list of all *Fragaria* (strawberry) species in NCBI at the time of the writing of this document.
 
@@ -37,8 +37,10 @@ This file has three columns: NCBI taxonomy ID, genus and species:
 
 To use the bulk loader you must be familiar with the Chado database schema and have an idea for where data should be stored. It is best practice to consult the GMOD website or consult the Chado community (via the `gmod-schema mailing list <https://lists.sourceforge.net/lists/listinfo/gmod-schema>`_) when deciding how to store data. For this example, we want to add the species to Chado, and we want to associate the NCBI taxonomy ID with these organisms. The first step, therefore, is to decide where in Chado these data should go. In Chado, organisms are stored in the **organism** table. This table has the following fields:
 
-.. csv-table:: Chado organism table
-    :header: "Name",	"Type",	"Description"
+`chado.organism Table Schema`
+
+.. csv-table::
+  :header: "Name",	"Type",	"Description"
 
   "organism_id",	"serial",	"PRIMARY KEY"
   "abbreviation",	"character varying(255)",
@@ -53,8 +55,9 @@ We can therefore store the second and third columns of the tab-delimited input f
 In order to store a database external reference (such as for the NCBI Taxonomy ID) we need to use the following tables: **db**, **dbxref**, and **organism_dbxref**. The **db** table will house the entry for the NCBI Taxonomy; the **dbxref** table will house the entry for the taxonomy ID; and the **organism_dbxref** table will link the taxonomy ID stored in the **dbxref** table with the organism housed in the **organism** table. For reference, the fields of these tables are as follows:
 
 
+`chado.db Table Schema`
 
-.. csv-table:: chado.db structure
+.. csv-table::
   :header: "Name",	"Type",	"Description"
 
  	"db_id",	"serial",	"PRIMARY KEY"
@@ -64,7 +67,9 @@ In order to store a database external reference (such as for the NCBI Taxonomy I
  	"url",	"character varying(255)"
 
 
-.. csv-table:: chado.dbxref structure
+`chado.dbxref Table Schema`
+
+.. csv-table::
   :header: "Name",	"Type",	"Description"
 
  	"dbxref_id",	"serial",	"PRIMARY KEY"
@@ -74,7 +79,9 @@ In order to store a database external reference (such as for the NCBI Taxonomy I
  	"description",	"text"
 
 
-.. csv-table:: chado.organism_dbxref structure
+`chado.organism_dbxref Table Schema`
+
+.. csv-table::
   :header: "Name",	"Type",	"Description"
 
   "organism_dbxref_id", "serial", "PRIMARY KEY"
@@ -85,13 +92,12 @@ In order to store a database external reference (such as for the NCBI Taxonomy I
 For our bulk loader template, we will therefore need to insert values into the **organism**, **db**, **dbxref** and **organism_dbxref** tables. In our input file we have the genus and species and taxonomy ID so we can import these with a bulk loader template. However, we do not have information that will go into the db table (e.g. "NCBI Taxonomy"). This is not a problem as the bulk loader can use existing data to help with import. We simply need to use the "NCBI Taxonomy" database that is currently in the Chado instance of Tripal v3.
 
 Creating a New Bulk Loader Template
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------
 
 Now that we know where all of the data in the input file will go and we have the necessary dependencies in the database (i.e. the NCBI Taxonomy database), we can create a new bulk loader template. Navigate to ``Tripal → Data Loaders → Chado Bulk Loader``, click the **Templates** tab in the top right corner, and finally click the link **Add Template**. The following page appears:
 
 .. image:: ./bulk_loader.1.png
 
-
 We need to first provide a name for our template. Try to name templates in a way that is meaningful to others. Currently only site administrators can load files using the bulk loader. However, future versions of Tripal will provide functionality to allow other privileged users the ability to use the bulk loader templates. Thus, it is important to name the templates so that others can easily identify the purpose of the template. For this example, enter the name **NCBI Taxonomy Importer (taxid, genus, species)**. The following page appears:
 
 .. image:: ./bulk_loader.2.png
@@ -129,7 +135,6 @@ Next, we need to add the **species** field to the record. Click the **Add Field*
 * Chado Field/Column: species
 * Column: 3
 
-
 We now have two fields for our organism record:
 
 .. image:: ./bulk_loader.5.png
@@ -151,7 +156,6 @@ To this point, we have built the loader such that it can load two of the three c
    * Constant Value:  NCBITaxon
    * Check "Ensure the value is in the table"
 
-
 Here we use a field type of **Constant** rather than **Data**. This is because we are providing the value to be used in the record rather than using a value from the input file. The value we are providing is "NCBI Taxonomy" which is the name of the database we added previously. The goal is to match the name "NCBI Taxonomy" with an entry in the **db** table. Click the **Save Changes** button.
 
 We now see a second record on the **Edit Template** page. However, the mode for this record is insert. We do not want to insert this value into the table; we want to select it because we need the corresponding **db_id** for the **dbxref** record. To change this, click the Edit link to the left of the **NCBI Taxonomy DB** record. Here we want to select only the option **SELECT ONCE**. We choose this option because the database entry returned by the record applies to the entire input file, so we only need to select it one time. Otherwise, the select statement would execute for each row in the input file causing excess queries. Finally, click **Save Record**. The **NCBI Taxonomy DB** record now has a mode of **select once**.  Because the field is a constant, the bulk loader does not need to execute that record for every row it imports from our input file; it selects the record once and the result remains available for use throughout the entire import process.
@@ -174,11 +178,8 @@ Now that we have a record that selects the **db_id** we can now create the **dbx
 
 Click the Save Changes button. The Edit Template page appears.
 
-
 .. image:: ./bulk_loader.6.png
 
-
-
 Again, we need to edit the record to make the loader more fault tolerant. Click the Edit link to the left of the Taxonomy ID record. Select the following:
 
 * Insert
@@ -196,7 +197,6 @@ To complete this record, we need to add the accession field. Click the Add field
 
 At this state, we should have three records: Organism, NCBI Taxonomy DB, and Taxonomy ID. We can now add the final record that will insert a record into the **organism_dbxref** table. Create this new record with the following details:
 
-
 * For the record:
    * Record: New Record
    * Unique Record Name: Taxonomy/Organism Linker
@@ -227,18 +227,16 @@ Create the second field:
 
 We are now done! We have created a bulk loader template that reads in a file with three columns containing an NCBI taxonomy ID, a genus and species. The loader places the genus and species in the **organism** table, adds the NCBI Taxonomy ID to the **dbxref** table, links it to the NCBI Taxonomy entry in the db table, and then adds an entry to the **organism_dbxref** table that links the organism to the NCBI taxonomy ID. The following screenshots show how the template should appear:
 
-
 .. image:: ./bulk_loader.7.png
 
-
 To save the template, click the **Save Template** link at the bottom of the page.
 
 Creating a Bulk Loader Job (importing a file)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------
 
 Now that we have created a bulk loader template we can use it to import a file. We will import the **Fragaria**.txt file downloaded previously. To import a file using a bulk loader template, click the **Add Content** link in the administrative menu and then click **Bulk Loading Job**. A bulk loading job is required each time we want to load a file. Below is a screenshot of the page used for creating a bulk loading job.
 
-
+.. image:: ./bulk_loader.8.png
 
 Provide the following values:
 
@@ -250,15 +248,11 @@ Provide the following values:
 
 Click **Save**. The page then appears as follows:
 
-
-.. image:: ./bulk_loader.8.png
-
+.. image:: ./bulk_loader.9.png
 
 You can see details about constants that are used by the template and where the fields from the input file will be stored by clicking the **Data Fields** tab in the table of contents on the left sidebar.
 
-
-.. image:: ./bulk_loader.9.png
-
+.. image:: ./bulk_loader.10.png
 
 Now that we have created a job, we can submit it for execution by clicking the **Submit Job** button. This adds a job to the Tripal Jobs system, and we can launch the job as we have previously in this tutorial:
 
@@ -296,32 +290,26 @@ After execution of the job you should see similar output to the terminal window:
 
 Our *Fragaria* species should now be loaded, and we can return to the Tripal site to see them. Click on the **Organisms** link in the **Search Data** menu.  In the search form that appears, type "Fragaria" in the **Genus** text box and click the **Filter** button. We should see the list of newly added *Fragaria* species.
 
-.. image:: ./bulk_loader.10.png
-
-
-Before the organisms will have Tripal pages, the Chado records need to be **Published**.  You can publish them by navigating to ``admin -> Tripal Content -> Publish Tripal Content``.  Select the **organism** table from the dropdown and run the job.
-
+.. image:: ./bulk_loader.11.png
 
+Before the organisms have Tripal pages, the Chado records need to be **published**.  You can publish them by navigating to **Tripal Content → Publish Tripal Content**.  Select the **organism** table from the dropdown and run the job.
 
 .. note::
 
-	In Tripal 2, records were synced by naviating to ``Tripal → Chado Modules → Organisms``.
-
-
+	In Tripal 2, records were synced by navigating to **Tripal → Chado Modules → Organisms**.
 
 Once complete, return to the search form, find a *Fragaria* species that has been published and view its page. You should see a Cross References link in the left table of contents. If you click that link you should see the NCBI Taxonomy ID with a link to the page:
 
-.. image:: ./bulk_loader.11.png
+.. image:: ./bulk_loader.12.png
 
 
 Sharing Your Templates with Others
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------
 
-Now that our template for loading organisms with NCBI Taxonomy IDs is completed we can share our template loader with anyone else that has a Tripal-based site.  To do this we simply export the template in text format, place it in a text file or directly in an email and send to a collaborator for import into their site.  To do this, navigate to ``Tripal → Chado Data Loaders → Buik Loader`` and click the **Tempalate** tab at the top.  Here we find a table of all the tempaltes we have created.  We should see our template named **NCBI Taxonomy Importer** (taxid, genus, species).  In the far right colum is a link to export that template.  Licking that link will redirect you to a page where the template is provided in a serialized PHP array.
-
-.. image:: ./bulk_loader.12.png
+Now that our template for loading organisms with NCBI Taxonomy IDs is complete, we can share our template loader with anyone else who has a Tripal-based site.  To do this we simply export the template in text format, place it in a text file or directly in an email, and send it to a collaborator for import into their site.  To do this, navigate to **Tripal → Chado Data Loaders → Bulk Loader** and click the **Templates** tab at the top.  Here we find a table of all the templates we have created.  We should see our template named **NCBI Taxonomy Importer (taxid, genus, species)**.  In the far right column is a link to export that template.  Clicking that link will redirect you to a page where the template is provided as a serialized PHP array.
 
+.. image:: ./bulk_loader.13.png
 
-Simply cut-and-paste all of the text in the **Export** field and send it to a collaborator.
+Cut-and-paste all of the text in the **Export** field and send it to a collaborator.
 
-To import a template that may have been created by someone else, navigate to ``Tripal → Chado Data Loaders → Buik Loader`` and click the **Tempalate** tab. A link titled Import Template appears above the table of existing importers.  The page that appears when that link is clicked will allow you to import any template shared with you.
+To import a template that may have been created by someone else, navigate to **Tripal → Chado Data Loaders → Bulk Loader** and click the **Templates** tab. A link titled **Import Template** appears above the table of existing importers.  The page that appears when that link is clicked will allow you to import any template shared with you.

+ 8 - 7
docs/user_guide/configuring_page_display.rst

@@ -5,7 +5,7 @@ Configuring Page Display
 
 This is one of the many new exciting features of Tripal v3.x. In this version of Tripal we have taken integration with Drupal Fields to a whole new level, representing each piece of content (in Chado or otherwise) as a Drupal Field. What this means for site builders is unprecedented control over content display and arrangement through the administrative user interface --no more editing PHP template files to change the order, grouping or wording of content!
 
-You can configure the display of a given Tripal Content Type by navigating to ``Structure → Tripal Content Types`` and then selecting the "Manage Display" link beside the content type you would like to configure.
+You can configure the display of a given Tripal Content Type by navigating to **Structure → Tripal Content Types** and then selecting the **Manage Display** link beside the content type you would like to configure.
 
 .. image:: ./configuring_page_display.1.png
 
@@ -17,15 +17,15 @@ The Manage Display User Interface lists each Drupal Field in the order they will
 
 
 Rearranging Fields
-~~~~~~~~~~~~~~~~~~~
+------------------
 
 To rearrange the fields within a Tripal pane, simply drag them into the order you would like them. For example, the description is currently within the Summary table --it makes much more sense for it to be below the table but still within the summary. To do this, simply drag the description field to the bottom of the summary table and then move it in one level as shown in the following screenshot. Then click the **Save** button at the bottom to save the changes.
 
 .. image:: configuring_page_display.3.rearrange.png
 
 
-Removing Fields and/or Field Lables
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Removing Fields and/or Field Labels
+-----------------------------------
 
 Now say we don't want the label "Description" in front of the description content since it's pretty self-explanatory. We can do that by changing the drop-down beside "Description" which currently says "Above" to "Hidden". This removes the label for the field assuming it's not within a table.
 
@@ -35,8 +35,9 @@ There may also be data you want to collect from your user but don't want to disp
 
   Don't forget to save the configuration often as you are changing it. You will not see changes to the page unless the **Save** button at the bottom of the Manage Display UI is clicked.
 
+
 Changing Tripal Pane Names
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------
 
 The name of a Tripal Pane is displayed both in the header of the Pane itself and in the Table of Contents. To change this name, click the gear button to the far right of the Tripal Pane you would like to change. This will bring up a blue pane of settings. Changing the Field Group Label will change the display name of the pane. For example, the following screenshot shows how you would change the "Cross References" Tripal Pane to be labeled "External Resources" instead, if that is what you prefer. Then just click the Update button to see your changes take effect.
 
@@ -44,12 +45,12 @@ The name of a Tripal Pane is displayed both in the header of the Pane itself and
 
 
 Display/Hide Tripal Panes on Page Load
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------
 
 You can also easily control which Tripal Panes you would like displayed to the user on initial page load. By default the Summary Pane is the only one configured to show by default. However, if you would prefer for all panes or even a specific subset of panes to show by default, you can simply click the gear button to the far right of each Tripal Pane you want displayed by default and uncheck the "Hide panel on page load" checkbox. This gives you complete control over which panes you want your user to see first. If more than one pane is displayed by default then they will be shown in the order they are listed on the Manage Display UI.
 
 Display/Hide Empty Fields
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------
 
 By default Tripal v3 hides all empty fields from the user. However, like most behaviour in Tripal, this can be configured. If you would prefer to show all fields to the user regardless of whether there is content for that particular page, then navigate to ``Structure → Tripal Content Types`` and then click on the edit link beside the Tripal Content Type you would like to show empty fields for. Near the bottom of this form is a **Field Display** drop-down. Just change this drop-down to "show empty fields" and then click **Save Content Type**. As an example, we have changed this setting for the organism content type and, as you can see below, now you can see all fields (including empty fields like cross references and relationships) available to the organism content type.
 

BIN
docs/user_guide/example_genomics/analyses.1.png


BIN
docs/user_guide/example_genomics/analyses.2.png


+ 33 - 0
docs/user_guide/example_genomics/analyses.rst

@@ -1,2 +1,35 @@
 Analyses
 ========
+
+
+For this tutorial we will later import a set of genes, and their associated mRNA, CDS, UTRs, etc. Tripal's Chado loader for importing genomic data requires that an analysis be associated with all imported features. This has several advantages, including:
+
+- The source of features (sequences) can be traced. Even for features simply downloaded from a database, someone else can see where the features came from.
+- Provenance describing how the features were created can be provided (e.g. whole genome structural and functional annotation description).
+- The analysis associates all of the features together.
+
+To create an analysis for loading our genomic data, navigate to the **Add Tripal Content** page and click the link **Analysis**.
+
+The analysis creation page will appear:
+
+.. image:: analyses.1.png
+
+Here you can provide the necessary details to help others understand the source of your data. For this tutorial, enter the following:
+
+.. csv-table::
+  :header: "Form Element",	"Value"
+
+  "Name", "Whole Genome Assembly and Annotation of Citrus Sinensis (JGI)"
+  "Program, Pipeline Name or Method Name", "Assembly and Annotation Performed by JGI"
+  "Program Version", "Phytozome v9"
+  "Time Executed", "For this tutorial just select any date."
+  "Data Source Name", "JGI Citrus sinensis assembly/annotation v1.0 (154)"
+  "Data Source URI", "http://www.phytozome.net/citrus.php"
+  "Description (Set to Full HTML):", "<p> <strong><em>Note: </em>The following text comes from phytozome.org:</strong></p> <p> <u>Genome Size / Loci</u><br /> This version (v.1) of the assembly is 319 Mb spread over 12,574 scaffolds. Half the genome is accounted for by 236 scaffolds 251 kb or longer. The current gene set (orange1.1) integrates 3.8 million ESTs with homology and ab initio-based gene predictions (see below). 25,376 protein-coding loci have been predicted, each with a primary transcript. An additional 20,771 alternative transcripts have been predicted, generating a total of 46,147 transcripts. 16,318 primary transcripts have EST support over at least 50% of their length. Two-fifths of the primary transcripts (10,813) have EST support over 100% of their length.</p> <p> <u>Sequencing Method</u><br /> Genomic sequence was generated using a whole genome shotgun approach with 2Gb sequence coming from GS FLX Titanium; 2.4 Gb from FLX Standard; 440 Mb from Sanger paired-end libraries; 2.0 Gb from 454 paired-end libraries</p> <p> <u>Assembly Method</u><br /> The 25.5 million 454 reads and 623k Sanger sequence reads were generated by a collaborative effort by 454 Life Sciences, University of Florida and JGI. The assembly was generated by Brian Desany at 454 Life Sciences using the Newbler assembler.</p> <p> <u>Identification of Repeats</u><br /> A de novo repeat library was made by running RepeatModeler (Arian Smit, Robert Hubley) on the genome to produce a library of repeat sequences. Sequences with Pfam domains associated with non-TE functions were removed from the library of repeat sequences and the library was then used to mask 31% of the genome with RepeatMasker.</p> <p> <u>EST Alignments</u><br /> We aligned the sweet orange EST sequences using Brian Haas's PASA pipeline which aligns ESTs to the best place in the genome via gmap, then filters hits to ensure proper splice boundaries.</p>"
+
+.. note::
+  Above, the description is provided as HTML code.  However, if you enabled the **ckeditor** module (as instructed in the Tripal Prerequisites section), you should click the link **Switch to plain-text editor** found below the Description field before cut-and-pasting the code above.  Normally, you would enter the text free-hand, but for this tutorial it is fastest to cut-and-paste the HTML.
+
+After saving, you should have the following analysis page:
+
+.. image:: analyses.2.png

BIN
docs/user_guide/example_genomics/controlled_vocabs.1.png


+ 17 - 0
docs/user_guide/example_genomics/controlled_vocabs.rst

@@ -1,2 +1,19 @@
 Controlled Vocabularies
 =======================
+
+Before we proceed with setup of our example genomics site we will want to load the Gene Ontology.  This is because we will be loading a whole genome, genes and transcripts with annotations.  These annotations include Gene Ontology terms.  To load the Gene Ontology, navigate to **Tripal → Data Loaders → Chado Vocabularies → OBO Vocabulary Loader**. You will see the following page:
+
+.. image:: controlled_vocabs.1.png
+
+The Ontology loader allows you to select a pre-defined vocabulary for loading, or allows you to provide your own. If you provide your own, you can give the remote URL of the OBO file or provide the full path on the local web server where the OBO file is located. In the case of a remote URL, Tripal first downloads and then parses the OBO file for loading. If you do provide your own OBO file it will appear in the saved drop-down list for loading of future updates to the ontology.
+
+During the Tripal installation portion of this tutorial, several vocabularies were pre-installed for you. The Gene Ontology, however, was not. To import the Gene Ontology, select it from the drop-down and click the Import Vocabulary button. You will notice a job is added to the jobs system. Now manually launch the job:
+
+::
+
+  drush trp-run-jobs --username=administrator --root=/var/www/html
+
+
+.. note::
+
+  Loading the Gene Ontology will take several hours.

BIN
docs/user_guide/example_genomics/cross_refs.1.png


+ 22 - 0
docs/user_guide/example_genomics/cross_refs.rst

@@ -1,2 +1,24 @@
 Cross References
 ================
+
+For our gene pages and mRNA pages we want to link back to JGI where we obtained the genes. Therefore, we want to add a database reference for JGI. To add a new external database, navigate to **Tripal → Data Loaders → Chado Databases** and click the link titled **Add a Database**. The resulting page provides fields for adding a new database:
+
+.. image:: cross_refs.1.png
+
+Enter the following values for the fields:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "Database Name", "Phytozome"
+  "Description", "Phytozome is a joint project of the Department of Energy's Joint Genome Institute and the Center for Integrative Genomics to facilitate comparative genomic studies amongst green plants"
+  "URL", "http://www.phytozome.net/"
+  "URL prefix", "https://phytozome.jgi.doe.gov/phytomine/portal.do?externalid=PAC:{accession}"
+
+The URL prefix is important as it will be used to create the links on our gene pages. When an object (e.g. gene) is present in another database, that database typically has a unique identifier (or accession) for the resource.  If we want to link records in our database to records in the remote database we need to provide a URL prefix that Tripal will use to create the URL.  Typically a remote database has a standard URL schema by which someone can specify a unique resource.  Often the resource accession is the last word in the URL to allow others to easily build the URL for any resource.  Tripal can take advantage of these types of URL schemas via the URL Prefix field.
+
+The URL prefix should be the URL used to identify a resource.  Two tokens, {db} and {accession}, can be used wherever the database name and accession are needed to build the URL. If neither {db} nor {accession} is provided in the URL prefix, then Tripal will append the database name and the accession to the URL prefix to form the final URL.  In this example, the Phytozome URL only requires the accession. The position where that accession will be placed is indicated with the {accession} token.  The {db} token is not needed.
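+
+As a quick illustration of how such a prefix expands, the short Python sketch below mimics the token substitution described above. It is purely illustrative and is not Tripal's own code (which is written in PHP); the separator used when no tokens are present is an assumption.
+
+.. code-block:: python
+
+  # Minimal sketch (not Tripal's implementation) of expanding a URL prefix
+  # that may contain the {db} and {accession} tokens.
+  def build_url(url_prefix, db_name, accession):
+      if "{db}" in url_prefix or "{accession}" in url_prefix:
+          return url_prefix.replace("{db}", db_name).replace("{accession}", accession)
+      # No tokens: the database name and accession are appended to the prefix
+      # (the "/" separator here is only for illustration).
+      return url_prefix + db_name + "/" + accession
+
+  print(build_url(
+      "https://phytozome.jgi.doe.gov/phytomine/portal.do?externalid=PAC:{accession}",
+      "Phytozome",
+      "18136217",
+  ))
+  # https://phytozome.jgi.doe.gov/phytomine/portal.do?externalid=PAC:18136217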
+
+Click **Add**.
+
+We now have added a new database!

BIN
docs/user_guide/example_genomics/genomes_genes.1.png


BIN
docs/user_guide/example_genomics/genomes_genes.2.png


BIN
docs/user_guide/example_genomics/genomes_genes.3.png


BIN
docs/user_guide/example_genomics/genomes_genes.4.png


BIN
docs/user_guide/example_genomics/genomes_genes.5.png


+ 219 - 0
docs/user_guide/example_genomics/genomes_genes.rst

@@ -1,2 +1,221 @@
 Genomes and Genes
 =================
+
+Loading Feature Data
+--------------------
+Now that we have our organism and whole genome analysis ready, we can begin loading genomic data. For this tutorial only a single gene from sweet orange will be loaded into the database. This is to ensure we can move through the tutorial rather quickly. The following datasets will be used for this tutorial:
+
+- `Citrus sinensis-orange1.1g015632m.g.gff3 <http://tripal.info/sites/default/files/Citrus_sinensis-orange1.1g015632m.g.gff3>`_
+- `Citrus sinensis-scaffold00001.fasta <http://tripal.info/sites/default/files/Citrus_sinensis-scaffold00001.fasta>`_
+- `Citrus sinensis-orange1.1g015632m.g.fasta <http://tripal.info/sites/default/files/Citrus_sinensis-orange1.1g015632m.g.fasta>`_
+
+One of the new features available in many of the Tripal v3 data loaders is an HTML5 file upload element which allows administrators and users to upload large files reliably. This removes the requirement in previous versions of this tutorial to download these files directly on the server and provide a path to the file. Instead, if you have the file on your current local machine you can now simply upload it for loading.
+
+Another new option in Tripal v3 Data Loaders is the ability to provide a remote path of a file to be loaded. This completely alleviates the need to transfer large files multiple times and eases the loading process.
+
+Loading a GFF3 File
+-------------------
+The gene features (e.g. gene, mRNA, 5_prime_UTR, CDS, 3_prime_UTR) are stored in the GFF3 file downloaded in the previous step. We will load this GFF3 file and consequently load our gene features into the database. Navigate to **Tripal → Data Loaders → Chado GFF3 Loader**.
+
+.. image:: genomes_genes.1.png
+
+Enter the following:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "File", "Upload the file name Citrus_sinensis-orange1.1g015632m.g.gff3"
+  "Analysis", "Whole Genome Assembly and Annotation of Citrus sinensis"
+  "Organism", "Citrus sinensis"
+  "All other options", "leave as default"
+
+Finally, click the Import GFF3 file button. You'll notice a job was submitted to the jobs subsystem. Now, to complete the process we need the job to run. We'll do this manually:
+
+::
+
+  drush trp-run-jobs --username=administrator --root=/var/www/html
+
+You should see output similar to the following:
+
+::
+
+  Tripal Job Launcher
+  Running as user 'administrator'
+  -------------------
+  2018-06-29 18:00:50: There are 1 jobs queued.
+  2018-06-29 18:00:50: Job ID 8.
+  2018-06-29 18:00:50: Calling: tripal_run_importer(12)
+
+  Running 'Chado GFF3 File Loader' importer
+  NOTE: Loading of file is performed using a database transaction.
+  If it fails or is terminated prematurely then all insertions and
+  updates are rolled back and will not be found in the database
+
+  Opening /var/www/html/sites/default/files/tripal/users/1/Citrus_sinensis-orange1.1g015632m.g.gff3
+  Percent complete: 100.00%. Memory: 32,211,360 bytes.
+  Adding protein sequences if CDS exist and no proteins in GFF...
+  Setting ranks of children...
+
+  Done.
+
+  Remapping Chado Controlled vocabularies to Tripal Terms...
+  Done.
+
+.. note::
+
+  For very large GFF3 files the loader can take quite a while to complete.
+
+Loading FASTA files
+-------------------
+Using the Tripal GFF3 loader we were able to populate the database with the genomic features for our organism. However, those features now need nucleotide sequence data. To do this, we will load the nucleotide sequences for the mRNA features and the scaffold sequence. Navigate to the **Tripal → Data Loaders → Chado FASTA Loader**.
+
+.. image:: genomes_genes.2.png
+
+Before loading the FASTA file we must first know the Sequence Ontology (SO) term that describes the sequences we are about to upload. We can find the appropriate SO terms from our GFF file. In the GFF file we see the SO terms that correspond to our FASTA files are 'scaffold' and 'mRNA'.
+
+It is important to ensure, prior to importing, that the FASTA loader will be able to appropriately match the sequence in the FASTA file with existing sequences in the database. Before loading FASTA files, take special care to ensure the definition line of your FASTA file can uniquely identify the feature for the specific organism and sequence type.
+
+For example, in our GFF file an mRNA feature appears as follows:
+
+::
+
+  scaffold00001   phytozome6      mRNA    4058460 4062210 .       +       .       ID=PAC:18136217;Name=orange1.1g015632m;PACid=18136217;Parent=orange1.1g015632m.g
+
+Note that for this mRNA feature the ID is **PAC:18136217** and the name is **orange1.1g015632m**. In Chado, features always have a human readable name which does not need to be unique, and also a unique name which must be unique for the organism and SO type. In the GFF file, the ID becomes the unique name and the Name becomes the human readable name.
+
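+The short Python sketch below (illustrative only; it is not Tripal's loader code) simply parses the attribute column of the GFF3 line above to show which value becomes which name in Chado:
+
+.. code-block:: python
+
+  # Illustration only: pull the ID and Name attributes out of the ninth
+  # column of the GFF3 line shown above.
+  attributes_column = ("ID=PAC:18136217;Name=orange1.1g015632m;"
+                       "PACid=18136217;Parent=orange1.1g015632m.g")
+  attributes = dict(pair.split("=", 1) for pair in attributes_column.split(";"))
+  print(attributes["ID"])    # PAC:18136217      -> the Chado unique name
+  print(attributes["Name"])  # orange1.1g015632m -> the human readable name
+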
+In our FASTA file the definition line for this mRNA is:
+
+::
+
+  >orange1.1g015632m PAC:18136217 (mRNA) Citrus sinensis
+
+By default Tripal will match the sequence in a FASTA file with the feature that matches the first word in the definition line. In this case the first word is **orange1.1g015632m**. As defined in the GFF file, the name and unique name are different for this mRNA. However, we can see that the first word in the definition line of the FASTA file is the name and the second is the unique name. Therefore, when we load the FASTA file we should specify that we are matching by the name because it appears first in the definition line.
+
+If, however, we cannot guarantee that the feature name is unique, then we can use a regular expression in the **Advanced Options** to tell Tripal where to find the name or unique name in the definition line of your FASTA file.
+
+.. note::
+
+  When loading FASTA files for features that have already been loaded via a GFF file, always choose "Update only" as the import method. Otherwise, Tripal may add the features in the FASTA file as new features if it cannot properly match them to existing features.
+
+Now, enter the following values in the fields on the web form:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "FASTA file", "Upload the file named Citrus_sinensis-scaffold00001.fasta"
+  "Analysis", "Whole Genome Assembly and Annotation of Citrus sinensis"
+  "Organism", "Citrus sinensis (Sweet orange)"
+  "Sequence type", "supercontig (scaffold is an alias for supercontig in the sequence ontology)"
+  "Method", "Update only (we do not want to insert these are they should already be there)"
+  "Name Match Type", "Name"
+
+Click the **Import FASTA file** button, and a job will be added to the jobs system. Run the job:
+
+::
+
+  drush trp-run-jobs --username=administrator --root=/var/www/html
+
+Notice that the loader reports that it "Found 1 sequences(s).". Next, fill out the same form for the mRNA (transcripts) FASTA file:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "FASTA file", "Upload the file named Citrus_sinensis-orange1.1g015632m.g.fasta"
+  "Analysis", "Whole Genome Assembly and Annotation of Citrus sinensis"
+  "Organism", "Citrus sinensis (Sweet orange)"
+  "Sequence type", "mRNA"
+  "Method", "Update only"
+  "Name Match", "Name"
+
+The FASTA loader has some advanced options. The advanced options allow you to create relationships between features and associate them with external databases. For example, the definition line for the mRNA in our FASTA file is:
+
+::
+
+  >orange1.1g015632m PAC:18136217 (mRNA) Citrus sinensis
+
+Here we have more information than just the feature name. We have a unique Phytozome accession number (e.g. PAC:18136217) for the mRNA. Using the **External Database Reference** section under **Additional Options** we can import this information to associate the Phytozome accession with the features.  A regular expression is required to uniquely capture that ID.  In the example above the unique accession is 18136217.  Because Tripal is a PHP application, the syntax for regular expressions follows PHP conventions. Documentation for regular expressions used in PHP can be found `here <http://php.net/manual/en/reference.pcre.pattern.syntax.php>`_.  Enter the following values to make the association between the mRNA and its corresponding accession at Phytozome:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "External Database", "Phytozome"
+  "Regular expression for the accession", "^.*PAC:(\d+).*$"
+
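+To see what this pattern captures, the tiny Python sketch below applies the same expression to the tutorial's definition line. Tripal itself evaluates the pattern with PHP's PCRE engine; Python is used here only for illustration:
+
+.. code-block:: python
+
+  import re
+
+  # Illustration only: the same pattern entered above, applied to the
+  # definition line from the tutorial's mRNA FASTA file.
+  defline = "orange1.1g015632m PAC:18136217 (mRNA) Citrus sinensis"
+  match = re.match(r"^.*PAC:(\d+).*$", defline)
+  print(match.group(1))  # prints: 18136217
+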
+Remember, we have the name **Phytozome** in our **External Database** drop down because we manually added it as a database cross reference earlier in the tutorial.  After adding the values above, click the **Import FASTA file** button, and manually run the submitted job:
+
+::
+
+  drush trp-run-jobs --username=administrator --root=/var/www/html
+
+Now the scaffold sequence and mRNA sequences are loaded!
+
+.. note::
+
+  If the name of the gene to which this mRNA belongs was also on the definition line, we could use the **Relationships** section in the **Advanced Options** to link this mRNA with its gene parent. Fortunately, this information is also in our GFF file and these relationships have already been made.
+
+.. note::
+
+  It is not required to load the mRNA sequences as those can be derived from their alignments with the scaffold sequence. However, in Chado the **feature** table has a **residues** column. Therefore, it is best practice to load the sequence when possible.
+
+Creating Gene Pages
+----------------------
+Now that we've loaded our feature data, we must publish it. This is different from when we manually created our Organism and Analysis pages.  Using the GFF and FASTA loaders we imported our data into Chado, but currently there are no published pages for the data we loaded.  To publish these genomic features, navigate to **Structure → Tripal Content Types** and click the link titled **Publish Chado Content**.  The following page appears:
+
+.. image:: genomes_genes.3.png
+
+Here we can specify the types of content to publish. For our site we want to offer both gene and mRNA pages (these types were present in our GFF file). First, to create pages for genes, select 'Gene' from the dropdown.  A new **Filters** section appears; when opened it looks as follows:
+
+.. image:: genomes_genes.4.png
+
+The **Filters** section allows you to provide filters to limit what you want to publish.  For example, if you only want to publish genes for a single organism you can select that organism in the Organism drop down list.  We only have one organism in our site, but for the sake of experience, add a filter to publish only genes for Citrus sinensis by selecting it from the Organism drop down.  Scroll to the bottom and click the Publish button.  A new job is added to the job queue.  Manually run the job:
+
+::
+
+  drush trp-run-jobs --username=administrator --root=/var/www/html
+
+You should see output similar to the following:
+
+::
+
+  Tripal Job Launcher
+  Running as user 'administrator'
+  -------------------
+  Calling: tripal_chado_publish_records(Array, 12)
+
+  NOTE: publishing records is performed using a database transaction.
+  If the load fails or is terminated prematurely then the entire set of
+  is rolled back with no changes to the database
+
+  Succesfully published 1 Gene record(s).
+
+Here we see that 1 gene was successfully published. This is because the GFF file we used previously to import the genes only had one gene present.
+
+Now, repeat the steps above to publish the mRNA content type.  You should see that 9 mRNA records were published:
+
+::
+
+  Tripal Job Launcher
+  Running as user 'administrator'
+  -------------------
+  Calling: tripal_chado_publish_records(Array, 13)
+
+  NOTE: publishing records is performed using a database transaction.
+  If the load fails or is terminated prematurely then the entire set of
+  is rolled back with no changes to the database
+
+  Succesfully published 9 mRNA record(s).
+
+.. note::
+
+  It is not necessary to publish all types of features in the GFF file. For example, we do not want to publish features of type **scaffold**. The feature is large and would have many relationships to other features, as well as a very long nucleotide sequence.  These can greatly slow down page loading, and in general would be overwhelming to the user to view on one page. As another example, each **mRNA** is composed of several **CDS** features. These **CDS** features do not need their own page and therefore do not need to be published.
+
+Now, we can view our gene and mRNA pages. Click the Find Tripal Content link. Find and click the new page titled **orange1.1g015632m.g**. Here we can see the gene feature we added and its corresponding mRNAs.
+
+.. image:: genomes_genes.5.png
+
+Next, find an mRNA page to view.  Remember that when we loaded our FASTA file for mRNA we associated the records with Phytozome.  On these mRNA pages you will see a link in the left sidebar titled **Database Cross Reference**.  Clicking that will open a panel with a link to Phytozome.  This link appears because:
+
+- We added a Database Cross Reference for Phytozome in a previous step
+- We associated the Phytozome accession with the features using a regular expression when importing the FASTA file.
+
+All data that appears on the page is derived from the GFF file and the FASTA  files we loaded.

BIN
docs/user_guide/example_genomics/organisms.citrus_sinensis.jpg


BIN
docs/user_guide/example_genomics/pub_import.1.png


BIN
docs/user_guide/example_genomics/pub_import.2.png


BIN
docs/user_guide/example_genomics/pub_import.3.png


BIN
docs/user_guide/example_genomics/pub_import.4.png


BIN
docs/user_guide/example_genomics/pub_import.5.png


+ 127 - 0
docs/user_guide/example_genomics/pub_import.rst

@@ -1,2 +1,129 @@
 Importing Publications
 ======================
+Tripal provides an interface for automatically and manually adding publications.
+
+Manually Adding a Publication
+-----------------------------
+First, we will manually add a publication. Click the **Add Tripal Content** link in the administrative menu and then click **Publication**.
+
+.. image:: pub_import.1.png
+
+We will add information about the Tripal publication. Enter the following values:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "Title", "Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases."
+  "Series Name", "Database"
+  "Publication Year", "2013"
+  "Unique Local Identifier", "Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases."
+  "Type	Journal", "Article"
+  "Publication Date", "2013 Oct 25"
+  "Cross Reference", "Database: PMID"
+  "Accession", "24163125"
+  "Authors", "Sanderson LA, Ficklin SP, Cheng CH, Jung S, Feltus FA, Bett KE, Main D"
+  "Citation", "Sanderson LA, Ficklin SP, Cheng CH, Jung S, Feltus FA, Bett KE, Main D. Tripal: a construction Toolkit for Online Genome Databases. Database, Oct 25 2013. bat075"
+  "Abstract", "Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including ‘Feature Map’, ‘Genetic’, ‘Publication’, ‘Project’, ‘Contact’ and the ‘Natural Diversity’ modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info."
+
+To complete the page, click the **Save** button at the bottom.
+
+Import of Publications
+----------------------
+
+Tripal supports importing of publications from remote databases such as NCBI PubMed.
+
+Creation of an importer is an administrative function. A publication importer is created by the site administrator and consists of a set of search criteria for finding multiple publications at one time. When the importer is run, it will query the remote database, retrieve the publications that match the criteria and add them to the database. Because we loaded genomic data for Citrus sinensis we will create an importer that will find all publications related to this species.
+
+First, navigate to **Tripal → Data Loaders → Chado Bulk Publication Importers** and click the link New Importer. You will see the following page:
+
+.. image:: pub_import.2.png
+
+Enter the following values in the fields:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "Remote Database", "PubMed"
+  "Loader Name", "Pubs for Citrus sinensis"
+  "Criteria #1", "
+  - Scope: Abstract/Title
+  - Search Terms: Citrus sinensis
+  - is Phrase?: checked"
+
+Now, click the 'Test Importer' button. This will connect to PubMed and search for all publications that match our provided criteria.  It may take a few minutes to complete. On the date this portion of the tutorial was written, over 800 publications were found:
+
+.. image:: pub_import.3.png
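+
+For the curious, the search performed above is roughly equivalent to the following query against NCBI's public EUtils service. The Python sketch below is purely illustrative; it is not how Tripal performs the import (Tripal's importer is written in PHP):
+
+.. code-block:: python
+
+  import urllib.parse
+  import urllib.request
+
+  # Roughly the same search as the criteria above: a phrase search for
+  # "Citrus sinensis" restricted to the Title/Abstract fields of PubMed.
+  term = '"Citrus sinensis"[Title/Abstract]'
+  params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmax": 0})
+  url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
+
+  with urllib.request.urlopen(url) as response:
+      # The XML response contains a <Count> element with the number of matches.
+      print(response.read().decode())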
+
+Now, save this importer. You should see that we have one importer in the list:
+
+.. image:: pub_import.4.png
+
+We can use this importer to load all publications related to Citrus sinensis from PubMed into our database (how to load these will be shown later). However, what if new publications are added? We would like this importer to run monthly so that we can automatically add new publications as they become available. But we do not need to reload all of the existing publications every time the loader runs each month. We will create a new importer that only finds publications added within the last 30 days. To do this, click the link New Importer. Now, add the following criteria:
+
+.. csv-table::
+  :header: "Field Name", "Value"
+
+  "Remote Database", "PubMed"
+  "Loader Name", "Pubs for Citrus sinensis last 30 days"
+  "Days since record modified", "30"
+  "Criteria #1", "
+  - Scope: Abstract/Title
+  - Search Terms: Citrus sinensis
+  - is Phrase?: checked"
+
+Now, when we test the importer we find only 1 publication that has been added (created) to PubMed in the last 30 days:
+
+.. image:: pub_import.5.png
+
+Save this importer.
+
+Next, there are two ways to import these publications. The first is to manually import them. There is a Drush command that is used for importing publications. Return to the terminal and run the following command:
+
+::
+
+  cd /var/www/html
+  drush trp-import-pubs --username=administrator
+
+You should see output to the terminal that begins like this:
+
+::
+
+  NOTE: Loading of publications is performed using a database transaction.
+  If the load fails or is terminated prematurely then the entire set of
+  insertions/updates is rolled back and will not be found in the database
+
+  Importing: Pubs for Citrus sinensis
+
+The importer will import 100 publications at a time and pause between each set of 100 as it requests more.
+
+Some things to know about the publication importer:
+
+1. The importer keeps track of publications from the remote database using the publication accession (e.g. PubMed ID).
+2. If a publication with an accession (e.g. PubMed ID) already exists in the local database, the record will be updated.
+3. If a publication in the local database matches by title, journal and year with one that is to be imported, then the record will be updated.
+4. Run the newly created Tripal Job to finish:
+
+   ::
+
+     cd /var/www/html
+     drush trp-run-jobs --user=administrator
+
+The second way to import publications is to add an entry to the UNIX cron. We did this previously for the Tripal Jobs management system when we first installed Tripal. We will add another entry for importing publications. But first, now that we have imported all of the relevant pubs, we need to return to the importers list at **Tripal → Data Loaders → Chado Publication Importers** and disable the first importer we created. We do not want to run that importer again, as we've already imported all historical publications on record at PubMed. Click the edit button next to the importer named Pubs for Citrus sinensis, click the disable checkbox and then save the template. The template should now be disabled.
+
+Now we have the importer titled **Pubs for Citrus sinensis last 30 days** enabled. This is the importer we want to run on a monthly basis. The cron entry will do this for us. On the terminal open the crontab with the following command:
+
+::
+
+  sudo crontab -e
+
+Now add the following line to the bottom of the crontab:
+
+::
+
+  30 8 1,15 * *  su - www-data -c '/usr/local/drush/drush -r /var/www/html -l http://[site url] trp-import-pubs --report=[your email] > /dev/null'
+
+Where
+
+- [site url] is the full URL of your site
+- [your email] is the email address of the user that should receive an email containing a list of publications that were imported. You can separate multiple email addresses with a comma.
+
+The cron entry above will launch the importer at 8:30am on the first and fifteenth days of the month. We run the importer twice a month so that if one run fails (e.g. the server is down), the other run will still pick up that month's publications.

+ 61 - 20
docs/user_guide/job_management.rst

@@ -1,10 +1,10 @@
-Job Management
-===============
+Job Management (Tripal Daemon)
+==============================
 
-This module is meant to provide a simple means of creating a robust command-line-driven, fully bootstrapped PHP Daemon. It uses the PHP-Daemon (https://github.com/shaneharter/PHP-Daemon) Library to create the Daemon (via the Libraries API) in order to not re-invent the wheel ;-).
+The Tripal Daemon module is meant to provide a simple means of creating a robust, command-line-driven, fully bootstrapped PHP Daemon. It uses the PHP-Daemon (https://github.com/shaneharter/PHP-Daemon) library to create the Daemon (via the Libraries API) in order to not re-invent the wheel. It allows you to execute jobs submitted to Tripal without using cron, and it provides a faster user experience for running jobs.  Prior to Tripal v3, the Tripal Daemon was an extension module; it has since been integrated into the core Tripal package.
 
 Features
-~~~~~~~~~
+--------
 
 * Provides a Drush interface to start/stop your Daemon.
 * Your daemon starts in the background and is detached from the current terminal.
@@ -13,28 +13,69 @@ Features
 * Lock Files, Automatic restart (8hrs default) and Built-in Signal Handling & Event Logging are only a few of the features provided by the Daemon API making this a fully featured & robust Daemon.
 
 
-Requirements
-~~~~~~~~~~~~~
+Installation
+------------
 
-* Libraries API (https://www.drupal.org/project/libraries)
-* PHP-Daemon Library version 2.0 (https://github.com/shaneharter/PHP-Daemon)
-    * Download the PHP-Daemon Library and extract it in your ``sites/all/libraries`` directory. The folder must be named "PHP-Daemon".
-* Drush 5.x (https://github.com/drush-ops/drush)
-* Drush Daemon API (https://www.drupal.org/project/drushd)
+The Tripal Daemon requires the `Libraries API <https://www.drupal.org/project/libraries>`_ module.  You can easily download and install this module using the following drush commands:
 
-Tripal Daemon Usage
-~~~~~~~~~~~~~~~~~~~~~
+.. code-block:: shell
+
+  drush pm-download Libraries
+  drush pm-enable Libraries
+
+Next, we need the `PHP-Daemon Library version 2.0 <https://github.com/shaneharter/PHP-Daemon>`_. You must download the PHP-Daemon Library and extract it in your ``sites/all/libraries`` directory. The folder must be named "PHP-Daemon".  The following commands can be used to do this:
+
+.. code-block:: shell
+
+  cd /var/www/html/sites/all/libraries
+  wget https://github.com/shaneharter/PHP-Daemon/archive/v2.0.tar.gz
+  tar -zxvf v2.0.tar.gz
+  # the archive extracts to a directory named PHP-Daemon-2.0; rename it
+  mv PHP-Daemon-2.0 PHP-Daemon
+
+Next, install the `Drush Daemon API <https://www.drupal.org/project/drushd>`_ module.
+
+.. code-block:: shell
+
+  drush pm-download drushd
+  drush pm-enable drushd
+
+Finally, enable the Tripal Daemon module. This module comes with Tripal v3.
+
+.. code-block:: shell
+
+  drush pm-enable tripal_daemon
+
+Usage
+-----
+
+Start the Daemon
+
+.. code-block:: shell
+
+  drush trpjob-daemon start
+
+Stop the Daemon
 
 .. code-block:: shell
 
-  #Start Daemon drush
-  trpjob-daemon start
-  #Stop Daemon
   drush trpjob-daemon stop
-  #Check the Status
+
+Check the status
+
+.. code-block:: shell
+
   drush trpjob-daemon status
-  #Show the Log
-  #List the last 10 lines of the log file:
+
+List the last 10 lines of the log file:
+
+.. code-block:: shell
+
   drush trpjob-daemon show-log
-  #List the last N lines of the log file:
+
+List the last N lines of the log file:
+
+.. code-block:: shell
+
   drush trpjob-daemon show-log --num_lines=N
+
+Set N to the number of lines you want to view.

+ 1 - 2
docs/user_guide/mviews.rst

@@ -3,7 +3,7 @@ Materialized Views
 
 Chado is efficient as a data warehouse but queries can become slow depending on the type of query. To help simplify and speed up these queries, materialized views can be employed. For a materialized view, a new database table is created and then populated with the results of a pre-defined SQL query. This allows you to execute a much simpler and faster query on the materialized view when producing user pages. A side effect, however, is redundant data, with the materialized view becoming stale if not updated regularly.
 
-Tripal provides a mechanism for populating and updating these materialized views. These can be found on the ``Tripal → Data Storage → Chado -> Materialized Views`` page.
+Tripal provides a mechanism for populating and updating these materialized views. These can be found on the **Tripal → Data Storage → Chado → Materialized Views** page.
 
 .. image:: ./mviews.1.png
 
@@ -16,5 +16,4 @@ This will submit jobs to populate the views with data. Now, run the jobs:
   cd /var/www/html
   drush trp-run-jobs --user=administrator
 
-
 You can now see that all views are up-to-date on the **Materialized Views Page**. The number of rows in the view table is shown.

+ 11 - 13
docs/user_guide/searching.rst

@@ -5,23 +5,21 @@ Drupal and Tripal offer a variety of methods for searching biological content on
 
 There are several options for the addition of both site-wide and content-specific search tools, which include:
 
-**For site-wide searching you can:
-**
+**For site-wide searching you can:**
 
-* Use the Default Drupal Search
-* Use the Search API Module
-* Use an independent search tool. Two popular tools that integrate with Drupal include:
-  * ElasticSearch
-  * Apache Solr
+- Use the Default Drupal Search
+- Use the Search API Module
+- Use an independent search tool. Two popular tools that integrate with Drupal include:
+
+  - ElasticSearch
+  - Apache Solr
 
-**For content-specific searching you can:
-**
-* Use the search tools that Tripal provides
-* Develop your own search tools using Drupal Views
-* Write your own custom search tools using PHP and Tripal's API functions.
+**For content-specific searching you can:**
 
-You may not want to consider using multiple search tools, such as a site-wide tool and content-specific tools.  The following sections provide a description for use and setup of some of these different options.
+- Use the search tools that Tripal provides
+- Develop your own search tools using Drupal Views
+- Write your own custom search tools using PHP and Tripal's API functions.
 
+You may even want to consider using multiple search tools, such as a site-wide tool in combination with content-specific tools.  The following sections describe the use and setup of some of these different options.
 
 
 .. toctree::

+ 1 - 1
docs/user_guide/searching/elasticsearch_module.rst

@@ -2,6 +2,6 @@ Tripal Elasticsearch Module
 ========================================
 
 
-Tripal Elasticsearch is a module that integrates the `Elasticsearch <https://www.elastic.co/products/elasticsearch>`_ search engine with Tripal sites. By enabling the module, you are provided with a set of tools to index and search your Tripal 3 or Tripal 2 site. It provides an easy to set up site wide search and gene search blocks out of the box. Tripal Elasticsearch also provides support for Cross-site querying, which allows users to search multiple websites in a fast and asynchronous manner in one query.
+Tripal Elasticsearch is an extension module created by the `Staton Lab <https://github.com/statonlab>`_ at the University of Tennessee that integrates the `Elasticsearch <https://www.elastic.co/products/elasticsearch>`_ search engine with Tripal sites. By enabling the module, you are provided with a set of tools to index and search your Tripal 3 or Tripal 2 site. It provides an easy-to-set-up site-wide search and gene search blocks out of the box. Tripal Elasticsearch also provides support for cross-site querying, which allows users to search multiple websites in a fast and asynchronous manner with a single query.
 
 To enable Tripal Elasticsearch on your site, you must install an Elasticsearch instance, enable the Tripal Elasticsearch module, and use it to index your site’s data. For documentation on how to install and configure Tripal Elasticsearch, please `visit the Github repository <https://github.com/tripal/tripal_elasticsearch>`_.

+ 11 - 11
docs/user_guide/searching/search_api.rst

@@ -2,10 +2,10 @@ Search API Module: Site-Wide Searching
 ========================================
 
 Installing Drupal Search API
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------
 
-* **`Search API <https://www.drupal.org/project/search_api>`_**: This module provides an interface for much more powerful, efficient searching than the Drupal core search module. Specifically, it allows you to use more powerful engines such as `Elastic Search <https://www.drupal.org/project/search_api_elasticsearch>`_ and `Apache Solr <https://www.drupal.org/project/search_api_solr>`_, as well as, advanced features such as facets (for narrowing down search results based on fields or entity type), fuzzy search, etc.
-* **`Search API Database Service <https://www.drupal.org/project/search_api_db>`_**: This module provides a Search Backend/Server defining how your search index should be stored. Specifically, it just stores the index in your current drupal database.
+- `Search API <https://www.drupal.org/project/search_api>`_: This module provides an interface for much more powerful, efficient searching than the Drupal core search module. Specifically, it allows you to use more powerful engines such as `Elastic Search <https://www.drupal.org/project/search_api_elasticsearch>`_ and `Apache Solr <https://www.drupal.org/project/search_api_solr>`_, as well as advanced features such as facets (for narrowing down search results based on fields or entity type), fuzzy search, etc.
+- `Search API Database Service <https://www.drupal.org/project/search_api_db>`_: This module provides a Search Backend/Server defining how your search index should be stored. Specifically, it simply stores the index in your current Drupal database.
 
 Install the **Search API** and **Database search** modules as you would any other Drupal module. This can be done using Drupal's module installation page as shown in the screenshot below. For installation instructions reference the `Drupal.org Tutorial <https://www.drupal.org/documentation/install/modules-themes/modules-7>`_.
 
@@ -24,7 +24,7 @@ Alternatively, installation can be accomplished on the command-line by executing
 
 
 Define your Search Backend/Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------
 
 This tutorial covers using a basic Drupal database storage backend for your search. For large sites, it is recommended to use Elastic Search or Apache Solr. First, we need to tell the Search API where we want our index stored.  Navigate to the configuration page for the Search API. You can either click on the **Configure** link shown in the above screenshot or navigate to **Configuration > Search API** through the administrative toolbar. You should see the following screen:
 
@@ -43,7 +43,7 @@ You should see the following screen--assuming all went well. Click on Search API
 .. figure:: ./search_api.4.config3_server.png
 
 Define a Search Index
-~~~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 Now that we have created a server where the Search API will store our index, we have to define the index itself.  On the Search API Configuration page click on the **Add index** link. The resulting page appears in the following screenshot. Name your index something descriptive. Consider including the word "search" somewhere in the name as this name will be used when setting up the search form/listing (view). For example, enter the name "Tripal Content Search." Next, select "Tripal Content" as the **Item Type**. The item type defines what content types should be indexed. One thing to keep in mind is that the Search API currently does not support multi-entity (i.e., both Tripal and Node content) search in the same index without the `Search API Multi-index Search <https://www.drupal.org/project/search_api_multi>`_ extension module. Notice that we didn't check any of the **Bundles**. By not selecting bundles, this ensures that all Tripal Content will be indexed by the search. Finally, select the server created in the previous step from the Server dropdown and click the **Create Index** button.
 
@@ -74,9 +74,9 @@ Your index is now scheduled for building! Depending upon the amount of content y
 
 
 Creating a Search Interface for your users
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------
 
-At this point you should have an index for your Tripal Content. However, you still have not created any functionality for end users—the data might be indexed, but they can't search it, yet. To create the Search page we are going to use views. Start by going to the Views Administration UI (``Structure > Views``) and click on **Add new view.**
+At this point you should have an index for your Tripal Content. However, you still have not created any functionality for end users—the data might be indexed, but they can't search it, yet. To create the Search page we are going to use views. Start by going to the Views Administration UI (**Structure > Views**) and click on **Add new view.**
 
 .. figure:: ./search_api.10.png
 
@@ -95,7 +95,7 @@ Next,  will appear is the Edit Views UI which can be intimidating, even if you'v
 
 
 Configuring What is displayed for each Search Result
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------------
  
 First, we are going to change what is displayed for each result. By default, just the unique identifier is displayed, which of course is not useful to the user. We want to hide that field by clicking on its name, **Indexed Tripal Content: Tripal content id**, which opens the configuration pop-up, and then checking **Exclude from display**. Since we will be using this field to create our link, we also want to change the **Thousands marker** to **-None-**.  Click **Apply (all displays)** to save these changes.
 
@@ -122,7 +122,7 @@ On the resulting page, be sure to uncheck the box **Create** a label just as you
 Now that we have a title and excerpt in our Fields section, clicking the **Update Preview** button will show a list of titles for your content with empty space beneath each title; because no keyword has been entered yet, no excerpt can be generated.
 
 Adding the Keywords Search Box
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------
 
 Click on the **Add** button beside **Filter Criteria** and in the resulting pop-up, select **Search** for the filter and then check **Search: Fulltext Search**. Click the **Apply (all displays)** button to add the filter.
 
@@ -139,7 +139,7 @@ Save your view and navigate go to the new search page you created with this new
 .. figure:: ./search_api.19.view.png
 
 Sort by "Relevance"
-~~~~~~~~~~~~~~~~~~~~
+-------------------
 
 Next, we want to sort our results.  To do this, return to the view configuration page.  Click on the **Add** button beside **Sort Criteria** and in the pop-up select **Search** in the **Filter** drop-down.  Next, check the **Search: Relevance** field. Apply it and configure it to **Sort descending** so that higher scoring results are shown first.  Apply the configuration settings.
 
@@ -147,7 +147,7 @@ Next, we want to sort our results.  To do this, return to the view configuration
 
 
 Only Show results when user clicks Search
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------
 
 Finally, we do not want search results to automatically appear. We want the user to click the **Apply** button on the search form first. To do this use the right-side of the Views UI to expand the **Advanced** field set and under **Exposed Form** click on **Exposed form Style: Basic**. Change the setting to **Input Required** and click **Apply**. On the following configuration page, change the **Submit button** text to "Search" and uncheck **Expose Sort** order.
 

+ 6 - 10
docs/user_guide/setting_page_urls.rst

@@ -4,28 +4,24 @@ Setting Page Titles and URLs
 Tripal allows for Page Titles and URLs to be set within the Tripal Content type editing interface. This provides the ability to construct consistent URL patterns and titles across your site.
 
 Setting Page Titles
-=====================
+-------------------
 
-Page titles can be set within the edit mechanism of the Tripal Content type. This can be found on the ``Structure → Tripal Content Types  → <specific content type>``. Scroll to the bottom of the page to the "Page Title options" tab.
+Page titles can be set within the edit mechanism of the Tripal Content type. Navigate to **Structure → Tripal Content Types** and click the **edit** link for the desired content type. Scroll to the bottom of the page to the **Page Title options** tab.
 
 .. image:: ./setting_page_urls.1.png
 
 
-Page Title Settings
-~~~~~~~~~~~~~~~~~~~~
-
-Then the page title pattern can be generated using combination of token. The tokens can be found under the 'Available Tokens' link. Keep in mind that it might be confusing to users if more than one page has the same title.
+The page title format can be generated using a combination of tokens. When titles are generated for a new page, the tokens are replaced with the appropriate field values to which they refer. A list of available tokens can be found under the **Available Tokens** link.
 
 .. note::
 
 	We recommend you choose a combination of tokens that will uniquely identify your content.
 
-If you already have content within your site and need to update all page titles you can choose to 'Bulk update all titles'. This will update all existing titles for the content type in question. If your title is used to build your alias you will also need to 'Bulk update all aliases'.
+If you already have content within your site and need to update all page titles, you can choose to **Bulk update all titles**. This will update the titles of all existing content belonging to this type. If your title is used to build your alias, you will also need to **Bulk update all aliases**.
 
 Setting URLs
-=============
-
-URLs, also known as aliases, can you found just below the Page Title options tab.The url pattern can be generated using combination of token. The tokens can be found under the 'Available Tokens' link. If you already have content within your site and need to update all urls you can choose to 'Bulk update all aliases'. This will update all existing urls for the content type in question. It will also create redirects from the old url to the new url to ensure 404s and broken links are not created.
+------------
 
+URLs, also known as aliases, can be found just below the **Page Title options** tab. The URL pattern can be generated using a combination of tokens. The tokens can be found under the **Available Tokens** link. If you already have content within your site and need to update all URLs, you can choose to **Bulk update all aliases**. This will update the URLs for all existing pages of the content type. It will also create redirects from the old URL to the new URL to ensure 404s and broken links are not created.
 
 .. image:: ./setting_page_urls.2.png

+ 27 - 33
docs/user_guide/web_services.rst

@@ -1,15 +1,15 @@
 Web Services
-===============
+============
 
 Overview
-~~~~~~~~~
+--------
 
 New in Tripal v3 are `RESTful web <https://en.wikipedia.org/wiki/Representational_state_transfer>`_ services.  These web-services are designed to support the following:
 
 1. Allow end-users to access data programmatically using any language of their choice.
-2.  Allow Tripal sites to share data among themselves.
+2. Allow Tripal sites to share data among themselves.
 
-Tripal v3 now comes with a ``tripal_ws`` module that provides web services.  Once enabled, any Tripal v3 site instantly provides a RESTful web service that can access all publicly available data.  Additionally, web services for Tripal are meant to be:
+Tripal v3 now comes with a **tripal_ws** module that provides web services.  Once enabled, any Tripal v3 site instantly provides a RESTful web service that can access all publicly available data.  Additionally, web services for Tripal are meant to be:
 
 1. Fully discoverable
 2. Searchable
@@ -25,7 +25,7 @@ Within the current v3.0 release of Tripal web services are limited to read-only
 2. Full implementation of the `Hydra Core Vocabulary <https://www.hydra-cg.com/spec/latest/core/>`_ specification to provide full discoverability.
 
 Enable Web Services
-~~~~~~~~~~~~~~~~~~~~
+-------------------
 
 To enable web services, simply install the ``tripal_ws`` module, either using the module installation interface within Drupal's administrative pages, or on the command-line using Drush:
 
@@ -35,16 +35,16 @@ To enable web services, simply install the ``tripal_ws`` module, either using th
   drush pm-enable tripal_ws
 
 Exploring Web Services
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Once enabled, webservices are immediately available on your site at the URL  ``http://[your.site.name]/web-services/`` (replace [your.site.name] with the address and path of your Tripal site). Web services are meant to be accessed programmatically, but they can be easily explored using a web browser such as with the `Firefox browser <https://www.mozilla.org/en-US/firefox/>`_ and `JSONView <https://jsonview.com/>`_ extension enabled.  For example, the following screen shot shows an example Tripal site with the data loaded following the Setup of a Genomics Site instructions of this tutorial.
+----------------------
 
+Once enabled, web services are immediately available on your site at the URL ``http://[your.site.name]/web-services/`` (replace [your.site.name] with the address and path of your Tripal site). Web services are meant to be accessed programmatically, but they can be easily explored using a web browser such as `Firefox <https://www.mozilla.org/en-US/firefox/>`_ with the `JSONView <https://jsonview.com/>`_ extension enabled.  For example, the following screenshot shows an example Tripal site with the data loaded following the Setup of an Example Genomics Site instructions of this User's Guide.
 
+.. image:: web_services.1.png
 
 This initial resource "home page" of the web services returns results in `JSON format <http://www.json.org/>`_.  When using the JSONView extension within Firefox you can explore web services by clicking the links that are present in the browser.
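+
+If you prefer the command line, the same JSON-LD entry point can be retrieved with a tool such as ``curl``.  The following is a minimal sketch that assumes your site is reachable at the placeholder address ``your.site.name``:
+
+.. code-block:: shell
+
+  # Fetch the entry point of Tripal web services as JSON-LD
+  # (replace your.site.name with the address of your Tripal site)
+  curl -s http://your.site.name/web-services/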
 
 Structure of a Web Service Response
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------
 
 
 The initial response in JSON is in the `JSON-LD format <https://json-ld.org/>`_, where the LD means Linked Data.  For example:
@@ -72,22 +72,20 @@ The initial response in JSON is in the `JSON-LD format <https://json-ld.org/>`_
   }
 
 
-A notable component of JSON-LD is the `@context` sub array.  It is within this @context section that the "meaning" of the data becomes apparent and where the Linked Data is implemented.  Tripal's use of linked data is to take advantage of controlled vocabularies or otologies to unambiguously identify the meaning of each element of the response.  By using controlled vocabularies to qualify all data, it  becomes possible to exchange data between sites or within a client program while limiting confusion as to the meaning of the data.  Therefore, words that are used as the keys key/value pairs will always be defined in the @context section.  For example, all of the vocabularies whose terms are used to qualify data on the site are identified in the `@context` section with URLs.  Some of these vocabularies include:  rdf, rdfs, hydra, schema, etc.
-
+A notable component of JSON-LD is the **@context** sub array.  It is within this **@context** section that the "meaning" of the data becomes apparent and where the Linked Data is implemented.  Tripal's use of linked data is to take advantage of controlled vocabularies or ontologies to unambiguously identify the meaning of each element of the response.  By using controlled vocabularies to qualify all data, it becomes possible to exchange data between sites or within a client program while limiting confusion as to the meaning of the data.  Therefore, words that are used as the keys of key/value pairs will always be defined in the **@context** section.  For example, all of the vocabularies whose terms are used to qualify data on the site are identified in the **@context** section with URLs.  Some of these vocabularies include:  rdf, rdfs, hydra, schema, etc.
 
-There are two other special keys used in the JSON-LD response.  These are the `@id` and `@type` keys.  The `@id` indicates the unique URL for this resource and becomes a unique name for the resource.  In the example screenshot above, the `@id` of the initial page of web services is   http://localhost/web-services.  This URL will always refer to the initial page for Tripal web services on the given site.  The `@type` identifier specifies what type of resource this URL provides.  In this example, the type is **EntryPoint**.  If a client program is unsure as to what an **EntryPoint** is, then that information is provided in the @context section.  The following line indicates that the term **EntryPoint** expands to the vocabulary term:  vocab:EntryPoint
+There are two other special keys used in the JSON-LD response.  These are the **@id** and **@type** keys.  The **@id** indicates the unique URL for this resource and becomes a unique name for the resource.  In the example screenshot above, the **@id** of the initial page of web services is ``http://localhost/web-services``.  This URL will always refer to the initial page for Tripal web services on the given site.  The **@type** identifier specifies what type of resource this URL provides.  In this example, the type is **EntryPoint**.  If a client program is unsure as to what an **EntryPoint** is, then that information is provided in the **@context** section.  The following line indicates that the term **EntryPoint** expands to the vocabulary term ``vocab:EntryPoint``:
 
 .. code-block:: JSON
 
   "EntryPoint": "vocab:EntryPoint",
 
-Here we learn tha the term **EntryPoint** belongs to the vocab resource.  If we look at the vocab entry in the `@context` array then we can find a link to that resource.  A human can follow that link to examine the vocab resource and find the term that it provides.  A client program can use that information to uniquely identify the term.  By default, Tripal provides the **vocab** vocabulary which uses an implementation of the `Hydra Vocabulary <https://www.hydra-cg.com/spec/latest/core/>`_.
+Here we learn that the term **EntryPoint** belongs to the vocab resource.  If we look at the vocab entry in the **@context** array then we can find a link to that resource.  A human can follow that link to examine the vocab resource and find the term that it provides.  A client program can use that information to uniquely identify the term.  By default, Tripal provides the **vocab** vocabulary, which uses an implementation of the `Hydra Vocabulary <https://www.hydra-cg.com/spec/latest/core/>`_.
 
 Finally, all other entries in the JSON-LD response are key/value pairs that provide **properties** about the resource.  In the example above, only the **content** property is available.  A property can be a scalar value (i.e. number, character string) or a link to another resource.
 
 Primary Services
-~~~~~~~~~~~~~~~~~
-
+----------------
 
 By default, the only resource that Tripal provides at the initial response level is the content resource.  Any resource at this level is hereafter referred to as a primary service.  Tripal is designed to allow new web services to be added to it.  These will be more formally described in the Tripal v3 Developer's Handbook.  In short, a primary service provides a variety of data and services for related content and function.  Each primary resource has a version number to help ensure backwards compatibility as new web services are developed and updated.  For example, the default content service currently has a version of v0.1:
 
@@ -97,7 +95,7 @@ By default, the only resource that Tripal provides at the initial response level
 
 
 The Content Service
-~~~~~~~~~~~~~~~~~~~~
+-------------------
 
 The content service provided by Tripal shares all publicly available content.  The content that appears on a page is the same content that appears in web services.  A major change in the design of Tripal from v2 to v3 is that all data is organized via controlled vocabularies.  The following diagram demonstrates how this is implemented.  For example, the mRNA term comes from the `Sequence Ontology <http://www.sequenceontology.org/>`_.  Its unique term accession is SO:0000234.  Every content type in Tripal consists solely of a type (e.g. mRNA or SO:0000234), its associated label (e.g. mRNA) and a numeric ID unique to each Tripal site.  The ID is what uniquely identifies every piece of content in Tripal.  Each unique content item with these three attributes is referred to as an **Entity**.  All other data associated with a given entity are called **Fields**.  Example fields for an mRNA content type may be the name of the mRNA, a unique name, the coding sequence, the coordinates on the reference genome, etc.  In the diagram below, these fields are the rectangular boxes that jut out of the mRNA entity.  These fields can be "attached" to an entity by Tripal and data can come from any storage backend.  The data that appears on a page and the data in the content service is taken from the same entity and therefore end-users and clients have access to the same data.
 
@@ -105,24 +103,22 @@ The content service provided by Tripal shares all publicly available content.  T
 
 
 Content Type Listing
-~~~~~~~~~~~~~~~~~~~~~
+--------------------
 
-When the content service is accessed, the response is always a listing of all available content types on the site.   Site administrators can create new content types by following the Create Content Types section of this tutorial.  By default, Tripal is installed with several commonly used content types, but new ones can be created as needed for the site.  Whenever a new content type is created it immediately is available via the content service, and these content types can be found at the path:  `/web-services/content/v0.1`.  Below is an example screenshot of the resulting JSON from an example site:
+When the content service is accessed, the response is always a listing of all available content types on the site.  Site administrators can create new content types by following the Create Content Types section of this tutorial.  By default, Tripal is installed with several commonly used content types, but new ones can be created as needed for the site.  Whenever a new content type is created, it is immediately available via the content service, and these content types can be found at the path ``/web-services/content/v0.1``.  Below is an example screenshot of the resulting JSON from an example site:
 
 .. image:: ./web_services.3.png
 
-Note that the `@type` for this listing is a Collection and the label is **Content Types**.  Each content type has a unique `@id`, a `@type` indicating the term that describes it and a brief description.  The `@id` serves as a URL to obtain further details about that content type.   Also, notice in the above screenshot that the `@context` section is minimized, but as usual, each of the terms used in the key/value pairs are fully qualified in that section.   This JSON-LD response also indicates the total number of content types available.
+Note that the **@type** for this listing is a Collection and the label is **Content Types**.  Each content type has a unique **@id**, a **@type** indicating the term that describes it, and a brief description.  The **@id** serves as a URL to obtain further details about that content type.  Also, notice in the above screenshot that the **@context** section is minimized, but as usual, each of the terms used in the key/value pairs is fully qualified in that section.  This JSON-LD response also indicates the total number of content types available.
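+
+As a minimal command-line sketch (again using the placeholder address ``your.site.name``), the same content type listing can be retrieved with ``curl``:
+
+.. code-block:: shell
+
+  # List all available content types
+  curl -s http://your.site.name/web-services/content/v0.1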
 
 Content Type Members (Entities)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
+-------------------------------
 
-The members or entities that belong to a content type are found at the path:  `/web-services/content/v0.1/{name}` where {name} is the name of the content type.  The {name} field must be identical to the label field from the content type listing shown previously.   For example, the mRNA content type  path would be   `/web-services/content/v0.1/mRNA`.  This resource provides a listing of all members for that content type.   The following shows the response for an mRNA listing:
+The members or entities that belong to a content type are found at the path ``/web-services/content/v0.1/{name}``, where {name} is the name of the content type.  The {name} field must be identical to the label field from the content type listing shown previously.  For example, the mRNA content type path would be ``/web-services/content/v0.1/mRNA``.  This resource provides a listing of all members for that content type.  The following shows the response for an mRNA listing:
 
 .. image:: ./web_services.4.png
 
-
-Note that the `@type` is also a Collection byt the label is 'mRNA collection'.  To maintain a small response, the results of content member listings is usually paged such that only a subset of members is shown.  In this example, there are 8032 mRNA entities available, but only 25 are shown.  Notice the view term.  It contains several sub elements named first, last and next. These provide navigation links that can be used by a client application to iterate through all entities.  The structure of these links is as follows:
+Note that the **@type** is also a Collection but the label is 'mRNA collection'.  To maintain a small response, the results of content member listings are usually paged such that only a subset of members is shown.  In this example, there are 8032 mRNA entities available, but only 25 are shown.  Notice the **view** term.  It contains several sub elements named first, last and next. These provide navigation links that can be used by a client application to iterate through all entities.  The structure of these links is as follows:
 
 ``/web-services/content/v0.1/{name}?page={page}&limit={limit}``
 
@@ -131,21 +127,20 @@ A client application can therefore navigate through the list of entities by subs
 ``/web-services/content/v0.1/mRNA?page=1&limit=8032``
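+
+A brief command-line sketch of this kind of paging, using the mRNA content type shown above and the placeholder address ``your.site.name``:
+
+.. code-block:: shell
+
+  # Retrieve the first page of 25 mRNA entities
+  curl -s "http://your.site.name/web-services/content/v0.1/mRNA?page=1&limit=25"
+
+  # Retrieve the next page of 25
+  curl -s "http://your.site.name/web-services/content/v0.1/mRNA?page=2&limit=25"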
 
 Content (Entity) Resource
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------
 
-Each entity is accessible via the path: `/web-services/content/v0.1/{name}/{id}`.   Here {name} continues to refer to the name of the content type (e.g. mRNA) and {id} refers to the unique numeric ID for the entity.  In this example an mRNA entity would result in a JSON-LD response like the following:
+Each entity is accessible via the path ``/web-services/content/v0.1/{name}/{id}``.  Here {name} continues to refer to the name of the content type (e.g. mRNA) and {id} refers to the unique numeric ID for the entity.  In this example, an mRNA entity would result in a JSON-LD response like the following:
 
 .. image:: ./web_services.5.png
 
-In the JSON shown above, note that all of the key/value pairs used are referenced in the `@context` section  Also, notice that some key/value pairs contain data while others contain URLs.  Tripal is optimized to not load every attribute.  For example, sequence data for some content type may be large.  By providing a URL for the data, it keeps the response small but allows clients to access that information via the provided URL.   For example, if the URL for the **sequence_coordinate** attribute were followed the following response could be seen:
+In the JSON shown above, note that all of the key/value pairs used are referenced in the **@context** section.  Also, notice that some key/value pairs contain data while others contain URLs.  Tripal is optimized to not load every attribute.  For example, sequence data for some content types may be large.  By providing a URL for the data, the response stays small while still allowing clients to access that information via the provided URL.  For example, if the URL for the **sequence_coordinate** attribute were followed, the following response could be seen:
 
 .. image:: ./web_services.6.png
 
-
 Here the client can obtain the necessary information about the coordinates on the genome for this particular mRNA entity.
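+
+As a command-line sketch (the numeric ID 691468 is taken from the example in the next section, and ``your.site.name`` is a placeholder used only for illustration), a single entity and one of its field resources could be retrieved as follows:
+
+.. code-block:: shell
+
+  # Fetch a single mRNA entity by its numeric ID
+  curl -s http://your.site.name/web-services/content/v0.1/mRNA/691468
+
+  # Follow one of the URLs returned for a field, e.g. the relationship resource
+  curl -s http://your.site.name/web-services/content/v0.1/mRNA/691468/relationship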
 
 Searching for Content
-~~~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 Currently, Tripal provides the ability to search for content via web services by crafting URLs on content type members pages.   By default, the path for content type listings is:
 
@@ -186,7 +181,7 @@ When retrieving the data for a specific entity something similar to the followin
   "relationship": "http://localhost/web-services/content/v0.1/mRNA/691468/relationship",
   "identifier": "LOC_Os01g01010.1"
 
-As another reminder, when any of these attributes have a URL then further information about that attribute is obtained by following the URL. In the example below, the relationship term yeilds results similar to the following:
+As another reminder, when any of these attributes have a URL then further information about that attribute is obtained by following the URL. In the example below, the relationship term yields results similar to the following:
 
 .. code-block:: JSON
 
@@ -253,7 +248,7 @@ To demonstrate multi term sorting, we could try to sort by the Genus as well, al
 
 
 Searching Limitations
-~~~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 The ability to search by any term for any content type creates a powerful mechanism to find almost any entity.  However, there are two primary limitations:
 
@@ -261,7 +256,6 @@ The ability to search by any term for any content type creates a powerful mechan
 2. The format for constructing a search URL is equivalent to an AND operation.  For example, you can filter by genus and species but not by genus or species.  Each additional search criterion adds another AND filter (see the sketch below).
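+
+As a purely illustrative sketch of an AND-style filter (the ``Organism`` content type, the ``genus`` and ``species`` field names, and the site address are assumptions used only for illustration; the exact query syntax is described earlier in the Searching for Content section):
+
+.. code-block:: shell
+
+  # Both criteria must match: genus AND species
+  curl -s "http://your.site.name/web-services/content/v0.1/Organism?genus=Fragaria&species=vesca"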
 
 Hiding or Adding Content
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
+------------------------
 
 It is relatively easy to hide or add content to web services.  The 'Configuring Page Display' tutorial walks the user through the steps for adding fields to a page for display, removing them and organizing the placement of those fields on the entity's page.  That same interface is used for indicating which fields are present in web services.  When a field is hidden from a page it is likewise hidden from web services.  When a new field is added to a page it is added to web services.  Those who develop new fields for custom modules and share them with others should ensure that their field implementations follow the design specifications.  If followed correctly, all fields will behave in this way.