diff --git a/manuscript.md b/manuscript.md
index 14be261ec664996b70a5e7e5c9a686fba36b1b1b..f258dd133788ed3e716803c39fcce075dfcc24a2 100644
--- a/manuscript.md
+++ b/manuscript.md
@@ -15,7 +15,7 @@ authors
   - name: Bradley G Lusk
     affiliation: Science The Earth; Mesa, AZ 85201, USA
     
-date: 18 October 2019
+date: 20 October 2019
 
 bibliography: paper.bib
 
@@ -26,16 +26,14 @@ In the age of growing science communication, this tendency for scientists to use
 
 To address this, we created a tool to analyze the complexity of a given scientist’s work relative to other writing sources. The tool first quantifies existing text repositories of varying complexity, and then uses this output as a reference to contextualize the readability of user-selected written work. 
 
-While other readability tools currently exist to report the complexity of a single document, this tool uses a more data-driven approach to provide authors with insights into the readability of their published work with regard to other text repositories. This enables them to monitor the complexity of their writing with regard to other available text types, and with hope will lead to the creation of more accessible online material.
+While other readability tools exist to report the complexity of a single document, this tool takes a more data-driven approach, providing authors with insights into the readability of their published work relative to other text repositories. This enables them to monitor the complexity of their writing against other available text types, and to create more accessible online material. We hope it will help scientists interested in science communication to make their published work more accessible to a broad audience, leading to improved global communication and understanding of complex topics.
 
 ## Methods
 We built a web-scraping and text analysis infrastructure by extending many existing Free and Open Source Software (FOSS) tools, including Google Scrape, Beautiful Soup, and Selenium.
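+
+A minimal sketch of the scraping step (the URL and the use of `requests` in place of Selenium are illustrative assumptions, not the tool's exact pipeline):
+
+```python
+# Sketch: fetch a page and extract its visible paragraph text.
+# The URL and `requests` are illustrative; the tool itself combines
+# Google Scrape, Beautiful Soup, and Selenium.
+import requests
+from bs4 import BeautifulSoup
+
+def scrape_paragraph_text(url):
+    html = requests.get(url, timeout=30).text
+    soup = BeautifulSoup(html, "html.parser")
+    # Join the text of all paragraph tags into one document string.
+    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
+
+text = scrape_paragraph_text("https://en.wikipedia.org/wiki/Readability")
+```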
 
-### Text Metrics to Assess Readability
 The Flesch-Kincaid readability score [@Kincaid:1975] is the most commonly used metric to assess readability, and was used here to quantify the complexity of each text item.
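+
+For reference, the Flesch-Kincaid grade level is a weighted combination of mean sentence length and mean syllables per word. A minimal sketch (the vowel-group syllable counter is a simplifying assumption; a library such as `textstat` computes the score directly):
+
+```python
+# Sketch of the Flesch-Kincaid grade level:
+# 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
+import re
+
+def count_syllables(word):
+    # Naive heuristic (an assumption): count groups of consecutive vowels.
+    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
+
+def flesch_kincaid_grade(text):
+    sentences = max(1, len(re.findall(r"[.!?]+", text)))
+    words = re.findall(r"[A-Za-z']+", text)
+    syllables = sum(count_syllables(w) for w in words)
+    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
+```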
 
-### Reference Texts used for Analysis
-We include a number of available reference texts with varying complexity. 
+Before analyzing the user input, we query and analyze a number of available text repositories of varying complexity. The Flesch-Kincaid readability score is calculated for each item in each repository.
 
 | Text Source | Mean Complexity (Flesch-Kincaid grade) | Description |
 |----------|----------|:-------------:|
@@ -44,10 +42,12 @@ We include a number of available reference texts with varying complexity.
 | Post-Modern Essay Generator (PMEG)  | 16.5 | generates output consisting of sentences that obey the rules of written English, but without constraints on the semantic conceptual references   |
 | Art Corpus                       | 18.68  | a library of scientific papers published in The Royal Society of Chemistry |
 
+The author name entered by the user is queried through Google Scholar, returning articles attributed to that author. The Flesch-Kincaid readability score is then calculated for each of these articles.
+
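+A sketch of this per-author scoring step (`fetch_author_texts` is a hypothetical placeholder for the Google Scholar query above, and the `textstat` package is assumed for scoring):
+
+```python
+# Sketch: score each retrieved article and summarize the author's mean.
+# `fetch_author_texts` is a hypothetical placeholder that is assumed to
+# return one text string per retrieved article.
+from statistics import mean
+import textstat  # assumed here for scoring
+
+def author_readability(author_name, fetch_author_texts):
+    texts = fetch_author_texts(author_name)
+    scores = [textstat.flesch_kincaid_grade(t) for t in texts]
+    return scores, mean(scores)
+```
+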
 ### Plot Information 
-Entering an author name into the tool generates a histogram binned by readability score, which is initially populated exclusively by the ART corpus [@Soldatova:2007] data. We use this data because it is a pre-established library of scientific papers. The resulting graph displays the mean writing complexity of the entered author against a distribution of ART corpus content.
+Entering an author name generates a histogram binned by readability score, which is initially populated exclusively with ART corpus [@Soldatova:2007] data. We use these data because the corpus is a pre-established library of scientific papers. The resulting graph displays the mean writing complexity of the entered author against the distribution of ART corpus content.
 
-Upgoer5 [@Kuhn:2016], Wikipedia, and PMEG [@Bulhak:1996] libraries are also scraped and analyzed, with their mean readability scores applied to the histogram plot to contextualize the complexity of the ART corpus data with other text repositories of known complexity. 
+The mean readability scores of the Upgoer5 [@Kuhn:2016], Wikipedia, and PMEG [@Bulhak:1996] libraries are also added to the histogram plot to contextualize the complexity of the ART corpus data against other text repositories of known complexity. 
 
 We also include mean readability scores from two scholarly reference papers, Science Declining Over Time [@Kutner:2006] and Science of Writing [@Gopen:1990], which discuss writing to a broad audience in an academic context. We use these to demonstrate the feasibility of discussing complex content using more accessible language.
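+
+A minimal sketch of how such a plot can be assembled (all scores below are stand-in values, except the PMEG and ART corpus means taken from the table above):
+
+```python
+# Sketch: histogram of ART corpus scores with reference means overlaid.
+# The sampled distribution and the Upgoer5/Wikipedia values are
+# illustrative stand-ins, not measured results.
+import matplotlib.pyplot as plt
+import numpy as np
+
+art_scores = np.random.normal(18.68, 2.0, 1000)  # stand-in for ART corpus
+references = {"Upgoer5": 6.0, "Wikipedia": 12.0, "PMEG": 16.5}
+author_mean = 14.0                               # stand-in for queried author
+
+plt.hist(art_scores, bins=30, alpha=0.5, label="ART corpus")
+for name, score in references.items():
+    plt.axvline(score, linestyle="--", label=name)
+plt.axvline(author_mean, color="red", label="entered author")
+plt.xlabel("Flesch-Kincaid grade level")
+plt.ylabel("count")
+plt.legend()
+plt.show()
+```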
 
@@ -79,8 +79,8 @@ docker run -v $HOME/data_words russelljarvis/science_accessibility_user "R Gerki
 ```
 ![Specific Author Relative to Distribution](for_joss_standard_dev.png)
 
-This tool also allows academic authors in the same field to compete with each other for the lowest average reading grade level. Public competitions and leader boards often incentivise good practices.
-See for example in the figure below, two authors who publish in the field: "Computational Neuroscience" will likely have a different mean reading grade levels:
+This tool also allows the entry of two author names to compare which author's text has the lower average reading grade level. Public competitions and leaderboards often incentivize good practices, and may also help to improve readability scores over time.
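+
+A minimal sketch of the comparison (`mean_grade` is a hypothetical helper wrapping the per-author scoring step sketched earlier):
+
+```python
+# Sketch: compare two authors' mean reading grade levels.
+# `mean_grade` is a hypothetical callable: author name -> mean FK grade.
+def compare_authors(name_a, name_b, mean_grade):
+    a, b = mean_grade(name_a), mean_grade(name_b)
+    lower = name_a if a < b else name_b
+    print(f"{name_a}: {a:.1f}  {name_b}: {b:.1f}  -> lower grade level: {lower}")
+```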
+
 ![Specific Author Relative to Distribution](compete.png)
 
 
@@ -89,7 +89,7 @@ We have created a command line interface (CLI) for using this tool. However, we
 
 While the readability of the ART Corpus is comparable to that of other scientific journals [2], a future goal is to incorporate a larger repository of journal articles to compute the distribution of readability. In addition, we are interested in the general readability of the web, and aim to add search engine queries over broad-ranging lists of search terms to assess the readability of an eclectic range of text. This would further contextualize the readability of published scientific work relative to topics the public engages with on a daily basis.
 
-One final goal is to incorporate other readability metrics, including information entropy, word length and compression rations, subjectivity, and reading ease scores. While the Flesch-Kincaid readability score is the most common readability metric, including other metrics will serve to provide more feedback to the user with regard to the complexity and structure of their written text.
+A final goal is to incorporate other readability metrics, including information entropy, word length, compression ratios, subjectivity, and reading ease scores. While the Flesch-Kincaid readability score is the most common readability metric, including other metrics will provide more robust feedback to the user with regard to the complexity and structure of their written text.
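+
+As a sketch of how two such metrics might be computed (one plausible reading, not a committed design):
+
+```python
+# Sketch: Shannon entropy of the character distribution and a zlib
+# compression ratio, two candidate complexity metrics.
+import math
+import zlib
+from collections import Counter
+
+def char_entropy(text):
+    counts = Counter(text)
+    total = len(text)
+    return -sum((c / total) * math.log2(c / total) for c in counts.values())
+
+def compression_ratio(text):
+    raw = text.encode("utf-8")
+    return len(zlib.compress(raw)) / len(raw)  # lower = more compressible
+```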