Design and develop a fully functional data product that addresses your identified business problem or organizational need. Include each of the following attributes as they are the minimum required elements for the product:
RhymeStorm is an application to help singers and songwriters brainstorm new lyrics.
** Descriptive And Predictive Methods
*** Descriptive Method
**** Most Common Grammatical Structures In A Set Of Lyrics
Here is the code to generate a report on the most common sentence structures given a directory of lyrics files.
By filtering songs on metrics such as popularity or number of awards, we can use this software package to determine the most common grammatical phrase structures within each filtered category.
Since much of the data a record label might want to categorize songs by is likely proprietary, filtering the songs by any particular metric is the responsibility of the user.
Once the songs are filtered and categorized, they can be passed to this software, which returns a list of the most common grammatical structures.
In the example below, a simple noun phrase is the most common structure with 6 occurrences, tied with a sentence composed of a prepositional phrase, a verb phrase, and an adjective.
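Conceptually, the report reduces to parsing each line and counting the resulting grammatical structures, roughly as in the sketch below. Note that ~parse-structure~ and ~structure-report~ are hypothetical, illustrative names rather than the project's actual API; the actual implementation follows.

#+begin_src clojure :eval no
;; Sketch only. `parse-structure` stands in for the real parser, which
;; returns a grammatical structure such as [:NP] or [:PP :VP :ADJ]
;; for a single line of lyrics.
(defn structure-report
  "Given a parser and a seq of lyric lines, return the structures
  with their counts, most frequent first."
  [parse-structure lines]
  (->> lines
       (map parse-structure)
       frequencies
       (sort-by val >)))

;; Usage with a stubbed parser that calls every line a noun phrase:
(structure-report (constantly [:NP]) ["a lone wolf" "the dark night"])
;; => ([[:NP] 2])
#+end_src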
#+begin_src clojure :results value :session main :exports both
(require '[com.owoga.corpus.markov :as markov]
         '[com.owoga.prhyme.nlp.core :as nlp]
         '[clojure.string :as string]
*** Prescriptive Method
**** Most Likely Word To Follow A Given Phrase
To help songwriters think of new lyrics, we provide an API that returns a list of words that commonly follow or precede a given phrase.
Models can be trained on different genres or categories of songs so that the recommended lyric completions fit the desired style.
In the example below, we provide a seed suffix of "bother me" and ask the software to predict the most likely words to precede that phrase. The resulting most popular phrases are "don't bother me", "doesn't bother me", "to bother me", "won't bother me", etc.
| even | 3 |
| shouldn't | 3 |
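As an illustration of the shape of such a lookup, the sketch below ranks candidate preceding words using a toy model stored as a plain nested map; the data and the ~likely-preceding-words~ helper are illustrative only, not the project's actual API.

#+begin_src clojure :eval no
;; Toy reverse-order model: keys are suffix tokens read right-to-left,
;; values map candidate preceding words to observed counts.
(def toy-model
  {"me" {"bother" {"don't" 4, "doesn't" 3, "won't" 2}}})

(defn likely-preceding-words
  "Return the words observed to precede `suffix` (a vector of tokens),
  most frequent first."
  [model suffix]
  (->> (get-in model (reverse suffix))
       (sort-by val >)
       (map key)))

(likely-preceding-words toy-model ["bother" "me"])
;; => ("don't" "doesn't" "won't")
#+end_src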
** Datasets
The dataset currently in use is in ~/dark-corpus~. This dataset was generated from the publicly available lyrics at http://darklyrics.com.
Further datasets will need to be provided by the end-user.
** Decision Support Functionality
*** Choosing Words For A Lyric Based On Markov Likelihood
Entire phrases can be generated using the previously mentioned functionality for listing likely preceding and following words.
The software can be seeded with a simple "end-of-sentence" or "beginning-of-sentence" token and asked to work backwards to build a phrase that meets certain criteria.
The user can supply criteria such as restrictions on the number of syllables, the number of words, the rhyme scheme, and so on.
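A simplified sketch of that backwards generation loop is shown below; ~lookup-preceding~ stands for a lookup like the one sketched earlier, the only criterion enforced here is a target word count, and the helper names are illustrative rather than the project's actual API.

#+begin_src clojure :eval no
(defn generate-phrase-backwards
  "Grow a phrase backwards from the end-of-sentence token until it
  contains `target-word-count` words. `lookup-preceding` maps a suffix
  (vector of tokens) to candidate preceding words, best first.
  A real implementation would also check syllables and rhyme scheme,
  and might sample candidates instead of always taking the best one."
  [lookup-preceding target-word-count]
  (loop [phrase (list "</s>")]
    (let [words (remove #{"</s>"} phrase)]
      (if (>= (count words) target-word-count)
        words
        (if-let [candidate (first (lookup-preceding (vec (take 2 phrase))))]
          (recur (cons candidate phrase))
          words)))))
#+end_src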
*** Choosing Words To Complete A Lyric Based On Rhyme Quality
Another part of the decision support functionality is filtering and ordering predicted words based on their rhyme quality.
The official definition of a "perfect" rhyme is two words whose phonemes match from their primary stress onward.
For example, "technology" and "ecology" both have their primary stress on the second syllable. Their first syllables differ, but from the stressed syllable on, their phones match exactly.
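With CMU-dictionary-style phones, where a ~1~ suffix marks the primary stress, that check can be sketched as follows; the helper names and phone sequences are illustrative, not the project's actual API.

#+begin_src clojure :eval no
(require '[clojure.string :as string])

(defn rhyme-tail
  "Phones from the primary-stressed vowel (marked with \"1\") onward."
  [phones]
  (drop-while #(not (string/includes? % "1")) phones))

(defn perfect-rhyme?
  [phones-a phones-b]
  (= (rhyme-tail phones-a) (rhyme-tail phones-b)))

;; "technology" and "ecology" share every phone from the stressed vowel on.
(perfect-rhyme? ["T" "EH0" "K" "N" "AA1" "L" "AH0" "JH" "IY0"]
                ["IH0" "K" "AA1" "L" "AH0" "JH" "IY0"])
;; => true
#+end_src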
A rhyme that might be useful to a songwriter but that doesn't fit the definition of a "perfect" rhyme is "technology" with "economy". Those two words only barely break the rules for a perfect rhyme: their vowel phones match from the primary stress to the end, but the consonant phones don't all match.
Singers and songwriters have artistic freedom, and imperfect rhymes can serve as a fallback.
Therefore, this software provides functionality to sort rhymes so that those closest to perfect come first in the ordering.
In the example below, the first 20 or so rhymes are perfect, but then "hypocrisy" is listed as rhyming with "technology". This is for the reason just mentioned: it is close to a perfect rhyme and is still of interest to singers and songwriters.
#+begin_src clojure :results value table :colnames yes :session main :exports both
| nanotechnology | 5 | 7 |
| geology | 3 | 7 |
| ontology | 2 | 7 |
| morphology | 2 | 7 |
| seismology | 1 | 7 |
| urology | 1 | 7 |
| doxology | 0 | 7 |
| neurology | 0 | 7 |
| hypocrisy | 723 | 6 |
| democracy | 238 | 6 |
| atrocity | 224 | 6 |
| philosophy | 181 | 6 |
| equality | 109 | 6 |
| ideology | 105 | 6 |
** Featurizing, Parsing, Cleaning, And Wrangling Data
The data processing code is in ~prhyme~.
Each line gets tokenized using a regular expression to split the string into tokens.
#+begin_src clojure :session main
(def re-word
  "Regex for tokenizing a string into words
  (including contractions and hyphenations),
#+end_src
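Purely as an illustration (the real pattern in ~prhyme~ handles more cases), a tokenizer in this spirit, which also lowercases so that "Foo" and "foo" compare as the same word, might look like this; ~example-re-word~ and ~tokenize-line~ are illustrative names.

#+begin_src clojure :eval no
(require '[clojure.string :as string])

;; Illustrative pattern only: runs of letters with optional internal
;; apostrophes or hyphens (contractions and hyphenations).
(def example-re-word
  #"[A-Za-z]+(?:['-][A-Za-z]+)*")

(defn tokenize-line
  "Lowercase a line and split it into word tokens."
  [line]
  (map string/lower-case (re-seq example-re-word line)))

(tokenize-line "Don't bother me")
;; => ("don't" "bother" "me")
#+end_src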
** Data Exploration And Preparation
The primary data structure supporting exploration and preparation of the data is a Markov Trie.
The Trie data structure supports a ~lookup~ function that returns the child trie at a given lookup key and a ~children~ function that returns all of the immediate children of a particular Trie.
#+begin_src clojure :eval no
(defprotocol ITrie
  (children [self] "Immediate children of a node.")
  (lookup [self ^clojure.lang.PersistentList ks] "Return node at key."))
(get (.children- trie) (first k))))))
#+end_src
** TODO Data Visualization Functionalities For Data Exploration And Inspection
- graph of phrase complexity on one axis and rhyme quality on another axis.
** TODO Implementation Of Interactive Queries
Interactive query capability is available at [[https://darklimericks.com/wgu]].
** TODO Implementation Of Machine-Learning Methods And Algorithms
The machine learning method chosen for this software is a Hidden Markov Model.
Each line of each song is split into "tokens" (words) and then the previous ~n - 1~ tokens are used to predict the ~nth~ token.
The algorithm is implemented in several parts which are demonstrated below.
1. Read each song line-by-line.
2. Split each line into tokens.
3. Partition the tokens into sequences of length ~n~.
4. Associate each sequence into a Trie and update the value representing the number of times that sequence has been encountered.
That is the process for building the Hidden Markov Model.
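A minimal sketch of those four steps, using a plain nested map in place of the project's trie implementation, might look like this (the ~train-ngrams~ helper is illustrative only):

#+begin_src clojure :eval no
(require '[clojure.string :as string])

(defn train-ngrams
  "Count every n-gram in the given lines using a nested map as a toy trie.
  Each path of n tokens ends at a count of how often that sequence occurred."
  [n lines]
  (reduce
   (fn [trie line]
     (let [tokens (string/split (string/lower-case line) #"\s+")]
       (reduce
        (fn [trie ngram]
          (update-in trie (concat ngram [:count]) (fnil inc 0)))
        trie
        (partition n 1 tokens))))
   {}
   lines))

(train-ngrams 2 ["don't bother me" "won't bother me"])
;; => {"don't"  {"bother" {:count 1}},
;;     "bother" {"me" {:count 2}},
;;     "won't"  {"bother" {:count 1}}}
#+end_src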
The algorithm for generating predictions from the HMM is as follows.
1. Look up the ~n - 1~ tokens in the Trie.
2. Normalize the frequencies of the children of the ~n - 1~ tokens into percentage likelihoods.
3. Account for "unseen ~n grams~" (Simple Good Turing).
4. Sort results by maximum likelihood.
#+begin_src clojure :session main :results output :exports both
The results above show a sample of 10 elements in a 1-to-3-gram trie.
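To illustrate the prediction side (steps 1, 2, and 4 above, leaving out the Simple Good Turing smoothing), the toy nested-map trie from the earlier ~train-ngrams~ sketch can be queried like this:

#+begin_src clojure :eval no
(defn predict-next
  "Given the n-1 preceding tokens, return candidate next words with
  their likelihoods, most likely first. Smoothing of unseen n-grams
  is omitted from this sketch."
  [trie preceding-tokens]
  (let [candidates (dissoc (get-in trie preceding-tokens) :count)
        total      (reduce + (map (comp :count val) candidates))]
    (->> candidates
         (map (fn [[word child]] [word (/ (:count child) total)]))
         (sort-by second >))))

(def toy-trie (train-ngrams 2 ["don't bother me" "won't bother me"]))

(predict-next toy-trie ["bother"])
;; => (["me" 1])
#+end_src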
The code sample below demonstrates training a Hidden Markov Model on a set of lyrics where each line is reversed. This model is useful for predicting words backwards, so that you can start with the rhyming end of a word or phrase and generate backwards towards the start of the lyric.
It also performs compaction and serialization. Song lyrics are typically provided as text files, and reading files from disk is expensive, so we perform that expensive training process only once and save the resulting Markov model in a more memory-efficient format.
#+begin_src clojure
(defn train-backwards
  "For building lines backwards so they can be seeded with a target rhyme."
)
#+end_src
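As a minimal illustration of that save-once, load-later idea (the real code may use a more compact encoding than EDN), a model that is plain Clojure data can be written and read back with core functions:

#+begin_src clojure :eval no
(require '[clojure.edn :as edn])

(defn save-model!
  "Serialize a trained model (plain Clojure data) to a file."
  [model path]
  (spit path (pr-str model)))

(defn load-model
  "Read a previously saved model back into memory."
  [path]
  (edn/read-string (slurp path)))

(save-model! {"bother" {"me" {:count 2}}} "/tmp/model.edn")
(load-model "/tmp/model.edn")
;; => {"bother" {"me" {:count 2}}}
#+end_src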
** Functionalities To Evaluate The Accuracy Of The Data Product
Since creative brainstorming is the goal, "accuracy" is subjective.
We can, however, measure and compare language generation algorithms against how "expected" a phrase is given the training data. This measurement is "perplexity".
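For reference, perplexity is conventionally defined over a phrase of \(N\) tokens as \(PP(w_1 \ldots w_N) = P(w_1 \ldots w_N)^{-1/N}\), so a lower value means the model found the phrase less surprising. The values reported below appear to be on a log scale, where numbers closer to zero indicate a less surprising phrase.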
#+begin_src clojure :session main :exports both :results output
"%s is the perplexity of \"%s\" \"hole\" \"</s>\" \"</s>\""
(->> seed
(map database)
(markov/perplexity 4 markov-tight-trie))
word))))
["a" "this" "that"])
nil)
#+end_src
#+RESULTS:
: "a" has preceeded "hole" "</s>" "</s>" a total of 250 times
: "this" has preceeded "hole" "</s>" "</s>" a total of 173 times
: "that" has preceeded "hole" "</s>" "</s>" a total of 45 times
: -12.184088569934774 is the perplexity of "a" "hole" "</s>" "</s>"
: -12.552930899563904 is the perplexity of "this" "hole" "</s>" "</s>"
: -13.905719644461469 is the perplexity of "that" "hole" "</s>" "</s>"
The results above make intuitive sense. The most common word to precede "hole" at the end of a sentence is "a". There are 250 instances of sentences ending in "... a hole.", compared to 173 instances of "... this hole." and 45 instances of "... that hole.".
Therefore, "... a hole." has the lowest "perplexity".
This standardized measure of accuracy can be used to compare different language generation algorithms.
** Security Features
Artists and songwriters place a lot of value on the secrecy of their content. Therefore, all communication with the web-based interface occurs over a secure connection using HTTPS.
Security certificates are generated using Let's Encrypt, and an Nginx web server handles the SSL termination.
With this precaution in place, attackers are not able to snoop on the content that songwriters send to or receive from the servers.
** TODO Tools To Monitor And Maintain The Product
- Script to auto-update SSL cert
- Enable NGINX dashboard?
** TODO A User-Friendly, Functional Dashboard That Includes At Least Three Visualization Types