#+TITLE: RhymeStorm™ - WGU CSCI Capstone Project
* WGU Evaluator Notes
Hello! I hope you enjoy your time with this evaluation!

Here's a quick introduction to help you navigate this project.

The document you are reading now contains or points to each of the requirements listed at the course task overview page for C964.

The section immediately following this one contains notes on how to view and run the software locally. In addition, I'm hosting a demo of the application at https://darklimericks.com/wgu.

After I describe the steps to initialize a development environment, you'll find a [[#letter-of-transmittal][Letter Of Transmittal]], [[#executive-summary][Technical Executive Summary]], [[#requirements-documentation][links to the final product and details of how it meets each requirement]], and the [[#remaining-documentation][remaining required documentation]].
* Evaluation Technical Documentation
It's probably not necessary for you to replicate my development environment in order to evaluate this project. You can access the deployed application at https://darklimericks.com/wgu and the libraries and supporting code that I wrote for this project at https://github.com/eihli/clj-tightly-packed-trie, https://github.com/eihli/syllabify, and https://github.com/eihli/prhyme. The web server and web application code is not publicly hosted, but you will find it uploaded with my submission as a ~.tar~ archive.
** How To Initialize Development Environment
*** Required Software

- [[https://www.docker.com/][Docker]]
- [[https://clojure.org/releases/downloads][Clojure Version 1.10+]]
- [[https://github.com/clojure-emacs/cider][Emacs and CIDER]]
*** Steps
1. Run ~./db/run.sh && ./kv/run.sh~ to start the docker containers for the database and key-value store.
   a. The ~run.sh~ scripts only need to run once. They initialize development data containers. Subsequent development can continue with ~docker start db && docker start kv~.
2. Start a Clojure REPL in Emacs, evaluate the ~dev/user.clj~ namespace, and run ~(init)~.
3. Visit ~http://localhost:8000/wgu~.
** How To Run Software Locally
*** Requirements

- [[https://www.java.com/download/ie_manual.jsp][Java]]
- [[https://www.docker.com/][Docker]]

*** Steps

1. Run ~./db/run.sh && ./kv/run.sh~ to start the docker containers for the database and key-value store.
   a. The ~run.sh~ scripts only need to run once. They initialize development data containers. Subsequent development can continue with ~docker start db && docker start kv~.
2. The application's ~jar~ builds with a ~make~ run from the root directory. (See [[file:../Makefile][Makefile]]).
3. Navigate to the root directory of this git repo and run ~java -jar darklimericks.jar~.
4. Visit http://localhost:8000/wgu.
* A. Letter Of Transmittal
:PROPERTIES:
:CUSTOM_ID: letter-of-transmittal
:END:
** Problem Summary
Songwriters, artists, and record labels can save time and discover better lyrics with the help of a machine learning tool that supports their creative endeavours.

Songwriters have several old-fashioned tools at their disposal, including dictionaries and thesauruses. But machine learning exposes a new set of powerful possibilities. Using simple machine learning techniques, it is possible to automatically generate vast numbers of lyrics that match specified criteria for rhyming, syllable count, genre, and more.
** Benefits
How many sensible phrases can you think of that rhyme with "war on poverty"? What if I add the restriction that the phrase must be exactly 14 syllables? That's a common restriction when a songwriter is trying to match the meter of a previous line. What if I add another restriction that there must be primary stress at certain spots in that 14-syllable phrase?

This is the process that songwriters go through all day. It's a process that gets little help from traditional tools like dictionaries and thesauruses.

And this is a process that is perfect for machine learning. Machine learning can learn the most likely grammatical structure of phrases and can make predictions about likely words that follow a given sequence of other words. Computers can iterate through millions of words, checking for restrictions on rhyme, syllable count, and more. The most tedious part of lyric generation can be automated with machine learning software, leaving the songwriter free to cherry-pick from the best lyrics and make minor touch-ups to perfect them.
** Product - RhymeStorm™
RhymeStorm™ is a tool to help songwriters brainstorm. It provides lyrics automatically generated from training data of existing songs while adhering to restrictions on rhyme scheme, meter, genre, and more.

The machine learning component described above can be implemented with a simple machine learning technique known as a Hidden Markov Model.

Without getting too technical, using a Hidden Markov Model involves using an existing lyrics database as input; the output is a function that returns the likelihood of a word following a sequence of previous words.

Many different programming languages and algorithms are sufficient to handle the other parts of the product, like splitting a word into phonetic sounds, finding rhymes, and matching stress between phrases.

An initial version of the software will be trained on the heavy metal lyrics database at http://darklyrics.com, and a website will be created where users can type in a "seed" sequence of word(s) and the model will output a variety of possible completions.

This auto-complete functionality will be similar to the auto-complete commonly found in phone keyboard applications that help users type faster on touchscreens.
** Data
The initial model will be trained on the lyrics from http://darklyrics.com. This is a publicly available data set with minimal meta-data. Record labels will have more valuable datasets that include meta-data along with the lyrics, such as the date the song was popular, the number of radio plays of the song, the profit of the song/artist, etc...

The software can be augmented with additional algorithms to account for the type of meta-data that a record label may have. The augmentations can happen in iterative software development cycles, using Agile methodologies.
** Objectives
This software will accomplish its primary objective if it makes its way into the daily toolkit of a handful of singers/songwriters.

Several secondary objectives are also desirable and reasonably expected. The architecture of the software lends itself to existing as several independently useful modules.

For example, the [[https://en.wikipedia.org/wiki/Hidden_Markov_model][Markov Model]] can be conveniently backed by a [[https://en.wikipedia.org/wiki/Trie][Trie data structure]]. This Trie data structure can be released as its own software package and used in any application that benefits from prefix matching.

Another example is the package that turns phrases into phones (symbols of pronunciation). That package can find use in a number of natural language processing and natural language generation tasks, aside from the task required by this particular project.
** Development Methodology - Agile
This project will be developed with an iterative Agile methodology. Since a large part of data science and machine learning is exploration, this project will benefit from ongoing exploration in tandem with development.

Additionally, the developer(s) working on the project won't have (and won't need to have) access to the data sets that songwriters and record labels may have. Work can begin immediately with an iterative approach and future data sets can be integrated as they become available.

The prices quoted below are for an initial minimum-viable-product that will serve as a proof-of-concept. Future contracts can be negotiated for ongoing development at similar rates.
** Costs
Funding requirements are minimal. The initial dataset is public and freely available. On a typical consumer laptop, Hidden Markov Models can be trained on fairly large datasets in a short time, and the training doesn't require the use of expensive hardware like the GPUs used to train Deep Neural Networks.

For the initial product, the only development expense would be the hourly rate of a full-stack developer. The ongoing expense for the website hosting the user interface would be roughly $20 to $200 per month depending on how many users access the site at the same time.

These are my estimates for the time and cost of different aspects of initial development.

| Task                     | Hours | Cost   |
|--------------------------+-------+--------|
| Trie                     |    60 | $600   |
| Phonetics                |    30 | $300   |
| HMM Training Algorithms  |    60 | $600   |
| Web User Interface       |    80 | $800   |
| Web Server               |    60 | $600   |
| Testing                  |    20 | $200   |
| Quality Assurance        |    20 | $200   |
| Total                    |   330 | $3,300 |
** Stakeholder Impact
The only stakeholders in the project will be the record labels and songwriters. I describe the impact to them in the [[Benefits]] section above.
** Ethical And Legal Considerations
Web scraping, the method used to obtain the initial dataset from http://darklyrics.com, is protected given the ruling in [[https://en.wikipedia.org/wiki/HiQ_Labs_v._LinkedIn][hiQ Labs v. LinkedIn]].

The use of publicly available data in generative works is less clear. But Microsoft's lawyers deemed it sound, given the recent release of GitHub Copilot ([[https://www.theverge.com/2021/7/7/22561180/github-copilot-legal-copyright-fair-use-public-code]]).
** Expertise
I have 10 years of experience as a programmer and have worked extensively on frontend technologies like HTML/JavaScript, backend technologies like Django, and building libraries/packages/frameworks.

I've also been writing limericks my entire life and hold the International Limerick Imaginative Enthusiast's ILIE award for the years 2013 and 2019.
* B. Executive Summary - RhymeStorm™ Technical Notes And Requirements
:PROPERTIES:
:CUSTOM_ID: executive-summary
:END:
** Decision Support Opportunity
Songwriters expend a lot of time and effort finding the perfect rhyming word or phrase. RhymeStorm™ amplifies users' creative abilities by searching its machine learning model for sensible, proven-successful words and phrases that meet the rhyme scheme and meter requirements requested by the user.

When a songwriter needs to find likely phrases that rhyme with "war on poverty" and have 14 syllables, RhymeStorm™ will automatically generate dozens of possibilities and rank them by "perplexity" and rhyme quality. The songwriter can focus their efforts on simple touch-ups to perfect the automatically generated lyrics.
** Customer Needs And Product Description
Songwriters spend money on dictionaries, compilations of slang, thesauruses, and phrase dictionaries. They spend their time daydreaming, brainstorming, contemplating, and mixing and matching the knowledge they acquire through these traditional means.

A simple experiment you can try yourself will show that it takes between 5 and 30 seconds to look up a word in a dictionary or thesaurus. Then it takes an equal amount of time to look up each synonym, antonym, or other word that comes to mind. A few of those words may rhyme, but each word requires building an entire sentence around it that meets restrictions for sensibility, meter, and scheme.

This process can take a person hours for a single line and weeks for a single song.

Computers can process this information and sort the results by quality millions of times faster. A few minutes of a songwriter specifying filters, restrictions, and requirements can save them days of traditional brainstorming.
** Existing Products
We're all familiar with dictionaries, thesauruses, and their shortcomings.

There is a small amount of technology being applied to this problem. A popular site to find rhymes is https://www.rhymezone.com.

RhymeZone is limited in its capability. It doesn't do well finding rhymes for phrases longer than a couple of words, and it can't generate suggestions for lyric completions.
** Available Data And Future Data Lifecycle
The initial dataset will be gathered by downloading lyrics from http://darklyrics.com, and future models can be generated by downloading lyrics from other websites. Alternatively, data can be provided by record labels and combined with meta-data that the record label may have, such as how many radio plays each song gets and how much profit they make from each song.

RhymeStorm™ can offer multiple models depending on the genre or theme that the songwriter is looking for. With the initial dataset from http://darklyrics.com, all suggestions will have a heavy metal theme. But future data sets can be trained on rap, pop, or other genres.

New songs aren't released frequently enough for training to need to be an automated, ongoing process. Perhaps once a year, or whenever a new dataset becomes available, someone can run a script that will update the data models.

The script to generate data models will accept as arguments a directory containing files of songs, a filepath at which to save the completed model, and the "rank" of the Hidden Markov Model; it will generate a Trie representing the HMM and save it to disk at the specified location.
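As a rough sketch of that interface (hypothetical paths; the argument list mirrors the ~train-backwards~ function demonstrated in the Implementation Of Machine Learning Methods section below):

#+begin_src clojure :eval no
;; Hypothetical invocation; paths and values here are illustrative only.
(require '[clojure.java.io :as io])

(train-backwards
 (file-seq (io/file "/path/to/lyrics-dir")) ; directory of song files
 1                                          ; n: smallest n-gram size to record
 5                                          ; m: largest n-gram size (the model's "rank")
 "/path/to/markov-trie.bin"                 ; serialized trie
 "/path/to/database.bin"                    ; token<->id database
 "/path/to/tightly-packed-trie.bin")        ; compact, memory-efficient model
#+end_src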
Each new model can be uploaded to the web server and users can select which model they want to use.
** Methodology - Agile
RhymeStorm™ development will proceed with an iterative Agile methodology. It will be composed of several independent modules that can be worked on independently, in parallel, and iteratively.

The Trie data structure that backs the Hidden Markov Model can be worked on in isolation from any other aspect of the project. The first iteration can use a simple hash-map as a backing store. The second iteration can improve memory efficiency by using a ByteBuffer as a [[https://aclanthology.org/W09-1505.pdf][Tightly Packed Trie]]. Future iterations can continue to improve performance metrics.

The web server can be implemented initially without security measures like HTTPS and performance measures like load balancing. Future iterations can add these features as they become necessary.

The user interface can be implemented as a wireframe and extended as new functionality becomes available from the backend.

Much of data science is exploratory, and an iterative Agile approach can take advantage of delaying decisions while information is gathered.
** Deliverables
- Supporting libraries source code
- Application source code
- Deployed application

The supporting libraries for this project are available as open source repositories on GitHub.

[[https://github.com/eihli/clj-tightly-packed-trie][Tightly Packed Trie]]

[[https://github.com/eihli/phonetics][Phonetics and Syllabification]]

[[https://github.com/eihli/prhyme][Data Processing, Markov, and Rhyme Algorithms]]

The trained data model and web interface have been deployed at the following address, and the code will be provided in an archive file.

[[https://darklimericks.com/wgu][Web GUI and Documentation]]
** Implementation Plan And Anticipations
I'll start by writing and releasing the supporting libraries and packages: Tries, Syllabification/Phonetics, Rhyming.

Then I'll write a website that imports and uses those libraries.

Since I'll be writing and releasing these packages iteratively as open source, I'll share them publicly as I progress and can use feedback to improve them before RhymeStorm™ takes its final form.

In anticipation of user growth, I'll be deploying the final product on DigitalOcean Droplets. They are virtual machines with resources that can be resized to meet growing demands or shrunk to save money in times of low traffic.
** Requirements Validation And Verification
For the known requirements, I'll personally perform manual tests and quality assurance. This is a small enough project that one individual can thoroughly test all of the primary requirements.

Since the project is broken down into isolated sub-projects, unit tests will be added to the sub-projects to make sure they meet their own goals and performance standards.

The final website will integrate multiple technologies, and the integrations won't be ideal for unit testing. But as mentioned, the user acceptance requirements are not major and can be manually ensured.
** Programming Environments And Costs
One of the benefits of a Hidden Markov Model is its relative computational affordability when compared to other machine learning techniques, like Deep Neural Networks.

We don't require a GPU or long training times on powerful computers. A 4-gram Hidden Markov Model can be trained on the more than 200,000 songs obtained from http://darklyrics.com in just a few hours on a consumer laptop.

The training process never uses more than 20 gigabytes of RAM.

All code was written and all models were trained on a Lenovo T15G with an Intel i9 2.4 GHz processor and 32 GB of RAM.
** Timeline And Milestones
| Sprint | Start      | End        | Tasks                                                         |
|--------+------------+------------+---------------------------------------------------------------|
|      1 | 2021-07-01 | 2021-07-07 | Acquire corpus - Explore Modelling - Review Existing Material |
|      2 | 2021-07-07 | 2021-07-21 | Data Cleanup - Feature Extraction - Lyric Generation (POC)    |
|      3 | 2021-07-21 | 2021-07-28 | Lyric Generation Restrictions (Syllable-count, Rhyme, Etc...) |
|      4 | 2021-07-28 | 2021-08-14 | Train Full-scale Model - Performance Tuning                   |
|      5 | 2021-08-14 | 2021-08-21 | Create Web Interface And Visualizations                       |
|      6 | 2021-08-21 | 2021-09-07 | QA - Testing - Deploy And Release Web App                     |
* C. RhymeStorm™ Capstone Requirements Documentation
:PROPERTIES:
:CUSTOM_ID: requirements-documentation
:END:
RhymeStorm™ is an application to help singers and songwriters brainstorm new lyrics.
** Descriptive And Predictive Methods

*** Descriptive Method

**** Most Common Grammatical Structures In A Set Of Lyrics

By filtering songs by metrics such as popularity, number of awards, etc., we can use this software package to determine the most common grammatical phrase structures for different filtered categories.

Since much of the data a record label might want to categorize songs by is likely proprietary, filtering the songs by a given metric is the responsibility of the user.

Once the songs are filtered/categorized, they can be passed to this software, and a list of the most popular grammar structures will be returned.

In the example below, you'll see that a simple noun-phrase is the most popular structure with 6 occurrences, tied with a sentence composed of a prepositional-phrase, verb-phrase, and adjective.
#+begin_src clojure :results value :session main :exports both
(require '[com.owoga.corpus.markov :as markov]
         '[com.owoga.prhyme.nlp.core :as nlp]
         '[clojure.string :as string]
         '[clojure.java.io :as io])

(let [lines (transduce
             (comp
              (map slurp)
              (map #(string/split % #"\n"))
              (map (partial remove empty?))
              (map nlp/structure-freqs))
             merge
             {}
             (eduction (markov/xf-file-seq 0 10) (file-seq (io/file "/home/eihli/src/prhyme/dark-corpus"))))]
  (take 5 (sort-by (comp - second) lines)))
#+end_src
#+RESULTS:
| (TOP (NP (NNP) (.)))                                     | 6 |
| (TOP (S (NP (PRP)) (VP (VBP) (ADJP (JJ))) (.)))          | 6 |
| (INC (NP (JJ) (NN)) nil (IN) (NP (DT)) (NP (PRP)) (VBP)) | 4 |
| (TOP (NP (NP (JJ) (NN)) nil (NP (NN) (CC) (NN))))        | 4 |
| (TOP (S (NP (JJ) (NN)) nil (VP (VBG) (ADJP (JJ)))))      | 4 |
*** Predictive Method

**** Most Likely Word To Follow A Given Phrase

To help songwriters think of new lyrics, we provide an API that returns a list of words that commonly follow/precede a given phrase.

Models can be trained on different genres or categories of songs. This will ensure that recommended lyric completions are apt.

In the example below, we provide a seed suffix of "bother me" and ask the software to predict the most likely words that precede that phrase. The resulting most popular phrases are "don't bother me", "doesn't bother me", "to bother me", "won't bother me", etc...
#+begin_src clojure :session main :exports both
(require '[com.darklimericks.server.models :as models]
         '[com.owoga.trie :as trie])

(let [seed ["bother" "me"]
      seed-ids (map models/database seed)
      lookup (reverse seed-ids)
      results (trie/children (trie/lookup models/markov-trie lookup))]
  (->> results
       (map #(get % []))
       (sort-by (comp - second))
       (map #(update % 0 models/database))
       (take 10)))
#+end_src
#+RESULTS:
| don't     | 36 |
| doesn't   | 21 |
| to        | 14 |
| won't     |  9 |
| really    |  5 |
| not       |  4 |
| you       |  4 |
| it        |  3 |
| even      |  3 |
| shouldn't |  3 |
** Datasets
The dataset currently in use was generated from the publicly available lyrics at http://darklyrics.com.

Further datasets will need to be provided by the end-user.
** Decision Support Functionality
*** Choosing Words For A Lyric Based On Markov Likelihood

Entire phrases can be generated using the previously mentioned functionality of generating lists of likely prefix/suffix words.

The software can be seeded with a simple "end-of-sentence" or "beginning-of-sentence" token and asked to work backwards to build a phrase that meets certain criteria.
The user can supply criteria such as restrictions on the number of syllables, number of words, rhyme scheme, etc...
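Below is a minimal, greedy sketch of that backwards generation loop, reusing the ~models~ namespace and the reversed-seed lookup convention from the prediction example above. A real implementation would sample among the children rather than always taking the most frequent word, and would check the user's criteria at each step.

#+begin_src clojure :eval no
(require '[com.darklimericks.server.models :as models]
         '[com.owoga.trie :as trie])

(defn generate-backwards
  "Greedy sketch: grow `phrase` backwards until the beginning-of-sentence
  token is produced. Assumes a 4-gram backwards model, so only the first
  3 words of the phrase participate in the lookup. No handling of missing
  n-grams; a production version needs a fallback for nil lookups."
  [phrase]
  (let [lookup (->> phrase (take 3) (map models/database) reverse)
        [word-id _freq] (->> (trie/lookup models/markov-trie lookup)
                             trie/children
                             (map #(get % []))
                             (sort-by (comp - second))
                             first)
        word (models/database word-id)]
    (if (= word "<s>")
      phrase
      (recur (cons word phrase)))))

(generate-backwards ["bother" "me" "</s>"])
#+end_src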
*** Choosing Words To Complete A Lyric Based On Rhyme Quality
Another part of the decision support functionality is filtering and ordering predicted words based on their rhyme quality.
The official definition of a "perfect" rhyme is that two words have matching phonemes starting from their primary stress.
For example: technology and ecology. Both of those words have a stress on the second syllable. The first syllables differ. But from the stressed syllable on, they have exactly matching phones.
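As a quick REPL illustration using the [[https://github.com/eihli/phonetics][phonetics]] library (the phones come from the CMU Pronouncing Dictionary, so exact output may vary by dictionary version):

#+begin_src clojure :eval no
(require '[com.owoga.phonetics :as phonetics])

(phonetics/get-phones "technology")
;; => [["T" "EH0" "K" "N" "AA1" "L" "AH0" "JH" "IY0"]]
(phonetics/get-phones "ecology")
;; => [["IH0" "K" "AA1" "L" "AH0" "JH" "IY0"]]
;; Both share AA1 L AH0 JH IY0 from the primary stress (AA1) onward,
;; so they rhyme perfectly.
#+end_src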
A rhyme that might be useful to a songwriter but that doesn't fit the definition of a "perfect" rhyme would be "technology" and "economy". Those two words just barely break the rules for a perfect rhyme. Their vowel phones match from their primary stress to their ends, but some of the consonant phones don't match.

Singers and songwriters have some flexibility and artistic freedom, and imperfect rhymes can be a fallback.
Therefore, this software provides functionality to sort rhymes so that rhymes that are closer to perfect are first in the ordering.
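A minimal sketch of that ordering over hypothetical ~[word frequency quality]~ tuples, sorting by rhyme quality first and breaking ties by corpus frequency:

#+begin_src clojure :eval no
;; Illustrative tuples, not real query output.
(sort-by (fn [[_word freq quality]] [(- quality) (- freq)])
         [["apology" 68 7] ["hypocrisy" 723 6] ["technology" 318 8]])
;; => (["technology" 318 8] ["apology" 68 7] ["hypocrisy" 723 6])
#+end_src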
In the example below, you'll see that the first 19 rhymes are perfect, but then "hypocrisy" is listed as rhyming with "technology". This is for the reason just mentioned: it's close to a perfect rhyme, and it's of interest to singers/songwriters.
#+begin_src clojure :results value table :colnames yes :session main :exports both
(require '[com.darklimericks.linguistics.core :as linguistics]
         '[com.darklimericks.server.models :as models])

(let [results
      (linguistics/rhymes-with-frequencies-and-rhyme-quality
       "technology"
       models/markov-trie
       models/database)]
  (->> results
       (map
        (fn [[rhyming-word
              rhyming-word-phones
              frequency-count-of-rhyming-word
              target-word
              target-word-phones
              rhyme-quality]]
          [rhyming-word frequency-count-of-rhyming-word rhyme-quality]))
       (take 25)
       (vec)
       (into [["rhyme" "frequency count" "rhyme quality"]])))
#+end_src
#+RESULTS:
| rhyme          | frequency count | rhyme quality |
| technology     |             318 |             8 |
| apology        |              68 |             7 |
| pathology      |              42 |             7 |
| mythology      |              27 |             7 |
| psychology     |              24 |             7 |
| theology       |              23 |             7 |
| biology        |              20 |             7 |
| ecology        |              11 |             7 |
| chronology     |              10 |             7 |
| astrology      |               9 |             7 |
| biotechnology  |               8 |             7 |
| nanotechnology |               5 |             7 |
| geology        |               3 |             7 |
| ontology       |               2 |             7 |
| morphology     |               2 |             7 |
| seismology     |               1 |             7 |
| urology        |               1 |             7 |
| doxology       |               0 |             7 |
| neurology      |               0 |             7 |
| hypocrisy      |             723 |             6 |
| democracy      |             238 |             6 |
| atrocity       |             224 |             6 |
| philosophy     |             181 |             6 |
| equality       |             109 |             6 |
| ideology       |             105 |             6 |
** Featurizing, Parsing, Cleaning, And Wrangling Data
The data processing code is in [[https://github.com/eihli/prhyme]].

Each line gets tokenized using a regular expression that splits the string into tokens.
#+begin_src clojure :session main :eval no
(def re-word
  "Regex for tokenizing a string into words
  (including contractions and hyphenations),
  commas, periods, and newlines."
  #"(?s).*?([a-zA-Z\d]+(?:['\-]?[a-zA-Z]+)?|,|\.|\?|\n)")
#+end_src
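For example (a REPL sketch; the token is in the second capture group):

#+begin_src clojure :eval no
(map second (re-seq re-word "Don't bother me, I'm busy."))
;; => ("Don't" "bother" "me" "," "I'm" "busy" ".")
#+end_src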
Along with tokenization, the lines get stripped of whitespace and converted to lowercase. This conversion is done so that words can be compared: "Foo" is the same as "foo".
#+begin_src clojure :eval no
(def xf-tokenize
  (comp
   (map string/trim)
   (map (partial re-seq re-word))
   (map (partial map second))
   (map (partial mapv string/lower-case))))
#+end_src
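Applied to a sequence of raw lines, the transducer yields vectors of lowercase tokens (a sketch, assuming ~re-word~ and ~clojure.string~ are loaded as above):

#+begin_src clojure :eval no
(into [] xf-tokenize ["  Don't Bother Me.  "])
;; => [["don't" "bother" "me" "."]]
#+end_src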
** Data Exploration And Preparation

The primary data structure supporting exploration of the data is a Markov Trie.

The Trie data structure supports a ~lookup~ function that returns the child trie at a certain lookup key and a ~children~ function that returns all of the immediate children of a particular Trie.

All Trie code is hosted in the git repo located at [[https://github.com/eihli/clj-tightly-packed-trie]].
#+begin_src clojure :eval no
(defprotocol ITrie
  (children [self] "Immediate children of a node.")
  (lookup [self ^clojure.lang.PersistentList ks] "Return node at key."))

(deftype Trie [key value ^clojure.lang.PersistentTreeMap children-]
  ITrie
  (children [trie]
    (map
     (fn [[k ^Trie child]]
       (Trie. k
              (.value child)
              (.children- child)))
     children-))

  (lookup [trie k]
    (loop [k k
           trie trie]
      (cond
        ;; Allows `update` to work the same as with maps... can use `fnil`.
        ;; (nil? trie') (throw (Exception. (format "Key not found: %s" k)))
        (nil? trie) nil
        (empty? k) (Trie. (.key trie)
                          (.value trie)
                          (.children- trie))
        :else (recur (rest k)
                     (get (.children- trie) (first k)))))))
#+end_src
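A brief usage sketch of ~lookup~ and ~children~, using the ~com.owoga.trie~ API and the value semantics shown in the visualization example in the next section (expected values shown as comments):

#+begin_src clojure :eval no
(require '[com.owoga.trie :as trie])

(let [t (trie/make-trie "dog" "dog" "dot" "dot" "do" "do")]
  [(get (trie/lookup t "do") [])                            ; value at the "do" node
   (map #(get % []) (trie/children (trie/lookup t "do")))]) ; values of its children
;; => ["do" ("dog" "dot")]
#+end_src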
** Data Visualization Functionalities For Data Exploration And Inspection

The functionality to explore and visualize data is baked into the Trie data structure.

By simply viewing the Trie in a Clojure REPL, you can inspect the Trie's structure.
#+begin_example
(let [initialized-trie (trie/make-trie "dog" "dog" "dot" "dot" "do" "do")]
  initialized-trie)
;; => {(\d \o \g) "dog", (\d \o \t) "dot", (\d \o) "do", (\d) nil}
#+end_example
This functionality is provided by the implementations of the ~Associative~ and ~IPersistentMap~ interfaces.
#+begin_src clojure
clojure.lang.Associative
(assoc [trie opath ovalue]
  (if (empty? opath)
    (IntKeyTrie. key ovalue children-)
    (IntKeyTrie. key value (update
                            children-
                            (first opath)
                            (fnil assoc (IntKeyTrie. (first opath) nil (fast-sorted-map)))
                            (rest opath)
                            ovalue))))
(entryAt [trie key]
  (clojure.lang.MapEntry. key (get trie key)))
(containsKey [trie key]
  (boolean (get trie key)))

clojure.lang.IPersistentMap
(assocEx [trie key val]
  (if (contains? trie key)
    (throw (Exception. (format "Value already exists at key %s." key)))
    (assoc trie key val)))
(without [trie key]
  (-without trie key))
#+end_src
The Hidden Markov Model data structure doesn't lend itself to any useful graphical visualization or exploration.

** Implementation Of Interactive Queries

*** Generate Rhyming Lyrics

This interactive query will return a list of rhyming phrases for any word or phrase you enter.

For example, the phrase ~don't bother me~ returns the following results.
| Rhyme          | Quality | Lyric                                                             | Perplexity           |
|----------------+---------+-------------------------------------------------------------------+----------------------|
| forsee         |       5 | i'm not one of us forsee                                          | -0.150812027039802   |
| wholeheartedly |       5 | purification has replaced wholeheartedly                          | -0.23227389702753784 |
| merci          |       5 | domine, non merci                                                 | -0.2567394520839273  |
| oversea        |       5 | i let's torch oversea                                             | -0.3940312599117676  |
| me             |       4 | that is found in me                                               | -0.12708613143793374 |
| thee           |       4 | you ask thee                                                      | -0.20919974848757947 |
| free           |       4 | direct from me free                                               | -0.29056603191271085 |
| harmony        |       3 | it's time to go, this harmony                                     | -0.06634608923365708 |
| society        |       3 | mutilation rejected by society                                    | -0.10624747249791901 |
| prophecy       |       3 | take us to the brink of disaster dreamer just a savage prophecy   | -0.13097443386137644 |
| honesty        |       3 | for you my threw all that can be the power not honesty            | -0.2423380760939454  |
| constantly     |       3 | i thrust my sword into the dragon's annihilation that constantly  | -0.2474276676860057  |
| reality        |       2 | smack of reality                                                  | -0.14811632033013192 |
| eternity       |       2 | with trust in loneliness in eternity                              | -0.1507561510378151  |
| misery         |       2 | reminiscing over misery                                           | -0.29506597978960253 |

The interactive query for the above can be found at https://darklimericks.com/wgu/lyric-from-seed?seed=don%27t+bother+me. Note that, since these lyrics are randomly generated, your results will vary.
*** Complete Lyric Containing Suffix

This interactive query will return a list of lyrics completing the given suffix with randomly generated prefixes.

For example, let's say a songwriter liked the phrase ~rejected by society~ above, but they want to brainstorm different beginnings of that line.
| Lyric                                                    | OpenNLP Perplexity  | Per-word OpenNLP Perplexity |
|----------------------------------------------------------+---------------------+-----------------------------|
| we have rejected by society                              | -0.6593112258099724 | -0.03878301328293955        |
| she rejected by society                                  | -1.0992937688019973 | -0.07852098348585694        |
| i was despised and rejected by society                   | -3.5925278871864497 | -0.15619686466028043        |
| the exiled and rejected by society                       | -3.6944350673672144 | -0.21731970984513027        |
| to smell the death mutilation rejected by society        | -5.899263654566813  | -0.2458026522736172         |
| time goes yearning again only to be rejected by society  | -2.764028722852962  | -0.08375844614705946        |
| you won't survive the mutilation rejected by society     | -2.5299544352623986 | -0.09035551554508567        |
| your rejected by society                                 | -1.4840658880458661 | -0.10600470628899043        |
| dividing lands, rejected by society                      | -2.2975947244849793 | -0.12764415136027663        |
| a voice summons all angry exiled and rejected by society | -9.900290597751827  | -0.17679090353128263        |
| protect the rejected by society                          | -4.210741684291847  | -0.28071611228612314        |

The interactive query for the above can be found at https://darklimericks.com/wgu/rhyming-lyric?rhyming-lyric-target=rejected+by+society. Note again that your results will vary.
** Implementation Of Machine Learning Methods
The machine learning method chosen for this software is a Hidden Markov Model.

Each line of each song is split into "tokens" (words), and then the previous ~n - 1~ tokens are used to predict the ~nth~ token.

The algorithm is implemented in several parts, which are demonstrated below.

1. Read each song line-by-line.
2. Split each line into tokens.
3. Partition the tokens into sequences of length ~n~ (see the sketch after this list).
4. Associate each sequence into a Trie and update the value representing the number of times that sequence has been encountered.
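For instance, partitioning one tokenized line into 3-gram training sequences with ~clojure.core/partition~:

#+begin_src clojure :eval no
(partition 3 1 ["you" "won't" "bother" "me" "</s>"])
;; => (("you" "won't" "bother") ("won't" "bother" "me") ("bother" "me" "</s>"))
#+end_src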
That is the process for building the Hidden Markov Model.

The algorithm for generating predictions from the HMM is as follows.

1. Look up the ~n - 1~ tokens in the Trie.
2. Normalize the frequencies of the children of the ~n - 1~ tokens into percentage likelihoods (see the sketch after this list).
3. Account for unseen ~n-grams~ (Simple Good Turing).
4. Sort results by maximum likelihood.
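Step 2 amounts to dividing each child's frequency by the total. A minimal sketch over hypothetical ~[word frequency]~ pairs:

#+begin_src clojure :eval no
(defn normalize-frequencies
  "Turn [word frequency] pairs into [word probability] pairs."
  [children]
  (let [total (reduce + (map second children))]
    (map (fn [[word freq]] [word (/ freq total)]) children)))

(normalize-frequencies [["don't" 36] ["doesn't" 21] ["to" 14]])
;; => (["don't" 36/71] ["doesn't" 21/71] ["to" 14/71])
#+end_src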
#+begin_src clojure :session main :results output :exports both
(require '[com.owoga.prhyme.data-transform :as data-transform]
         '[clojure.pprint :as pprint])

(defn file-seq->markov-trie
  "For forwards markov."
  [database files n m]
  (transduce
   (comp
    (map slurp)
    (map #(string/split % #"[\n+\?\.]"))
    (map (partial transduce data-transform/xf-tokenize conj))
    (map (partial transduce data-transform/xf-filter-english conj))
    (map (partial remove empty?))
    (map (partial into [] (data-transform/xf-pad-tokens (dec m) "<s>" 1 "</s>")))
    (map (partial mapcat (partial data-transform/n-to-m-partitions n (inc m))))
    (mapcat (partial mapv (data-transform/make-database-processor database))))
   (completing
    (fn [trie lookup]
      (update trie lookup (fnil #(update % 1 inc) [lookup 0]))))
   (trie/make-trie)
   files))

(let [files (->> "/home/eihli/src/prhyme/dark-corpus"
                 io/file
                 file-seq
                 (eduction (data-transform/xf-file-seq 0 10)))
      database (atom {:next-id 1})
      trie (file-seq->markov-trie database files 1 3)]
  (pprint/pprint [(map (comp (partial map @database) first) (take 10 (drop 105 trie)))]))
#+end_src
#+RESULTS:
#+begin_example
[(("<s>" "call" "me")
  ("<s>" "call")
  ("<s>" "right" "</s>")
  ("<s>" "right")
  ("<s>" "that's" "proportional")
  ("<s>" "that's")
  ("<s>" "don't" "</s>")
  ("<s>" "don't")
  ("<s>" "yourself" "in")
  ("<s>" "yourself"))]
#+end_example
The results above show a sample of 10 elements from a 1-to-3-gram trie.

The code sample below demonstrates training a Hidden Markov Model on a set of lyrics where each line gets reversed. This model is useful for predicting words backwards, so that you can start with the rhyming end of a word or phrase and generate backwards to the start of the lyric.

It also performs compaction and serialization. Song lyrics are typically provided as text files. Reading files on a hard drive is an expensive process, but we can perform that expensive training process only once and save the resulting Markov Model in a more memory-efficient format.
#+begin_src clojure :session main :results output pp
(require '[com.owoga.corpus.markov :as markov]
         '[taoensso.nippy :as nippy]
         '[com.owoga.prhyme.data-transform :as data-transform]
         '[clojure.pprint :as pprint]
         '[clojure.string :as string]
         '[com.owoga.trie :as trie]
         '[com.owoga.tightly-packed-trie :as tpt])

(defn train-backwards
  "For building lines backwards so they can be seeded with a target rhyme."
  [files n m trie-filepath database-filepath tightly-packed-trie-filepath]
  (let [database (atom {:next-id 1})
        trie (markov/file-seq->backwards-markov-trie database files n m)]
    (nippy/freeze-to-file trie-filepath (seq trie))
    (println "Froze" trie-filepath)
    (nippy/freeze-to-file database-filepath @database)
    (println "Froze" database-filepath)
    (markov/save-tightly-packed-trie trie database tightly-packed-trie-filepath)
    (let [loaded-trie (->> trie-filepath
                           nippy/thaw-from-file
                           (into (trie/make-trie)))
          loaded-db (->> database-filepath
                         nippy/thaw-from-file)
          loaded-tightly-packed-trie (tpt/load-tightly-packed-trie-from-file
                                      tightly-packed-trie-filepath
                                      (markov/decode-fn loaded-db))]
      (println "Loaded trie:" (take 5 loaded-trie))
      (println "Loaded database:" (take 5 loaded-db))
      (println "Loaded tightly-packed-trie:" (take 5 loaded-tightly-packed-trie))
      (println "Successfully loaded trie and database."))))

(let [files (->> "/home/eihli/src/prhyme/dark-corpus"
                 io/file
                 file-seq
                 (eduction (data-transform/xf-file-seq 0 4)))]
  (train-backwards
   files
   1
   5
   "/tmp/markov-trie-4-gram-backwards.bin"
   "/tmp/markov-database-4-gram-backwards.bin"
   "/tmp/markov-tightly-packed-trie-4-gram-backwards.bin"))

(def markov-trie (into (trie/make-trie) (nippy/thaw-from-file "/tmp/markov-trie-4-gram-backwards.bin")))
(def database (nippy/thaw-from-file "/tmp/markov-database-4-gram-backwards.bin"))
(def markov-tight-trie
  (tpt/load-tightly-packed-trie-from-file
   "/tmp/markov-tightly-packed-trie-4-gram-backwards.bin"
   (markov/decode-fn database)))

(println "\n\n Example n-grams frequencies from Hidden Markov Model:\n")
(pprint/pprint
 (->> markov-tight-trie
      (drop 600)
      (take 10)
      (map
       (fn [[ngram-ids [id freq]]]
         [(string/join " " (map database ngram-ids)) freq]))))
#+end_src
#+RESULTS:
#+begin_example
Froze /tmp/markov-trie-4-gram-backwards.bin
Froze /tmp/markov-database-4-gram-backwards.bin
Loaded trie: ([(1 1 1 1 2) [2 2]] [(1 1 1 1 11) [11 1]] [(1 1 1 1 14) [14 2]] [(1 1 1 1 17) [17 1]] [(1 1 1 1 22) [22 1]])
Loaded database: ([hole 7] [trash 227] [come 87] [275 overkill] [breaking 205])
Loaded tightly-packed-trie: ([(1 1 1 1 2) [2 2]] [(1 1 1 1 11) [11 1]] [(1 1 1 1 14) [14 2]] [(1 1 1 1 17) [17 1]] [(1 1 1 1 22) [22 1]])
Successfully loaded trie and database.


 Example n-grams frequencies from Hidden Markov Model:

(["</s> behind from attack cowards" 1]
 ["</s> behind from attack" 1]
 ["</s> behind from" 1]
 ["</s> behind" 1]
 ["</s> hate recharging , crushing" 1]
 ["</s> hate recharging ," 1]
 ["</s> hate recharging" 1]
 ["</s> hate" 1]
 ["</s> bills and sins pay" 1]
 ["</s> bills and sins" 1])
#+end_example
** Functionalities To Evaluate The Accuracy Of The Data Product
Since creative brainstorming is the goal, "accuracy" is subjective.
We can, however, measure and compare language generation algorithms against how "expected" a phrase is given the training data. This measurement is "perplexity".
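For reference, textbook perplexity of a phrase under an ~n~-gram model is the exponentiated negative average log-likelihood, as defined below. The scores printed in the examples that follow appear to be the underlying raw log-likelihood sums (values closer to zero mean a less surprising phrase), which rank phrases in the same order.

\[
PP(w_1 \ldots w_N) = \exp\Big(-\frac{1}{N}\sum_{i=1}^{N} \ln P(w_i \mid w_{i-n+1} \ldots w_{i-1})\Big)
\]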
#+begin_src clojure :session main :exports both :results output pp
(require '[taoensso.nippy :as nippy]
         '[com.owoga.tightly-packed-trie :as tpt]
         '[com.owoga.corpus.markov :as markov])

(def database (nippy/thaw-from-file "/home/eihli/.models/markov-database-4-gram-backwards.bin"))

(def markov-tight-trie
  (tpt/load-tightly-packed-trie-from-file
   "/home/eihli/.models/markov-tightly-packed-trie-4-gram-backwards.bin"
   (markov/decode-fn database)))

(let [likely-phrase ["a" "hole" "</s>" "</s>"]
      less-likely-phrase ["this" "hole" "</s>" "</s>"]
      least-likely-phrase ["that" "hole" "</s>" "</s>"]]
  (run!
   (fn [word]
     (println
      (format
       "\"%s\" has preceded \"hole\" \"</s>\" \"</s>\" a total of %s times"
       word
       (second (get markov-tight-trie (map database ["</s>" "</s>" "hole" word]))))))
   ["a" "this" "that"])
  (run!
   (fn [word]
     (let [seed ["</s>" "</s>" "hole" word]]
       (println
        (format
         "%s is the perplexity of \"%s\" \"hole\" \"</s>\" \"</s>\""
         (->> seed
              (map database)
              (markov/perplexity 4 markov-tight-trie))
         word))))
   ["a" "this" "that"]))
#+end_src
#+RESULTS:
: "a" has preceded "hole" "</s>" "</s>" a total of 250 times
: "this" has preceded "hole" "</s>" "</s>" a total of 173 times
: "that" has preceded "hole" "</s>" "</s>" a total of 45 times
: -12.184088569934774 is the perplexity of "a" "hole" "</s>" "</s>"
: -12.552930899563904 is the perplexity of "this" "hole" "</s>" "</s>"
: -13.905719644461469 is the perplexity of "that" "hole" "</s>" "</s>"
The results above make intuitive sense. The most common word to precede "hole" at the end of a sentence is the word "a". There are 250 instances of sentences ending in "... a hole.", compared to 173 instances of "... this hole." and 45 instances of "... that hole.".

Therefore, "... a hole." has the lowest "perplexity".

This standardized measure of accuracy can be used to compare different language generation algorithms.
** Security Features
Artists/Songwriters place a lot of value in the secrecy of their content. Therefore, all communication with the web-based interface occurs over a secure connection using HTTPS.

Security certificates are generated using Let's Encrypt and an Nginx web server handles the SSL termination.

With this precaution in place, attackers will not be able to snoop the content that songwriters are sending to or receiving from the servers.
** Tools To Monitor And Maintain The Product
By having the application server behind an HAProxy load balancer, we can take advantage of the built-in HAProxy stats page for monitoring the amount of traffic and the health of the application servers.

[[file:images/stats.png]]

http://darklimericks.com:8404/stats

That page is behind basic authentication with username: admin and password: admin.

The server also includes the ~certbot~ script for updating and maintaining the SSL certificates issued by Let's Encrypt.
** A User-Friendly, Functional Dashboard That Includes At Least Three Visualization Types
You can access an example of the user interface at https://darklimericks.com/wgu.

You'll see 3 input fields.

The first input field is for a word or phrase for which you wish to find a rhyme. Submitting that field will return three visualizations to help you pick a rhyme.

The first visualization is a scatter plot of rhyming words with the "quality" of the rhyme on the Y axis and the number of times that rhyming word/phrase occurs in the training corpus on the X axis.

[[file:images/wgu-vis.png]]

The second visualization is a word cloud where the size of each word is based on the frequency with which the word appears in the training corpus.

[[file:images/wgu-vis-cloud.png]]

The third visualization is a table that lists all of the rhymes, their pronunciations, the rhyme quality, and the frequency. The table is sorted first by rhyme quality and then by frequency.

[[file:images/wgu-vis-table.png]]
* D. Documentation
:PROPERTIES:
:CUSTOM_ID: remaining-documentation
:END:
** Business Vision
Provide rhyming lyric suggestions, optionally constrained by syllable count.

*** Requirements

- [X] Given a word or phrase, suggest rhymes (ranked by quality) (Trie)
- [-] Given a word or phrase, suggest lyric completion (Hidden Markov Model)
  + [ ] (Future iteration) Restrict suggestion by syllable count
  + [X] Sort suggestions by frequency of occurrence in training corpus
  + [X] Sort suggestions by rhyme quality
  + [ ] (Future iteration) Show graph of suggestions with perplexity on one axis and rhyme quality on the other
** Data Sets
I obtained the dataset from http://darklyrics.com.

The code that I used to download all of the lyrics is at [[https://github.com/eihli/prhyme/blob/master/src/com/owoga/corpus/darklyrics.clj]].

In the interest of being nice to the owners of http://darklyrics.com, I'm keeping the files containing the lyrics private.

The trained data model is available. See ~resources/darklyrics-markov.tpt~.
** Data Analysis
I wrote code to perform certain types of data analysis, but I didn't find it useful in meeting the business requirements of this project.

For example, there is natural language processing code at [[https://github.com/eihli/prhyme/blob/master/src/com/owoga/prhyme/nlp/core.clj]] that parses a line into a grammar tree. I wrote several functions to manipulate and aggregate information about the grammar trees that compose the corpus. But I didn't use any of that information in the creation of the n-gram Hidden Markov Model, nor in the user display. For tasks related to brainstorming rhyming lyrics, that extra information lacked significant value.
** Assessment Of Hypothesis
I'll use an example output to subjectively assess the results of the project.

Below are some of the lyrics suggested to rhyme with the word "technologies".

| Rhyme        | Quality | Lyric                                                         | Perplexity           |
|--------------+---------+---------------------------------------------------------------+----------------------|
| technologies |       8 | you will tear the skin from the nuclear technologies          | -0.04695091652785746 |
| pathologies  |       7 | there's no hope for body's pathologies                        | -0.09800371561934312 |
| apologies    |       7 | swimming in a grey world dying it's time for apologies        | -0.14781111654643642 |
| chronologies |       7 | damn god damn the seed lurks in chronologies                  | -0.20912909334441387 |
| anomalies    |       6 | yesterday was born i encounter the anomalies                  | -0.19578505194217627 |
| atrocities   |       6 | there's no return and and the pimp your atrocities            | -0.21516240668167685 |
| ideologies   |       6 | entrenched ideologies                                         | -0.27407234083849513 |
| monopolies   |       6 | monopolies                                                    | -0.8472654185540912  |
| qualities    |       5 | with such qualities                                           | -0.0793752454750395  |
| policies     |       5 | stop looking at insurance policies                            | -0.11580898408112054 |
| colonies     |       5 | betwixt my heels, through the tears you collapse the colonies | -0.1610184959356118  |
| harmonies    |       5 | broken harmonies                                              | -0.18655087962492334 |
| prophecies   |       5 | seek the truth prophecies                                     | -0.24506696021938001 |
| festivities  |       4 | you have touching the festivities                             | -0.09271388814221376 |
| delicacies   |       4 | grey that consumes what it never was sun and the delicacies   | -0.14553081854920977 |
| anybody's    |       4 | your eyes, will remain violent the anybody's                  | -0.17560987263626957 |
| extremities  |       4 | i am missing extremities                                      | -0.30386279996641197 |
| casualties   |       3 | feed the casualties                                           | -0.23600199637494926 |
Do these lyrics provide benefit to the brainstorming process?

The lines "make sense" to varying degrees.

The "pathologies" line, for example, contains a sensible 2-gram of "body's pathologies". The model has learned that the possessive form of "body" is a reasonable prefix to the word "pathologies".

| pathologies | 7 | there's no hope for body's pathologies | -0.09800371561934312 |

And the beginning of that line contains a phrase, "there's no hope", that fits perfectly with the genre/context of the training set (dark heavy metal).

It's clear that the training worked. The output is relevant to the genre and grammatically reasonable.
There's also a wide variety in the output, which is beneficial for brainstorming. Suggestions range from clean and clear rhymes, like "technologies" and "pathologies", to more abstract rhymes like "technologies" and "anybody's", which some artists can manipulate creatively and effectively.

I assess that this version of the product proves viable, with exciting possibilities for improvement: making suggestions that meet certain stress patterns, preferring phrases that contain synonyms or antonyms, and more.
** Visualizations
[[file:images/rhyme-scatterplot.png]]

[[file:images/wordcloud.png]]

[[file:images/rhyme-table.png]]
** Accuracy
It's difficult to objectively test the model's accuracy, since "brainstorm new lyrics" is such a subjective goal. A valid test will require many human subjects to subjectively evaluate their performance while using the tool compared to their performance without it.

If we allow ourselves the assumption that the closer a generated phrase is to a valid English sentence, the better it is at helping a songwriter brainstorm, then one objective assessment measure can be the percentage of generated lyrics that are valid English sentences.
*** Percentage Of Generated Lines That Are Valid English Sentences

We can use [[https://opennlp.apache.org/][Apache OpenNLP]] to parse sentences into a grammar structure conforming to the parts of speech specified by the [[https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html][University of Pennsylvania's Treebank Project]].

If OpenNLP parses a line of text into a "simple declarative clause" from the Treebank Tag Set, as described [[https://catalog.ldc.upenn.edu/docs/LDC95T7/cl93.html][here]], then we consider it a valid sentence.

Using this technique on a (small) sample of 100 generated sentences reveals that 47 are valid.

This is just one of many possible assessment techniques we could use. It's simple but could be expanded to include valid phrases other than Treebank's clauses. For the purpose of having a measurement by which to compare changes to the algorithm, this suffices.
#+begin_src clojure :session main :eval no-export :results output :exports both
(require '[com.darklimericks.linguistics.core :as linguistics]
         '[com.owoga.prhyme.nlp.core :as nlp])

;; wgu-lyric-suggestions returns 20 suggestions. Each suggestion is a vector of
;; the rhyming word/quality/frequency and the sentence/parse. This function
;; returns just the sentences. The sentences can be further filtered using
;; OpenNLP to only those that are grammatically valid English sentences.
(defn sample-of-20
  []
  (->> "technology"
       linguistics/wgu-lyric-suggestions
       (map (comp first second))))

(defn average-valid-of-100-suggestions []
  (let [generated-suggestions (apply concat (repeatedly 5 sample-of-20))
        valid-english (filter nlp/valid-sentence? generated-suggestions)]
    (/ (count valid-english) 100)))

(println (average-valid-of-100-suggestions))
;; 47/100
#+end_src
#+RESULTS:
: 47/100

Where ~nlp/valid-sentence?~ is defined as follows.
#+begin_src clojure
(defn valid-sentence?
  "Tokenizes and parses the phrase using OpenNLP models from
  http://opennlp.sourceforge.net/models-1.5/

  If the parse tree has a clause as the top-level tag, then
  we consider it a valid English sentence."
  [phrase]
  (->> phrase
       tokenize
       (string/join " ")
       vector
       parse
       first
       tb/make-tree
       :chunk
       first
       :tag
       tb2/clauses
       boolean))
#+end_src
** Testing

My language of choice for this project encourages a programming technique or paradigm known as REPL-driven development. REPL stands for Read-Eval-Print-Loop. This is a way to write and test code in real-time without a compilation step. Individual code chunks can be evaluated inside an editor, resulting in rapid feedback.

Therefore, many "tests" exist as comments immediately following the code under test. For example:
#+begin_src clojure :eval no
(defn perfect-rhyme
  [phones]
  (->> phones
       reverse
       (util/take-through stress-manip/primary-stress?)
       first
       reverse
       (#(cons (first %)
               (stress-manip/remove-any-stress-signifiers (rest %))))))

(comment
  (perfect-rhyme (first (phonetics/get-phones "technology")))
  ;; => ("AA1" "L" "AH" "JH" "IY")
  )
#+end_src
The code inside that comment can be evaluated with a simple keystroke while inside an editor. It serves as both a test and a form of documentation, as you can see the input and the expected output.

Supporting libraries have a more robust test suite, since their purpose is to be used more widely across other projects, with contributions accepted from anyone.

Here is an example of the test suite for the code related to syllabification: [[https://github.com/eihli/phonetics/blob/main/test/com/owoga/phonetics/syllabify_test.clj]].
** Source Code
*** Tightly Packed Trie

This is the data structure that backs the Hidden Markov Model.

https://github.com/eihli/clj-tightly-packed-trie

*** Phonetics

This is the helper library that syllabifies and manipulates words, phones, and syllables.

https://github.com/eihli/phonetics

*** Rhyming

This library contains code for analyzing rhymes and sentence structure, and for manipulating corpora.

https://github.com/eihli/prhyme

*** Web Server And User Interface

This application is not publicly available. I'll upload it with the submission of the project.
** Quick Start
*** How To Initialize Development Environment

**** Required Software

- [[https://www.docker.com/][Docker]]
- [[https://clojure.org/releases/downloads][Clojure Version 1.10+]]
- [[https://github.com/clojure-emacs/cider][Emacs and CIDER]]

**** Steps

1. Run ~./db/run.sh && ./kv/run.sh~ to start the docker containers for the database and key-value store.
   a. The ~run.sh~ scripts only need to run once. They initialize development data containers. Subsequent development can continue with ~docker start db && docker start kv~.
2. Start a Clojure REPL in Emacs, evaluate the ~dev/user.clj~ namespace, and run ~(init)~.
3. Visit ~http://localhost:8000/wgu~.

*** How To Run Software Locally

**** Requirements

- [[https://www.java.com/download/ie_manual.jsp][Java]]
- [[https://www.docker.com/][Docker]]

**** Steps

1. Run ~./db/run.sh && ./kv/run.sh~ to start the docker containers for the database and key-value store.
   a. The ~run.sh~ scripts only need to run once. They initialize development data containers. Subsequent development can continue with ~docker start db && docker start kv~.
2. The application's ~jar~ builds with a ~make~ run from the root directory. (See [[file:../Makefile][Makefile]]).
3. Navigate to the root directory of this git repo and run ~java -jar darklimericks.jar~.
4. Visit http://localhost:8000/wgu.