Archive
A Shattered Trinity: A Symphony In D Major
I’ve always wanted to write a full symphony. I like the structures, I like the idea of having to compose for such a large group of instruments, and I like the flexibility I get beyond my more guitar-based influences, although I also enjoy writing songs in that mode. So one day I asked ChatGPT to describe an example structure for a Baroque-style symphony. That outline led to my most recent work.
There were song structures I’d never heard of, tempos I’d never composed in, time signatures I’d never thought to attempt despite my foray into odd meter on nearly every album I’ve published. Any other time I’ve tried to start an idea like this, I’ve experienced writer’s block trying to come up with compelling themes, especially since I didn’t quite understand how themes are used in classical music in the first place. But I recalled a simple bit of melody from Servings Of Sadness that had come to me as I sang it to myself while making lunch, and decided that if it was compelling enough to be sung, it would be a compelling enough start. That six-note motif became the Motto, the basis for the set of themes I crafted for my symphony, including a theme for each of the two protagonists, a Love Theme, a Battle Theme, and more.
I found many of the structures limiting at first; fugues in particular, with their predefined key changes, were difficult. On occasion I would get stuck, and when I couldn’t get myself out, I’d take some advice from ChatGPT on potential solutions. Eventually, the story became evident: two young men, friends in fact, fall for the same young woman in a Renaissance-era city filled with festivals and joy. The city itself is a character, an observer of a tragic tale of love and loss.
Three concertos and an orchestral suite later, A Shattered Trinity is born.
You can learn more about this album here.
Available on Spotify, YouTube, and Amazon Music.

An Occasional Coding Exercise Leads To Puzzle Book Sales
There was a time back in my early Amazon career, when I was managing the Independent Publisher Portal, also known as Kindle Direct Publishing, that I wanted to end-to-end test the publishing process for print on demand. The challenge with doing so was that the publishing workflows were really good at recognizing duplicative content as part of their fraud detection. This made repeated testing close to impossible, because each test required a new, unique book.
I decided to pop open Visual Studio, fire up my rusty C# skills, leverage Microsoft Word’s XML-based formatting, and write some code to automatically generate books. Because I wanted them to be legitimate, repeatable, and able to make it to the Amazon marketplace, I couldn’t just randomly generate text files.
So I wrote a program that automatically generated Sudoku puzzles. First, I wrote a randomizer that generated a random 9×9 Sudoku grid containing a complete, valid solution. Then I wrote a Sudoku solver to validate that the puzzle in its final form had a solution.
I then decided I wanted three different levels of solvable Sudokus, with about 30 of each in a book. So, for each level, I removed random digits from the puzzle, one by one, until the solver determined that the puzzle no longer had a unique solution. I then stepped back to the last uniquely solvable version and marked that as a “hard” puzzle, added two digits back for a “medium” puzzle, and then two more for an “easy” puzzle.
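The generate-then-carve approach can be sketched in Python. The original tool was in C#, so everything below is an illustrative reconstruction rather than the original code, with a simple backtracking solver standing in for mine:

```python
import random

def candidates(grid, r, c):
    """Digits legal at (r, c) given the row, column, and 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return [d for d in range(1, 10) if d not in used]

def find_empty(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def fill(grid):
    """Backtracking fill; shuffling the digit order yields a random solved grid."""
    cell = find_empty(grid)
    if cell is None:
        return True
    r, c = cell
    digits = candidates(grid, r, c)
    random.shuffle(digits)
    for d in digits:
        grid[r][c] = d
        if fill(grid):
            return True
    grid[r][c] = 0
    return False

def count_solutions(grid, limit=2):
    """Count solutions, stopping at `limit` (2 is enough to test uniqueness)."""
    cell = find_empty(grid)
    if cell is None:
        return 1
    r, c = cell
    total = 0
    for d in candidates(grid, r, c):
        grid[r][c] = d
        total += count_solutions(grid, limit - total)
        if total >= limit:
            break
    grid[r][c] = 0
    return total

def carve(solved):
    """Remove random givens until uniqueness breaks, step back for a 'hard'
    puzzle, then restore two digits for 'medium' and two more for 'easy'."""
    grid = [row[:] for row in solved]
    cells = [(r, c) for r in range(9) for c in range(9)]
    random.shuffle(cells)
    removed = []
    for r, c in cells:
        saved = grid[r][c]
        grid[r][c] = 0
        if count_solutions([row[:] for row in grid]) != 1:
            grid[r][c] = saved      # that removal broke uniqueness: undo it
            break
        removed.append((r, c, saved))
    hard = [row[:] for row in grid]
    for r, c, d in removed[-2:]:    # put two digits back -> medium
        grid[r][c] = d
    medium = [row[:] for row in grid]
    for r, c, d in removed[-4:-2]:  # two more back -> easy
        grid[r][c] = d
    easy = [row[:] for row in grid]
    return easy, medium, hard
```

Restoring givens to a uniquely solvable puzzle keeps it uniquely solvable, which is why stepping back two and then four digits yields progressively easier grids from the same carve.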
With that code written, I went online, downloaded a free-use Sudoku puzzle image, and created a Word document template that included the cover file. I saved the template so my program could open it later, and added a few fields I could merge in, such as the volume number and the cover colors, so that any books I created could be unique. With that in place, my program could take a few parameters, generate 60 puzzles, add them as pages to the Word document, and save out a new, unique puzzle book.
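The assembly step might look something like this sketch, where Python’s string.Template stands in for Word merge fields and plain-text pages stand in for Word’s XML; the field names and page layout are hypothetical, not taken from the original C# tool:

```python
from string import Template

# Stand-in for the Word template's cover page: $volume and $color play the
# role of merge fields (names are hypothetical).
COVER = Template("Sudoku Puzzle Book, Volume $volume\nCover color: $color")

def render_puzzle(grid, number):
    """Render one 9x9 puzzle (0 = blank) as a text page."""
    lines = [f"Puzzle {number}"]
    for r, row in enumerate(grid):
        if r % 3 == 0:
            lines.append("+---+---+---+")
        thirds = ["".join(str(d) if d else "." for d in row[c:c + 3])
                  for c in (0, 3, 6)]
        lines.append("|" + "|".join(thirds) + "|")
    lines.append("+---+---+---+")
    return "\n".join(lines)

def build_book(volume, color, puzzles):
    """Merge the cover fields, then append one page per puzzle."""
    pages = [COVER.substitute(volume=volume, color=color)]
    pages += [render_puzzle(g, i + 1) for i, g in enumerate(puzzles)]
    return "\n\n".join(pages)
```

Passing a different volume number and color set to build_book is what makes each generated book unique from the workflow’s point of view.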
I was able to successfully test my publishing workflow. Ten of these puzzle books were published out to Amazon. They remain available for sale today, and I still occasionally sell one.

With that done, I decided to go back and write a different puzzle generator, integrating a dictionary and adding code that created word search puzzles. There are ten of those out at Amazon as well. It was a fun little project that took a bit of thinking to get through, and over the course of several years it managed to pay for a couple of dinners.
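The word search variant can be sketched along similar lines (again in Python rather than the original C#, with the placement rules assumed): try each word in random positions and directions, allow overlaps where letters match, then fill the leftover cells with random letters.

```python
import random

# Right, down, and the two diagonals; a fancier version could add reversals.
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (-1, 1)]

def place_word(grid, word, rng):
    """Try to place `word`; overlapping an existing letter is fine if it matches."""
    size = len(grid)
    spots = [(r, c, dr, dc) for r in range(size) for c in range(size)
             for dr, dc in DIRECTIONS]
    rng.shuffle(spots)
    for r, c, dr, dc in spots:
        end_r, end_c = r + dr * (len(word) - 1), c + dc * (len(word) - 1)
        if not (0 <= end_r < size and 0 <= end_c < size):
            continue
        cells = [(r + dr * i, c + dc * i) for i in range(len(word))]
        if all(grid[x][y] in ("", word[i]) for i, (x, y) in enumerate(cells)):
            for i, (x, y) in enumerate(cells):
                grid[x][y] = word[i]
            return True
    return False

def word_search(words, size=12, seed=None):
    """Build a size x size word search from a word list (e.g. dictionary picks)."""
    rng = random.Random(seed)
    grid = [[""] * size for _ in range(size)]
    placed = [w.upper() for w in words if place_word(grid, w.upper(), rng)]
    for row in grid:                      # fill leftover cells with noise
        for c in range(size):
            if row[c] == "":
                row[c] = rng.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    return grid, placed
```

Because overlaps are allowed wherever letters agree, dictionary words with shared substrings pack into the grid more densely.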
Micro AI Agents
As a writer and reviewer of documents, I’ve spent a lot of time considering how I would want to leverage AI tools to improve my writing. In most cases, I’ve observed the development of models similar to what something like Grammarly might provide. They can correct grammar, make suggestions on sentence structure, and potentially point out complex words or unconfident phrasing.
There is an AI improvement tool available within the portal I use to write my blog posts. I agree with only about half of what it suggests. As an immediate example, “provide” is not a complex word, but my AI suggestion bot thinks it is. I’m a linear writer by and large; I still outline my thoughts before I start, because I want to be sure I progress from idea to idea in a way that is easy to follow. Sometimes that results in long sentences, but when I use them, it’s with a purpose in mind. I’m very deliberate about my choices when I write.
And that’s where I tend to disagree with most modern writing agents when it comes to providing writing feedback. They can correct mechanics, possibly better and more consistently than I can. But where they miss is in language tone, word choice, elegance of phrase, use of techniques like alliteration even in prose, and other more subjective applications of writing skill.
They lose the uniqueness of the human perspective.
One of the topics that came up often as I reviewed documents at Amazon was how to distill each reviewer’s unique approach to analysis into models we could then deploy. The concept I landed on was what I called “micro agents”. Rather than incorporate everything into a single large model that would then have to make judgments about which feedback to apply, I thought it would be more effective to train a model on how I, Rob, would review a document. I would then train another model on how a different reviewer would review the same document, because the feedback would be different. If I could come up with 10 or 20 models, each containing a reviewer’s “personality”, and deploy those, an author could select which reviewer or reviewers they would like to apply.
There are several advantages to this approach.
First, the author could target tonally consistent perspectives. I don’t mind complex words, so I’d prefer to get feedback from someone (or something) that likewise is OK with complex words. And as a developer of a model, I don’t want to introduce that feedback loop into a model of another reviewer who has a different perspective on complex words.
Second, the author could leverage feedback with different perspectives from a consistent source without having an AI filter that perspective down or summarize options that are contradictory. I’ve literally had my AI arguing with itself at times as I’ve been writing content because it can’t maintain tonal consistency.
And third, the models could learn independently across many different iterations of different documents without all of them ending up at the same conclusion point. While there is an element of a selection bias by allowing the author to pick specific “experts” to give them advice, that also means that the feedback loop is relevant specifically to the expertise the model has been trained in.
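As a toy illustration of the selection idea, and nothing like a real implementation, each micro agent could pair a persona with its own feedback routine, and the author chooses which personas run. In a real system each persona would be its own trained model; the rules below are purely hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class MicroAgent:
    """One reviewer persona; a real system would back each with its own model."""
    name: str
    review: Callable[[str], List[str]]

def rob_review(text: str) -> List[str]:
    # This stand-in Rob is fine with complex words but dislikes doubled intensifiers.
    return ["Drop the doubled intensifier."] if "very very" in text else []

def plain_style_review(text: str) -> List[str]:
    # This persona flags long words that the Rob persona would happily keep.
    long_words = [w.strip(".,") for w in text.split() if len(w.strip(".,")) > 12]
    return [f"Consider a simpler word than '{w}'." for w in long_words]

AGENTS: Dict[str, MicroAgent] = {
    "rob": MicroAgent("rob", rob_review),
    "plain": MicroAgent("plain", plain_style_review),
}

def get_feedback(text: str, chosen: List[str]) -> Dict[str, List[str]]:
    """Run only the reviewers the author selected; each voice stays separate,
    with no single model filtering or merging the contradictory advice."""
    return {name: AGENTS[name].review(text) for name in chosen}
```

The point of the structure is that the two personas can disagree about the same sentence and both opinions reach the author intact, attributed to their source.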
In practice, I don’t want an all-knowing model telling me what to do against a filtered set of options with a potential learning bias. I want to seek out the advice of experts at the thing I am doing who can be very, very good at the analysis I require; if I can’t get to them personally, then a model that thinks like them is the next best thing.
I don’t want my writing to end up sounding like everyone else’s because I used AI.
The same thing could apply to my composition of music. In a previous post, I talked about my interactions with ChatGPT as I composed my latest work, a Baroque-style symphony. Imagine a composing world where you could pick two or three specific composers from a list and get feedback on how they specifically might approach a problem rather than a generalized answer. Several times I found myself disregarding feedback because it was tonally out of place. Several times I found myself arguing with ChatGPT about specific applications of things, and the answers, while thorough and grounded in theory, didn’t always explain why they were being suggested or whether they aligned with the style of music I was writing.
As part of my interest in writing, I’ll be exploring if I can train an AI agent to review documents like I do, including analyzing where my approach differs from conventional wisdom. It will be interesting to see where that lands.
Happy writing!
