Behavioral Interviews
I’ve had the opportunity to participate in many, many interviews over the course of my career, including over a hundred as an interviewer at Amazon. Over time, I’ve developed a perspective on behavioral interviews, and over the past couple of months, as I’ve conducted my job search, I’ve had the chance to be on the other side of the equation with high-profile companies such as Meta. That experience has sharpened my view of this interview approach.
About Behavioral Interviews
Behavioral interviews focus on past experiences under the belief that past behavior best predicts future behavior. Candidates provide examples of how they handled real situations. Through conversation, candidates reveal their skills, competencies, how they handle challenges, their approach to teamwork and collaboration, and their problem-solving capabilities.
Questions are typically posed as a variation of “Tell me about a time you did a thing”, and answers are generally expected to be in the STAR format. The STAR format is prescriptive, requiring the answer to be presented with a Situation, the Task outlined to solve it, the Actions taken, and the Result.
My Candidate Approach
One of my core goals as a candidate is to present myself as calm, confident, and competent. I don’t want to be reading from a script, or perhaps perceived as leveraging AI, so while I have my list of examples sitting in a spreadsheet, I have them mostly memorized; I want to respond naturally as much as possible. I want the interview to be a conversation, not an interrogation. I want the person interviewing me to get a feel for what a working relationship would be like. I like to think I’m a strong communicator, and making the interview conversation go smoothly is a specific goal I have in mind.
I also don’t necessarily follow the STAR format as closely as I’ve seen others do. It’s a perfectly fine mental model for handling an interview, but in my opinion, its rigidity leads candidates to prepare every potential answer ahead of time. As a result, it can produce answers that lack personality, lack depth, and sound more like a resume reading session. It also tends to mean that answers come from that predefined set of prepared responses even when the answer doesn’t clearly match the question. I prefer to tell a story, to provide a narrative, with an opening, a description, and a conclusion. I incorporate parts of the STAR format into those narratives, but I’m aware of the format’s limitations and the gaps it will likely force into my answers. I’d rather address those gaps in my initial answer; I want to control the conversation, not allow for the possibility that a follow-up will take me in a direction I may not be prepared to go.
Sometimes that means I take a moment and think before answering. I do this deliberately, and I think it’s acceptable, sometimes even necessary. I make sure that my eyes are not going to what might be perceived as another screen; I’m keenly aware of body language, and when I do stop to think, I tend to look upward if I have to avert my eyes to gather my thoughts. I’d much prefer to take a couple of moments, cycle through my examples in my head, and then answer with something as fitting as possible, or acknowledge my answer might not be the best and explain why it might still be relevant.
This can lead me to ramble. I’m very self-aware and will pause at natural stopping points and let the interviewer follow up if they choose. One of the risks I pose to myself is being too verbose in my answers; these interviews are conducted on a clock, and running out of time can be a deal-breaker if important points are never brought up. As a result, it’s critical to have a perspective on time as well as on what’s critical to surface to the interviewer. I limit my introduction to a specific set of data points and a fixed amount of time. I have seen candidates who, once they start, are so eager to check all the boxes that they are not able to put the brakes on as they speak. In the hands of an experienced interviewer, that can be handled with a gentle interruption; but if a candidate encounters one without that experience, they can burn valuable time and risk not providing all the data points.
What Can Go Wrong With The Process
It takes a certain nuance for an interviewer to conduct a behavioral interview in a way that evaluates thought process when the candidate doesn’t provide the “right” answer. When an interviewer can’t do this, the candidate will find themselves evaluated on being able to articulate the answer the interviewer seeks rather than on how they mentally approach a problem. This is poor evaluation technique; not every candidate will approach a problem the same way, or have an approach that aligns with the interviewer’s, even if the result was successful. In the end, confirmation bias can be more impactful on the decision to hire or not than a clear evaluation of thought process and results.
There’s no way for someone to prepare for every possible question. As a candidate, it’s critical to be prepared to pivot to similar situations, or take that moment to think, or ask follow-up questions while searching for an appropriate answer. The risk here is that a candidate can be disqualified simply for not having encountered a particular situation before, even though they may have been able to handle it perfectly well given their experience. After a discussion on coaching engineers, I was asked if I had ever had to manage the performance of a senior engineer on my team. I’ve been blessed with awesome senior engineers on my teams, so I honestly answered no, and provided a counter-example where I coached a senior engineer who was moving to management, not on performance, but on handling new expectations. I’m not sure if that disqualified me when they chose not to move forward, but I strongly believe I shouldn’t be disqualified for not having encountered a specific situation. My background indicates I’d be perfectly fine coaching senior engineers in any situation.
I’ve gotten into the habit of cataloging the questions I am asked and taking my own notes after an interview completes, specifically targeting the ones I was not expecting. I actually like getting new questions, as I can incorporate them into my preparation.
There are interviewers who struggle to balance the time involved (typically 45-60 minutes) against the amount of data they need to gather. This can result in two poor outcomes. First, important data that could impact the hiring decision might be missed because clarifying or follow-up questions are not effectively executed. Second, and this happened to me recently, the interviewer may not be able to conduct the questioning cleanly. I had an interview where I was interrupted after nearly every sentence, to either direct or clarify what I said, before I had finished my point. This resulted in a disjointed experience, where the interviewer’s inability to coherently conduct questioning led to confusion, changes of direction, and a clearly poor result. If a company decides to pass on a candidate because there are missing data points, and the company is not willing to follow up to address those gaps before making that decision, poor interviewing technique becomes an even greater risk to the process.
As a candidate, it’s important to maintain your composure at times like this and make sure that you are clarifying the questions you are being asked, and completing your thoughts before moving to the next one.
Final Thoughts
Interviewing is an inexact science, and behavioral interviews are even more subjective. Too often the inherent personal bias of the interviewer, the training provided, or the company guidance is more impactful than thoughtful evaluation and listening skills. The end result is that companies are just as likely to hire someone who happens to match their bias as someone who actually matches the skills and qualifications they need. In a job market like the current one, with an overabundance of candidates, companies are even less likely to feel certain about a hiring decision, and more likely to wait for that perfect candidate.
Micro AI Agents
As a writer and reviewer of documents, I’ve spent a lot of time considering how I would want to leverage AI tools to improve my writing. In most cases, I’ve observed the development of models similar to what something like Grammarly might provide. They can correct grammar, make suggestions on sentence structure, and point out complex words or unconfident phrasing.
There is an AI improvement tool available within the portal I use to write my blog posts. I agree with only about half of what it suggests. As an immediate example, “provide” is not a complex word, but my AI suggestion bot thinks it is. I’m a linear writer by and large; I still outline my thoughts before I start, because I want to be sure I progress from idea to idea in a way that is easy to follow. Sometimes that results in long sentences, but when I use them, it’s with a purpose in mind. I’m very deliberate about my choices when I write.
And that’s where I tend to disagree with most modern writing agents when it comes to providing writing feedback. They can correct mechanics, possibly better and more consistently than I can. But where they miss is in language tone, word choice, elegance of phrase, use of techniques like alliteration even in prose, and other more subjective applications of writing skill.
They lose the uniqueness of the human perspective.
One of the topics that came up often as I reviewed documents at Amazon was how to distill each reviewer’s unique approach to analysis into models we could then deploy. The concept I landed on was what I called “micro agents”. Rather than incorporate everything into a single large model that would then have to make judgments about which feedback to apply, I thought it would be more effective to train a model on how I, Rob, would review a document. I would then train another model on how another reviewer would review the same document, because the feedback would be different. If I could build 10 or 20 models, each containing a reviewer’s “personality”, and then deploy those, an author could select which reviewer or reviewers they would like to apply.
There are several advantages to this approach.
First, the author could target tonally consistent perspectives. I don’t mind complex words, so I’d prefer to get feedback from someone (or something) that likewise is OK with complex words. And as a developer of a model, I don’t want to introduce that feedback loop into a model of another reviewer who has a different perspective on complex words.
Second, the author could leverage feedback with different perspectives from a consistent source without having an AI filter that perspective down or summarize options that are contradictory. I’ve literally had my AI arguing with itself at times as I’ve been writing content because it can’t maintain tonal consistency.
And third, the models could learn independently across many different iterations of different documents without all of them ending up at the same conclusion point. While there is an element of a selection bias by allowing the author to pick specific “experts” to give them advice, that also means that the feedback loop is relevant specifically to the expertise the model has been trained in.
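To make the micro-agent idea concrete, here is a minimal, purely hypothetical sketch in Python. `ReviewerPersona`, its preference fields, and the rule-based `review` method are illustrative stand-ins for independently trained models; the point is only that each persona carries its own taste, and the author picks which personas to run against a draft.

```python
# Hypothetical sketch of "micro agents": each reviewer persona is a small,
# independent agent with its own learned preferences, and the author
# chooses which personas to apply. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class ReviewerPersona:
    name: str
    # Simple rule-based stand-ins for a trained model's preferences.
    flags_complex_words: bool = False
    max_sentence_words: int = 40
    vocabulary_flaglist: set = field(default_factory=set)

    def review(self, text: str) -> list[str]:
        feedback = []
        sentences = [s for s in text.split(".") if s.strip()]
        for i, sentence in enumerate(sentences):
            words = sentence.split()
            if len(words) > self.max_sentence_words:
                feedback.append(f"{self.name}: sentence {i + 1} is long ({len(words)} words)")
            if self.flags_complex_words:
                hits = self.vocabulary_flaglist.intersection(w.lower().strip(",;") for w in words)
                for w in sorted(hits):
                    feedback.append(f"{self.name}: consider a simpler word than '{w}'")
        return feedback

# Two personas with deliberately different tastes.
rob = ReviewerPersona("Rob", flags_complex_words=False, max_sentence_words=45)
ana = ReviewerPersona("Ana", flags_complex_words=True,
                      vocabulary_flaglist={"leverage", "utilize"})

draft = "We leverage long sentences with purpose. Short ones too."
for persona in (ana,):          # the author selects which reviewers to run
    for note in persona.review(draft):
        print(note)
```

Because each persona reviews independently, contradictory feedback is surfaced as-is rather than being averaged away by a single model, which is exactly the tonal-consistency property described above.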
In practice, I don’t want an all-knowing model telling me what to do against a filtered set of options with a potential learning bias. I want to seek out the advice of experts at the thing I am doing, people who can be very, very good at the analysis I require; if I can’t get to them personally, then a model that thinks like them is the next best thing.
I don’t want my writing to end up sounding like everyone else’s because I used AI.
The same thing could apply to my composition of music. In a previous post, I talked about my interactions with ChatGPT as I composed my latest work, a Baroque style symphony. Imagine a composing world where you could pick two or three specific composers from a list and get feedback on how they specifically might approach a problem rather than a generalized answer. Several times I found myself disregarding feedback because it was tonally out of place. Several times I found myself arguing with ChatGPT about specific applications of things, and the answers, while thorough and grounded in theory, didn’t always tell me why they were being suggested or even if they aligned with the style of music I was writing.
As part of my interest in writing, I’ll be exploring if I can train an AI agent to review documents like I do, including analyzing where my approach differs from conventional wisdom. It will be interesting to see where that lands.
Happy writing!
Learning Music Theory Online
In a sense, I’ve been self-taught most of my life. I taught myself how to code, and built a successful technology career before returning to get my degree. When I did that, I cracked books galore. The internet was relatively new and online resources such as Stack Overflow either did not exist or were in their infancy.
Late in high school, as high schoolers sometimes do, some friends of mine and I decided to put together a band. I picked up a cheap bass at the local music store and we learned a few songs, but no one really stuck with it after a few “practices”.
But sometimes opportunity strikes: my aunt had a cover band that played local venues, and they needed a bass player. I got the gig after a basic audition.
Now I really needed to learn how to play bass. Luckily, the band had most of their songs charted out on paper, so I printed them all out and charted chord progressions and potential passing tones. Most of the material was standard 3- and 4-chord country-based songs, so there weren’t many hard ones to learn.
But my actual learning came from MTV. I spent hours upon hours with my bass in front of the TV, playing along with every song that came on. Back then, all MTV did was play music, and that was my training ground. The first song I ever played in such a session was Stranger In A Strange Land by Iron Maiden, a song that remains a favorite of mine.
And, other than books, that was the only real option.
Today, though, the learning resources are endless, and I have taken advantage of them not for my playing, but for my composing.
There are a ton of YouTube videos and other resources dedicated to becoming a better bass player. Scott Devine and Mark J. Smith are favorites of mine. I’ve learned a ton about bass technique, but also about how to think about bass lines.
But the biggest impact on my understanding of music and my ability to compose has been the incredible volume of high-quality content on YouTube about music theory and how it can be applied to both modern and classical music. I’ve never been able to learn from books; I learn by watching. I learned more watching my guitar player’s hands in my early bands than I ever did from a book.
I’m subscribed to over 20 channels dedicated to music theory and often use them as inspiration. One of my favorite time signatures today is 11/8, and one of the techniques I love to employ is polyrhythm, making odd time signatures feel rhythmically straightforward. A video of legendary drummer Simon Phillips playing in 33/8 led to the song On A Failure To Dance, which is mostly in that time signature. I’ve written several songs, and parts of songs, in Locrian mode, considered the “unlistenable” mode. That challenge came from another video.
The point is, there is no shortage of high-quality content online. If you want to learn, the sky’s the limit.
As part of my own music content, I’ve outlined some specific channels here. They are well worth your time if you wish to learn.
