Diminished Unison or Augmented Unison? Why This Interval Question is More Controversial Than You Think

Posted on May 3, 2025 by Emmeline Pankhurst

Introduction: The Seemingly Simple Unison

When you think about musical intervals, you probably picture the distance between two different notes – like a C going up to an E (that’s a third!). But the simplest interval is just playing the exact same note at the exact same pitch. C to C, G to G. We call this a unison, and at first glance, it seems incredibly straightforward – almost like it’s not an interval at all, just zero distance. Yet, even this most basic concept can lead to fascinating theoretical questions. What happens when we apply the rules of interval modification to this seemingly unchangeable point? Let’s explore why this simple idea isn’t quite as simple as it seems.

Understanding Intervals and Qualities

Intervals are all about the distance between two notes. Think of it like measuring the steps you take from one note to another on a musical ladder. This distance is described in two ways: by its number and by its quality.

The number is simple! You just count the notes, including the starting and ending notes. C up to G is a fifth because you count C, D, E, F, G – that’s five notes. C up to E is a third (C, D, E). Simple enough.
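As a quick sketch (assuming the usual seven letter names, nothing more), that inclusive counting rule fits in a few lines of Python:

```python
# Interval number: count letter names inclusively, ignoring accidentals.
LETTERS = ["C", "D", "E", "F", "G", "A", "B"]

def interval_number(start, end):
    """C up to G counts C, D, E, F, G -> 5, a fifth."""
    return (LETTERS.index(end) - LETTERS.index(start)) % 7 + 1

print(interval_number("C", "G"))  # 5 (a fifth)
print(interval_number("C", "E"))  # 3 (a third)
```

The `% 7` simply wraps around the seven letters, so an ascending G-to-C also counts correctly as a fourth.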

Now, the quality is where things get interesting. This tells us the exact size of the interval in terms of half steps, giving it its specific ‘flavor’. The main qualities we encounter are Perfect (P), Major (M), Minor (m), Augmented (A), and Diminished (d).

Most intervals, like 2nds, 3rds, 6ths, and 7ths, naturally come in Major or Minor forms. A Major third (like C to E) is a certain number of half steps (4, if you’re counting!). If you shrink a Major third by a half step, it becomes a Minor third (like C to E-flat – 3 half steps). If you increase a Major interval by a half step, it becomes Augmented (C to E-sharp). If you decrease a Minor interval by a half step, it becomes Diminished (C to E-double-flat… yikes!).
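Those half-step counts can be verified with a small helper. This is an illustrative sketch, assuming 12-tone equal temperament, with ‘#’ raising and ‘b’ lowering a note by one half step each:

```python
# Half steps of each natural letter above C in 12-tone equal temperament.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def half_steps(low, high):
    """Half steps from low up to high; each '#' adds one, each 'b' subtracts one."""
    pitch = lambda n: SEMITONES[n[0]] + n.count("#") - n.count("b")
    return pitch(high) - pitch(low)

print(half_steps("C", "E"))    # 4: Major third
print(half_steps("C", "Eb"))   # 3: Minor third
print(half_steps("C", "E#"))   # 5: Augmented third
print(half_steps("C", "Ebb"))  # 2: Diminished third
```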

Then there are the intervals that are naturally Perfect: the 4ths, 5ths, and octaves (8ths). They have a unique, stable sound. Unlike Major/Minor intervals, Perfect intervals don’t start as Major or Minor. They are just… Perfect. If you increase a Perfect interval by a half step, it becomes Augmented (C to F-sharp is an Augmented 4th). If you decrease a Perfect interval by a half step, it becomes Diminished (C to G-flat is a Diminished 5th).

So, where does our little friend, the unison, fit into all this? The unison is the ultimate stable interval. It’s literally zero distance between two notes of the same pitch. Since it doesn’t have a Major or Minor form – you can’t have a ‘Major Unison’ or a ‘Minor Unison’ because there’s no distance to make ‘major’ or ‘minor’ – it falls squarely into the Perfect category. A unison is, by definition, a Perfect Unison (P1). It’s the bedrock, the starting point, the interval with zero half steps.

So, we have our baseline: the Perfect Unison. It seems solid, unchangeable. But what happens when we try to apply those same rules of augmentation and diminution to this seemingly unalterable Perfect 1st? Does it even make sense? This is where things get a little philosophical – and surprisingly contentious!

The Theoretical Question: Modifying the Unison

We’ve established that the perfect unison (P1) is our starting point, the musical equivalent of standing still. It’s zero steps, zero distance, just… the same note. Now, let’s think back to how we modify other intervals. We take a Major or Perfect interval and nudge it by a half step to make it Minor, Diminished, or Augmented.

A Perfect 5th (like C to G) becomes an Augmented 5th if you raise the top note by a half step (C to G#). It becomes a Diminished 5th if you lower the top note by a half step (C to Gb). This shows how you start with the standard version and then modify it.

Let’s try applying these rules to our Perfect Unison. What happens if we take a Perfect Unison (say, C to C) and try to make it Augmented? We’d raise the second note by a half step. So, C up to… C sharp! C to C#. If a P1 is 0 half steps, an Augmented Unison (A1) would be P1 + 1 half step, which is 0 + 1 = 1 half step. This isn’t so weird; C to C# is one half step. Notationally, it looks like a C being sharpened, staying within the ‘C family’ of notes, just altered.

But what about making it Diminished? We’d take our Perfect Unison (C to C) and lower the second note by a half step. So, C down to… C flat! C to Cb. If a P1 is 0 half steps, a Diminished Unison (d1) would be P1 – 1 half step, which is 0 – 1 = –1 half step. Wait a minute. Negative half steps? How can an interval between two notes – especially two notes that share the same letter name – span less than zero distance? This is where the brain starts to hurt a little!
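Running the same half-step arithmetic on the unison (an illustrative sketch, again assuming equal temperament) makes the oddity visible:

```python
# Half steps of each natural letter above C; '#' raises, 'b' lowers by one each.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
pitch = lambda n: SEMITONES[n[0]] + n.count("#") - n.count("b")

print(pitch("C") - pitch("C"))    # 0: Perfect Unison (P1)
print(pitch("C#") - pitch("C"))   # 1: Augmented Unison (A1) -- fine so far
print(pitch("Cb") - pitch("C"))   # -1: "Diminished Unison" (d1) -- negative!
```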

The core theoretical conflict here is fundamental: can an interval represent less than zero distance? It feels counter-intuitive. A unison is defined by the lack of distance, by identity. How can you diminish identity? It’s like trying to make something ‘less than itself’.

This is also where enharmonic equivalents enter the picture and, honestly, sometimes add to the confusion. We know that C# sounds exactly the same as Db. C to C# sounds like C to Db. However, in music theory, spelling matters a lot. C to C# is written as C followed by a sharpened C. Both notes are related to the letter ‘C’, making it a type of unison (specifically, an Augmented Unison, because the second C is raised). But C to Db is written as C followed by a flatted D. The note letters are C and D. Counting C to D gives us a second. Since C to Db is a half step smaller than a Major Second (C to D is 2 half steps; C to Db is 1 half step), C to Db is a Minor Second (m2).

So, even though A1 (C to C#) and m2 (C to Db) sound the same, their theoretical names and how they function in harmony can be completely different. The Augmented Unison (C to C#) maintains the idea that we are modifying a C, whereas the Minor Second (C to Db) clearly shows a relationship between C and D. This distinction based on spelling is crucial for understanding key signatures, chord structures, and voice leading.
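The spelling-versus-sound distinction can be made concrete by measuring both the letter distance and the half-step distance for each spelling (an illustrative sketch; `describe` is a made-up helper name):

```python
LETTERS = ["C", "D", "E", "F", "G", "A", "B"]
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def describe(n1, n2):
    """Return (interval number from letters, half steps from actual pitches)."""
    number = (LETTERS.index(n2[0]) - LETTERS.index(n1[0])) % 7 + 1
    pitch = lambda n: SEMITONES[n[0]] + n.count("#") - n.count("b")
    return number, pitch(n2) - pitch(n1)

print(describe("C", "C#"))  # (1, 1): same letter -> Augmented Unison
print(describe("C", "Db"))  # (2, 1): different letters -> Minor Second
```

Identical half-step counts, different interval numbers – which is exactly why the spelling, not just the sound, determines the name.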

But the diminished unison (C to Cb)… that’s still problematic. It implies a modification of C, but results in a note (Cb) that is enharmonically equivalent to B natural. C to B is a Major Seventh! So, a d1 (C to Cb) sounds like a M7 (C to B)? That doesn’t fit the pattern at all. A d1 should sound like a negative half step, whatever that means, or at least some tiny interval. It sounds like a huge interval, a seventh!

This clash between the rules of interval modification, the concept of zero distance, and the reality of enharmonic notes is precisely why the existence and validity of the diminished unison, in particular, sparks such debate.

Arguments For and Against the Existence/Validity

We’ve hit this theoretical wall: applying the standard rules of interval modification (adding or subtracting a half step from a Perfect interval) to the Perfect Unison (P1) gives us an Augmented Unison (A1) that makes sense notationally (C to C#), but a Diminished Unison (d1) (C to Cb) that feels… well, mathematically and conceptually awkward (-1 half step? Huh?).

This is where the music theory community gets a little split. It might sound wild that something so basic could be debated, but remember, music theory is partly describing what is (how music works) and partly creating a consistent system. Sometimes those two goals bump heads!


On one side are the purists, the defenders of theoretical consistency. They argue, “Look, the rules are the rules!” If we define augmenting a Perfect interval as adding a half step, and diminishing it as subtracting a half step, and the unison is a Perfect interval, then logically, A1 and d1 must exist within the system. Period. Why should the unison be the only Perfect interval immune to modification? If a P4 can become A4 or d4, and a P5 can become A5 or d5, why not the P1? This viewpoint values the logical extension of the definitions across all interval numbers, even if the result (like the d1) seems a bit weird initially. They might also point to enharmonic notation – having an A1 (C to C#) is crucial because it clearly shows a C being modified, distinguishing it from a m2 (C to Db), which involves a different letter name altogether. This consistency in spelling is vital for understanding harmony and voice leading, even if the sound is the same.

But then you have the other camp, often more focused on practicality and pedagogy. They look at the idea of a diminished unison and say, “That’s just confusing!” How do you explain an interval that is ‘less than’ zero distance? A unison fundamentally represents identity – the same note. Trying to diminish identity feels like trying to make something ‘less than itself’, which breaks the intuitive understanding of what a unison is. From a teaching perspective, introducing a concept like a d1 can just muddy the waters, especially for beginners grappling with basic intervals. Plus, let’s be honest, when do you ever see “d1” written on a piece of sheet music? Almost never! Music notation has evolved to be practical. We see A1 occasionally (like in certain theoretical exercises or complex chromatic passages where the spelling is important), but d1 is virtually non-existent in common practice. The argument here is that theory should serve the music and its understanding, and a concept that’s confusing and unused might not be valid in practice, regardless of theoretical purity.

This creates a tension between maintaining strict theoretical consistency and prioritizing clarity, practicality, and the intuitive understanding of musical distance. It’s a fascinating little corner of music theory where abstract rules meet the reality of how we write and perceive music.

Notation and Practical Implications

We’ve seen that applying the rules of interval modification to the perfect unison creates some interesting theoretical wrinkles, especially with that tricky diminished unison. But how would these intervals actually look if you tried to write them down? And perhaps more importantly, why don’t we see them cluttering up our sheet music?

Let’s start with the Augmented Unison (A1). Remember, this is taking a P1 (like C to C) and raising the second note by a half step. So, on paper, it looks exactly like you’d expect: the same note letter, but the second one carries a sharp (or, if the first note is already sharp, a double sharp). For example, C to C# is an A1, and F# to F## (F double sharp) is also an A1. You simply take the second note and add an accidental to make it a half step higher than the first note, while keeping the same letter name.

Now, for the Diminished Unison (d1). This is where things get even more notationally awkward, reflecting the theoretical weirdness. To get a d1 from a P1 (like C to C), you need to lower the second note by a half step. So, C to Cb (C flat) would be a d1. F to Fb is a d1. From G to G (P1), lowering the second G gives you G to Gb (d1). You might even need a double flat sometimes: if the first note is already flat, as in Bb to Bb (P1), lowering the second note gives Bb to Bbb (B double flat), a d1.
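As a quick sanity check (same equal-temperament sketch as before): every A1 raises, and every d1 lowers, the pitch by exactly one half step, which is why a double accidental only appears when the first note already carries one:

```python
# Half steps of each natural letter above C; '#' raises, 'b' lowers by one each.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
pitch = lambda n: SEMITONES[n[0]] + n.count("#") - n.count("b")

for a, b in [("C", "C#"), ("F#", "F##")]:          # Augmented Unisons
    print(a, "->", b, ":", pitch(b) - pitch(a))    # each prints 1
for a, b in [("C", "Cb"), ("G", "Gb"), ("Bb", "Bbb")]:  # Diminished Unisons
    print(a, "->", b, ":", pitch(b) - pitch(a))    # each prints -1
```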

Okay, so you can technically write them down. But here’s the thing: in practical, everyday music reading, you almost never see “d1”, and you only occasionally see “A1” in very specific theoretical contexts or complex chromatic passages where the spelling is critical. Why? Because we have simpler, clearer ways to name those exact same sounds! Understanding how intervals relate and are spelled is the key, especially when enharmonic spellings are involved.

An Augmented Unison (A1) like C to C# sounds exactly the same as a Minor Second (m2) like C to Db. And a Diminished Unison (d1) like C to Cb sounds exactly the same as a Major Seventh (M7) like C to B natural! Yes, you read that right. C to Cb is enharmonically equivalent to C to B. This is the major practical hurdle for the diminished unison.
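In pitch-class terms (working mod 12, under the same illustrative equal-temperament assumptions), Cb and B natural land on the same pitch class, which is exactly why the d1 spelling collides with the Major Seventh:

```python
# Pitch class (0-11) of a spelled note; '#' raises, 'b' lowers by one each.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
pitch_class = lambda n: (SEMITONES[n[0]] + n.count("#") - n.count("b")) % 12

print(pitch_class("Cb"))  # 11 -- same pitch class as...
print(pitch_class("B"))   # 11 -- ...B natural
print((pitch_class("B") - pitch_class("C")) % 12)  # 11 half steps: a Major 7th up
```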

Musicians and composers generally prioritize clarity on the page. When writing music, you want the notation to quickly tell the performer not just the pitch, but also its function within the harmony and melody. Writing C to Db clearly shows an upward step from C to a note related to D. Writing C to C# clearly shows a chromatic alteration of C, staying within the ‘C family’. These spellings help explain why that note is there – is it part of a D-flat chord? Is it leading up chromatically to D? The notation guides the interpretation.

But writing C to Cb? That note sounds like B natural. If the context calls for a B natural (perhaps as the leading tone in C major, or part of a G major chord), writing it as Cb is incredibly confusing! Why use a notation (d1) that sounds like a completely different, much larger interval (M7)? It violates the intuitive relationship between the written note and its perceived musical function. The enharmonic equivalent (C to B) is vastly more practical and understandable in almost any musical situation where you’d encounter that sound.

This is why, despite the theoretical possibility of notating a d1, it’s essentially extinct in practical music. The A1 is rare, but its notation (C to C#) does occasionally serve a purpose in showing a chromatic alteration of the same note letter. The d1 notation (C to Cb), however, is so counter-intuitive because its sound (C to B) is a Major Seventh – a huge leap away from the concept of a unison – that it’s simply not a useful way to communicate musical ideas.

So, we have this situation where the strict theoretical rules suggest these intervals could exist, and you can technically write them down, but the practical reality of music notation and performance means one is rare and the other is virtually non-existent. This disconnect between abstract theory and practical application is often at the heart of musical debates.

Why the Controversy? Exploring the Debate

We’ve seen that the theoretical path that could lead to augmented and diminished unisons creates some fascinating, and frankly, slightly confusing results – especially that diminished unison sounding like a Major Seventh! We’ve also noted how rarely (or never!) these modified unisons actually appear in the sheet music musicians play every day.

This gap between abstract theory and practical application is exactly where the arguments happen! It might seem like a tiny, obscure point, but dig into discussions among musicians and theorists, whether it’s in a university classroom or an online music community, and you’ll find surprising passion on the topic. Why? Because it touches on fundamental questions about what music theory is and how it should function.

On one side, again, are the purists. If you define augmenting a perfect interval as adding a half step, and diminishing it as subtracting a half step, and the unison is a perfect interval, then Augmented Unisons (A1) and Diminished Unisons (d1) must exist within the system. Period. To them, denying the existence of a d1 is like saying negative numbers don’t exist because you can’t hold -1 apples. It’s about the structure of the system itself, not just what’s convenient in notation.

On the other side are the pragmatists, often musicians and educators focused on clarity and utility. They look at the d1 (C to Cb, sounding like C to B) and say, “That’s just not helpful!” Music notation and theory, they argue, should primarily serve to explain and facilitate the creation and understanding of music. A concept like a diminished unison, which represents ‘less than’ zero distance and sounds like a completely different, much larger interval, is counter-intuitive and confusing. Why introduce a theoretical concept that makes no musical sense in practice and is never used? From a teaching perspective, trying to explain a d1 can feel like tying your brain in knots, and most teachers prioritize concepts that help students understand the music they’re playing.

So, is music theory about strict, almost mathematical definition, or is it about the practical conventions that have evolved to make music notation and theory a useful language for musicians? Does historical precedent (the fact that d1 isn’t part of common practice) outweigh the theoretical possibility? This little interval debate forces us to consider the very purpose of music theory itself.

Conclusion: A Matter of Definition and Convention

We’ve taken quite a journey from the simple perfect unison to the thorny question of its augmented and diminished forms. We saw how applying the standard rules of interval modification can theoretically lead to an Augmented Unison (A1) and a Diminished Unison (d1). Yet, the d1 throws a real curveball, sounding like a Major Seventh! This whole debate boils down to whether music theory is primarily a rigid, consistent system or a flexible language designed for practical use and clear communication. While you might encounter an A1 in theory, the d1 is almost non-existent in practice due to its confusing enharmonic equivalent. Ultimately, understanding these nuances helps us appreciate that music theory isn’t just a set of dry rules, but a dynamic system shaped by both logic and convention. Keep asking these “why” questions – they lead to deeper understanding!