
As described in Part 1, forty years ago in 1985, I was inspired by luthier Isaak Vigdorchik to consider the possibility of crafting an acoustic space in which the user could control the energy and decay of individual notes – the notes in the room. The notion of modeling an acoustic space as a large set of ‘modes’ had been explored in academia, but such a model would require many thousands of modes to convincingly recreate the feeling of an acoustic space. So, back then, when Isaak planted the seed, I couldn’t imagine that the necessary processing power would ever become available. Clearly, I was wrong. Time marches on. Given the history of semiconductor advances – Moore’s Law – I should have known better. But to be fair, it did take 40 years!
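For the curious, here’s what a ‘mode’ means in this context – a minimal sketch of the general idea, not Eventide’s implementation; the function name and the three example modes are made up:

```python
import numpy as np

def modal_impulse_response(freqs_hz, decay_taus_s, amps, sr=48000, dur_s=2.0):
    """Approximate a room's impulse response as a sum of exponentially
    decaying sinusoids, one per resonant mode."""
    t = np.arange(int(sr * dur_s)) / sr
    ir = np.zeros_like(t)
    for f, tau, a in zip(freqs_hz, decay_taus_s, amps):
        ir += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return ir

# Toy example with three modes; a convincing space needs many thousands,
# which is why the idea had to wait for modern CPUs.
ir = modal_impulse_response([110.0, 220.5, 331.2], [1.2, 0.8, 0.5], [1.0, 0.7, 0.4])
```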
As the speed and power of CPUs advanced, I started thinking seriously about a modal reverb plug-in. Then, in 2014, we received an application from Woody Herman, a graduate student at Stanford, who was seeking a summer internship. On his resume, I noticed that Woody had co-authored a paper with Dr. Julius Smith on modal models of acoustic spaces. Julius is a friend and so I reached out to ask about Woody. Julius gave a ‘thumbs up’ and Woody spent the summer of ’14 working on intern projects. When Woody graduated, he joined Eventide and has led the development of several of our plug-ins. You can see and hear Woody talk about his work on emulating the analog ‘magic’ of our H3000 here:
While computers kept getting faster and faster, it wasn’t until 2018 that the task of modeling real rooms with the precision needed to get ‘the notes in the room’ to bloom and wither finally seemed within reach. Our developers presented a DAFx paper on learning modal responses from impulse responses (IRs) using ESPRIT. A number of modal concepts were discussed, and we had working prototypes of other modal plug-ins. Around that time, I asked Woody to start working on a musical modal prototype and we called it Temperance (more on the name later).
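For readers who want a peek at the math, here’s a rough, single-channel sketch of the kind of subspace trick ESPRIT performs – illustrative only, not the method from the paper or our shipping code, and the names are mine:

```python
import numpy as np

def esprit_modes(x, n_modes, sr):
    """Toy ESPRIT: estimate modal frequencies and decay rates from a short
    impulse-response snippet using the shift invariance of its signal
    subspace. Keep the snippet short (a few thousand samples) or the SVD
    gets expensive."""
    n = len(x)
    rows = n // 2
    # Hankel matrix built from the snippet
    H = np.array([x[i:i + n - rows + 1] for i in range(rows)])
    # Signal subspace: two complex poles per real decaying sinusoid
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :2 * n_modes]
    # Shift invariance: Us[:-1] @ Phi ~= Us[1:]
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    z = np.linalg.eigvals(Phi)                      # estimated poles
    keep = np.angle(z) > 0                          # one of each conjugate pair
    freqs = np.angle(z[keep]) * sr / (2 * np.pi)    # Hz
    decay_rates = -np.log(np.abs(z[keep])) * sr     # 1/s; amplitude ~ exp(-rate * t)
    return freqs, decay_rates
```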
The first time I took it for a spin, in late 2020, it was clear that my dream of notes in a room was finally within reach. Woody and I visited Sear Sound in NYC and demonstrated it to Kevin Killen, a FOE (Friend of Eventide). Kevin is an engineer, producer, and educator, and he has remarkable hearing. Kevin echoed what I was hearing and feeling. He said that it was ‘musical’ and that he knew of nothing else that could do what it does. We swore Kevin to secrecy; a secret he has faithfully kept for five years.
Check out Kevin Killen’s interview with our friends at Gear Club Podcast here!
It Took a Team
I’m extremely fortunate to work with a team of brilliant and passionate developers at Eventide and, for the last few years, some of them have worked on using the modal model of reverb to control the energy of notes. The goal was to create a musical reverb. What started out as a simple concept required the best efforts of our team of talented developers. Pete Bischoff, who led the development of algorithms for our guitar pedals and our flagship H9000, took on the Product Owner role. In addition to Pete, the Temperance team included developers Woody Herman, Corey Kereliuk, David Baylies, Tom Longabaugh, Evan Pittson, Peter McCulloch, Collin Bevan, and CTO Russell Wedelich.
Every team member is a musician, so they ‘get’ the notion of tempering the musicality of space. It was an exciting project because we all knew that we’d be breaking new ground. To be fair, we didn’t realize how hard this ground would be. Woody’s prototype from 2020 was our starting point. Its main control was a big knob labeled TEMPER. Early iterations labeled full clockwise “NOTES” and full counterclockwise “NOTS.” Turn the TEMPER knob to the right and the NOTES ring out; turn it to the left and everything but the NOTES rings out. Everything else was “The NOTS!”

Early Temperance Prototype
Given Woody’s prototype, the basic idea of a musical reverb was rather clear and simple. But everything is simple until you need to make it work. And everything is clear until you try to read the fine print. While much of what we needed to do seemed obvious, it became necessary to learn (and even invent) quite a lot along the way. This is not an unfamiliar experience for me at Eventide.
Project Temperance
Fast forward to October 2022: the launch of the Temperance project. Now, three years on, here’s my recollection of some of the inside-baseball aspects of developing a first-of-its-kind effect with this talented team. The Temperance plug-ins were designed over the course of several years. While the idea had been percolating since Woody’s first prototype, there was still so much to decide, describe, explore, and implement.
Here are some of the key steps we took to achieve what I believe is a great leap in the science of artificial reverberation and a groundbreaking new way to mess with sound.
Knobs – 1 or 2?
Possibly the most fundamental decision was this: one knob or two? The act of ‘tempering’ is achieved by controlling several key parameters for each of the thousands of modes. Our first builds had multiple tempering knobs, which allowed us to explore the vast range and types of sounds possible with this approach to modeling space. The downside is that, with multiple controls, it’s easy to dial in what some would consider unmusical sounds and, what’s arguably worse, the wide range of possibilities is overwhelming.
To keep the ‘vibe’ in the realm of tempering – to help users land on musical, tempered sounds – the team recognized that it would be best to have a single Temper control; one main knob. Woody, with feedback and encouragement from the team, took on the difficult challenge of combining multiple parameters in a single knob. He nailed it! The single Temper control is the heart of the effect.
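To make the idea concrete, here’s a hedged sketch of what a one-knob macro could look like – a hypothetical mapping with made-up numbers, not the parameter curves Woody actually arrived at:

```python
def temper_mode_decay(is_selected_note, temper):
    """Hypothetical one-knob macro: map a single temper value in [-1, 1]
    to a per-mode decay-time multiplier. Positive temper lets modes on the
    selected NOTES ring out and damps everything else; negative temper
    favors the NOTS instead."""
    boost = 1.0 + 3.0 * abs(temper)   # favored modes ring up to 4x longer
    cut = 1.0 / boost                 # the rest are damped by the same ratio
    if temper >= 0:
        return boost if is_selected_note else cut
    return cut if is_selected_note else boost
```

The real control blends several per-mode parameters at once, which is exactly why collapsing them into a single knob was so hard to get right.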
Spectrum Display & Control
The team also recognized the importance of visualizing what was happening across the audio spectrum both as a development aid and to provide visual feedback to the end user – a display that lets you see what’s happening and what you’re doing. In fact, early on the team needed this display to see what was, um, broken. It took a few iterations to design an intuitive display which shows energy across the audio spectrum while clearly indicating the energy of the selected notes. The selected notes are highlighted, and the width of the highlighted lines represents Note Width. An early version of the spectral display used ‘chiclets’:

The team also recognized the need for a frequency-based temper control in addition to the main Temper knob since there are 10 of each note across the full audio spectrum. If you select C, you may not want all ten Cs to ring out but rather just the high Cs, the low Cs or the mids. The solution? A range slider that gives the user a key control over how tempering works across the spectrum.
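As a rough illustration of the decision being made per mode (assumed names and thresholds, not our actual code), targeting comes down to a mode’s pitch class, how far off-pitch it is, and whether it sits inside the chosen frequency range:

```python
import math

def mode_is_targeted(freq_hz, selected_pitch_classes, lo_hz, hi_hz, width_cents=50.0):
    """Decide whether one modal frequency should be tempered.

    selected_pitch_classes -- e.g. {0} for all Cs, {0, 4, 7} for a C major triad
    lo_hz, hi_hz           -- the range slider: only modes inside it are tempered
    width_cents            -- Note Width: how far off-pitch a mode may drift
    """
    if not (lo_hz <= freq_hz <= hi_hz):
        return False
    midi = 69.0 + 12.0 * math.log2(freq_hz / 440.0)   # fractional MIDI note number
    nearest = int(round(midi))
    cents_off = 100.0 * abs(midi - nearest)
    return (nearest % 12) in selected_pitch_classes and cents_off <= width_cents
```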
Visualizing Temper
Once we settled on one knob and had the spectral display and Temper control in place, we started thinking about how to ‘show tempering’. The spectral display shows what’s happening in exquisite detail; notes are highlighted and their width is narrowed or widened depending on the Note Width setting. It’s important to see what’s happening across the spectrum, and it serves critical control functions. What it can’t deliver, however, is a clear indication of which notes are which. Select all 12 notes and 120 notes are highlighted! We realized that we needed a different kind of display: one that gathers up all the energy of each selected note. Temperance needs to provide visual feedback that, in real time, aligns with what you’re hearing.
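Under the hood, that amounts to folding energy from every octave into one bucket per note name – something like this chroma-style sketch (made-up names; the real display works on the reverb’s modal energy and does far more):

```python
import math

def note_energy_buckets(freqs_hz, energies):
    """Fold per-mode energy into 12 buckets, one per note name, so that
    every C in every octave contributes to a single 'C' reading."""
    buckets = [0.0] * 12                 # index 0 = C, 1 = C#, ..., 11 = B
    for f, e in zip(freqs_hz, energies):
        if f <= 0:
            continue
        midi = 69.0 + 12.0 * math.log2(f / 440.0)
        buckets[int(round(midi)) % 12] += e
    return buckets
```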
If the user is adding positive Temper to all or some of the C notes, it’s important to see all that C energy combined in a visual display. The display went through several iterations. The idea of a piano keyboard style display was considered. Here’s an early example (NOTES&NOTS!):

With one central knob it seemed like a good place for both note selection and display.

With some guidance from the mockups by Damon Langlois, Peter McCulloch took on the task of refining the look and feel of the display. After many months and several iterations, he created the floral beauty of this vital, throbbing visual display called the NoteScape.
The Name Game – Meater? Daisy? Petunia? NoteScope? Nope. It’s a NoteScape!
Naming stuff is a nuisance. For one thing, every name ever imagined has already been taken. And, when it comes to naming something new, there’s no right answer and invariably lots of suggestions (some even good!).
Which brings me back to NoteScape. It wasn’t always called that. For a while we had no idea what to call the lovely pulsing display of reverb energy. In fact, the earliest implementation’s color was a bit weird. The display freaked me out; it looked like throbbing flesh. I suggested that we call the display a MEATER. The team decided instead to use a different color.
The next implementation was beautiful and evoked a blooming flower so, for a while, we called it Daisy. When I showed an early iteration to FOE Laurie Spiegel, she loved its look and its organic, pulsing behavior. She suggested calling it ‘Petun(e)ia’ because it bloomed and withered like a flower while indicating the reverb’s tuning. Lovely as that sounds, it just didn’t feel right.

Now, admittedly, I’m quite the fossil. In fact, I still have an oscilloscope on my desk. Temperance’s floral display suggested a new kind of scope and the word – NoteScope – came to mind. But NoteScope didn’t feel right either. Scopes (e.g. microscopes, telescopes) are for examining things up close, in detail. Our display is more of an overview of the reverb scene. Then, at AES in NY in October 2024, I showed Temperance to a few FOEs. When demoing to FOE Andrew Scheps, I mentioned that I was thinking of calling the display a NoteScope and he said, “Why not NoteScape?” Kinda like a landscape of your notes within the reverberant field. Sold!
Thanks Andrew. Where do we send the check?
What About Time?
Many months went by as the team tweaked and refined all the moving parts to hone the sounds and controls. The team had made great progress, but the range of sounds was not yet ready for prime time – there were “edge cases” that needed attention. Once we had a stable version at hand, we asked a small handful of FOEs, including Alex Case, to do a bit of critical listening. Alex pointed out that in many cases the early field just didn’t sound right.
Woody, Corey and Russell dove in. They researched relevant papers and learned what they could, but soon realized that no one had successfully solved the problem. So they solved it themselves, developing a novel, and now patented, process that allows us to synthesize the early field with better fidelity, precision and clarity. The effort was far from trivial and required the team to invent new ways of synthesizing the early reflections.
But there’s more! With a focus on the onset of reverb, we realized that we had overlooked a fundamental aspect of tempering — time. Shouldn’t users have control over whether tempering affects just the onset of the reverb? Or, just the tail of the reverb? Or all of it? Yes, of course!
In Temperance Lite, we introduced the Range control, giving users the ability to control how tempering affects different frequency ranges (low, mid, or high). Now, in Pro, the user has two complementary controls: Range controls “where”, in frequency, to temper, while Target controls “when”, in time, to temper.
Which reminds me. Naming things!!!

Final Temperance Pro UI design by Julian Behrens
Why Call it Target?
So much that Temperance does is so new, so different. We agonized over naming this and that control or feature. Deciding on the name for this temper time control turned out to be harder than one might imagine. For a while we were calling it the “Field” selector as in early and late field. And yet, we thought the word field might be confusing.

The control is a way to select when in time the tempering is applied. Is it applied only at the onset of the reverb or only as the reverb fades away? Or is tempering applied to the entire reverb from start to finish? Thinking along these lines brought us to use the word ‘target’ because it describes what the user is doing, targeting tempering in time.
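Put another way – and again only as a sketch with assumed names and an assumed 80 ms early/late boundary – the amount of tempering reaching any bit of reverb energy can be thought of as the product of a frequency gate (Range) and a time gate (Target):

```python
def temper_amount(temper, freq_hz, t_s,
                  range_lo_hz=20.0, range_hi_hz=20000.0,
                  target="all", early_late_split_s=0.08):
    """How much tempering reaches energy at frequency freq_hz and time t_s.
    Range gates by frequency ("where" to temper); Target gates by time
    ("when" to temper): only the onset, only the tail, or everything."""
    in_range = range_lo_hz <= freq_hz <= range_hi_hz
    if target == "early":
        in_time = t_s < early_late_split_s
    elif target == "late":
        in_time = t_s >= early_late_split_s
    else:                                  # "all"
        in_time = True
    return temper if (in_range and in_time) else 0.0
```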
Synthetic Spaces
In the modal domain it is possible to craft spaces that cannot exist in nature. Temperance Lite includes one built-from-scratch synthetic space. Pro includes several.
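To give a flavor of what ‘cannot exist in nature’ means (an invented example, not one of the shipping spaces): in a real room, air absorption makes high frequencies die out first, but in the modal domain nothing stops you from giving the highs the longest decays.

```python
import numpy as np

def impossible_space(n_modes=2000, seed=0):
    """Sketch of a physically impossible modal space: decay times that grow
    with frequency, the reverse of any real room."""
    rng = np.random.default_rng(seed)
    freqs = np.sort(rng.uniform(40.0, 16000.0, n_modes))    # Hz
    decays = 0.2 + 2.5 * (freqs / freqs.max())              # seconds: highs ring longest
    amps = rng.uniform(0.2, 1.0, n_modes) / np.sqrt(n_modes)
    return freqs, decays, amps
```

A mode set like this could be fed straight into the earlier modal-synthesis sketch to hear a reverb no physical room could produce.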

A Peek Behind the Curtain – Modal Controls
Along the way, the team had fun (and I got quite distracted) exploring some of the possibilities in the modal world completely apart from the notion of notes. That led to adding modal controls for Offset, Position and Density to our Pro version.
Given that this blog appears to have turned into a rant on naming stuff, I’ll end with how we came up with the name for the plug-in – Temperance.
Why Call It That?
Critically, the word Temperance had the benefit of being available. We couldn’t find anyone using it to describe a reverb. Not a shock.
But, given that it opens a new dimension of ambiance and it operates on the 12 notes of the equally tempered scale, Temperance is arguably a good description. OK, argue if you must.
Eventide’s Temperance is a plug-in that can be used to create either well-tempered or ill-tempered ambiance.
The Team
The development of this plug-in took time, talent and commitment and would not have been possible without the dedicated work of this team. Pete Bischoff led the effort. Woody Herman was our modal guru, Peter McCulloch designed the visual displays and Pro’s Sequencer, Corey Kereliuk and Russell Wedelich dove deep into signal processing, Evan Pittson, Damon Langlois and Julian Behrens contributed to the UI, David Baylies coded, power user Collin Bevan provided feedback along the way, and Tom Longabaugh, our plug-ins director, made sure that the product was ready to release to the world.

Pete Bischoff
Guitar

Woody Herman
Guitar

Peter McCulloch
Piano, Keys, B3

Corey Kereliuk
Guitar, Synths

Russell Wedelich
Bass

Tom Longabaugh
Bass, Trumpet

Evan Pittson
Viola

David Baylies
Trumpet

Collin Bevan
Synths, Drums

Julian Behrens
Eurorack Synths

Resonant Design
Synths

Damon Langlois
Bass, Guitar
Eventide team members have also helped with recording covers for our remix contests. Check out these videos.
This fossil is forever grateful for their passion and, especially, for their patience.


