Interview with Justin Gray

ABOUT JUSTIN GRAY

Justin Gray is an award-winning producer, mixing engineer, and mastering engineer based in Toronto. Justin is a leader in the field of immersive audio mixing and mastering, where he works with artists, producers, and labels worldwide to bring their music to life in the Dolby Atmos and Sony 360 Reality Audio formats.

Throughout his career, Justin has enjoyed working with a diverse range of musical styles. Some of his clients include: Snoop Dogg, Brandy, Carlos Santana, Arkells, The Sheepdogs, Valley, Mother Mother, Jann Arden, Blackbear, Christina Perri, Orville Peck, Portion, Amaal, Crown Lands, Peach Pit, Forest Blakk, Josh Ross, Jamie Fine, 24K Goldn, Tanya Tagaq, Charmaine, Nicky Youre, Jack Kays, Universal Music Canada, Columbia Records, Sony Music, Arista Records, Warner Music Canada, Atlantic Records, Death Row Records, Last Gang Records, Alma Records, Watchdog Management, Paper Bag Records, Deadbeats, and Platoon.

As a mixing and mastering engineer, Justin works out of his own studio (Synthesis Sound & Immersive Mastering), which is one of the first Dolby Atmos and Sony 360RA certified music studios in Canada.

As an educator, Justin is on the faculty at Humber College in Toronto, where he teaches audio production, composition, and music performance.

Over the course of his career, Justin has received several awards and distinctions, including the 2022 Juno Award for Best Jazz Album.

Follow Justin on socials @synthesissoundproduction and @immersivemastering.

Visit his websites at www.SynthesisSound.com (Production & Mixing) and www.ImmersiveMastering.com (Mastering).

INTERVIEW WITH JUSTIN GRAY

First, let’s start with your work. Can you introduce us to some of your favorite spatial projects that you’ve mixed or produced?

I have the pleasure of producing, mixing and mastering music in Dolby Atmos and Sony 360 Reality Audio for artists, producers and labels around the world. I was one of the first engineers mixing music in Dolby Atmos (2017) and Sony 360 (2021).

Some of my favourite spatial projects I have been involved with include:
Snoop Dogg - Doggystyle (Atmos), Olivia Rodrigo - Guts (Sony 360), Brandy - B7 (Atmos), Karan Aujla - Making Memories (Atmos), Arkells - Laundry Pile (Atmos), Mother Mother - Grief Chapter (Atmos), GoGo Morrow - Ready (Atmos), Lola Brooke - Dennis Daughter (Atmos), Eliana Cuevas - Seré Libre (Atmos) & Jonathan Kawchuk - Everywhen (Atmos).

 

Describe your journey as an artist. How did you get started in music, and what led you to transition to becoming a mix engineer?

My musical journey began when I started learning bass to join a Rage Against the Machine cover band when I was 12. I went to university for music performance (Humber College), and have since been a professional bassist and composer working primarily in the jazz, world and R&B music scenes. Throughout my career, I have always been involved on the production side of music as well. I spent many years as a recording engineer and producer, and eventually started mixing as a way to chase my creative vision for how I wanted my music to sound and feel. It was that same obsession with sound that led me to become a mastering engineer.

 

The artist’s road is not an easy one. What are some of the challenges you’ve faced in your career and how did you overcome them?

For me, the biggest challenge I have faced is not having enough time or resources to achieve my goals. I am an independent studio owner, and so all of the professional and financial responsibility is mine to balance. In the earlier part of my studio career, I was playing live, teaching, and working in the studio (recording/mixing/producing) in order to facilitate growth. Since a lack of proper time management was commonly my largest barrier to success, I spent considerable effort finding organizational systems that work for me. As an artist myself, I understand how complex a production schedule can be, so I am very dedicated to always meeting deadlines and keeping the quality of my work as high as it can be. In my experience, achieving that requires excellent time management.

 

What’s one of the proudest or most influential moments from your career?

A music career is a marathon, and so there are so many moments I am influenced by. That said, I think it is helpful to acknowledge our achievements along the way, as it provides an opportunity to reflect on growth. I am very proud to have remixed Snoop Dogg’s Doggystyle album in Dolby Atmos in 2023. Working on this record was influential for me in so many ways. I grew up listening to hip hop music, and so to have a chance to be involved creatively with such a legendary album was an experience I will always cherish. This was a very involved project, as I worked with the Death Row team to go back to the original tapes for the immersive mix. This meant that I had to completely remix the album from the ground up, since all of the original EQ, compression, FX, panning and automation was done on a console, and printed straight to 2-track tape. I obsessed over this album for months (along with my studio assistant Andrew Chung), dialing in every detail. With this immersive mix, I tried to create a balance between honouring a classic production, while also presenting it with a new immersive perspective.

 

How did you get started working in spatial audio?

I have been involved in the world of immersive/spatial audio for close to 15 years. I started with a 5.1 setup as a listener first. Eventually I did some work pre-mixing film scores in surround for a handful of composers here in Toronto. As soon as I heard whispers that Dolby was interested in bringing Atmos into the music ecosystem, I went all in and expanded my studio to 7.1.4 (my Atmos array is now 11.1.6). That process began in 2016, which was still well before Atmos was available to music listeners on streaming platforms. As a result, I was a part of the first wave of engineers to start mixing music in Atmos. I started by working on my own music and productions. Then, when Tidal eventually took on Atmos, I started to work with a few labels, producers and artists, spatially remixing their music. By the time Apple started their spatial audio platform, I had already been working in Atmos for 4 years. Today, I am grateful to have mixed a large collection of albums and songs in immersive audio across a wide range of genres. I was also one of the first engineers mixing music in the Sony 360 Reality Audio format, and really enjoy working within that spatial ecosystem. I have a separate 13.0 Sony 360 array in the studio where I do my mixing and mastering in this format. The last album I mixed in Sony 360 was Olivia Rodrigo’s GUTS, which features her Platinum single Vampire.

 

How does working in spatial audio change your approach to mixing?

Music production is all about intent for me, so I think before discussing the changes, it is important to understand that there are many ways to approach mixing in immersive audio (and stereo), and when used properly, different approaches can produce different (and potentially equally valid) results.


On a musical level, whether I am in stereo or spatial, I am just trying to make decisions that support the music by creating an environment that brings it to life. In immersive music production there are, however, some expanded opportunities for expression, including three-dimensional orchestration. In immersive mixing I can use the spatial position of musical elements to help the music tell the story. I therefore find that in immersive, spatiality is one of my primary mixing tools. I also find that when I am able to mix in an immersive environment, sound sources often can retain their full sonic characteristics, as there is (potentially) less masking occurring. I love sound so much, and to be able to spatially orchestrate things in a way that allows multiple sounds to breathe together is something that immersive audio can often facilitate in a different way than stereo. I admire a good stereo mix (and master) that solves masking issues, but in immersive I often have the ability to carve sounds less, while still retaining clarity and blend.

 

What are some of the challenges you faced when making the transition from stereo to spatial formats?

Of course the most obvious challenge for everyone is usually the technical setup. When I started, there was very little documentation, and the “specs” were still undefined for music mixing. I was so obsessed with the idea of being able to produce music in immersive (Atmos) that I had my speakers in the studio on stands before there was a functional home entertainment Atmos renderer. At that point, there were very few people I could talk to about getting set up, but I did have incredible support from some key members of the immersive audio community, and I consulted with folks in the film, classical and acousmatic worlds (who have been doing this for a very long time).


For me, making musical mixing decisions in immersive has always felt very natural, so it has never really posed a challenge. What was hard was trying to establish best practices for the Atmos and Sony 360 formats, as they were (and are) evolving by the day.


For those who know my work on YouTube, I am obsessed with these formats, and have tried to understand the technology on a very deep level, as that knowledge has helped me to make creative decisions that translate to consumers, which is very important to me. Research can be challenging, and it also takes a community, so I am deeply grateful to my colleagues in this field for everything they do and share, and honoured to do my part in pushing immersive audio music production forward.

 

What role do you see virtual production technology like Immerse Virtual Studio playing in making spatial audio production more accessible?

I have access to a world-class Atmos and Sony 360 studio. In Atmos I am working on an 11.1.6 array of Lipinski full-range monitors, and in Sony 360, a 13.0 array of Genelec speakers. As a result, I never start mixing in headphones.


That said, for the Embody contest you hosted recently, I decided to start with your software to see what happened. I was unbelievably impressed with the results. Your approach to visualizing a world-class studio (I used the Lurssen Mastering room) helped me to still get the feeling I am used to when I am spatially orchestrating a mix. I was able to have a real sense of the musicality and interaction of different parts as I designed the spatial environment, and I found it to be intuitive and inspiring to use. Eventually I did move to my speakers, and it was on my Atmos array (I did this in 9.1.6) that I then further refined (and eventually mastered) the mix.


I see this as a very practical way forward. I would encourage creators and producers to design their productions with the Immerse Virtual Studio. From there, I think that the role of immersive mastering will be the perfect pairing, as a creator can hand their Atmos mix, designed primarily in headphones, to an experienced mixing and/or mastering engineer, who can refine the balances and make sure the production is ready for commercial release. I already work with many artists, producers and mixers as a mastering engineer, and we have found it to be a very practical and musical relationship. Of course, if one can work with a mix engineer in a properly tuned speaker array, it is always ideal, but I am very sensitive to the high cost of entry in these immersive formats, and am inspired to be a part of solutions that allow more creation with immersive intent.

 

What value do you see in incorporating Immerse Virtual Studio into your own workflow?

I am going to use it as an additional headphone check (using both of your Virtual rooms), and for real time Apple Spatial monitoring. This will essentially be the equivalent of the “car check” for Dolby Atmos. I have had countless methods for facilitating these kinds of checks over the years, but your tool is highly streamlined and intuitive.

 

What role do headphones play in your process? Do you have a favorite pair?

Headphones are an integral part of my work in immersive (and stereo).


As I said above, I always start my mixes on speakers, but I spend a considerable amount of time in headphones as well. There are so many different avenues for delivering immersive audio experiences. When we talk about the most commercially available (currently), we are discussing formats like Atmos and Sony 360. These are both adaptive, object-based ecosystems. It is therefore crucial to understand that these are not discrete-channel speaker playback systems. They certainly can feed those environments (very well!), but it is part of the core philosophy of these formats that a listener can access them on headphones. It is that adaptability and consumer access that has us discussing immersive audio on such a large scale today. Therefore, it is my responsibility to the artists I work with to ensure that a listener is equally inspired listening on headphones and speakers. That is no simple task, and of course the speaker experience is often preferable, but that is no excuse for not ensuring inspiring translation to headphones.


My favourite pair are my Audeze CRBN electrostatic headphones. They are the most natural sounding headphones I have ever heard. I also use my Audeze LCD-5s on every mix, as I find them to be stunningly revealing, and they help me address the problematic spectral elements of the binaural render.

 

What aspirations do you hold for the future of your career? Where do you hope to go from here?

I am now working towards connecting with artists and producers to help them realize their music in immersive audio from the ground up.


I am releasing an album of my own this year called The Immersive Project. This is a record that I composed, orchestrated, performed, recorded, produced, and mixed for immersive audio.


It features over 30 artists from around the world, and was recorded in one of Toronto’s finest studios. In the studio, we had access to some of the finest equipment and acoustics one could ask for, and we captured every single element with various advanced immersive recording techniques. We also had a 9.1.4 monitoring setup in the control room, which allowed me to produce the music for immersive presentation in real time.


This project is something I have been working towards my whole career, as it brings together all of the things I love about music. My goal is for this to not only connect with people, but to also inspire future collaborations for creators who have been looking for a way to expand their palette for musical expression.

 

What do you hope to see for the future of spatial audio?

I want to see more affordable consumer access to meaningful spatial experiences. We need to see advanced headphone and wireless multi-channel technology get to the consumers, so that more people can experience the immense joy that can come from a good immersive production.

Equally, I want to see creators understand the potential and integrate immersive audio into their artistic intent. In the end, technology is here to serve music, and so what I want for the future is for more amazing music to be produced in the spatial ecosystem.