Sonic Design_Exercises

21/4/2025-21/7/2025

Ruthlene Chua Zhen Si 0365222  

Bachelor of Design (Honours) in Creative Media  

  • Instruction
  • Lectures
  • Class Summary 
  • Task 
    • Adjustment of the sound equalizer using Adobe Audition
    • Adjusting sound effects for different environments using Adobe Audition
    • Project 1: Environment Audio Fundamentals
  • Feedback
  • Reflection


INSTRUCTION


Lectures

Week 1_21/4/2025
  • The required task for this week is to write a reflection blog about what we have learnt through the lecture videos below:
Link 1: Sound Fundamentals

Figure 1.0: Sound fundamental video on YouTube 
  • Nature of Sound:
    • Understanding how sound behaves and what contributes to its creation.
  • Capturing Sound:
    • How sound is recorded using various techniques and equipment.
  • Processing Sound:
    • Analyzing and manipulating sound for specific purposes and media.
  • Converting Sound to Digital:
    • Preparing audio files for use in digital applications.
  • Pro Tools:
    • Using a digital audio workstation (DAW) or sound editing software for audio production.
Nature of Sound (Sound is vibration)
The Process of Sound:

Figure 1.1: The Process of Sound
  • 1: Production
    • Sound is created through the vibration of an object, which causes surrounding air molecules to vibrate.
    • For example: Our vocal cords vibrate and create sound waves when we speak. Similarly, when music is played loudly, the speaker cone vibrates, pushing the air around it; this vibration of air is what starts the sound wave.
Figure 1.2: Examples of a speaker cone
  • 2: Propagation
    • These vibrations travel through the air as sound waves, moving from the source toward the listener.
    • Sound travels through a medium, usually air, in the form of waves. These waves create alternating areas of high pressure (compression) and low pressure (rarefaction). As the vibrations move through the air, the molecules shift back and forth, passing the energy along until the sound reaches its destination.
Figure 1.3: Air molecule
    • When the sound waves reach our ears, they cause the eardrums to vibrate. These vibrations are translated into electrical signals by the inner ear and sent to the brain, which allows us to recognize and understand the sound.
Figure 1.4: Samples of the perception of sound

Small Bio Class Section: 
Human Ear
 (An organ made up of three main parts, each playing a role in hearing and processing sound)
  • The outer ear: Consists of the visible part of the ear and the ear canal. Its main function is to gather sound waves from the environment and direct them into the ear.
  • The middle ear: Includes the eardrum, a thin membrane that vibrates when sound waves hit it. This section also contains three tiny bones known as the malleus, incus, and stapes. These bones help to amplify and transmit the vibrations from the eardrum to the inner ear.
  • The inner ear: Contains the cochlea, a spiral-shaped structure filled with fluid and tiny hair cells. These cells convert vibrations into electrical signals that are sent to the brain. The inner ear also includes the semicircular canals and the endolymphatic sac, which are responsible for maintaining balance.
Figure 1.5: Human ear structure

Clear version of sound transmission: 

Every sound is produced by a vibration. Vibration is the back-and-forth movement of an object. For example, when speaking, vibrations are created by the vocal cords in the throat. Sound can only be heard when the energy from these vibrations reaches the ears. But how does this sound energy travel?

To understand this, it's important to know that sound energy travels in the form of sound waves. There are mainly two types of waves: transverse waves and longitudinal waves.

Figure 1.6: Transverse wave

Figure 1.7: Longitudinal waves

Sound waves travel at different speeds depending on the medium. The medium can be a solid, liquid, or gas. In a solid, the particles are packed very closely together. When one particle vibrates, it quickly passes the energy to the next, allowing sound to travel the fastest through solids.

In liquids, the particles are slightly further apart compared to solids. Because of this, sound energy takes a bit longer to travel through liquids.

In gases, the particles are much more spread out, so sound waves travel the slowest through them.

Figure 1.8: How sound travels in different media

What is wavelength? 
Wavelength is the distance between two consecutive compressions or rarefactions in a sound wave. The number of waves that pass through a point in one second is called frequency, and it is measured in hertz (Hz). Sound can also be reflected when it hits a solid surface. This reflection of sound, or bouncing back, is known as an echo. 
  • 3: Perception
    • The sound waves reach our ears, stimulate the eardrums, and are interpreted by the brain as sound.
Psychoacoustics is the study of how humans perceive sound, based on pitch, loudness, and quality. People react to sounds differently. For example, traffic noise might make someone feel uneasy, or one person may be distracted by loud music while another can study just fine. This shows how sound perception varies from person to person.

1. Wavelength
The distance between a point on one wave and the same point on the next. It’s literally the length of the wave.

2. Amplitude
The height of the wave. Higher amplitude means louder sound, which is why a device that increases amplitude is called an amplifier.

3. Frequency
The number of wavelengths that occur in one second. Faster vibrations mean higher frequency, which is perceived as a higher pitch.
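To tie the three together: wavelength, frequency, and the speed of sound are linked by wavelength = speed / frequency. A tiny Python sketch (assuming the speed of sound in air at room temperature is roughly 343 m/s):

```python
# wavelength (metres) = speed of sound (m/s) / frequency (Hz)

SPEED_OF_SOUND_AIR = 343.0  # m/s in air at roughly 20 degrees Celsius

def wavelength(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Return the wavelength in metres for a given frequency."""
    return speed / frequency_hz

# A 440 Hz tone (concert A) is about 0.78 m long in air; a 20 Hz bass
# tone is over 17 m, while a 20 kHz tone is under 2 cm, one reason low
# and high frequencies behave so differently in a room.
print(wavelength(440.0))  # ~0.78
```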

Properties of sound
1: Pitch: Whether the note is high or low
2: Loudness: How loud or soft the sound is
3: Timbre: The quality or character of the sound
4: Perceived duration: How we perceive the pace of the sound (fast or slow?)
5: Envelope: The structure of a particular sound
    (when the sound gets loud, gets soft, or maintains its volume)
6: Spatialization: The location of the sound in space (left/right, close to you or further away)

Figure 1.9: Hertz cycle per second

Figure 2.0: The range of human hearing

Link 2: 
  • Ear training = essential
    • Good mixes come from good ears. It’s not just about using gear or plugins; you need to actually hear what’s going on in the sound.
  • Frequencies
    • Bass: low-end, can make things sound full or muddy
    • Mids: where most instruments sit, but too much = boxy
    • Highs: clarity and sparkle, but too much = harsh
  • Mixing problems
    • Muddiness = too much low-mid
    • Harshness = too much high-mid
    • Dull mix = not enough top end
    • Thin mix = missing low frequencies
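These band boundaries are rough conventions rather than hard rules, but they can be written down as a quick reference. A small Python sketch (the exact cutoff values are common approximations, not figures from the lecture):

```python
# Rough labels for the frequency bands described above. The boundaries
# are common approximations; engineers draw them slightly differently.

BANDS = [
    ("bass",  0,     250),     # low end: fullness, or muddiness in excess
    ("mids",  250,   4_000),   # where most instruments sit; too much = boxy
    ("highs", 4_000, 20_000),  # clarity and sparkle; too much = harsh
]

def band_of(freq_hz: float) -> str:
    """Name the band a frequency falls into."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "outside the hearing range"

print(band_of(80), band_of(1000), band_of(8000))  # bass mids highs
```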
Week 2_28/4/2025
  • The required task for this week is to write a reflection blog about what we have learnt through the lecture videos below:
Link 1: Sound Design Tools

Figure 2.1: Sound Design Tools on YouTube
  • Any sound editing software or digital audio workstation (DAW) typically comes with a set of common tools that are essential for sound design.
  • For example, imagine you're doing graphic design. There are basic functions you would expect to find, like creating a drop shadow. Even if the software doesn’t have a specific plugin or menu option for drop shadows, you can still create one manually by making a copy of the object, placing it behind the original, lowering its opacity, and adding some feathering. The same principle applies to sound design tools: even the simplest tools can achieve complex results if you know how to use them creatively.
  • While these tools might seem basic or even boring at first, they are actually crucial to sound design.
    Here are five typical techniques often used in sound design:
    (You don’t necessarily use all five at once, but these are common options.)

    1. Layering
    • Purpose: To make sounds richer, more interesting, and of higher quality.
    • How to apply it: Layering involves taking two or more sounds and placing them on top of each other. That’s the basic definition. However, just like in visual design, successful layering requires a good sense of balance and blending. In audio, layering allows you to create entirely new, unique sounds by blending different elements together — that's the ultimate goal.
    2. Time Stretching / Time Compression
    • Purpose: To stretch or compress the duration of a sound without necessarily changing its pitch.
    • How it works: When you stretch a sound, it becomes longer, and when you compress it, it becomes shorter. Sometimes stretching can also affect the pitch — for example, stretching a voice recording will make it slower and deeper, while compressing it will make the voice faster and higher. Most DAWs let you control whether or not the pitch changes when adjusting the time.
    3. Pitch Shifting
    • Purpose: To raise or lower the pitch of a sound.
    • How it works: In sound design, pitch shifting is used to achieve different effects. For instance, raising the pitch can create a chipmunk-like voice, while lowering it can produce a deep, scary, monster-like sound. The way you shift the pitch depends on the target effect you’re aiming for — it’s very subjective and creative.
    4. Reversing
    • Purpose: To create strange or unusual sounds by playing them backward.
    • How it works: Reversing a sound can produce interesting and unexpected effects. When combined with layering techniques, reversed sounds can add unique textures and energy to a project.
    5. Mouth It!
    • Purpose: To create sound effects manually when no suitable recordings are available.
    • How it works: If the perfect sound can't be found or recorded, sometimes the best solution is to mimic it with your mouth. This technique can be surprisingly effective and gives you a lot of creative freedom.
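    As a toy illustration of a few of these techniques, here is a short Python/NumPy sketch on a generated test tone. This is not how a DAW implements them (real time-stretching and pitch-shifting are far more sophisticated); it only shows the naive versions described above:

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second

# A one-second 440 Hz test tone stands in for a recorded sound.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)

# Reversing: just play the samples backwards.
reversed_tone = tone[::-1]

# Naive pitch shift by resampling: reading the samples faster or slower
# raises or lowers the pitch, but also changes the duration (the
# "chipmunk" / "monster" effect described above).
def resample(samples: np.ndarray, factor: float) -> np.ndarray:
    """factor > 1 raises pitch (and shortens); factor < 1 lowers it."""
    idx = np.arange(0, len(samples), factor)
    return np.interp(idx, np.arange(len(samples)), samples)

chipmunk = resample(tone, 2.0)   # one octave up, half as long
monster  = resample(tone, 0.5)   # one octave down, twice as long

# Layering: mixing two sounds is just adding their samples,
# scaled so the result doesn't clip.
layered = 0.5 * tone + 0.5 * np.sin(2 * np.pi * 660 * t)

print(len(chipmunk), len(monster))  # 22050 88200
```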
    Link 2: The Art of Sound Design
    • Using sound allows you to tell a story in ways that visuals alone never could. Adding sound effects to your project brings your film to life, making the experience much more immersive and emotionally powerful.
    • Sound effects are without a doubt one of the most powerful assets for storytelling. They can dramatically enhance the viewer’s experience and make every moment feel more real and intense.
    Where to Find High-Quality Sound Effects
    • Co-Pulse: One of the most widely used sound effect packs.
    • YouTube: A good source for free or royalty-free sound effects (be sure to check usage rights).
    Where to Start
    • Use the right sound effects at the right moments.
    • For example, if your scene shows a mountain hike, you could add sounds of wind, birds chirping, and trees swaying.
    • The key is to match the environment naturally to what the audience sees on screen.
    When Is It Too Little or Too Much?
    • A good rule of thumb: don't use more than three sound effects per scene.
    • Sometimes, even just one well-placed sound can make a scene feel complete.
    • Look carefully at your clip, identify the main subject, and design your sound around it.
    • For example: if the focus is an elephant statue, add an elephant sound effect; if it’s a landscape scene with waves crashing, add ocean waves and perhaps some soft wind sounds.
    Caution Part: 
    • The most important thing to remember: make the sound effects loud and obvious enough.
    • Most people will be watching videos on their phones with the volume below 50%. If the sound is too subtle, they won't hear or feel it at all.
    POV: Less is more

    Week 3_5/5/2025
    • The required task for this week is to write a reflection blog about what we have learnt through the lecture videos below:
    Figure 2.2: Ultimate Guide to Diegetic vs Non-Diegetic Sound on YouTube

    Diegetic Sound – What It Really Means
    • The word “diegetic” comes from the Greek word “diegesis,” which basically means storytelling. Think of it as everything that exists inside the world of the film — the stuff the characters can actually hear or see.
    • The director acts like a narrator, using sound to build the world around the characters.
    • Internal diegetic sound is when we hear a character’s thoughts. The other characters can’t hear it, but it’s still part of the story world.
    • Diegetic sound helps show what a character is experiencing  
      • it can be used to give us insight into their mental state.
    • Sound can also be used in unexpected ways to create emotion or surprise. 
    Nondiegetic Sound – What the Characters Can’t Hear
    • Nondiegetic sound is anything the audience hears that the characters don’t. This includes background music, narration (when the narrator isn’t part of the story), and added sound effects.
    • Sound effects are also added for dramatic or funny moments, like an exaggerated swoosh or punch sound in a comedy.
    Transdiegetic Sound – Switching Back and Forth
    • Sometimes, sound starts off nondiegetic and then becomes part of the scene, or the other way around. This is called transdiegetic sound.
    • It’s a clever way to blur the line between what’s real and what’s not, perfect for dream sequences or emotional moments.
    When It Doesn’t Fit the Rules
    • Some films don’t follow the usual rules and play around with sound in really creative ways:
      • In Magnolia, all the characters sing along to a song that normally would just be background music. It becomes an emotional moment where reality and feeling mix together.
    Figure 2.3: Sound effects in Magnolia (part of the scene)
      • In Psycho, Norman Bates hears his mother’s voice in his head, which is really his own, giving us insight into his fractured mind.
    Figure 2.4: Sound effects in Psycho (part of the scene)
      • In Joker, Arthur sings along to music we thought was just playing in the background, hinting that maybe it’s all in his head.
    Figure 2.5: Sound effects in Joker (part of the scene)
      • In La La Land, background music in a restaurant turns into something a character hears inside her own imagination, leading her to make a bold decision.
    Figure 2.6: Sound effects in La La Land (part of the scene)

    Sound Theory Made Simple
    • How we hear sound in film:
      • Acousmatic sound: You can hear it, but you don’t see where it’s coming from. This could be part of the story (like birds chirping offscreen) or not (like the movie score).
      • Visualized sound: You see and hear the source of the sound at the same time.
    • Filmmakers use these techniques to guide what we feel, what we focus on, and how we understand a scene.
    Final Thoughts
    • We always say we’re watching a movie but actually we’re also listening to one.
    • Sound design is just as powerful as visuals when it comes to storytelling. It helps create the mood, the meaning, and the connection we feel with what’s on screen.
    Week 4_12/5/2025
    • The required task for this week is to write a reflection blog about what we have learnt through the lecture videos below:
    Figure 2.7: What is Soundscape video on YouTube

    How It Works
    • When we hear a sound, like a dog barking, our brains quickly connect it to something familiar.

    • Put a few sounds together, and we start to build a full scene in our heads, almost like watching a movie without the screen.

    What Soundscapes Can Tell Us
    • Distance – We can sense whether something is close or far based on how it sounds.

    • Space – Certain sounds make us feel like we’re in a wide-open space or a small, enclosed room.

    • Direction – We can tell where a sound is coming from, left, right, behind, or above us.

    • Temperature, Time, and Era – Sounds can suggest warmth or cold, a time of day, or even a specific moment in history.

    • Emotion – A single sound can make us feel calm, tense, joyful, or nostalgic.

    Music Is Part of It Too
    • Music plays a big role in soundscapes; it blends with other sounds to help set the mood.

    • It adds emotion and helps the scene feel more alive.

    Sounds We Learn vs. Sounds We Feel
    • Learned sounds are ones we recognize because of what we’ve seen or heard before, like the sound of a cash register making us think of money.

    • Instinctual sounds are hardwired into us, high-pitched sounds often feel safe, while deep, low-pitched ones can feel threatening.

    • These instincts come from how we, as humans, evolved to respond to our surroundings.

    Why It Matters
    • Soundscapes are more than just background noise.

    • They help us tell stories, shape how we feel, and make us feel like we’re somewhere else; just through sound.


    Class Summary

    Week 1: In the first week, we were introduced to the module along with the requirements for the assignments. The lecture explained two types of headsets, one wired and one Bluetooth. The wired option was noted to be more accurate and was recommended for use. Some senior students' works were shown to give us a clearer idea of the expectations, and the lecturer offered helpful advice to guide us through the assignment's goals and important points to watch out for. We were also given a playlist and asked to complete an exercise where we adjusted the equalizer settings to produce a flat version of the sound.

    Week 2: In this exercise, we edited a given soundtrack to reflect six different environments: phone, walkie-talkie, closet, airport, stadium, and bathroom. The goal was to understand how sound changes based on acoustic settings and to apply suitable equalizer and reverb adjustments. The lecturer guided us through the entire session, checked on our progress, and gave us feedback.

    Week 3: For this week’s lesson, we learned how to adjust sound direction using different methods. There are two main ways to edit sound. We were then asked to edit a soundtrack from last week to make it sound like someone was slowly walking into a cave, requiring us to apply sound effects based on the specific situation. Our lecturer also introduced us to the Sonic Design Studio, explained the purpose of the sonic equipment, and familiarized us with the studio environment.


    Task

    Week 1_22/4/2025 (Adjustment on sound equalizer using Adobe Audition)

    In this exercise, we were given one flat (original) sound file along with four edited versions. Our task was to identify which parts of the edited tracks differed from the original, such as changes in bass or treble. Using the Parameter Equalizer, we learned how to adjust the sound frequencies to re-equalize the edited tracks and make them sound as close as possible to the flat version.

    Figure 1.a: 5 Soundtracks that needed to be used in the exercise

    At first, it was challenging to hear the differences without using a proper headset. Once headphones were used, the contrast between tracks became much clearer, highlighting the importance of accurate audio equipment when working with sound. During class, the professor let us try both his headphones and ours to compare the differences. His headphones were closer to professional studio models, and when we used them, the sound changes became much more obvious. The contrast between the original and edited versions was significantly clearer, proving how much audio equipment can affect the way we perceive sound. This experience was especially helpful and will likely play an important role in our understanding and progress throughout the semester.

    A total of four soundtracks needed to be adjusted using the equalizer: 

    Figure 1.b: Equalizer 1

    Figure 1.c: Equalizer 2

    Figure 1.d: Equalizer 3

    Figure 1.e: Equalizer 4 

    Week 2_29/4/2025 (Adjusting sound effects for different environments using Adobe Audition)

    In this exercise, we were given a soundtrack and instructed to adjust it according to different environmental sound effects. We were asked to modify the original audio to match various acoustic settings, including a phone, walkie-talkie, closet, airport, stadium, and bathroom. 

    The phone sound effect is narrower and more compressed, often emphasizing the midrange frequencies while cutting off both the low and high ends.

    Figure 1.f: Phone sound effect equalizer
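    As a rough sketch of what this narrow-midrange shape does, here is a toy Python/NumPy band-pass built from two one-pole filters. The 300–3400 Hz band is the classic telephone passband; the filter design is my own simplification, not the actual Audition settings:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sr=44_100):
    """Very simple one-pole low-pass filter (a rough sketch, not a
    production EQ)."""
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    y = np.zeros_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def telephone_effect(x, sr=44_100):
    """Keep roughly the 300-3400 Hz telephone band: cut the lows with a
    high-pass (signal minus its low-passed copy), then cut the highs."""
    lows_removed = x - one_pole_lowpass(x, 300, sr)
    return one_pole_lowpass(lows_removed, 3400, sr)

# A 50 Hz rumble should come out much quieter than a 1 kHz voice tone.
sr = 44_100
t = np.arange(sr) / sr
rumble = telephone_effect(np.sin(2 * np.pi * 50 * t), sr)
voice = telephone_effect(np.sin(2 * np.pi * 1000 * t), sr)
print(np.abs(rumble).max() < np.abs(voice).max())  # True
```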

    The "closet" equalizer setting gives the sound a sense of heaviness or muffled pressure, creating a feeling of being trapped or enclosed. Because of this, the audio mixing needs to be more careful and deliberate, focusing on reducing low-end muddiness and enhancing clarity, while still preserving that boxed-in, confined atmosphere.

    Figure 1.g: Closet sound effect equalizer

    For this equalizer, we are required to make the audio sound like it’s coming from a walkie-talkie. Walkie-talkie audio is typically sharp, distorted, and somewhat muffled due to its limited frequency range and low audio quality. The lecturer mentioned that the sound of a walkie-talkie is based on the sound of a telephone, but with a higher pitch and more compressed frequency range, giving it a more artificial and tinny quality.

    Figure 1.h: Walkie Talkie sound effect equalizer

    Airport sound effects typically have a spacious and open quality. To recreate this atmosphere, the sound should be edited with added echo and adjusted reverb to simulate the acoustics of a large, open space with hard surfaces. This helps create a realistic sense of depth and distance, making the environment feel more immersive.

    Figure 1.i: Airport sound effect equalizer

    Stadium sound effects are characterized by a wide, open, and echoing atmosphere. The audio should be edited with a large reverb setting and extended decay time to mimic the vast, reflective surfaces of a stadium. Adding subtle delays and layering crowd ambience can further enhance the realism and sense of space.
    Figure 1.j: Stadium sound effect equalizer

    Bathroom sound effects are more enclosed and reflective because of hard surfaces like tiles and porcelain. The audio should be adjusted with a short, tight reverb and a subtle echo to convey the feeling of a small, confined space.

    Figure 1.j: Bathroom sound effect equalizer
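    The difference between these spaces can be sketched as a single feedback delay: a short delay with quick decay reads as a bathroom, a long delay with slow decay reads as a stadium. A toy Python/NumPy illustration (real reverbs combine many such delays):

```python
import numpy as np

SR = 44_100

def feedback_echo(x, delay_s, decay, sr=SR):
    """Crude echo: feed the signal back on itself after `delay_s`
    seconds, scaled by `decay` each repeat. Delay time and decay are
    what shape the perceived size of the space."""
    d = int(delay_s * sr)
    y = x.astype(float).copy()
    for i in range(d, len(y)):
        y[i] += decay * y[i - d]
    return y

# A four-second buffer with a single impulse (a "clap") at the start.
clap = np.zeros(4 * SR)
clap[0] = 1.0

bathroom = feedback_echo(clap, delay_s=0.02, decay=0.3)  # short, tight
stadium = feedback_echo(clap, delay_s=0.25, decay=0.7)   # long, booming

# The stadium's reflections arrive later and die away more slowly,
# so more of them stay above an audible threshold.
print(np.count_nonzero(stadium > 0.01) > np.count_nonzero(bathroom > 0.01))  # True
```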

    All Audio (phone, walkie-talkie, closet, airport, stadium, bathroom): LINK

    Week 3_5/5/2025 (Adjust different direction of sound effects using Adobe Audition)

    For this particular exercise, we were instructed to move the airplane sound from left to right and gradually fade it out. There are two ways to adjust the sound: one is by editing directly in the soundtrack, and the other is by editing below the soundtrack. The second method allows the edits to remain even if the original soundtrack is removed, and you can apply the same sound effects to a new soundtrack.

    In Adobe Audition, the blue line controls the Pan, which determines the stereo positioning of the sound, while the yellow line adjusts the volume. To create the effect of sound moving from left to right, you can adjust the pan line by adding keyframes at different points on the timeline. Start by placing a keyframe where you want the sound to begin (on the left), and another where you want it to end (on the right). 

    By dragging keyframes, we can smoothly transition the sound across the stereo field. Additionally, the yellow volume line can be adjusted to control the overall loudness, allowing for fades or dynamic volume changes. Fine-tuning both the pan and volume adjustments will help achieve a natural, flowing movement of the sound throughout the track.

    Figure 1.k: Plane 1 (flying from left to right)
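    What the pan and volume keyframes compute can be sketched in a few lines. This assumes a simple constant-power pan law; Audition's actual implementation may differ:

```python
import numpy as np

SR = 44_100

def pan_left_to_right(mono, fade_out=True):
    """Sweep a mono clip from full left to full right using a
    constant-power pan, with an optional volume fade-out at the end."""
    n = len(mono)
    pos = np.linspace(-1.0, 1.0, n)   # -1 = hard left, +1 = hard right
    angle = (pos + 1) * np.pi / 4     # 0 .. pi/2
    left, right = np.cos(angle), np.sin(angle)
    gain = np.linspace(1.0, 0.0, n) if fade_out else np.ones(n)
    # Two columns: left channel, right channel.
    return np.stack([mono * left * gain, mono * right * gain], axis=1)

t = np.arange(2 * SR) / SR
engine = np.sin(2 * np.pi * 110 * t)  # stand-in for the plane sound
stereo = pan_left_to_right(engine)

print(stereo.shape)  # (88200, 2)
```

Constant-power panning keeps the sound equally loud as it crosses the centre, which is why it is a common default pan law.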

    In Adobe Audition, the second method involves editing below the soundtrack in the Multitrack view, using a separate audio track for the adjustments. This allows us to apply changes to the sound without altering the original audio file directly. 

    When edits are made beneath the soundtrack, such as applying effects or modifying the pan and volume, these adjustments are saved independently of the original track. As a result, even if the original soundtrack is removed or replaced, the edits will remain intact. We can then apply the same effects to a new soundtrack by simply dragging the edited portion below the new track, ensuring consistency across multiple tracks without losing previous work. 

    This method is particularly useful when working with multiple versions of a sound or when we need to reapply the same sound effects to different tracks. By using this technique, we maintain greater flexibility and non-destructive editing, which is essential for refining and experimenting with sound design.

    Figure 1.l: Plane 2 (flying around and vanishing, edited using a different method)

    Next, we were asked to edit the previous soundtrack (from last week), in which a woman is speaking. The goal was to make it sound like she was slowly walking into a cave, with added echo and sound adjustments. First, I tried applying the equalizer and reverb effects. After that, I focused on adjusting specific parts of the sound to enhance the echo and other effects, creating the sense of a cave environment.

    I used the second method of editing, making changes below the soundtrack so the effects remain even if the original audio is replaced. In the sound adjustment process, I applied several effects to enhance the cave-like feeling. For example, I used EQ Band 1 Center Frequency under the Equalizer to shape the tonal quality of the voice. Then, I applied Studio Reverb settings including Early Reflections, High Frequency Cut, Damping, and adjusted both the Dry Output Level and Wet Output Level to balance the clarity and depth of the echo. These effects helped simulate the acoustics of a cave, with the voice sounding distant and gradually more reflective as if the speaker were walking deeper inside.

    Figure 1.m: Cave (walking into the cave and vanishing)
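    The "walking deeper into the cave" automation boils down to crossfading from the dry voice to a wetter version over time. A toy Python/NumPy sketch (a single delayed copy stands in for the Studio Reverb output):

```python
import numpy as np

SR = 44_100
t = np.arange(2 * SR) / SR
voice = np.sin(2 * np.pi * 220 * t) * np.exp(-t)  # stand-in for the recording

# "Wet" version: the voice plus a delayed, quieter copy of itself
# (a single echo standing in for the reverb output).
d = int(0.12 * SR)
wet = voice + 0.6 * np.concatenate([np.zeros(d), voice[:-d]])

# Automate the wet/dry balance: start fully dry, end fully wet, so the
# echo grows as the speaker walks deeper into the cave.
mix = np.linspace(0.0, 1.0, len(voice))
cave = (1 - mix) * voice + mix * wet

print(cave.shape == voice.shape)  # True
```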

    All Audio
    (Plane 1: fly over from left to right; Plane 2: fly around and vanish; Walk into Cave): LINK


    Week 4_13/5/2025 Project 1 Audio Fundamentals

    We were assigned a warm-up exercise for Project 1 that involved editing environmental sound. The task was to create an ambient sound mix based on two provided concept art images. We were allowed to search for suitable free sound effects online and layer or edit them using Adobe Audition to match the given scenarios.

    Figure 1.n: The first environment audio sound design

    For this particular environment, I incorporated techno gadget typing sounds to enhance the atmosphere and make it feel more like an experimental lab.

    Figure 1.o: Techno gadget soundtrack

    In the picture, I noticed some people wearing armored suits, which led me to interpret the scene as a military presence protecting the lab, likely for security purposes. To reflect this, I added the sound of an army patrol passing by the central object. However, the original soundtrack I found didn’t quite match the environment, so I adjusted the echoes to make it sound more spacious and fitting for an experimental lab. I also modified the panning and volume to simulate the soldiers walking past, emphasizing the weight and texture of their heavy gear.

    Figure 1.p: Adjustment of the parametric equalizer and reverb

    Then, I noticed that the sound of the army walking past in heavy boots lacked a sense of direction: it wasn't clear whether they were approaching, moving away, or how close they were. To fix this, I adjusted the volume levels and used stereo balance to create a more realistic sense of movement and spatial depth.

    Figure 1.q: Army walking soundtrack and editing

    To make the environment sound more engaging, I added the sound of a door opening and closing, as if someone were entering the lab, helping to build a narrative through sound. However, the original door sound didn’t match the pitch or acoustic feel of the environment, so I adjusted the reverb and pitch to better blend with the atmosphere and make it more immersive.

    Figure 1.r: Atmosphere adjustment

    Here is the adjustment made to the door opening and closing sound and its echo.

    Figure 1.s: Door sound effects 

    For this part, I added the subtle sound of a gun brushing against an army uniform. I also edited its frequency and volume.

    Figure 1.t: Gun sound 

    The main object in the environment is the tree, so I added gas-absorbing and gas-releasing effects to make it sound as though the tree is breathing through air pipes.

    Figure 1.u: Gas absorbing sound 

    Since the door opens, there must be someone entering. So, I added the sound of high heels walking to make it feel like someone is walking into the lab. I also adjusted the volume to make the footsteps gradually fade as the person walks farther away.

    Figure 1.v: Walking with high heels soundtrack

    Last but not least, I added the sound of water droplets to make it seem like the tree is breathing and absorbing nutrient-rich liquid from the pipe to sustain the experiment. For this part, I edited the reverb to give it extra echo.

    Figure 1.w: Water droplets tune adjustment 

    That's all for the first picture. The outcome is in the linked drive document.

    As for the second exercise, it also took place in the experiment lab, but the difference is that everything was mechanical and made of metal. Therefore, the echo in the environment needed to sound more metallic and industrial.

    Figure 1.x: The second environment audio sound design

    Just like in the first assignment, I searched for and found all the necessary soundtracks online, downloaded them, and imported them into Adobe Audition.

    Figure 1.y: List of soundtracks

    This was the dialogue soundtrack. Since the original audio was too loud, I adjusted the volume to lower it in certain areas.

    Figure 1.z: Talking volume adjustment

    This soundtrack was a machine sound. The purpose was to create the feeling of a large space with echoes, enhancing the sense of depth and atmosphere in the environment. I adjusted the reverb to make it sound more like an empty, wide experiment hall.

    Figure 2.a: Reverb adjustment on machines sounds

    I found an interesting sound effect: a countdown. I adjusted the volume from low to high to build a more intense and suspenseful feeling.

    Figure 2.b: Gradual change of sound track

    The original purpose of the soundtrack was to mimic a fan sound. I adjusted the volume, added reverb and echo, and combined it with an iron clinking sound to make it feel like a heavy machine slowly starting up and getting ready to function.

    Figure 2.c: Reverb and echo adjustment on the iron fan machines sound

    The purpose of adjusting the volume and reverb for the typing sounds was to create a realistic programming environment. By adding typing echoes and varying the volume (sometimes loud, sometimes quiet or silent), I aimed to mimic the natural rhythm of programmers typing.

    This “sometimes there is sound, sometimes there isn’t” effect helps make the scene feel more authentic and dynamic, reflecting how typing sounds naturally fluctuate in a real workspace (maybe?).

    Figure 2.d: Echo adjustment

    So, those are all the adjustments I made for this particular exercise. The final outcome can be found in the drive link. 


    Feedback

    Week 1_General feedback: When you're adjusting the equalizer, the equipment you're using is quite important, as it affects how accurately your headphones reproduce sound. If your headphones don’t support bass, mid-focus, or bright sound profiles, it becomes difficult to distinguish the audio details and the effects added through editing. If possible, consider getting a pair of studio headphones. If your current headphones already offer features like enhanced bass or other sound profiles, that’s even better. Any adjustments made on the equalizer will influence how the sound is perceived, so take time to listen carefully to the flat version of the audio.

    Week 2_Specific feedback: The others were fine, but the walkie-talkie effect needed some adjustments. Its base is similar to the phone sound effect, but it requires a more muffled and sharper quality. To achieve this, the gain can be increased to make the sound louder and give it a more distinct, compressed tone.

    Week 3_General feedback: The cave echo shouldn’t start too wide. Remember, the environment is one where the speaker is gradually walking deeper into the cave. As she moves further inside, the echo should slowly become more prominent, and the sound should grow more muffled and unclear. Therefore, the adjustments need to reflect this progression, don’t begin with a strong echo effect right away, but increase it gradually based on the situation.

    Week 4_Specific feedback: Stadium, bathroom, and airport currently all sound the same and need to be differentiated. For the cave scene, it's not from a first-person point of view. Instead, you're observing someone else walking into the cave while they’re talking. It begins with the person in front of you, then they walk away and enter the cave. Make sure to include the ambient sounds of both environments, too.

    Week 5 Absent_not feeling well :(


    Reflection 

    Experience
    These exercises have been a really hands-on journey through different aspects of sound design. It started with basic equalization, where the task was to match edited soundtracks back to a flat version using the Parameter Equalizer. That felt tricky at first, especially without good headphones, but once better audio gear was used, the differences became clearer. From there, the exercises moved into adjusting sound for different environments like stadiums, closets, and bathrooms, which pushed the understanding of how space changes how sound is heard. Direction-based editing, like moving an airplane sound from left to right or making a voice sound like it’s walking into a cave, helped bring movement and space into focus. The highlight was definitely the warm-up project, where ambient sound scenes were created based on concept art. Adding and editing layers like machines, footsteps, and even water droplets made it feel like crafting a full world with just sound.

    Findings
    One of the biggest takeaways is how much equipment affects sound perception. Using high-quality headphones made it easier to hear bass, mids, and treble clearly, which really helped in fine-tuning each track. The equalizer turned out to be more than just a tool; it was the key to matching sound to space. Also, learning the difference between instinctual and learned sounds gave more depth to how certain effects were chosen. Something as simple as reverb or echo changed the mood entirely, especially when used to show distance or emotional tone. Editing volume and panning wasn't just technical; it became a way to lead the listener through the space and create direction, like someone slowly walking into a room or moving further away.

    Observations
    What stood out the most was how sound could completely change the way a scene feels, even without visuals. Carefully placed effects brought emotion, whether it was tension from a countdown or calm from soft echoes. It became clear that sound design isn’t just about adding noise, but about building a believable space and telling a story through atmosphere. Even tiny edits, like adjusting the timing of footsteps or the echo of a door closing, made everything feel more alive. Sound has this quiet power, it shapes mood, gives direction, and brings moments to life in a way that feels both creative and real.
