Ear to the Ground

A Breakthrough “Smarter” Method for Assessing Workplace Noise Exposure

Workplace Noise Dosimetry: Monitoring and Assessment

Noise dosimetry is one of the most common and important methods of assessing noise exposure in the workplace. “Dosimetry” refers to placing a small, data-logging sound level monitor on a worker for an extended period of time, such as an entire work shift. Ideally, the logged sound data provide a direct measure of that worker’s noise exposure. Unfortunately, traditional dosimetry is rarely that simple, or that accurate.

As acoustical consultants specializing in noise and vibration control, we have found that although there is a real need for the type of data that dosimetry can furnish, the traditional instrumentation and methods can be fraught with problems. In many cases, the logged results overstate workers’ noise exposures. Traditional dosimetry measurements are prone to irrelevant “pseudo-noises”, such as bumps against the microphone, rustling of clothing, or the worker’s own voice, which the dosimeter alone cannot differentiate from bona fide workplace noise. These inflated results can misleadingly suggest that the workplace is louder than it actually is, and lead to unwarranted effort and cost in pursuing noise controls for a phantom problem.

Fortunately, recent advances in digital signal processing technology have enabled new dosimetry methods that are both more accurate and much more powerful at identifying sources of excessive noise exposure in the workplace. In fact, in some jurisdictions, the governing Standards now recommend using these new methods to obtain more accurate assessments of noise exposure.

Assessing and Managing Noise in the Workplace

In broad terms, effectively managing noise in the workplace entails:

  • knowing how the workers’ noise exposure levels, and the sound levels throughout the workplace, compare with the regulatory limits;
  • implementing engineered noise controls where practicable, to reduce the noise to within the limits; and, for areas where the sound levels cannot feasibly be reduced,
  • determining appropriate hearing protective devices, enforcing their use, and implementing a hearing conservation program that includes regular audiometric testing.

A shorthand way to think of these three components is:

  • workplace noise assessment
  • workplace noise control
  • hearing conservation

Traditionally, dosimetry has been used as one element of the first step in noise management – workplace noise assessment. If the dosimetry results suggest that certain workers’ noise exposure levels exceed the regulatory limit, additional steps can be taken to create a noise map of the workplace, identify the equipment, operations and activities contributing to elevated noise levels, and investigate options for engineered noise control measures. In this respect, traditional dosimetry has been just the first step in the process, and when it indicates excessive noise levels, considerable additional work is still required.

New Noise Dosimetry Technology Now Offers a Smarter Approach

A key benefit of dosimetry is that it is automated – i.e., it can be pre-programmed to run continuously while a worker goes about their day with the device attached, gathering many hours of continuous sound level data without the ongoing presence or participation of a technician or acoustical consultant. This reduces the risk of missing occasional or intermittent noisy events, or other variations in sound level, over the course of a work shift.

But the automated nature of dosimetry is also its weakness. Unlike an acoustical expert conducting measurements and observations in person, the dosimeter has no real built-in “smarts”, so it cannot apply field experience and judgment to recognize anomalous noises and exclude them in real time to avoid inflated overall readings. Moreover, traditional dosimeters use primarily analog electronics, which cannot process or store the acoustic frequency spectrum of the noise. So they offer little information about the character of the noise that could help identify the dominant noise sources.

We began researching better ways to conduct dosimetry measurements several years ago. For a decade or more, there have been full-size digital sound level meters capable of gathering digital audio recordings while simultaneously measuring sound levels. They can also process acoustic frequency information, such as measurements in full-octave or 1/3-octave frequency bands. But these units are much bigger and more fragile than a dosimeter.

So we would either strap a cumbersome digital sound level meter to the worker – perhaps attached to a belt, with the microphone on a cable, pinned to the lapel or shoulder – or use a small dosimeter alongside a digital audio recorder. By recording audio and configuring the instrument to store results in fast time steps, typically once per second, we could view the graph of sound level versus time (often called a “time-history”) in post-processing, find any peaks in sound level, and then listen to the audio recording to identify the type of sound.
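For readers who like to see the mechanics, the post-processing amounts to something like the following sketch. It is a minimal illustration only, with a hypothetical file name and column names (not any instrument vendor’s software): it computes the energy-average level of the shift from one-second A-weighted samples, and flags seconds that stand well above the local background as candidates to audition in the audio recording.

    # Minimal sketch of post-processing a one-second time-history.
    # Assumes a CSV with hypothetical columns "time_s" and "LAeq_1s"
    # (the A-weighted level logged for each one-second interval).
    import numpy as np
    import pandas as pd

    df = pd.read_csv("shift_time_history.csv")      # hypothetical file name
    levels = df["LAeq_1s"].to_numpy()

    # Energy-average (Leq) over the record: 10*log10(mean of 10^(L/10))
    leq_shift = 10 * np.log10(np.mean(10 ** (levels / 10)))

    # Flag one-second samples well above the running background level,
    # as candidates to audition in the synchronized audio recording.
    background = pd.Series(levels).rolling(60, center=True, min_periods=1).median()
    peaks = df[levels > background.to_numpy() + 15]

    print(f"Shift Leq: {leq_shift:.1f} dBA")
    print(f"{len(peaks)} one-second samples flagged for audio review")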

Traditional Dosimetry Can Significantly Overstate Noise Exposure Levels

This new approach confirmed exactly what we had long suspected – that traditional dosimetry frequently overstates true noise exposure levels. But we were surprised by the severity of these overestimates. By clipping the irregularities out of the data, we found that in a considerable number of cases the unfiltered data showed noise exposure levels exceeding the governing limits, while the corrected results were well within the limits. Given how far apart the two results can be, the bottom-line consequences for a business not using modern dosimetry methods can be significant.
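To illustrate how little pseudo-noise it takes to shift the result, consider a purely hypothetical shift (the numbers are illustrative, not from any particular survey). Using the energy-average formula with the 3 dB exchange rate, Lex,8h = 10·log10[(1/8 h)·Σ tᵢ·10^(Lᵢ/10)], a shift of 7.5 hours at 82 dBA plus 0.5 hours of microphone bumps and close-range talking near 95 dBA works out to about 85.4 dBA, above a typical 85 dBA exposure limit. With those artifacts clipped out, the corrected exposure is roughly 82 dBA, comfortably within the limit.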

To our good fortune, and that of our clients, over the last few years, at least two manufacturers of acoustical instrumentation have introduced fully digital dosimeters, which can gather calibrated audio recordings, measure in full-octave or 1/3-octave frequency bands, and log the results with very fine time resolution (one second intervals or shorter).

These manufacturers provide post-processing software that easily allows the user to view the time-history graphs and listen to the audio recording synchronously, as the cursor scrolls through the graph. The user can then highlight and clip out atypical events, or group together similar acoustical events and calculate cumulative exposure levels from different activities or noise sources.
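The “clip and recompute” step itself is conceptually simple. As a rough sketch (again with hypothetical file, column and label names, not the manufacturers’ actual software): once each one-second record has been labelled during review, the corrected exposure is simply the energy sum over the retained seconds, normalized to the 8-hour reference period.

    # Minimal sketch of excluding labelled artifacts and recomputing exposure.
    # Assumes each one-second record carries a review label such as
    # "ok", "voice" or "bump" (hypothetical labels and column names).
    import numpy as np
    import pandas as pd

    df = pd.read_csv("shift_time_history_labelled.csv")

    def lex_8h(laeq_1s_dba):
        """Exposure level normalized to 8 hours, 3 dB exchange rate."""
        seconds_in_8h = 8 * 3600
        energy = np.sum(10 ** (laeq_1s_dba / 10))   # one-second energy terms
        return 10 * np.log10(energy / seconds_in_8h)

    raw = lex_8h(df["LAeq_1s"])
    corrected = lex_8h(df.loc[df["label"] == "ok", "LAeq_1s"])
    print(f"Raw Lex,8h: {raw:.1f} dBA   Corrected Lex,8h: {corrected:.1f} dBA")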

New and Evolving Regulatory Noise Measurement Standards

We suspect that the instrumentation manufacturers introduced these new capabilities in response to requests from users in industry and consultants like us. But it may also have been in response to evolving measurement standards. In Canada, for example, the newly revised CSA National Standard Z107.56-18 “Measurement of Noise Exposure” recognizes the limitations of traditional dosimetry and provides corrective recommendations, as set out in the following Sections:

4.2.1 – “Concurrent measurement with octave or 1/3-octave bands should be used to assist with hearing protection selection and noise source identification and control.”

4.2.3 – “Audio recording capability may be used to assist with the identification and removal of spurious events through post-analysis if required.”

6.3.1 – “Users should be aware that dosimetry measurements can be elevated by the worker’s own voice, if communication with raised vocal effort is a common occurrence on the job.”

Notwithstanding the power of the new digital dosimeters and the accompanying software, the process of filtering the data can be time-consuming. We typically find that, if the raw dosimetry data contain many extraneous sounds (such as those from a worker who talks frequently in a loud voice), the time required to “clip out” all of the artifacts and calculate the results can approach a 1:1 ratio with the actual monitoring period. That is, it can require up to eight hours to parse the data from an 8-hour, shift-long recording. Therefore, our typical approach is to post-process only those dosimetry records that exceed the governing limit and have identifiable peaks in the time-history graph.

Powerful Additional Benefits of State-of-the-Art “High-Definition” Noise Dosimetry

There are broader benefits than simply excluding extraneous noises from the data set. In complex workplace environments that have many diverse noise sources – perhaps components of multi-stage manufacturing processes with interlocked operations – it can be an arduous task to isolate the sound from each item of equipment or even sub-components within each item, in order to know which ones are the culprits contributing most to the noise excesses. Doing so is not as simple as establishing which sources are the loudest (which can be challenging enough), because the noise exposure experienced by an individual worker depends also on:

  • the distance between the various sources and the worker, which may vary as the worker moves about (see the note after this list);
  • the acoustical shielding afforded by intervening obstructions;
  • whether the equipment cycles on and off, or varies in loudness over process cycles; and
  • the effects of room reverberation.
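To give a sense of the first factor alone: for a compact source outdoors, with no reflections or shielding, the sound level falls roughly 6 dB per doubling of distance (Lp(r2) ≈ Lp(r1) − 20·log10(r2/r1)). Indoors, reverberation and intervening obstructions alter this considerably, which is part of why each source’s contribution to a worker’s exposure is difficult to estimate without measurement.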

Other than hinting at the magnitude of a noise problem, traditional dosimetry offers little or no useful information about which equipment, activities or processes in the workplace are the prime contributors to noise excesses.

Here the real power of modern, “cutting-edge” dosimetry emerges. In many cases, the audio recording, together with the time-history graph, can be used to identify and collate the sounds of different activities, and then to calculate the time-weighted sound exposure levels of the various individual activities occurring throughout the worker’s shift.

For example, if the worker is in an area with cycling process stages – such as granulating, mixing, drying, dispensing – we can use the software to highlight all occurrences of each process stage and determine which stage has the greatest or least impact on the overall noise exposure during the worker’s shift.

Or, if a worker is mobile and moves through different areas of the workplace, some louder and some quieter, we can readily determine which areas contribute most to the shift-long sound exposure. Some loud areas might not be a priority for abatement if the worker spends only a short amount of time in them.

Easier Identification and Ranking of Potentially Unsafe Workplace Noise Sources

Or, if a worker performs different tasks throughout the shift – e.g., milling, welding, drilling, grinding, hammering – we can flag each occurrence of those activities and have the software calculate time-weighted sound exposures for each type of activity. This makes it easy to rank and prioritize the various activities for investigation of noise control measures.
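As a rough sketch of what that calculation involves (hypothetical column names again, not the vendors’ software): partition the labelled one-second records by activity, compute each activity’s contribution to the 8-hour exposure, and sort. Because the contributions are energy-based, they combine to give the overall Lex,8h.

    # Minimal sketch: rank activities by their contribution to Lex,8h.
    # Assumes columns "LAeq_1s" and "activity" (hypothetical names).
    import numpy as np
    import pandas as pd

    df = pd.read_csv("shift_time_history_labelled.csv")
    T0 = 8 * 3600   # 8-hour reference period, in seconds

    def partial_lex(levels):
        # Energy from this activity's seconds alone, still normalized
        # to the full 8-hour reference period (3 dB exchange rate).
        return 10 * np.log10(np.sum(10 ** (levels / 10)) / T0)

    contributions = (
        df.groupby("activity")["LAeq_1s"]
          .apply(partial_lex)
          .sort_values(ascending=False)
    )
    print(contributions)   # top entries are the noise-control priorities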

In this way, modern dosimetry provides a wealth of information about what the key noise sources are, and which ones contribute most to a worker’s exposure. This information provides a much more solid starting point for a noise mitigation feasibility study, and the eventual design of noise control measures, than does traditional, analog dosimetry.

Greater Accuracy in Assessing Workplace Noise Exposure

In short, “smarter” noise dosimetry – using digital technology and the improved analysis methods it makes possible – leads to greater accuracy in assessing workplace noise exposure and provides useful information for later noise control studies, thereby reducing costs and saving time.

In that respect, this type of dosimetry is quickly establishing itself as an indispensable next-generation tool for workplace noise health and safety.