Independent Subwoofer Testing Data

This is where we stop listening to marketing and start proving.

Here, you’ll find the most detailed and continuously growing collection of objective car audio subwoofer Klippel measurements available to the public. No marketing fluff. No cherry-picked specs. Just raw, usable data, collected by a third-party loudspeaker engineering firm that is fully trained on Klippel systems and focused on proper testing, not brand loyalty.

Every speaker driver in this section, including our own, was tested under controlled conditions using standardized methods. You’ll see all the key plots — BL, KMS, LE, distortion, and more — along with straightforward breakdowns that explain what each graph means in real-world performance terms, plus interpretations of the objective data for each subwoofer. Whether you’re comparing drivers for your installation, designing an enclosure for a subwoofer you already have, or just trying to understand what makes one car audio subwoofer better than another, this is your home base.

And no, this isn’t just about sharing numbers or trashing other companies’ products. It’s about educating, informing, and giving you, the consumer, the tools needed to make smarter decisions when it comes to planning or upgrading your installation and spending your hard-earned money. It’s about transparency. It’s about raising the standard for how customers are treated and calling out an industry that has relied on misinformation and marketing BS for way too long. And honestly, it’s about god damn time someone did something about it.

Before you dive in, PLEASE read through the sections below. They explain the test methods, how to read and interpret the graphs, and the limitations of this testing that you should be aware of. Context matters if you want to make fair comparisons and get the most out of this data.

Read before interpreting this data

All subwoofer testing data presented in this section was conducted by an independent third-party loudspeaker engineering firm that is fully trained and certified to operate Klippel testing equipment. The purpose of this project is to bring transparency, education, and accountability to an industry that has historically lacked it, and one that will likely continue fighting to keep it that way. Every effort was made to ensure this data is accurate, repeatable, and representative of real-world performance.

We want to be completely clear: We do not modify, manipulate, or bias any of the test results presented here. And frankly, if you’re here just to discredit what we’ve done, don’t waste your time. We did everything possible to limit variance and make the process as fair, consistent, and transparent as it can be. If you think the data is wrong, nut up or shut up. Do your own testing and prove it. If anyone can provide valid evidence that our results misrepresent a product, we will gladly remove the current listing, purchase a new sample, re-test it under the same proper conditions, and publish the updated results.

The lab we contracted for this testing performed the work objectively, with no involvement in selecting the drivers or deciding how the results would be used or published. They were not made aware of our intentions until after the initial round of testing over 25 drivers was already complete. They were simply handed drivers and test requirements and did their job. They have also requested not to be named publicly to avoid being dragged into potential drama from overzealous hobbyists or known temper-tantrum-throwing manufacturers who may not like having their products independently evaluated without their involvement or control. They just want to do their job and move on with their lives. Leave them out of this.

Every brand and model listed here was selected, purchased or borrowed from a reliable source, prepared, and submitted by us for testing. In most cases, the drivers were brand new and broken in by the lab prior to measurement. All drivers, new and used, were carefully screened to ensure they were in excellent, like-new condition. Several samples were rejected altogether if they didn’t meet our strict standards. The specific condition and history of each subwoofer is clearly listed on its individual page.

This is not a list of products we are trying to promote or attack. Inclusion simply means the driver was selected for testing for one reason or another — popularity, personal curiosity, or simply because it was available — and the data is being shared publicly for the benefit of the community. Just because a driver is featured here does not mean we sell it, endorse it, or believe it performs well. The same goes for drivers that perform poorly. If a manufacturer believes the sample tested was unrepresentative or that our results are inaccurate, they are welcome to conduct their own third-party testing and publish their findings. And as mentioned above, if anyone can provide valid evidence that our results misrepresent their product, we will gladly remove the current listing, purchase a new sample, re-test it under the same conditions, and publish the updated results.

Every company in this industry has access to this type of testing, especially when evaluating their own products. There is no excuse for relying on vague marketing claims while throwing tantrums in response to objective data. These companies had this information about their own products before we ever did and still willingly put those products on the market for customers to buy. And if they did not have this information and went in blind before offering them for sale, well, that is even worse in our opinion.

We are not responsible for how this data is used or interpreted by others, but we will do our best to explain everything clearly and objectively. At the same time, we are going to stay true to ourselves and true to the data. We encourage respectful, civil, and informed discussion around what is shown here. This project is not about creating conflict. It is about providing a shared foundation of truth and helping push this industry in the right direction by raising the standard of honesty and accountability.

Dispute/Retest Policy

Contact us directly if you wish to dispute the validity of a test. We will only respond to disputes that come directly from the company owner or the product line manager themselves.

A personal note from Nick

Hey everyone, Nick here. I wrote everything in this write-up as neutrally and legally safe as I possibly could, except for here. In this section, I will detail my personal thoughts, experiences, things I learned, things I had previously thought that turned out to be wrong, and point out a few things outside of this test that will also be eye-openers if you pay close enough attention.

Read this as me talking directly to you, not as some neutral lab report. The rest of the testing section is written to be clean, structured, and as “legal safe” as I can make it. This is where I get to actually say what I think, what I learned, and where I had to change my mind on some preconceived notions.

Why I did this in the first place

I did not spend around twenty grand on testing and hundreds of hours (so far) of my own time because I was bored.

I did it because I got tired of hearing opinions that did not line up with what I was seeing and hearing in real systems, and I got tired of guessing. I also did it as part of the R&D and market research to see what else was on the market and how things were performing when going into the design and prototyping of the ResoNix subwoofers. Initially, it was going to be a small test of subwoofers that I know well and like very much. Then, as with everything I do, it snowballed into a massive project that I obsessed over.

I wanted to know, for myself, how these subwoofers actually behave when you stop listening to marketing and your own pre-conceived biases. I wanted to see real data from a proper Klippel based setup, with a real engineering firm and serious hardware behind it, instead of trusting whatever a brand, a spec sheet, or a random internet comment said.

What I learned about the market

The short version is this: frankly, the market is flooded with low-performing or mediocre drivers, and only a small number are genuinely good in a technical sense. That is exactly why I started my own line of subs. I was not impressed with anything else on the market after many good options I had been using were discontinued.

This testing confirmed a few uncomfortable things.

1. The disconnect between hype and reality is huge.
That disconnect is not minor. This clear disconnect, combined with my personal observations of people in this hobby/career after being immersed in it for over a decade, tells me most people do not have the tools, experience, or reference points to know what “good” actually looks like, and most importantly SOUNDS like, under the hood.

This test revealed to me, truly and undisputedly, how bad some of these subwoofers are, yet there are hundreds of people who praise them online. You might initially think, “well, good reviews must mean they are good.” No. It confirms to me that no one has any real clue what they are talking about, nor the qualifications to speak on these things and offer fair and accurate comparisons or reviews. Obviously everyone is free to comment on their experience, but it is sad how inexperienced people really are, yet they speak with so much conviction.

2. Manufacturer specs are more questionable than I had previously thought.
Manufacturer-published T/S parameters are one thing. I did learn, though, that small signal T/S parameters are not as useful as I previously thought.

Manufacturer published Xmax is another story entirely. The amount of products that not only did not meet their published rating, but did not even come close, was a huge eye opener. Very sad for our industry to be like this.

The nice way to say it is that specs are “optimistic.” The honest way is that a lot of this looks like it is pulled out of thin air. Using most manufacturers’ listed specs to model enclosure and driver behavior now looks to me like a complete waste of time, about as useful as reading their marketing materials.

The ugly part is this: every one of these companies has access to the same kind of testing we used, and most likely already has the data from it. So either they have similar data and choose to ignore it when they publish specs and marketing, or they simply never bothered to test properly and shipped a product they do not actually understand. Neither option is good.

Small signal parameters by themselves turned out to be much less useful than I used to think, at least the ones listed on most manufacturers’ sites.

Where my thinking changed

Here are a few specific things I had to adjust in my own head along the way.

1. “Everything audible shows up as THD” is wrong.
I used to lean on the idea that if something is audible, it will show up to at least some degree in a THD measurement. This testing made it clear that this is not fully true. There are audible issues that can get “hidden” if you only stare at basic THD plots.

It is also important to keep in mind that the type of program material, the specific harmonics, the level, and how it all lines up with human hearing all matter a lot. Saying “X percent THD at Y frequency is always inaudible” is a very lazy oversimplification and is wrong in most real contexts.

2. LSI curves matter more than I expected.
I thought distortion graphs would be king as the quick glance tool. The engineering firm that I worked with on this educated me on the subject of LSI and what it translates to in real world audible results, and how they might not exactly show themselves on a THD graph. THD is still obviously very valuable, but LSI curves show you the underlying mechanisms and can reveal things about audible end results that are not explained on a distortion graph.

You start to see which non-linearities are waking up, where symmetry breaks, whether the motor is still in control, and how much of the behavior is clean versus struggling, even if a basic THD trace looks “fine” at first glance.

3. Inductance is not something you can hand wave just because it is a subwoofer.
The “it is a subwoofer, inductance does not matter because it does not play high anyway” line is nonsense. The absolute inductance and how Le changes with stroke both showed up as real problems in most drivers. You get added distortion and audible artifacts that tie back directly to ugly Le behavior.

Something I observed over my years in this hobby/career, and again in this testing: several subs with high or poorly controlled inductance are usually the ones that people later say “have a sound” that never really disappears, making the subwoofer stick out and unable to blend in. That is not a coincidence.

4. My original scoring was too kind to small, low Xmax drivers.
When I first set up the scores, the shallow and low Xmax subs effectively got a free pass in the high level distortion tests, just because they never moved very far. Of course distortion looks decent if the driver barely moves. That is not a win.

To fix that, I added an excursion vs distortion factor to the score card. Drivers that can swing real stroke and still keep distortion under control get rewarded. That helps level the playing field between “wimpy but clean at 3 mm” and “strong, still acceptably clean at serious excursion.”

And, I know. The obvious fix would have been to test each driver at more than two voltage levels. Unfortunately, that would have made the testing, and especially the data analysis, much more grueling and time-consuming, to the point where it probably would have overwhelmed me into giving up, and overwhelmed readers with too much data to sift through. It also would have added cost and time to the actual testing procedures. We had to axe the idea. My excursion/distortion score and its formula help us extrapolate what could have been.
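To make the idea concrete, here is a rough Python sketch of what an excursion-weighted distortion score *could* look like. To be clear: this is NOT my actual scorecard formula (that math is not published here); the function name, reference stroke, thresholds, and weighting below are all made up purely to illustrate why a driver that barely moves should not get full credit for low distortion.

```python
# Hypothetical sketch of an excursion-vs-distortion weighting, NOT the
# actual scorecard formula. Idea: a driver only earns credit for low
# distortion in proportion to how much usable stroke it was tested at.

def excursion_distortion_score(thd_pct: float, xmax_mm: float,
                               ref_xmax_mm: float = 20.0,
                               max_points: float = 100.0) -> float:
    """Reward low THD, scaled by tested excursion relative to a reference stroke."""
    cleanliness = max(0.0, 1.0 - thd_pct / 20.0)     # 20% THD scores zero
    stroke_factor = min(1.0, xmax_mm / ref_xmax_mm)  # cap credit at the reference
    return round(max_points * cleanliness * stroke_factor, 1)

# A "wimpy but clean at 3 mm" driver vs a strong one still clean at 18 mm:
print(excursion_distortion_score(thd_pct=2.0, xmax_mm=3.0))   # -> 13.5
print(excursion_distortion_score(thd_pct=6.0, xmax_mm=18.0))  # -> 63.0
```

Even though the 18 mm driver distorts more in absolute terms, it scores higher because it delivers that performance at real stroke, which is the whole point of the correction described above.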

Why the scorecard exists and how I want you to use it

Here is how I want you to use it.

First, I need to make it clear why I even came up with a “scorecard” in the first place. See, for as much information as I provide on how to analyze and assess the data, there are even more people who will not read it or care enough to educate themselves on how to interpret it all. The raw LSI and TRF data will essentially turn the reports into hieroglyphics for most people.

This scorecard is my way to give those people who are interested enough to check these pages out, but not enough to spend hours learning speaker engineering, something digestible to reference quickly and easily. It gives the test value to more people, and it helps make the time, money, and effort I spent on this project more worthwhile.

So:

  • Treat the scores as a structured comparison of these specific samples, in this specific test environment, relative to each other and only to other drivers in this test.
  • Do not treat a small score gap as some huge subjective difference you will always hear. Instead, see where the differences lie, and how they correlate to real world use. Every application and user’s goal is different, and the scorecards cannot account for this. They can only account for raw data of the performance of these subwoofers.
  • If you care zero about output and only care about low level accuracy, you can mentally subtract, or literally recalculate, the excursion versus distortion parts of the score for yourself.

The total score is out of 1250. That gives a lot of resolution, but it is still just one way of compressing a lot of behavior into a single number. There is no perfect way to combine accurate reproduction capabilities and output capability in one score sheet. Some people might have different needs out of subwoofers, which can skew what the scores even mean to them.

It is important that you also take the time to learn what all of the data means (from the home page of the testing) so you can read it and draw your own conclusions for your use case. Please educate yourselves and review the data.

What this test does and does not cover

We did not test everything. That would have made this whole thing impossible to finish.

We measured THD, but we opted to exclude intermodulation distortion, Doppler distortion, hysteresis distortion, SD distortion, etc. We had to exclude these because they would make this test and its subsequent write-ups way too overwhelming. They would add even more layers of complexity and would bury most readers, but more importantly, they would bury me in work to the point where I would not be able to complete these write-ups. It would just be too much.

We tested single, carefully screened samples of each model. Yes, there is some variance from sample to sample, but it should not be large. If someone wants to argue, especially a manufacturer trying to defend their product, that the sample we had was just a bad one, well, that is a red flag on its own. It tells me the defense is “our tolerances and variance are that wide across the assembly line,” which is obviously not good.

Also, the defense that a driver performed poorly due to it being defective is also not a good one, as we very strictly screened drivers before testing, and a defective driver would show much worse results in these tests. It would be obvious if something was not fit for testing.

The value here is in the consistency. Same lab, same Klippel platform, same basic approach, same thresholds, applied to every driver in the batch. This is why I say to only compare test results of this test to other results in this same test.

Accuracy vs preference

This is one of the most important distinctions in the whole project and the biggest thing I want to get out of my head and onto paper.

PREFERENCE IS SUBJECTIVE. If someone prefers a sub that obviously performs poorly or has high distortion and sticks out from the rest of the system, that is their call. Some people genuinely like that sound, or they built their reference around it. The problem is that most people think their preference is accurate. I see this behavior all the time and it is just not true.

ACCURACY IS NOT SUBJECTIVE. Accurate means how far off the audible end result is from the electrical source signal. That is measurable, and we have the data. That is what Klippel and well structured TRF tests are for.

SOUND QUALITY IS NOT SUBJECTIVE AND IS NOT MUTUALLY EXCLUSIVE WITH INDIVIDUAL PREFERENCES. Sound quality, by definition, refers to the accurate reproduction of the input signal. Your preferences might differ from that, AND THAT’S OKAY. Mine do to a degree as well. It is very important that we understand these differences and that we are not mixing things up.

So if someone says “this test is garbage, my favorite sub did badly but I think it sounds amazing,” the first questions you should ask are:

  • What other subs have they actually lived with and pushed in a comparable system?
  • What is their install quality like?
  • How good is their system tuning?
  • What is their reference for “good” in the first place?
  • Is this person biased or trying to protect something?

Most of the time, once you dig even a little, it becomes clear they simply do not have enough breadth of experience or enough control of their variables to make strong claims about what is objectively “better.” Years in the hobby does not automatically mean good understanding or good experience. And for better or worse, everyone on the internet has a relatively equally loud voice, so to speak.

About online opinions

This testing really drove home how much noise there is in online audio discussion. I have always been one to call out how much BS there is on forums, Facebook groups, etc. It’s funny and somewhat relieving to finally see this backed up, at least on this specific subject.

There are a lot of people speaking with extreme confidence who are not even close to qualified enough to be discussing technical performance at this level, and I am sure they will continue to spew the same nonsense even after seeing this article. Loudest one in the room sorta personality.

Anyways... some of the drivers that get praised heavily online are, on the bench, objectively “an abomination,” in the words of the test engineers, and the data shows it.

Everyone is free to share their personal experience. That is fine. The problem is when those experiences get treated as equal to structured, repeatable, data backed work. They are not even close to the same thing. Frankly, anyone who thinks otherwise is delusional, no matter how experienced they or you think they are.

The people who ran this testing are Klippel trained, working in a proper engineering facility, with something in the six figure range of gear dedicated to measurement. The difference between that and “some guy on Facebook/DIYMA/Reddit who likes his setup a lot” is enormous.

About my own subs and bias

Prototypes of my own subs are included in this testing, starting with pre-production units that will be further refined. Each prototype revision, and eventually the production models, will be added as they are completed and tested. I also have or had business relationships with several of the brands in this group, and friendships with people who work at some of these companies.

I am not pretending to be a blank slate. I am not. No one except for the engineering firm who I contracted to do this is. What I did do is build the scoring system in a way that is as immune as reasonably possible to my preferences using the references and articles on the Klippel website, among other resources.

  • Inductance and excursion versus distortion scores are derived from math, not gut feel.
  • For THD, I built Photoshop stacks where all distortion plots for the group could be visually compared, one on top of the other, and sorted from “cleanest” to “ugliest” in the bands I care about, namely, 20 Hz up to roughly 120 Hz.
  • I did the same for BL, CMS, and QTS related behavior. Best curves and worst curves get anchored first, then everything else is placed relative to those.

Is this perfect? No. QTS scoring for example is probably the weakest part at the moment. One driver was so bad in that regard that it skewed the scale like comparing planetary distances to the distance to another galaxy. I may revisit that scoring later as more data comes in.

Scores and interpretations may evolve as I become more knowledgeable on the subject, and test more drivers and as I refine how I weigh different factors.

Distortion and audibility

One myth that needs to die is the lazy and ridiculous “below X percent THD at Y Hz is inaudible” line. That kind of statement ignores almost everything that actually matters.

Yes, low frequency distortion is often less obvious than distortion at higher frequencies, but it is not magically inaudible just because a number on a spec sheet or some random guy behind a screen name on the internet said it is under some arbitrary threshold.

  • It ignores which harmonics are present.
  • It ignores what the program material looks like.
  • It ignores how different distortion patterns interact with real music and human hearing.

This project gave me plenty of examples where “on paper” THD numbers looked fine to someone used to simplified rules, while the rest of the data, and actual listening experiences that correlated with that data, clearly showed problems that would translate to the listening experience.

Again, that “inaudible below X percent” line is just flat out wrong, and it is usually based on a shallow understanding of distortion in general. This test project can be used to explain, in plain English, why it does not hold up when you dig into the details. If you think it does, you have not read all of the information we have provided.

Pushback, criticism, and what I will actually care about

I expect a few main types of pushback.

“You did the test wrong”
Klippel measurements are highly automated. There is not much to “manipulate” beyond basic protections and sanity checks, and I did not run the tests myself. I paid a third party engineering lab to do it. If someone wants to claim the methodology is wrong, they need to come with a coherent technical argument, not “I do not like the result.”

“You are biased and cooked the scores”
I already covered how the scoring works. Could I have made a mistake somewhere, especially late at night after a long day? Absolutely. If you find a clear, specific error or something that just does not make sense in the writeups or scores, tell me. I will fix it. I am also open to solid, well thought out suggestions on how to score certain aspects better.

What I am not interested in is generic “I feel like X should score higher” because someone’s favorite driver took a hit. Most people are not qualified to redesign this scoring system, and I am not going to pretend otherwise.

“Lots of distortion is mostly inaudible at low frequencies”
Again, that one is just flat out wrong. It is also usually backed by a shallow, cherry-picked understanding of a couple of studies rather than a real grasp of what is going on.

In short, I will listen to criticism that points to specific technical issues, real mistakes, or genuinely better ways to structure the scores. Most of the rest will get ignored, because 99 percent of people are simply not qualified to come in and try to reframe the entire project.

Where the ResoNix subs sit in this

My subs are included. Well, for now it is the pre-production prototypes, and there are adjustments being made that will make them even better. As mentioned earlier, as we make revisions and test them, they will be added, and once the production versions are done and ready, they will be added too. Anyways, my intention is not to turn this into a sales pitch. The data and the individual product pages can speak to that, and to whether our product is a good fit for you.

For context, more so you can understand my choices than because it is relevant to the testing itself: I traded away some specific design and size possibilities by going with open-tooled parts (the basket being the most important) so I could keep costs down while giving you the best performance possible. Custom baskets could have shaved maybe half an inch of depth or more, but would have added significant cost for me, and therefore for the end user. These are already pretty shallow, and our other line will be even shallower and lower distortion, but also lower Xmax.

Where this leaves you

If you have made it this far, you are probably the kind of person this project is for. You care enough to dig deeper than brand names and forum posts.

Use this data to make better decisions, to understand the tradeoffs between different subs, and to separate personal preference from actual accuracy. Understand that everything here is built around these samples, this test setup, and this methodology, and that it is still the closest thing you are going to get to a level playing field in this space right now by a longshot.

Thank you for making it this far. Love you <3
-Nick Apicella


Check out our subwoofers - coming soon!

In the meantime, be sure to check out our first model of subwoofers, the GUS Series, which will be available soon!

All specs and updates will get posted there, including revisions, timelines, cool bits of info, and data, etc. This page will grow and things will be added as time goes on.

All tested subwoofers

Explore all subwoofers that have been independently tested.
Small batches of tests will be released periodically.

Batch 1: Released 12/4/2025
Batch 2: Released 12/16/2025

Before you jump into the data, please take the time to read the sections below about how this test works and how to interpret the graphs. Context matters, and understanding the method will make the comparisons and scores actually useful. Taking the data out of context is a recipe for misunderstanding.

Please note, there might be some errors on photos shown, sections shown, etc. as this is a brand new section of the site. Please let us know if you spot anything that is out of place. Thank you!

Thank you.
-Nick

How the test works

Lab and equipment

All testing was performed by a Klippel-trained, third-party engineering lab certified to operate the equipment correctly and accurately. As mentioned above, we’re not naming the lab as they want nothing to do with any potential backlash. Their only role was to run the requested procedures and return the data. Equipment used includes a Klippel R&D system with the LSI (Large Signal Identification) and TRF (Transfer Function) modules, along with an Earthworks M30 measurement microphone.

The LSI module identifies large-signal parameters and nonlinear behavior like motor force (BL), suspension compliance (Cms/Kms), inductance (Le), and limits for safe excursion. The TRF module handles frequency response and harmonic distortion using a logarithmic sweep. 

This testing follows a BL-priority methodology. That means if another protection setting, like Cms or Le(x), would otherwise prevent the system from fully resolving the 70 percent BL(x) limit, those protections are temporarily lowered to allow BL to take priority—up to a point. For example, on some shallow subwoofers, Cms protection had to be reduced so the system could accurately capture BL(x) behavior through the desired excursion range. Once 70 percent BL(x) is reached, its threshold is locked in, and no other protection is allowed to drop below 65 percent. This keeps the test within a safe margin and avoids bottoming out fragile or short-stroke subs. Whatever the other parameters resolve to at that point is what’s reported, even if they’re more limited than BL. If a particular parameter (like Cms or Le) becomes the limiting factor earlier, that’s noted in the test results.
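If the BL-priority rule above sounds convoluted, the core constraint it enforces boils down to something like the check below. This is a toy Python sketch, not Klippel's actual protection logic; the function name and structure are my own illustration, and only the 70%/65% numbers come from the methodology described above.

```python
# Minimal sketch of the BL-priority rule described above. Only the 70%
# and 65% figures mirror the text; the function itself is illustrative,
# not Klippel's real protection implementation.

def within_bl_priority_limits(bl_pct: float, other_pcts: dict) -> bool:
    """BL(x) may be resolved down to 70% of its maximum; once that
    threshold is locked in, no other protection (Cms, Le, ...) is
    allowed to drop below 65%."""
    return bl_pct >= 70.0 and all(p >= 65.0 for p in other_pcts.values())

print(within_bl_priority_limits(72.0, {"Cms": 66.0, "Le(x)": 80.0}))  # True
print(within_bl_priority_limits(72.0, {"Cms": 60.0, "Le(x)": 80.0}))  # False
```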

All displacement-based limits in this test—70% BL(x), 50% KMS(x), and 17% Le(x)—are based on an industry-accepted distortion threshold of approximately 20%. While not officially standardized under any IEC rule, this has become the practical norm for subwoofer testing. The actual IEC 60268 standard defines a more conservative 10% distortion threshold, which corresponds to 82% BL(x), 75% KMS(x), and 10% Le(x), and is more appropriate for full-range speakers where low-level linearity and fidelity are the priority. Subwoofers, by their nature, are used at higher excursion levels and lower frequencies where higher distortion is tolerated and expected, so the 20% threshold is more realistic for defining usable stroke in these applications.
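To show what "usable stroke at a BL ratio" means in practice, here is a rough sketch of how an excursion limit can be read off a measured BL(x) curve. The curve below is completely made up (a Gaussian roll-off, which real motors only approximate), and Klippel's software resolves these limits with far more rigor; this just illustrates the 70% vs 82% threshold idea.

```python
# Illustrative only: reading a "BL drops to X% of max" excursion limit
# off a BL(x) curve. The curve data is fabricated for the example.
import numpy as np

def xmax_at_bl_ratio(x_mm: np.ndarray, bl: np.ndarray, ratio: float = 0.70) -> float:
    """Largest |excursion| at which BL stays at or above ratio * BL_max."""
    threshold = ratio * bl.max()
    ok = bl >= threshold
    return float(np.abs(x_mm[ok]).max())

# Fake, symmetric motor: BL falls off away from the rest position.
x = np.linspace(-20, 20, 401)            # mm
bl = 15.0 * np.exp(-(x / 14.0) ** 2)     # Tm, made-up roll-off shape

print(xmax_at_bl_ratio(x, bl, 0.70))     # usable one-sided stroke, 20% THD norm
print(xmax_at_bl_ratio(x, bl, 0.82))     # stricter IEC-style 10% THD threshold
```

Note how the same fictional driver "loses" a couple of millimeters of rated stroke when held to the stricter 82% BL(x) criterion, which is exactly why the threshold chosen matters when comparing published Xmax figures.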

One important note about the LSI testing process: it is almost entirely automated. Aside from entering basic driver parameters and setting protection limits for BL(x) and CMS(x) thresholds to avoid damage during the sweep, the system runs the test on its own. The only time human intervention occurs is if a mechanical or thermal issue is detected mid-test and the procedure needs to be paused. Otherwise, the operator has no real ability to influence the data.

This level of automation matters because it removes any meaningful opportunity for bias or tampering. The Klippel system uses real-time displacement, voltage, and current data to mathematically determine the large-signal parameters, without relying on subjective input or manual curve fitting. We’ve included all results exactly as they came from the system, with no alterations or hand-picked smoothing.

Fixtures and mic position

All drivers were tested in free air using a rigid mounting fixture. This was done according to Klippel’s own recommendations for LSI testing, which prioritizes isolating the driver’s nonlinear behavior without enclosure effects. For acoustic measurements, we used nearfield mic positioning—two inches from the dust cap, centered, and normal to the cone surface. This allows us to minimize room interaction and approximate a half-space free-field response without requiring a full anechoic chamber.

Environment

Testing was conducted indoors in a controlled office environment at approximately 70°F with stable humidity. Because the facility is a functioning work office, ambient conditions are kept consistent across all sessions, which minimizes variability in compliance behavior, thermal rise, and distortion measurements. The facility maintains consistent airflow, and all test equipment was operated within calibration specifications.

All drivers were mounted using the official Klippel Free Air Test Stand, which is the same fixture Klippel provides and uses for their own hardware testing. The stand was bolted to a 100-pound granite base, which was physically isolated from the floor to eliminate vibration during measurement. This setup provides a rigid, non-resonant mounting platform for accurate large signal analysis.

The Keyence LK-G5000 laser system with an LK-H152 sensor head, capable of accurately measuring ±40 mm of displacement, and an Earthworks M30 microphone were mounted on independent stands and mechanically decoupled from the test fixture to prevent cross-interference. The laser was used for all LSI measurements and then swapped for the microphone prior to running TRF tests. Sensor positioning was adjusted per driver to ensure proper focus, spacing, and alignment.

TRF distortion tests were conducted in the nearfield to eliminate room interaction, with the microphone placed at a distance of one-tenth the cone diameter plus two inches to maintain physical clearance during excursion. Three consecutive TRF tests were performed at each drive level to ensure repeatability and consistency in the results.
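As a quick arithmetic sketch of the mic-distance rule above (a hypothetical helper, with sizes in inches):

```python
def mic_distance_in(cone_diameter_in):
    """Nearfield mic distance per the rule above: one-tenth the cone
    diameter plus two inches of excursion clearance (inches in, inches out)."""
    return cone_diameter_in / 10.0 + 2.0

# A 12-inch subwoofer gets the mic at 1.2 + 2.0 = 3.2 inches from the cone.
print(mic_distance_in(12))
```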

Stimulus and sweep parameters

TRF testing was performed using a logarithmic sweep to capture both frequency response and harmonic distortion. All drivers were measured using identical sweep configurations and processing settings to ensure consistency across the entire data set.

Each driver was tested at two drive levels: a 1.00 volt RMS baseline, and a high-output sweep using the RMS voltage corresponding to the excursion limit determined by the LSI test. The high-level sweep was set just below the point where BL dropped to 70 percent of its maximum value. We chose these two levels, with the high voltage varying per driver, so you can see not only the standard low-level 1 V distortion profile but also how distortion tracks as input level rises, which is more useful given how subwoofers are actually used. Comparing the two sweeps and noting where they differ gives clues about how the cone and the rest of the moving assembly behave from low to high drive levels. Such differences usually point to design flaws, or occasionally assembly flaws, in the moving parts.

The microphone was positioned at a distance of one-tenth the cone diameter plus two inches to maintain clearance at peak excursion. All TRF tests were conducted in the nearfield to eliminate room interaction, and each sweep was repeated three times per drive level to confirm measurement consistency.

Published frequency response plots extend to 1 kHz, and harmonic distortion data is shown up to 500 Hz. All response and distortion graphs are scaled from 65 dB to 110 dB and smoothed to 1/6th octave for clarity.

The 1 V relative distortion graph is scaled from 0% to 10%, and the high-level graph is scaled from 0% to 50%. If a subwoofer exceeds roughly 15% THD on the 1 V graph, that graph is scaled from 0% to 50% instead. A few subwoofers needed this.
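The scaling rule can be restated as a tiny hypothetical helper (the function name and structure are ours, not part of any plotting software used here):

```python
def distortion_axis_max(thd_percent_peak, low_level=True):
    """Y-axis ceiling rule for the published relative-distortion graphs:
    1 V plots use 0-10%, high-level plots 0-50%, and a 1 V plot is bumped
    to 0-50% when its peak THD exceeds roughly 15%."""
    if not low_level:
        return 50
    return 50 if thd_percent_peak > 15 else 10

print(distortion_axis_max(4.0))                   # typical 1 V plot
print(distortion_axis_max(22.0))                  # unusually distorted 1 V plot
print(distortion_axis_max(30.0, low_level=False)) # high-level plot
```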

Drive levels and stop criteria

Each driver was tested at two input levels:

  1. 1.00 V nearfield sweep – used to establish a baseline frequency response and distortion profile at low power.
  2. High-power sweep near Xmax – this level is calculated from the Klippel LSI results. We set it just below the excursion where BL drops to roughly 70 percent of its maximum value. This is a conservative threshold chosen to represent the driver’s usable linear range without pushing it past its mechanical limits. Why we ran both a 1 V test and a near-Xmax test is discussed in the section below on interpreting the data.

Stop criteria were enforced to protect the drivers and maintain data integrity. Testing was halted if:

  • Excursion exceeded safe limits based on BL and Cms curves.
  • Thermal behavior indicated risk, such as overheating of the voice coil. This happened with one unit during testing so far and that run was stopped.

Microphone and calibration

We used an Earthworks M30 lab-grade measurement microphone. These are known for their accuracy and consistency and are well within calibration requirements for this type of work. Mic placement was always consistent—two inches from the dust cap, centered and perpendicular to the cone.

Sample intake and prep

Most drivers tested were brand new. A few were lightly used but verified to be in healthy condition. Every single driver went through strict intake procedures that included:

  • Full physical inspection of cone, surround, spider, and former
  • Impedance sweep and small-signal T/S parameter check
  • Functional check at low volume to confirm clean movement and sound

If a driver failed any of these checks, it was rejected and not included in the data set.

What we published for each driver

For every subwoofer tested, you’ll see the following data. Each graph includes drive voltage, test fixture, and date for context. Explanations about how these graphs are interpreted are in the section below.

BL Curve - Motor Strength & Symmetry

Shows how motor force (BL) changes as the cone moves in and out. A flat, symmetrical curve means strong, consistent motor force and lower distortion across the usable stroke.

Cms and Kms - Suspension Linearity & Balance

Cms (compliance) and Kms (its inverse, stiffness) show how the suspension behaves under load. Good drivers have smooth, symmetrical curves with no sudden drops, kinks, or asymmetry that would cause distortion.

Le(x) and Le(i) - Inductance vs. Position & Current

Tracks how inductance changes with cone position (Le(x)) and input current (Le(i)). Low, stable values mean the frequency response won’t shift as the driver moves or plays louder. Helps predict upper-end accuracy and dynamic stability.

Qts(x) - Total loss vs. Excursion

Shows how total system losses (mechanical and electrical) behave as the cone moves. Useful for understanding how damping changes with excursion.

1.00 V Nearfield Frequency Response With THD, H2, H3, etc.

Baseline acoustic response and distortion at low power. Gives a clean look at tonal balance and harmonic behavior without heavy excursion influence.

1.00 V Nearfield Distortion (THD, H2, H3, etc.) As A Percentage Of Output

Harmonic distortion breakdown as a percentage of output. Helps identify resonances or non-linearities in the motor or suspension even at low levels.

High-Power Nearfield Frequency Response With THD, H2, H3, etc.

Acoustic performance near the driver’s real-world upper limit (just under BL 70% dropoff). Shows how the sub holds up dynamically at higher volume.

High-Power Distortion (THD, H2, H3, etc.) As A Percentage Of Output

Objective view of how the sub handles high excursion just below its 70% BL limit. Useful for identifying where distortion rises and which frequencies are affected. Compare directly to the 1 volt distortion plot: nonlinear behavior appears as a different distortion shape or new in-band peaks at the higher drive level. A strong driver keeps a similar distortion signature from 1 volt to the higher voltage; a weak driver shows a very different distortion response when pushed. This view also reflects real-world use, since subwoofers are typically operated closer to their excursion limits rather than the fractions of a millimeter seen at 1 volt. Each high-level test was performed three consecutive times to confirm consistency.

Why these measurements matter

Each of these measurements was chosen because it reveals something specific and meaningful about how a subwoofer actually performs. Together, they build a complete picture that goes far beyond the usual marketing specs. The motor force and suspension curves show how linear and balanced the driver is through its stroke, which has a direct impact on distortion and output capability. Inductance behavior affects upper frequency response, dynamics, and consistency at different drive levels. Frequency response and distortion plots at both low power and near xmax give a clear view of how the subwoofer behaves when pushed, which is often where differences between good and mediocre designs become obvious.

The point is to give you data that translates into real performance, not just numbers for a spec sheet. These measurements let you objectively compare drivers, understand their strengths and weaknesses, and make informed choices for your system instead of relying on hype or guesswork.

Free-air for LSI

Klippel’s own documentation recommends free-air or standard baffle testing when capturing large signal behavior. This removes enclosure loading and isolates how the driver performs on its own.

Nearfield mic technique

Mic placement 2 inches from the cone and centered normal to the dust cap is a common technique to minimize room reflections while approximating a free-field response. This is consistent with Klippel’s nearfield guidance.

Log sweep TRF and consistent processing

We used the same sweep settings, windowing, and harmonic separation across all tests. This ensures direct comparability between drivers and maintains measurement integrity.

Understanding the limits of Klippel data

Klippel testing is one of the most advanced ways to analyze how a subwoofer behaves under real working conditions. It gives us consistent, high-resolution insight into motor force, suspension behavior, inductance, distortion, and more. But like any tool, it has limits that should be understood before drawing hard conclusions.

The LSI (Large Signal Identification) module works by fitting a mathematical model to how the driver behaves under load. While this model is highly accurate for most conventional designs, it still involves assumptions and simplifications. It simplifies certain physical behaviors and assumes ideal alignment between the mechanical and electrical centers. In rare cases, very unusual motor or suspension designs may not fit the model as cleanly, which can lead to discrepancies. Even Klippel’s own documentation acknowledges that this is an approximation, not a literal one-to-one measurement of every variable.

TRF measurements for frequency response and distortion are also subject to test-specific factors like input level, mic position, temperature, and alignment. Distortion measurements, particularly those using multi-tone and large-signal sweeps, are highly useful for understanding what a driver is doing under load. However, they also come with context. The level of distortion shown is influenced by test conditions, input level, the enclosure used, and environmental factors like temperature. Klippel’s tools provide several ways to interpret distortion, including harmonic and intermodulation distortion, and what is considered audible or problematic can vary depending on listener sensitivity and application. This means distortion numbers are best used for comparison between drivers tested under the same conditions, not as absolute values that translate directly to what you will hear in a car. These tests are powerful, but the results can vary if conditions are not properly controlled.

Additionally, unit-to-unit variation is a real-world factor. Manufacturing tolerances exist. What you see here is how one properly prepared and screened sample performed. That sample is representative, not definitive. And while free-air testing gives clean, comparable results, it does not account for how a subwoofer behaves once it is installed in an enclosure or vehicle.

The takeaway is simple. This data is powerful, and it tells you a lot, but it does not tell you everything. Like any measurement system, Klippel has boundaries. Knowing how to interpret those boundaries is what makes the data valuable.

How to read and interpret the data

This section is here to help you understand what the graphs mean and how they relate to real-world subwoofer behavior. Every driver in this test was measured using the same hardware, settings, and procedures, which means the results are directly comparable. While one set of distortion measurements is taken at 1 volt across all drivers, the high-output tests are done at different voltages. These are calculated based on each subwoofer’s xmax, pushing them just below the point where motor force drops to 70 percent. This lets you see how each driver performs near its real-world mechanical limits.

The graphs may look technical, but you don’t need to be an engineer to understand the core ideas. Each one gives insight into how the driver is built, how it performs, and where its limitations are. If you know what to look for, you can quickly spot the difference between a well-engineered subwoofer and one that was designed to hit a price point instead of a performance target. We’ll walk through each plot, explain what it shows, and highlight what actually matters.

Bl(x) force factor

The Bl(x) force factor plot shows how much drive force the motor produces as the cone moves forward and backward from rest. Bl is the product of magnetic field and coil length in the gap, so it is the motor’s leverage on the cone. You are looking for a curve that holds a high value over a wide range of motion, stays linear around center, and falls off gradually at the ends. Flat, wide, linear, and centered indicates that the motor maintains control through the intended stroke. A curve that is asymmetrical, narrow, or drops quickly away from center tells you the usable excursion is small, output will compress earlier, and distortion will rise sooner. You'll see a solid black line, which is the actual measurement, and a lighter gray line, which is just a mirrored version of the black line to help visualize any asymmetrical behavior. To summarize, on a good driver, the black curve will sit high, remain broad and fairly flat through the working stroke, and the mirrored gray will overlap closely across that range, with only gentle rolloff near the limits.

Bl(x) symmetry range

The Bl(x) Symmetry Range plot shows whether that motor strength is the same in both directions. It is derived from the Bl(x) curve and plotted around zero. A line that stays flat and near zero means the motor is balanced forward and backward. If the line trends consistently above or below zero over part of the stroke, the motor is biased to one side. The larger and more persistent the offset, the more you should expect even-order distortion, drift of the rest position under drive, and less predictable behavior as levels rise. Small offsets are common in real products, but large asymmetric regions are a red flag. To summarize, on a good driver, this line will hug zero across the intended stroke with only small, brief deviation near the extremes. Symmetric curvature primarily shows up as odd-order components (H3). Asymmetry raises even-order components (H2) and can introduce DC shift.
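As a rough illustration of what the mirrored gray line is getting at, here is a toy Python sketch using a made-up, deliberately offset BL curve. This is a much-simplified stand-in for Klippel's actual symmetry-range computation, not a reproduction of it:

```python
import numpy as np

# Made-up BL curve (in Tm) whose peak sits 1.5 mm forward of rest.
x = np.linspace(-20, 20, 401)                 # excursion, mm
bl = 18.0 * np.exp(-((x - 1.5) / 15.0) ** 2)

mirrored = bl[::-1]     # the "gray line": the black curve flipped, i.e. BL(-x)
asym = bl - mirrored    # nonzero wherever forward and backward motion differ

offset_mm = x[np.argmax(bl)]                  # where motor force peaks
print(f"peak offset: {offset_mm:+.1f} mm")
print(f"max forward/backward BL difference: {np.abs(asym).max():.2f} Tm")
```

On a well-centered motor the black and flipped curves overlap, the difference stays near zero, and the peak sits at or very near 0 mm.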

Why this matters most

In woofers, BL nonlinearity is usually the dominant source of distortion. The cone should see force that is proportional to input current. When BL changes with position, that proportionality breaks. The current is still a clean waveform, but the force applied to the cone is no longer a clean scaled copy. That mismatch generates harmonics, raises distortion, and produces output compression near the limits. A motor that keeps BL flat, wide, linear, and symmetrical over the intended stroke will hold its shape better at level, deliver more clean headroom before compression, and generally make everything the suspension and coil are doing work more cleanly. If you only check one pair of plots, start here.

Kms(x) stiffness of suspension

KMS is the stiffness of the suspension as the cone moves around its rest position. It is the inverse of compliance, so higher KMS means stiffer and lower KMS means softer. A well-behaved suspension looks much like a well-behaved BL curve: a broad, near-flat region around center that stays smooth and predictable, then a gradual, symmetric rise in stiffness as you approach the excursion limits. Flat, wide, linear, and centered around the intended working stroke is what you want to see. If stiffness rises too early, or the curve is narrow and steep, the suspension is limiting usable excursion, which pushes distortion up earlier and reduces clean output. If the shape is uneven, expect inconsistent control depending on which direction the cone is moving. As with the BL plots, you'll see a solid black line, which is the actual measurement, and a lighter gray line, which is a mirrored version of the black line to help visualize any asymmetry; the black measured line should closely match the mirrored gray over the usable range.

Kms(x) symmetry range

This plot shows whether suspension stiffness is balanced forward and backward. A line that hugs zero across the intended stroke tells you the suspension is centered and behaves the same in both directions. Ideally the line sits on zero across the intended stroke with minimal drift, any small deviation appears only near the extremes and stays low in magnitude. If the line trends above or below zero over part of the stroke, the suspension is biased to one side. The larger and more persistent the offset, the more the rest position can shift under drive, and the more distortion you should expect as levels rise. As with BL, small deviations may exist, but large, directional offsets are a red flag. Symmetric stiffness behavior tends to produce odd-order components, while asymmetry raises even-order content and promotes rest-position shift under drive.

Why this matters most

Suspension stiffness sets how easily the cone moves near center and how quickly it resists motion at higher excursion. If KMS stays linear, wide, and symmetric where you plan to use the driver, the cone motion will track the signal more faithfully, distortion will stay lower, and compression will come in later. If KMS is too steep or asymmetric, the suspension becomes a non linear spring inside the stroke you actually care about, which adds distortion, shifts the rest position under heavy drive, and cuts into clean headroom.

Le(x,i=0) - Electrical inductance vs. position

This plot shows how the voice coil’s inductance changes as the cone moves, with current set to zero. You are looking for a curve that stays low, flat, and centered through the intended stroke. When LE varies with position, the electrical load seen by the amplifier changes as the cone moves. That can shift the upper response of the driver, create level-dependent tonal changes, and add distortion as excursion increases. Designs that keep LE(x) stable, for example by using effective shorting paths and well controlled gap geometry, tend to maintain more consistent response, lower distortion, and cleaner behavior at higher levels. Ideally, the trace is low, flat, and centered across the intended excursion, with forward and backward movement matching. A gentle change only near the extreme ends is acceptable.

Le(x=0,i) - Electrical inductance vs. current

This plot shows how inductance changes with drive current when the cone is at rest. You want a line that stays as flat as possible across the current range used in the tests. If LE rises strongly with current, the motor’s electrical behavior is changing with level, which can contribute to modulation of frequency response and added distortion as you turn it up. Stable LE(i) indicates that the motor’s inductive part is not drifting under load, which supports more consistent dynamics and cleaner upper bass. Ideally, the trace is low and flat across the tested current range, without rising as current increases. If both up and down sweeps are shown, they should overlap closely.

Why this matters most

Inductance that is low, flat, and stable, both across position and current, helps the subwoofer behave predictably as level and excursion change. Variation in LE can shift the effective low pass behavior of the driver and can add distortion components tied to cone movement and drive level. The goal is simple. Keep Le(x,i=0) and Le(x=0,i) as flat, wide, linear, and symmetrical as possible over the range you plan to use the driver. This supports cleaner response, more consistent tuning, and fewer surprises when the system is pushed.

Qts(x) – Total loss vs. excursion

Qts(x) shows how the total damping of the driver changes as the cone moves. It combines electrical and mechanical losses into one curve, so it is a quick way to see whether control stays consistent across the working stroke. You are not chasing a specific number here. You want behavior that is stable and predictable. Ideal shape: a flat, centered trace across the intended stroke, forward and backward sides overlapping closely, with only a gentle, symmetric rise near the excursion limits. Avoid tilts across the working range, sudden steps, or directional offsets.

On a well performing driver, Qts(x) stays relatively flat and centered through the intended excursion, with forward and backward motion matching, and only gradual change near the extremes. A sharp rise with stroke means damping is falling as you push the cone, which can lead to looser control and more ringing. A sharp drop means damping is increasing, which can show up as early compression and reduced efficiency at level. If the curve is asymmetric, expect different behavior on inward vs outward motion, which can raise even-order components and promote rest-position shift at level.

Read Qts(x) together with BL(x) and KMS(x). If all three are flat, wide, linear, and symmetrical over the range you plan to use, the driver is likely to stay controlled and consistent when you turn it up.
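For reference, the "total" in total loss comes from electrical and mechanical damping combining like parallel resistances. A small sketch with hypothetical numbers shows why Qts(x) tends to rise where BL(x) sags: Qes scales with 1/BL², so weaker motor force at the stroke ends means less electrical damping.

```python
def qts(qes, qms):
    """Total Q: electrical and mechanical losses combine like parallel resistors."""
    return qes * qms / (qes + qms)

# Hypothetical numbers only. Qes is proportional to 1/BL^2, so if BL at the
# stroke ends falls to ~70% of its rest value, Qes roughly doubles there.
print(round(qts(0.45, 7.0), 3))   # near rest
print(round(qts(0.90, 7.0), 3))   # near the excursion limits
```

Because Qms is typically much larger than Qes in a subwoofer, the electrical side dominates, which is why a sagging BL curve shows up directly as a rising Qts(x) trace.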

What is distortion and why is it important?

Distortion is the most useful part of this whole project because it tells you the audible end result of poorly designed or poorly implemented mechanisms in the motor, suspension, cone, and former. It shows what the sub is doing wrong while it is actually working. Frequency response illustrates the subwoofer’s linear output behavior relative to the input signal, typically measured at low drive levels where non-linearities are minimal. THD shows the total amount of extra content (distortion) that is not in the original input signal, while H2 and H3 show the specific second and third harmonic pieces of that extra content. BL, KMS, and Le point to where the mechanisms of the speaker might misbehave, but the distortion plots are the audible proof that those mechanisms, along with moving-mass issues like cone flex, are creating sound you did not ask for. Two drivers can look similar on BL, KMS, and Le; distortion is what tells you how those and other elements interact, and whether the driver stays clean at both low levels and real use-case levels. That is why we publish both 1 volt and near-Xmax distortion graphs, and why we focus on the shape of THD, H2, and H3 inside 20 to 120 Hz for subwoofers.

Also, if anyone wants to hear what distortion from non-linear motors and suspensions sounds like, This Is A Great Article to read, and it even has audio examples.

Total harmonic distortion

THD, or Total Harmonic Distortion, is simply the sum of the harmonic byproducts relative to the main signal. Think of it as a single number that says how much extra tonal junk (distortion) is riding along, regardless of which harmonic it is. THD is useful as a quick sanity check, showing how much distortion the speaker produces in general and how much distortion is added as more power is applied to reach a higher output level. If THD climbs fast as you turn it up while the driver is still playing within its comfort zone, we have a not-so-perfect speaker design. But THD alone does not tell you why. That is why we also publish H2 and H3 separately. THD tells you how much distortion there is overall; H2 and H3 tell you which types of harmonic distortion make it up.
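As a concrete illustration, the usual way to compute THD from individual harmonic amplitudes is the root-sum-square of the harmonics relative to the fundamental. This small Python sketch shows only the arithmetic; the function name `thd_percent` and the example amplitudes are ours, not taken from any measurement on this page:

```python
import math

def thd_percent(fundamental, harmonics):
    """Total harmonic distortion as a percentage of the fundamental.

    fundamental: RMS amplitude of the fundamental tone
    harmonics:   RMS amplitudes of H2, H3, ... in the same units
    """
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Example: H2 at 1.5% and H3 at 0.5% of the fundamental
# combine to roughly 1.58% THD, not 2% -- the sum is quadratic.
print(round(thd_percent(1.0, [0.015, 0.005]), 3))
```

Note that because the harmonics add in quadrature, one dominant harmonic largely sets the THD number, which is another reason the separate H2 and H3 plots matter.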

What are H2 and H3?

When you play one frequency through a speaker, the ideal output is just that frequency. Unfortunately, nothing in the world is perfect and speakers add exact multiples of that frequency, which is harmonic distortion. H2 is the second harmonic, one octave above (frequency x 2). H3 is the third harmonic, an octave plus a fifth above (frequency x 3). If you play 100 Hz, H2 is 200 Hz and H3 is 300 Hz. We show H2 and H3 relative to the main frequency response and THD so you can see how much of each type the driver adds across its passband.
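The frequency math above is simple enough to express directly; the helper `harmonic_freqs` is hypothetical, purely for illustration:

```python
def harmonic_freqs(f0, orders=(2, 3)):
    """Frequencies (Hz) of the requested harmonics of a fundamental f0."""
    return [n * f0 for n in orders]

# A 100 Hz tone: H2 lands at 200 Hz, H3 at 300 Hz
print(harmonic_freqs(100))
```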

How non-linearities create harmonics

If the moving parts and motor are centered and behave the same forward and backward, the dominant distortion tends to be odd order, most obviously H3. That is what symmetric curvature in BL or KMS produces. If the system is biased to one side, for example when the suspension is tighter in one direction or the rest position shifts under drive, even-order distortion rises, most obviously H2. That shows up as an offset in the BL or KMS symmetry line. Inductance (Le) that changes with position or current can add both kinds and can also create intermodulation distortion products. That is why Le(x) and Le(i) matter when you interpret the acoustic plots. To summarize, we want BL, KMS, and Le curves that are symmetric with as little curvature as possible, to achieve the lowest amount of distortion possible.
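A quick numerical sketch of this idea: a cubic term stands in for symmetric curvature and a squared term for asymmetry, and an FFT shows which harmonic each one generates. The coefficients are arbitrary, chosen only to make the harmonics visible, and this is a toy model, not a driver simulation:

```python
import numpy as np

fs, f0, n = 48000, 100, 48000          # 1 second of a 100 Hz tone
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

def harmonic_level(y, k):
    """Magnitude of the k-th harmonic, normalized to the fundamental.
    With n = fs, the FFT bin spacing is exactly 1 Hz."""
    spec = np.abs(np.fft.rfft(y))
    return spec[k * f0] / spec[f0]

sym  = x + 0.05 * x**3   # symmetric (odd) non-linearity, e.g. centered BL curvature
asym = x + 0.05 * x**2   # asymmetric (even) non-linearity, e.g. suspension bias

print(harmonic_level(sym, 3) > harmonic_level(sym, 2))    # symmetric case: H3 dominates
print(harmonic_level(asym, 2) > harmonic_level(asym, 3))  # asymmetric case: H2 dominates
```

The trig identities behind it are sin³θ = (3 sin θ − sin 3θ)/4 (only a third harmonic appears) and sin²θ = (1 − cos 2θ)/2 (only a second harmonic plus a DC shift appears), which is exactly the odd-vs-even split described above.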

How to read the two distortion graphs - 1v vs. high level

Start at 1 volt. That is the baseline signature with very small motion. Then look at the higher level, near-xmax sweep. A healthy driver keeps the same basic distortion character and usually just rises in a smooth, broadband way. If the high level plot develops new peaks or dips, or obvious shape changes that were not present at 1 volt, one or more specific non-linearities are waking up, often tied to something you can later confirm in BL, KMS, or Le, and if not there, most likely in the moving parts. If H3 grows broadly while H2 stays low, symmetric non-linearity inside the working stroke is the likely cause. If H2 shows up mainly at high level, asymmetry that becomes active only at large excursion is the likely cause. For subwoofers, judge inside 20 to 120 Hz first, then use behavior above that as supporting context.

Interpreting distortion at 1 volt vs. high level

The TRF distortion plots are shown at two drive levels: 1 volt RMS and a higher level set just below the BL 70 percent point determined by LSI. Both are nearfield and use the same settings. Each level is run three consecutive times to confirm consistency.

What 1 volt tells you

  • Baseline behavior: Shows the driver’s harmonic structure when excursion is small.
  • Noise floor check: If the 1 volt plot is already messy with narrow peaks in band, expect those issues to remain or grow with level.
  • Matching across drivers: Because every driver is at 1 volt, you can compare low-level linearity directly.

What the high-level sweep adds

  • Near real use: Subwoofers are typically used closer to their excursion limits than the tiny motion at 1 volt.
  • Stress behavior: Reveals how nonlinearity grows as you approach the BL 70 percent point. Look for whether distortion simply rises in a smooth, broadband way or if new narrow peaks appear.
  • Consistency: A good driver keeps a similar distortion signature at high level. A weak driver’s distortion “frequency response” looks different when pushed.

What good looks like

  • 1 volt and high-level plots share the same overall shape.
  • High-level plot shows a controlled, broadband rise with no new narrow in-band peaks.
  • H2 and H3 remain low relative to output across most of 20 to 120 Hz.
  • Any increases line up with the edges of the intended excursion window seen in BL(x) and KMS(x).

What bad looks like

  • The high-level plot has a different distortion shape than the 1 volt plot.
  • New, narrow in-band peaks appear at high level.
  • Even-order components jump in the working band alongside BL or KMS symmetry issues.
  • Distortion grows early in band where BL and KMS already show steep curvature inside the intended stroke.

Read with the published scales in mind

Response is shown to 1 kHz and THD to 500 Hz, scaled 65 to 110 dB and smoothed to 1/6 octave for readability. Use the same scaling when visually comparing drivers to keep the interpretation consistent.

How to compare the two

Shape, not just level
Compare the curve shapes of THD, H2, and H3 between 1 volt and high level. Healthy behavior is a broadly similar shape with a predictable rise in magnitude. Red flag behavior is a different shape with new in-band peaks at the higher level.

Even vs odd harmonics
A rise in H2 that was not present at 1 volt often points to asymmetry showing up under excursion, which lines up with offsets in BL symmetry or KMS symmetry. A strong rise in H3 that was not present at 1 volt often points to loss of linearity around center, which relates to BL or KMS curvature inside the working stroke.

Frequency regions, not single points
Identify where changes occur.

  • New or growing narrow peaks usually indicate specific mechanisms becoming active at that frequency region.
  • A smooth broadband rise is expected and is less concerning if the shape remains consistent with the 1 volt plot.

What it means to your ears

H2 and the other even orders tend to read as added thickness because H2 is an octave above the note you already played. In small amounts this can be less annoying in the sub band, but as it rises it blurs pitch and makes bass lines seem to smear together. H3 and the other odd orders are more obvious, more fatiguing, and considered less ideal to the ear. They add tones that do not sit as neatly with the original note, which you hear as roughness, a gritty or buzzy edge, and a loss of clean impact. Sometimes people even mistake it for the sound of a damaged speaker. When H3 ramps up with level, things start to sound mechanical and far less clean. When H2 ramps up, at least with bass, things get soft and boomy, and the tails of bass notes sound longer than they should.

How to use distortion with the LSI plots

Start with the distortion graphs, because they are what you hear and they tell the bigger picture as we hear it. If the distortion looks clean and stays similarly shaped from 1 volt to the higher level, near-xmax sweep, the driver is behaving well in the frequency range that matters. If there is a lot of distortion in the higher level, near-xmax measurement, or it is not similar in shape to the 1 volt distortion plot, then something is up. From here, we can use the BL, KMS, and Le graphs to verify the why. Strong, centered BL with a symmetry line near zero explains low odd-order distortion. Broad, symmetric KMS explains why even-order content stays low and why the rest position is stable at level. Flat, low Le(x) and Le(i) explain why upper bass does not change character as you turn it up and why intermodulation distortion (a whole other type of distortion that is more complex than harmonic distortion) stays controlled. In short, distortion tells you the result, and the LSI graphs explain the mechanisms that may be contributing to it.

How to judge results without getting lost in single numbers

A lone THD number or a single H2 or H3 percentage at one frequency does not tell the full story. What matters is the overall amount and shape of THD, H2, and H3 across 20 to 120 Hz (for subwoofers), and how that shape and amount change from 1 volt to the high level, near-xmax sweep. A good driver keeps a similar shape and shows at most a minimal, controlled rise. A driver with problems shows new peaks or dips in distortion, a very different shape, or a disproportionate amount of distortion (relative to 1 volt) at high level. Use the LSI plots after the fact to confirm the mechanisms that make up a speaker are actually well designed and implemented, but make your first decision from what you see in the distortion plots, since that is what maps directly to what you will hear.

Summary

Distortion is the direct, audible footprint of a driver’s non-linearity. H3 lines up with symmetric curvature in the motor or suspension and sounds like roughness and grit when pushed. H2 lines up with asymmetry and sounds like thick, boomy smear. And THD is the combined sum. Read the distortion plots first to judge audible performance, then consult BL, KMS, and Le to understand why a driver is behaving the way it is and to verify that the engineering supports what you hear.

Performance score, what it is and why it exists

This 1250-point score is a fast and honest but subjective summary of what actually maps to how a sub performs, based on the objective measurements we have taken. It puts the acoustic result first, then uses the LSI plots to back up the story. There is no hidden math here; it is a subjective attempt at a consistent, quick interpretation of objective plots with fixed scales, applied the same way to every driver. I want to add a disclaimer that there is no formula behind most of these scores, but I will do my best to keep them honest and even. The Le(x), Le(i), and high-xmax distortion scores are determined by a formula.

Quick note on intent: This is a reader facing summary, not a lab paper. Acoustic behavior drives the score because that is what you hear. LSI plots explain why. Where the suspension ruins an otherwise strong BL window at level, we will say it plainly.

Another disclaimer: Every driver is penalized equally regardless of size or depth or power handling when it comes to distortion performance and curve linearity and symmetry.

Please note: Each subwoofer in this test is scored on the same criteria. They are not graded based on where their strengths and weaknesses may lie. That would be an ever-shifting goalpost and would make things very complicated. Using this data to decide whether any of these drivers works best for you is purely up to you.

High level broadband distortion quality

250 points

What it means: overall THD, H2, and H3 performance at the high level sweep in 20 to ~120 Hz.

Why it is rated: this is the closest proxy to how the driver behaves and sounds when it is actually used.

How we rate: start from full points, subtract for clear narrow peaks in band, subtract for a strained broadband rise instead of a clean lift, subtract for non-linear distortion response, subtract a bit more if H3 dominates most of the band.

Distortion shape stability

90 points

What it means: does the distortion signature keep a relatively similar shape from 1 volt to its near-xmax high level measurement?

Why it is rated: stability means the driver keeps its character when you turn it up, and it can indicate whether a driver has issues that cause varying distortion outside of the motor, suspension, and inductance. This can hint at cone flex and breakup, power compression, non-linear Qts(x), etc. Essentially, we are penalizing subwoofers whose distortion levels and profiles do not scale well as more power is applied.

How we rate: compare 1 volt and high level curves, subtract if new in band peaks appear at level or if the overall curve shape changes across a region, or if broadband distortion is smooth but rises too much relative to what is expected across the board. A simple broadband lift costs little, and no to minimal rise is perfect. Going from higher 1v distortion to lower, more stable high level distortion is less of a penalty.

High level excursion weighted distortion

300 points

What it means: A combined look at how clean the driver is in the high level distortion test and how much excursion there was at 20 Hz during that same test. This is distortion performance judged in the context of real excursion and low distortion output capability, not in isolation and not compared to some abstract xmax spec. It tells you how impressive the distortion result is for the actual stroke that specific driver was doing in its specific test.

Why it is rated: The goal is to paint a fuller picture of real performance that represents the drivers that are capable of higher excursion and more output. A driver that barely moves can look fantastic on a distortion plot, but that does not mean it has meaningful low distortion AND the output level capability that we are typically after in higher-end SQ car audio systems. On the other hand, a driver that is moving very far and still holding distortion together deserves extra credit, even if the raw distortion score is not “pretty” on its own. This section lets drivers that can actually move air and stay reasonably clean float higher than wimpy subwoofers that only look good because they were barely moving in comparison.

Now, I know what you are going to say: “Nick, why didn’t you just test all drivers at the same voltage?” The short answer is that I would have liked to do three power-level measurements for each driver: one at 1 volt, one at a common mid level, and one at the higher limit that pushes the driver just below the 70 percent BL xmax limit. That would have meant three separate TRF sweeps per driver at shared drive levels across the whole group, which would have added a lot of labor on the measurement side and an even larger load on the data analysis and explanation side, with a lot of redundant information. We had to draw a line somewhere, and that plan got cut. Instead, we tie the high level distortion result to the simulated 20 Hz excursion at that exact drive level so this score approximates how a driver’s distortion behavior scales with level. It is not perfect, nothing is, but it is a fair way to represent drivers that are capable of high excursion, give them a better chance to show their potential in real world use, and keep scores from unfairly favoring wimpy subs over the big boy subs that were tested at much higher output.

How we rate: We start with the high level broadband distortion score out of 250 and normalize it, then look at the simulated one way excursion at 20 Hz from that same high level test and normalize that against a possible 32 mm reference. Roughly 40 percent of this 300 point score comes from the normalized distortion result itself, and 60 percent comes from how much excursion was actually used. Drivers with strong distortion performance and high 20 Hz excursion land near the top, drivers with good distortion but low stroke or drivers with not so great distortion but high stroke land around the middle, and drivers that are both high distortion and low excursion score near the bottom.
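Under the split described above (roughly 40 percent from the normalized distortion result, 60 percent from the 20 Hz excursion normalized against a 32 mm reference), the arithmetic looks roughly like this Python sketch. The function name, the clamping to 1.0, and the exact rounding are our assumptions; the published scores may normalize slightly differently:

```python
def excursion_weighted_score(distortion_points, excursion_mm,
                             max_distortion=250.0, ref_excursion_mm=32.0):
    """Sketch of the 300-point excursion-weighted distortion score:
    ~40% from the normalized high-level distortion result (out of 250),
    ~60% from simulated one-way 20 Hz excursion vs a 32 mm reference."""
    d = min(distortion_points / max_distortion, 1.0)   # normalized distortion result
    e = min(excursion_mm / ref_excursion_mm, 1.0)      # normalized stroke actually used
    return 300.0 * (0.4 * d + 0.6 * e)

# A clean, long-stroke driver lands near the top (~272.7 of 300 here);
# a clean but short-stroke driver lands around the middle.
print(round(excursion_weighted_score(240, 28), 1))
print(round(excursion_weighted_score(240, 12), 1))
```

The 60/40 weighting is what lets a driver doing serious stroke with merely decent distortion outscore a barely-moving driver with a prettier plot, which is the stated intent of this category.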

Low level baseline

40 points

What it means: the 1 volt sanity check.

Why it is rated: if it is messy at tiny motion, that is a bad sign.

How we rate: start from full points, subtract for any narrow in band peaks or a generally messy or high-distortion baseline.

BL window width, flatness & linearity

130 points

What it means: usable motor force and linearity across stroke.

Why it is rated: a wide, flat, centered BL window supports clean headroom and lower odd order content.

How we rate: subtract for early falloff inside the working stroke, mid stroke sag, or a narrow, tilted window. Linear, smooth, and symmetrical is key here.

BL symmetry

70 points

What it means: a motor that pulls the same forward and backward.

Why it is rated: symmetry keeps even order content low and helps the rest position stay put at level.

How we rate: subtract for sustained offsets of the symmetry line across the stroke, small blips near the limits cost little, a directional bias costs more.

CMS/KMS width, flatness, and linearity

90 points

What it means: how quickly the suspension tightens as you move away from center.

Why it is rated: a minimal rise near the ends supports clean stroke. A steep wall inside the range, or compliance hitting 50 percent well before BL hits 70 percent, will push distortion up and kill output early. This is poor speaker design, at least when the manufacturer-stated xmax is well above the 50 percent CMS mark, and will be penalized accordingly.

How we rate: subtract for steepening and non-linearity well inside the stroke.

CMS/KMS symmetry

50 points

What it means: a suspension that behaves the same inward and outward.

Why it is rated: asymmetry raises even order content and can shift the rest position at level.

How we rate: subtract for sustained offsets across the stroke, minor offsets near limits cost little, real bias costs more.

Le(x) flatness

90 points

What it means: inductance versus position.

Why it is rated: low, flat Le(x) helps keep upper bass and distortion behavior consistent as excursion increases.

How we rate: subtract for a clear tilt across the stroke or big end region swings.

Le(i) stability

40 points

What it means: inductance versus current.

Why it is rated: stable Le(i) means tone and distortion performance does not change as you turn it up.

How we rate: subtract if Le grows noticeably with current across the tested range.

Qts(x) stability

100 points

What it means: damping that stays put through the stroke.

Why it is rated: a flat, centered trace supports predictable control at level.

How we rate: subtract for steady tilt, non-linearity, non-symmetry, or steps inside the working range.

Total Score: /1250

The combined total acoustic and mechanical performance, based on our subjective but honest interpretation of the objective data.

Marketing materials accuracy to our measurements

100 points

What it means: how closely the brand’s published specs and marketing materials’ claims match what we measured under stated conditions.

Why it is rated: accurate specs let you design correctly and set honest expectations; inflated claims hide real limits that affect distortion, headroom, enclosure size, and overall performance.

How we rate: start at full points, deduct for verifiable mismatches with extra weight on xmax and power handling. Minor variance is a small hit, major or misleading gaps are a big hit. This is our subjective read of objective data.

Max output at 20Hz in 0.707 QTC sealed enclosure (70% BL Xmax) (Anechoic)

What it means: The highest clean 20 Hz SPL a single driver can produce in a Qtc 0.707 sealed box before hitting its BL 70 percent limit or a thermal/compression limit, simulated anechoic at 1 meter using WinISD.

Why it is rated: Gives a clear, apples to apples clean output capability reference for the lowest frequency a proper subwoofer should reproduce in a common sealed alignment, without room gain or EQ.

How we rate: Set sealed volume for Qtc 0.707 using large-signal cold T/S parameters; simulate 1 m anechoic in WinISD; increase drive until BL 70 percent or a thermal/compression limit is reached, which is rare but possible; report the highest stable 20 Hz SPL, the limiting factor, required power, and one-way excursion at 20 Hz.
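The Qtc 0.707 sealed volume itself comes from standard closed-box theory rather than anything unique to this test. A sketch with hypothetical T/S parameters (the example Vas, Qts, and Fs values are invented for illustration, not measured from any driver here):

```python
def sealed_box_volume(vas_l, qts, qtc=0.707):
    """Closed-box volume (liters) giving the target system Qtc, from
    standard sealed-box theory: Vb = Vas / ((Qtc/Qts)^2 - 1)."""
    alpha = (qtc / qts) ** 2 - 1
    return vas_l / alpha

def sealed_box_fc(fs, qts, qtc=0.707):
    """Resulting system resonance: Fc = Fs * Qtc / Qts."""
    return fs * qtc / qts

# Hypothetical 12" driver: Vas = 60 L, Qts = 0.45, Fs = 28 Hz
print(round(sealed_box_volume(60, 0.45), 1))  # box volume in liters (~40.9)
print(round(sealed_box_fc(28, 0.45), 1))      # system resonance in Hz (~44.0)
```

Note this only sets the enclosure; the output limit itself still comes from driving the WinISD model until the BL 70 percent excursion or a thermal/compression limit is hit, as described above.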

Max output @ 20Hz in manufacturer-recommended sealed enclosure

What it means: The highest clean 20 Hz SPL a single driver can produce in the manufacturer’s recommended sealed volume before hitting BL 70 percent or the stated RMS limit, simulated anechoic at 1 meter using WinISD.

Why it is rated: Sanity-checks marketing and shows the practical difference versus the neutral 0.707 sealed case; ideally the two are very close in output, if not equal.

How we rate: Set sealed volume to the manufacturer’s recommendation using large-signal cold T/S parameters; simulate 1 m anechoic in WinISD; increase drive until BL 70 percent, the stated RMS limit, or a thermal/compression limit is reached; report the highest stable 20 Hz SPL, the limiting factor, required power, and one-way excursion at 20 Hz. We allow the stated RMS limit as a stopping point for this test only because many manufacturers severely overrate how small an enclosure their subwoofers can get away with, and very small sealed enclosures severely limit low-end xmax and can require enormous amounts of power to hit the other thresholds. In those situations the voice coil would burn up, which is not realistic.

FAQ

General principles and purpose of the tests

We are only comparing speakers. That’s it; it keeps things apples to apples rather than judging any one installation. Free air isolates the driver. Use these plots to judge the quality of the driver’s various mechanisms and its distortion behavior. Enclosures and cars change response shape and level, not the underlying nonlinear trends you see here.

The lab asked not to be named, as they do not want to be dragged into any possible backlash from this test and the results. I’m sure anyone rational understands this and will accept it. Trust comes from transparent methods, fixed settings, repeat runs, and showing the same scales for every driver so anyone with a Klippel can replicate the results.

We are measuring driver performance. The performance of the driver itself does not change because of an enclosure or a car. Those are separate factors, so we are not adding enclosure-loaded curves.

Use T/S modeling for the alignment choice, then use our plots to vet overall quality.

Measurement methodology

Mic distance is D/10 plus 2 inches, centered on the dust cap, mic normal to the cone. This keeps geometry consistent across sizes while preserving nearfield benefits.
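The mic-distance rule above is simple enough to express directly; the function name is ours, and the inputs are the nominal driver diameters:

```python
def mic_distance_in(cone_diameter_in):
    """Nearfield mic distance used here: driver diameter / 10, plus 2 inches."""
    return cone_diameter_in / 10 + 2

# A 12" driver is measured at about 3.2 in, a 15" at about 3.5 in,
# so geometry scales with driver size while staying nearfield.
print(mic_distance_in(12), mic_distance_in(15))
```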

Upper response helps spot cone breakup and inductive behavior. Harmonic distortion above 500 Hz is way less relevant for subwoofer use.

1.00 V is a standardized low-excursion check, apples to apples across all drivers. High level is set just below the excursion where BL falls to 70 percent of its rest value, the industry-standard limit for maximum clean stroke, so you see real use behavior without crossing obvious limits.

For subs, BL is usually the dominant limiter. The 70 percent BL criterion is widely used, conservative, and comparable across designs.

Break-in is high-volume, band-limited pink noise that pushes the driver near its intended xmax, applied until the driver is broken in. We then measure after the driver’s voice coil is back to room temperature.

Yes, they can. Compliance and coil resistance shift with environment. We test indoors around 70 °F with stable humidity and calibrated gear to keep sessions consistent.

1/6-octave smoothing cleans things up without hiding trends. Axes and scales are fixed across drivers so visual comparisons are honest and easy to decipher.

Reliability and repeatability

We run multiple sweeps per level and accept a run only when the overlays are consistent by inspection. Any decent, healthy driver repeats well without issue.

There is variation from manufacturing tolerances. Treat one sample as representative, not definitive. We test one sample per model unless a specific sample is credibly challenged. That said, any speaker produced today with sample-to-sample swings wild enough to trigger a challenge here is a red flag on its own.

We reject it and do not go forward with testing it.

Reading and interpreting the plots

Why would anyone compare an 8 to a 15? They serve different use cases. You can compare mechanism quality and distortion shapes, but do not pretend headroom is equivalent.

If BL symmetry is clean and KMS symmetry drifts, expect even-order rise and earlier compression from suspension bias. If KMS is clean and BL narrows or tilts, expect odd-order growth and motor-driven compression.

It should not be a priority choice. Look at BL, KMS, and Le together; all are valid and all inform the interpretation. That said, BL is more important than the others if we had to focus on just one.

Flat and centered across the working stroke, forward and backward overlapping, with only a gentle rise near the ends. A sustained tilt across the range means damping is changing with excursion, which hurts consistency.

Distortion analysis and causes

H3 is usually heard before H2 in the 20 to 120 Hz range, especially around 40 to 80 Hz. On clean tones near 50 Hz at about 100 dB SPL, H3 tends to show up around 0.5 to 1 percent, while H2 is closer to 1.5 to 2 percent, and at 20 to 30 Hz you can tolerate a bit more. As a practical target, keep H3 under ~1 percent from 40 to 80 Hz and under ~1.5 percent at 20 to 30 Hz, and keep H2 under ~2 percent in the mid bass and ~4 to 5 percent at 20 to 30 Hz. Music masks distortion, so these tone based limits are conservative.

Asymmetry that shows up with excursion, for example suspension bias, rest-position shift, or uneven surround or spider behavior. Check the BL and KMS symmetry lines.

Look for BL curvature or narrowing around center, or a KMS window that steepens inside the working stroke. Symmetric curvature drives odd-order content.

Single numbers hide where the distortion lives and whether it is level dependent. Shape and stability tell you if the driver stays clean when pushed inside the entire passband.

No. Frankly, it adds too much process overhead to capture, document, and explain. What is here is more than enough. I am already overwhelmed as it is lol.

Performance scoring system

No single “Clean Performance Score” yet. We plan star ratings by category, low-level distortion, high-level distortion, and the change between them, with more weight on high-level and on the difference. BL, KMS, Le, and Qts(x) will have their own category ratings.

Performance Score, total 1250

  • High-level broadband quality (250 points)
  • Distortion shape stability, 1 volt to high level (90 points)
  • High level excursion weighted distortion (300 points)
  • Low-level baseline (40 points)
  • BL window width and flatness (130 points)
  • BL symmetry (70 points)
  • KMS window steepness (90 points)
  • KMS symmetry (50 points)
  • Le(x) flatness (90 points)
  • Le(i) stability (40 points)
  • Qts(x) stability (100 points)
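As a sanity check, the category weights listed above do sum to the stated 1250-point total; the dictionary below simply restates the list:

```python
score_weights = {
    "High-level broadband quality": 250,
    "Distortion shape stability": 90,
    "High-level excursion-weighted distortion": 300,
    "Low-level baseline": 40,
    "BL window width and flatness": 130,
    "BL symmetry": 70,
    "KMS window steepness": 90,
    "KMS symmetry": 50,
    "Le(x) flatness": 90,
    "Le(i) stability": 40,
    "Qts(x) stability": 100,
}
print(sum(score_weights.values()))  # 1250
```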

Data access and retesting policy

No. The data is on the page.

Valid evidence includes independent Klippel plots with comparable methods. Submit model, method details, plots, and preferably the Klippel file. If credible, we will buy or borrow a new independent sample and re-test. If the new result changes the conclusion, we update the page.

We do not plan to change anything. If we ever correct an error or re-test a model, we will note it on the page.

Requests, business, and attribution

Yes. If it interests us, we will likely even fund it: have you ship it to us, and pay the facility for the testing procedures. If it’s not a model that interests us, we are still willing to get various subwoofers tested if you cover the lab fee and shipping. It’s at our discretion what we consider of interest, so please, no temper tantrums when I don’t want to pay hundreds of dollars to test your subwoofer that is completely unrelated to this segment of the market. Contact us to discuss.

No. Some we sell, some we do not. Some we are dealers for, and some are our very own ResoNix models. Methods are identical either way, and each page states whether we carry it.

Yes, as long as you link back to the data page and do not alter axes, scales, smoothing, or watermarks.