In our last couple of blog entries, we have talked a lot about MIL-STD-461G RS105 testing, especially given our recent installation of a 2m EUT RS105 system at Naval Information Warfare Center (NIWC) Pacific in San Diego (pictured below). Every time we have the pleasure of doing one of these installs and helping customers learn the RS105 test procedure, we learn valuable lessons. In this blog, we would like to share some of the lessons we have learned along the way in the hope that they help you in your own testing process. If you’d like to learn more about the 2m EUT RS105 system, you can take a look at our website.
First and foremost, we have noticed that MIL-STD-461G needs some updates for RS105. The authors of military standards do their best to maintain continuity and usability in the format of test standards while not overwhelming program managers and engineers with unnecessary information. The downside is that some critical information can get lost in the process. For the MIL-STD-461G RS105 test standard, there are a few recommendations we would like to pass along:
1. Clarify Terminology: Parallel Plate vs. Conical Transmission Lines

Figure RS105-2 and the language regarding “parallel plate” transmission lines can be misleading. As we discuss in greater detail in our blog on EMP testing, PART 3: SUBSYSTEM TESTING STANDARDS AND SOLUTIONS, parallel-plate systems are largely a holdover from the legacy RS05 standard, which used a 5 ns rise time. RS105 now specifies a 1.8-2.8 ns rise time, which is fast enough that a parallel-plate transmission line’s multiple transitions and solid plates can create reflections and diffraction that significantly distort the waveform. Nearly all of the systems meeting RS105 or the classified MIL-STD-2169 use what are referred to as “conical” lines, which have a single transition point at the load, thereby minimizing reflections. A more in-depth discussion can be found here.
2. Alternative to the High-Voltage Probe: Using a Ground-Plane D-Dot E-Field Sensor

Under Section 5.22.3.2, Test equipment, the standard calls for a “high-voltage probe” with a 1 GHz minimum bandwidth. To the best of our knowledge, no high-voltage probe with this bandwidth exists. As a result, APELC and others typically use a ground-plane d-dot E-field sensor, such as the Prodyn ADS-180R, to measure the electric field directly at the output of the pulse generator. These probes typically have bandwidths well above 1 GHz. Multiplying the measured E-field by the plate/wire height at this point yields the voltage at the pulser output. A picture of the reference d-dot probe installed at the output of the APELC 2m RS105 system is shown below.
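The field-to-voltage conversion described above is a one-line calculation. The sketch below (in Python, with a function name and numbers of our own choosing, not from the standard) shows the relationship for a TEM line:

```python
# Minimal sketch of the E-field-to-voltage conversion for a TEM line.
# Function name and example values are illustrative, not from the standard.

def field_to_voltage(e_field_v_per_m: float, plate_height_m: float) -> float:
    """For a TEM transmission line, the line voltage at a given point is
    the vertical E-field multiplied by the plate/wire height there: V = E * h."""
    return e_field_v_per_m * plate_height_m

# Example: a 50 kV/m field measured under a 2 m plate height implies 100 kV
pulser_voltage = field_to_voltage(50e3, 2.0)
```
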
3. Digital Integration Over Passive Integrators

Within this same section, the standard calls for an “integrator, time constant ten times the overall pulse width.” While passive integrators exist that meet this specification, APELC has found that this type of integrator not only attenuates the signal to the point that it is difficult to measure given the signal-to-noise ratio, but also that the resulting signal still requires post-processing to remove droop. As a result, we prefer numerical integration, either native to the oscilloscope or in post-processing. This gives the operator much greater control over signal amplitude and offset correction. A screen capture showing both the raw, un-integrated probe signals and the integrated math channels is shown below.
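As a sketch of the post-processing approach, the snippet below (names and the baseline-window length are our own illustrative choices) integrates a raw d-dot record numerically, subtracting the pre-trigger baseline first so that a DC offset does not accumulate as a linear ramp. In practice the result would also be scaled by the probe's equivalent area and any attenuation in the signal chain.

```python
import numpy as np

def integrate_ddot(ddot: np.ndarray, dt: float, baseline_samples: int = 100) -> np.ndarray:
    """Numerically integrate a raw d-dot record.

    Subtracting the mean of the pre-trigger baseline removes the DC offset
    that would otherwise integrate into a linear ramp (droop).
    """
    corrected = ddot - np.mean(ddot[:baseline_samples])
    return np.cumsum(corrected) * dt

# Illustration: a 100-sample rectangular "derivative" pulse riding on an offset
dt = 1e-9                    # 1 ns sample interval
raw = np.zeros(1000) + 0.05  # constant offset on every sample
raw[200:300] = 1.05          # pulse of height 1.0 on top of the offset
e_field = integrate_ddot(raw, dt)
```

With the baseline removed, the integral settles at the true pulse area rather than ramping away from it.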
4. Comprehensive Calibration for EUT Volume Measurements

Within Section 5.22.3.4, Procedures, under the sub-section for calibration, the standard calls for the “Peak value of the electric or magnetic field for each grid position”. APELC has found that, for all practical purposes, it is necessary to generate calibration points across the full range of E-fields (starting at 10%) to ensure each EUT volume measurement level has a corresponding reference-probe level, especially given that the standard asks the operator to remove the EUT probe during actual testing and rely completely on the reference probe.
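One way to use such a full-range calibration during testing, when only the reference probe remains, is to interpolate the expected reference-probe reading for each EUT set-point. The sketch below uses hypothetical calibration numbers of our own invention, purely for illustration:

```python
import numpy as np

# Hypothetical calibration sweep (kV/m) -- illustrative values only.
eut_levels = np.array([5.0, 10.0, 20.0, 35.0, 50.0])  # field at the EUT grid position
ref_levels = np.array([4.8, 9.7, 19.5, 34.1, 48.9])   # simultaneous reference-probe reading

def expected_ref_reading(eut_target_kv_m: float) -> float:
    """Interpolate the reference-probe level corresponding to a desired
    EUT-volume field, using the calibration sweep above."""
    return float(np.interp(eut_target_kv_m, eut_levels, ref_levels))
```

During the actual test, the operator drives the pulser until the reference probe reads the interpolated value for the desired EUT-volume level.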
5. Practical Steps for Increasing Pulse Amplitude During EUT Testing

This is perhaps the biggest point of contention we have noted among test operators. In the EUT testing section of the standard, the procedure states: “Apply the pulse starting at 10% of the pulse peak amplitude determined in 5.22.3.4b(4) with the specified waveshape where practical. Increase the pulse amplitude in step sizes of 2 or 3 until the required level is reached.” The question that always arises is “2 or 3 what?” Does this mean 2 or 3 kV/m (an unrealistic level of granularity given the nature of these pulse generators), or does it mean a factor of 2 or 3 (which would be very large steps if you are threshold-testing an EUT)? Our typical recommendation, and often the preference of the test engineers we have worked with, is to start at 10% and work up in 5 kV/m increments (e.g., 5 kV/m, 10 kV/m, and so on).
6. Tolerance Considerations for Peak Electric Field Values

The standard defines a peak value for the electric field that meets the following tolerance criteria: 0 dB ≤ magnitude ≤ 6 dB above the limit. The question that arises, and is not answered within the standard, is: does this tolerance band apply from the 10% level upward? Again, the spark-gap-based pulsed power used to generate these <2.8 ns pulses at hundreds of kilovolts does not allow an especially high degree of granularity in amplitude adjustment. So while our systems typically have no problem falling within ±3 kV/m of the desired set-point, the +6 dB upper limit can be a useful amount of tolerance from 10% on up. This often results in set-point ranges of 5-10 kV/m, 10-15 kV/m, and so on.
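Since the tolerance is specified in dB, it helps to see what +6 dB means in field terms: for an E-field, dB = 20·log10(measured/set-point), so +6 dB is a factor of roughly 2. A minimal check along these lines (the helper function is our own, not from the standard):

```python
import math

def within_rs105_tolerance(measured_kv_m: float, setpoint_kv_m: float) -> bool:
    """Check the 0 dB <= level <= +6 dB amplitude tolerance.
    For field quantities, dB = 20 * log10(measured / set-point)."""
    db = 20.0 * math.log10(measured_kv_m / setpoint_kv_m)
    return 0.0 <= db <= 6.0

# A 5 kV/m set-point therefore tolerates anything from 5 up to ~10 kV/m
```

This is why a 5 kV/m nominal step can reasonably land anywhere in a 5-10 kV/m band.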
7. Effective Techniques for Reducing Common-Mode Noise

Appendix A of the standard (Section A.5.22 (5.22)) provides some additional clarification regarding the test procedure. One point worth mentioning is that the standard suggests inverting the b-dot or d-dot probe by 180 degrees and taking another shot so that the two signals can be subtracted to remove common-mode noise. While this is good in theory, in practice these simulators exhibit enough shot-to-shot variation that the difference between the opposing waveforms can cause more problems than it solves. There are two accepted means of handling this common-mode noise that are regularly employed in the HEMP test community. The first is a probe-and-balun combination, such as the Prodyn AD-55 and BIB-100, that simultaneously measures the 180-degree-opposed signals on each shot; the common mode cancels when the two signals from the AD-55 are combined within the BIB-100 balun. The second is the use of analog fiber-optic links to replace long runs of coaxial cable from each probe. Having tried long runs of coax for this purpose, we cannot stress enough how much fiber-optic links help in removing noise from the diagnostic signal chain. Unless the simulator is very small, this is often a non-negotiable item given the amount of noise that can be induced on long runs of coaxial cable. An example image of the EUT probe with balun and fiber-optic transmitter is shown below. Note that the coaxial cable is slightly longer than desired and is covered with an additional layer of braid, along with ferrites (beneath the braid), to reduce common-mode noise.
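The balun's common-mode rejection is easy to see numerically: differencing two 180-degree-opposed outputs from the same shot recovers the field term while noise common to both channels cancels. A toy illustration with synthetic signals (not measured data; names and waveforms are our own):

```python
import numpy as np

def balun_combine(sig_pos: np.ndarray, sig_neg: np.ndarray) -> np.ndarray:
    """Difference the opposed outputs as a balun does: the field signal
    (opposite polarity on each element) adds, common-mode noise cancels."""
    return 0.5 * (sig_pos - sig_neg)

# Toy shot: the same field seen with opposite polarity, identical noise on both lines
t = np.linspace(0.0, 1.0, 500)
field = np.exp(-((t - 0.3) ** 2) / 0.002)  # stand-in for the pulse
noise = 0.2 * np.sin(40.0 * t)             # common-mode pickup
recovered = balun_combine(field + noise, -field + noise)
```

Because both channels are captured on the same shot, the cancellation does not depend on shot-to-shot repeatability the way the invert-and-subtract approach does.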
In general, it is understood by all parties that the test engineers and program managers will need to use their own discretion in performing the test beyond what is described in the MIL-STD. That said, maintaining clear communication and expectations between the test lab and the customer is critical to making sure the results of the test are useful to all involved. This is not a trivial concern, given that the OEM for the EUT is often the test lab’s customer, and the OEM’s customer is most often the Department of Defense.