Using Electron Microscopy in Metallurgical Failure Analysis Lab Services

Understanding why things fail is critical to preventing failure in the future. Whether it is a single catastrophic failure whose root cause needs to be understood to prevent future critical failures, or a test run of a prototype that is about to go to production, understanding the root causes of failure is essential.

Mechanical failures, in particular, can be complex and difficult to understand. When a material fails mechanically, several tests must be run and images taken in order to understand the cause of the failure. Taking your sample to a lab with electron microscopy services can help you dig deeper to find out where your failure might have occurred.

Advantages of Electron Microscopy in Mechanical Tests

The failure analysis engineers in our scanning electron microscopy lab are experienced experts in the use of this analytical technique. Electron microscopy has several advantages that can be leveraged for mechanical failure analysis. First and foremost is resolution. Scanning electron microscopes (SEMs) are a valuable failure analysis tool in the hands of an experienced technician. Not only are they capable of much higher magnification than an optical microscope, but they also offer an array of analytical tools that can be used to enhance investigations.

The enhanced resolution allows an investigator to look down to the near-atomic scale, examine grain boundaries and crystalline formations, and obtain elemental analysis. This means that the right set of images can point directly to the root cause of a failure at the microstructural level.

Fatigue Analysis

Failures can happen in quite a few different ways. When looking at a failed component, detailed images can give materials experts clues to what might have been the cause. For example, a material that fails due to a single powerful tensile force is going to look very different from one that experienced many cycles of very low force, or damage due to low-frequency vibration. A skilled lab with an SEM can take detailed images that distinguish between these events.

SEMs are also capable of imaging at much lower accelerating voltages. Lowering the beam voltage allows the microscope to capture much more detail along the surface of the failure; this kind of imaging reveals the fracture's surface topography.

When a part fails due to fatigue, whether from a small number of high-stress cycles or many cycles at low stress, the fracture surface will develop features called striations. These formations indicate the cause of the failure. Being able to see as much detail as possible when looking at fatigue striations allows engineers and materials experts to determine what mode or modes might have caused the part to fail. All of this is made possible by images taken by skilled SEM lab operators.

Structural Analysis and Crack Propagation

Another type of failure that can occur in a material is cracking. Cracks generally form at the micro level along features in the material called grain boundaries.

When one material is combined with another (in metallurgy this is called alloying), the resulting solid is made up of many small crystalline regions bonded together. The easiest way to think of this is to picture a rice cereal treat: the full bar is made up of small pieces of cereal, and you can see the boundaries between the individual granules. A similar thing happens in metals.

If the materials are combined properly, the individual “grains” will be small, with short boundaries between them. In other cases, the grains are large, leading to long boundary lines between them.

When a material is subjected to stresses, cracks can form along these boundaries. Over time, a crack will follow this grain boundary, becoming larger and larger until the part fails.

Well-taken images from an electron microscope allow materials experts to view the grains and the grain boundaries of the material. This can clue them in to where cracks can form, how a crack propagated through the material, and whether the material itself was at fault for the failure.

Consider Your Sample Size

One thing to consider when thinking about using a lab for electron microscopy is how large your sample is. Sample size can determine whether testing can be done in a way that will not damage the sample, allowing other testing techniques to be used on it in the future.

Samples that are too large will often need to be cut to the proper size for the electron microscope. If this is done, it could compromise the sample for future use. It is very important to know your lab’s capabilities and what sample sizes they can handle before testing begins.

Using an experienced and skilled lab for your SEM failure analysis could mean the difference between determining the root cause of the failure and waiting around for further failures to conduct more testing. Contact us today about our electron microscopy services!

Choosing the Right Microelectronics Failure Analysis Lab

Computers used to take up entire rooms to perform what we would today consider rather rudimentary calculations. As computing power increased, the size of the computers decreased. What was once an easily spotted failure, like a blown vacuum tube, became something far harder to see, like electron leakage through a PNP junction.

Enter the world of microelectronics. Every mobile electronic device today is powered by microelectronics. They need to be small, fast and reliable. They also need to be durable. When things go wrong with them, we want to know what caused the failure and how it can be fixed to make our electronics as reliable as possible.

What Are Microelectronics?

An understanding of where microelectronics might fail begins with an understanding of what microelectronics are. In short, microelectronics are circuits constructed at the micrometer scale, and sometimes even smaller.

The heart of every computer is the transistor. An easy way to think of it: the more computing power you want a computer to have, the more transistors you need in its CPU. To make a more powerful computer while keeping its size the same, transistors must be made smaller and smaller; enter semiconductor devices.

How We Make Transistors From Semiconductors

As computers advanced and the demand for computing power increased, it became necessary to create a transistor out of something that could be made very small. Engineers found a way to reproduce the effect of a transistor using a combination of metals and semiconductor materials.

Transistors created using this method are the foundation of the integrated circuitry controlling all of our electronic devices today. They are made using complex fabrication techniques that allow engineers to make transistors so small they cannot be seen by the naked eye.

When Things Go Wrong

Inevitably, as with any production method, things can go wrong. When this happens, chips and devices fail. This is where the need for accurate failure analysis comes into play. In order for engineers to understand the causes of a circuit failure, special equipment and experts in using that equipment are needed to perform a detailed analysis of the broken part.

There are quite a few places that microelectronics can fail:

  • Fabrication Process Failures – These are generally defects introduced when the circuit is created. Complex chemical and electrical processes are used to create microelectronics at a small scale. Small impurities in the materials, the wrong concentration of etching or cleaning chemicals, or plasma etching process issues can all contribute to manufacturing problems that cause IC and device failure.
  • Operational Parameter Issues – These could stem from design flaws that allow voltages or currents too high for the IC to handle. High operating temperatures or shock damage that exceeds what the IC can withstand are other examples of operational parameter issues. Evidence of all of these can be seen using the right analysis techniques.
  • Design Flaws – These are issues with the circuit itself not performing to specification due to an improper layout.

Knowing how to look is a skill. Labs that are experienced with microelectronic failure analysis have a feel for where in the circuit to begin their investigation, which can save the device manufacturer time and money because the flaw will be spotted sooner. It means using the right equipment for imaging, using the right processes to cross-section devices if needed, and knowing how to interpret the results.

Knowing where to look is just as important as knowing how to look. Experienced technicians have a feel for where inside the IC these types of failures occur and will look in the right spots sooner than less experienced labs.

Imaging and Failure Analysis

Once the device fails, the investigation into what caused the failure begins. This is where the experts come into play. Choosing a lab that has both the necessary equipment and expertise in the area can be the difference between spotting a flaw and fixing it, or having to scrap a design and go back to the drawing board.

Failure analysis on the micro scale is not something that is done easily. Remember, these devices are small – so small that in some cases even regular microscopes cannot resolve them. This means two things: the analysis requires specialized imaging equipment, and it requires analysts with the expertise to use that equipment and interpret the results.

As you can see, microelectronics can be complicated devices. When trying to determine the root causes of failure, failure analysis experience is key. Choose an electronics failure analysis lab that has the right equipment and experience in the field. Spirit Electronics is here to help you decipher what is needed to solve your problem.

Scanning Electron Microscope – Explore the Nanoscopic World


The electronic component failure analysis process can be long and arduous, involving a wide variety of tools and techniques to uncover the root cause of a malfunction.

Ultimately, however, the culminating moment of any investigation is when an analyst can produce a clear, sharp photograph as incontrovertible evidence of the existence of a defect. Indeed, not only in failure analysis but in any of the sciences, it can be said that “seeing is believing”; a detailed picture can remove any shadow of a doubt as to the nature of an object.

In the case of failure analysis, a good image can help to identify the type of corrective action that must be implemented to resolve a recurring problem. Large defects, like those that result from severe electrical overstress, can often be seen clearly under an optical microscope; however, modern integrated circuits are built with geometries measured in terms of angstroms and nanometers, far below the resolution threshold of optical microscopy. Defects on these devices may be completely invisible under an optical microscope. For uncovering even the smallest defects, Spirit offers electron microscopy services, providing a crisp, clear image of any anomaly imaginable.

The way an electron microscope differs from a visible light microscope can be reasonably inferred from the names of the two techniques: where visible light microscopy uses optical glass lenses to focus the rays of light our eyes normally perceive, electron microscopy uses strong electromagnetic fields to produce, shape, and focus a beam of electrons onto the surface of a sample. As the electron beam interacts with the sample, several phenomena occur; for the microscopist, the most important of these is the generation of secondary and backscattered electrons.

By scanning the electron beam across the surface of a sample and collecting the secondary and backscattered electrons, the electron microscope can construct an image of the sample. Since an electron has a far shorter wavelength than a photon of visible light, the diffraction limit of the tool is much smaller; resolutions of several angstroms can be achieved, where visible light is limited to roughly two-tenths of a micron. This increased resolution makes it possible for an analyst with access to a good electron microscope in his or her lab to find nano-scale defects like gate oxide pinholes or crystalline dislocations.
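
To put some rough numbers behind that comparison, the sketch below works out the Abbe diffraction limit for green light under a high-quality objective and the de Broglie wavelength of an electron at a typical SEM accelerating voltage. The specific values (550 nm light, a numerical aperture of 1.4, a 10 kV beam) are illustrative assumptions rather than the parameters of any particular instrument.

```python
# Back-of-the-envelope comparison of diffraction limits for visible light
# versus an SEM electron beam. All input values are illustrative assumptions.
import math

H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron rest mass, kg
Q_E = 1.602e-19    # elementary charge, C

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit: d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

def electron_wavelength_nm(accelerating_voltage_v: float) -> float:
    """Non-relativistic de Broglie wavelength of an electron
    accelerated through the given potential."""
    momentum = math.sqrt(2 * M_E * Q_E * accelerating_voltage_v)
    return (H / momentum) * 1e9

optical = abbe_limit_nm(550.0, 1.4)          # ~200 nm, i.e. ~0.2 micron
e_beam = electron_wavelength_nm(10_000.0)    # ~0.012 nm, i.e. ~0.12 angstrom

print(f"Optical diffraction limit: ~{optical:.0f} nm")
print(f"10 kV electron wavelength: ~{e_beam:.4f} nm")
```

In practice, an SEM's resolution is set by lens aberrations, spot size, and beam-sample interaction rather than by the electron wavelength itself, which is why real-world figures land in the angstrom-to-nanometer range rather than at the sub-angstrom wavelength the calculation suggests.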

Tuning the electron microscope detector to gather mostly secondary or mostly backscattered electrons can produce different data, showing greater implied topography or accentuating elemental differences, respectively. Electron microscopes also boast a much greater depth of field than optical microscopes, making it possible to keep large three-dimensional structures in focus across larger distances – a benefit when performing inspections of circuit assemblies or deprocessed integrated circuits.

Electron microscopy services are not limited to imaging; in addition to the generation of secondary and backscattered electrons, bombarding a part with a high-energy electron beam also produces characteristic x-rays as a result of the excitation and relaxation of the electrons orbiting the atoms of the sample. The energies of these characteristic x-rays are uniquely tied to the element from which they are emitted; by using an energy dispersive spectrometer (EDS), these x-rays can be collected and the material composition of the sample can be identified. The EDS can be used to positively determine the makeup of contaminants, measure the constituents of an alloy to be compared to a specification or other reference, or generate an elemental “map” showing where certain elements are concentrated on a sample.
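
Conceptually, EDS identification boils down to matching measured peak energies against a table of known characteristic line energies. The sketch below illustrates that matching step with a handful of approximate, textbook line energies; a real spectrometer relies on a far larger line database plus peak deconvolution and quantification routines, so treat this purely as an illustration of the idea.

```python
# Minimal sketch of EDS peak identification: each measured peak energy is
# matched against a short table of characteristic x-ray line energies.
# The values below are approximate reference energies in keV.
CHARACTERISTIC_LINES_KEV = {
    "C Ka": 0.28, "O Ka": 0.52, "Al Ka": 1.49, "Si Ka": 1.74,
    "Pb Ma": 2.35, "Sn La": 3.44, "Fe Ka": 6.40, "Cu Ka": 8.05,
}

def identify_peaks(measured_kev, tolerance=0.05):
    """Return the closest candidate line for each measured peak energy."""
    matches = []
    for energy in measured_kev:
        line, ref = min(CHARACTERISTIC_LINES_KEV.items(),
                        key=lambda item: abs(item[1] - energy))
        matches.append((energy, line if abs(ref - energy) <= tolerance else "unknown"))
    return matches

# Example peak list (illustrative): what a contaminated solder joint might show.
for energy, label in identify_peaks([1.74, 3.44, 8.06]):
    print(f"{energy:.2f} keV -> {label}")
```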

The electron microscope can also be used as an isolation tool for certain types of defects. The electrons that make up the focused beam of the tool are negatively charged, and therefore will experience some degree of attraction or repulsion depending on the charge present on a sample. By intentionally placing a charge on a sample (for example, connecting a voltage source to a failing signal on an integrated circuit), it is possible to change the way that the electron beam interacts with the device, creating differences in image contrast that can highlight a defect. This technique, known as “charge contrast” or “voltage contrast”, can be invaluable in finding certain types of anomalies, especially those that cause open circuits. Indeed, certain defects may not require any additional setup at all; the passive charge contrast resulting from the electron beam itself may be enough for an analyst to pinpoint a defect.

Electron microscopy allows our electronic failure analysts to take incredible images of a huge variety of defects. From melted silicon to cracked metallization and all points between, an electron microscope is an invaluable tool for inspecting any anomaly. Electron microscopy services are, of course, only one part of successful failure analysis; though an electron microscope picture might be the culminating piece of data for a failure analysis report, it takes experience and skill to ensure that the electron microscope picture is, in fact, of the defect at the root cause of failure.

An Approach to Capacitor Failure Analysis

The humble capacitor is one of the most fundamental components of any electronic assembly. These ubiquitous passive devices come in a variety of different flavors; whether formed using electrolytic fluids, metal foils, the metals and oxides of an integrated circuit, or any of a multitude of other materials, there is hardly a printed circuit assembly in the world without at least one capacitor mounted somewhere on its surface. Capacitors form the backbone of charge pumps, frequency filters, power conditioners, and many other common applications; since these components are so crucial to these operations, a malfunctioning capacitor can often cause complete failure of a system. While at first blush a capacitor would seem to be a fairly straightforward device to perform analysis on (after all, how complex can two electrodes separated by a thin dielectric be?), capacitor failure analysis poses unique challenges that must be met with equally unique approaches.

As with any project, the ultimate goal in capacitor failure analysis is determining a root cause for failure – in other words, finding whether the improper operation is due to manufacturing imperfections, end-user abuse, or other factors. Just as with an integrated circuit, the first step in the process is determining where an analyst should even begin looking for a failure; after all, failing capacitors rarely give outward indication that they have malfunctioned (though an exception can be found with polarized electrolytic capacitors, which have a tendency to explode violently when abused, much to the chagrin of many an inattentive engineering student). The same set of tools that an analyst uses to ferret out defects on an integrated circuit can also be applied to the analysis of a capacitor, with the addition of a little creativity.

The most common failure mechanism for capacitors is a compromised dielectric causing leakage between the capacitor’s two electrodes. Depending on the type of capacitor, this dielectric may take many forms; one of the most common capacitors, the multi-layer ceramic capacitor often referred to as a chip cap, uses a ceramic material comprised of small particles of various materials blended to achieve a desired set of characteristics. In this type of capacitor, the most common failure is cracking or delamination of the capacitor’s internal layers. An acoustic microscope can be used to detect these damaged dielectrics, just as it might find delamination in an encapsulated integrated circuit; analyzing a capacitor acoustically, however, does not necessarily follow the same course as analyzing a packaged IC.

In a packaged IC, there are two primary acoustic techniques for determining the condition of a package; a plan-view image of the device (referred to as a C-Mode image), and comparisons of the reflected acoustic wave at several points (known as A-Scans). The C-Mode image contains data about a small handful of interfaces within the package (e.g. the die-to-encapsulant interface, or the encapsulant-to-leadframe interface), while variations in phase and amplitude on the A-Scan can be used to identify differences between points that might indicate a defect. A chip cap has many more interfaces than an integrated circuit, with multiple layers of metal and ceramic stacked upon one another; the C-Scan can really only be used to look at one of these interfaces at a time, and as such is not an ideal approach to analyzing the entire device. For a ceramic capacitor, the appropriate technique is a tomographic approach known as a B-Scan – a technique which provides cross-sectional images of the entire thickness of the device.

Using a B-Scan, it is possible to determine not only the presence of a damaged dielectric in the capacitor, but also its relative location in the device, facilitating a targeted cross-section. Since many capacitor failures result in increased leakage current, many integrated circuit techniques for isolating leakage translate directly to capacitor analysis. While techniques steeped in semiconductor physics like photoemission are of limited utility for capacitor failure analysis, methods of isolating current flow by its secondary effects, like thermal imaging, are more than capable of identifying dielectric pinholes or other leakage sites.

Since these techniques often rely on line-of-sight, they are more useful as a secondary confirmation of a failure, correlating an electrical signature to a physical defect revealed during deconstruction or cross-section of a device. The failures described here are only a small incursion into the realm of capacitor failure analysis. Indeed, even devices as seemingly humdrum as the simple capacitor can make for exciting failures; leaking electrolytic capacitors may cause catastrophic failure in the form of burnt circuit boards, tantalum capacitors may explode in a shower of sparks, and high voltage capacitors may break down with a thunderous crack. Despite the component's simplicity, failure analysis on capacitors is a complex yet worthwhile endeavor, even if the end result is only an improvement in product reliability instead of the aversion of an uncontrollable conflagration.

Electronics Component Failure Analysis – Isolating Failing Components

The modern electronics consumer is a demanding, discerning individual. The demands placed on any product are extensive; end users expect a wide range of functionality, with high reliability, at low cost. A device as ubiquitous as a smartphone is capable of facilitating transcontinental data transfer, displaying cutting edge graphics, and performing feats of mathematical might, all in a package small enough to fit into a pocket – and at a price point low enough not to empty said pocket. Modern electronic systems require hundreds, if not thousands, of components, all working together in concert to provide the functionality consumers have come to rely on; from the sheer computing power of a cutting-edge microprocessor to the simplicity of a passive capacitor, each component is vital to a device’s operation, since extraneous or redundant parts are trimmed during design in order to minimize costs. When one of these components fails – even one as minor as a surface mount resistor – a device can go from a modern marvel of technology to an extremely expensive inert hunk of plastic and metal. Determining why a device failed is often an excellent first step towards improving the reliability of future generations of products; electronic component failure analysis is, therefore, a key step in the race for continuous improvement of electronic devices.

While the complexity of modern electronics allows the versatility and functionality end users expect, it can make it difficult to determine where to start in attempting to isolate a failure. A circuit board may be hundreds of square inches of densely packed discrete components, integrated circuits, and wiring; a schematic view may be so intricate as to require several feet of paper to print out. In these cases, electronic component failure analysis gains a whole new aspect of complexity; an analyst must be able to isolate the failing component amongst a plethora of other devices. Analyzing each component inside a device is not a particularly effective approach, nor is it an efficient use of time; exhaustive testing could require hours of an analyst’s time and produce very little actionable data. In order to perform a successful analysis, one must first narrow the field of possibilities to create a more manageable test plan.

By examining a device’s history and reported failure mode, an analyst can create a much more limited list of potential failure mechanisms; through experience, the analyst may choose one or two theories that are the most plausible, in doing so limiting the number of potentially failing components. This process often involves poring over the layouts and schematics for a given product; by getting an in-depth look at the way a device is constructed and how the circuit is intended to work, an analyst can more easily identify likely points of failure. Once an analyst has developed a working theory, the failure analysis project proceeds like any other scientific endeavor: by gathering supporting data.

In order to prove their theory, an analyst must be able to provide concrete data pinpointing the failing component. Sometimes, an analyst might be able to use tools like thermal imaging to generate this data (for example, by identifying a component that is overheating as a result of a short-circuit); in other instances, it is necessary to electrically isolate a potentially failing component from the rest of the circuit. Isolating a failure might be as simple as removing components from the board and checking to see if the reported failure is still present; in more complex cases, it may be necessary to carefully cut traces on a board in order to isolate a device from other parts of the circuit. Immediately following every circuit modification, additional electrical testing is necessary to determine whether the correct component has been identified; once the failing device has been found, failure analysis of the individual electronic component can begin.

While the most glamorous part of any electronics component failure analysis project is the moment where an analyst produces the perfect image or bit of test data that inarguably identifies the root cause of a device’s failure, there is a substantial amount of work that goes into a project before that culminating instant of victory. Though an analyst tracing through schematics, removing components from the board, hunching over a test bench, and taking readings off multimeters and curve tracers to determine which component among hundreds is at fault may never get the glitzy Hollywood treatment on prime-time television (despite countless attempts to sell a script for the pilot episode of Chip Scale Investigators), these uninspiring tasks are nevertheless a fundamental part of the failure analysis process.

Technical Competitive Analysis Using Failure Analysis Tools

The modern electronics and semiconductor markets are fiercely competitive. Manufacturers are constantly vying for supremacy, attempting to carve out a niche with novel, innovative approaches to fulfill the needs and wants of an increasingly demanding customer base. In such a rapidly changing, fast-paced environment, bringing a new product to market can be challenging, especially without any sort of knowledge of how the competition might measure up. Often, a manufacturer looking to break into the market will employ a third party to perform a technical competitive analysis – an in-depth look at the construction of a product that can provide insight into key details like process node, die size, and functional block size that can be used to perform cost and performance analyses. At first blush, technical competitive analyses appear completely separate from failure analysis services; in reality, the tools and techniques developed for finding defects on cutting-edge products translate seamlessly to the type of teardowns necessary to perform a deep dive into the minutiae of a product’s construction.

In performing failure analysis, the ability to produce the perfect image of a defect is paramount; a crisp, clear photograph of a gate oxide pinhole or metal over-etch can provide a wealth of information to an engineer grappling with catastrophically low yields. Similarly, the right picture is worth a thousand trite clichés when performing a technical competitive analysis. With the same high resolution tools that a failure analyst uses to capture images of melted silicon and metal in the aftermath of an electrical overstress event, it is possible to identify functional blocks on a die, measure the size of a memory cell, and determine the processes used to manufacture a product. High magnification optical images of a product can provide easy-to-interpret, high-level information about a device, while ultra-high resolution electron microscopy can be used to perform circuit extraction, reverse engineering, and process analysis. Of course, just as with failure analysis, there is often a rocky path that must be traversed in order to get to the perfect picture; the sample preparation of a device is just as crucial as the imaging process.

One of the mainstays of the failure analysis process is deprocessing, the act of removing the layers of metal and oxide comprising a device in order to reveal the defects hidden therein. These same techniques are also applicable to technical competitive analysis: in many cases, features of interest are hidden from view (either intentionally or as a consequence of the dense layers of interconnect required in cutting-edge semiconductor products), and must be revealed before any meaningful insight can be gleaned. In many cases, simply removing the metal interconnects to expose the underlying transistors at the polysilicon layer of a device is sufficient; information about functional blocks and process node is easily accessed without the interceding metals obscuring important features. For more in-depth reverse engineering and circuit extraction work, a more methodical, layer-by-layer approach is necessary, so that an expert might be able to trace a signal of interest as it weaves its way through the metallic highways and byways of a circuit. This analytical path is highly targeted towards developing an understanding of a device’s circuitry; in order to better understand its construction, other techniques are better suited.

In the same way that cross-sectional analysis is used to view the many layers of an integrated circuit and look for defects or process weaknesses between vertically stacked traces, a cross-section for technical competitive analysis can reveal aspects of a device’s construction that are not readily apparent through deprocessing. The materials used in constructing a device are often equally important as the device’s circuit layout: dielectric composition, the spacing between traces, and the type of metallization used can all greatly impact a device’s performance. Cross-section is also one of the best ways to determine transistor construction characteristics – not just gate length for assessing process node, but atypical transistor constructions that might not be readily apparent from a top-down inspection (for example, the appearance of LDMOS in an RF block of a baseband processor). With the proper sample preparation, an expert can even make inferences about dopant types and profiles based on a cross-sectional inspection.

A very close cousin of technical competitive analysis is intellectual property investigation – specifically, patent infringement analysis. As mentioned, the modern electronics and semiconductor markets are fiercely competitive; so competitive, in fact, that safeguarding one’s intellectual property in order to maintain a technological edge is crucial. Using the same set of aforementioned techniques, a team of analysts can generate a compelling dataset to prove infringement on a client’s IP; in doing so, the client can maneuver themselves into a position of strength for licensing negotiations or IP litigation.

Solder Quality Inspections and Failure Analysis

While solder, the metallic alloy that is melted and reflowed to create joints between components and printed circuit boards, may not be as exciting and glamorous as the intricate webwork of copper and polysilicon in an integrated circuit, it is still vital to the creation of an electronic device. Without proper solder connections, even the most advanced of integrated circuits is reduced to an ineffectual paperweight, lacking any pathways for power and signals to travel over. Being able to perform a solder quality inspection is, therefore, an integral part of any failure analyst’s repertoire of skills.

As with any failure analysis study, solder quality inspections begin with non-destructive tests, in order to try and pinpoint defects without inadvertently eliminating any evidence. X-ray inspection is one of the principal methods of inspecting solder quality non-destructively, as it is easy to characterize joints and generate statistics that can be useful in determining whether to accept or reject a given process. Percent voiding (the area of a given joint where there is a void, or air pocket, in the solder as a percentage of the total area of the joint), ball size, and ball shape (whether the solder balls of a BGA appear round and uniform, or squashed and stretched out of shape) can all provide insight into the reliability of a given solder process. X-ray tomography systems, which produce three-dimensional models of the devices they analyze, can provide an even greater depth of detail of solder joint issues; depending on resolution, they can reveal joint defects, like “head-in-pillow” or non-wetting. Even the relatively minuscule C4 bumps used to connect “flip-chip” die to their substrate can be examined this way; these particular devices are also good candidates for acoustic microscopy at ultra-high frequencies (generally greater than 150MHz), which can reveal cracked or otherwise malformed joints.
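
The percent-voiding statistic mentioned above is simple arithmetic: the summed void area within a joint divided by the joint's total area. The sketch below shows that bookkeeping for a few hypothetical BGA balls; the 25% accept/reject threshold and all of the area values are illustrative assumptions, not limits drawn from any particular workmanship standard.

```python
# Minimal sketch of percent-voiding bookkeeping for solder joints measured
# from an x-ray image. All numbers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SolderJoint:
    joint_area_mm2: float      # total joint area measured from the x-ray image
    void_areas_mm2: list       # individual void areas found within the joint

    def percent_voiding(self) -> float:
        return 100.0 * sum(self.void_areas_mm2) / self.joint_area_mm2

def screen_joints(joints, max_percent_voiding=25.0):
    """Flag joints whose total voiding exceeds the assumed limit."""
    return [(i, j.percent_voiding(), j.percent_voiding() <= max_percent_voiding)
            for i, j in enumerate(joints)]

# Example: three BGA balls of equal area with differing amounts of voiding.
bga_balls = [
    SolderJoint(0.20, [0.01, 0.02]),   # 15% voiding -> accept
    SolderJoint(0.20, [0.06]),         # 30% voiding -> reject
    SolderJoint(0.20, []),             # 0% voiding  -> accept
]
for index, voiding, ok in screen_joints(bga_balls):
    print(f"Ball {index}: {voiding:.0f}% voiding, {'accept' if ok else 'reject'}")
```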

While non-destructive test methods provide strong indicators of possible failures or quality issues, they generally need to be corroborated by a more direct view of the failure; destructive testing in solder quality inspections is used to confirm defects noted during non-destructive analysis, and to reveal defects of a size or nature that masks them from less intrusive methods. One of the most common techniques used to analyze solder joints in this fashion is the micro-section; by grinding into and polishing a solder joint, many defects can be viewed and photographed directly. Micro-sectioning also provides information about intermetallic compound (IMC) formation and solder grain structure, both of which can be used to characterize a soldering process. Micro-sectioning provides a high level of detail about a limited number of solder joints on a component; the complementary technique, dye penetrant testing, offers a broader view of all of a component’s joints. By immersing a sample in a fluorescent or otherwise brightly colored dye, then prying the sample from the board, an analyst can locate cracked or non-wetted joints across the whole of a sample.

While imaging techniques and other direct methods of seeing defects are often easiest to understand, generating data through electrical characterization is also an important part of solder quality inspection. Reliability tests, such as HALT or other stress testing, provide important simulated data on how a device might age in the field; following these tests up with the aforementioned techniques provides a more comprehensive dataset for understanding a soldering process. Even very basic tests, like placing a device under bias in an environmental chamber and varying the temperature across the sample’s specified operating range, can reveal defects and process weaknesses that might otherwise go unnoticed.

In some cases, solder quality inspection does not have anything to do with the structure of the solder, but rather with the materials that comprise it. RoHS certification requires that solder be free of lead, in order to mitigate some of the environmental damage posed by e-waste. Tools like energy dispersive spectroscopy or x-ray fluorescence provide data about the elemental composition of a given sample and can be used to screen a process to ensure that lead-free solder has been used for all components.
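
As a small illustration of that kind of screening, the sketch below compares hypothetical XRF readings for lead against the RoHS restriction of 0.1% by weight (1000 ppm) in a homogeneous material; the sample names and concentration values are invented for the example.

```python
# Hedged sketch of a lead screening check on XRF (or EDS) concentration data.
# The readings and sample names below are illustrative assumptions.
ROHS_LEAD_LIMIT_PPM = 1000.0   # RoHS limit for lead: 0.1% by weight

def screen_for_lead(readings_ppm: dict) -> dict:
    """Return a pass/fail verdict per sample against the lead limit."""
    return {name: ("PASS" if ppm <= ROHS_LEAD_LIMIT_PPM else "FAIL")
            for name, ppm in readings_ppm.items()}

xrf_readings = {"U1 ball": 120.0, "J3 fillet": 40000.0, "R17 termination": 300.0}
for sample, verdict in screen_for_lead(xrf_readings).items():
    print(f"{sample}: {verdict}")
```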

Though solder quality inspection may take on a variety of different forms, there is always one point of commonality; all are designed to generate data to act as a springboard for continuous improvement. By more thoroughly understanding the solder processes used to create an electronic device, manufacturers can see potential weaknesses and reliability issues. In-depth analysis of solder quality is, therefore, invaluable for any manufacturer looking to deliver a more robust product.

Electronics Failure Analysis of Hermetic Packages

Failure analysis of consumer electronics can pose a wide variety of challenges, due to the multitude of different failure mechanisms that might befall a device. Environmental factors, mistreatment, and even the way that the device is packaged can contribute to the untimely demise of a device. While the vast majority of integrated circuits are packaged using a plastic or epoxy based mold compound, some high-reliability devices – especially those used in aerospace applications – are encased in hermetically sealed tombs of ceramic and metal. Performing electronic failure analysis of these hermetic packages poses a new set of challenges, as there are certain failure mechanisms and tests that are applicable only to this type of packaging.

Generally speaking, hermetic packages are sealed under a dry, neutral environment, to prevent the ingress of contaminants that might reduce the device’s reliable lifespan. Ideally, this seal is a completely impermeable metallic weld, preventing even the smallest molecule of unwanted material from entering the die cavity and wreaking havoc with the integrated circuit within. However, it should be no surprise that these seals do not always approach ideality, and may allow gases to seep into the cavity over time. If water vapor or other harmful contaminants can penetrate the cavity, a device’s lifespan may be drastically reduced; as a result, it is very beneficial to have a way to test hermetic packages for potential leaks when performing electronics failure analysis of these devices.

Gross and fine leak testing can be performed in a multitude of different ways – using radioactive tracers, relatively inert gases like helium, fluorocarbons, or any number of different materials – but the basic method is the same. A device is placed in a chamber that has been pressurized with the tracer of choice and allowed to “soak”, to give the tracer time to wend its way into the device cavity. The device is then removed from the chamber and exposed to a detection mechanism, keyed to look for the signature of the particular tracer used in the test; a radioactive tracer might be detected with a Geiger counter, for example. Gross leaks are detected by immersing a device in fluid after the pressure soak; as the tracer escapes the device cavity, a stream of bubbles issues forth from the site of the breach.

Even if the device has been thoroughly welded shut, there may still be trouble inside the device cavity. Small metallic particles, fragments of substrate material inadvertently chipped off during the packaging process, or other materials may be sealed inside the hermetic cavity along with the integrated circuit. These particles are, in many cases, only one bump or jostle away from creating catastrophic failure, shorting pins and bouncing off the glassy surface of the semiconductor die. These particles can, in many cases, be so microscopically small, or constructed of such a material, that they are difficult to detect with x-ray imaging; verifying that a particle is present, and then extracting it for analysis, requires specialized tools.

The most common way of identifying the presence of such particles for electronics failure analysis is a Particle Induced Noise Detection (PIND) system. The system consists of a platform capable of dealing short, sharp shocks or sustained vibrations to a sample (in order to jar the particle loose from any crevice it may have wedged itself into) and a sensitive transducer (similar to a microphone) that picks up even the softest of sounds. The suspect part is placed onto the PIND tester and is subjected to a series of shocks and vibrations, shaking the particle and bouncing it off the walls of its metallic prison. With each impact, the sound is detected by the transducer and amplified many times over; the transducer output is displayed as an oscilloscope trace and is also played on a speaker for the analyst’s listening pleasure (since particles are often possessed of an impeccable sense of rhythm). Of course, simply verifying that there is a particle inside the cavity is not enough; an analyst must also be able to extract and identify the particle in order to determine where in the process it was introduced. To this end, a hole is punctured in the device lid and covered with tape. The device is returned to the PIND tester and vibrated until the particle can no longer be detected, at which point it has most likely bounced onto the tape. The tape is then removed and the particle is inspected.

While leak testing and PIND are two methods of finding failures in a hermetic package, they are far from the only analytical methods available. An analyst’s greatest asset is versatility; while the tests detailed here are targeted specifically at finding defects in hermetically sealed packages, there are many other techniques in the analytical toolbox that can be adapted to find these types of failures with equal success.

A Study in Printed Circuit Board (PCB) Failure Analysis, Part 2

Continued from A Study in Printed Circuit Board Failure Analysis, Part 1

The next step in the failure analysis process, revealing the defect, would almost certainly involve the destruction of the board; as a result, a strong hypothesis was necessary before embarking upon any further analysis. In order to determine the best course of action, our analyst reviewed the facts as they stood before proceeding.

 

  1. The failure can be thermally modulated – as board temperature increases, the failure becomes more pronounced.
  2. In the failing condition, high resistance is measured between two points on the same node. This high resistance results in reduced output current.
  3. No signs of solder quality issues – cracking or non-wetting – were noted in the area of the failure.
  4. X-Ray inspection did not reveal any signs of damage to the copper trace between the two suspect points.

Given this list of facts, our analyst determined that the most likely cause of failure was an intermittent contact between the two points in question that became worse under thermal expansion (as the board materials heated and expanded, less material remained in contact to conduct electricity). The most likely location for this type of failure would be at the connection between the copper trace and the barrel of a via or plated thru-hole; given this hypothesis, the analyst elected to cross-section through the PTH for the suspect pin.

The area of interest was cut away from the bulk of the PCB and encapsulated in epoxy. A cross-section was performed by grinding into the suspect pin with progressively finer grits of polishing abrasive, finishing with a sub-micron polishing compound to bring the sample surface to a finish suitable for high magnification imaging. The PTH was imaged with a high power optical microscope; as hypothesized, an incomplete connection between the copper trace and the PTH barrel was noted (reference figures 5 and 6). The analyst had the proverbial smoking gun; now, the only remaining step was to tie the physical defect to its most likely cause.

Though the physical defect had been revealed, the analyst’s job was not over; the goal of any failure analysis project is to find the root cause of failure and determine the most likely origin of an existing defect. Of the many possible explanations for this type of failure, two were considered as the most likely candidates:

  • Mechanical stresses (vibration, thermal cycling, board flexure) may have broken a trace that was originally well connected
  • Insufficient etchback or smear removal (followup after drilling holes in the board) was performed during the PCB manufacturing process, preventing a good bond between the buried traces and the barrel

If mechanical stresses were the root cause of this failure, an analyst would expect to see much more damage to the PCB’s copper traces (some degree of tearing or other stress-related cracking); other than the separation from the barrel wall, no such damage was noted. Improper cleaning and etchback during manufacturing, on the other hand, could very well result in an incomplete bond between a buried trace and the via barrel. The defect was therefore classified as most likely occurring during manufacturing. Corrective action was implemented by adding additional inspection and destructive physical analysis on incoming PCBs per IPC-A-600 as part of production screening; as a result, other failures similar to this were found before reaching the end user, and the PCB manufacturer was able to identify and correct an inadequacy in their process.

Conclusion

In this case study, we examined how the failure analysis process enables a defective part to produce actionable data that suppliers and manufacturers can use to improve their product. Despite starting with little more than a nebulous problem description – “this doesn’t work” – the analyst was able to methodically work towards a more comprehensive explanation of the failure; in doing so, an expensive chunk of scrap was transformed into a valuable source of knowledge, identifying a process weakness and helping to prevent further defective product from reaching end users. Future columns will continue to provide other approaches to failure analysis of printed circuit boards (PCB), components, and other electronic devices; in the meantime, keep an open mind, and remember that failure is nothing more than an opportunity to improve!

A Study in Printed Circuit Board Failure Analysis, Part 1

Over the course of a failure analyst’s career, they will be exposed to an extensive and varied array of devices. No matter the technology – whether they be nanoscopic silicon sensors with moving parts so small as to defy belief or massive circuit assemblies comprised of thousands of discrete components and integrated circuits – no device is completely immune to failure. Variations in process control, insufficiently robust designs and extended abuse by an end user can all spell early doom for a device. In our introductory article, we took a high-level overview of the failure analysis process, discussing the steps an analyst takes to turn a failing, rejected product into actionable knowledge for process improvement; in this column, we will see how these steps are applied to a specific failure. Naturally, examining a relatively trivial case would not provide the necessary depth of learning, so instead, we choose to give an example of a failure many analysts dread: an intermittent failure on a printed circuit assembly.

In this study, a single printed circuit assembly was received as an RMA from an end user. The end user was able to identify the failing assembly only by swapping parts; lacking any sort of test equipment, the customer was unable to provide any detail that could help to narrow the scope of the analysis beyond the most basic of failure descriptions (“this part doesn’t work anymore”). The first step in the failure analysis process is to verify the failure; after initial photo documentation, the assembly was put into functional testing using an application test bench. Initial results were disheartening, to say the least; the assembly functioned as designed, with supply current and output levels within specifications. In the absence of any reproducible failure mode, an analyst must rack their brain, grasping at any explanation for why the product has miraculously returned to normal function. Could the product have been improperly used by the customer – for example, were all connectors fully seated? Were power supply voltages stable and held at the correct levels? Had this board been processed with a top secret, self-healing material pulled straight from the annals of science fiction that had repaired whatever defect was responsible for the initial failure (hopefully not, lest our intrepid analyst find himself out of a job)? Fortunately, in this case, our analyst was rescued from the throes of despair and his search for a new career writing schlocky novellas about autonomous, regenerating electronic assemblies by a sudden change in the functional test results: an output that was previously within specifications suddenly dropped out, with only a fraction of the expected current being supplied to its load. Though our analyst rejoiced at being returned firmly to the realm of reality, these results indicated that the most likely root cause of failure would be hard to pin down – an intermittent connection.

The initial functional test led to several key observations that helped to characterize the failure. Initially, the assembly worked as intended, but after some period of time under power, the device would fail. Furthermore, the failure was not a “hard fail” (i.e. a short circuit or open circuit); power was still being supplied to the output pin, but insufficient drive current was available. After repeating the functional test and seeing the same failure characteristics, it was hypothesized that some thermal effect (thermal expansion, for example) was causing the device to fail. When first powered up, the board was at room temperature; however, after being under bias for a length of time, the power dissipated by the board caused enough self-heating to create a failure. Environmental testing was performed, and the temperature of the board was modulated; a strong correlation was noted between higher board temperature and reduced load current provided by the failing output. With the failure verified and characterized, the next step was to isolate the problem; in this case, isolation was done completely non-destructively, by tracing the circuit from the failing output back until an unexpected high resistance (48,000 ohms) between two points on the same node was noted.
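
The isolation logic here is straightforward: two points on the same node should measure essentially zero ohms between them, so any pair that reads well above a small threshold marks the suspect segment. The sketch below illustrates that bookkeeping; the point names, the 10-ohm threshold, and every reading other than the 48,000-ohm value from this case are illustrative assumptions.

```python
# Minimal sketch of same-node resistance screening during fault isolation.
# Points that the netlist says share a node should read near zero ohms;
# anything well above an assumed threshold is flagged for closer inspection.
def flag_high_resistance(measurements, threshold_ohms=10.0):
    """measurements: list of (point_a, point_b, ohms) taken on the same node."""
    return [(a, b, ohms) for a, b, ohms in measurements if ohms > threshold_ohms]

# Hypothetical probe points along the failing output's net.
same_node_readings = [
    ("U2.pin4", "J1.pin7", 0.3),
    ("J1.pin7", "VIA_112", 0.4),
    ("VIA_112", "Q3.drain", 48_000.0),   # the anomalous segment
]
for a, b, ohms in flag_high_resistance(same_node_readings):
    print(f"Suspect segment: {a} <-> {b} measures {ohms:.0f} ohms")
```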

With the failure verified and isolated to a relatively small area, non-destructive testing procedures were performed. For PCB failures, x-ray analysis and optical inspection are chief among the non-destructive approaches available; other techniques, like acoustic microscopy, are more appropriate for component-level failures. At this point in the process, an analyst would inspect for cracked solder joints or broken PCB traces, misaligned via drills, or any other anomalous features that might help to explain the failure mechanism; in this particular case, no issues were noted during non-destructive testing. While a negative result like this may seem to add no value to the analysis, in this case the data can be used to rule out certain types of defects (for example, a crack in the copper trace between the two points as a result of warping of the PCB is unlikely).