MIPAR Software is an innovative image analysis company and the maker of MIPAR Image Analysis software. MIPAR is a novel image analysis tool that allows users to reliably extract measurements from their images. Handling challenging applications in everything from materials to life science, MIPAR has facilitated automated solutions to hundreds of practical problems.
In this interview, AZoM talks to Dr. John Sosa, CEO and Co-Founder of MIPAR Image Analysis, about how to use MIPAR image analysis software to save time and costs.
What do clients come to MIPAR for and what applications can MIPAR serve?
Clients have come to us when they need effective automated solutions that work for their problems. These are not solutions that demo well but typically fail in real practice, nor are they code-heavy solutions that require a CS degree to develop and are only usable by one or two people.
Instead, we offer effective, automated solutions that overcome challenges that other products often could not, and that are developed rapidly and are easily implemented at their site for their problems.
Over the last year and a half at MIPAR, we have constructed over 700 custom solutions to automate various problems, many in materials, but also many in life science, too. MIPAR covers applications like phase analysis, pores, cracks, additive materials, carbonization porosity, retained austenite measurements, and many, many more.
These are all applications that we have built custom solutions for, and we chose to focus on these three because they are the ones we have seen clients request most often. We've seen many grain size examples, but in particular, we have seen an overwhelming number of particle analysis applications in the last few months, especially with the explosion of additive manufacturing and the role that particle quantification plays in that space.
Grain sizing is often carried out with EBSD but can be a laborious process. Could you tell us about a case study in which MIPAR was able to streamline the EBSD grain sizing process?
We had an industrial client who had been stuck using EBSD alone for grain sizing. It was the only way they could get an accurate segmentation and therefore get an accurate grain size. Manual analysis was not an option because of the overwhelming cost and time intensiveness of it, as well as its subjectivity, especially with the microstructure they were working with where boundaries are fairly ambiguous.
The primary technical challenges stopping other products from properly recognizing the grains were the faint and often very subtle contrast at some of the boundaries, and even more importantly, the need for a software algorithm to do what the human brain does so well, which is estimate where many of these boundaries lie. This can be quite challenging for software to do.
Additionally, the client didn't just need to measure the images. They needed to collect fields in the tens or hundreds and then aggregate the grain sizes from all of those images into a single histogram, to get a single grain size mean, min, and max. On all three goals, and especially that last one, they had been unable to find a satisfactory solution among the tools available to them.
Our users often point out that the side-by-side view at the beginning of the process, once an image has been loaded, is immensely valuable. It means you can always have a before and after of your original image state and your process state.
When users zoom into their image, they can see subtle, faint contrast at boundaries, and they can see incomplete boundaries. The brain is pretty good at estimating where those completions might be, so it would be great for the software to do the same. How we judge the success of this is whether the grain size measured from this image or set of images matches their ground truth, or what the user thinks the true answer is from their EBSD measurements.
The team who was working with this client created this recipe in under a working day with absolutely no programming involved. When the recipe loads, users will see the detection results. If readers are familiar with MIPAR, they will know that they can unlock this and go into detail mode, getting completely under the hood, as it were, and make infinite numbers of customizations to the algorithm or to the recipe. What we had, in this case, were two layers, which are the different classifications of features that have been picked up on by the recipe. Layers can be shown, hidden, renamed, and recolored.
The first layer is all of the grains and the second is just the complete ones or ones that do not touch the edge. This is what the client wanted their size measurements made on. Our clients often ask whether MIPAR lets you zero in on a particular area with a cropping tool. The answer is absolutely, yes.
For example, a crop can be used to include or exclude the data bar, but users could also adjust the area of interest, drag it around, and the algorithm executes exactly where you told it to.
The second step in this client’s process was key in picking up on the very subtle contrast. The boundary detection step uses a local contrast-detection function that does not care about how absolutely dark something is, but rather how dark it is relative to what is immediately around it, which is very similar to how the human eye works. It selects boundaries based on which parts are darkest relative to their immediate surroundings.
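The relative-darkness idea can be sketched in a few lines of Python (a minimal illustration, not MIPAR's actual algorithm — the `window` and `margin` parameters here are hypothetical):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_dark_boundaries(img, window=9, margin=0.05):
    """Flag pixels darker than their local neighborhood mean by at
    least `margin` -- relative darkness, not absolute darkness."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size=window)
    return img < local_mean - margin

# A faint boundary on a bright background is still picked up,
# because only the contrast against the surroundings matters.
field = np.full((32, 32), 0.9)
field[:, 16] = 0.75                      # subtle, low-contrast boundary
mask = local_dark_boundaries(field)
```

A global brightness threshold would miss such a boundary entirely; the local comparison finds it regardless of overall image brightness.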
Then, the connect-boundaries step does the heavy lifting, drawing in these educated boundary estimates and allowing for a proper grain size measurement.
The minimum grain size setting was set, and nothing under four pixels was allowed to pass, which was just the internal standard that the client was used to using. As a result, we built that into the recipe. Of course, this can be set in physical units, but we left it in pixels for this solution.
In order to make measurements of grain size in something other than pixels, users must calibrate the image. They have to set what is known as the scale factor, or how big a pixel is in physical units. We open over 150 file formats, and many of those have metadata built in that we can pull out and use.
“Connecting-the-dots” for automated tungsten carbide grain boundary recognition under SEM.
But, in many cases, our users use basic TIFFs, JPEGs, or PNGs that do not have any of that scale factor metadata. So in these cases, we need to calculate the scale factor from something that has a known distance, for instance, a scale bar that we could see on the original image.
This is done using MIPAR’s calibration tool. MIPAR will even find your scale bars for you, and the resulting scale factors are saved into the recipe and the recipe can then be used for this magnification and imaging resolution.
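The arithmetic behind the scale factor is simple; as a concrete illustration (the numbers here are hypothetical, not from the case study):

```python
def scale_factor(known_length_um, length_px):
    """Physical units per pixel, computed from any feature of known
    size -- for instance, the scale bar visible on the image."""
    return known_length_um / length_px

# A 10 um scale bar spanning 200 pixels gives 0.05 um per pixel;
# any pixel measurement then converts by simple multiplication.
sf = scale_factor(10.0, 200)       # 0.05 um per pixel
grain_diameter_um = 140 * sf       # a 140-pixel grain measures 7 um
```

Because the factor is stored with the recipe, it remains valid only for images taken at the same magnification and imaging resolution.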
Our solution allowed the client to batch process many of their images with this recipe and then make the measurements from the whole set once the batch was complete.
Another feature that our clients have pointed out is immensely valuable is the ability not only to process images in batch but to actually see what the batch does before they make the all-important measurements. As I'm sure many readers know, measurements are only as good as the segmentation — only as accurate as the accuracy with which those features are found by the software.
It does not matter how many measurements software can perform, and it does not matter how many standards it can spit out; if the features are not found accurately, the measurements are no good. So, the ability to perform some quality assurance and supervise images prior to measuring them is key.
In the case of this particular client, they just looked at an overall grain size distribution, which was done in the measure features environment, since this is a measurement on a per-feature basis rather than a per-image basis.
This client wanted to use the caliper diameter — the longest line that fits inside each feature — to measure the size of each grain. The software measured every grain from every image and put it into one column, which then generated the overall grain size distribution across as many images as were in the batch.
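For a convex feature, the caliper diameter is simply the maximum distance between any two of its pixels. A brute-force sketch of that idea (an illustration, not MIPAR's implementation — real tools use much faster convex-hull methods):

```python
import numpy as np

def caliper_diameter(mask):
    """Maximum pairwise distance between a feature's pixels -- the
    longest straight line spanning it. O(n^2): fine for a sketch,
    not for production-scale images."""
    pts = np.argwhere(mask).astype(float)
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).max()

# A 5-pixel horizontal bar: its end-pixel centres are 4 pixels apart.
bar = np.zeros((3, 7), dtype=bool)
bar[1, 1:6] = True
d_px = caliper_diameter(bar)       # 4.0 pixels
```

Multiplying `d_px` by the scale factor gives the size in physical units; collecting one value per grain per image is what builds the aggregate histogram described here.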
There was a mean size of roughly 700 nanometers, along with a min, max, and standard deviation. What gave this particular client immense confidence in this method was that they produced nearly identical grain size distributions from their EBSD grains as from the backscatter image grains.
Because of this, they were then able to adopt this MIPAR solution. What used to take them roughly 10 hours per sample — meaning roughly 240 samples over four months — could be reduced to 240 samples in one day. 240 samples was a fairly standard batch for this client, and they were able to use backscattered imaging in the SEM, which took roughly 50 seconds per image. Using MIPAR saved them roughly $575,000 in SEM operational costs.
How has MIPAR improved the accuracy of the measurements users can achieve?
One client of ours had two options: either inaccurate automation using software that they knew was not finding the particles accurately, or hand tracing every single particle and every single image. They did not have immense confidence in hand tracing, because two people would trace the particles in different ways.
But, manual tracing was not an option, because the cost of doing so was so high. So, they had to resort to software that was not finding the boundaries correctly, but it still allowed them to get some measurement of shape. We worked with them to understand the challenges.
Their requirements were quite unique, given the range of particle examples we have seen. Clusters were to be counted as one: they wanted clusters with no clear dark separation between particles to be classified as one feature, but particles with some visible boundary between them needed to be separated from one another.
Most importantly, this had to work on more than one picture. It could not require tweaking settings image to image to produce reasonable results. Otherwise, they might as well start hiring a fleet of interns to trace all their images.
In terms of what our solution was able to do, it isolated even tiny particles sitting on top of other ones, and the client, after reviewing the separation, gave us the thumbs up. They felt that all the requirements had been met: overlapping particles were captured, fine ones were found, and challenging rough ones were also accurately found. That allowed us to make accurate measurements, as I mentioned before.
To look closer at the recipe that allowed these challenges to be addressed, the layers were separated based on whether the software was confident in its measurements. Red particles, for example, were ones whose completeness the software was confident enough in to measure their shape; yellow particles were ones the software deemed too occluded by other particles to get an accurate measure of their shape.
This was something the client initially had concerns about. They asked how, when you have a particle sitting on top of another one, was that to be separated from its parent in the first place? How do you deal with something like a Pac-Man-shaped particle being inaccurately measured as rougher than it actually is? Our solution, rather than trying to estimate where the particle boundary was — even though that would not be too difficult in that case — was to flag the particles that were too occluded and exclude them from shape measurement. We could not reliably estimate where a boundary was when we could not see it.
Roughness is a measure of how irregular a feature's outline is, or how much concavity it has. Roughness has a minimum of one and a maximum of infinity, and it is used quite commonly in the particle analysis space.
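The interview does not give MIPAR's exact formula, but one common shape-factor convention with exactly this range (minimum one, no upper bound) compares a feature's perimeter to that of a circle of equal area:

```python
import math

def roughness(perimeter, area):
    """One common roughness convention (an assumption -- MIPAR's own
    definition is not stated here): P^2 / (4 * pi * A). Equals 1 for
    a perfect circle and grows without bound as the outline becomes
    more jagged or concave."""
    return perimeter ** 2 / (4 * math.pi * area)

# Circle of radius r: P = 2*pi*r, A = pi*r^2 -> roughness exactly 1.
r = 3.0
circle = roughness(2 * math.pi * r, math.pi * r ** 2)
# Unit square: P = 4, A = 1 -> 4/pi, slightly above 1.
square = roughness(4.0, 1.0)
```

Under this convention a Pac-Man-shaped particle would indeed score higher than the circle it came from, which is why occluded particles had to be excluded from shape measurement.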
As we had plenty of particles to measure, we could batch process hundreds of them. To walk through some of the settings, we used a mark-then-extend-boundaries approach. A brightness marking step selects areas that we are absolutely confident are parts of particles, and then the rest of the algorithm takes this marking and snaps it out to the boundary edges.
This is a very powerful approach that allows us to do what the human eye does so well, which is take areas that you know belong to your features and then snap those areas out to where the feature ends.
The separation was the next hurdle. We had to accurately cut particles that had some local darkness between them, but not cut particles the way a plain watershed approach would — going crazy and making cuts wherever an hourglass-type shape starts to emerge at a neck. So, we used a combination of our local subtle contrast detector and our separate features tool, giving the separate features tool some help by saying: you are only going to draw your lines where there is strong enough local darkness between particles.
That is what the separation darkness sets. It sets that critical darkness that an area has to have locally in order to be cut. Our separate features tool then goes in and completes the cuts where we ask it to. Then all of the outlines are refined and we ended up with our classification of measurable versus not measurable, for shape purposes.
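The gating idea behind the separation-darkness setting can be sketched as follows (an illustration of the concept, not MIPAR's code): only pixels inside the particle mask that are genuinely dark qualify as places where a cut may be drawn, so a watershed-style separator cannot cut a neck that lacks a real seam.

```python
import numpy as np

def cut_candidates(img, mask, separation_darkness):
    """Pixels eligible for a separation cut: inside the detected
    particle mask AND darker than the separation-darkness threshold."""
    return mask & (img < separation_darkness)

# Two touching bright particles with a faint dark seam at column 5.
img = np.full((9, 11), 0.9)
img[:, 5] = 0.4                       # a genuine seam -> cuttable
mask = np.ones((9, 11), dtype=bool)   # everything detected as particle
seam = cut_candidates(img, mask, separation_darkness=0.5)
```

A separate-features pass restricted to `seam` then cuts along the real boundary while leaving hourglass-shaped but seamless necks intact.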
The measurements that resulted were shown not just as a table of measurements, which is great, and not just a histogram of measurements, which is arguably better, but as a color-coded visualization of where these measurements sat in the image. This can be a very valuable tool, and not only for spotting where measurements reside in the image — you might have local clustering of rougher particles in one area than another — it really helps you understand the significance of these measurements and what they are telling you about your features.
While there are many size and shape measurements that MIPAR can produce, roughness and eccentricity tend to be our two most commonly used.
This particular client did not set out expecting to use image analysis for particle sizing. They had a laser-based diffraction approach where they would pass a powder sample through a laser, get diffraction from it, and then have a plot of what the expected histogram of particle size was.
They weren't able to see the particles. As you know, with that method, they just flow a bunch through the laser set up and they get a plot. What they came to realize was that their plot was wrong. They were missing a lot of fine particles in that plot because those fine particles were stuck to bigger ones and were getting counted as one.
Once they saw how accurately these tiny particles could be captured, they moved to image analysis for their sizing, with the added benefit of being able to see what they were measuring. It gives you nice, explorative abilities in your quantification. As with the grains, what this client wanted to do was aggregate size measurements across many fields at once, into one population, into one histogram.
In order to do that, the solution had to be repeatable. It had to work just as well on the next image as it did on the previous one. So, when they loaded a new image, the algorithm re-executed and they got its result. A little of this spot-checking gave them great confidence that the solution was working on all of the images they collected.
They then proceeded to batch process and produced an overall size histogram. Prior to MIPAR, this client was knowingly using another automated tool that was inaccurately capturing their particles. It was cutting them where they didn't want. It was missing ones that were sitting on top of others, but it was all they had available to them.
Manual tracing was completely out of the question because of its cost. With this solution in place, they were able to graduate to an automated solution that they felt was accurate, and get the best of all worlds: speed, accuracy, and reproducibility.
Can MIPAR go further than finding, separating, and classifying particles? Do you have examples of unique challenges MIPAR was able to solve?
This is another particle analysis example, but it is slightly different to those I have previously spoken about. This particular client was measuring their particles in cross-section. They did not feel that they could reliably identify what was a satellite from a loose powder sample; there might be particles sitting on top of other ones that aren't truly bonded and are only statically connected.
So, in order to get a better satellite analysis, they embedded their particles in a binder. They used an instrument known as a Robo-MET 3D made by UES in Dayton, Ohio, to iteratively mechanically polish and optically image their material, thereby collecting a whole set of 2D slices from this embedded powder mixture so that they could then batch analyze for satellite analysis.
Individual particle identification from loose metal powder under SEM (colors randomly assigned to indicate separate features).
They had a few commercial tools available, as well as some open-source tools, and they did not have a problem identifying particles — a quick threshold did what they needed — but that alone did not help them. They did not need to find particles. Instead, they needed to find what were satellites and what were parents, and then construct a custom measurement approach, without having to spend weeks or months writing code and ending up with a solution that only one person in the company knew how to operate.
They could not figure out how to use their commercial tools to get this done. They were fairly stuck, because this was a consulting client that had to return this analysis to their customer in a fairly short amount of time. As was the case with all of these solutions, our team created this recipe inside of MIPAR graphically, without any programming, inside of a working day, and we were able to put it into deployment the next day.
The challenges were to not only separate these very subtle satellite particles but to identify them as satellites separately from their parents and then configure the measurement formula that gave a percent satellite measure for the entire mixture.
Just as before, there were key recipe settings, classification layers, and measurements that ran and popped up immediately on load. What we call clean particles were colored green: particles that don't have any satellites connected to them. Parents of satellites — particles that do have satellites connected to them — were colored pink. The satellites themselves were colored orange.
This classification was done by firstly detecting with somewhat basic thresholding. A critical value was chosen to define what is a particle and what is not. A minimum size was set, as in the grain size study; in this case, the client didn't want anything under five pixels to count as a feature.
The separation was done with a combination of our separate features function and some pretty powerful morphological functions strung together, allowing the very subtle satellites to be cut free from the parent.
In order to assign a particle as a child or a parent, the user could not just use size relative to what was around it, because sometimes there were satellites that were roughly the same size as what they were connected to. What mattered was how far that feature extended away from what it was connected to.
With MIPAR, users have an image processing playground at their fingertips. Users can string together functions graphically, visually, and interactively, in ways that you may have never even thought to with other platforms, and come up with truly unique algorithms and solutions without writing a single line of code.
That power was used to develop this classification algorithm: let's consider the whole cluster, all connected, and look at its centroid — where is it? Then let's cut all of the pieces free and say that the one in the separated set that touches the centroid of the whole cluster is going to be called the parent, and the rest the children, or the satellites. That logic was put together for this purpose.
With that all done, we built up the measurements and generated a count of the number of particles that were so-called clean, the number of ones that have satellites, the number of satellites themselves, and then the total number of particles. The percent satellites were satellites over the total, times 100.
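The centroid-based parent assignment and the percent-satellization metric described here can be sketched together (a toy illustration of the logic, not MIPAR's implementation):

```python
import numpy as np

def classify_cluster(pieces):
    """Parent/satellite assignment: take the centroid of the whole
    connected cluster, call the separated piece nearest that centroid
    the parent, and every other piece a satellite. `pieces` is a
    label image: 0 = background, 1..n = separated pieces."""
    pts = np.argwhere(pieces > 0)
    centroid = pts.mean(axis=0)
    nearest = pts[np.linalg.norm(pts - centroid, axis=1).argmin()]
    parent = pieces[tuple(nearest)]
    satellites = set(np.unique(pieces)) - {0, parent}
    return parent, satellites

def percent_satellites(n_satellites, n_total):
    """Percent satellization: satellites over the total particle
    count (clean + parents + satellites), times 100."""
    return 100.0 * n_satellites / n_total

# Toy cluster: a large parent piece (label 1) with a small satellite
# piece (label 2) attached near its right side.
pieces = np.zeros((5, 9), dtype=int)
pieces[:, :6] = 1
pieces[2, 7:9] = 2
parent, sats = classify_cluster(pieces)
```

Because the big piece dominates the cluster's pixel count, the centroid lands inside it, so it is called the parent even when a satellite happens to be of comparable size in one dimension.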
Again, that was built without any coding and conformed to what the client needed to deliver as a measurement to their customer. A repeating theme is that they wanted to produce measurements from more than just this one field of view. They wanted to count up satellites across many images and express the satellite percentage from looking at all of those fields together.
The client ran over a hundred images through this recipe, and we ended up back in the post processor, where we were able to take a look at the detection results. Just like before, users can flip through the different images and assess for themselves how the detection worked.
To make their overall satellite analysis measurements, they could make a count on two layers across all images and then get a report per image. It is possible to calculate a satellite percentage on a per-field basis if that is the way the experiment was set up. In this case, the client chose to sum two columns, divide the two, multiply by a hundred and that was their total satellite percentage.
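Summing the columns before dividing — rather than averaging per-image percentages — is what makes the result a true whole-population figure; sparse fields would otherwise be over-weighted. A sketch with hypothetical counts:

```python
def overall_satellite_percent(per_image_counts):
    """Aggregate (satellites, total particles) counts across images:
    sum each column over all images, then divide and multiply by 100."""
    satellites = sum(s for s, _ in per_image_counts)
    total = sum(t for _, t in per_image_counts)
    return 100.0 * satellites / total

# Three fields' worth of hypothetical counts: (satellites, total)
counts = [(4, 50), (1, 10), (5, 40)]
pct = overall_satellite_percent(counts)   # 10 / 100 -> 10.0 %
```

Note that naively averaging the three per-image percentages (8%, 10%, 12.5%) would give a slightly different, field-weighted answer.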
Do you believe there is still a need for human intervention or supervision with MIPAR automating so many processes?
Often, you need to just get in there and make a quick fix on detection or on a classification. Sometimes, there is no substitute for the hybrid between human supervision and automation, and we have taken great lengths to facilitate that as much as we can.
As an example, let's say the software did not make a cut somewhere for whatever reason, or a user wanted to call something a small satellite; editing that step allows them to do just that. They click edit, make a cut, and the classification then updates in real time.
That can even be done in batch by making that step interruptible: for each image, the user will be prompted to make any corrections needed before the batch goes on to the next one. That has allowed some very challenging problems to be addressed, still delivering orders-of-magnitude speed increases while allowing the human to assist the automation and reach the accuracy and validation required.
In this case, MIPAR, in under a working day, was able to provide the client with an automated solution — not just for finding particles, not just for separating the satellites, not just for classifying what is a satellite and what is a parent, but a complete solution to take raw measurements and produce an automated metric of percent satellization that technically didn't exist inside of MIPAR until that very day.
Automated particle satellite detection from cross-sectioned metal powder under an optical microscope (orange=satellite particle, pink=particle with satellites, green=particle without satellites).
How would you summarize the main benefits MIPAR can offer?
The solution just described was built from the ground up, without coding, for that particular customer. That is really our main point. We are a solution-driven company and product that works with users of all types. Whether you are someone who does not know the first thing about image analysis but just needs results, or an expert in image analysis, we are able to create solutions for you.
If you are looking to automate quantification in a reliable way, but don't know very much about the computer-aided side of that image analysis, our environment allows our team to custom build these solutions for your particular problems and give you an environment to drag and drop them and get the results that you need.
Of course, we are here not only to consult during the solution development phase but also to train your team in how to deploy these solutions. We have a diverse set of self-learning materials, from our user manual to our tutorials online, as well as our MIPAR Academy — a self-paced Udemy course, available through our site, that teaches you the fundamentals of image analysis and how something like MIPAR can help you apply them to solve real problems. It is also free of charge.
Lastly, our recipe store. We encourage users to take a look at the recipe store on our site. It is a library of over 50 prebuilt recipes for applications all over the place, not just in materials, but in life science and even other fields, too. Those are just a taste of example applications that MIPAR can address.
Readers are welcome to download those and the example images and run them. We have worked with many clients, from academic to industrial clients of all sizes to develop custom automated solutions for them that meet their needs and fill their gaps and address their real challenges.
It is unique that the same environment that allows image analysis beginners to run complicated solutions also offers users who have been doing this all their lives a very powerful playground to create algorithms from scratch. They can very visually, very interactively run these “what if?” scenarios about how to go about solving these problems and come up with solutions, either orders of magnitude faster than other tools have allowed, or for problems that they simply could not solve before. We've got a brief 90-second tour video on our site which does a nice job of visualizing this workflow. I encourage readers to have a look.
We also have an API available that will allow users to run recipes in their own applications if that is the type of solution that is needed. We definitely encourage readers — if they have automated micrograph analysis needs, if they are looking to automate something they have not been able to automate before, or if they are stuck doing a lot of manual tracing or concerned about variability from person to person — to get in contact: email us, give us a call, submit a message on our website, www.MIPAR.us, or chat with us there. Our team is here to build solutions.
About Dr. John Sosa
Dr. John Sosa is the CEO and co-founder of MIPAR Software. He received his Ph.D. in material science and engineering from the Ohio State University while focusing on 2D and 3D microstructural characterization of titanium alloys.
Disclaimer: The views expressed here are those of the interviewee and do not necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and Conditions of use of this website.