Electron microscopy has found more and more applications in recent years. Each sample requires its own combination of imaging settings to obtain the best analysis results.
This article describes, one by one, the key parameters that must be taken into account when imaging samples, and outlines the mathematics and physics behind them.
Magnifying glasses can be dated back to the ancient Greeks, with Aristophanes recounting that the first attempts to look at intricate details were a children's pastime. It was then that the notion of magnification entered the human vocabulary.
Scientific interest in the micro- and nanoworld has grown enormously over time, creating the need for magnification to be quantified.
In modern terms, magnification is defined as the ratio between two measurements, which means two objects are needed for the value to be evaluated: the sample itself and an image of it. While the sample does not change size, the image can be reproduced at any number of sizes.
Printing a photograph of an apple on a standard printer sheet, and then printing it again to fit a poster that covers a building, significantly alters the magnification value (it is far larger in the second case).
A more scientific example comes from microscopy: once a digital image of the sample is stored, resizing that image yields an incorrect magnification value.
Magnification is therefore a relative number, which makes it impractical as a scientific measure on its own. Scientists instead rely on two parameters that describe the actual area being imaged.
These parameters are the field of view (the region the microscope images) and the sharpness of the resulting image (the resolution). The magnification formula changes accordingly:
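The formula itself appeared as a figure in the original article and is not reproduced here; written out, the standard form consistent with the surrounding text is the ratio of the displayed image size to the field of view:

```latex
M = \frac{\text{size of the displayed image}}{\text{field of view}}
```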
The formula still provides a quick description but does not account for resolution. As a result, the magnification number changes when the same image is scaled to a larger screen.
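To make the scaling effect concrete, here is a minimal sketch (the function name is illustrative, not from the article) showing how the same 0.05 mm field of view yields different magnification numbers on different display sizes:

```python
def magnification(display_width_mm, field_of_view_mm):
    """Magnification = displayed image width / width of the imaged region."""
    return display_width_mm / field_of_view_mm

# The same 0.05 mm (50 micron) field of view shown at two sizes:
monitor = magnification(300, 0.05)    # ~30 cm wide monitor -> 6000x
poster = magnification(3000, 0.05)    # ~3 m wide poster -> 60000x
```

The imaged region never changes; only the reproduction size does, which is why the field of view is the more meaningful number to report.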
Image Credit: Thermo Fisher Scientific Phenom-World BV
The field of view describes the size of the region being imaged. This value typically ranges from a few millimeters (an insect) to several microns (one of the insect's hairs) to a couple of nanometers (the molecular macrostructure of its exoskeleton).
With present-day instruments it is possible to image features in the range of a few hundred picometers, the typical size of an atom. How should the field of view for a given sample be chosen? It depends on several factors.
For example, if particles with an average size of 1 micron need to be counted, it is more efficient to capture around 20 particles per image rather than imaging one particle at a time.
Allowing for the spacing between particles, a field of view of 25 to 30 microns is then adequate.
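The counting example above can be sketched as a quick calculation (the helper name and the pitch assumption are illustrative, not from the article): with particles laid out roughly on a square grid, 20 one-micron particles at a 5-micron center-to-center pitch need about a 25-micron field of view, in line with the 25 to 30 micron figure quoted.

```python
import math

def counting_fov_um(particle_diameter_um, particles_per_image, pitch_factor=5.0):
    """Estimate the side of a square field of view (in microns) that fits
    `particles_per_image` particles arranged roughly on a square grid.
    `pitch_factor` is the assumed center-to-center spacing in particle
    diameters (5x leaves generous room between sparse particles)."""
    per_side = math.ceil(math.sqrt(particles_per_image))
    return per_side * particle_diameter_um * pitch_factor

# 20 particles of 1 micron diameter -> a 5 x 5 grid at 5 um pitch:
fov = counting_fov_um(1.0, 20)   # 25.0 um, matching the 25-30 um guideline
```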
If the particle structure is the main focus, by contrast, a closer view is necessary, and the observed region should be nearer to 2 to 3 microns, if not smaller.
Images of particles. A close-up of a particle (left) shows the surface topography (FOV=92.7 μm). A larger field of view (right) enables more particles to be imaged (FOV=μm). Image Credit: Thermo Fisher Scientific Phenom-World BV
This information has been sourced, reviewed and adapted from materials provided by Thermo Fisher Scientific Phenom-World BV.
For more information on this source, please visit Thermo Fisher Scientific Phenom-World BV.