What is Metrology Part 23 – Error and Perception

Margin of Error

After a significant amount of time dedicated to this series, I have gained some interesting insights. When we think of metrology and measurement, we need to understand that humans are faulty at what we do. It is difficult to achieve true precision in measurement; we are prone to errors of varying degrees. Secondly, no human has exactly the same perception as another, which leads to various incongruities in how we interpret the physical realm. We can think in terms of optics, general psychology, and a vast number of other phenomena. So how do we escape faulty perception and human error? That seems impossible, but I am going to venture into these topics to show how they affect measurement and metrology as a whole.

Margin of error is a statistic that expresses the amount of random sampling error in a result. The larger the margin of error, the less confidence we can have in the data we collect. In reference to metrology, one can think of a scanning system as our measuring apparatus. When it is operated by a human, various random occurrences can inflate the margin of error within a laser scan. These can include an unsteady hand while scanning an item, a slightly unclean lens that causes distortion within a 3D scan, or movement of the target being scanned. There is a slew of factors that may cause a 3D scan to contain a large margin of error.
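To make the idea concrete, here is a minimal sketch of how a margin of error is typically computed for repeated measurements at a 95% confidence level. The standard deviation and sample size below are made-up values for illustration, not numbers from any real scanner.

import math

sample_sd = 0.12   # spread (standard deviation) of repeated scan readings, in mm (made up)
n = 30             # number of repeated measurements (made up)
z = 1.96           # z-score for a 95% confidence level

margin_of_error = z * sample_sd / math.sqrt(n)
print(f"margin of error: +/- {margin_of_error:.3f} mm")

The key point is the square root in the denominator: taking more repeated readings shrinks the margin of error, but only slowly.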

Act of Perception

Perception is how we organize, identify, and interpret sensory information in order to understand or represent our environment. It begins with physical or chemical stimulation of our sensory systems, which produces signals that travel through the nervous system. This is what allows us to interpret and understand the information we are bombarded with on a daily basis. Examples include how vision occurs through light interacting with our eyes, how we use odor molecules to interpret smell, and our general ability to detect sound through pressure waves in the air. Perception is shaped by the receiver, though. This means their learning, memory, expectation, and attention are vital to how the signals are interpreted.

I bring these things up because they shine a light on a key difference between machines and humans. Machines have far less working experience, expectation, and learning compared to humans. Consistently recognizing a watch in 3D form is natural for most humans, but a machine can be thrown off by slight variations in form. A machine-automated process may have less error in terms of pure measurement, but interpreting the data is still a difficult task for a machine.

Issues of Perception and Metrology

Perception is typically thought of in two forms:

  • Processing an input that transforms into information such as shapes within the field of object recognition.
  • Processing that is intertwined with an individual and their own concepts or knowledge. This includes various mechanisms that influence one’s perception, such as attention.

Through laser scanning, an individual is able to collect data on a physical product. This data needs interpretation for it to have tangible value, and a computer is not readily able to provide that on its own. So metrology is a field built on our innate error and psychology as humans. But that does not mean the field is useless, as we humans have an innate desire to make things quantifiable.


What is Metrology Part 22 – 3D Translation and Rotation

3D Translation

3D translation and rotation are vital within a 3D environment. We have previously discussed their importance within metrology in the context of CMM systems. Interfacing the physical world and the digital world requires precise measurement coordinates, and within 3D environments, movement is vital. Movement, in particular, is our focus today. We will look at how example code within Processing can help us interface the digital world and the physical realm. How does one simulate the real world within a virtual space? VR and 3D build environments are critical, and metrology systems and laser scanning help. So what are the basics needed for measuring and calculating space in a digital field? Tracking movement, in terms of translation within a coordinate system, is vital. Let’s look into Processing as our main digital coding tool: how does Processing compute translational movement? If one wants to understand coding in a 3D environment, it is highly recommended that you understand object-oriented programming. We will not do a deep dive into that today, but a lot of the operations we are doing are object oriented. The following code is how I will demonstrate movement within Processing:

// Create a 640 x 360 window that uses the P3D (3D) renderer
size(640, 360, P3D);
// Fill the background with black (0)
background(0);
// Turn on the default ambient and directional lighting
lights();

// Save the current coordinate system before transforming it
pushMatrix();
// Move the origin 130 units right, to the vertical center, and 0 in depth
translate(130, height/2, 0);
// Rotate the coordinate system 1.25 radians about the y-axis and -0.4 radians about the x-axis
rotateY(1.25);
rotateX(-0.4);
// Draw a cube with 100-unit sides, without an outline, at the transformed origin
noStroke();
box(100);
// Restore the saved coordinate system
popMatrix();

// Save the coordinate system again for the second shape
pushMatrix();
// Move right, slightly above center, and 200 units away from the viewer
translate(500, height*0.35, -200);
// Draw a wireframe sphere of radius 280: no fill, white strokes
noFill();
stroke(255);
sphere(280);
// Restore the coordinate system
popMatrix();

Processing Example Code

Let’s take a look at this line by line. The first line, size(640,360,P3D), creates the window and selects the P3D renderer, which builds the 3D environment. The background of this window is set by the color value 0, which corresponds to black. The lights() function enables ambient light, directional light, and specular values within our environment; in this case we are using the default light setup. pushMatrix(), which we have discussed before, saves the current transformation matrix so that it can be manipulated and later restored. Our manipulation comes in the form of translations and rotations. translate(130, height/2, 0) is the translation in question: it moves the origin 130 units along the x-axis, to the vertical center of the window, and leaves the depth unchanged. The rotateY() function rotates the coordinate system by a given angle, in radians, about the y-axis of our viewport. The rotateX() function is similar, except it rotates about the x-axis. noStroke() frees our 3D renderings from borders. box(100) creates a cube whose sides are 100 units long. popMatrix() restores the saved matrix and ends the manipulation. We then start a new manipulation with pushMatrix(). This time, we apply a translation with the following command: translate(500, height*0.35, -200). Instead of a box, though, we draw a white wireframe sphere using noFill(), stroke(255), and sphere(280).

Planar Rotation

We have focused a lot on coding basics within this metrology series as a whole. Having standardized measuring devices within our metrology equipment is key; without them, we would lack tangibility. If a 3D environment cannot be built within a computer, there is no way to interpret the data from the 3D world we interact with. Setting a coordinate based on the world around us is still a difficult task, though. So how can we do this? It makes us question how most of the devices we use in metrology are able to set measurement values accurately based on the real world. This set of questions is something I would like to explore further.


What is Metrology Part 19 – Moire Effect in 3D Printing

Moire Effect

Errors are abundant when measuring objects, and we have come across this continually within our series of articles. Image processing in two dimensions is vital for transforming images into a 3D structure. This includes extruding a 2D image into 3D as well as stitching 2D images to create a 3D image. Today we will learn about an effect in photography that many of us have noticed but may not know by name. We will also look into how it affects metrology and, subsequently, 3D printing.

The Moire Effect refers to a pattern that occasionally appears in images. A moire pattern is a large-scale interference pattern. It can be produced when an opaque pattern with transparent gaps is overlaid on another, similar pattern. For a pronounced moire interference pattern to appear, the two patterns cannot be identical: they have to be rotated relative to one another or have a slightly different pitch. Overall it is a pretty trippy visual and it messes with typical human perception in a variety of ways.

Moire Effect 3D Printing

Constructing 3D images from 2D images is a difficult problem, and an object that is 3D scanned is vulnerable to the moire effect. In 3D printing, the moire effect shows up as zebra-like stripes on the surface of a print. To prevent this, it is critical to have great image processing at the 2D level. It seems nearly impossible to make a truly perfect 3D image because of the impossibility of creating a perfect 2D image. This is okay, but we are still trying to attain the highest precision possible.

There is a lot of interesting math behind this effect as well. The essence of the moiré effect is the (mainly visual) perception of a distinctly different third pattern which is caused by inexact superimposition of two similar patterns. The mathematical representation of these patterns is not trivially obtained and can seem somewhat arbitrary. In this section we shall give a mathematical example of two parallel patterns whose superimposition forms a moiré pattern, and show one way (of many possible ways) these patterns and the moiré effect can be rendered mathematically.

$$f_1 = \frac{1 + \sin(k_1 x)}{2}, \qquad f_2 = \frac{1 + \sin(k_2 x)}{2}$$

Moire Effect Mathematics

The visibility of these patterns is dependent on the medium or substrate in which they appear, and these may be opaque (as for example on paper) or transparent (as for example in plastic film). For purposes of discussion we shall assume the two primary patterns are each printed in greyscale ink on a white sheet, where the opacity (e.g., shade of grey) of the “printed” part is given by a value between 0 (white) and 1 (black) inclusive, with 1/2 representing neutral grey. Any value less than 0 or greater than 1 using this grey scale is essentially “unprintable”.
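To see where the fringes come from, we can average the two printed patterns. The step below is nothing more than the standard sum-to-product identity applied to the definitions of f1 and f2 above:

$$f_3 = \frac{f_1 + f_2}{2} = \frac{1}{2} + \frac{1}{2}\,\sin\!\left(\frac{(k_1 + k_2)\,x}{2}\right)\cos\!\left(\frac{(k_1 - k_2)\,x}{2}\right)$$

When k1 and k2 are close in value, the cosine factor varies very slowly compared to the sine factor, and that slowly varying envelope is the wide, distinctly different third pattern we perceive as moire fringes.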

Moire Effect Background

When is the Moire Effect most prevalent? A term that is important to understand within metrology is strain measurement. Strain is a measure of the deformation of a body due to a force being applied to it; mathematically, it is the change in dimension of a body when a force is applied. Strain measurement is therefore focused on documenting the changes in dimension that result from applied forces. This is great when we want to measure deformations, but not when we want to remove the possibility of them appearing in a 3D print. There are image scanners that have a descreen filter. These filters typically remove moire-pattern artifacts, which are produced when scanning halftone images to create digital images.

In conclusion, the Moire Effect is an interesting visual effect that occurs within the 2D realm and readily affects the 3D world. Within metrology technology, it is one of the various phenomena that can interfere with a high-precision scan of an object.

 


What is Metrology Part 20 – Processing

Processing

Hey everyone! So this series thus far has been a bunch of fun, and it gets more exciting with what we are doing today. Today I’ll be taking us through a basic tutorial in coding through the framework of the Processing API. I have had experience with this programming language and I believe it is an interesting medium for visualizations of various sorts. It can do awesome generative computerized art, and it can be the source of interesting projects when data and 3D environments are fused. I’ll give an informational overview of the platform as it pertains to 3D manipulation. 

Processing is an interesting platform in that it is, in a sense, a software sketchbook. It is a language for coding specifically within the context of the visual arts. Processing has done a lot to promote software literacy within the visual arts field, and has done similar promotion of visual literacy within the technology sector. Its developers have built a large global community of students, artists, researchers, and hobbyists who use the platform for educational and prototyping purposes.

Processing Tutorials

I personally started messing with Processing when I was in college. I had some skills in Python, mostly through my physics courses, and I was working at the Center of Digital Media within my university. Being around digital media and artistic individuals got me curious about the combination of technical fields and the arts. As I was learning to code a bit more, I found the Processing platform and a large number of YouTube tutorials.

Generative Processing Art

Something of interest to me with the platform is its simple interface. It also is not as intimidating an environment compared to other development spaces. For someone who is interested in things such as image processing, it is an ideal platform for learning quickly. Combining the arts and technology seems like joining disparate worlds to a lot of people. These two fields, however, are extremely similar, and they should not live in vacuums away from each other.

Another great thing about Processing is the large portfolio of onsite tutorials that explain the basics to someone who has no experience with the platform. They did a great job of explaining what every command does within their environment. When learning to code, it is more of a learn as you go approach. When one needs a function, they will have to research online for the meaning of this function and how to execute it. Processing did a good job of centralizing their information through their website and online forum communities. 

P3D Mode in Processing

Processing’s power lies within its five render modes: the default renderer, P2D, P3D, PDF, and SVG. The default renderer is the backbone of a lot of the programs written by Processing users. It is used for 2D drawing and is selected whenever the size() function does not specify another renderer. The P2D renderer is an alternative to the default renderer for 2D images; the difference is that P2D has a quicker runtime, but it sacrifices some visual quality for speed. The P3D renderer is used for drawing in three-dimensional space. The PDF renderer is used for writing PDF files from Processing. The files can be scaled to various sizes and output at high resolutions, and this renderer can also flatten 3D data into a 2D vector file. The SVG renderer does similar tasks to the PDF renderer, but the output file format is SVG. A lot of the rendering power for 3D imaging comes from OpenGL, which is hardware-accelerated on most GPUs and helps speed up the drawing process.

With this overview, I hope I have intrigued people for a couple of coding projects I will try to show off within the series.


What is Metrology Part 18 – Pixelation

3D Pixelation

What is pixelation? I think a lot in terms of photography and images because this is an interesting realm to me. Metrology is a field that allows for measurement through photography. Digital images are subject to errors, and pixelation is part of this equation. The ability to measure images in terms of pixels is critical for interfacing between the physical environment and the digital realm.

Pixelation is caused by displaying a bitmap, or a section of a bitmap, at a scale large enough that individual pixels become visible. A pixel is a single-colored square display element that lies within a bitmap. A bitmap is a representation in which each item corresponds to one or more bits of information, especially the information used to control the display of a computer screen. A bitmap is also a type of memory organization or image file format used to store digital images. The term comes from computer programming terminology, meaning just a map of bits: a spatially mapped array of bits.
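As a quick illustration, here is a minimal sketch using the Pillow library (the same image-processing package used later in this series) that produces pixelation on purpose by shrinking an image and blowing it back up with no interpolation. The file name input.jpg is just a placeholder.

from PIL import Image

img = Image.open("input.jpg")   # placeholder file name
# Shrink to 1/16th of the original size, then scale back up with no smoothing
small = img.resize((img.width // 16, img.height // 16), resample=Image.NEAREST)
pixelated = small.resize(img.size, resample=Image.NEAREST)
pixelated.save("pixelated.jpg")  # the individual pixels are now clearly visible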

Bit Array

A bit refers to a basic unit of information. This information is applicable within information theory, computing, and various digital communications. Within the field of information theory, one bit is usually defined as the information entropy of a binary random variable that is 0 or 1 with equal probability. This can also be called the information that is gained when the value of such a variable becomes known. Information entropy refers to an average rate of production for a stochastic source of data.

Stochastic methods refer to techniques that account for a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation over time. These random variations are usually based on fluctuations observed in historical data, analyzed over a selected period of time using standard time-series techniques.

Stochastic Image Processing

Stochastic methods are applied to metrology as various errors and random events may cause imperfect measurement. This is crucial to correct for natural human error as well as random environmental factors that are in need of consideration.
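One simple way to see why this matters: if each individual reading carries random error, averaging repeated readings shrinks that error roughly as one over the square root of the sample size. The sketch below simulates this with NumPy; the nominal length and noise level are made-up values for illustration.

import numpy as np

rng = np.random.default_rng(0)
true_length = 25.400   # "true" dimension in mm (made up for illustration)
noise_sd = 0.05        # random error of a single reading, in mm (made up)

readings = true_length + rng.normal(0.0, noise_sd, size=100)  # 100 simulated noisy measurements
print("mean of readings:", readings.mean())
print("standard error of the mean:", readings.std(ddof=1) / np.sqrt(readings.size))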

For all of those inclined, be sure to look into these thoughts and apply them to computers and quantum computing if you are interested. But how does this relate back to metrology? Simple answer: with information processing we are able to dissect images. With images, one can measure the environment. Pixelation however shows the conversion process of images from the physical realm to the digital realm. There are various representations of bitmaps and these affect the output of a digital image. These include:

  • 2-bit
  • 4-bit
  • 8-bit
  • 16-bit
  • 32-bit
  • 64-bit
  • 252-bit

Color Depth

Bits within images are directly correlated to colors. For n bits per pixel, the number of representable colors is 2 raised to the power of n: 2-bit refers to 4 colors, 4-bit refers to 16 colors, 8-bit refers to 256 colors, and so on. Being able to accurately represent these color depths from the physical world in the digital world is still difficult.
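A tiny snippet makes the relationship explicit:

# Colors representable at a given bit depth: 2 to the power of the number of bits
for bits in (2, 4, 8, 16, 24):
    print(f"{bits}-bit color depth -> {2 ** bits:,} distinct colors")

This prints 4, 16, 256, 65,536, and 16,777,216 colors respectively.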

Color and pixelation are important and highly challenging items to deal with. Metrology methods need to deal with them if the field wants to expand its measuring ability. In our next article, I will show through code how pixelation arises and why it is vital to 3D imagery.


What is Metrology Part 17: Antialiasing

Antialiasing

We have done a good amount of learning within this series. With each new project and research-oriented article, more knowledge is unraveled. We will be taking a look at antialiasing today, as it was something that caught my attention in a previous article. It affects the accuracy of images as a whole, and we know the importance of precision in metrology.

Antialiasing is a technique used to add greater realism to a digital image by smoothing jagged edges on curved lines and diagonals. This is a computer graphics technique that allows for sharper resolutions for a photo based on precise geometry. Some of the “imperfections” of an image may be distorted or destroyed due to this. I am certain that in order to do processing such as photogrammetry and image stitching, a computer would need exact geometries that can be added together to form a 3D image. This causes the 3D image to have less precision overall in terms of actual dimensions. 

Voltage Reading of an Anti-Alias Filter

An antialias filter refers to any filter used before a signal is sampled; in this case, the sampled output is our digital image. This filter restricts the bandwidth of the signal. If we recall, we have talked about limiting signals before within this series in terms of filters (thresholding). We have defined something similar as a low-pass filter: one that does not allow frequencies above a specific cutoff to pass through. This is what allows for filtering or cleaning of an image.

The goal of antialiasing is to correct images. When certain defects are present, information cannot be correctly read by a device. Antialiasing is particularly useful when a picture has been rasterized and has a jagged appearance as a result. Converting from an analog signal or image in the real world to a digital image causes distortion. This distortion needs to be filtered out, and antialiasing is one method that does so.

There are more forms of antialiasing as well. The main forms include the following:

  • Spatial antialiasing
  • Temporal antialiasing 

Spatial antialiasing

In digital signal processing, spatial antialiasing is a technique for minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Why would we want to do this? When thinking in terms of the 3D world, spatial antialiasing is vital. Most images taken in the real world, if captured properly, will be high resolution. To stitch together high-resolution images, one needs a large amount of storage for the data being stitched. A more practical approach is to convert the image data into lower-resolution images and then stitch them, which requires less data and storage. We can then convert the lower-resolution stitch into higher-resolution 3D models later, after spatial antialiasing methods are used.
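Here is a minimal sketch of that idea using Pillow: downsampling with a plain nearest-neighbor pick is prone to aliasing artifacts, while a proper resampling filter band-limits the image first. The file name photo.jpg is just a placeholder.

from PIL import Image

img = Image.open("photo.jpg")   # placeholder file name
target = (img.width // 4, img.height // 4)

aliased = img.resize(target, resample=Image.NEAREST)   # no filtering: jagged, moire-prone result
filtered = img.resize(target, resample=Image.LANCZOS)  # filters (band-limits) before downsampling

aliased.save("downsampled_aliased.jpg")
filtered.save("downsampled_antialiased.jpg")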

Temporal Antialiasing

Temporal sample antialiasing (TSAA) seeks to reduce or remove the effects of temporal aliasing. Temporal aliasing is caused by the sampling rate (i.e. the number of frames per second) of a scene being too low compared to the speed at which objects inside the scene move; this causes objects to appear to jump or pop between locations instead of giving the impression of moving smoothly. To avoid aliasing artifacts altogether, the sampling rate of a scene must be at least twice as high as that of the fastest moving object. The shutter behavior of the sampling system (typically a camera) strongly influences aliasing, as the overall shape of the exposure over time determines the band-limiting of the system before sampling, an important factor in aliasing. A temporal antialiasing filter can be applied to a camera to achieve better band-limiting. A common example of this is seeing a car wheel appear to spin backwards in a video.
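The wagon-wheel illusion is easy to reproduce with a few lines of arithmetic. In the toy numbers below (made up for illustration), the spoke pattern rotates slightly less than one full turn per frame, so the sampled frames appear to creep backwards.

frame_rate = 24.0   # frames per second (a typical film rate)
spoke_rate = 23.5   # full spoke-pattern rotations per second (made up)

true_step = 360.0 * spoke_rate / frame_rate           # true rotation between frames, in degrees
apparent_step = (true_step + 180.0) % 360.0 - 180.0   # rotation the sampled frames appear to show

print(true_step, apparent_step)   # 352.5 degrees forward per frame looks like 7.5 degrees backward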

There is still more to unpack knowledge wise. The rabbit hole continues to open up. There are different forms of antialiasing methods within the two sections provided today as well. I will be addressing some of these things within the next article.  Hopefully I will show these things off too with code.


What is Metrology Part 16: Introductory Coding

Anaconda

Today we will be looking into the basics of image processing and coding within Python. We will start with 2D images and learn some elementary skills in terms of setup and coding with image processing. With all of the research being done in this metrology series, it will be fun to do some interactive and project-oriented learning that focuses our attention to the different subject matter we have touched so far. Be prepared to deep dive a bit more with me today.

The first step in coding is choosing and setting up one’s development environment. This choice is guided by the language you are using as well as personal preference. I myself have basic scripting skills within Python, so my first inclination for coding is the Python language. This narrows the scope of development tools available to me. I also am coding with the intent of doing image processing work, which further dictates my workflow and environment.

Command Line Example

I decided to develop with the Anaconda environment for Python. The steps for downloading and running Anaconda can be a bit confusing if you do not have previous experience with a command line. A command line is the space to the right of the command prompt on an all-text display mode on a computer monitor (usually a CRT or LCD panel) in which a user enters commands and data. Commands are generally issued by typing them in at the command line and then pressing the ENTER key, which passes them to the shell. For someone completely new to coding, there are various instruction-based tutorials and online resources. I will lay out the process that I used to get my development environment set up:

Download the Anaconda package through here.

When the installer gives you the option to add this to your environment path be sure to do so. It is important for later interactions with your computer’s command line. 

Use the following conda command in your command line when Anaconda is installed:

conda install jupyter 

Once this command is entered, your computer will download and install the Jupyter Notebook package from the web. A Jupyter notebook is where one can place their Python code; it can also be executed and tested within this environment. It is an awesome tool for developing.

Use the following conda command in your command line after completing the previous installation:

conda install pillow

Once this command is entered, your computer will download and install the pillow package from the web. Pillow is a great package for Python because it provides functions that are specific to image processing techniques. Once those installations are done, open a new command line and type in the following command:

jupyter notebook

Jupyter First Glance

This will automatically open a Jupyter notebook environment within one of your browser tabs. From there we are able to start coding and have some fun. There is a button in the upper right-hand corner that says New. Click this and select Python 3 to create a file for developing. The initial file listing should correspond to how your desktop environment is set up in terms of files.

Now that we have all of this set up, please take a look at this online tutorial here. In this tutorial, one should copy and type all of the text exactly as it appears within the posted code. Without exact formatting, various errors may pop up as you run your program. This is the more challenging part of programming. Being able to spot errors and bugs while creating projects is the essence of a proficient programmer. There will also be various items, words, and functions that seem complex. It is important to learn everything that seems foreign if one wants to become an excellent programmer.

Antialiasing Example

I myself had no real understanding of the word antialiasing. It is something I have seen before in my camera settings of a DSLR I use, and I have seen it within programs such as Photoshop, but I really did not understand what it was. Once I saw it in the context of code, I really had to understand what it meant. In the particular code snippet I copied from the tutorial, the goal was to create images that were at a certain size and shape. In order for images to be compressed, antialiasing is an important factor. Antialiasing is a technique used to add greater realism to a digital image by smoothing jagged edges on curved lines and diagonals. This is a computer graphics technique that allows for sharper resolutions for a photo based on precise geometry. Some of the “imperfections” of an image may be distorted or destroyed due to this. I am certain that in order to do processing such as photogrammetry and image stitching, a computer would have to have exact geometries that can be added together to form a 3D image. This causes the 3D image to have less precision overall in terms of actual dimensions. I wonder what is the margin of error for a 3D image when photogrammetry techniques are accounting for antialiasing. 

Lastly, I learned about the alpha channel. Alpha channels are components that represent the degree of transparency (or opacity) of a color, alongside the red, green, and blue channels. They are used to determine how a pixel is rendered when blended with another. This raises the question of how precise metrology and laser scanning devices are in terms of picking up color. These are follow-up questions I will be researching in more depth.
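A minimal Pillow sketch of what the alpha channel does when two layers are blended; the colors and sizes are arbitrary examples.

from PIL import Image

red = Image.new("RGBA", (200, 200), (255, 0, 0, 255))    # fully opaque red square
blue = Image.new("RGBA", (200, 200), (0, 0, 255, 128))   # roughly half-transparent blue square

blended = Image.alpha_composite(red, blue)   # the alpha channel controls how the layers mix
print(blended.getpixel((100, 100)))          # roughly (127, 0, 128, 255): a purple blend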

Overall, this is the first step into the world of image processing. I am excited to continue research as well as build out fun projects that will show off this field a bit more.


What is Metrology Part 15: Inverse Filtering

Signal Processing

Signal processing is the name of the game that must be played in order to do image processing. Image processing is such a fascinating subject that I am excited to expand upon it. It cuts across various fields such as metrology, 3D printing, the biomedical industry, and any industry that uses imaging as its main technology. Today we will take a look at inverse filtering as a specific method within signal processing. Signal processing is a general domain of expertise that can be applied in different settings; for the purposes of where we are in our metrology series, we will focus only on image processing.


Inverse Filtering

Inverse filtering is a method from standard signal processing. For a filter g, an inverse filter h is one where applying g and then h to a signal returns the original signal. Software or electronic inverse filters are often used to compensate for the effect of unwanted environmental filtering of signals. Within inverse filtering there are typically two methodologies or approaches taken: thresholding and iterative methods. The point of this method is essentially to correct an image through a two-way filter method. Hypothetically, if an image is perfect, there will be no visible difference. The filters applied will correct a majority of the errors within an image.

When we know of or have the skill to create a good model for a blurring function of an image, it is best to use inverse filtering. This is because having a model, or let’s say algorithm, allows us to efficiently and succinctly apply mathematical constraints to data in an instantaneous manner. The inverse filter is typically a high pass filter. 
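As an illustration of what having a model of the blurring function buys us, here is a minimal NumPy sketch of frequency-domain inverse filtering. It assumes the blur kernel is known, ignores boundary handling and kernel centering, and uses a small threshold so we never divide by frequencies the blur has wiped out (which is where the thresholding idea below comes in).

import numpy as np

def inverse_filter(blurred, kernel, eps=1e-2):
    # Frequency response of the known blur, padded to the image size
    H = np.fft.fft2(kernel, s=blurred.shape)
    # Threshold: replace near-zero frequencies so the division stays stable
    H = np.where(np.abs(H) < eps, eps, H)
    # Divide the blurred spectrum by the blur's spectrum and transform back
    restored = np.fft.ifft2(np.fft.fft2(blurred) / H)
    return np.real(restored)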

ECG high-pass filter

A high-pass filter (HPF) is an electronic filter that passes signals with a frequency higher than a certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. In physics, attenuation is the continuous loss of flux intensity through an object, where flux is a rate of flow through a surface or substance. For instance, dark glasses attenuate sunlight, lead attenuates X-rays, and water and air attenuate both light and sound at variable attenuation rates. The amount of attenuation for each frequency depends on the filter design. A high-pass filter is usually modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-cut filter. If a component of the signal has a frequency lower than the cutoff frequency, the filter will not allow that feature to appear in the next image transformation. This efficient method is great for low-frequency signals, but the world and image data are not low frequency; the outputs from the world are typically noisy. The linear time-invariant structure of a high-pass filter is needed to constrain the outputs one receives from the universe. When time is added as a variable for a signal, wild things can happen in terms of frequency. To conduct an inverse filter we have two techniques: thresholding and the iterative procedure.

Thresholding

The word threshold can be defined as a level, point, or value above which something is true or will take place and below which it is not or will not. Thresholding in image processing refers to setting a limit on the pixel intensity of an image. This threshold can be thought of in terms of our earlier discussion on filters. The method produces a binary image. The technique is usually applied to grayscale images, but it can be applied to color images as well. We are able to dictate the intensity level at which we want our transformed image set. Pixels below this value are converted to black (the value of zero in binary code), and pixels above the threshold value are converted to white (the value of one in binary code).
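A minimal Pillow sketch of exactly that operation; the file name scan.png and the cutoff of 128 are placeholder choices.

from PIL import Image

img = Image.open("scan.png").convert("L")   # placeholder file; "L" converts to 8-bit greyscale
threshold = 128                             # intensity cutoff on the 0-255 scale (arbitrary choice)

# Above the cutoff -> white (255), at or below -> black (0)
binary = img.point(lambda p: 255 if p > threshold else 0)
binary.save("scan_binary.png")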

The iterative method within inverse filtering is more of a mathematical guess-and-check solution. The goal is to guess what the original image was. With each mathematical guess, a user is able to build a better-fitting model of the digital image. This is more of a brute-force approach and is not as efficient as the thresholding method, but it has the advantage of better stability when dealing with noise. We also do not need to be time invariant when dealing with this method.
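The article does not name a particular iterative scheme, so as one illustration here is a Van Cittert-style iteration in NumPy/SciPy: re-blur the current guess with the known kernel, compare it against the observed image, and nudge the guess by the residual.

import numpy as np
from scipy.signal import fftconvolve

def iterative_deblur(blurred, kernel, iterations=25, beta=0.5):
    estimate = blurred.copy()   # first guess: the blurred image itself
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, kernel, mode="same")   # what this guess looks like blurred
        estimate = estimate + beta * (blurred - reblurred)       # correct the guess by the residual
    return estimate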

Overall, this is only one of the many examples of image processing techniques. As a follow up to this article, I will do some interactive code and I’ll showcase some of the power of these methods when we are taking a look at these problems through the lens of computer science and engineering.


What is Metrology Part 14: Image Restoration

Art Restoration and 3D Printing

Through this metrology series, I hope readers are making this realization: We as humans have faulty perception, and we try to understand our world as precisely as we can. The tools we use to measure items within our world are prone to error, but they do the best they can to reveal the essence of reality. The common adage is that a picture says a thousand words. If one has a blurry or weak picture, the words said are mostly confusing. Devices that take images can be used for metrology purposes as we have discussed earlier. The data that we capture in forms of images is necessary for high resolution and precise metrology methods. How do we make sure that images are giving us words of clarity vs. confusion?

Image restoration is the operation of taking a corrupt or noisy image and estimating the clean, original image. Corruption may come in many forms such as motion blur, noise, and camera misfocus. Image restoration is different from image enhancement in that the latter is designed to emphasize features of the image that make the image more pleasing to the observer, but not necessarily to produce realistic data from a scientific point of view.

Certain industries are heavily reliant on imaging. An example of the interdisciplinary nature of imaging and metrology is found in the medical sector. Biomedical imaging techniques need to be extremely precise for accurate measurements of internal organs and structures. Size, dimensionality, and volume are items that need high precision due to their effect on human life. Without proper images of these items, doctors and physicians would have a difficult time giving proper diagnoses. Another important caveat to remember is the ability to replicate these structures through the use of 3D printing. Without accurately measured dimensions from 2D images, there would be a lack of precision within larger 3D models based on these 2D images. We have talked about image stitching and 3D reconstruction previously. This is especially important within the medical field in the creation of 3D printed phantoms.

One can apply the same concept and thought process to the automotive industry. The automotive industry is all about standardization and replicability. There needs to be a semi-autonomous workflow ingrained within the production line of a vehicle. 3D scans are taken of larger parts that have been fabricated. With these original scans, replicability within production is possible. There still lies a problem of precision within an image. There are a lot of variables that may cause a 3D scan to be unreliable. These issues include reflective or shiny objects, objects with contoured surfaces, soft surfaced objects, varying light color, opaque surfaces, as well as matte finishes or objects. It is obligatory that a 3D scan is done in an environment with great lighting. With all of these issues, image restoration is essential with any scan because it is nearly impossible to have a perfect image or scan. Within the automotive industry, the previous problems are very apparent when scanning the surface of an automotive part. 

There are four major methods of image restoration that I will highlight here and expand upon within further articles.

Inverse Filtering

Inverse filtering is a method from signal processing. For a filter g, an inverse filter h is one where applying g and then h to a signal returns the original signal. Software or electronic inverse filters are often used to compensate for the effect of unwanted environmental filtering of signals. Within inverse filtering there are typically two methodologies or approaches taken: thresholding and iterative methods. The point of this method is essentially to correct an image through a two-way filter method. Hypothetically, if an image is perfect, there will be no visible difference; the filters applied will, though, correct any errors within an image.

Wiener Filter

In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or targeted random process through linear time-invariant filtering of an observed noisy process, assuming certain conditions are constant such as known stationary signal and noise spectra, and additive noise. This is a method that is focused on statistical filtering. This necessitates time-invariance because adding time into this process will ultimately cause a lot of errors. 
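SciPy ships a simple local-statistics variant of this filter (scipy.signal.wiener), which is enough for a quick sketch; the file name and window size below are placeholder choices.

import numpy as np
from scipy.signal import wiener

noisy = np.load("noisy_scan.npy")     # placeholder: a 2D greyscale image stored as a NumPy array
denoised = wiener(noisy, mysize=5)    # estimate the clean image from local 5x5 statistics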

Wavelet-based image restoration

Wavelet-based image restoration applies mathematical methods that allow an image and its data to be compressed. With this compression, processing and manipulating the image becomes more manageable. Transient signals are best suited to this type of method. A transient signal refers to a short-lived signal. The source of the transient energy may be an internal event or a nearby event; the energy then couples to other parts of the system, typically appearing as a short burst of oscillation. This is seen in our readily available ability to capture a picture or image within a specific time frame.
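The article does not tie this to a specific toolkit; as one common illustration, the PyWavelets package can decompose an image, shrink the small detail coefficients, and rebuild it. The file name, wavelet choice, and threshold value below are placeholder assumptions.

import numpy as np
import pywt

image = np.load("noisy_scan.npy")               # placeholder: 2D greyscale image as a NumPy array
coeffs = pywt.wavedec2(image, "db2", level=3)   # multi-level 2D wavelet decomposition
threshold = 10.0                                # shrinkage strength (arbitrary choice)

# Keep the coarse approximation, soft-threshold every detail sub-band
denoised = [coeffs[0]] + [
    tuple(pywt.threshold(c, value=threshold, mode="soft") for c in detail)
    for detail in coeffs[1:]
]
restored = pywt.waverec2(denoised, "db2")       # rebuild the image from the shrunk coefficients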


Blind Deconvolution

Blind deconvolution is a technique that permits recovery of the target scene from a single image or a set of “blurred” images in the presence of a poorly determined or unknown point spread function (PSF). The point spread function describes the response of an imaging system to a point source or point object. A more general term for the PSF is a system’s impulse response, the PSF being the impulse response of a focused optical system. Regular linear and non-linear deconvolution techniques utilize a known PSF; for blind deconvolution, the PSF is estimated from the image or image set, which then allows the deconvolution to be performed.

We will be taking a deeper dive into this subject matter soon. As one can tell, there lies a vast amount of information and interesting technology and knowledge to be further understood. Through writing and experimentation with code, hopefully, I can show these things as well. 


What is Metrology Part 13: Object Recognition

3D Perception

We as humans have a faulty perception of the physical environment we live in. Although we are able to distinguish 2D items and 3D items, we do not have the ability to measure them in real time with numeric values; we need outside devices to assist us. We have discussed these topics at length within our metrology series, but today we will look specifically at a subsection of knowledge within this field and computer vision. With computer-oriented object recognition, humans are attempting to make the world more precise through the lens of a computer. There are a variety of things that get in the way of precise object recognition.

Object recognition is defined as technology in the field of computer vision for finding and identifying objects in an image or video sequence. Humans have the ability to recognize objects with minimal effort, even though an image varies across different viewpoints. The image also varies when it is translated, scaled, and rotated. People are able to recognize images even when they are somewhat incomplete and missing critical information due to an obstructed view. Humans use the power of gestalt psychology to do so. Gestalt is a German term interpreted in psychology as a “pattern” or “configuration”.

Gestalt in Practice

Gestalt is based on understanding and perceiving the whole of an object rather than just its components. This view of psychology was created in opposition to the belief that scientific understanding comes from reducing perception to its most basic constituent details.

The ability for a computer to recognize parts and synthesize them into a larger body object is the main source of error within computer vision and object recognition. This task is extremely challenging for computer vision systems. One must understand that computers have immense capabilities in logically describing constituents or smaller parts, but adding them together consistently to form the basis of a larger item is still difficult. This is personally why I am not too worried about a robot takeover anytime soon. Many approaches to the task have been implemented over multiple decades.

MATLAB and object detection/recognition

For a computer to do sufficient object recognition, there needs to be a ton of precision in identifying constituent parts. To do this, a computer relies on a vast amount of point cloud data. A point cloud is defined as a set of data points in space; point clouds are usually produced by 3D scanners. With this point cloud data, metrology measurements and 3D builds can be created. An object can be recognized by using point cloud data to create a mesh. We as humans are able to interpret that mesh within our 3D realm; computers, however, are not that great at such interpretation. They just give us great and precise data to work with. It is important to note that computers are okay at object detection, which refers to being able to pick out a part or object within a larger scene. But when we place multiple parts into a scene, or an item with a complex geometry, things become difficult for a computer to decipher. Hence we mainly use 3D scanners to grab point cloud data, not to process what a 3D object is.
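To show the "precise data to work with" side of this, here is a minimal NumPy sketch that takes a point cloud (random fake xyz coordinates standing in for real scanner output) and extracts a basic measurement from it, the bounding-box dimensions.

import numpy as np

# Fake point cloud: 10,000 xyz points in millimetres, standing in for real 3D scanner output
points = np.random.default_rng(1).uniform(0.0, 50.0, size=(10_000, 3))

mins, maxs = points.min(axis=0), points.max(axis=0)
print("bounding-box dimensions (mm):", maxs - mins)   # extents along x, y, and z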

Currently in terms of object recognition, computers can barely recognize larger scale items within a 2D setting. It will take a long time for computers to have the graphic capabilities to even decipher what an object would be in a 3D environment. For example, MATLAB is a powerful coding software used for large scale data processing, but computers require a large amount of machine learning and deep learning techniques to process 2D images. First these systems need to do this at a rate of 99.9% confidence before one can move on to 3D images. Humans are not necessarily 100% accurate in terms of processing images either, but they are still slightly more consistent than computer vision techniques. Overall I am interested in learning how to develop such technologies, and I wonder who are the people and organizations wrestling with these problems daily.
