Research and Design of Machine Vision-Based Impurity Detection System for Beverage Filling and Packaging


Food safety has gained increasing attention from both the government and consumers, making beverage quality inspection correspondingly more important. Despite the automation of filling lines, many enterprises still rely on manual detection methods that cannot meet the requirements of high-speed production lines. This has become a bottleneck in developing automated production lines for canned beverages in China. To overcome this challenge, online post-filling detection equipment is needed to improve the overall detection level.

Machine vision has demonstrated its superiority in many inspection processes through its fast and accurate detection. This project therefore researches a method that uses machine vision to automatically detect tiny and dynamic impurities in beverages after filling, meeting the detection requirements of the current production line.

The specific research work of this project includes:

  • Determining the image acquisition scheme based on the characteristics and selection criteria of the image acquisition equipment
  • Designing a machine vision-based method for detecting tiny and dynamic impurities in beverage filling, based on the actual process of the beverage production line and the basic detection requirements of the inspection system
  • Selecting a general software development tool and completing the algorithm design in C/C++
  • Determining the system’s hardware structure and giving the hardware structure diagram
  • Determining the algorithms of the impurity detection system, including the localization algorithm for the region of interest, preprocessing algorithms, the image segmentation algorithm, connected domain analysis and correction, connected domain feature extraction, connected domain feature matching, and the algorithms for generating target motion trajectories and determining target properties
  • Experimentally verifying the effectiveness of the algorithms under existing conditions.

The system’s scalability was discussed based on its characteristics, and the expansion to a liquid level detection function was introduced; the liquid level detection algorithm was likewise implemented in C/C++. Experiments verified that the impurity detection algorithm structure proposed in this project is reasonable and achieves the expected detection function.

Abstract

In recent years, food safety issues have received increasing attention from both the government and consumers, and the quality inspection of beverages has become increasingly important to beverage manufacturers. As the automation level of filling lines improves, the manual detection methods still adopted by many enterprises can no longer meet the detection requirements of high-speed production lines. The low level of automation in post-filling detection in particular is a bottleneck in the development of automated production lines for canned beverages in China, and online post-filling detection equipment is urgently needed to improve the overall detection level. With its fast and accurate detection, machine vision technology has demonstrated its superiority in many inspection processes. The purpose of this project is to research a method that uses machine vision to automatically detect impurities in beverages after filling, in order to meet the detection requirements of the current production line.

Based on the actual process of the beverage production line and the basic detection requirements of the inspection system, a machine vision-based method for detecting impurities in beverage filling is designed. The specific research work of this project is as follows:

The image acquisition scheme is determined by first introducing the characteristics and selection criteria of image acquisition equipment and then selecting appropriate equipment based on the characteristics of the detection target, including the selection of light sources and lighting methods, cameras and lenses, and image processing equipment.

Based on the current manual detection process, the main workflow of the proposed system is determined. Then, by comparing currently popular image processing solutions, a general software development tool is selected, and the algorithm design is completed in C/C++. Finally, the structure of the image processing algorithm is analyzed, its time complexity is estimated, and commonly used parallel processing solutions are compared. A method of using multiple processors working together to implement the system’s image processing algorithm is determined, each processor’s processing content is allocated, and the information exchanged between the processors is specified.

Based on the functional requirements of the online inspection system and the design of the image processing algorithm structure, the hardware structure of the system is determined, the hardware structure diagram is given, and the connection method between the image acquisition and processing devices is determined.

Based on the characteristics of the target images obtained, the algorithms of the impurity detection system are determined, mainly including the localization algorithm for the region of interest; preprocessing algorithms such as background suppression, image filtering, and image enhancement; the image segmentation algorithm; the connected domain analysis and correction algorithm; the connected domain feature extraction algorithm; the connected domain feature matching algorithm; and the algorithms for generating target motion trajectories and determining target properties. The code for the above algorithms is written in C/C++ in the Visual C++ 6.0 programming environment.

The scalability of the system was discussed based on its characteristics, and the expansion to a liquid level detection function was introduced. After analyzing the image characteristics, the algorithm flow of the liquid level detection function was determined, and the liquid level detection algorithm was implemented in C/C++ on top of the impurity detection system for beverage filling.

The effectiveness of the algorithm functions was experimentally verified under the existing conditions, demonstrating that the impurity detection algorithm structure proposed in this project is reasonable and achieves the expected detection function of the system.

Keywords: machine vision; image acquisition; image processing; feature extraction; feature matching.

Chapter 1: Introduction

1.1 Research Background

In recent years, with the development of the economy and the increase in personal income and consumption levels, the production and sales volume of beverages in China has continued to grow. In 2010, the production volume of baijiu alone was 8.91 million kiloliters, a year-on-year increase of 27%, 3 percentage points higher than in 2009, and baijiu production in China was expected to maintain rapid growth in 2011. Food safety issues have accompanied the development of the beverage industry. Many countries treat food safety as a matter of national public safety and continue to increase regulatory efforts. In China, especially after the “Sanlu milk powder incident,” food safety has attracted nationwide attention. Beverage quality testing is therefore an important link in the beverage production process.

Food safety depends on both the safety of the food itself and the safety of its packaging. Many liquid foods, such as beer, baijiu, honey, soy sauce, and vinegar, are mostly packaged in glass bottles. During filling, visible impurities such as glass fragments, suspended solids, and hair may appear in the liquid due to insufficient bottle washing, collisions during filling, or poor filtration [1,2]. These impurities seriously affect beverage quality and pose hidden dangers to consumer health, so impurity detection after filling is crucial to ensuring product credibility and consumer safety. In addition, certain medicines, such as intravenous infusion liquids, are also packaged in bottles and face the same post-filling quality and safety inspection issues.

Currently, most beverage production lines in China have no post-filling detection equipment and rely on manual inspection, which is costly, slow, fatiguing, and unreliable. Detection technology is especially backward in the baijiu industry, where most production lines urgently need post-filling detection equipment to improve production efficiency and reduce costs, creating a huge market demand for post-filling impurity detection. As part of the online inspection equipment for bottled beverage filling lines, impurity detection equipment can meet the practical needs of Chinese beverage producers to improve product quality and fill the gap in domestic online automatic inspection equipment for beverage filling.

Figure 1-1: Traditional manual light inspection

In recent years, with the development of computer and information technology, online automatic detection technology has also developed rapidly. Machine vision, as a new technology, has been widely used in online automatic detection on many occasions, and many mature products have appeared on the market. Machine vision is particularly suitable for measurement, inspection, and recognition in large-scale, high-speed production processes. It has been widely used in industries such as automobile manufacturing, machining, packaging, electrical and electronic manufacturing, plastics processing, printing, and pharmaceutical and food production. Many machine vision-based automatic inspection systems have also been applied on beverage production lines, such as empty bottle inspection systems.

Table 1-1: Comparison of manual light inspection and machine vision inspection

As Table 1-1 shows, machine vision inspection has significant advantages over manual light inspection. In other countries, machine vision inspection technology has already matured; although China started later, it has developed rapidly in recent years, with many companies and universities conducting related research and launching their own products. With time and accumulated technology, machine vision inspection will be applied more widely in China’s automated production lines.

The beverage industry has grown rapidly in recent years, and so has the importance of beverage quality inspection. However, many enterprises still rely on manual detection methods that cannot meet the requirements of high-speed production lines, creating a bottleneck in automated production lines for canned beverages in China. A set of online post-filling detection equipment is urgently needed to improve overall detection levels, and machine vision has been proposed as the solution. This project aims to use machine vision to detect impurities in beverages after filling. The work involves designing a machine vision-based impurity detection method, selecting appropriate equipment, determining the main workflow of the system, analyzing the structure of the image processing algorithm, and writing the algorithm code. The scalability of the proposed system is also discussed, and the effectiveness of the algorithms is experimentally verified, demonstrating that the design is a reasonable solution that achieves the expected detection function.

Revolutionizing Beverage Safety: Our Latest System for Detecting Impurities in Bottled Beverages

1.   Overall Requirements: Workflow of the Machine Vision System for Online Impurity Detection

2.   Real-time Image Processing Algorithm: Multi-Processor Collaboration and Hardware Structure

3.   Advanced Image Segmentation: Background Suppression, Dual-Threshold Canny Edge Detection, and an Improved Maximum Between-Class Variance Method

4.   Efficient Feature Extraction: Connected Domain Labeling, External Feature Values, Boundary Chain Code Extraction, and Normalization

5.   Accurate Feature Matching: Matching Coefficient, Preemptive Matching Algorithm, and Trajectory Analysis

6.   Expanded Functionality: Liquid Level Detection and Its Implementation

7.   Ongoing Development: Mechanical Flipping Device Design, Detection Peripheral Module Design, Improved Image Processing Algorithms, and Anti-Interference Design

1.2 The development status of impurity detection systems after beverage filling

Impurity detection after filling mainly concerns the pharmaceutical and beverage industries. Currently, impurity detection in the pharmaceutical industry still mostly uses chromatographic and visible light detection methods [4]. Among manufacturers of visual inspection equipment, companies such as Seidenader in Germany, Brevetti in Italy, Bosch in the United States, and Eisai in Japan offer products with prominent performance, having developed visual inspection equipment for filled liquids [5,6]. These devices are already used on infusion production lines and operate in a spin-and-abrupt-stop mode: the bottle is spun and then stopped so that the liquid keeps rotating, and motion analysis distinguishes foreign objects moving in the liquid from static marks on the bottle. Their detection speed can exceed 18,000 bottles/hour. However, because this foreign equipment demands high bottle quality, only specific bottle types can be tested; prices and maintenance costs are high, and integrating the equipment with existing production lines takes time and effort. These limitations have so far prevented their adoption in China. Domestic research started late, and there are still very few research results on the visual detection of foreign particles.

Most impurity detection in the food and beverage industry uses X-ray or visible light detection. For example, the HEUFT eXaminer XA foreign body detector from the German company HEUFT uses X-ray inspection and is fast, but X-rays can harm the human body, and the detection effect depends on the density of the inspected object, so its applicability is relatively narrow. Visible light detection follows the principle of manual light inspection, with a computer replacing the human brain in analyzing visible foreign objects; it is safe, reliable, and harmless. The foreign companies mentioned above also offer related products, but because packaging bottles differ greatly, domestic liquor companies have not adopted them. Some research on visible light detection methods has been carried out, for example at Hunan University [9], but large-scale adoption has not yet been realized.

Currently, most domestic production lines use manual light inspection, which has disadvantages such as low accuracy, slow speed, poor repeatability, and a tendency to cause secondary contamination.

Figure 1-2: HEUFT eXaminer XA

1.3 Research content of the paper

With the goals of building independent intellectual property, reducing research and development costs, and developing competitive equipment, this project uses a PC as the hardware foundation of the vision system and Visual C++ 6.0 as the software development platform. It completes the hardware design of the machine vision system and the overall system architecture, and develops a set of image processing algorithms for detecting impurities in bottled beverages, with all related algorithm code written in C/C++. The work of this project mainly includes the following aspects:

(1) Construction of the machine vision hardware system

Based on the characteristics of the system, the hardware construction is completed, including the design of the image acquisition unit, the hardware structure of the image processing unit, and the design of the data transmission system.

(2) Segmentation algorithm for visible objects and background

Bottled beverages can contain many types of visible foreign objects with various shapes. In baijiu, for example, impurities mainly include black debris from incomplete bottle washing or filter equipment failure, flies and insects due to poor pest control, fibers from workers’ hair and clothing, smoke-like suspended particles from poor-quality bottle firing, and glass debris from bottle collisions during washing or from bottle mouths crushed during capping before quality inspection. The segmentation algorithm for visible objects and background must be accurate, sensitive, and robust to noise in order to correctly separate all visible objects. Based on the features of the acquired images, this project segments the image using a combination of edge detection and an improved maximum between-class variance (Otsu) method.
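For illustration, here is a minimal sketch of this combined strategy using OpenCV, an open-source library mentioned later in this document; the thesis’s own implementation is written from scratch in C/C++, and the blur size and Canny thresholds below are illustrative assumptions, not the thesis’s parameters.

```cpp
// Minimal sketch: segment visible objects in a backlit gray image by combining
// dual-threshold Canny edge detection with Otsu (maximum between-class variance)
// thresholding, then taking the union of the two masks. Parameters illustrative.
#include <opencv2/opencv.hpp>

cv::Mat segmentVisibleObjects(const cv::Mat& gray) {
    cv::Mat blurred, edges, binary, combined;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.0);    // suppress sensor noise
    cv::Canny(blurred, edges, 40, 120);                      // dual-threshold edge map
    cv::threshold(blurred, binary, 0, 255,
                  cv::THRESH_BINARY_INV | cv::THRESH_OTSU);  // dark impurities on bright field
    cv::bitwise_or(edges, binary, combined);                 // union of both cues
    return combined;
}
```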

(3) Extraction of visible object features

The system processes continuous frames. To analyze the motion properties and external features of visible objects, features must be extracted from the connected domain that each visible object forms in the binary image of each frame, invariant to position, rotation, and flipping. Connected domain feature extraction is one of the difficult problems in this research. This project extracts the area, perimeter, and density of connected domains after labeling them, then extracts the chain code of each domain’s contour using the proposed chain code extraction method, completing the design of the chain code’s independence and normalization.
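As a rough illustration of the feature extraction step, the sketch below labels domains via their contours and computes area, perimeter, and a compactness measure (one common way to quantify the “density” mentioned above). The struct name and the use of OpenCV are assumptions; the thesis implements its own labeling and chain-code extraction.

```cpp
// Minimal sketch: find connected domains in a binary mask and compute per-domain
// area, perimeter, and compactness. CHAIN_APPROX_NONE keeps the full boundary,
// which is the same information a boundary chain code encodes.
#include <opencv2/opencv.hpp>
#include <vector>

struct DomainFeatures { double area, perimeter, compactness; };

std::vector<DomainFeatures> extractFeatures(const cv::Mat& binary) {
    cv::Mat work = binary.clone();  // findContours may modify its input on old OpenCV
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    std::vector<DomainFeatures> feats;
    for (const auto& c : contours) {
        double area  = cv::contourArea(c);
        double perim = cv::arcLength(c, true);
        if (perim <= 0.0) continue;
        // 4*pi*area/perimeter^2 is 1 for a circle, smaller for elongated shapes
        feats.push_back({area, perim, 4.0 * CV_PI * area / (perim * perim)});
    }
    return feats;
}
```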

(4) Trajectory generation and analysis of visible objects in the video stream

To complete the motion analysis of visible objects, this project uses a preemptive matching algorithm based on each visible object’s feature values to achieve the best match of visible objects across frames, generating each object’s motion trajectory over the continuous frames. The nature of each visible object can then be determined from its motion trajectory and external features.
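The following self-contained sketch shows one plausible form of such a preemptive (greedy, best-match-first) matching step between two consecutive frames; the feature set and distance weights are illustrative assumptions, not the thesis’s matching coefficient.

```cpp
// Minimal sketch of preemptive frame-to-frame matching: sort all candidate
// pairs by feature distance, then let the best pairs claim each other first.
#include <algorithm>
#include <cmath>
#include <vector>

struct Object { double x, y, area, compactness; };

static double featureDistance(const Object& a, const Object& b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    // Illustrative weights combining position, area, and shape differences.
    return std::sqrt(dx * dx + dy * dy)
         + 0.5  * std::fabs(a.area - b.area)
         + 10.0 * std::fabs(a.compactness - b.compactness);
}

// match[i] is the index in `next` matched to cur[i], or -1 if unmatched.
std::vector<int> matchObjects(const std::vector<Object>& cur,
                              const std::vector<Object>& next) {
    struct Pair { double d; int i, j; };
    std::vector<Pair> pairs;
    for (int i = 0; i < (int)cur.size(); ++i)
        for (int j = 0; j < (int)next.size(); ++j)
            pairs.push_back({featureDistance(cur[i], next[j]), i, j});
    std::sort(pairs.begin(), pairs.end(),
              [](const Pair& a, const Pair& b) { return a.d < b.d; });
    std::vector<int>  match(cur.size(), -1);
    std::vector<char> taken(next.size(), 0);
    for (const Pair& p : pairs)
        if (match[p.i] == -1 && !taken[p.j]) { match[p.i] = p.j; taken[p.j] = 1; }
    return match;
}
```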

(5) Real-time optimization algorithm

The entire algorithm is computationally expensive and takes a long time to run. To achieve real-time detection in production, the operational rules of the algorithm must be analyzed carefully, the overall process optimized, and the system’s processor resources allocated judiciously to maximize their utilization.

(6) Function expansion of other detection items

This system constructs a beverage-filling detection platform. Based on the existing image acquisition system, and without affecting normal operation, more detection items can be added, such as liquid level detection and bottle type detection. The paper describes the functional expansion of the liquid level detection module and introduces the liquid level detection algorithm in detail.

Chapter 2: Hardware Selection for the Machine Vision System

This system adopts machine vision detection technology, achieving detection by processing and analyzing the collected image information. Image acquisition is therefore the foundation of all subsequent processing and analysis, and choosing good image acquisition equipment and acquisition schemes is a prerequisite for the system’s overall software and hardware design.

2.1 Overview of Machine Vision Technology

In daily production activities, human vision is responsible for tasks such as inspecting appearance and dimensions and determining product consistency [10]. In recent years, with continuously rising production speeds and product quality requirements, the speed and accuracy required for product inspection have exceeded the capability of the human eye. With the development of image sensors, image processors, digital image processing algorithms, and other technologies, machine vision systems are taking over more inspection tasks.

Machine vision is an important branch of computer science that integrates disciplines such as mechanics, optics, electronics, and computer software and hardware, and involves fields such as image processing, artificial intelligence, pattern recognition, and optoelectronic integration. Since its inception in the 1960s, it has developed for more than 40 years, and its functions and application scope have gradually improved and spread with the development of industrial automation.

In online inspection, machine vision replaces humans in making various inspections and judgments. Traditional manual inspection is prone to bias and error due to fatigue and emotion; computers have no such issues. In large-scale industrial production, machine vision can greatly improve production efficiency and the level of automation. Compared with manual inspection, machine vision has the following advantages:

(1) High accuracy: using visual inspection to measure product dimensions, detection precision can reach the sub-pixel level;

(2) Fast speed: a machine vision system can acquire and inspect product images in a very short time, overcoming the speed limits of manual inspection;

(3) Non-contact: it causes no damage to the product and can adapt to harsh environments;

(4) Wide spectral response range: it overcomes the spectral limits of the human eye, for example using infrared light invisible to humans for measurement, expanding the visual range;

(5) High cost-effectiveness: in the long run, a machine vision system saves the enterprise money;

(6) Good flexibility: vision inspection hardware is similar across systems, and detection functions can be expanded through simple software extensions.

Due to the aforementioned advantages, the application field of machine vision has become increasingly wide-ranging. In recent years, it has been applied in industries such as manufacturing, agriculture, transportation, defense, healthcare, finance, and more. Machine vision has been deeply integrated into various aspects of our production, life, and work.

To design a good machine vision system, the first consideration is image acquisition. The core of machine vision processing is the captured image, and high-quality images greatly simplify the system’s algorithms. If the image acquisition unit is flawed, it often brings great difficulty to the system design and can even cause the entire project to fail. The design of this system’s image acquisition unit is introduced below.

2.2 Light source and illumination

1.   Selection criteria for light sources

Light sources and illumination provide a suitable imaging environment for the image acquisition of a machine vision system and are an important component of the complete system. Several factors should be considered when choosing a light source:

(1) Contrast: The primary purpose of using a light source in machine vision applications is to produce maximum contrast between foreground and background, making extracting and analyzing foreground features easier.

(2) Intensity: The illumination intensity affects the camera’s exposure. Insufficient light means low image contrast, and boosting contrast in software amplifies noise, while opening the aperture instead reduces the depth of field. Excessive light wastes energy and causes heating, shortening the light source’s lifespan.

(3) Uniformity: The uniformity of illumination is an important measure of light source quality. Non-uniform illumination complicates image processing, while uniform illumination keeps the system working stably and facilitates image processing and analysis.

(4) Spectral characteristics: Light sources of different colors illuminating the same object often produce completely different images. Light source color is chosen using the optical rule that illuminating an object with light of a similar color makes it appear brighter, while light of a complementary color makes it appear darker.

(5) Maintainability: Maintainability refers to whether the light source is convenient to install and replace.

(6) Lifespan and heat generation: The brightness of a light source gradually diminishes over its working life, affecting system stability, shortening maintenance cycles, and increasing maintenance costs. Selecting a long-life light source extends the system’s maintenance period. Light sources that generate more heat generally dim faster, reducing their lifespan.

2.   Illumination methods

In addition to selecting a light source, selecting an appropriate illumination method is also an important part of obtaining high-quality images. Currently, the following illumination methods are commonly used in applications:

(1) Direct lighting: The light source emits a narrow-angle beam with concentrated rays, suitable for detecting targets with diffusely reflective surfaces. However, when the target is a mirror-like reflective material or has water droplets on its surface, bright spots may appear in the image, and uniformity is poor.

(2) Dark field lighting: The light is projected onto the object surface at a low angle, and only obliquely scattered light enters the camera, which is useful for highlighting textures and other high-angle features.

(3) Backlighting: The light source forms a uniform field after diffusion; the light passes through the object from behind and enters the camera, yielding clear edge contours. This method is suitable for measuring object dimensions and inspecting transparent objects.

(4) Diffuse lighting: A layer of diffusing material is placed in front of a direct light source, providing omnidirectional, soft light suitable for illuminating mirror-like reflective materials.

(5) Coaxial lighting: A beam splitter directs the light from a vertically mounted planar source downward, and the camera images the object from above through the splitter. This method suits highly reflective objects, as well as objects whose features are obscured by the surrounding environment.

Direct lighting | Dark field lighting | Backlighting | Diffuse lighting | Coaxial lighting

Figure 2-1: Schematic diagram of the common lighting methods

The target detected by this system is impurities in beverages. The photographed object is a filled but unlabeled beverage bottle containing a transparent liquid, i.e., a translucent object, so the backlighting method is chosen. A flat backlight with a diffuse reflection plate is placed on one side of the object under test; the plate diffuses the light emitted by the LEDs into a uniformly bright emitting surface. The light passes through the object, making the edges of the bottle and any impurities in the liquid clear, and the camera captures the image from the other side. Red is selected as the light source color, as shown in Figure 2-2.

Figure 2-2: The lighting scheme of this system

2.3 Cameras and Lenses

Cameras and lenses are the imaging devices of a machine vision system, the equivalent of human eyes. Choosing appropriate cameras and lenses is essential to obtaining high-quality images.

2.3.1 Classification and Selection of Cameras

The function of a camera is to convert light signals into ordered electrical signals. When selecting a camera, the factors to consider include not only resolution and signal-to-noise ratio but also whether the camera’s operating mode and data transmission mode suit the system. Cameras can be classified in several ways, as outlined below:

(1) According to the imaging sensor, cameras can be divided into CCD and CMOS cameras. Due to differences in production processes, CMOS cameras still cannot match CCD cameras in signal-to-noise ratio and sensitivity and have not yet reached industrial application standards. Therefore, based on the practical requirements of this project, a CCD camera was selected.

(2) Cameras can be classified by output signal as digital or analog. Digital cameras have an A/D conversion module and output a digital signal that computers can read directly; analog cameras output continuous analog signals, so the computer needs an image acquisition card with A/D conversion capability to obtain the image information. A digital camera was selected for this system.

(3) Cameras can be classified as line-scan or area-scan cameras according to their sensor structure characteristics. Compared with area-scan cameras, line-scan cameras have only one row of photosensitive elements with high scanning frequency and high resolution. They are generally used for detecting continuous materials or for extremely high-resolution situations. For this system, an area-scan camera was selected.

(4) Cameras can be classified by output color as color or monochrome. Color cameras carry more signal information but generally have lower resolution than monochrome cameras; monochrome cameras used in industrial measurement offer high accuracy and fast processing. Experiments showed that black-and-white images fully capture the characteristics of the targets detected by this system, so a monochrome camera was selected.

After selecting according to the above categories, many parameters need to be considered, mainly including the following points:

(1) Camera resolution: The camera’s resolution directly affects the system’s detection precision and is measured in pixels. Based on the required detection precision, detection rate, and field of view, this system uses a camera with a resolution of about 300,000 pixels and an output image size of 640×480.

(2) Speed requirements: The system’s detection speed depends on the total running time, including image acquisition, processing, data transfer, and rejection control. For example, on a production line producing 12,000 bottles per hour, the system must complete the inspection of one product within 300 ms. A camera with an asynchronous snapshot function can terminate the current scan and start a new one at any time, and a camera with an adjustable shutter can reach speeds up to 1/10,000 s under suitable lighting. This system therefore uses a camera with an asynchronous snapshot function and an adjustable shutter speed.

(3) External trigger function: A camera that supports external triggering captures an image immediately upon receiving a trigger signal. During operation, this system must take a picture exactly when the product to be inspected reaches the lens position, but the product’s arrival time cannot be predicted, so the camera must support external triggering.

(4) CCD target size: The size of the image sensor directly affects imaging quality. Cameras with smaller sensors often produce blurred images that cannot fully reflect edge features. This system uses a camera with a 1/3-inch CCD.

Based on the above criteria, we ultimately chose the FL2-03S2M-C CCD camera from Point Grey (Canada) to perform this system’s image acquisition.

2.3.2 Lens selection

The lens projects the image of an object onto the camera’s image sensor, similar to the lens of the human eye. Lens selection has always been a key factor affecting image quality. Lenses fall into many categories, as shown in Table 2-1:

Table 2-1: Classification of lenses

The main lens parameters include focal length, depth of field, angle of view, and compatible CCD chip size. The focal length (f) is the key parameter when selecting a lens. The parameters and formulas related to focal length selection are as follows:

Working distance (WD): The distance from the object being measured to the lens.

Height of field of view (HFOV): Generally, the field of view is a rectangular shape with a 4:3 aspect ratio, so the height of the field of view can be used to represent its size.

Height of effective imaging area of the camera: The CCD target surface is also generally a rectangular shape with a 4:3 aspect ratio, so its height can be used to represent the size of the effective imaging area.

Magnification ratio (PMAG) of the lens: PMAG < 1 for commonly used lenses and PMAG > 1 for magnifying lenses. It is the ratio of image size to object size, which for matched aspect ratios reduces to:

PMAG = h / HFOV    (2.1)

where h is the height of the camera’s effective imaging area.

The focal length f of the lens: commonly used focal lengths are 8 mm, 12.5 mm, 16 mm, 25 mm, and 50 mm.

The object distance WD, magnification PMAG, and focal length f of the lens are related (from the Gaussian lens equation) by:

f = WD × PMAG / (1 + PMAG)    (2.2)

Using formulas (2.1) and (2.2), the required focal length can be calculated from the field-of-view height, the object distance WD, and the image plane height. The lens selection steps are as follows (a worked sketch follows this list):

(1) Obtain the distance from the lens to the object to be inspected, i.e., the object distance WD;

(2) Calculate the magnification PMAG from the field-of-view height and image plane height using equation (2.1);

(3) Calculate the required focal length from WD and PMAG using equation (2.2);

(4) Select the lens whose focal length is closest to the calculated value; if there is no exact match, choose a nearby model, generally one smaller than the calculated value so that the field of view is larger;

(5) Recalculate the object distance from the chosen focal length; this is the system’s final object distance.

In addition, lens selection must consider factors such as detection precision and the camera model, and must be confirmed through extensive experimentation and data analysis.
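To make the selection arithmetic concrete, the sketch below evaluates equations (2.1) and (2.2) for an assumed working distance and field of view and snaps to a standard focal length; all numeric values are illustrative, not the thesis’s actual setup.

```cpp
// Minimal sketch of the lens selection arithmetic from equations (2.1)-(2.2).
#include <cstdio>

int main() {
    const double WD      = 200.0;  // working distance in mm (illustrative)
    const double HFOV    = 80.0;   // required field-of-view height in mm (illustrative)
    const double sensorH = 3.6;    // 1/3" CCD is approx. 4.8 x 3.6 mm

    const double PMAG = sensorH / HFOV;           // eq. (2.1)
    const double f    = WD * PMAG / (1.0 + PMAG); // eq. (2.2)

    // Pick the largest stock focal length not exceeding the ideal value,
    // so the field of view is at least as large as required.
    const double stock[] = {8.0, 12.5, 16.0, 25.0, 50.0};
    double chosen = stock[0];  // falls back to the shortest stock lens
    for (double s : stock)
        if (s <= f && s > chosen) chosen = s;

    std::printf("PMAG=%.3f  ideal f=%.2f mm  chosen f=%.1f mm\n", PMAG, f, chosen);
    return 0;
}
```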

2.4 Processors and Image Acquisition Cards

In a machine vision system, the processor is the device that receives the image and runs the processing algorithms; it is the “brain” of the machine vision system, and its selection directly determines the system’s processing performance. Currently popular image processors include DSPs, FPGAs, and industrial PCs. DSP and FPGA processors require hardware development, which lengthens development cycles but lowers hardware costs; they suit systems with modest resource requirements. This system must analyze continuous frames with large data volumes and complex algorithms, so considering the system’s characteristics and development costs, an industrial PC was chosen as the image processing development platform. Industrial PCs have abundant system resources, can implement complex algorithms, and have good network support, making network programming relatively simple.

Industrial PCs often have no direct camera interface, so an image acquisition card must be added. Its main function is to capture the video data output by the camera in real time and provide a data interface to the industrial PC, sending image data frame by frame into the computer’s memory for processing. When selecting an image acquisition card, the following aspects should be considered:

(1) Data reception method: the most important parameter. The acquisition card must support the output signal format of the system’s cameras. Several image transmission standards are widely used in digital cameras, such as IEEE 1394, USB 2.0, and Camera Link.

(2) Data format: grayscale images are usually quantized to 256 gray levels, represented by 8-bit binary. Color images mainly use formats such as RGB and YUV, which involve larger data volumes.

(3) Number of data reception channels: this depends on the number of cameras in the system. Common options are 1, 2, 4, and 8 channels; with technological progress, cards with more channels have appeared.

(4) Resolution: the maximum image matrix the acquisition card supports, generally 768×576. The maximum number of pixels per line and lines per frame also reflect the card’s resolution performance.

(5) Sampling frequency: analogous to a PC’s CPU clock frequency, it reflects how fast the acquisition card processes images. For high-speed acquisition, check that the card’s sampling frequency meets the requirements.

(6) Transmission rate: the speed at which the acquisition card transfers data to the PC. The card usually connects through the PCI bus, whose theoretical transfer speed is 132 MB/s; a 640×480 8-bit frame (about 0.3 MB) therefore takes roughly 2.3 ms to transfer in theory.

In addition, the selected acquisition card must match the system’s cameras, support all camera functions, and support secondary development for image storage and processing. This system uses three PCI dual-channel high-speed IEEE 1394 acquisition cards to complete image acquisition.

2.5 Chapter Summary

This chapter first introduced the basic concepts of machine vision, then described the image acquisition devices in the system and their selection criteria. Finally, based on the system’s performance requirements, suitable equipment was determined, including light sources, lighting methods, cameras, lenses, processors, and image acquisition cards.

Chapter 3: Design of the Overall Structure of the Machine Vision System

The post-filling quality inspection system is designed to meet the increasing food safety requirements of the food and beverage packaging industry. Characterized by high speed, high precision, and full automation, it is an important piece of inspection equipment on the filling line. The design of the machine vision system must consider the system workflow, the software processing solution, and the complexity of the image processing algorithms. Based on the main workflow and the image processing algorithms, the hardware structure of the machine vision system is designed, and the system’s software functionality can then be expanded on that hardware structure.

3.1 System Workflow

First, the basic processes of the system and the image processing algorithms are designed according to the inspection content. On a beverage production line, bottles are first washed, pass empty bottle inspection, and then enter the filling machine; filling quality is inspected after filling. The distribution of inspection equipment on a typical beverage canning line is shown in Figure 3-1.

Figure 3-1: The location of impurity detection equipment in the filling production line

Impurity detection checks for impurities that may exist in the liquid after filling. Currently, most domestic production lines detect impurities manually: inspection personnel flip each product by hand and observe the liquid against a light. If no impurities are found, the product returns to the line for labeling; otherwise it is placed on the recycling track. The drawbacks are that the results depend on the inspectors’ fatigue and that many personnel are required, increasing product costs.

Based on the manual detection process and standards, this project designs a machine vision-based workflow that uses multiple cameras to capture multiple frames and analyzes the visible objects in the liquid through a combination of static and motion analysis, simulating manual inspection. The specific workflow of the automated detection equipment is as follows (a code sketch of one detection cycle appears after the workflow):

(1) The product to be tested enters the detection area, and the control device drives the mechanical handle to flip the product online, causing any impurities in the liquid to move, which facilitates detection by the machine vision system.

(2) The product enters the image acquisition area, and the camera is triggered to capture the first image. The system preprocesses the image to extract information about visible objects in the liquid.

(3) The camera continuously captures several more frames, and the system performs the necessary preprocessing and information extraction on each frame.

(4) When image collection for the product is complete, the preprocessing results are comprehensively analyzed.

(5) The system obtains the motion trajectories of all visible objects from the analysis results.

(6) The system judges the product’s eligibility from the visible objects’ static information and motion trajectories.

(7) The system sends the result to the execution mechanism, which controls the rejector to remove unqualified products.

Figure 3-2: Workflow diagram of the impurity detection equipment
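The cycle above can be summarized in code. The sketch below mirrors steps (1) through (7) with placeholder stubs; none of the function names correspond to a real API, they simply stand in for the system’s modules.

```cpp
// Minimal sketch of one detection cycle. All functions are placeholder stubs.
#include <vector>

struct Frame {};
struct Result { bool qualified; };

static void flipBottle() {}                                       // step (1)
static Frame acquireFrame() { return Frame{}; }                   // steps (2)-(3)
static std::vector<int> preprocess(const Frame&) { return {}; }   // per-frame info
static Result analyze(const std::vector<std::vector<int>>&) {     // steps (4)-(6)
    return {true};
}
static void reject() {}                                           // step (7)

void detectOneProduct(int framesPerProduct) {
    flipBottle();                                  // make any impurities move
    std::vector<std::vector<int>> perFrame;
    for (int i = 0; i < framesPerProduct; ++i)
        perFrame.push_back(preprocess(acquireFrame()));
    if (!analyze(perFrame).qualified)              // trajectory + static analysis
        reject();
}
```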

3.2 Choice of Development Method for Image Processing Software

The software of the impurity detection system for beverage filling refers to the programs running on the image processor and host computer that implement image processing, motion control, the database, and the human-machine interface. Most hardware can be purchased off the shelf with fixed characteristics, such as sensor response times, acquisition accuracy, and processor speeds. Software development is therefore the focus and the difficulty of the entire system development; the image processing software in particular directly determines the system’s response time, and software can be continuously optimized to improve processing speed and accuracy.

Developing the system software mainly means developing the machine vision software; the other parts are auxiliary rather than core technology, and mature development methods already exist for them, for example Visual Basic and Delphi for interfaces, and Access, SQL Server, Oracle, and DB2 for databases. The choice of machine vision development method must consider the characteristics of this system’s detection algorithms. Several mature machine vision software development methods are introduced below [15]:

(1) Using commercial machine vision software: such software generally provides a visual interface and packages many algorithms. It is easy to learn, demands little programming skill or algorithm familiarity, and is convenient to configure, so an entire vision application can generally be completed in a short time. However, because the algorithms are generic and often not comprehensive, and the software’s running process is fixed, this approach lacks flexibility; problems arise easily when implementing complex vision systems. The software is also expensive and not conducive to protecting intellectual property. Figure 3-3 shows the image processing software development interface of Canada’s DALSA company.

Figure 3-3: DALSA’s Sherlock 7.1 image processing software

(2) Development based on algorithm development packages: this method requires some software development ability but is more flexible and scalable than the previous one. Vendors provide image processing function libraries, ActiveX controls, and the like, and developers build a machine vision system by calling these library functions and coordinating them with their own code. Many open-source image processing libraries package complex algorithms and can speed up development, such as the popular OpenCV (Open Source Computer Vision Library).

(3) Using general software development tools for completely independent development: this requires stronger software development skills and a longer development cycle, and implementing complex algorithms is harder, but it offers the greatest flexibility, allows development tailored to the system’s characteristics, and is most conducive to protecting intellectual property.

Considering the complexity and distribution of the algorithms in the impurity detection system, the programming method must be very flexible, so this project uses the third method to independently develop the entire machine vision software. The programming language is C/C++, which easily performs hardware-level operations and accesses memory quickly, giving it an advantage in image processing over higher-level languages such as C# and Java. The popular C++ compilers are mainly Microsoft’s Visual C++ and Inprise’s C++Builder; C++Builder’s kernel is still Object Pascal, with poor execution efficiency, making it unsuitable here. Visual C++ is a comprehensive development tool whose compiled programs run fast. This system uses Visual C++ 6.0 to develop the image processing algorithms.

3.3 Structure Analysis and Design of Image Processing Algorithms

During operation, the impurity detection system must capture multiple images per product. Taking six images as an example, on a 12,000 bottles/hour line the detection system must finish processing the six images within 300 ms, a huge data volume. Experiments show that single-threaded serial processing on a 2.8 GHz CPU, even without preemption, needs about 800 ms, and the time to transfer images from the camera to the processor must be added. Since Windows is a preemptive multi-user, multitasking operating system, user threads are preempted by drivers and other system programs even at raised scheduling priority, so the actual running time exceeds 800 ms, which cannot match the high-speed operation of the equipment. It is therefore necessary to analyze the operating rules of the algorithm and find a parallel execution strategy to shorten the running time.

3.3.1 Analysis of Algorithm Structure

The design of the algorithm structure rests on analyzing that structure: with a full understanding of the algorithm’s operational rules, a more reasonable structure can be designed. The impurity detection algorithm is divided into two layers: a preprocessing layer and a comprehensive analysis layer. The preprocessing layer handles each single frame collected by the camera, performing preliminary work such as region-of-interest positioning and image segmentation; every image must be processed, so each detection item requires multiple passes. Because the preprocessing of different frames is essentially independent, it is well suited to parallel processing. The comprehensive analysis layer runs after all frames have been preprocessed: based on the visible object feature values and region-of-interest positions of each frame, it matches features and generates motion trajectories to determine the nature of each visible object. The algorithm flow is shown in Figure 3-4.

Figure 3-4: Algorithm flow chart of this system

3.3.2 Multithreading Solutions

In a Windows environment, multithreading is often an effective measure for solving complex problems. A thread is a single sequential control flow in a process; establishing multiple threads that perform different tasks simultaneously is known as multithreading. To understand how threads work, they must be considered together with processes.

A process is an object created and deleted by the object manager. Each process has a private virtual address space, 4 GB on a 32-bit operating system, holding the process’s code, data, and other system resources. An application can consist of one or more processes, and a process can contain one or more threads, one of which is the main thread responsible for user interaction and other tasks.

In Windows, the operating system does not schedule CPU time to processes directly; threads are the basic entities to which the system schedules and allocates CPU time. A thread can execute any part of a process’s code, including code concurrently called by multiple threads, and all threads of a process run in the same virtual address space, sharing the process’s resources.

After the operating system loads a process, its main thread starts automatically. When multiple tasks must run simultaneously, or a task occupies the CPU for a long time, additional threads can be created to process tasks in parallel and improve responsiveness.

Multithreading improves system performance by raising the efficiency of resource usage. Its benefits include:

On multi-core CPUs, threads can be assigned to multiple cores and processed simultaneously, improving resource utilization;

On single-core CPUs, the operating system runs the threads alternately using time slices, so they appear to work simultaneously;

Long-running or waiting tasks can be assigned to worker threads, improving user response speed.
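For concreteness, here is a minimal sketch of the per-frame parallelism discussed in this section: one worker thread preprocesses each frame, and the main thread joins them before the comprehensive analysis layer runs. It uses std::thread as a portable stand-in for the Win32 thread API that a Visual C++ 6.0 implementation would actually call.

```cpp
// Minimal sketch: preprocess each frame on its own worker thread, then join
// all workers before the comprehensive analysis layer starts.
#include <thread>
#include <vector>

struct Frame {};
struct FrameResult { int visibleObjects = 0; };

static FrameResult preprocessFrame(const Frame&) {
    // ROI location, filtering, segmentation, feature extraction would go here.
    return FrameResult{};
}

std::vector<FrameResult> preprocessAll(const std::vector<Frame>& frames) {
    std::vector<FrameResult> results(frames.size());
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < frames.size(); ++i)
        workers.emplace_back([&, i] { results[i] = preprocessFrame(frames[i]); });
    for (auto& w : workers) w.join();  // all frames done before the analysis layer
    return results;
}
```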

Despite these advantages, multithreading must be maintained by the operating system, thread switching takes time, and a single CPU’s processing resources are limited: past a certain computational load, a single-core CPU no longer helps real-time performance. The parallelism achievable with multi-core CPUs is also limited, and thread scheduling is largely controlled by the operating system. Experiments showed that for this system, a multithreaded approach cannot meet the strict real-time requirements, so a more capable parallel processing approach is needed.

3.3.3 Solutions Based on Multi-Processor Collaborative Work

Since the multithreaded solution cannot meet the requirements of this system’s complex computations, a higher-level parallel processing method is needed. With the rapid development of network technology, especially Ethernet, distributed computing based on multiple processors working together has advanced quickly in recent years, and combining distributed computing with image processing has become a research hotspot [16].

Distributed computing is a field of computer science that studies how to divide a problem requiring very large computing power into many small parts, assign those parts to many computers, and combine their results into a final answer. Compared with other computing methods, it has several advantages [17,19]:

(1) Scarce resources can be shared.

(2) Computing loads can be balanced across multiple computers through distributed computing.

(3) Programs can be placed on the computers best suited to run them.

Therefore, distributed computing has become an effective mode for solving various large-scale computing problems and distributed computing problems. According to the characteristics of the impurity detection system for beverage filling, there are two working modes to choose from:

Working mode 1: The CCD camera first sends the image signal to the secondary processor. The main processor only collects the camera trigger signal and judges the image attributes based on the trigger signal. After receiving it, the secondary processor immediately performs preprocessing on the image and sends the processing results to the main processor. The main processor performs information fusion, visible object matching, and tracking of multiple frames based on the image attributes and processing results, generates trajectories, performs trajectory analysis, and transmits the analysis results to the control unit, as shown in Figure 3.5.


Figure 3-5: Multi-processor solution operation mode 1

In this operation mode, the N raw images of each test item need not be transmitted through the network; the data exchanged consists entirely of parameter signals and processing results with small volumes. This puts less pressure on the communication network, but the structure is not very compact and information management is not centralized. Additionally, each camera requires its own PC, which increases hardware costs and hinders comprehensive allocation of processor resources.

Working mode 2: The CCD cameras first send image signals to the main processor while the auxiliary processors wait for allocation. The main processor determines the image properties based on which camera sent the image and then assigns an auxiliary processor to perform the preprocessing. After receiving the image over the network, the auxiliary processor preprocesses it; during this time the main processor can wait or finish remaining work from the previous test item. When the auxiliary processor completes its work, it sends the results to the main processor, which, based on the processing results and image properties of each frame of the test item, performs information fusion, visible object matching, tracking, trajectory generation, and trajectory analysis, and sends the analysis results to the control unit, as shown in Figure 3-6.

Figure 3-6: Multi-processor solution operation mode 2

This working mode puts greater pressure on the communication network than working mode 1, since each image frame must be transmitted once over the network. However, the algorithm structure is more compact, and information management is relatively centralized. Moreover, the number of processors can be smaller than the number of cameras: the main processor can dynamically allocate preprocessing work according to the load of each subordinate processor, saving hardware costs and facilitating rational use of processor resources.

Weighing the advantages and disadvantages of the two working modes, we chose working mode 2 as the final solution for the impurity detection system for beverage filling. Although it puts more pressure on the network, many fast communication methods are available, and network speed will not significantly affect the system's real-time performance. In addition, working mode 2 is more in line with the purpose of distributed computing.

The multi-processor collaborative solution expands the system's computing power and raises the ceiling on tolerable algorithm complexity. Although the system as a whole adopts the multi-processor solution, multithreading is still needed within each processor, according to its algorithm structure, to improve per-processor performance and deepen the system's level of parallelism.

3.3.4 Thread Synchronization

A common problem with parallel working methods is the protection of shared resources in the system. This system’s main processor is its information center, and multiple threads may access some of its resources. Therefore, synchronization issues must be considered.

There are generally four methods to implement thread synchronization: Mutex objects, Event objects, Semaphore objects, and critical code sections.

Mutex objects are kernel objects of the operating system and are often used to control mutually exclusive access to shared resources. A Mutex object contains a usage count, a thread ID, and a recursion counter: the thread ID records which thread currently owns the Mutex, and the counter records how many times that thread has acquired it.

Event objects are also kernel objects and are the simplest synchronization objects, with two states: signaled and non-signaled. An Event object acts like a "trigger." It includes three members: a usage count, a Boolean value indicating whether the event is an auto-reset or a manual-reset event, and a Boolean value indicating whether the event is signaled. When an auto-reset event is signaled, only one thread waiting on it becomes schedulable; when a manual-reset event is signaled, all threads waiting on it become schedulable.

Semaphore objects also belong to kernel objects, and they are resource semaphores. The initial value can be set between 0 and a specified maximum value, and Semaphore objects are usually used to limit the number of threads accessing concurrently.

The critical code section (critical region) is the only one of the four synchronization methods that works in user mode. It requires no transition between user mode and kernel mode, so it is the fastest. A critical code section is the part of a thread's code that accesses a critical resource. Only one thread may be inside the critical section at a time; other threads must wait while one is inside. Typically, the code in a thread that accesses shared resources is placed in the critical section.

The main differences between the four thread synchronization methods are as follows:

Mutex objects, event objects, and semaphore objects are all kernel objects. Thread synchronization through kernel objects is slower, but kernel objects can be used to synchronize threads across different processes.

The critical code section works in user mode and is faster. However, it is easy to enter a deadlock state when using it, because no timeout value can be set while waiting to enter the critical section; the code inside should therefore be as simple as possible. Additionally, because the critical section works in user mode, it cannot synchronize threads across different processes.

We choose the critical code section to perform thread synchronization in this system. The main synchronization issue exists in the main processor, where we use a critical section to protect the global image-attribute data: whenever the image attributes are changed, the critical section serializes access. The code inside the critical section is simple and has minimal runtime, ensuring that the global data is never illegally modified.
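A minimal sketch of this choice using the Win32 critical-section API follows. The shared image-attribute structure and function names are assumptions for illustration, not the system's actual code:

```cpp
// Sketch: protect shared "image attribute" data with a Win32 critical
// section. The struct layout and names are hypothetical.
#include <windows.h>

struct ImageAttributes {      // hypothetical shared global data
    int cameraId;
    int frameIndex;
};

CRITICAL_SECTION g_cs;
ImageAttributes g_attr = {0, 0};

void updateAttributes(int cameraId, int frameIndex) {
    EnterCriticalSection(&g_cs);   // only one thread may enter at a time
    g_attr.cameraId = cameraId;    // keep the protected code short
    g_attr.frameIndex = frameIndex;
    LeaveCriticalSection(&g_cs);
}

int main() {
    InitializeCriticalSection(&g_cs);
    updateAttributes(2, 17);
    DeleteCriticalSection(&g_cs);
    return 0;
}
```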

3.4 Hardware structure of the system

Based on the analysis of the algorithm structure, this system uses the multi-processor collaborative working method to implement the image visualization algorithms. Together with the system's image accession solution, this determines the system's hardware structure scheme. This paper mainly discusses the construction of the machine vision system, so this section does not detail the implementation of the control system. Below, we briefly introduce the system's hardware structure and data transmission method.

3.4.1 Introduction to the system hardware structure

Based on the design idea of comprehensive functions and compact structure, and according to the workflow of the detection system and the working mode of the image visualization system, the hardware structure of the online beverage filling impurity detection system shown in Figure 3.7 is designed. This structure mainly consists of four image processors (including one main processor and three sub-processors), a digital camera (CCD), a red LED surface light source, a CCD lens, a programmable logic controller (PLC), an encoder, and a rejector.

Figure 3-7: Schematic diagram of the system hardware structure

The image visualization part mainly includes four image processors (one main processor and three slave processors), an image accession card, six digital CCD cameras, lenses, photoelectric switches, and LED light sources; it completes the image accession, transmission, and processing tasks. Taking a production line of 12,000 bottles per hour as an example, the system must process one product to be tested every 300 milliseconds. Assuming the distance between bottles is 12 cm, the line speed of the bottles works out to 40 cm/s; with the camera axes arranged 4 cm apart, the interval between image frames is 100 ms, which satisfies the system's requirements for image accession and the separation time required for target tracking. After the main processor finishes processing one product, it sends the detection result to the PLC at the core of the control system. Based on the detection result, combined with information from peripheral devices such as encoders and photoelectric switches, the PLC determines the position of each product on the production line; when a target reaches the rejection position, the solenoid valve is switched according to the detection result to accurately reject defective products. The upper-level industrial control computer communicates with the main processor and the PLC through the network, collects the equipment's operating status and detection data in real time, and can be used to adjust the system's detection parameters. Detection parameters and detection result data are stored in a database, both for parameter loading when the system starts up and so that management personnel can extract production-line data at any time.
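As a check, these timing figures follow directly from the stated line rate and geometry:

$$t_{\text{bottle}} = \frac{3600\ \text{s/h}}{12000\ \text{bottles/h}} = 0.3\ \text{s}, \qquad v = \frac{12\ \text{cm}}{0.3\ \text{s}} = 40\ \text{cm/s}, \qquad t_{\text{frame}} = \frac{4\ \text{cm}}{40\ \text{cm/s}} = 0.1\ \text{s} = 100\ \text{ms}$$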

3.4.2 Data Transmission System

Using a multi-processor solution to complete the algorithm design can greatly improve parallel processing efficiency and shorten image visualization time. In this system, network transmission time is part of the total image visualization time: if the data transmission system is inefficient, the ratio of data processing time to data transmission time drops sharply, reducing the system's operating efficiency and possibly lengthening the total algorithm running time. A good data transmission strategy therefore has a significant impact on the system's software efficiency.

1.   Data Transmission Method between Image Processors

In the image visualization scheme based on multi-processor collaborative work, a large amount of image data needs to be transmitted between image processors. Therefore, the data transmission efficiency between multiple processors is essential. Ethernet is a common way to transfer data between PCs. In recent years, the rapid development of Ethernet, especially gigabit Ethernet, has promoted the development of distributed technologies such as grid computing and cloud computing. The ideal bandwidth of gigabit Ethernet is 1000Mb/s, which can transmit about 100MB of data per second, with a large throughput that can meet the data transmission requirements of this system. Therefore, this system uses gigabit Ethernet to interconnect image processors. In addition, gigabit Ethernet technology is compatible with lower-layer Ethernet technology. Gigabit Ethernet cards can normally communicate with devices in the system that only support lower-layer Ethernet technology, ensuring both the system’s image transmission speed and equipment compatibility.

2.   Data Transmission Method between Image Processors and Industrial Cameras

Digital camera data transmission interfaces mainly include USB 2.0, IEEE 1394, Camera Link, and GigE Vision. USB 2.0 and IEEE 1394 are serial bus standards with maximum transmission rates of 480 Mb/s and 800 Mb/s, respectively. GigE Vision is based on the gigabit Ethernet frame transmission standard, with a peak rate of 1000 Mb/s. The Camera Link interface is based on dedicated point-to-point transmission.

3.5 Extension of Other Detection Functions

The impurity detection system for beverage filling is a complex machine vision inspection system, and the images it acquires satisfy the detection conditions of many other inspection projects. The software can therefore be upgraded to add new detection functions. For detection modules with relatively simple algorithms, which do not affect the system's real-time performance, only a software upgrade is needed; for projects with more complex algorithms, new processing units can be added to the existing system, so expanding the hardware and software together remains straightforward. This project has completed the functional extension of liquid level detection after filling; the liquid-level detection principle is briefly introduced below.

The images obtained by the impurity detection system show the beverage immediately after filling, which suits the conditions of machine-vision liquid-level detection very well. The implementation is simple, and the functional expansion requires no additional hardware.

The algorithm flow adopted by the liquid-level detection module is as follows (a code sketch follows the list):

(1) First, perform liquid-level scanning on the collected image of the product to be inspected; set parameters such as the number, spacing, and position of the liquid-level scanning lines according to the possible variation range of the on-site liquid level.

(2) On each liquid-level scanning line, search along the direction perpendicular to the liquid level within the detection area for candidate liquid-level points.

(3) After finding all candidate points, locate the region where the distribution of candidate-point values meets the conditions, and record all candidate points in that region as liquid-level points.

(4) After obtaining all the liquid level points, calculate the average coordinates of all the liquid level points, and calculate the actual height of the liquid level in the product to be inspected based on the obtained coordinate values.

(5) Compare the obtained liquid level with the pre-set upper and lower limits of the liquid level. The liquid level is considered qualified if the liquid level is between the upper and lower limits. Otherwise, the liquid level is considered unqualified.

(6) Deliver the liquid level detection information to the impurity detection system for beverage filling for comprehensive processing.

Schematic diagram of liquid level detection
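The following is a minimal C++ sketch of this flow, assuming an 8-bit grayscale image in a row-major buffer. The function and parameter names are illustrative, and the distribution filtering of step (3) is simplified to taking the first strong edge on each scanning line:

```cpp
// Sketch of the liquid-level detection flow; names and the gradient
// criterion are assumptions, not the system's exact implementation.
#include <cstdint>
#include <cstdlib>
#include <vector>

struct LevelResult { double levelY; bool qualified; };

LevelResult detectLevel(const uint8_t* img, int width, int height,
                        int firstCol, int colSpacing, int numLines,
                        int gradThresh, double upperY, double lowerY) {
    std::vector<int> levelPoints;  // candidate liquid-level points (step 2)
    for (int n = 0; n < numLines; ++n) {
        int x = firstCol + n * colSpacing;      // scanning line (step 1)
        if (x >= width) break;
        for (int y = 1; y < height; ++y) {      // scan perpendicular to level
            int grad = std::abs(img[y * width + x] - img[(y - 1) * width + x]);
            if (grad > gradThresh) {            // first strong edge = candidate
                levelPoints.push_back(y);
                break;
            }
        }
    }
    // Steps 3-4: average the accepted candidate points.
    double sum = 0;
    for (int y : levelPoints) sum += y;
    double levelY = levelPoints.empty() ? -1.0 : sum / levelPoints.size();
    // Step 5: compare with preset limits (upperY < lowerY in image coords).
    bool ok = !levelPoints.empty() && levelY >= upperY && levelY <= lowerY;
    return { levelY, ok };
}
```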

Chapter 3: Summary

This chapter first established the workflow of the system based on current manual detection methods. A universal development platform was chosen to develop the software for the system. Then, the algorithms used in the system were analyzed, and a collaborative multi-processor-based image visualization solution was determined. Based on the image visualization solution, the system’s hardware structure and data transmission system were determined. Finally, considering that the system is an open image visualization system, the extension of the detection function of the system was introduced, and the algorithm process of liquid-level detection was provided.

Chapter 4: Preprocessing Layer Algorithm Design

The preprocessing algorithm in the impurity detection system for beverage filling refers to the static image visualization algorithm that runs on the slave processors before the extraction of connected domain feature values. The ultimate goal of the preprocessing layer is the correct segmentation of visible objects from the background in the region of interest, which facilitates the subsequent extraction of visible object feature values. The quality of the preprocessing algorithm design therefore directly affects the effectiveness of the comprehensive analysis layer. If the preprocessing algorithm is poorly designed, the system may fail to extract the visible information of impurities in the liquid correctly, preventing it from achieving the expected performance. This chapter mainly discusses determining the region of interest, background suppression, image enhancement and filtering, and connected domain analysis and correction.

4.1 System Detection Content and Image Characteristics

The impurities detected in the impurity detection system for beverage filling mainly come from unwashed empty bottles and filling pipelines. Health drinks may also contain medicine residues that are not filtered out. Although glass fragments generated by bottle collisions and cap pressure are rare, they are the most strictly inspected content, so they must also be considered. Table 4.1 records the types and quantities of impurities found in rejected bottles by manual inspection on a health-drink production line in one day, together with the number of bottles with unacceptable liquid levels.

Table 4-1: Types and quantities of impurities detected on a production line in one day

Figure 4.1 shows the effects of various impurities under backlight illumination. From the figure, it can be seen that the grayscale of each type of impurity in the image is lower than that of the background region. Among them, lint has the smallest difference in grayscale value compared to the background region, making it the most difficult impurity type to segment. Given that all types of impurities exhibit similar characteristics in the image, we used the same algorithm to segment each type of impurity from the background region.

Figure 4-1: Various impurities in the image

4.2 Determination of the region of interest

During the operation of on-site equipment, mechanical vibration and electrical control errors may cause the position of each image in the field of view to differ; that is, the relative positional coordinates of a visible object differ between images. This misalignment causes errors in subsequent trajectory generation and misleads the trajectory analysis module. Therefore, before applying other algorithms, the position of the area of interest must be corrected for each image frame so that the relative coordinates of all frames are consistent.

In addition, the target’s motion direction in the liquid is unpredictable and may reach any part of the liquid. If the detection area is not properly selected, the phenomenon of disappearing visible objects in several or all frames of the image may occur, affecting the detection effect. Therefore, locating the detection area without dead zones is necessary to ensure that comprehensive visible information can be obtained in every frame of the image.

Figure 4-2: Swing of the detection area between adjacent frames

The image detection area for impurities after beverage filling is a closed region bounded by the bottle wall and the liquid level. Analyzing the image reveals a significant grayscale difference between the detection area and the surrounding area, so it is convenient to perform one-dimensional edge detection based on grayscale gradient changes; the edge of the region of interest can be obtained from the gradient of the grayscale.

Edge detection is the most basic operation for detecting local changes in image data. In one dimension, an edge corresponds to a local peak of the first-order derivative of the image. The gradient measures the variation of a function, and a digital image can be regarded as a sampled array of a continuous intensity function; therefore, changes in image grayscale can be represented by a discrete approximation of the gradient.

The gradient is the two-dimensional equivalent of the first-order derivative and is defined as the vector:

$$\mathbf{G}(x, y) = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$$

There are two properties related to the gradient:

(1) The magnitude of the gradient is given by:

$$|\mathbf{G}(x, y)| = \sqrt{G_x^2 + G_y^2}$$

In practice, the magnitude of the gradient is usually approximated using absolute values:

$$|\mathbf{G}(x, y)| \approx |G_x| + |G_y|$$

Or:

$$|\mathbf{G}(x, y)| \approx \max\left(|G_x|, |G_y|\right)$$

(2) The direction of the vector G(x, y) is the direction of the maximum rate of increase of the function f(x, y). The direction of the gradient is defined as:

$$\alpha(x, y) = \arctan\left(\frac{G_y}{G_x}\right)$$

where the angle α is measured between the gradient direction and the x-axis.

Locating the region of interest is achieved through gradient scanning. Edge scanning lines are defined in the area where the edge may exist. During edge scanning, the first point along each scanning line with the maximum gradient, or with a gradient change exceeding a specified value, is taken as the edge point. As shown in Figure 4.3, the green lines represent the scanning lines.


Figure 4-3: Scanline Position and Scan Results

The initial edges obtained from edge scanning may not all be valid region edges, so they need to be filtered. The figure shows relatively few unqualified edge points on the upper and lower edges and the right boundary, so filtering there is easy. The filtering method first scans all edge points and retains only edge points that are continuous within a certain distance range; this effectively removes edge points caused by noise. Then, based on the positions of the remaining edge points, points far from the majority are filtered out. The points that remain are the qualified edge points.

The qualified rate of the bottom edge points is relatively low. In addition to the above method, the edge information obtained from the upper and lower bottle walls needs to be used to determine the approximate position of the bottle bottom and then further filter the edge points based on the approximate position of the bottle bottom.

After obtaining the edges of the region of interest, save the image edges in sequence in an array, take the average Y coordinate of the upper and lower edges, take the X coordinate values of the left and right edges, and thus obtain the position of the region of interest. Compare the differences in the positions of the region of interest in each frame of the image, and use the first frame of the image as the reference to shift the image data so that the processing area of multiple frames of the image is in the same position in the field of view.
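A minimal sketch of one such scanning line in C++, assuming an 8-bit row-major grayscale buffer; the names and the gradient criterion are illustrative:

```cpp
// Sketch of one horizontal edge-scanning line used to locate the region of
// interest: walk along the row and take the first pixel whose grayscale
// gradient exceeds a threshold as the edge point.
#include <cstdint>
#include <cstdlib>

// Returns the x coordinate of the first edge point on row y, or -1.
int scanRowForEdge(const uint8_t* img, int width, int y,
                   int xStart, int xEnd, int gradThresh) {
    for (int x = xStart + 1; x < xEnd && x < width; ++x) {
        int grad = std::abs(img[y * width + x] - img[y * width + x - 1]);
        if (grad > gradThresh)
            return x;           // first strong gradient change = edge point
    }
    return -1;                  // no edge found on this scanning line
}
```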

4.3 Background Suppression

For liquid impurity detection, eliminating the influence of the image background on target extraction makes it easier to separate moving targets from a single image. The optical flow method [21, 22] can achieve target segmentation without background information, but its computational complexity is enormous and its real-time performance and noise robustness are poor, so it cannot meet the operational requirements of this system. Commonly used background suppression methods include background subtraction [23, 24] and the frame difference method [25, 26].

4.3.1 Background Subtraction

The background subtraction method first obtains the background information of the detection area and then subtracts the background image from each subsequent image in the sequence. If the pixel value difference is greater than a certain threshold, the output pixel is set to a non-zero value; otherwise, it is set to zero. This can be expressed as:

$$D_f(x, y) = \begin{cases} d, & \left|I_f(x, y) - B(x, y)\right| > r \\ 0, & \text{otherwise} \end{cases}$$

Here D_f is the image after background subtraction, B is the brightness component of the background, I_f is frame f (f = 1, …, N), N is the total number of frames in the sequence, d is a non-zero value, and r is the threshold. If the number of non-zero pixels in the resulting image exceeds a threshold, it is judged that there is object motion in the detection area, and the moving target is obtained.

The advantages of this method are:

(1) Its principle and algorithm design are simple.

(2) By selecting the appropriate threshold, the obtained result directly reflects the target’s size, position, shape, and other information, which can obtain relatively accurate motion target information.

The disadvantage is that when external conditions such as weather and lighting change, the background information must be updated continually. The background image used in background subtraction must accurately reflect the background to achieve good results. However, the wall thickness of the packaging bottles on the filling line is inconsistent and unevenly distributed, so the background information differs slightly for each product to be inspected. Each product therefore needs a freshly acquired background image, which requires real-time updating of the background information. Common background updating methods include multi-frame updating, selective updating, and random updating. Some of these methods are prone to mixing foreground images into the background image, while others require statistical averaging over most or all frames, so computation can only start after most or all frames have been acquired. This conflicts with the system's design principle of collecting and processing in real time and would degrade real-time performance. Therefore, background subtraction cannot meet the system's design requirements.

4.3.2 Frame difference method

When there is object motion in the detection area, there are obvious grayscale differences between frames. The frame difference method subtracts one frame from the other and takes the absolute value of the grayscale difference to eliminate the background. It is also referred to below as the image-sequence longitudinal frame difference method, and the formula is as follows:

$$q(x, y) = \begin{cases} d, & \left|I_f(x, y) - I_{f-1}(x, y)\right| > r \\ 0, & \text{otherwise} \end{cases}$$

In the equation, q(x, y) is the result of differencing the two adjacent frames I_{f-1}(x, y) and I_f(x, y), r is the threshold for binarization, and d is a non-zero value.

The advantages of this method are:

(1) Its principle and algorithm implementation are simple, and the program design complexity is low.

(2) It is not very sensitive to external factors such as weather and lighting and adapts to various environments with good stability.

Its disadvantages are:

(1) For slowly moving objects, it cannot extract the complete region of the object and can only extract the boundaries.

(2) It depends on the selected inter-frame time difference. For fast-moving objects, a smaller interval should be chosen; if the interval is too large, an object with no overlap between the two frames is detected as two separate objects. For slow-moving objects, a larger interval should be chosen; otherwise an object that almost completely overlaps itself in the two frames is not detected.

The frame difference method relies only on the information of adjacent frames and can start processing each detected object as soon as the second image is captured, which meets the system's design principle of real-time capture and processing. Experimental results show that the displacement of moving objects between adjacent frames is generally 3-200 pixels (larger for visible objects such as bubbles and glass fragments, smaller for suspended particles and hairs), and visible objects in adjacent frames almost never overlap. Therefore, the complete region of the object can be extracted with the frame difference method, and we choose it to perform background suppression.

However, in the frame difference method above, the difference data is the absolute value of the brightness difference of two adjacent frames, so the visible objects of both frames are superimposed in the result, changing the apparent position and number of visible objects. To avoid this, the formula is modified as follows:

$$L(x, y) = \begin{cases} 255, & I_{f-1}(x, y) - I_f(x, y) > r \\ 0, & \text{otherwise} \end{cases}$$

The pixels where L(x, y) = 255 form the visible-object region of the current frame. Because impurities are darker than the background, keeping only the signed (positive) difference suppresses the residual image of the previous frame and yields the ideal background-suppressed data.
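A minimal C++ sketch of this one-sided frame difference, under the assumption stated in Section 4.1 that impurities are darker than the background; the buffer layout and names are illustrative:

```cpp
// Sketch of the modified (one-sided) frame difference: a pixel is marked
// 255 only where the current frame is darker than the previous one by more
// than threshold r, so the previous frame's object leaves no residue.
#include <cstdint>

void frameDifference(const uint8_t* prev, const uint8_t* cur,
                     uint8_t* out, int numPixels, int r) {
    for (int i = 0; i < numPixels; ++i) {
        int d = static_cast<int>(prev[i]) - static_cast<int>(cur[i]);
        out[i] = (d > r) ? 255 : 0;   // keep only the current frame's object
    }
}
```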

4.4 Image Filtering and Contrast Enhancement

Images obtained from the field often cannot meet processing requirements due to various factors. To improve image quality and enhance the stability and accuracy of the image visualization system, preprocessing is required before image segmentation; image denoising and contrast enhancement are the most commonly used methods.

4.4.1 Image Filtering

Image noise is a random interference signal introduced during image formation, transmission, reception, and processing, arising from sources such as uneven sensitivity of the sensing elements during photoelectric conversion, errors during digitization and transmission, and human factors. Image noise is additive: superimposed on the true image, it degrades image quality, blurs the image, and makes analysis difficult. Removing noise and restoring the true image is therefore an important part of image visualization [27]. Commonly used image filtering methods include mean, median, minimum mean-square-error, and Gaussian filtering. Since the Canny edge detection used later requires Gaussian filtering, we choose Gaussian filtering for image denoising.

A Gaussian filter [28] is a linear smoothing filter whose weights are chosen according to the shape of the Gaussian function; it is effective at removing noise that follows a normal distribution. The one-dimensional zero-mean Gaussian function is:

$$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{x^2}{2\sigma^2}\right)$$

Here σ is the parameter of the Gaussian distribution, which determines the width and filtering strength of the Gaussian filter. In image visualization, the two-dimensional zero-mean discrete Gaussian function is commonly used as a smoothing filter:

$$G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

The graph of the function is shown in Figure 4-4.

Figure 4-4: The two-dimensional zero-mean Gaussian function

Gaussian functions have important properties, such as rotational symmetry and separability, making them particularly useful in early image visualization. These properties indicate that Gaussian smoothing filters are highly effective low-pass filters in both spatial and frequency domains and have been effectively used by engineers in practical image visualization [28].

4.4.2 Contrast Enhancement

Low-contrast images are often encountered in image visualization, mainly due to underexposure or overexposure during imaging or the narrow dynamic range of imaging equipment. Low-contrast images make it difficult to discern details. Usually, linear or nonlinear transformations are applied to the image grayscale to expand the grayscale range and improve image quality. The most commonly used method for image contrast enhancement is histogram stretching. Histogram stretching “expands” the difference between foreground and background grayscale by adjusting the histogram to enhance contrast.

Histogram stretching maps pixel values in the grayscale interval [a, b] to the interval [z1, z2]. The effective grayscale interval [a, b] of the original image is often a subset of the full grayscale interval [c, d]. The function that maps an original pixel value z to the new pixel value z′ is:

$$z' = \frac{z_2 - z_1}{b - a}\,(z - a) + z_1$$

where z′ is the transformed grayscale value and z is the original grayscale value.

This system uses the histogram stretching method to enhance contrast, with the original grayscale range set as [30, 220] and the mapping space as [0, 255]. Pixels in the true image within the range of [0, 30] are mapped to 0, while pixels within the range of [220, 255] are mapped to 255. The transformation curve is shown in Figure 4.5.

Figure 4-5: Histogram stretching transformation curve used by this system
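A minimal C++ sketch of this stretching via a lookup table, assuming 8-bit pixels and the [30, 220] → [0, 255] mapping described above; the names are illustrative:

```cpp
// Sketch of histogram stretching: map [a, b] to [0, 255], clamping values
// below a to 0 and above b to 255, using a precomputed lookup table.
#include <cstdint>

void stretchHistogram(uint8_t* img, int numPixels,
                      int a = 30, int b = 220) {
    uint8_t lut[256];
    for (int z = 0; z < 256; ++z) {
        if (z <= a)      lut[z] = 0;
        else if (z >= b) lut[z] = 255;
        else             lut[z] = static_cast<uint8_t>(
                              (z - a) * 255.0 / (b - a) + 0.5);
    }
    for (int i = 0; i < numPixels; ++i)
        img[i] = lut[img[i]];   // apply the transformation curve
}
```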

The effect of stretching the histogram of the image according to the above transformation curve is shown in Figure 4-6:

Figure 4-6: Comparison of the effects before and after histogram stretching

4.5 Method for distinguishing visible objects from the background

4.5.1 Introduction to image segmentation algorithms

Image segmentation is a major problem in image visualization, spanning the range from image processing to image analysis; the essential step is separating the target area in the image from the background area. Image segmentation is a key step in image analysis, and the quality of the segmentation result directly affects the computer's understanding of the image. Choosing a reasonable segmentation algorithm is therefore crucial to the entire process.

Commonly used image segmentation algorithms include threshold and edge detection methods. The following describes these two methods separately.

1.   Threshold method:

The threshold method is a common family of algorithms that divide an image by a threshold value. Many threshold segmentation algorithms exist, including the fixed threshold method, the minimum error method, the maximum between-class variance method (Otsu method), and the maximum-entropy method (KSW method).

2.   Edge detection method:

Edges exist mainly between the target and the background, as well as between different regions (including regions of different colors). They are an important basis for image analysis tasks such as image segmentation, texture characterization, and shape description.

In the edge detection method, the edge information of the foreground region is first obtained through edge detection, and the region is then filled by region filling to achieve region segmentation.

4.5.2 Threshold segmentation method

The commonly used threshold segmentation methods include fixed thresholding and the maximum between-class variance method. Fixed thresholding uses a fixed, experience-based threshold to segment every image frame. This method is very simple with a small code volume, but it demands a highly stable light source and image quality and adapts poorly, so it is rarely used.

The maximum between-class variance method is a classic adaptive threshold segmentation method. It determines the image segmentation threshold value based on the maximum variance between the foreground and background without requiring any prior knowledge or parameter settings. Therefore, it has a wide range of applications and is still one of the most commonly used image segmentation methods. The implementation principle is as follows:

Let an image contain N pixels whose gray values take the L levels 0, 1, …, L−1. Let n_i be the number of pixels with gray level i; then

$$N = \sum_{i=0}^{L-1} n_i$$

and the probability of gray level i is p_i = n_i / N, with p_i ≥ 0 and

$$\sum_{i=0}^{L-1} p_i = 1$$

The image is segmented with gray value t as the threshold into two classes C_0 and C_1, where C_0 (the foreground) corresponds to gray values (0, 1, …, t) and C_1 (the background) to (t+1, t+2, …, L−1). The probabilities of a pixel falling into C_0 and C_1 are, respectively:

$$\omega_0(t) = \sum_{i=0}^{t} p_i, \qquad \omega_1(t) = \sum_{i=t+1}^{L-1} p_i = 1 - \omega_0(t)$$

The mean gray values of C_0 and C_1 are:

$$\mu_0(t) = \frac{1}{\omega_0(t)} \sum_{i=0}^{t} i\,p_i, \qquad \mu_1(t) = \frac{1}{\omega_1(t)} \sum_{i=t+1}^{L-1} i\,p_i$$

With the overall mean $\mu = \sum_{i=0}^{L-1} i\,p_i$, the following holds:

$$\mu = \omega_0(t)\,\mu_0(t) + \omega_1(t)\,\mu_1(t)$$

From this, the within-class variance is defined as:

$$\sigma_w^2(t) = \omega_0(t)\,\sigma_0^2(t) + \omega_1(t)\,\sigma_1^2(t)$$

and the between-class variance as:

$$\sigma_b^2(t) = \omega_0(t)\big(\mu_0(t) - \mu\big)^2 + \omega_1(t)\big(\mu_1(t) - \mu\big)^2 = \omega_0(t)\,\omega_1(t)\big(\mu_0(t) - \mu_1(t)\big)^2$$

The maximum between-class variance method finds the threshold t* that maximizes the between-class variance and uses it to segment the image; t* satisfies:

$$t^* = \arg\max_{0 \le t \le L-1} \sigma_b^2(t)$$

The image segmentation result using the maximum interclass variance method on the image without background suppression can be seen in Figure 4.7.


Figure 4.7: The effect of using the maximum inter-class variance method for image visualization

As shown in Figure 4.7, directly applying the maximum inter-class variance method for image segmentation on the processing area results in the confusion of large background areas with foreground areas, which fails to reflect the characteristics of the foreground area. Therefore, the conventional maximum inter-class variance method does not produce good results.

Although the system uses background suppression methods, the background area cannot be completely eliminated due to differences in grayscale between different frames and positioning errors. As a result, the grayscale difference between the foreground and background areas in the image is still small, and the foreground and background areas of different regions often have similar grayscale values, as shown in Figure 4.8.

Analyzing the implementation principle of the maximum between-class variance method shows that it always seeks the gray level that maximizes the between-class variance. It is therefore better suited to regions of interest whose foreground and background differ strongly in grayscale than to regions where the difference is small. Applied to the whole region of interest, where the grayscale difference is very small, it easily mixes foreground and background regions, which does not match the actual situation, so the method must be adapted to these characteristics.


Figure 4-8: Effect of the maximum inter-category variance method on image visualization after background suppression

We therefore improved the method by processing sub-regions with dead zones, as follows.

The processing area is divided into 16×16 blocks so that the background gray level within each block is roughly uniform. Gray value statistics are then gathered for each block separately. If the maximum gray-value difference within a block is smaller than the dead-zone range, the block is judged to contain no foreground image and no segmentation is performed on it; otherwise, the block is considered to contain a foreground image.

The conventional maximum between-class variance method is then used to segment each such block. After all blocks are processed and the results merged, the segmentation of the entire region of interest is complete. The effect of the improved method is shown in Figure 4-9; a code sketch of the block-wise scheme follows the figure.

Figure 4-9: Segmentation effect of the improved maximum between-class variance method
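A minimal C++ sketch of the block-wise scheme: a classic Otsu threshold applied per 16×16 block, with a dead zone that skips blocks whose gray range is too small. The dead-zone value and all names are illustrative assumptions:

```cpp
// Sketch of dead-zone, block-wise Otsu segmentation (foreground darker).
#include <cstdint>
#include <vector>

// Classic Otsu threshold over an arbitrary set of pixels.
static int otsuThreshold(const std::vector<uint8_t>& px) {
    int hist[256] = {0};
    for (uint8_t v : px) hist[v]++;
    const double N = static_cast<double>(px.size());
    double sumAll = 0;
    for (int i = 0; i < 256; ++i) sumAll += i * hist[i];
    double sum0 = 0, w0 = 0, best = -1;
    int bestT = 0;
    for (int t = 0; t < 256; ++t) {
        w0 += hist[t];
        if (w0 == 0 || w0 == N) continue;
        sum0 += t * hist[t];
        double mu0 = sum0 / w0, mu1 = (sumAll - sum0) / (N - w0);
        double sb = w0 * (N - w0) * (mu0 - mu1) * (mu0 - mu1); // ~ between-class
        if (sb > best) { best = sb; bestT = t; }
    }
    return bestT;
}

void segmentBlocks(const uint8_t* img, uint8_t* out,
                   int width, int height, int deadZone = 40) {
    const int B = 16;                       // block size
    for (int by = 0; by < height; by += B)
        for (int bx = 0; bx < width; bx += B) {
            std::vector<uint8_t> px;
            int lo = 255, hi = 0;
            for (int y = by; y < by + B && y < height; ++y)
                for (int x = bx; x < bx + B && x < width; ++x) {
                    uint8_t v = img[y * width + x];
                    px.push_back(v);
                    if (v < lo) lo = v;
                    if (v > hi) hi = v;
                }
            // Dead zone: gray range too small => no foreground in the block.
            bool hasFg = (hi - lo) >= deadZone;
            int t = hasFg ? otsuThreshold(px) : 0;
            for (int y = by; y < by + B && y < height; ++y)
                for (int x = bx; x < bx + B && x < width; ++x) {
                    uint8_t v = img[y * width + x];
                    out[y * width + x] =
                        (hasFg && v <= t) ? 255 : 0;  // impurities are darker
                }
        }
}
```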

4.5.3 Edge detection method

The threshold method often fails to achieve good results on images with a low signal-to-noise ratio or with complex background changes, because the histograms of such images show no obvious peaks and valleys; the sub-regional processing with dead zones was introduced to mitigate this. Although it refines the processing area and reduces the complexity of background changes for the maximum between-class variance method, it lacks adaptability: once the system is running, the size of the dead zone and the locations of the processing areas cannot be changed. Edge detection methods instead exploit the sensitivity of edge detection to foreground edges to segment images, and they handle images with complex background changes and low signal-to-noise ratios well.

1.   Selection of Edge Detection Method

Commonly used edge detection operators include the Roberts, Sobel, Prewitt, and Canny operators [28].

The Roberts operator uses local difference operations to detect edges and uses two 2×2 templates, as shown in Figure 4.10. The Roberts operator has high edge localization accuracy but is prone to losing edges and is sensitive to noise.


Figure 4-10: Roberts operator template

The Sobel operator uses a 3×3 neighborhood to avoid calculating gradients at interpolated points between pixels. The Sobel operator is also a gradient magnitude operator; its template is shown in Figure 4.11. The Sobel operator has a smoothing effect on images and is less affected by noise, but it often produces some false edges.


Figure 4-11: Sobel operator template

The Prewitt operator uses differential and filtering operations, just like Sobel. The difference lies in the template used, as shown in Figure 4.12:


Figure 4-12: Prewitt operator template

The Canny operator is based on the first derivative of the Gaussian function and is a nearly optimal detector for step edges corrupted by white noise, optimizing the product of the signal-to-noise-ratio and localization criteria. Canny also proposed three criteria for edge detection algorithms: good localization accuracy, a single response per edge, and a high signal-to-noise ratio.

The Canny operator first applies Gaussian filtering to the image, then calculates the gradient and saves the gradient magnitude and direction. The combination of these two calculations forms the first derivative of the Gaussian function. The first derivative of the Gaussian function is the best compromise between noise immunity and precise localization. This operator is symmetric along the edge direction and anti-symmetric along the direction of vertical edges. This means that the operator is particularly sensitive to edges in the direction of the sharpest change while it acts like a smoothing operator along the edge. The gradient calculation and Gaussian smoothing have been introduced earlier and will not be discussed here.

The magnitude image obtained after the gradient calculation shows ridges centered around edges. The larger the gradient value corresponding to a pixel, the more significant the gray-level change at that pixel. Therefore, to determine the edge, it is necessary to refine the ridges in the magnitude image, retaining only the points with the largest local change in magnitude. This process of refining the ridges is called non-maximum suppression (NMS).

NMS refines the gradient ridges by eliminating the magnitudes of all non-ridge peaks in the gradient image, thus obtaining edges. This algorithm first reduces the range of gradient angles to one of the four sectors of the circle, as shown in Figure 4-13.

Figure 4-13: Partition of gradient directions into four sectors for non-maximum suppression

The gradient direction at each pixel is reduced to one of four sectors, and the sector value is denoted ζ[i, j]. The four sectors are numbered 0, 1, 2, and 3, corresponding to the four possible orientations of a line through the center of a 3×3 neighborhood; the circular partition of possible gradient directions is marked in degrees.

The algorithm applies a 3×3 neighborhood to every point of the magnitude array M[i, j]. At each point, the center magnitude M[i, j] is compared with the magnitudes of the two neighboring points along the gradient line given by the sector value ζ[i, j] at the center of the neighborhood. If the center magnitude is not greater than both neighboring magnitudes along the gradient direction, the point is considered not to be an edge, and N[i, j] is set to zero. Through this process, the ridge bands of M[i, j] are thinned to a width of one pixel:

$$N[i, j] = \mathrm{NMS}\big(M[i, j],\ \zeta[i, j]\big)$$

This is the process of non-maximum suppression. If N[i, j] denotes its result, the non-zero values correspond to contrast at intensity step changes in the image. Although the image was smoothed with a Gaussian filter in the first step of edge detection, the magnitude image N[i, j] after non-maximum suppression still contains many false edges caused by noise and fine texture. The contrast of false edge segments is generally small in practice, but they still interfere with normal image visualization.

To reduce false edges, N[i, j] must be thresholded, with values below the threshold set to zero. Selecting the threshold is difficult, however: if it is too low, too many false edges remain; if it is too high, edge contours are lost. To best preserve the edge information of the image, we threshold N[i, j] with a double threshold method.

The double threshold method involves applying two thresholds separately to the image, one low and one high. The image after thresholding with the low threshold contains few false edges, and the image after thresholding with the high threshold has clear edge contours. By combining the advantages of the two thresholded images, the low-threshold image constantly collects edges from the high-threshold image to supplement the broken contours. The final edge detection result is the resulting image after contour supplementation using the low-threshold image.

The edge collection process uses the region-growing method to recursively perform edge operations on the neighboring points of the edge points in the low-threshold image and the corresponding points in the high-threshold image.

In summary, the implementation steps of the Canny edge detection algorithm we use are as follows (a code sketch follows the list):

(1) Filter the image with a Gaussian filter.

(2) Compute the gradient magnitude and direction of all pixels in the image using first-order partial derivatives and finite differences, and store the results.

(3) Refine the gradient magnitude using the non-maximum suppression process.

(4) Detect edges using a double threshold algorithm.

(5) Collect edges from the high-threshold image using the low-threshold image to complete edge detection.
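For reference, the five steps can be compressed into a few lines with OpenCV (an assumption for illustration; the original system implemented the algorithm in its own C/C++ code). cv::Canny performs the gradient computation, non-maximum suppression, and dual-threshold hysteresis edge linking internally, so only the Gaussian filtering is applied separately:

```cpp
// Sketch: dual-threshold Canny edge detection using OpenCV.
// File names and threshold values are illustrative assumptions.
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.4);  // step (1)
    // Steps (2)-(5): gradients, NMS, dual thresholds, edge collection.
    cv::Canny(blurred, edges, 40, 120);   // low and high thresholds assumed
    cv::imwrite("edges.png", edges);
    return 0;
}
```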

The results of each step of the Canny edge detection algorithm are shown in Figure 4.14, and the results of the Roberts, Sobel, and Prewitt operators are shown in Figure 4.15. The edges detected by the Roberts operator are the least clear; the Sobel and Prewitt operators detect relatively obvious edge information but produce many false edges due to noise. The Canny algorithm detects clear edges while filtering out false ones and gives the best performance. Therefore, this system uses the dual-threshold Canny edge detection method to extract image edges.

Figure 4-14: Results of each step of the dual-threshold Canny edge detection
Figure 4-15: Detection results of the other three edge detection operators

2.   Region Filling

After edge detection, only the contour of the foreground target is preserved, so region filling of the target contour is required to detect the complete target. Two methods are generally used: the parity check method and the seed filling method. The parity check method is based on the principle that a straight line crosses the contour curve of any region an even number of times. Its implementation: first count the intersection points of a scan line with the contour, then number the line segments between consecutive intersection points starting from 1; all points on odd-numbered segments lie inside the contour. The seed filling method requires a point inside the region as the seed and defines all points reachable from the seed without crossing the contour as inside the contour.

The parity check method requires knowledge of the target contour. However, in testing the impurity detection system for beverage filling, we found that a target's edge information cannot always be completely detected; edges are often discontinuous, so this method cannot achieve satisfactory results. The seed filling method usually requires human-machine interaction to supply seeds, which is impossible in an online detection system, so other methods must be found. Many studies [33-36] have proposed contour filling methods that work from contour chain codes, and they can all achieve faster and more accurate filling than traditional algorithms [37]. However, they place high demands on contour completeness and suffer from complex programming or a large computational burden [37], so we do not use chain-code filling methods.

Analysis of the edge detection results on experimental images shows that although the extracted and repaired foreground edges often contain breaks, the gaps are small and rare, and the distance between different foreground objects is usually much greater than the maximum radius of a foreground object contour. Based on these characteristics, we choose a method similar to parity checking, but abandon the differentiation of individual foreground contours and directly use row scanning and column scanning to fill the area between edge points whose distance is below a threshold. The implementation steps are as follows (a code sketch follows the list):

(1) Back up the edge detection image and perform the following row scanning and column scanning separately.

(2) Perform row scanning on the edge detection image, calculate the distance between all adjacent black points, and record them.

(3) Analyze the recorded values, compare the recorded values with the maximum foreground object contour radius set, and mark the line segments whose recorded values are less than the set radius value.

(4) Analyze the marked line segments: fill isolated marked segments, and for runs of consecutive marked segments fill only the odd-numbered ones. This prevents filling the hole areas of regions that would otherwise be wrongly filled. For example, in Figure 4.16, only line segments 1 and 3 are filled, and line segment 2 inside the hole is not filled.


Figure 4-16: Area with Holes

(5) Scan and fill the backup image column by column, following the same principles as steps (2) to (4) mentioned above.

(6) Perform a logical OR operation on the images obtained from row scanning and column scanning to combine the results and obtain the final filled image.
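A minimal C++ sketch of the row-scanning half of this method (steps 2-4); column scanning is symmetric, and the two results are OR-ed as in step (6). Edge pixels are assumed to have value 255, and the odd-numbered-segment rule is simplified:

```cpp
// Sketch of row-scan filling between nearby edge pixels.
#include <cstdint>
#include <vector>

void fillRows(const uint8_t* edges, uint8_t* out,
              int width, int height, int maxRadius) {
    for (int y = 0; y < height; ++y) {
        // Step 2: record positions of edge pixels on this row.
        std::vector<int> pts;
        for (int x = 0; x < width; ++x) {
            out[y * width + x] = edges[y * width + x];
            if (edges[y * width + x] == 255) pts.push_back(x);
        }
        // Steps 3-4: fill gaps narrower than maxRadius, alternating so
        // that interior holes stay unfilled.
        bool fillNext = true;
        for (size_t i = 1; i < pts.size(); ++i) {
            int gap = pts[i] - pts[i - 1];
            if (gap > 1 && gap < maxRadius) {
                if (fillNext)
                    for (int x = pts[i - 1] + 1; x < pts[i]; ++x)
                        out[y * width + x] = 255;
                fillNext = !fillNext;   // alternate odd/even segments
            }
        }
    }
}
```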

The effect of using the above area filling method to fill the image obtained from Canny edge detection is shown in Figure 4-17.


Figure 4-17: Effect diagram after area filling

As seen in the figure above, the edge detection method achieves better results than the maximum between-class variance method introduced in the previous section. Although edge detection requires more space and time, the extra computing power of the multi-processor solution tolerates this algorithmic complexity and offsets the shortcoming. Therefore, we use the edge detection method to implement image segmentation.

4.6 Connected Domain Analysis and Correction

In the previous section, image segmentation was achieved using the edge detection method, which can achieve good results under ideal conditions. However, this image segmentation method has certain assumptions, such as the completeness of contours (there can be breaks, but they cannot be too long) and the distance between contours (the distance should be greater than the maximum contour radius), etc. If these assumptions encounter special cases, it may cause deviation in the region segmentation.

As shown in Figure 4-18, there are two edge contours, and one of the contours has a break, and the two contours are close to each other. When performing row scanning and filling, because contour 1 has a break, the area between the two contours in the row where the contour break occurs is filled, but the areas inside the two contours cannot be filled, resulting in an error in the area filling.


Figure 4-18: Schematic diagram of contours that may produce filling errors

The two visible objects shown in Figure 4.19 are separated by a short distance. After Gaussian filtering, the edges between them become more blurred, producing region adhesion and likewise causing errors in region segmentation.

Figure 4-19: Comparison of the adhesion phenomenon caused by Gaussian filtering

Furthermore, when using frame differencing to suppress the background, white spots with significantly different grayscale values from the surrounding areas are often left at the position of the foreground area in the previous frame image corresponding to the next frame image. These white spots can be extracted and filled in during edge detection, causing the foreground in the previous frame image to reappear in the next frame image, especially when the exposure levels of the two frames differ greatly, as shown in Figure 4.20:

Figure 4-20: White spots left by frame differencing, shown after grayscale stretching

To prevent erroneous connected-domain information from entering the analysis phase, the connected-domain information must be corrected, and correction requires a basis. The image produced by Canny edge detection and filling is binary and provides no information usable for correction. We therefore look to the original image, which stores the raw data captured at acquisition time: after region-of-interest positioning is completed, the image is backed up so that the original information can be extracted for connected-domain correction after segmentation.

The analysis of the segmentation errors above shows that these phenomena are tied to the specific segmentation algorithm; algorithms using different segmentation strategies do not share the problem. For example, the maximum between-class variance method introduced earlier can be used to re-segment the image.

Our measure is to first pre-segment the image with the Canny edge detection method, then analyze the connected domains of the segmented image, extract the minimum enclosing rectangle of each connected domain, and re-process the corresponding area of the backed-up original image using dead-zone maximum between-class variance binarization. If the number of connected domains in the re-segmented area is greater than 1, a preprocessing error has occurred, and the pre-segmented image is updated with the new connected-domain information. If the number equals 1, the domain's position is compared with that of the pre-segmented domain: if the position difference exceeds a threshold, the domain is a white spot and its information is deleted; otherwise the preprocessing is correct, and the search continues with the next connected domain to be analyzed. The algorithm flow is shown in Figure 4.21.


Figure 4.21: Connected Domain Correction Algorithm Flowchart

It is worth noting that the analysis and correction of connected domains are carried out simultaneously with the connected domain feature extraction work introduced in the next chapter. While extracting the connected domain features, the domain information is analyzed, and necessary correction is made. Since this part is closely related to the region segmentation, it is introduced in this chapter.

Combining the edge detection and thresholding methods mentioned above, the advantages and disadvantages of the two methods complement each other. Image segmentation is successfully achieved, laying a good foundation for future feature extraction and analysis.

Chapter 4: Summary

This chapter has introduced the image preprocessing content of the system's image processing algorithm. First, the system's detection content and image characteristics were analyzed, and the region-of-interest location algorithm was designed around the fact that the target's position in the image is uncertain. Then the frame-difference background suppression, Gaussian filtering for noise reduction, grayscale stretching for contrast enhancement, and dual-threshold Canny edge detection for image segmentation were introduced, achieving the segmentation of visible objects from the background in static images. Finally, the shortcomings of these algorithms were analyzed, and connected-domain analysis and correction based on the maximum inter-class variance method was added to correct the segmentation errors, achieving good segmentation of visible objects and preparing for the feature extraction and feature-value matching of the next chapter.

Chapter 5: Design of the Comprehensive Analysis Layer Algorithm

The purpose of the comprehensive analysis layer of the impurity detection system is to determine the properties of visible objects from their shape and motion trajectory and thereby judge the quality of the beverage; ultimately this is the analysis and tracking of targets. This chapter focuses on the target tracking algorithm, covering target feature extraction, feature matching, trajectory generation and analysis, and liquid-level detection. Feature-value extraction is implemented on the processing unit, while the remaining work is carried out on the main processing unit.

5.1 Overview of Target Tracking Algorithm

Target tracking means finding the location of the region of interest in each image of a sequence. In this project the motion trajectory of every visible object must be analyzed. Since the processing in the previous chapter operates on single frames, the visible objects in the frame sequence must be analyzed together to complete target tracking and obtain their motion trajectories.

5.1.1 Introduction to Common Target Tracking Algorithms

Target tracking methods can be divided into hypothesis-based tracking, motion-model-based tracking, feature-based tracking, and others. Optical flow is a typical hypothesis-based algorithm, but its time complexity is very high; in this system the tracking algorithm runs on the main processor and directly affects processing time, so optical flow is not considered. Motion-model-based tracking predicts the target's position from its known likely direction of motion, but this system contains many kinds of moving targets whose motion laws differ greatly, making such prediction difficult. Weighing these factors, we use feature-based tracking, in which the extracted feature values represent the essential characteristics that distinguish the target from other objects. Usually several features of the target are extracted and combined into a feature template, and the target is tracked through template matching.

Feature selection is a key part of this target-tracking method. Some commonly used features are:

(1) Interest points

Interest points best represent the image features, such as corner points, inflection points, and T-intersections. The standards for measuring the quality of an interest-point detection operator are localization accuracy, repeatability, and information content. Moravec proposed an operator in 1977 that detects interest points based on the image grayscale autocorrelation function. Harris et al. [40] improved the Moravec operator, making it stable when detecting images with changes in illumination, rotation, and perspective deformation. Literature [41] uses the Harris operator to obtain interest points, then uses variable normalization to search for the corresponding points along the epipolar line in two images, and finally uses a third image to obtain a more accurate correspondence. Literature [42] proposes a hierarchical image-matching algorithm based on wavelet transform: the algorithm first decomposes the image, then extracts interest points from each layer of the decomposed image for matching, and uses a parallel strategy to improve the matching speed.

(2) Moment Features

Moments are used to express the shape features of an image and have been widely applied in computer vision and pattern recognition. In 1962, M.K. Hu [43] first proposed the concept of geometric moments and derived moment invariants with translation, scale, and rotation invariance using nonlinear combinations of geometric moments. Because geometric moments are information-redundant and sensitive to noise, orthogonal moments such as Zernike moments and Legendre moments were later derived. In addition, literature [44] proposed and proved the optimal matching pair theorem, using fuzzy dissimilarity as the matching measure and moment invariants as the matching features. Huang and Cohen [45] proposed a curve-matching algorithm that uses weighted B-spline curve moments to solve the problems of affine transformation and occlusion.

(3) Contours

Contours, also known as region boundaries, are one of the most basic features of an image and carry information about the object's shape. An object's contours lie between the object and the background, as well as between different objects. Contours are most often represented using chain codes, of which there are several variants. In 1961, Freeman proposed the Freeman chain code for object contours [46], which promoted the widespread application of chain-code technology in object analysis, image compression, and computer graphics. Gap codes and vertex codes were introduced later, and most existing boundary-tracking algorithms are still based on the Freeman chain code.

(4) Other External Features

Other external features of connected domains mainly include easily extracted quantities such as area, length, width, perimeter, and density. These features are fast to extract, but individually they carry limited discriminative power, so they are not usually used alone. In addition, the mean shift algorithm can also be used to track moving objects. Its basic idea is to iteratively search feature space for the region with the highest density of sample points: the search point "drifts" toward the local density maximum along the direction in which sample density increases. The mean shift algorithm was first proposed by Fukunaga et al. [48] in 1975, where it meant the mean vector shift. Essentially, mean shift is an adaptive gradient-ascent method for finding peaks, so it can also be used in fields such as mode detection and optimization. Since target tracking is essentially a problem of finding an optimal match, the mean shift algorithm can be applied to it; generating and updating the target template are its main difficulties.

5.1.2 Target Tracking Algorithm Used by This System

In the impurity detection system for beverage filling, the shapes, types, and locations of impurities differ significantly from bottle to bottle and liquid to liquid. Generating a template for each instance would significantly hurt the system's real-time performance, so it is appropriate to select features that are simple, fast to compute, and well suited to the characteristics of this system. Information such as area, perimeter, density, length, width, and inter-target distance is simple to extract, but these quantities describe the target only coarsely. Contour features, by contrast, express the external shape of the target precisely and can easily be made rotation- and mirror-invariant, compensating for the weakness of the former. This system therefore combines the two kinds of features when extracting feature values.

5.2 Extraction of Connected Component Feature Values

The extraction of feature values from connected components is the basis of target tracking. Extracting feature values that truly represent target characteristics can greatly optimize the tracking performance of the system. This section introduces the feature value extraction algorithm used by this system.

5.2.1 Connected Component Segmentation Method

A connected component [50] is an independent group of pixels that all have the same grayscale value and are adjacent to one another in the binary image. Pixel connectivity can be divided into four-connectivity and eight-connectivity: pixels that meet the four-connectivity requirement share a common edge, while pixels that meet the eight-connectivity requirement need only share a common point. In Figure 5.1, pixels that meet the four-connectivity standard are shown in gray in Figure 5.1(a), and those that meet the eight-connectivity standard are shown in gray in Figure 5.1(b). Different connectivity standards produce different segmentation results for the same image, which is one factor affecting image segmentation. This system analyzes connected components using the eight-connectivity standard.


Figure 5-1: Pixel connectivity

The standard used to divide connected domains affects not only the segmentation of targets but also the extraction of image boundaries. The image boundary is the set of points that belong to the connected domain and are adjacent to the background. The same connected domain can be interpreted under both the four-connected and the eight-connected standard, but the boundaries obtained differ. For a connected domain under the four-connected standard, the boundary is the set of pixels in the domain that have an eight-connected relationship with at least one background pixel; for a connected domain under the eight-connected standard, the boundary is the set of pixels in the domain that have a four-connected relationship with at least one background pixel. For example, for the connected domain shown in Figure 5.2(a), its 4-connected-domain boundary is shown in Figure 5.2(b), and its 8-connected-domain boundary is shown in Figure 5.2(c).


Figure 5-2: Connected Domain Boundaries

5.2.2 Extraction of Features such as Area and Perimeter

The area of a connected domain is the total number of pixels within the domain. After segmentation with the method described in Chapter 4, however, the image is binary and carries no per-domain information. To distinguish the connected domains and record which pixels belong to which domain, the connected components must be labeled. After labeling, every pixel is tagged with its domain's number, making it easy to obtain information such as the area and perimeter of each connected domain.

The commonly used methods for labeling connected components are the sequential algorithm and the recursive algorithm [28]. The sequential algorithm processes only two rows of the image at a time, so it places no strict demands on the image processor's memory and can be used on resource-limited processors such as DSPs. The specific algorithm is as follows [51] (assuming the grayscale value of the foreground in the binary image is zero):

(1) Scan the image from left to right and from top to bottom to find pixels with a value of 0 in the image.

(2) If a pixel with a value of 0 is found and has not been labeled, process it according to the following rules:

1.   Analyze the pixel’s left, top, top-left, and top-right pixels. If only one of these pixels is labeled, copy that label.

2.   If two or more pixels are labeled and have the same label, copy that label.

3.   If two or more pixels are labeled but have different labels, copy the label of one pixel and record the different labels in an equivalence table.

4.   If none of the four pixels are labeled, assign a new label to the pixel and record it in the equivalence table, with label numbers assigned from low to high.

(3) Continue scanning to find other pixels that need to be labeled, and return to step (2).

(4) Scan the equivalence table to find the lowest label number in each equivalence set.

(5) Scan the image and replace each label with the lowest label number in the equivalence table.

It can be seen that the sequential algorithm only needs to scan the image twice. The first scan completes labeling the foreground pixels and records the connected domains with different labels in the equivalence table. The second scan processes the connected domains with higher labels in the equivalence table and registers them with lower labels, thereby completing the integration of the connected domains. After processing all the equivalence sets, the entire image is labeled.
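
Below is a minimal C++ sketch of this two-pass procedure; a union-find structure plays the role of the equivalence table, and all names are illustrative rather than taken from the thesis.

#include <algorithm>
#include <vector>

// Two-pass sequential labeling of the foreground (value 0) pixels of a
// binary image, 8-connected. Labels in the result are the lowest label of
// each equivalence set, as in step (5).
std::vector<int> labelSequential(const std::vector<unsigned char>& img,
                                 int width, int height)
{
    std::vector<int> label(width * height, 0);
    std::vector<int> parent{0};   // parent[0] is unused (0 = unlabeled)
    auto find = [&](int x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]];
        return x;
    };

    // First pass: label pixels and record equivalences (rules 1-4).
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            if (img[y * width + x] != 0) continue;     // background pixel
            static const int off[4][2] = {{-1, 0}, {-1, -1}, {0, -1}, {1, -1}};
            std::vector<int> seen;                     // labels of left, top-left,
            for (const auto& d : off) {                // top, and top-right pixels
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || nx >= width || ny < 0) continue;
                int l = label[ny * width + nx];
                if (l) seen.push_back(find(l));
            }
            if (seen.empty()) {                        // rule 4: new label
                parent.push_back(static_cast<int>(parent.size()));
                label[y * width + x] = static_cast<int>(parent.size()) - 1;
            } else {                                   // rules 1-3
                int lowest = *std::min_element(seen.begin(), seen.end());
                label[y * width + x] = lowest;
                for (int l : seen) parent[find(l)] = lowest;
            }
        }

    // Second pass: replace each label with the lowest one in its set.
    for (int& l : label)
        if (l) l = find(l);
    return label;
}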

Recursive algorithms are very inefficient on serial processors, so they are mainly used on highly parallel machines. In this system, all connected-domain analysis runs on the processing machine, and connected-component analysis is one of its main processing tasks. Because the processing machine uses a multi-core CPU, the recursive algorithm can be implemented with multi-threaded programming to improve execution efficiency. The steps for labeling eight-connected domains with this algorithm are as follows (a stack-based sketch follows the list):

(1) Scan the image from left to right and top to bottom to find an unlabeled pixel with a value of 0, and assign it a new label n.

(2) Recursively assign label n to all pixels with value 0 in the 8-neighborhood of the current pixel.

(3) Continue scanning the image to find other unlabeled points. If no unlabeled points are found, end the connected component labeling. Otherwise, return to step 1.
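
A compact sketch of this labeling scheme follows, with the recursion replaced by an explicit stack so that large regions cannot overflow the call stack; the multi-threaded decomposition mentioned above is omitted, and all names are illustrative.

#include <stack>
#include <utility>
#include <vector>

// Labeling scheme of steps (1)-(3). Foreground pixels have value 0;
// 'label' starts out all zeros.
void labelRecursive(const std::vector<unsigned char>& img,
                    std::vector<int>& label, int width, int height)
{
    int n = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            if (img[y * width + x] != 0 || label[y * width + x] != 0) continue;
            ++n;                                   // step (1): new label n
            std::stack<std::pair<int, int>> st;
            label[y * width + x] = n;
            st.push({x, y});
            while (!st.empty()) {                  // step (2): spread label n
                auto [cx, cy] = st.top();
                st.pop();
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {   // 8-neighborhood
                        int nx = cx + dx, ny = cy + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                            continue;
                        if (img[ny * width + nx] == 0 &&
                            label[ny * width + nx] == 0) {
                            label[ny * width + nx] = n;
                            st.push({nx, ny});
                        }
                    }
            }
        }                                          // step (3): the outer scan continues
}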

After implementing the connected component labeling, information such as the minimum bounding rectangle, region centroid, area, and perimeter of each connected domain can be easily obtained based on the labeling results. Analysis and correction of connected domains can be performed based on the minimum bounding rectangle of connected domains. Connected domain matching can be carried out using information such as the area and perimeter of connected domains. The centroid coordinates of connected domains represent their positions for subsequent connected domain trajectory generation.


Figure 5-3: Minimum bounding rectangle of connected domain

The density of a connected domain can be calculated from its area and perimeter. Density is a commonly used shape parameter in connected-domain analysis, defined as the ratio of the area of the domain to the square of its perimeter:

C = A / L²

where L and A are the perimeter and area of the region, respectively, and C is its density. By this formula the circle is the densest figure, with density 1/(4π); every other figure has a smaller density. Because the density is a very small number, the formula is often modified to enlarge its representation range for convenient analysis, for example by taking the reciprocal or multiplying by a coefficient. This system uses the reciprocal of density, for which the circle has the smallest value, 4π. Let the reciprocal of density be C′:

C′ = L² / A

5.2.3 Extraction of Contour Chain Code

A chain code is a coded expression of a boundary: it describes an object by a sequence of directed line segments of unit length, based on the connectivity between boundary points. Freeman first proposed the chain code in 1961 [46]. The Freeman chain code has two types, the four-direction chain code and the eight-direction chain code, as shown in Figure 5-4, where 0, 1, 2, and 3 represent the four directions and 0, 1, 2, 3, 4, 5, 6, and 7 represent the eight directions. Starting from any pixel on the image boundary, walking along the boundary in a fixed direction and recording the direction of each step with a code until the starting point is reached again forms the Freeman chain code of the image. The Freeman chain code of an image boundary can therefore be written as {(x0, y0), a0, a1, a2, …, an-1}, where (x0, y0) is the starting point and a0, a1, a2, …, an-1 are the direction codes.

The four-direction chain code is mainly used for extracting the boundary chain code of four-connected regions. In contrast, the eight-direction chain code is used for extracting the boundary chain code of eight-connected regions.


Figure 5-4: Freeman Chain Code

The quality of boundary chain code tracing depends mainly on two factors. First, selecting the starting point for tracing directly affects the accuracy and complexity of the tracing. Second, the selection of the tracing criteria should be easy to understand and analyze without adding too much complexity to the program design. If these two factors are not properly selected, it can often lead to two problems:

(1) Missing tracing, that is, losing the contour.

(2) Destroying connectivity, tracking one contour into multiple contours.

Either situation complicates the matching of contour feature values: an incompletely extracted or wrongly tracked contour chain code adds unnecessary complexity to the matching stage. Matching on partial regions can reduce this complexity, but the matching coefficients extracted that way are incomplete and lose information. Extracting complete contour chain codes therefore has a great impact on the matching accuracy of the system.

Since four-connected regions use four-direction chain-code tracing and eight-connected regions use eight-direction tracing, their tracing criteria differ. This system processes connected domains entirely with the eight-connected method. The eight-connected chain-code tracing criteria used by this system are as follows (a baseline sketch in code follows the list):

(1) Scan the image in a left-to-right and bottom-to-top manner and select the bottom-left boundary of the connected domain as the starting point.

(2) Mark the already extracted boundary points.

(3) When there is one boundary point in the eight-connected domain of the current boundary point, extract this boundary point. When there are multiple boundary points, select them according to the distance between the boundary points and the current boundary point in the extracted chain list, with the longer distance having higher priority. The distance between an unextracted boundary point and the current boundary point is considered infinite; boundary points with the same distance are extracted in the order of eight-connected chain codes 7, 0, 1, 2, 3, 4, 5, and 6.

(4) Use the method in step 3 to extract a new boundary point for each newly extracted boundary point. The termination condition for boundary extraction is that the next boundary point to be extracted is the starting point.
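
For comparison, the sketch below implements the conventional 8-connected boundary tracing rule (initial direction and scan restart offsets as in the standard textbook formulation) and emits a Freeman chain code. The thesis's own distance-priority criterion in steps (1)-(4) modifies the neighbor choice and tolerates repeated visits, so this is a baseline, not the system's exact tracer.

#include <vector>

// Freeman 8-direction steps with the y axis pointing down:
// 0 = E, 1 = NE, 2 = N, 3 = NW, 4 = W, 5 = SW, 6 = S, 7 = SE.
static const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
static const int DY[8] = { 0, -1, -1, -1, 0, 1, 1, 1 };

// (sx, sy) is the starting boundary pixel; dir = 7 matches a start pixel
// found by a left-to-right, top-to-bottom scan (this system scans bottom-up
// instead). The stop test is simplified; a robust tracer also compares the
// re-entry direction with the first step.
std::vector<int> traceChainCode(const std::vector<unsigned char>& fg,
                                int width, int height, int sx, int sy)
{
    auto inside = [&](int x, int y) {
        return x >= 0 && y >= 0 && x < width && y < height &&
               fg[y * width + x] != 0;
    };
    std::vector<int> chain;
    int x = sx, y = sy, dir = 7;
    do {
        // Resume the counter-clockwise neighborhood scan just past the
        // direction we arrived from.
        int start = (dir % 2 == 0) ? (dir + 7) % 8 : (dir + 6) % 8;
        int found = -1;
        for (int i = 0; i < 8; ++i) {
            int d = (start + i) % 8;
            if (inside(x + DX[d], y + DY[d])) { found = d; break; }
        }
        if (found < 0) break;           // isolated pixel: nothing to trace
        x += DX[found];
        y += DY[found];
        dir = found;
        chain.push_back(found);
    } while (!(x == sx && y == sy) && chain.size() < fg.size());
    return chain;
}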

The boundary storage in this system directly stores feature data related to boundary chain codes. For convenience, the region boundaries are represented by numbers, and the boundaries are extracted from Figure 5.2(c) according to the above standards, resulting in (26, 27, 22, 23, 24, 25, 18, 10, 4, 1, 3, 8, 7, 2, 5, 11, 19). The result of extracting the boundary of the connected domain shown in Figure 5.5 is (1, 2, 3, 4, 5, 6, 7, 8, 7, 6, 5, 4, 3, 2).


Figure 5.5: Line-connected domain

The result of extracting the boundary of the connected domain shown in Figure 5.6 is (11, 10, 9, 2, 1, 2, 8, 7, 6, 5, 4, 3, 4, 5, 6, 7, 8, 10).


Figure 5-6: Trapezoidal connected domain

The result of extracting the boundary of the connected domain shown in Figure 5.7 is: (15, 14, 13, 10, 11, 12, 11, 10, 4, 3, 2, 1, 2, 3, 4, 8, 7, 6, 5, 6, 7, 8, 13, 14).


Figure 5-7: Cross-connected domain

The three situations above are common difficulties in boundary-tracking extraction. With the criteria proposed by this system, no contours were lost and no connected domains were broken apart. Although some boundary points are tracked repeatedly, the shapes above have particularly obvious and important features for connected-domain matching, and repeated tracking strengthens the weight of similar sub-regions, which benefits later matching. In addition, boundaries extracted under this system's criteria naturally form a closed loop, which simplifies the later irrelevance design of contour feature values.

After extracting the image chain code, selecting the feature data stored in the chain code is necessary. The feature data is the basis for calculating the contour feature matching coefficient. The feature value this system selects is the distance from the boundary point to the centroid of the connected domain.

5.2.4 Normalization and Irrelevance Design of Contour Feature Values

In the impurity detection system for beverage filling, the area, perimeter, and other properties of the same connected domain segmented in adjacent frames may change slightly, and the target may rotate or flip while the images are being shot. Such changes do not affect the extraction and matching of feature values such as area, perimeter, and density. They do, however, change the extracted boundary chain code: rotation and flipping rotate and reverse the chain code, which would completely defeat boundary chain-code matching. To eliminate these effects, the chain code must be normalized in length and designed to be rotation- and mirror-invariant.

The normalization design of the contour refers to the stretching or shrinking of the contour boundary chain code so that the two matched contour chain codes have the same length, thereby solving the problem of the different perimeters of the same connected domain segmented in different frames.

The irrelevance design of the contour refers to making the contour boundary chain code invariant to rotation and reflection, which removes the effect on matching of visible objects rotating or flipping during shooting. The boundary chain code extracted by the method of the previous section is closed, i.e., connected end to end. The normalization algorithm is relatively simple: the chain code is stretched or shrunk proportionally, and the normalized feature values are interpolated from the original ones, as in the sketch below.
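
A minimal sketch of this normalization, assuming the per-point feature is stored as a float sequence and resampled with linear interpolation; the names are illustrative.

#include <vector>

// Resample a closed contour's per-point feature sequence (e.g. centroid
// distances) to a fixed length so that two contours with different
// perimeters can be compared element by element.
std::vector<float> normalizeContour(const std::vector<float>& src, int targetLen)
{
    std::vector<float> dst(targetLen > 0 ? targetLen : 0);
    if (src.empty() || targetLen <= 0) return dst;
    const int n = static_cast<int>(src.size());
    const double scale = static_cast<double>(n) / targetLen;
    for (int i = 0; i < targetLen; ++i) {
        double pos = i * scale;          // position in the source sequence
        int k = static_cast<int>(pos);
        double frac = pos - k;
        int k1 = (k + 1) % n;            // the closed contour wraps around
        dst[i] = static_cast<float>((1.0 - frac) * src[k] + frac * src[k1]);
    }
    return dst;
}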

The contour irrelevant design requires a change in the order of the boundary chain code. Different storage strategies for chain codes have a significant impact on the algorithm’s performance. When using an array to store the chain code, inserting and deleting operations on the array requires moving the data behind the operation position, wasting CPU time. We use a doubly linked list to store the boundary chain code. The C language description of a doubly linked list that stores floating-point numbers is as follows:

struct element
{
    float data;              /* feature value stored at this boundary point */
    struct element *left;    /* pointer to the previous element */
    struct element *right;   /* pointer to the next element */
};

In the list, data holds the element's numerical value, left points to the previous element, and right points to the next element. A linked list usually has a head pointer to its first element. Because the chain code is connected end to end, it is stored so that the right pointer of the last element points back to the element at the head, and the left pointer of the head element points to the last element. This circular storage makes operations such as rotating and reversing the chain code easy: rotation just moves the head pointer, and reversal walks the list through the left pointers, as sketched below.
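
For instance, with the circular list above, rotating the chain code reduces to advancing the head pointer; the helper name is illustrative.

/* Each call shifts the chain code's starting point by 'steps' positions;
   no data moves, only the head pointer. */
struct element *rotate(struct element *head, int steps)
{
    while (steps-- > 0)
        head = head->right;
    return head;
}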

5.3 Matching Algorithm for Connected Domain Features

To analyze the motion trajectory of a visible foreign object in each frame, it is necessary to determine its position in each frame. In the previous section, we extracted features such as the connected domain’s area, perimeter, density, and contour-linked code. However, the feature values extracted from each frame are isolated and need to be matched to establish a connection between them, thereby generating the motion trajectory of the foreign object. Matching the feature values of connected domains is an optimal matching process aimed at finding the best matching solution for all connected domains in adjacent frames.

5.3.1 Calculation of Feature Matching Coefficients

Since the area, perimeter, density, and other feature values of the same target do not vary significantly between frames, these matching coefficients can be expressed directly as ratios of the corresponding feature values. Writing the coefficients for area, perimeter, and density between connected domain j in frame i and connected domain j+1 in frame i+1 as A, B, and C, for example

A = min(Aj, Aj+1) / max(Aj, Aj+1)
B = min(Lj, Lj+1) / max(Lj, Lj+1)
C = min(C′j, C′j+1) / max(C′j, C′j+1)

where Aj and Aj+1 are the areas of connected domains j and j+1, Lj and Lj+1 their perimeters, and C′j and C′j+1 the reciprocals of their densities; taking the smaller value over the larger keeps each coefficient in (0, 1].

Since the time interval between frames is very short (around 100 ms), the distance a target moves between adjacent frames is also very short, so the distance between connected domains can enter the matching coefficient. A maximum possible inter-frame distance MAXD is set: if the distance between the connected domains exceeds MAXD, the distance matching coefficient is set to zero; otherwise it is derived from the centroid distance, for example

D′ = 1 − D / MAXD  (D ≤ MAXD),   D′ = 0  (D > MAXD)

where D is the centroid distance between connected domains j and j+1 and MAXD is the maximum possible distance set.

The contour matching coefficient is the maximum match found after the two contours being compared are normalized and then rotated and reversed. Let the contour boundary chain codes of connected domains j and j+1 be (k1, k2, k3, …, kn) and (a1, a2, a3, …, aN), where n and N are the lengths of the two chain codes. Normalize (k1, k2, k3, …, kn) to (m1, m2, m3, …, mN), then rotate and reverse (a1, a2, a3, …, aN), computing a feature-value match for each position of the list's head pointer; the maximum match found is taken as the contour boundary chain-code matching coefficient. A sketch of this search is given below.
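
The following sketch shows this search, assuming both centroid-distance sequences have already been normalized to the same length; the per-alignment score (mean absolute difference mapped into (0, 1]) is an illustrative choice, since the thesis's exact formula is not reproduced here.

#include <algorithm>
#include <cmath>
#include <vector>

// Score of one alignment of two equal-length feature sequences.
static double alignScore(const std::vector<float>& m,
                         const std::vector<float>& a, int shift, bool reversed)
{
    const int n = static_cast<int>(m.size());
    double diff = 0.0;
    for (int i = 0; i < n; ++i) {
        int j = reversed ? (shift - i + n) % n : (shift + i) % n;
        diff += std::fabs(m[i] - a[j]);
    }
    return 1.0 / (1.0 + diff / n);       // mean abs difference -> (0, 1]
}

// Maximum match over every rotation of 'a' and of its reversal; 'm' and
// 'a' must already be normalized to the same length.
double contourMatchCoefficient(const std::vector<float>& m,
                               const std::vector<float>& a)
{
    double best = 0.0;
    for (int s = 0; s < static_cast<int>(m.size()); ++s) {
        best = std::max(best, alignScore(m, a, s, false));   // rotation
        best = std::max(best, alignScore(m, a, s, true));    // reversal
    }
    return best;
}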

After all the individual matching coefficients have been calculated, the total connected-domain matching coefficient of the two domains is obtained by combining the area, perimeter, density, distance, and contour coefficients; the maximum value found during the rotation-and-reversal search is the one used for the contour chain-code term.

5.3.2 Preemptive Matching Algorithm for Connected Regions

It is not always possible for connected regions to be matched correctly during the matching process, and the task of the matching algorithm is to find an optimal matching solution.

In the experiment we found that during matching, one connected region in frame i+1 may be selected as the best match by several connected regions in frame i. Resolving this conflict by matching order (first come, first served) would clearly be unreasonable, so we adopt a preemptive matching scheme based on the maximum matching coefficient. The algorithm is as follows (a compact sketch in code follows the list):

(1) Find a region in frame i that has not been matched, where i is a natural number greater than 0 and less than the number of collected frames N; i is initialized to 1. If such a region is found, proceed to step (2); otherwise proceed to step (5).

(2) For an unmatched connected region j in frame i, calculate the matching coefficients with all connected regions in frame i+1 and store them in an array; each entry records a matching coefficient and the corresponding connected-region number.

(3) Find the largest matching coefficient in the matching coefficient array, and determine the connected region j+1 with the largest matching coefficient according to the position of the maximum matching coefficient.

(4) If the maximum coefficient found is less than the minimum reasonable matching coefficient f, mark region j's matching state as finished and return to step (1). If connected region j+1 is being matched for the first time, update the matching information of regions j and j+1, set both regions' states to matched, record the matched region numbers and the coefficient, and return to step (1). If connected region j+1 was previously matched with some region k in frame i, compare the current coefficient with the stored one. If the current coefficient is greater, update region j+1's match to region j, record the new coefficient, set region k's state back to unmatched, and return to step (1). If the current coefficient is less than or equal to the stored one, remove the coefficient between regions j and j+1 from the array and return to step (3) to continue searching for a reasonable match.

(5) Increase i by 1. If i > N−1, the matching ends; otherwise return to step (1).
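
The sketch below condenses steps (1)-(5) for a single pair of frames; coeff[j][k] is assumed precomputed from the coefficients above, minCoeff plays the role of the minimum reasonable coefficient f, and all names are illustrative.

#include <queue>
#include <vector>

// Preemptive matching between frame i and frame i+1. Returns, for each
// domain j of frame i, the index of its matched domain k in frame i+1
// (-1 = unmatched).
std::vector<int> preemptiveMatch(const std::vector<std::vector<double>>& coeff,
                                 double minCoeff)
{
    const int nj = static_cast<int>(coeff.size());
    const int nk = nj ? static_cast<int>(coeff[0].size()) : 0;
    std::vector<int> owner(nk, -1);            // current holder of each k
    std::vector<double> ownerScore(nk, 0.0);
    std::vector<int> result(nj, -1);
    std::vector<std::vector<double>> rows = coeff;  // candidates get crossed off

    std::queue<int> pending;
    for (int j = 0; j < nj; ++j) pending.push(j);

    while (!pending.empty()) {
        int j = pending.front();
        pending.pop();
        for (;;) {
            int bestK = -1;
            double bestC = 0.0;
            for (int k = 0; k < nk; ++k)       // step (3): largest coefficient
                if (rows[j][k] > bestC) { bestC = rows[j][k]; bestK = k; }
            if (bestK < 0 || bestC < minCoeff) break;  // step (4): no match
            if (owner[bestK] < 0 || bestC > ownerScore[bestK]) {
                if (owner[bestK] >= 0) {       // preempt the weaker owner,
                    result[owner[bestK]] = -1; // which must search again
                    pending.push(owner[bestK]);
                }
                owner[bestK] = j;
                ownerScore[bestK] = bestC;
                result[j] = bestK;
                break;
            }
            rows[j][bestK] = 0.0;              // weaker than the owner: drop it
        }
    }
    return result;
}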

The algorithm flowchart is shown in Figure 5.8:

Figure 5-8: Preemptive matching algorithm flowchart

5.4 Analysis and Judgment of Visible Object Movement Trajectory

Based on the matching relationship between adjacent frames obtained in the previous section, the connected domains with matching relationships between frames are connected. Their coordinates are recorded to form the motion trajectory of the visible object target. The system determines the properties of the visible object target through the external features and motion trajectory of each connected domain.

Taking the detection of impurities in healthy wine as an example, the main detection targets of the system are hair, fiber lint, black residue produced by poor filtration, and glass debris generated during cap pressing and transport. The images are taken after the detection bottle is flipped, so besides the above impurities, the visible objects obtained by region segmentation also include a certain number of bubbles. Black residue and glass debris move in the direction opposite to bubbles. The motion of fiber lint is less regular and is usually slowest along the central axis of the bottle. In addition, although little noise remains in the image after background suppression and filtering, its influence must still be considered for rigor. A connected domain formed by noise has a small area and a small matching coefficient, because its similarity across frames is low, and its matched trajectory is disorderly.

Based on the above analysis, this system adopts an expert decision-making identification algorithm that combines the features of connected domains with their motion trajectories. Because the image has been filtered during preprocessing, the area of a connected domain is larger than that of the corresponding visible object in the original image, so the thresholds must also be chosen larger. The final criteria for judging the properties of visible objects are as follows (restated in code after the list):

(1) First analyze the morphological features of the connected domain. If the area of the connected domain is greater than a threshold T1, judge the domain to be foreign matter and stop analyzing its properties.

(2) If the area of the connected domain is greater than a threshold T2 and the reciprocal of its density is greater than a threshold T3, judge the domain to be foreign matter and stop analyzing its properties.

(3) Analyze the motion trajectory of the connected domain. If its position in the image sequence moves away from the liquid surface, it indicates residue or glass debris. If its position moves toward the liquid surface, the visible object may be a bubble; if the average inter-frame displacement is also greater than a threshold S, it is judged a bubble. If its position is essentially unchanged, it is foreign matter attached to the bottle wall or a bubble in the bottle wall; if its area is greater than a threshold T4, it is judged a large foreign object on the wall and the product is unqualified. If its position changes unpredictably, the judgment is based on area and matching coefficient: if the area is smaller than a threshold T5 or the matching coefficient is smaller than a minimum value, the domain was formed by noise; otherwise it is judged to be fiber lint.
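
Restated as code, the rules read as follows; the enum values, threshold names T1-T5, S, and minMatch are illustrative stand-ins for the thesis's symbols, whose numeric values are not given here, and the slow-rising-object branch is an assumption.

// Motion categories derived from the connected domain's trajectory.
enum class Motion { TowardSurface, AwayFromSurface, Static, Erratic };
enum class Verdict { Impurity, Bubble, WallDefect, Noise, Lint, Pass };

Verdict classify(double area, double invDensity, double avgStep,
                 double matchCoeff, Motion motion,
                 double T1, double T2, double T3, double T4, double T5,
                 double S, double minMatch)
{
    if (area > T1) return Verdict::Impurity;                     // rule (1)
    if (area > T2 && invDensity > T3) return Verdict::Impurity;  // rule (2)
    switch (motion) {                                            // rule (3)
    case Motion::AwayFromSurface:        // residue and glass debris sink
        return Verdict::Impurity;
    case Motion::TowardSurface:          // rising fast enough: a bubble;
        return avgStep > S ? Verdict::Bubble   // slow risers treated as
                           : Verdict::Lint;    // lint (assumption)
    case Motion::Static:                 // attached to or inside the wall
        return area > T4 ? Verdict::WallDefect : Verdict::Pass;
    case Motion::Erratic:                // noise or drifting lint
        return (area < T5 || matchCoeff < minMatch) ? Verdict::Noise
                                                    : Verdict::Lint;
    }
    return Verdict::Pass;
}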

The algorithm implementation process is shown in Figure 5-9.


Figure 5-9: The determination of physical properties

In practical applications, connected domains with incomplete trajectories and isolated connected domains that find no match often appear, particularly when there are many bubbles. Incomplete trajectories are handled with the analysis algorithm described above. Isolated connected domains mostly arise from differing bubble counts between frames, broken impurity trajectories, or noise; when an impurity trajectory breaks, the trajectory is usually incomplete rather than absent. We therefore apply a more conservative static analysis to isolated domains, judging them impurities only when the connected-domain area is large and the reciprocal density is high.

5.5 Liquid Level Detection Algorithm

Section 5 of Chapter 3 briefly introduced the algorithm flow of the liquid-level detection module. Two points in the algorithm are key:

(1) Obtaining the detection points. The task of the module is to find the liquid level. Since the grayscale changes most sharply in the liquid-level area of the captured image, we compute the grayscale gradient along the vertical direction and, on each scanning line, take the point with the maximum gradient as the pending liquid-level detection point.
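
A sketch of this step for one scanning line, assuming an 8-bit grayscale image in row-major order; the names are illustrative.

#include <cstdlib>
#include <vector>

// Return the row with the largest absolute vertical grayscale gradient on
// one vertical scanning line: the pending liquid-level detection point.
int findLevelCandidate(const std::vector<unsigned char>& img,
                       int width, int height, int column)
{
    int bestRow = 1, bestGrad = -1;
    for (int y = 1; y < height - 1; ++y) {
        int g = std::abs(static_cast<int>(img[(y + 1) * width + column]) -
                         static_cast<int>(img[(y - 1) * width + column]));
        if (g > bestGrad) { bestGrad = g; bestRow = y; }
    }
    return bestRow;
}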

(2) Screening the pending detection points. Because the liquid-level module does not locate a detection area first, a scanning line may cross regions that contain no liquid level, and interference in the image may make some pending points unreliable. To guarantee the accuracy of the module, the pending points must therefore be screened. The screening hypothesis is that most pending points fall in the area actually covered by the liquid level. Each pending point's value is its distance, in pixels, from the bottom of the bottle. The implementation steps are as follows (see the sketch after the list):

1.   Set the liquid-level detection window, which is used to find where the pending points are most concentrated. The window has radius r and center c; c starts at the scanning start height f and moves toward the scanning end height t. Set the minimum acceptable number of level points to minnum, and initialize the running maximum of pending points in the window, maxnum, to zero.

2.   Count the pending points whose values lie between c − r and c + r, and denote the count num. If num is greater than maxnum, set maxnum = num and set the window center of the level area, levelcenter, to c.

3.   Increase c by 1. If c > t, go to step 4; otherwise go to step 2.

4.   Check maxnum. If maxnum < minnum, the bottle is empty or not full; go to step 6.

5.   Record all pending points with values between levelcenter − r and levelcenter + r as liquid-level points, and take their average as the liquid-level height.

6.   End.
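
A compact sketch of steps 1-6, assuming each pending point's value is its pixel distance from the bottle bottom; the names follow the steps above, and the zero initialization of maxnum is an assumption.

#include <vector>

// Slide a window of radius r from f to t, keep the densest window, and
// average the points inside it. Returns the level height, or a negative
// value for empty or under-filled bottles. Assumes minnum >= 1.
double screenLevelPoints(const std::vector<int>& points,
                         int r, int f, int t, int minnum)
{
    int maxnum = 0, levelcenter = f;
    for (int c = f; c <= t; ++c) {                   // steps 1-3
        int num = 0;
        for (int v : points)
            if (v >= c - r && v <= c + r) ++num;
        if (num > maxnum) { maxnum = num; levelcenter = c; }
    }
    if (maxnum < minnum) return -1.0;                // step 4: empty / not full
    double sum = 0.0;                                // step 5: average the
    int n = 0;                                       // qualified level points
    for (int v : points)
        if (v >= levelcenter - r && v <= levelcenter + r) { sum += v; ++n; }
    return sum / n;                                  // n >= minnum >= 1 here
}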

The detection effect of the above algorithm is shown in Figure 5.10, where the dark points represent the pending inspection points that were discarded, and the light points represent qualified liquid-level points.


Figure 5.10: Effect of Liquid Level Detection

5.6 Analysis of Experimental Results

1.   Analysis of Impurity Detection Experimental Results

After most of the code proposed in this project had been implemented in C/C++, the effectiveness of the impurity detection algorithm and the liquid-level detection algorithm was studied experimentally. A 125 ml glass bottle of healthy wine was used as the test object; the image size was 640×480 (about 300,000 pixels), the bottle height in the field of view was 115 mm, and six consecutive frames were taken of each product under test. The processor was a Pentium 4 2.5 GHz CPU with 512 MB of memory, and the images were processed with a program developed in Visual C++ 6.0. The system detected impurities in the liquid within 300 ms, meeting its real-time requirements. The processed images are shown in Figures 5.12 and 5.13; the connected domains segmented in each frame are drawn in different colors, keyed in Figure 5.11.

Figure 5-11: Color of connected domains in each frame

Figure 5-12: The tracking results of the method adopted in this study

The experiments showed that the system's detection performance is greatly affected by bubbles. When bubbles are numerous (more than 10 in a single frame), the small differences between bubbles raise the image-matching error rate, which increases the false-detection rate as well as the processing load and time. In this situation, skipping the analysis of regions with small area and density sums, which is characteristic of bubbles, improves processing speed and lowers the false-detection rate, at the cost of a higher missed-detection rate. The design of the mechanical flipping device is therefore an important factor in the system's processing effectiveness: a gentle flip produces fewer bubbles, speeds up processing, and improves detection precision.


Figure 5.13: Results of the Method Used in this Study

The algorithm used in this study achieves good results when the image contains few bubbles. Detection was best for impurities such as black residue and glass fragments and slightly worse for lint, because the direction of lint's motion in the liquid is uncertain, especially when the lint moves toward the liquid surface for a short time. Detection of large foreign objects adhering to the bottle wall and of bubbles in the bottle wall was also good, with a correct rate above 99%.

2.   Analysis of Liquid Level Detection Experimental Results

To evaluate the liquid-level detection algorithm proposed in this study, the images acquired by the impurity detection system were used as the detection targets. On a computer with a Pentium 4 2.8 GHz CPU and 512 MB of memory, the algorithm runs in less than 1 ms, with a liquid-level detection error of less than 1 mm, and it detected empty and under-filled bottles with a success rate of 100%. The accuracy of the module depends on the number of liquid-level scanning lines, and the program provides an interface for changing the position and number of detection lines. The experimental results show that the liquid-level detection module achieves high accuracy with a short processing time, a successful functional extension of the impurity detection system.

Chapter 5: Summary

This chapter has introduced the comprehensive analysis layer of the system's image processing algorithm. It first presented the basic theory of target tracking and the commonly used tracking algorithms and, given the characteristics of this system, explained how tracking is implemented here, namely by matching multiple feature values. It then described how the required feature values are extracted: area, perimeter, and related features through connected-domain analysis, and contour chain-code features through the extraction criteria proposed in this system, together with the storage, normalization, and irrelevance design of the chain codes. Next, the system's feature-matching algorithm, motion-trajectory analysis algorithm, and liquid-level detection algorithm were presented. Finally, the proposed algorithms were tested experimentally and the results analyzed.

Chapter 6: Summary and Outlook

Against the backdrop of the rapid development of China's beverage industry and the increasing concern for food safety, this study researched and designed a system for detecting impurities in bottled beverages. The main objective was to develop an online impurity detection method for beverages that suits China's conditions and carries independent intellectual property rights. The main work completed in this project includes the following aspects:

(1) Proposed the overall requirements for the bottled-beverage impurity detection system, and put forward the system's workflow and the machine-vision image acquisition scheme.

(2) Proposed the main flow of the system's image processing algorithm and analyzed its characteristics in detail. To guarantee real-time performance, an implementation scheme based on multi-processor collaboration was proposed, and the processing tasks were allocated between the main processor and the sub-processor. The system's hardware structure and key data-transmission methods were determined from the image processing scheme and the real-time image acquisition scheme.

(3) Studied the algorithm for segmenting visible objects from the background and implemented it in C/C++. Based on the system's imaging characteristics, background suppression and image enhancement algorithms were proposed, and segmentation was implemented with the dual-threshold Canny edge detection method. Finally, the errors left by background suppression and image segmentation were corrected through connected-domain analysis and correction using an improved maximum inter-class variance method, making up for the deficiencies of the segmentation.

(4) Studied the algorithm for extracting connected-domain feature values and implemented it in C/C++. By labeling the connected domains, the algorithm distinguishes the different domains and obtains external features such as area, perimeter, and reciprocal density. The boundary chain code of each connected domain is then extracted with the contour extraction criteria proposed in this study and stored in a doubly linked list to realize the normalization and irrelevance design of the chain code.

(5) Studied the feature matching and trajectory analysis algorithms for connected domains and implemented them in C/C++. The algorithm calculates matching coefficients from the connected domains' feature values, obtains trajectory information with the preemptive matching algorithm, and finally judges whether a visible object is foreign matter from its feature values and trajectory, yielding the detection result.

(6) Extended the system's functionality based on the characteristics of its hardware and software. A method for adding liquid-level detection was proposed and implemented in C/C++. Based on how the liquid level appears in the image, the module uses edge detection to obtain pending liquid-level points, screens them for qualification, and finally derives the liquid-level position from the qualified points.

(7) Conducted experiments on the algorithms proposed in this project and analyzed the results. The architecture of the system's software and hardware was designed and the image processing code completed. However, to automate post-filling impurity detection and bring the system to on-site application and mass production, much work remains, mainly in the following aspects:

(1) Complete the design of the system’s mechanical flipping device, make a prototype, and conduct online testing of the algorithms proposed for the system.

(2) Complete the design of the peripheral detection module, including the implementation of motion control for the system, automatic rejection of non-conforming products, database functionality for software, and automation of storage and querying of system information.

(3) Continue to research and improve the system's image processing algorithm, develop more reasonable rules for distinguishing bubbles from impurities, and improve the connected-domain matching algorithm and its accuracy.

(4) Improve the system's anti-interference design and self-checking function so that it can operate stably and reliably in the complex working environment of the production line.


References

[1] Xie Zicheng. Development Overview of Small Capacity Injection Production Equipment Abroad. Journal of Mechanical and Electrical Information, 2003, 6: 33-36.

[2] Cheng Guoyi, Cheng Nianzheng. Detection of Impurity Distribution in Medicinal Liquid Using Laser Speckle Field. Acta Photonica Sinica, 1997, 26(2): 155-158.

[3] Sun Xudong, Han Donghai. Popularization of Food Safety and Foreign Object Detection Technology in Sichuan. China and Foreign Food, 2006, 1: 49-52.

[4] Yu Shilin. High-Performance Liquid Chromatography: Method and Application (2nd Edition). Beijing: Chemical Industry Press, 2005.

[5] Chart J.P., Palmer G.S. Machine Vision Applications in Industry. IEE Colloquium on Application of Machine Vision, 1995: 1-6.

[6] Hata S., Ishimaru I., Hirokari M., et al. Color Pattern Inspection Machine with Human Sensitivity. Robot and Human Interactive Communication, 2001: 68-73.

[7] Inspection Machines for Pharmaceutical Industry. www.brevetti-cea.com.

[8] Yang Fugang. Research on Visual Detection Technology of Small Foreign Objects in Infusion. Ph.D. Dissertation. Jinan: Shandong University, 2008.4.

[9] Zhou Bowen, Wang Yaonan, Zhang Hui, Ge Ji. Research and Development of Intelligent Detection System for Liquor Based on Machine Vision. China Mechanical Engineering, 2010, 21(7): 766-772.

[10] Ge Yuntao. Key Points of Machine Vision System. Fontana Optoelectronic Technology Information, 2005.9.

[11] Carsten Steger, Markus Ulrich. Machine Vision Algorithms and Applications. Beijing: Tsinghua University Press, 2008.

[12] Guo Xinyi. Research on Visual Inspection System for High-speed Automatic Filling Line. Master’s Thesis. Jinan: Shandong University, 2009.5.

[13] Carlson, B.S. Comparison of Modern CCD and CMOS Image Sensor Technologies and Systems for Low-Resolution Imaging. IEEE Sensors, 2002: 171-176.

[14] Selection of Lenses and Main Parameters. 2003.

[15] Xiao Aimin. Research on Beer Bottle Inspection Based on Machine Vision. Master’s Thesis. Beijing: Beijing University of Technology, 2004.

[16] Ma Weifeng. Research on Distributed Remote Sensing Image Processing Technology and Prototype System Construction. Master’s Thesis. Hangzhou: Zhejiang University of Technology, 2005.

[17] M. Liu, Gu Tiecheng, Wang Yali, Ye Baoli. Principles and Applications of Distributed Computing. Beijing: Tsinghua University Press, 2004.8.

[18] Huang Lianen. Research on Web-based Distributed Computer Architecture and Application Technology. Master’s Thesis. Nanjing: Nanjing University of Aeronautics and Astronautics, 2002.

[19] Sun Xin, Yu Anping. VC++ in Depth. Beijing: Electronics Industry Press, 2006.06.

[20] Zhang Lifen, Liu Meihua. Tutorial on Operating System Principles. Beijing: Electronics Industry Press, 2004.07.

[21] Horn B, Schunch B. Determining Optical Flow. Artificial Intelligence, 1981, 17: 185-203.

[22] Verri A, Uras S, DeMicheli E. Motion Segmentation from Optical Flow. Proceedings of the 5th Alvey Vision Conference, 1989: 209-214.

[23] Coifman B, Beymer D, McLauchlan P, Malik J. A Real-time Computer Vision System for Vehicle Tracking and Traffic Surveillance. Transportation Research Part C, 1998, 6(4): 271-288.

[24] Liyuan Li, Weimin Huang, Irene Yu-Hua Gu, Qi Tian. Statistical Modeling of Complex Backgrounds for Foreground Object Detection. IEEE Transactions on Image Processing, 2004, 13(11): 1459-1472.

[25] Anderson C, Burt P, van der Wal G. Change Detection and Tracking Using Pyramid Transform Techniques. Proceedings of the SPIE Conference on Intelligent Robots and Computer Vision, Cambridge, MA, 1985: 579-587.

[26] Collins R, et al. A System for Video Surveillance and Monitoring: VSAM Final Report. Carnegie Mellon University, Technical Report CMU-RI-TR-00-12, 2000.

[27] Yang Shuying (ed.). VC++ Image Processing Program Design. Beijing: Tsinghua University Press / North China University of Transportation Press, 2003.

[28] Jia Yunde (ed.). Machine Vision. Beijing: Science Press, 2000.

[29] Shuhong Jiao, Xueguang Li, Xin Lu. An Improved Ostu Method for Image Segmentation. Proceedings of the 8th International Conference on Signal Processing, 2006.

[30] Liu Zhifang, Wang Yunqiong, Zhu Min. Digital Image Processing and Analysis. Beijing: Tsinghua University Press, 2006.

[31] Ackland B D, Weste N. The Edge Flag Algorithm - A Fill Method for Raster Scan Display. IEEE Transactions on Computers, 1981, 30(1): 41-47.

[32] Shani U. Filling Regions in Binary Raster Images. SIGGRAPH ’80, 1980: 321-327.

[33] Cai Zuguang. Restoration of Binary Images Using Contour Direction Chain Codes Description. Computer Vision, Graphics, and Image Processing, 1988, 41: 101-106.

[34] Chang Long-Wen, Leu Kuen-Long. A Fast Algorithm for the Restoration of Images Based on Chain Codes Description and Its Applications. Computer Vision, Graphics, and Image Processing, 1990, 50: 296-307.

[35] Frank Y. Shih, Wai-Tak Wong. An Improved Fast Algorithm for the Restoration of Images Based on Chain Codes Description. Computer Vision, Graphics, and Image Processing, 1994, 56: 348-351.

[36] Tang G Y. Region Filling with the Use of the Discrete Green Theorem. Computer Vision, Graphics, and Image Processing, 1988, 42: 297-305.

[37] Ren Mingwu, Yang Jingyu, Sun Han. A New Contour Filling Method Based on Chain Code Description. Journal of Image and Graphics, 2001, 4: 348-352.

[38] Zhu Shanan. Research on Mean Shift and Related Algorithms in Video Tracking. Doctoral dissertation. Hangzhou: Zhejiang University, 2006.

[39] Moravec H. Towards Automatic Visual Obstacle Avoidance. Proceedings of the 5th International Joint Conference on Artificial Intelligence, 1977.

[40] Harris C, Stephens M. A Combined Corner and Edge Detector. Proceedings of the 4th Alvey Vision Conference, 1988.

[41] Vincent T, Laganiere R. Matching Feature Points for Telerobotics. Proceedings of the 1st International Workshop on HAVE and Their Applications, 2002.

[42] You J, Bhattacharya P. A Wavelet-Based Coarse-to-Fine Image Matching Scheme in a Parallel Virtual Machine Environment. IEEE Transactions on Image Processing, 2000, 9(9): 1547-1559.

[43] Hu M K. Visual Pattern Recognition by Moment Invariants. IRE Transactions on Information Theory, 1962, 8: 179-187.

[44] Zeng Zhanggui, Yan Hong. Region Matching and Optimal Matching Pair Theorem. Computer Graphics International, 2001: 232-239.

[45] Huang Zhaohui, Cohen F S. Affine-Invariant B-Spline Moments for Curve Matching. IEEE Transactions on Image Processing, 1996, 5(10): 1473-1480.

[46] Freeman H. Techniques for the Digital Computer Analysis of Chain-Encoded Arbitrary Plane Curves. Proceedings of the National Electronics Conference, 1961, 17: 421-432.

[47] Huang Bin. Recognition and Understanding of Human Behavior in Intelligent Space. Doctoral dissertation. Jinan: Shandong University, 2010.

[48] Fukunaga K, Hostetler L D. The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition. IEEE Transactions on Information Theory, 1975, 21: 32-40.

[49] Comaniciu D, Ramesh V, Meer P. Kernel-Based Object Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5): 564-577.

[50] Snyder W E, Qi H. Machine Vision. Cambridge: Cambridge University Press, 2004.

[51] Wang Huiquan. Research and Development of an Online Bottle Inspection System Based on Machine Vision. Master’s thesis. Jinan: Shandong University, 2010.

[52] Chen Youguang. Application Research on Boundary Tracking, Region Filling, and Chain Code. Doctoral dissertation. Shanghai: East China Normal University, 2006.

[53] Zhang Hui, Wang Yaonan, Zhou Bowen, Ge Ji. Research and Development of Visible Foreign Matter Detection System for Health Wine Based on Machine Vision. Chinese Journal of Scientific Instrument, 2009, 30(5): 973-979.
