During the day, the human eye can see the scenery in nature because it receives sunlight that is reflected or scattered by the surfaces of objects. At night there is no solar illumination, but on most nights there is still moonlight, starlight, or atmospheric glow. The surfaces of objects still reflect this faint light, so the human eye can make out nearby objects and the outlines of large ones. The basic contradiction in nighttime observation is that the light intensity reaching the human eye is insufficient. The solutions to this problem are:

(1) using a large-aperture telescope to collect as much light energy as possible;

(2) amplifying the weak-light image, much as weak electronic signals are amplified;

(3) illuminating the scene with an artificial light source;

(4) using the radiant energy of the scene in the infrared band to achieve thermal imaging.

Different technical solutions to this problem have produced different night vision methods: low-light night vision technology and thermal imaging technology.

Low-light night vision technology uses electro-vacuum and electron-optics techniques to realize the conversion chain photon image → electron-density image → photon image, enhancing the photon image by multiplying the electron-density image during the conversion. It is a technique for nighttime observation under faint illumination.

The core of low-light night vision technology is the low-light image intensifier, an electro-vacuum device consisting of three parts: a photocathode, an electron-optical system, and a phosphor screen. Its working principle is as follows. The weak visible and near-infrared light reflected by the scene is focused onto the photocathode, which is excited to emit electrons; in this step the light-intensity image of the scene is transformed into a corresponding electron-density image. In the electron-optical section, one input electron yields thousands of output electrons, so the electron-density image from the photocathode is enhanced thousands of times; the so-called "light image enhancement" takes place in this step. Finally, the multiplied electrons bombard the phosphor screen, converting the electron-density image back into a photon image, and the intensified low-light image is presented for observation by the human eye. To date, low-light image intensifiers have been developed through four generations, which differ mainly in the material of the photocathode and in the electron-optical components. A practical low-light night vision device also includes an objective telescope that receives and concentrates the light, an eyepiece for direct viewing by the human eye, a power supply, and so on. If the image output by the intensifier is picked up by a television camera and viewed on a monitor, the system is a low-light television.
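
As a rough photon-budget illustration of this conversion chain, the sketch below multiplies a few order-of-magnitude numbers; all values are assumed for illustration and are not the specification of any particular tube:

```python
# Order-of-magnitude sketch of image-intensifier photon gain (assumed values).
quantum_efficiency   = 0.15    # photocathode: fraction of photons yielding a photoelectron
electron_gain        = 2.0e3   # electron multiplication: "one electron in, thousands out"
photons_per_electron = 50      # visible photons the screen emits per arriving fast electron

photon_gain = quantum_efficiency * electron_gain * photons_per_electron
print(f"Visible photons out per scene photon in: ~{photon_gain:.0f}")  # ~15000
```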

Heat is the macroscopic manifestation of the motion of the molecules and atoms that make up an object, and temperature is a measure of the intensity of this thermal motion. Every object in nature with a temperature emits electromagnetic waves; in the electromagnetic spectrum, infrared radiation occupies wavelengths of roughly 0.001 to 1 mm. Because every object in nature has a temperature, it continually exchanges energy with its environment in the form of infrared radiation, and since infrared radiation arises from the thermal motion of molecules and atoms, it is also called thermal radiation. The intensity of the thermal radiation from each point of an object's surface is related to the temperature and surface state at that point, forming a thermal radiation image (thermal image for short) that reflects the object's temperature distribution and surface characteristics. The human eye, however, has no visual response to thermal radiation and cannot see thermal images directly. The physical factors that determine the differences in visible-light reflection and reflectance of a scene also determine the differences in its thermal emission and emissivity; therefore, an image of the scene's thermal radiation distribution can reproduce most of the details of the visible-light image formed by reflection and reflectance differences.

Because thermal imaging uses the radiation emitted by the scene itself, it fundamentally solves the problem of insufficient light intensity in nighttime observation. A room-temperature scene constantly emits infrared radiation rich in photons in the medium-wave (3–5 μm) and especially the long-wave (8–14 μm) infrared bands, and in these two bands there is no difference between day and night: a sensor that can perceive the thermal image of a scene can see around the clock. Thermal imaging technology therefore makes nighttime observation far more effective.
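
As a quick numerical check on these band choices, the photon flux a 300 K blackbody emits into the two windows follows from Planck's law; this minimal sketch (plain Python with SciPy) shows the long-wave band carrying roughly an order of magnitude more photons than the medium-wave band:

```python
import numpy as np
from scipy.integrate import quad

# Physical constants (SI units)
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def spectral_photon_exitance(lam, T):
    """Planck photon exitance, photons / (s * m^2 * m), at wavelength lam (m)."""
    return (2 * np.pi * c / lam**4) / (np.exp(h * c / (lam * k * T)) - 1.0)

T = 300.0  # room-temperature scene, K
for name, lo, hi in [("MWIR 3-5 um", 3e-6, 5e-6), ("LWIR 8-14 um", 8e-6, 14e-6)]:
    photons, _ = quad(spectral_photon_exitance, lo, hi, args=(T,))
    print(f"{name}: {photons:.2e} photons / (s*m^2)")
```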


A device capable of capturing the infrared radiation distribution of a scene and converting it into an image visible to the human eye is an infrared thermal imaging system, or thermal imager for short; the technology for achieving thermal imaging of scenes is called thermal imaging technology. Thermal imaging technology comprehensively applies infrared physics and technology, semiconductor technology, microelectronics, vacuum technology, cryogenic refrigeration, precision optics, electronics, signal processing, computing, systems engineering, and other fields to acquire the thermal radiation image of a scene, convert it into an electrical signal, process that signal, and use it to drive a display that presents the thermal image, invisible to the naked eye, for human observation.


A thermal imager generally consists of six parts:

(1) an infrared telescope that receives and converges the infrared radiation emitted by the scene;

(2) an optomechanical scanner that matches the large field of view of the infrared telescope to the infrared detector and encodes the signal as required by the display system (when an infrared focal plane detector with a sufficient number of elements is used, the optomechanical scanner can be omitted);

(3) an infrared detector assembly that converts the thermal radiation signal into an electrical signal;

(4) an electronic component that processes electrical signals;

(5) a display that converts an electrical signal into a visible light image;

(6) Algorithms and software for signal processing.

So far, three generations of thermal imaging products have been developed, the difference being mainly the use of different infrared detector components.

Thermal imaging technology has three major functions:

(1) extending the observation range of the human eye into the infrared region of the spectrum;

(2) greatly improving the sensitivity of human observation;

(3) obtaining information about thermal motion in the objective world.

The rapid development of modern science and technology has revolutionized thermal imaging technology.

(1) thermal imaging in the long-wave and medium-wave infrared has been realized, achieving night vision in the true sense;

(2) second-generation thermal imaging products have been successfully developed and mass-produced, and are in wide use on land, at sea, in the air, and in space;

(3) uncooled thermal imaging has been realized and its products have entered volume production, greatly reducing the price of thermal imagers;

(4) the development of complex and powerful information processing technology allows night vision technology to be applied much more widely.

Modern high-tech local warfare has several characteristics:

(1) Electronic warfare and information warfare;

(2) Air strikes and anti-air strikes;

(3) Stealth and anti-stealth;

(4) Night battle;

(5) Remote precision strike.

These operational characteristics are closely related to thermal imaging technology. Night vision equipment is part of the material basis of modern high-tech local wars: troops equipped with it can fight continuously day and night, achieving greater surprise and helping to seize the initiative. Early active infrared night vision devices easily exposed their users and have now been largely phased out. Low-light night vision devices are very mature but are still affected by weather. The appearance of the thermal imager is a revolution in night vision technology; it is a true night vision device. As the most important night vision technology today, thermal imaging has five major advantages:

(1) environmental adaptability superior to visible light, especially at night and in harsh weather, with good ability to penetrate smoke and dust;

(2) good concealment: safer and more covert than radar or laser detection, and difficult to jam;

(3) better ability than visible light to identify camouflaged targets, with strong anti-stealth capability;

(4) a long operating range;

(5) smaller size, lighter weight, and lower power consumption than radar systems.

Because water molecules in the atmosphere absorb infrared radiation more strongly than they do radar waves, however, thermal imaging cannot achieve all-weather operation.

Over the past ten years, thermal imaging technology has developed rapidly and has become a high technology of strategic importance in military affairs, embodied in the following three aspects.

(1) Thermal imaging technology is the main detection technology that national security relies on.

The massive use of ballistic missiles and long-range cruise missiles against strategic targets is one of the characteristics of modern high-tech local wars. From the Gulf War to the recent Iraq War, ballistic missiles and long-range cruise missiles have been used extensively as effective assault and counterattack weapons. Using missile early-warning satellites for early warning, tracking, identification, timely alerting, and interception bears directly on the security of national strategic targets; reconnaissance satellites, Earth-resources remote sensing satellites, and meteorological satellites have a significant impact on national security and economic interests. Thermal imaging technology is a key technology for all of these satellites.

(2) Thermal imaging technology is one of the main technologies used in modern high-tech local warfare.

Modern high-tech local wars are fought under high-intensity electronic countermeasures, so most battles take place at night or in bad weather, where the passive operation of thermal imaging systems shows its superiority most fully. As the first link in image acquisition for the battlefield information network, thermal imaging is one of the key technologies: thermal imaging systems on platforms such as satellites, reconnaissance aircraft, drones, ships, vehicles, and individual soldiers combine with data links (wireless and wired, radio and optical communication) to form a thermal image information network that transmits thermal images to the relevant combat units in real time. This yields an advantage in battlefield information and has a major, even decisive, effect on winning battles and reducing losses. Thermal imaging is also the main means of deep detection under severe electromagnetic interference.

(3) The strategic position of thermal imaging technology is also determined by the breadth and importance of its military applications. As night vision devices, thermal imagers are widely used on aircraft, ships, and vehicles for night navigation and driving, border reconnaissance, and battlefield surveillance. Beyond that, thermal imaging is widely used in anti-armor weapon systems, precision-guided weapons, artillery and missile fire control systems, electro-optical countermeasures, and so on. In fact, all services and almost all arms of the armed forces in developed countries are equipped with thermal imagers.

In army applications: in the anti-armor field, thermal imaging has been used in portable thermal sights for light short-range anti-tank missiles and in thermal aiming stations for vehicle-mounted anti-tank missiles; in low-altitude air defense, it has been used in sights for shoulder-fired surface-to-air missiles and in integrated or split-type infrared search, tracking, and fire control systems for anti-aircraft guns, surface-to-air missiles, or combined gun-missile systems. Armored combat vehicles are among the largest users of thermal imaging technology: it has been used in tank fire control systems, panoramic periscopes, gunner's thermal sights, and night driving. In reconnaissance, it has been used for battlefield reconnaissance and surveillance, damage assessment, unattended intelligence stations, armored reconnaissance vehicles, and the like. As thermal sights for light weapons, it has been used on mortars, rockets, anti-aircraft machine guns, rifles, machine guns, and sniper rifles. In guidance, it has been used for anti-tank missiles, surface-to-air missiles, and cruise missiles. In military robotics, thermal imaging is an important part of the vision system of battlefield robots. It can also be used to detect poison gas.

In air force applications, a new generation of air-to-air missiles uses thermal imaging guidance, and air-to-ground missiles are among the most successful examples of it. To assess impact effects and improve hit accuracy, thermal imaging guidance has also been adopted on newly developed cruise missiles. Thermal imaging is used in many places on aircraft, including pod- and turret-mounted forward-looking infrared systems for night navigation; pod-mounted, turret-mounted, and fixed forward-looking, side-looking, and down-looking infrared systems for aerial reconnaissance; and pod-mounted, turret-mounted, and fixed forward-looking infrared systems for aerial search and localization and for ground attack. On helicopters there are mast-mounted forward-looking infrared systems installed on the rotor shaft. Thermal imaging is standard equipment for reconnaissance drones, and virtually all are so equipped.

In the navy, the main combat ships, from aircraft carriers and ballistic missile submarines to small missile boats and patrol boats, have adopted thermal imaging technology. It has been used for the guidance of air defense missiles, anti-ship missiles, cruise missiles, and other precision-guided weapons. The biggest threat to surface ships is the sea-skimming anti-ship missile; search and tracking is the prerequisite for intercepting it, and the infrared search and track system is well suited to this purpose. The last line of air defense is the short-range point defense weapon system; equipped with a thermal imager, this system can fight effectively even when its radar is jammed. Surface ships are often fitted with combined radar and thermal imaging fire control systems, and some with thermal imaging fire control systems alone. The traditional periscope can no longer meet the requirements of modern naval warfare; the periscope of the modern submarine has developed into an optoelectronic mast equipped with a thermal imager, markedly improving the submarine's combat effectiveness. In addition, thermal imaging is widely used in maritime patrol and in formation sailing during rescue operations.

Thermal imaging can give information on the surface thermal radiation of an object and the heat dissipation inside it, and has thus become a new means for people to observe and perceive the objective world. Whenever it is necessary to observe the thermal image of an object, to obtain its temperature distribution without contact, or to observe it at night, thermal imaging technology is the best choice. It is therefore also widely used in paramilitary and civilian applications.

In the aerospace field, meteorological satellites, resource remote sensing satellites, Earth observation satellites, infrared astronomy satellites, and others use thermal imaging technology for weather forecasting, disaster prevention and mitigation, crop growth monitoring, yield assessment, pest and disease forecasting, forest fire prevention, land survey, resource survey, urban and regional planning, environmental pollution monitoring, scientific research, and more; these applications have a major impact on the national economy. China is a country with frequent natural disasters, and using satellites to monitor environmental changes and issue timely forecasts can reduce losses of life and property. Using satellite infrared images, Chinese scientists have found that before an earthquake the temperature of the air above the seismic zone rises, producing a so-called thermal anomaly; the use of this phenomenon for earthquake prediction is currently being studied, and there have been successful examples. In land survey, resource survey, urban and regional planning, environmental pollution monitoring, and forest fire prevention, satellite thermal images provide macroscopic, comprehensive, dynamic, timely, and continuous information, supplying a basis for national macro decision-making and regulation. Satellite thermal imaging instruments can also be used for scientific research on ocean currents, volcanoes, icebergs, the Earth's vegetation, the Earth's thermal balance, and so on, enabling humans to understand nature better. In short, thermal imaging has developed into a high technology with a major impact on national economic interests and security.

In the industrial sector there are many applications of thermal imaging. The power industry uses thermal imagers for live inspection of transmission lines, transformers, and other equipment, detecting hidden faults in time and playing a valuable role in securing the power supply. The metallurgical industry uses thermal imagers to check the temperature distribution of furnace bodies so that erosion of the lining can be detected early and repairs organized. The petroleum and chemical industries use thermal imagers to check the temperature distribution and leakage of reaction towers and pipelines. The building and building materials sector uses thermal imagers to check heat leakage from buildings so that energy-saving measures can be taken to reduce the energy consumption of heating and air-conditioning systems, improving economic efficiency and reducing environmental pollution. In transportation, thermal imagers have been used for night flying and navigation of aircraft, especially at low altitude, and for checking train axles to prevent overheating. Trains equipped with thermal imagers can run at night and in fog, avoiding collisions, and vehicles so equipped can avoid rear-end collisions on foggy highways and drive safely at night.

In the medical field, thermal imagers have become a standard means for the early diagnosis of cancer and for diagnosing diseases of the skin, bones, and blood vessels; they are now also used to monitor a patient's heart and blood vessels during surgery so that abnormalities can be detected and dealt with in time.

In scientific research, thermal imagers are used for non-contact measurement of temperature distributions, thermal radiation measurements, astronomical observations, and studying the nighttime activities of wildlife. They are also useful in airborne and spaceborne resource exploration, non-destructive testing in manufacturing, organic chemical gas analysis, and pollution detection and control.

In public security, thermal imagers can be used for security surveillance, searching for evidence of crimes, locating people and fire sources in firefighting, and anti-smuggling work at customs and along borders; they are an effective technical means of fighting crime and protecting public safety. As thermal imaging technology develops and becomes more widely understood, new fields of application will continue to open up.

Since the human eye cannot directly perceive infrared radiation, the radiation must be converted into a measurable signal (usually an electrical one) by means of a material sensitive to infrared radiation. To improve conversion efficiency and ease of use, such materials are made into sophisticated devices: infrared detectors. In infrared technology, the development of infrared detectors and that of their materials are inseparable, because the infrared-sensitive properties of a material, and in-depth study of them, are always realized in some form of device. To date, research on infrared detectors and their materials remains the most important and most dynamic part of the infrared field.

Any infrared detector must be combined organically with signal processing, transmission, and display devices into a simple or complex system before infrared radiation can be detected. Obviously, the performance of such an infrared system is limited first of all by the level of development of infrared detectors and their materials; it may be said that the kind of infrared detector available determines the kind of infrared system that can be built, and the performance and level of the detector determine the performance and level of the thermal imaging system. Infrared detectors have now become a strategic resource for the development of a nation's high-tech weapons and equipment. On the other hand, an infrared detector can realize its full value only when used in a complete system, and the development of thermal imaging systems raises new problems and new directions for detector and materials research, strongly promoting its progress. Devices, materials, and complete systems thus depend on one another, promote one another, and develop together.

In the 1950s, the rapid development of semiconductor physics and technology enabled people, by the early 1960s, to develop high-performance infrared detectors for the three atmospheric windows of 1–2.5 μm, 3–5 μm, and 8–14 μm. The most important are three kinds of photon detectors: lead sulfide (PbS), indium antimonide (InSb), and mercury cadmium telluride (HgCdTe). The development of these detectors from single elements to multi-element linear, small-area, long linear, and large-area arrays enabled the development of thousands upon thousands of thermal imaging systems after 1960.

In the early days, scanning a single-element device at high speed with an optomechanical scanner enabled real-time thermal imaging. However, because the detector's integration time for the infrared signal was short, the detection capability of such systems was not high. Scanning with a linear detector array improves detection capability; with a long linear array, the system can cover a large field of view in one direction; and with a large area array, the system can dispense with the optomechanical scanner altogether and perform staring imaging. The development of infrared detectors from single elements to area arrays has thus also been driven by the requirements of thermal imaging systems.

In the 1970s, British researchers succeeded in developing the SPRITE detector (an acronym of Signal PRocessing In The Element) from HgCdTe material, which provides another good example of the relationship between infrared detectors and complete systems. A thermal imaging system can be optimized by using a small array detector with an optomechanical scanner in serial-parallel scan mode: scanning along the column direction of the small array lowers the required scanning speed and simplifies the scanning mechanism, while in the row direction the signals output by several detector elements undergo delay-and-integration processing, giving the system a higher signal-to-noise ratio. The traditional approach performs this delay-and-integration with electronic circuits outside the detector; the SPRITE detector solves the problem simply and elegantly within the element itself. When a SPRITE detector operates, the spot scanned by the optomechanical scanner excites a minority-carrier packet in the detector element, which drifts from the high-potential end to the low-potential end under the bias electric field. When the spot's scanning speed equals the drift speed of the minority-carrier packet, the detector completes the delay-and-integration of the signal at the same time as it detects it. In this way the detector, the electronics, and the system are optimized together.
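
The signal-to-noise benefit of delay-and-integration is easy to see in a toy model: summing N aligned looks at the same scene point grows the signal N-fold but grows uncorrelated noise only by √N. A minimal simulation with illustrative numbers:

```python
import numpy as np

# TDI toy model: N detector elements see the same scene sample at successive
# times; summing their aligned outputs grows signal by N but noise by sqrt(N).
rng = np.random.default_rng(0)
N, signal = 8, 1.0
samples = signal + rng.normal(0.0, 1.0, size=(N, 10000))   # per-element SNR = 1
tdi = samples.sum(axis=0)                                  # delay-and-integrate

snr_single = signal / samples[0].std()
snr_tdi = (N * signal) / tdi.std()
print(f"SNR gain: {snr_tdi / snr_single:.2f} (theory: {np.sqrt(N):.2f})")
```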

In the early stage of infrared technology, the usual development mode for a thermal imager was to develop the detector and its materials first, and only then to develop the system around the finished detector. Nowadays, the functions of thermal imaging systems are becoming more complex and complete, detectors and their materials are maturing, and the relationship between devices and complete systems is growing ever closer. More and more detectors and materials are included in the design of the complete system and are designed and manufactured according to its requirements. Simultaneous design of the complete system and the detector is the future development trend.

"Infrared system" is the most widely used term, followed by "infrared imaging system". Infrared systems can be classified in a variety of ways.

By whether the detected target is a point heat source or an extended source, systems are classified as point-target infrared detection systems, imaging detection systems, and sub-imaging detection systems.

For any infrared imaging system, when the target is far enough away, every detected target is a point target. The advantage of using an infrared focal plane detector in a point-target detection system is that a sufficiently large field of view can be obtained in at least one direction, reducing the complexity of the optical system in that direction. At present, sub-imaging detection systems are mainly used for the terminal guidance of precision-guided munitions: a simple scanning method combined with a detector array of few elements, such as a rosette scanning mechanism with a four-element infrared detector, can extract the characteristics of the target from the generated thermal image at minimum cost.

By whether the imaging is active or passive, infrared imaging systems can be divided into passive infrared imaging systems and active infrared imaging systems.

The term thermal imaging system refers to real-time passive imaging of room-temperature targets in the long-wave and medium-wave infrared atmospheric windows, using the long-wave and medium-wave infrared radiation emitted by the target itself. A device that performs real-time infrared imaging using the short-wave infrared radiation of moonlight, atmospheric glow, and night-sky light reflected by the target is a passive infrared imaging system. Devices that rely on artificial infrared sources to illuminate the target, whether operating in the long-wave, medium-wave, or short-wave infrared bands, are called active infrared imaging systems.

By whether the system requires an optomechanical scanner, infrared imaging systems are classified as scanning infrared imaging systems or staring infrared imaging systems.

Linear focal plane detectors with TDI function, such as 288×4, 480×6, and 768×8 arrays, are currently the mainstream of long-wave infrared focal plane detectors. Balancing key factors such as thermal sensitivity, spatial resolution, and price, they are especially suitable for various observation-type thermal imagers, infrared search and track systems, and infrared line scanners.

To improve spatial resolution, some staring infrared imaging systems (including uncooled thermal imagers) also use micro-scanning mechanisms; such systems are likewise classified as scanning infrared imaging systems. In general, uncooled thermal imagers use staring infrared focal plane detectors and do not scan. However, some special uncooled thermal imaging systems use uncooled linear focal plane detectors and, instead of an optomechanical scanner, exploit the motion of the target itself to achieve thermal imaging. This is applied in industrial settings such as quality control; for example, train axle monitoring uses the motion of the train to achieve thermal imaging.

By whether the system includes a refrigeration device, thermal imagers are divided into cooled and uncooled types.

Cooled thermal imagers offer high performance, but they are expensive, relatively large and heavy, and less reliable. Uncooled thermal imagers do not perform as well as cooled ones, but they are cheaper, smaller, and lighter; in particular, because the refrigerator is omitted, their reliability is very high, giving them broad application prospects in low-end military applications and in civilian use.

By working band, systems are divided into long-wave infrared, medium-wave infrared, short-wave infrared, dual-band, and multi-band thermal imagers.

By purpose and technical characteristics, systems are divided into platform-mounted observation thermal imagers, portable thermal imagers, guidance thermal imagers, infrared search and track systems, and infrared line scanners.

Infrared technology is based on modern science and technology such as molecular physics, quantum mechanics, semiconductor physics, materials science, cryogenic physics, precision optics, microelectronics, electronic technology, computing, and systems engineering. As a specialized field it has a relatively independent and complete body of knowledge, whose core content comprises infrared radiation and radiation sources, the basic laws of infrared radiation, infrared optical materials, infrared detectors, packaging and Dewar technology, refrigerators, readout integrated circuits for infrared imaging arrays, test methods for infrared detectors, and so on.

The basic contents of infrared physics include infrared radiation, the blackbody, actual infrared radiation sources, the basic laws of infrared radiation, and the interaction between infrared radiation and matter.

From 1860 to 1900, after forty years of effort, a complete theory of thermal radiation was established. Its core includes the laws of transmission, reflection, and absorption, Kirchhoff's law, and Planck's law. Wien's displacement law and the Stefan-Boltzmann law, originally summarized from experiment, are in fact special consequences of Planck's law and are therefore not independent laws; the same holds for the other radiation laws that can be derived from those above.
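
This relationship can be checked numerically: integrating Planck's law over all wavelengths recovers the Stefan-Boltzmann σT⁴ result, and locating its maximum recovers Wien's constant. A minimal sketch, assuming an ideal blackbody:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Spectral radiant exitance M(lam, T), W / (m^2 * m)."""
    return 2*np.pi*h*c**2 / lam**5 / (np.exp(h*c/(lam*k*T)) - 1.0)

T = 500.0
total, _ = quad(planck, 1e-7, 1e-3, args=(T,))     # integrate 0.1 um .. 1 mm
print(total, 5.67e-8 * T**4)                       # Stefan-Boltzmann: sigma*T^4

peak = minimize_scalar(lambda lam: -planck(lam, T),
                       bounds=(1e-7, 1e-4), method="bounded").x
print(peak * T)                                    # Wien: ~2.898e-3 m*K
```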

The following terms and concepts are frequently used in infrared radiation theory.

1) Radiant energy

In infrared radiation theory, radiant energy is the total energy emitted by an object as infrared radiation; its symbol is $Q$ and its unit is the joule (J). For a blackbody it is the radiant energy summed over the entire spectrum; for a graybody it is the blackbody radiant energy at the same temperature corrected by the emissivity; for a selective emitter it is the sum of the effective radiant energy after spectral emissivity correction; and for a laser it is the radiant energy at a particular wavelength.

2) Radiant energy density

Radiant energy density is the infrared radiant energy contained in a unit volume; its symbol is $w$, defined as

$$w=\frac{\mathrm{d}Q}{\mathrm{d}V}$$

The unit is joules per cubic meter (J/m³).

3) Radiant energy flux

Radiant energy flux is the infrared radiant energy emitted or received by an object in unit time, referred to as radiant flux for short; its symbol is $\Phi$, defined as

$$\Phi=\frac{\mathrm{d}Q}{\mathrm{d}t}$$

The unit is the watt (W).

4) Radiant flux density / radiant exitance / irradiance

Radiant flux density is the infrared radiant flux emitted or received by an object per unit area, in watts per square meter (W/m²). When describing emission from an object it is customary to use the radiant exitance, symbol $M$, defined as

$$M=\frac{\mathrm{d}\Phi}{\mathrm{d}A}$$

When describing reception by an object one uses the irradiance, symbol $E$, defined as

$$E=\frac{\mathrm{d}\Phi}{\mathrm{d}A}$$

Radiant flux density is the general term covering both. In general, the radiant exitance of an object is a function of temperature and wavelength.

5) Radiant intensity

Radiant intensity is the infrared radiant flux emitted by a source into unit solid angle; its symbol is $I$, defined as

$$I=\frac{\mathrm{d}\Phi}{\mathrm{d}\Omega}$$

The unit is watts per steradian (W/sr); it characterizes the ability of an infrared source to emit radiation.

6) Radiance

Radiance is the infrared radiant flux emitted per unit solid angle and per unit projected area of a source in a direction making an angle θ with the surface normal; its symbol is $L$, defined as

$$L=\frac{\mathrm{d}\Phi}{\mathrm{d}\Omega\,\mathrm{d}A\cos\theta}$$

The unit is watts per steradian per square meter (W/(sr·m²)); it characterizes the ability of an infrared source to emit radiation in a given direction. The radiance of an object is also a function of temperature and wavelength.

When any of the above quantities is given the subscript $\lambda$, it becomes the quantity describing radiation at a single wavelength; for example, the radiance becomes $L_{\lambda}$, the monochromatic or spectral radiance, and so on.
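
As a small worked example of how these definitions fit together, the sketch below relates radiance, exitance, intensity, and flux for an idealized Lambertian (diffuse) emitter; the numerical values are made up for illustration:

```python
import numpy as np

# Relations for a Lambertian source, following from the definitions above:
#   radiant exitance       M = pi * L                      (W/m^2)
#   on-axis intensity      I(0) = L * A                    (W/sr)
#   flux into hemisphere   Phi = M * A                     (W)
L = 50.0      # radiance, W/(sr*m^2)  (assumed example value)
A = 1e-4      # emitting area, m^2    (assumed example value)

M   = np.pi * L
Phi = M * A
I0  = L * A
print(f"M = {M:.1f} W/m^2, Phi = {Phi*1e3:.2f} mW, I(0) = {I0*1e3:.2f} mW/sr")
```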

When infrared radiation is incident on an object, absorption, reflection, and transmission occur; in addition, the object itself emits infrared radiation. These physical phenomena are described by the following concepts and terms. In general, the absorptance, reflectance, transmittance, and emissivity of an object are functions of wavelength and temperature, and they may differ not only between different objects but also between different states of the same object (temperature, surface finish, and so on).

7) Absorbing power / absorptance

Absorbing power denotes the ability of an object to absorb the infrared radiation incident upon it; expressed numerically it is the absorptance, a dimensionless quantity equal to the ratio of the absorbed radiation to the incident radiation. Since the absorptance is a function of wavelength and temperature, one distinguishes the spectral absorptance $\alpha_{\lambda}$ and the average absorptance $\bar{\alpha}$.

8) Reflecting power / reflectance

Reflecting power denotes the ability of an object to reflect the infrared radiation incident upon it; expressed numerically it is the reflectance, a dimensionless quantity equal to the ratio of the reflected radiation to the incident radiation. Since the reflectance is a function of wavelength and temperature, one distinguishes the spectral reflectance $\rho_{\lambda}$ and the average reflectance $\bar{\rho}$.

9) Transmitting power / transmittance

Transmitting power denotes the ability of an object to transmit the infrared radiation incident upon it; expressed numerically it is the transmittance, a dimensionless quantity equal to the ratio of the transmitted radiation to the incident radiation. Since the transmittance is a function of wavelength and temperature, one distinguishes the spectral transmittance $\tau_{\lambda}$ and the average transmittance $\bar{\tau}$.

10) Emitting power / emissivity

Emitting power denotes the ability of an object to emit infrared radiation; expressed numerically it is the emissivity, a dimensionless quantity equal to the ratio of the infrared radiation emitted by the object to that emitted by a blackbody at the same temperature. Since the emissivity is a function of wavelength and temperature, one distinguishes the spectral emissivity $\varepsilon_{\lambda}$ and the average emissivity $\bar{\varepsilon}$. For a blackbody, the spectral emissivity $\varepsilon_{\lambda}$ equals 1, so the average emissivity $\bar{\varepsilon}$ also equals 1.

Performance evaluation of an infrared thermal imaging system means predicting the performance of a designed system using an established performance evaluation model; it is of great significance for finding design deficiencies and improving the performance of the thermal imaging system.

This chapter introduces commonly used performance evaluation indicators, such as the modulation transfer function, the noise equivalent temperature difference, the minimum resolvable temperature difference, and the range of action, as well as test systems for thermal imagers.

The complete thermal imaging process is as follows: the infrared radiation of the three-dimensional scene is projected through the infrared optical system onto the infrared focal plane array, which converts it into a one-dimensional, time-distributed electrical signal; after subsequent signal processing, a two-dimensional, spatially distributed visible-light signal finally reproduces the infrared radiation field of the scene. In this process every link, including the scene radiation characteristics, the atmosphere, the optical system, the focal plane array, the electronics, the display, and the human eye, affects imaging performance.

The history of performance evaluation models dates back to the 1970s, when the US Night Vision Laboratory established the NVL75 model for first-generation infrared imaging systems; it gave good predictions for mid-spatial-frequency targets and met the requirements of the US military at the time. With the advent of focal plane array devices came the FLIR92 model, which introduced a three-dimensional noise model that fully characterizes all noise sources, makes the incorporation of complex noise factors into the MRTD model simple, and can predict the static performance of scanning or staring infrared imaging systems. In recent years the NVTherm and TRM3 models have also appeared.

The performance of a thermal imaging system comprises static and dynamic performance. Static performance describes the system's ability to image static targets, that is, scenes whose three-dimensional spatial distribution does not change with time, while dynamic performance describes its ability to image dynamic targets. Thermal imaging system performance parameters usually refer to parameters testable in the laboratory, such as the noise equivalent temperature difference (NETD), the minimum resolvable temperature difference (MRTD), and the modulation transfer function (MTF), which can be further extended to the system's range of action. The imaging quality of an infrared thermal imaging system must have an objective evaluation method; usually a variety of characteristic parameter tests are used. These parameters include noise and response characteristics, image resolution characteristics, image geometry characteristics, subjective characteristics, and others, as detailed in Table 14.1. The most commonly used characteristic parameters are the noise equivalent temperature difference (NETD), the modulation transfer function (MTF), and the minimum resolvable temperature difference (MRTD).

Table 14.1 Characteristic parameters of infrared thermal imaging systems

| No. | Noise and response characteristics | Image resolution characteristics | Image geometry characteristics | Subjective characteristics | Other characteristics |
|-----|------------------------------------|----------------------------------|-------------------------------|----------------------------|-----------------------|
| 1 | Fixed-pattern noise | Modulation transfer function | Image distortion | Minimum resolvable temperature difference | Spectral response function |
| 2 | Noise equivalent temperature difference | Contrast transfer function | Image rotation | Minimum detectable temperature difference | Line of sight |
| 3 | Nonuniformity | Spatial resolution | Field of view | | Temperature stability |
| 4 | Dynamic range | Instantaneous field of view | | | |
| 5 | Signal transfer function | Effective instantaneous field of view | | | |

The noise equivalent temperature difference is defined as follows: a uniform square blackbody target at one temperature is observed by the thermal imager against a uniform blackbody background at another temperature; the temperature difference between target and background at which the signal-to-noise ratio of the system output equals 1 is called the noise equivalent temperature difference. NETD describes the temperature sensitivity of an infrared imaging system; its relationship with the noise voltage takes the form

$$\mathrm{NETD}=\frac{4F^{2}V_{n}}{\tau_{0}A_{d}R_{v}\,(\partial M/\partial T)}$$

where $F$ is the F-number of the optical system; $V_{n}$ can be the noise voltage of any single noise source or the total noise voltage; $\tau_{0}$ is the transmittance of the optics; $A_{d}$ is the detector area; $R_{v}$ is the voltage responsivity; and $\partial M/\partial T$ is the rate of change of the target's radiated power with temperature.
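
Numerically, NETD is simply the rms noise voltage divided by the thermal responsivity of the whole signal chain. A minimal worked example, with hypothetical measured values:

```python
# Hypothetical measurements of a blackbody target against a uniform background.
V_s_T1 = 1.20   # mean signal voltage at background temperature T1 = 20.0 C, volts
V_s_T2 = 1.68   # mean signal voltage at target temperature  T2 = 24.0 C, volts
V_n    = 0.024  # rms noise voltage, volts

responsivity = (V_s_T2 - V_s_T1) / (24.0 - 20.0)   # system thermal responsivity, V/K
NETD = V_n / responsivity                          # temperature difference at SNR = 1
print(f"NETD = {NETD*1000:.0f} mK")                # -> 200 mK
```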

For a scanning thermal imaging system, the noise is distributed among the subsystems; because of the scanning it can be represented as temporal noise and described by a noise power spectrum, with the total system noise treated as an equivalent noise source inserted at the detector. The reference electronic filter used in measuring NETD simulates the filtering effect of the stages following the detector in a first-generation thermal imaging system. Finally, the system noise can be found from the NETD and the system noise bandwidth.

For staring (second-generation) thermal imaging systems, the NETD measurement point is usually placed at the video signal output, before the display. Here NETD is no longer sufficient to describe the system noise. First, both the measurement and the calculation of NETD require a reference filter to simulate the subsequent signal processing circuits, yet in second-generation systems much of the signal processing occurs before the NETD measurement point. Second, noise arising from signal processing nonuniformity and focal plane nonuniformity contributes significantly to the system noise, and can even dominate it; NETD clearly cannot describe these noise sources.

In fact, the signal at the output of a second-generation focal plane already contains several kinds of noise: spatio-temporal random noise, spatially correlated noise that is independent of time, temporally correlated noise that is independent of space, and so on. The three-dimensional noise analysis method was introduced for this purpose, the three dimensions being the horizontal and vertical directions of the image and time. When the spatio-temporal random noise component $\sigma_{tvh}$ is converted to an equivalent temperature, it takes a form similar to NETD; indeed, for staring arrays $\sigma_{tvh}$ is often written as NETD. Since $\sigma_{tvh}$ is dominated by fluctuation noise, its noise power spectrum is taken to be white. The other components (often called fixed-pattern noise) should in principle have their own spatial noise power spectra, but current models usually treat them as white as well; this is only an approximation, and it particularly affects the accuracy of predictions for small-target detection. Although NETD is not sufficient to describe a second-generation thermal imaging system, out of habit it is still used in the models for such systems, and it is introduced here for that reason.
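
A sketch of the three-dimensional noise decomposition on a synthetic (time, row, column) data cube; this is a simplified directional-averaging version of the idea, not the full seven-component procedure:

```python
import numpy as np

def three_d_noise(cube):
    """Split a (time, rows, cols) flat-field cube into basic 3-D noise terms."""
    S    = cube.mean()                   # global mean
    m_t  = cube.mean(axis=(1, 2))        # per-frame (temporal) means
    m_vh = cube.mean(axis=0)             # time-averaged spatial pattern
    sigma_vh = np.std(m_vh)              # fixed-pattern (spatial) noise
    sigma_v  = np.std(cube.mean(axis=(0, 2)))   # row-to-row nonuniformity
    sigma_h  = np.std(cube.mean(axis=(0, 1)))   # column-to-column nonuniformity
    resid = cube - m_vh[None, :, :] - m_t[:, None, None] + S
    sigma_tvh = np.std(resid)            # random spatio-temporal noise
    return sigma_tvh, sigma_vh, sigma_v, sigma_h

cube = np.random.normal(0.0, 1.0, (64, 240, 320))   # synthetic noise cube
print(three_d_noise(cube))
```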

Figure 14.1 shows the histogram of the noise equivalent temperature difference of a 320×240 infrared focal plane array imaging system at an ambient temperature of 20 °C. It can be seen that the noise equivalent temperature difference differs from row to row, with an average value of about 0.10 °C.

We can examine the noise equivalent temperature difference of a system at three levels: pixel, row (or column), and the entire focal plane. Pixel-level NETD is commonly used to evaluate the performance of an infrared focal plane array, while row-level or whole-focal-plane NETD tests are typically used to evaluate an infrared thermal imaging system; the latter must be performed with nonuniformity correction applied. In addition, NETD is also a function of the detector's operating ambient temperature, so any NETD test must state the ambient temperature.

The modulation transfer function measures how faithfully the thermal imaging system reproduces the scene; it is the combined result of components with different spatio-temporal frequency characteristics. In models such as NVTherm it is usually assumed that the thermal imaging system is linear: each point on the target is imaged through the point spread function, and the image is the accumulation of the innumerable object points convolved with the point spread function. Figure 14.2 illustrates the role of the modulation transfer function in the imaging process. The MTF of an infrared thermal imaging system is determined mainly by four components: the optical system, the detector, the circuitry, and the display.

A thermal imaging system works over a wide band and the received scene radiation is incoherent, so its optical system can be treated as diffraction-limited. The transfer function of a diffraction-limited optical system depends on wavelength and aperture. For a common circular aperture, the diffraction-limited modulation transfer function is

$$\mathrm{MTF}_{\mathrm{diff}}(f)=\frac{2}{\pi}\left[\arccos\frac{f}{f_{c}}-\frac{f}{f_{c}}\sqrt{1-\left(\frac{f}{f_{c}}\right)^{2}}\,\right],\qquad f\le f_{c}$$

where $f_{c}$ is the cutoff frequency, determined by the wavelength and the F-number: $f_{c}=1/(\lambda F)$. Besides diffraction, imaging is also affected by the aberrations of the optical system. The blur-circle energy distribution caused by aberrations is modeled as a circularly symmetric Gaussian with standard deviation $\sigma$, whose modulation transfer function is

$$\mathrm{MTF}_{\mathrm{ab}}(f)=\exp\!\left(-2\pi^{2}\sigma^{2}f^{2}\right)$$

The total modulation transfer function of the optical system is the product of these two factors.

The detector spatially samples and spatially integrates the incident image. The sampling produces high-frequency aliasing; the modulation transfer function of the spatial integration is

$$\mathrm{MTF}_{\mathrm{det}}(f)=\left|\frac{\sin(\pi W f)}{\pi W f}\right|$$

where $W$ is the effective detection length of the pixel. The effect of the electronic circuitry on the signal is mainly low-pass filtering, usually described as a multi-stage RC low-pass filter whose modulation transfer function can be expressed as

$$\mathrm{MTF}_{\mathrm{el}}(f)=\left[1+\left(\frac{f}{f_{3\mathrm{dB}}}\right)^{2}\right]^{-n/2}$$

where $f_{3\mathrm{dB}}$ is the 3 dB attenuation frequency of the circuit and $n$ is the number of stages; the temporal frequencies of the circuit are converted to spatial frequencies through the scanning speed of the IRFPA. The point spread function of the CRT display is approximated by a Gaussian, so the modulation transfer function of the CRT display is taken as

$$\mathrm{MTF}_{\mathrm{disp}}(f)=\exp\!\left(-2\pi^{2}\sigma_{d}^{2}f^{2}\right)$$

where $\sigma_{d}$ characterizes the display spot size. The modulation transfer function of the entire system is therefore the product

$$\mathrm{MTF}_{\mathrm{sys}}(f)=\mathrm{MTF}_{\mathrm{diff}}(f)\,\mathrm{MTF}_{\mathrm{ab}}(f)\,\mathrm{MTF}_{\mathrm{det}}(f)\,\mathrm{MTF}_{\mathrm{el}}(f)\,\mathrm{MTF}_{\mathrm{disp}}(f)$$
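
The cascade nature of the system MTF is easy to exercise numerically. The sketch below multiplies the component MTFs derived above for a staring LWIR sensor (so the electronics term is omitted); the wavelength, F-number, pixel width, and blur parameters are assumed, illustrative values:

```python
import numpy as np

lam = 10e-6                      # mean wavelength, m (LWIR, assumed)
F   = 2.0                        # optics F-number (assumed)
w   = 30e-6                      # detector pixel width, m (assumed)
f_c = 1.0 / (lam * F)            # optical cutoff frequency, cycles/m (image plane)

f = np.linspace(1.0, f_c, 512)   # spatial frequency axis

mtf_diff = (2/np.pi)*(np.arccos(f/f_c) - (f/f_c)*np.sqrt(1-(f/f_c)**2))  # diffraction
mtf_abr  = np.exp(-2*np.pi**2 * (5e-6)**2 * f**2)   # Gaussian aberration blur, sigma = 5 um
mtf_det  = np.abs(np.sinc(w*f))                     # detector spatial integration (sinc)
mtf_disp = np.exp(-2*np.pi**2 * (10e-6)**2 * f**2)  # Gaussian display spot, sigma = 10 um

mtf_sys = mtf_diff * mtf_abr * mtf_det * mtf_disp   # cascade: product of components
print(f"System MTF at Nyquist (1/(2w)): {np.interp(1/(2*w), f, mtf_sys):.2f}")
```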

An infrared thermal imaging system can be viewed as a linear system. From linear-system theory there is a definite relationship between the output function and the input function; this relationship is the optical transfer function

$$\mathrm{OTF}(f)=\frac{G_{o}(f)}{G_{i}(f)}$$

where $G_{i}(f)$ and $G_{o}(f)$ are the Fourier transforms of the input and output functions, respectively. Its modulus, $\mathrm{MTF}(f)=\lvert\mathrm{OTF}(f)\rvert$, is called the modulation transfer function of the system. The modulation transfer function therefore reflects the response of the infrared thermal imaging system to image signals of different spatial frequencies.

If the input function is a step function (an ideal edge), its derivative is a δ function, whose Fourier transform is a constant. According to Fourier transform theory, the system MTF can then be obtained from the measured edge spread function $\mathrm{ESF}(x)$:

$$\mathrm{MTF}(f)=\left|\,\mathcal{F}\!\left\{\frac{\mathrm{d}\,\mathrm{ESF}(x)}{\mathrm{d}x}\right\}\right|\qquad(14\text{-}10)$$

If the input image satisfies the step-function requirement, Equation (14-10) can be used to calculate the modulation transfer function of the system.
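
A minimal sketch of Equation (14-10) in code, using a synthetic blurred edge in place of real scan data:

```python
import numpy as np

def mtf_from_edge(edge_profile, dx):
    """Estimate MTF from a measured edge spread function (ESF):
    LSF = d(ESF)/dx, MTF = |FFT(LSF)| normalized to its zero-frequency value."""
    lsf = np.gradient(edge_profile, dx)      # line spread function
    lsf = lsf * np.hanning(lsf.size)         # window to suppress noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, dx)
    return freqs, mtf / mtf[0]

# Synthetic blurred edge sampled every 15 um (stand-in for a real measurement)
x = np.arange(-64, 64) * 15e-6
esf = 0.5 * (1 + np.tanh(x / 40e-6))
freqs, mtf = mtf_from_edge(esf, 15e-6)
print(f"MTF at 10 cycles/mm: {np.interp(1e4, freqs, mtf):.2f}")
```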

In thermal imaging systems, the MRTD is the main parameter for comprehensively evaluating both the temperature resolution and the spatial resolution of the system; it involves not only the system characteristics but also the subjective factors of the observer. It is defined with a standard four-bar blackbody pattern (7:1 aspect ratio) of a given spatial frequency, viewed by the observer on the display for unlimited time. The temperature difference between target and background is increased gradually from zero until the observer can just resolve the four-bar pattern (with 50% probability); that temperature difference is the minimum resolvable temperature difference at that spatial frequency. MRTD is a function of spatial frequency: as the spatial frequency of the target pattern changes, the corresponding resolvable temperature difference differs.

MRTD is not only an important basis for designing infrared imaging systems; it can also be used to estimate the system's range of action. For a staring thermal imager, and taking three-dimensional noise into account, the minimum resolvable temperature difference model takes the FLIR92 form

$$\mathrm{MRTD}(f)=\frac{\pi^{2}}{\sqrt{14}}\;\mathrm{SNR}_{T}\;\frac{\sigma_{tvh}\,k_{z}(f)}{\mathrm{MTF}(f)}\,\sqrt{E_{t}\,E_{H}(f)\,E_{V}(f)}$$

where $\sigma_{tvh}$ is the random spatio-temporal noise; $k_{z}$ is the noise correction function, with the subscript $z$ standing for $H$ or $V$; $\mathrm{SNR}_{T}$ is the threshold signal-to-noise ratio for resolving the four-bar target, for which FLIR92 recommends the value 2.6; $E_{t}$ is the temporal integration function of the human eye; and $E_{H}$ and $E_{V}$ are the spatial integration functions of the human eye. The expressions for $E_{t}$, $E_{H}$, and $E_{V}$ involve $k_{t}$, the temporal sampling correlation; $F_{R}$, the frame rate; $\tau_{E}$, the integration time of the human eye; the horizontal and vertical sampling rates; and the system noise filter.

The minimum resolvable temperature difference is usually tested by having different observers view four-bar targets of different sizes at different distances. During the test, the blackbody is placed behind the target and a fixed temperature difference is maintained between them, and the pattern is observed by eye. If the observer can just barely resolve the target pattern, that temperature difference is the minimum resolvable temperature difference at the corresponding viewing angle.

One of the important indicators of an infrared imaging system is the distance at which it can detect, recognize, and identify specific targets under specified meteorological conditions, known respectively as the detection range, recognition range, and identification range. The range of action is the main tactical index of an infrared imaging system and plays a decisive role in system design. Range calculations use two models: the point-target model and the extended-source model. When the target is not large enough to fill one pixel, it can be treated as a point target; in theory, as long as the target's radiant energy reaching the system exceeds the detection threshold, the system responds and the target can be detected. In practice, however, the main purpose of detection is to obtain precise information about the target, such as its type and model, so the point-target model has little practical significance here, and the target should be treated as an extended source. When studying the range of an infrared imaging system against such targets, one must consider not only the radiant energy of the target but also the influence of its size and shape on the range estimate.

The MRTD is used to estimate the range of a thermal imaging system. The basic idea is that the apparent temperature difference between target and background at the target's characteristic spatial frequency, after transmission through the atmosphere, must still exceed the MRTD at that frequency when it reaches the thermal imaging system, and that the angle subtended by the target must be at least the minimum angle required for the desired observation level. The traditional range-estimation conditions are

$$\Delta T_{0}\,\tau_{a}(R)\ \ge\ \mathrm{MRTD}(f_{t}),\qquad f_{t}=\frac{N_{e}}{\theta_{t}}=\frac{N_{e}R}{H}$$

where $f_{t}$ is the characteristic spatial frequency of the target; $R$ is the target-to-system distance; $\Delta T_{0}\,\tau_{a}(R)$ is the temperature difference between target and background on reaching the thermal imaging system, a function of the distance $R$; $N_{e}$ is the number of target equivalent bars required by the Johnson criteria; $H$ is the target height; and $\theta_{t}=H/R$ is the angle subtended by the target at the system.
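
The range criterion lends itself to a simple numerical solution: sweep the range, compare the attenuated temperature difference against the MRTD at the required frequency, and take the largest range that still satisfies the inequality. Everything below (the MRTD curve, transmittance, target size, and bar count) is an assumed illustration:

```python
import numpy as np

# Assumed inputs for illustration only.
H      = 2.3      # target critical dimension, m
dT0    = 4.0      # zero-range target/background temperature difference, K
tau_km = 0.85     # average atmospheric transmittance per km
N_e    = 3.0      # Johnson bar count for the desired observation level

def mrtd(f):      # stand-in lab MRTD curve, K vs. spatial frequency (cycles/mrad)
    return 0.05 * np.exp(f / 1.2)

R    = np.linspace(0.1, 20.0, 2000) * 1e3     # candidate ranges, m
f_t  = N_e * R / (H * 1e3)                    # required frequency, cycles/mrad
dT_R = dT0 * tau_km ** (R / 1e3)              # apparent delta-T after attenuation

ok = dT_R >= mrtd(f_t)                        # the MRTD range criterion
print(f"Estimated range: {R[ok].max()/1e3:.1f} km")
```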

The performance parameters of a thermal imaging system are laboratory parameters. When the system is used to detect real targets, the target characteristics and environmental conditions do not match the laboratory standard conditions, and the required observation level and probability also affect the result, so the MRTD and some other parameters must be corrected.

(1) Atmospheric transmission attenuation

In detecting a real target, the target's infrared image information is always attenuated by atmospheric transmission; this influence cannot be ignored, and actual atmospheric attenuation is one of the most important correction terms. For a small-temperature-difference target, the signal received by the thermal imaging system is generated by the difference in radiated power between target and background, which is proportional to the temperature difference between them. Let the zero-range apparent temperature difference between the blackbody target and the background be $\Delta T_{0}$; after atmospheric transmission over the distance $R$, the equivalent temperature difference between target and background reaching the thermal imaging system can be approximated as

$$\Delta T(R)=\Delta T_{0}\,e^{-\bar{\sigma}R}=\Delta T_{0}\,\bar{\tau}_{a}(R)$$

where $\bar{\sigma}$ and $\bar{\tau}_{a}$ are, respectively, the average extinction coefficient and the average atmospheric transmittance along the target direction in the working band of the thermal imaging system. Atmospheric attenuation affects the range of thermal imaging systems, and the attenuation produced by different atmospheric conditions varies greatly; the technical and tactical indicators of a thermal imaging system should therefore state the atmospheric conditions clearly (atmospheric pressure, temperature, relative humidity, visibility, transmission path, and so on).

(2) Determination of observation level

The observation level is a division of visual tasks that links system performance with human vision and must be established through visual psychophysical experiments. The currently accepted division is the Johnson criterion, which relates the observation of a target to the observation of an equivalent bar pattern and divides visual observation into three levels: detection, recognition, and identification, defined as shown in the following table.


Johnson criteria for visual observation levels

| Observation level | Definition | Number of bars required (50% probability) |
|-------------------|------------|-------------------------------------------|
| Detection | A target is found to be present in the field of view | 0.625 |
| Recognition | The class of the target can be distinguished | 1.882 |
| Identification | The model and other characteristics of the target can be distinguished | 3.529 |

In general, random noise limits detection performance, system magnification limits classification (recognition) performance, and the modulation transfer function and the scan raster limit identification performance. The number of bars required differs greatly between observation levels and strongly influences the range estimate, so the range of a thermal imaging system should be analyzed at the observation level appropriate to the task at hand (system design, program demonstration, or technical and tactical indicators). On the other hand, the number of bars quoted for each observation level corresponds to a probability of 50%, so other probabilities correspond to different numbers of bars. Using a probability-integral fit, the relationship between the number of bars $N$ and the probability $P$ can be expressed as

$$P(N)=\frac{(N/N_{50})^{E}}{1+(N/N_{50})^{E}},\qquad E=2.7+0.7\,\frac{N}{N_{50}}$$

where $N_{50}$ is the number of bars required at 50% probability; the values of $N_{50}$ corresponding to detection, recognition, and identification are 0.625, 1.882, and 3.529, respectively. In range estimation, the number of bars required for a given observation level and probability can be determined by iterative solution.
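
A small sketch of this probability fit: given an assumed $N_{50}$, the function can be inverted numerically to find the bar count required at any probability:

```python
import numpy as np
from scipy.optimize import brentq

def ttpf(N, N50):
    """Empirical target transfer probability function (Johnson-criteria fit)."""
    E = 2.7 + 0.7 * (N / N50)
    return (N / N50)**E / (1.0 + (N / N50)**E)

N50 = 3.0   # bars required at 50% probability (illustrative value)
for p in (0.5, 0.8, 0.95):
    N_req = brentq(lambda N: ttpf(N, N50) - p, 1e-3, 50.0)
    print(f"P = {p:.2f}: N = {N_req:.2f} bars")
```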

(3) Influence of target shape

The test pattern used for the laboratory performance parameters is a four-bar target whose bars have an aspect ratio of 7:1, while the equivalent bar pattern of an actual target generally does not satisfy this condition. In range estimation, a correction must therefore be made according to the bar aspect ratio corresponding to the actual target. Let the target height be $h$, the target direction factor (width-to-height ratio) be $\beta$, and the number of equivalent bars required for the observation level be $N$; each equivalent bar then has width $h/N$ and length $\beta h$, so the aspect ratio of the target's equivalent bar pattern becomes

$$\varepsilon = \frac{\beta h}{h/N} = \beta N$$

The bar aspect ratio is related to the spatial integration of the human eye: the longer the bar, the greater the integration and the higher the perceived signal-to-noise ratio. As can be seen from this expression, for an actual target whose bar aspect ratio $\varepsilon = \beta N$ departs from the 7:1 laboratory pattern, the MRTD should be corrected to

$$MRTD'(f) = \sqrt{\frac{7}{\beta N}}\; MRTD(f)$$

where $MRTD(f)$ is the minimum resolvable temperature difference measured in the laboratory.
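A minimal numerical sketch of this correction (it assumes the square-root eye-integration form reconstructed above; the input values are illustrative):

```python
import math

def mrtd_corrected(mrtd_lab: float, beta: float, n_bars: float) -> float:
    """Correct a laboratory MRTD (measured with 7:1 bars) for a target whose
    equivalent bar pattern has aspect ratio beta * n_bars."""
    epsilon = beta * n_bars                  # aspect ratio of the target's equivalent bars
    return mrtd_lab * math.sqrt(7.0 / epsilon)

# Example: lab MRTD of 0.5 K, direction factor 2.0, 4 equivalent bars required.
print(mrtd_corrected(0.5, 2.0, 4.0))         # longer bars -> lower (better) corrected MRTD
```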

This section introduces the design of a multiband image fusion system, taking as an example the fusion imaging system of a low-light CCD television and a thermal imager developed by Nanjing University of Science and Technology. The workflow is as follows: the low-light TV and the infrared thermal imager output their video to the fusion circuit, which captures both streams. The captured low-light video first undergoes spatio-temporal filtering and noise reduction to remove flicker noise in the low-light image. The denoised low-light video and the infrared video then undergo image registration, after which the registered infrared and low-light images are fused using color transfer and finally sent to the D/A stage for conversion to standard video output.

In a traditional image fusion system, the front-end infrared imaging system and the low-light/visible imaging system each output an analog video signal to the fusion circuit, which must digitize them separately before performing registration and fusion; this inevitably increases the delay from detector to monitor and degrades image quality. In this fusion imaging system, a denoising stage for the low-light image is added, which effectively removes its flicker noise, and the uncooled infrared imaging module introduced in the previous chapter outputs its non-uniformity-corrected, adaptively enhanced digital infrared image directly to the fusion circuit. This avoids the susceptibility of analog signals to interference and reduces the signal delay.

For good fusion, the front-end detectors must meet two basic requirements: (a) both must have nearly the same field of view; (b) their optical axes must be parallel. The first requirement is met by choosing lenses of the right focal lengths. In this system the infrared detector has 320 × 240 pixels with a 45 μm pitch, the low-light TV uses a 1/2-inch CCD, and the infrared lens focal length is 55 mm; to give the two channels the same field of view, the low-light lens focal length must be about 24.5 mm. With careful optical design and high-precision optical fabrication, infrared and low-light objectives with nearly identical fields of view can be obtained.
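The 24.5 mm figure can be checked from the sensor geometry (a short sketch; it assumes the common 6.4 mm × 4.8 mm active area for a 1/2-inch CCD):

```python
import math

# Infrared channel: 320 x 240 pixels at 45 um pitch, 55 mm lens.
ir_width_mm = 320 * 0.045                            # 14.4 mm sensor width
f_ir = 55.0
hfov_ir = 2 * math.atan(ir_width_mm / (2 * f_ir))    # horizontal field of view (rad)

# Low-light channel: 1/2-inch CCD, assumed 6.4 mm active width.
ccd_width_mm = 6.4
f_llt = ccd_width_mm / (2 * math.tan(hfov_ir / 2))   # focal length matching the IR FOV

print(f"IR HFOV = {math.degrees(hfov_ir):.2f} deg, "
      f"matching low-light focal length = {f_llt:.1f} mm")
# -> about 14.9 deg and 24.4 mm, consistent with the ~24.5 mm lens chosen.
```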

To ensure accurate registration of the dual-source images from near range to infinity, a precise optical-axis adjustment must be performed when the two detectors are mounted, making the two optical axes parallel so that the viewing directions are consistent. The adjustment instrument is a "multi-spectral optical system optical-axis parallelism test device". It can align optical axes from the visible through the far infrared, with an adjustment accuracy (optical-axis angle) of 0.1 mrad. When a voltage is applied to the cross target, the target heats up and the thermal radiation it emits is received by the infrared detector, displaying a cross reticle in the infrared image. When a low-light source illuminates the cross target, the low-light detector sees the shadow of the same cross reticle.

When the low-light observer and the thermal imager observe the same sufficiently distant point target from the same position, under the corresponding permissible illuminance, the optical axes are parallel if the image of the point falls on the intersection of each instrument's reticle. Otherwise, using the properties of an off-axis parabolic mirror, place a cross reticle at the focus F and immerse the objective of the instrument under adjustment in the collimated beam; the observed cross is then equivalent to a long-range target observed in the field. Measure the deviation between the intersection of the target cross in the field of view and the intersection of the instrument's own reticle, adjust each instrument until the two crosses coincide, and lock the adjustment to maintain the calibration accuracy. The center positions of the images captured by the two observers then coincide.

 

The hardware of the fusion circuit consists mainly of a TMS320DM642 DSP, a Cyclone III FPGA, an A/D video capture chip, a D/A video synthesis chip, the power and clock systems, a CPLD, an RS-232/422/485-compatible communication interface, Flash, SDRAM, EEPROM, etc.

 

The main technical indicators of the fusion system are:

◇Main processor: TMS320DM642, clock frequency up to 720 MHz, processing capacity up to 5760 MIPS

◇FPGA: Altera EP3C55F484C8N, integrating about 56,000 logic elements and 2,340 Kbit of embedded SRAM

◇SDRAM: two HY57V283220-T devices, capacity 4M × 64 bits, 133 MHz working clock

◇FLASH: one AM29033C, capacity 4M × 8 bits, 70 ns access time

◇EEPROM: one AT24C08B, capacity 1024 × 8 bits

The DSP is the core of the system; it mainly performs image registration, image fusion, and related functions, and also handles data communication, system management, data storage, and peripheral device configuration.

The asynchronous serial communication interface is mainly used for data transfer between the fusion circuit and a PC, such as the scaling coefficient and the registration mode, and it leaves room for future extension of functions between the system and the PC. For example, by relying on the PC's more powerful image processing, the system can be supplied with more effective scaling factors and registration modes, allowing the user to debug the system without knowing its internal design.

The EEPROM mainly stores adjustable parameters such as the split-display position, scaling coefficient, and registration mode. It uses the IIC interface and is accessed directly by the DSP.

The CPLD implements the system's glue logic, such as the read/write enable signals for the FLASH and serial port, and IIC transmission channel selection.

(1) DSP-based hardware design.

The DSP chosen is TI's TMS320DM642. It is based on the C64x core and uses TI's second-generation high-performance very-long-instruction-word (VLIW) architecture, fetching 256-bit instruction packets and executing up to eight 32-bit instructions in parallel. The DM642 runs at up to 720 MHz, for a peak processing rate of 5760 MIPS.

The DM642's main peripheral resources are three video ports, one 10/100 Mbit/s Ethernet port, one audio interface, one host-port interface (HPI), three 32-bit timers, one PCI interface, one ATA hard-disk interface, and 16 GPIO pins. This rich set of integrated peripherals leaves room for future expansion of the system's functions.

(a) DSP peripheral configuration.

Many DM642 pins are multiplexed among several peripherals. To use the peripherals effectively and avoid unintended peripheral operation, each peripheral must be fixed in a specific state, i.e., configured deliberately. The peripheral configuration scheme of the fusion circuit is given below.

DSP peripheral configuration

Peripheral    Configuration      Remarks
VP0           Output             ITU-R BT.656
VP1           Input              ITU-R BT.656
VP2           Input              ITU-R BT.656
PLL           ×12                ECLKOUT1 is 133 MHz
GPIO0         Output             IIC channel select strobe
GPIO1-15      High impedance
PCI           Disabled
EMAC          Disabled
HPI           High impedance
TOUT1         Grounded           Big-endian mode

 

On the system board, the external memory resources extended through the EMIF interface mainly include the SDRAM, the FLASH, and the UART. The address space allocation is shown in the table.

 

Address space allocation

Peripheral    Capacity       Address range
SDRAM         4M × 64 bits   0x80000000-0x81FFFFFF
FLASH         4M × 8 bits    0x90000000-0x9007FFFF
UART          8 bytes        0x90080000-0x90080007

 

The TMS320DM642 supports several power-on boot modes, selected by the state of AEA[22:21] at reset; the correspondence between pin states and boot modes is shown in the table. In the fusion circuit, the configuration is set with pull-up and pull-down resistors.

Power-on bootstrap design

AEA[22:21]    Boot mode
0 0           No boot
0 1           HPI or PCI boot
1 0           Reserved
1 1           EMIFA boot from FLASH

 

(b) DM642 clock design.

The DM642's clock system consists of a PLL, dividers, and multiplexers, which clock the DM642's CPU core, EMIF, and on-chip peripherals.

The DSP clock on the fusion board is generated by a CY22801 programmable clock chip. The frequency input at CLKIN is 50 MHz and CLKMODE[1:0] is configured as 10, i.e., the on-chip PLL multiplies by 12, giving a CPU core frequency of 50 × 12 = 600 MHz. The ECLKIN input is 133 MHz and AEA[20:19] is pulled down to 00, so the EMIF clock ECLKOUT1 comes from ECLKIN and is 133 MHz. The on-chip bus, EDMA, and L2 memory operate at 1/2 of the CPU core frequency, i.e., 300 MHz; the timers run at 1/8 of the CPU core frequency, i.e., 75 MHz.

DM642 clock configuration

CLKMODE[1:0]   Multiplication factor    AEA[20:19]   EMIFA input clock
0 0            Bypass (×1)              0 0          ECLKIN
0 1            ×6                       0 1          CPU/4
1 0            ×12                      1 0          CPU/6
1 1            Reserved                 1 1          Reserved

 

The PLL must provide the DSP with a 600 MHz high-speed clock whose rising and falling edges are on the nanosecond scale; the PLL's input clock signal and power supply therefore require special treatment.

(c) DM642 video port design.

The DM642 integrates three video ports, each consisting of a 20-bit data bus, two clock pins VPxCLK0 (input) and VPxCLK1 (input/output), and three control signals VPxCTL0, VPxCTL1, and VPxCTL2. The clock pins carry the video source's clock, and the control signals serve as the video source's synchronization inputs/outputs (line sync, frame sync, field flag, video capture enable, etc.).

Each video port is divided into two channels, A (lower) and B (upper), and channel A of VP0 is multiplexed with McBSP0.

Likewise, channel A of VP1 is multiplexed with McBSP1, the B channels of VP0 and VP1 are multiplexed with the McASP, and VP2 has dedicated pins. On the fusion board, channel A of VP0 is configured as the video output port, channel A of VP1 as a video input port, and channels A and B of VP2 as two more video input ports. When VP0 is the video output port, VP0CLK1 serves as the output clock of the video stream and VP0CLK0 as the input clock; VP0CTL0, VP0CTL1, and VP0CTL2 serve as the synchronization signals of the output video. When VP1 is a single-channel input port, VP1CLK0 is the input clock of the video source and VP1CLK1 is unused; VP1CTL0, VP1CTL1, and VP1CTL2 connect respectively to the source's CAPEN/AVID/HSYNC, VBLNK/VSYNC, and FID synchronization inputs. When VP2 is a dual-channel video input, VP2CLK0 and VP2CLK1 serve as the input clocks of the two video sources, and VP2CTL0 and VP2CTL1 as their capture-enable signals.

On the fusion board the video inputs use the BT.656 format, in which line/field synchronization is controlled by the time-base codes embedded in each BT.656 video stream. Sampling of the BT.656 data stream is gated either by the CAPEN signal or by the embedded time-base codes: data are not sampled while CAPEN is inactive or between the EAV and SAV codes. When a DM642 video port is used as an 8-bit port, it uses the upper 8 bits of the 10-bit data bus, i.e., VPxD[9:2] or VPxD[19:12]. The start of capture, horizontal synchronization, vertical synchronization, and so on for the BT.656 stream are controlled by the CAPEN signal and by the VCEN, EXC, HRST, VRST, and FLDD fields in the video channel control register VCxCTL.
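To make the time-base codes concrete, the following sketch (illustrative only, not the DSP's hardware logic) scans a BT.656 byte stream for the FF 00 00 XY sync codes and extracts the active-video bytes between each SAV and the following EAV:

```python
def parse_bt656(stream: bytes):
    """Scan a BT.656 byte stream for FF 00 00 XY time-base codes and yield
    the active-video bytes between each SAV and the following EAV."""
    i, sav = 0, None
    while i + 3 < len(stream):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            xy = stream[i + 3]
            f = (xy >> 6) & 1            # field identifier
            v = (xy >> 5) & 1            # vertical blanking flag
            h = (xy >> 4) & 1            # H bit: 0 = SAV, 1 = EAV
            if h == 0:                   # SAV: start of active video
                sav = i + 4
            elif sav is not None:        # EAV: end of active video
                yield f, v, stream[sav:i]
                sav = None
            i += 4
        else:
            i += 1
```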

(2) A/D video capture circuit.

In view of the technical requirements of the video protocol and the demands of future military applications on the chip, we chose TI's TVP5150 as the A/D video capture chip. The TVP5150 is an ultra-low-power, dual-channel, 8-bit video capture chip with automatic gain control; it typically consumes 115 mW and operates over the -40°C to +85°C temperature range. It uses differential video preprocessing to achieve signal clamping and sync separation, and an embedded 4-line adaptive comb filter separates the luminance and chrominance signals, effectively eliminating ghost images. The TVP5150 supports the NTSC, PAL, and SECAM video formats as well as composite and S-Video inputs; its output satisfies the ITU-R BT.656 standard, producing YCbCr (4:2:2) video with embedded digital sync codes.

The TVP5150 is configured over the IIC bus; the configurable functions include input video format, input channel, brightness, contrast, chroma gain, and the start positions of sync and blanking. Since the DM642 DSP supports BT.656 video streams natively, it connects seamlessly to the TVP5150's output stream. The INTREQ pin of the TVP5150 acts as the CAPEN of the VP port to gate acquisition: when INTREQ is 1, the VP port may capture video data; when it is 0, capture is disabled.

(3) D/A video synthesis circuit.

In view of the video protocol's technical requirements for video output and the accuracy requirements of the fused image, we chose the Philips SAA7121 as the video output chip. The SAA7121 is a high-precision (10-bit), single-supply video synthesis chip that supports both PAL and NTSC and provides composite (CVBS) and S-Video outputs. It has a programmable line-synchronization function, which facilitates synchronous phase control of the video signal, and a built-in palette allows correction of the chrominance signal of the color image.

The SAA7121 is also configured over the IIC bus; the configurable functions include output video format, output channel, line sync phase, color-difference signal gain, and image gray-scale reference level.

(4) Serial communication interface design.

The serial communication interface is implemented by extending the DSP bus. The serial transceiver device is a TL16C752B, which contains two independent channels, each with 64-byte receive and transmit FIFOs and its own modem interface signals, with a maximum transfer rate of 3 Mbit/s.

The TL16C752B presents an 8-bit asynchronous parallel memory interface, is powered at +3.3 V, and can be connected directly to the DM642's external memory interface EMIFA. It also provides two interrupt request signals, INTA and INTB, for channels A and B to interrupt the DM642. In this system, only channel A is used.

Level shifting uses the MAX3160, a multiprotocol transceiver that allows the asynchronous serial interface to be configured for RS-232, RS-422, or RS-485 levels. The modem signals of the TL16C752B are not all brought out to the serial interface: the RS-232 standard uses a 2-wire connection (RXD and TXD), the RS-422 standard uses a 4-wire connection (Y, Z, A, B), and RS-485 uses a 2-wire connection.

The RS485/RS232 select pin on the MAX3160 chooses between RS-422/485 operation and RS-232 operation, and the HDPLX pin selects RS-422 or RS-485.

(1) Analysis of the characteristics of low-light images.

The low-light image intensifier is the core device of low-light imaging. With the development of low-light night vision technology, image intensifiers have gone through four generations of innovation. The imaging mechanism of a typical current night vision device is as follows: weak natural light reflected from the target enters the night vision device and is focused by the objective lens onto the photocathode of the image intensifier, exciting photoelectrons. The photoelectrons are accelerated, focused, and imaged by the electron-optical system inside the intensifier, striking the microchannel plate (MCP) at very high speed; after multiplication in the MCP they bombard the phosphor screen and excite sufficiently strong visible light. A distant object illuminated by weak natural light is thus turned into a visible image suitable for human observation, further magnified by the eyepiece for more effective viewing. The low-light image therefore differs from an ordinary visible-light image: it depends not only on the illumination conditions and the reflectance distribution of the scene, but also on the signal conversion of the imaging device, the gain of the image intensifier, and the system noise. Its characteristics are summarized as follows:

(a) The low-light image matches the viewing habits of the human eye, resolves detail well, and characterizes objects well. These are its advantages, and the reason we chose it as a source of fusion information.

(b) The low-light image shows obvious random flicker noise; the lower the illuminance, the stronger the noise, which may even swamp the image. The noise comes mainly from the image intensifier and the CCD, including thermal noise in the intensifier's semiconductor materials, shot noise from photoelectron emission at the photocathode, noise introduced by gain fluctuations of the microchannel plate, CCD dark-current noise, CCD reset noise, and CCD amplifier noise.

(c) Low-light images usually have low contrast, a small gray-scale dynamic range, and blurred edges that are hard to distinguish from the surroundings. In the gray-level histogram, the gray values of a low-light image are generally concentrated in the low range and unevenly distributed. This is mainly caused by the noise of the low-light device degrading the imaging contrast of the gray-scale image.

(2) Low-light image noise reduction.

To remove low-light image noise effectively, the image must be denoised. Common image denoising methods include neighborhood averaging, multi-frame accumulation, and time-domain recursion; they remove noise effectively but introduce problems such as motion blur and loss of spatial resolution. For this reason, an adaptive time-domain filter is designed here.

The time-domain recursive filtering algorithm exploits the high correlation between two adjacent frames and the randomness of noise, taking a weighted average of the image signal over frame periods to improve the image signal-to-noise ratio. Its mathematical expression is

$$Y_n(i,j) = (1-K)\,X_n(i,j) + K\,Y_{n-1}(i,j)$$

where $Y_n$ is the output frame after the $n$th recursive pass, $X_n$ is the current input frame, $Y_{n-1}$ is the frame after the $(n-1)$th recursive pass, and $K$ is the recursive filter coefficient. The filtering is clearly recursive in time: the output is a weighted average of the current frame and the result of the previous $n-1$ passes, i.e., a weighted average over all earlier frames, in which older frames carry smaller weights and more recent frames larger weights.

From a signal-processing perspective, the expression of the time-domain recursive filter shows that it is in fact a low-pass filter. Its cutoff frequency depends on the filter coefficient K: the larger K is, the narrower the filter bandwidth. K takes values between 0 and 1.

Under this model, random noise is filtered out because of its high-frequency character, while the image signal passes because of its low-frequency character, so the random noise of the image is suppressed. The degree of noise suppression depends on the filter coefficient K: the larger K is, the stronger the suppression. The improvement in image signal-to-noise ratio is

$$G = \sqrt{\frac{1+K}{1-K}}$$

This algorithm therefore improves the image signal-to-noise ratio considerably. However, for the recursive filtering algorithm it also follows that the larger the filter coefficient K, the more severely a moving target smears. This is because a larger K gives the previous frames a larger share of the processed image and the current frame a smaller share, so moving targets smear more, and the faster the motion, the worse the smear. In recursive filtering of video containing moving targets, noise reduction and smear are thus an unavoidable trade-off. Since the smear depends on K, we can adjust K in real time according to the speed of target motion, balancing the two effects in compromise and ultimately achieving good processing of dynamic images.

If the rate of change of the target in the image is characterized by the rate ε at which pixels of the current frame differ from the adjacent previous frame, the following model can be established. Set a threshold on the difference between corresponding pixels of the two frames and count the pixels whose difference exceeds it; this yields the pixel change rate ε. The filter coefficient K is then made a function of ε. When ε is large, the target is moving fast, so K takes a small value to keep the dynamic response good, at the cost of weaker noise reduction; conversely, when ε is small, the target is slow or nearly stationary, so K takes a large value to strengthen noise reduction. This is the idea of adaptive time-domain recursive filtering, in which the functional relationship between the filter coefficient K and the image change rate ε becomes the core problem of the algorithm.

Many functional forms relating K to ε have been proposed for adaptive time-domain recursive filtering, including nonlinear and piecewise-linear forms. Since evaluating a continuous function in an FPGA is impractical, we implement adaptive time-domain recursive filtering in the FPGA using a piecewise discrete mapping table between K and ε.
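The following sketch shows the complete adaptive algorithm, in Python/NumPy for clarity (the difference threshold and the ε-to-K mapping table are illustrative assumptions, not the values used in the FPGA):

```python
import numpy as np

# Illustrative piecewise mapping from change rate eps to filter coefficient K.
EPS_BREAKS = [0.02, 0.05, 0.10, 0.20]        # assumed breakpoints (fraction of changed pixels)
K_VALUES   = [0.90, 0.75, 0.50, 0.25, 0.10]  # assumed K for each segment

def k_from_eps(eps: float) -> float:
    """Discrete lookup: fast motion (large eps) -> small K (weak smoothing)."""
    for brk, k in zip(EPS_BREAKS, K_VALUES):
        if eps < brk:
            return k
    return K_VALUES[-1]

def adaptive_recursive_filter(frames, diff_threshold=15):
    """Temporal recursive filtering Y_n = (1-K)X_n + K*Y_{n-1}, with K chosen
    per frame from the fraction of pixels that changed since the last output."""
    y_prev = None
    for x in frames:                          # x: 2-D uint8 frame
        x = x.astype(np.float32)
        if y_prev is None:
            y_prev = x
            yield x.astype(np.uint8)
            continue
        eps = np.mean(np.abs(x - y_prev) > diff_threshold)  # pixel change rate
        k = k_from_eps(eps)
        y = (1.0 - k) * x + k * y_prev        # weighted average with previous output
        y_prev = y
        yield np.clip(y, 0, 255).astype(np.uint8)
```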

(3) Image contrast enhancement algorithm.

Image contrast directly reflects the clarity or blur of an image: the higher the contrast, the clearer the detail and the more distinguishable the target features. Contrast enhancement is handled here mainly through histogram equalization and gray-scale stretching transformations.

(a) Image histogram equalization processing

Judging from the gray-level histogram statistics of infrared and low-light images, pixels fall on low gray values with relatively high probability, making the image dark and its details unclear. Histogram equalization is the process of transforming an image with a known gray-level probability distribution into a new image with a uniform probability distribution.

First, the probability of occurrence of each gray level in the image is computed. Since the gray levels of a digital image are discrete, the probability of occurrence of the $k$th gray level $r_k$ in an image is

$$p(r_k) = \frac{n_k}{N}, \qquad k = 0, 1, \dots, L-1$$

where $N$ is the total number of pixels in the image, $n_k$ is the number of pixels with gray level $r_k$, and $L$ is the number of gray levels.

Then the cumulative probability distribution of the gray levels is used as the gray transformation function, i.e., the histogram equalization equation

$$s_k = T(r_k) = \sum_{j=0}^{k} p(r_j) = \sum_{j=0}^{k} \frac{n_j}{N}$$

Thus, by the formula above, each pixel of the original image with gray level $r_k$ is mapped to the corresponding pixel of the new image with gray level $s_k$.

From the process of histogram equalization it can be seen that the degree of enhancement of a gray level is proportional to its cumulative probability. For typical images, the background and target occupy most of the pixels, so histogram equalization increases the contrast between background and target.
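A minimal NumPy sketch of this procedure for 8-bit images:

```python
import numpy as np

def histogram_equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization: map each gray level through the cumulative
    distribution s_k = sum_{j<=k} n_j / N, scaled back to [0, levels-1]."""
    hist = np.bincount(img.ravel(), minlength=levels)    # n_k for each gray level
    cdf = np.cumsum(hist) / img.size                     # cumulative probabilities s_k
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # gray transformation table
    return lut[img]                                      # apply the mapping per pixel
```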

(b) Grayscale contrast stretching.

Gray-scale transformation is a classic method for improving image contrast. Broadly speaking, many kinds of gray transformation are used for image enhancement, differing mainly in the choice of transformation function. Whatever the function, it enhances contrast by enlarging the dynamic range between certain gray values in the image, thereby highlighting image detail. Gray transformations are usually easy to implement in hardware, amounting to the evaluation of a function; they are generally divided into linear and nonlinear transformations.

In practical applications, different transform functions can be designed for different purposes. A typical contrast-stretching transform is the piecewise linear transform. Its advantage is that it can be tuned to the actual needs of the image, stretching the gray-scale details of the features of interest while suppressing gray levels that are not of interest; it improves the overall contrast and also strengthens line and edge features in the image. The general form of a piecewise linear transformation with breakpoints $(x_1, y_1)$ and $(x_2, y_2)$, for an image with gray range $[0, 255]$, is

$$f(x) = \begin{cases} \dfrac{y_1}{x_1}\,x, & 0 \le x < x_1 \\[2mm] \dfrac{y_2 - y_1}{x_2 - x_1}\,(x - x_1) + y_1, & x_1 \le x < x_2 \\[2mm] \dfrac{255 - y_2}{255 - x_2}\,(x - x_2) + y_2, & x_2 \le x \le 255 \end{cases}$$

By adjusting the breakpoints $(x_1, y_1)$ and $(x_2, y_2)$, and hence the slopes of the segments, the shape of the function can be changed, yielding many special transformations; the specific function is tuned during actual processing.
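A minimal NumPy sketch of the three-segment stretch (the breakpoints in the usage comment are arbitrary examples; it assumes 0 < x1 < x2 < 255):

```python
import numpy as np

def piecewise_stretch(img: np.ndarray, x1: int, y1: int, x2: int, y2: int) -> np.ndarray:
    """Three-segment linear contrast stretch with breakpoints (x1, y1), (x2, y2)."""
    x = img.astype(np.float32)
    out = np.empty_like(x)
    lo, mid, hi = x < x1, (x >= x1) & (x < x2), x >= x2
    out[lo]  = y1 / x1 * x[lo]                                  # compress dark range
    out[mid] = (y2 - y1) / (x2 - x1) * (x[mid] - x1) + y1       # stretch mid-tones
    out[hi]  = (255 - y2) / (255 - x2) * (x[hi] - x2) + y2      # compress bright range
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: stretch the mid-tones 60..180 across most of the output range.
# enhanced = piecewise_stretch(img, 60, 30, 180, 220)
```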

(4) FPGA-based low-light image preprocessing.

FPGA-based low-light image preprocessing consists mainly of real-time noise reduction and enhancement of the image to be fused. When the system starts, the software must first configure the FPGA's peripheral components so that the hardware can operate normally. In this system, the internal registers of the video decoding chip SAA7111AHZ must be configured for the required operating mode, so the logic includes a register configuration module. Second, because this is a PAL real-time video processing system, a PAL-standard video synthesis module must provide the reference timing signals needed by the off-chip video synthesis device. Meanwhile, this sequential logic system needs a signal synchronization delay module to delay the relevant video reference timing control signals so that they meet the design requirements. Since different algorithms must read and write off-chip memory, a ping-pong buffer control module for the off-chip SRAM is designed; with appropriate adjustment it can be called by any algorithm design module that needs a large amount of storage. A small Nios II soft-core processor control module is also designed according to actual needs, so that signals in the system can be controlled from an external keypad. Finally, the image processing algorithm implementation module is the processing core of the image: it provides the platform on which the resources of the other functional modules are combined to implement various image optimization algorithms. Specifically, in our design the low-light channel implements in hardware the multi-mode adaptive recursive denoising algorithm and the gray-scale transform algorithm, performing pipelined noise reduction and enhancement. Many other image processing algorithms could also be implemented on this platform, since the algorithm platform module provides ample external resources and interfaces within the software system.

(a) SRAM ping-pong buffer control module design.

In FPGA-based real-time video processing the amount of data to be handled is very large; in particular, time-domain inter-frame filtering needs more frame memory than the FPGA's on-chip storage alone can provide. Four 1M × 16-bit SRAMs are therefore designed into the hardware as off-chip memory. Recursive filtering, adaptive recursive filtering, and other time-domain video algorithms read and write the SRAM in a ping-pong structure, so SRAM ping-pong buffer control is a common, necessary module, differing only slightly between algorithms in how it is used. The principle of ping-pong buffering is to read and write two memory devices alternately: while memory A is being read, the corresponding addresses of memory B are written; conversely, while memory B is read, memory A is written.

The SRAM ping-pong buffer control designed here operates on digital video pixel signals. The digital stream output by the video capture chip contains not only the pixel gray values but also the synchronization and blanking signals of the video format, so the ping-pong control must also strip this invalid data to ensure that only pixels involved in the algorithm are stored. This is also dictated by the size of the storage space: one SRAM is only large enough to hold one frame of valid pixel data.

Three sub-modules are required in this control module. The first generates the SRAM read/write enable timing for valid pixel data by combining the video reference control signals HREF, VREF, OE, and VS with the reference video timing logic. The second is an SRAM address synchronization counter driven by the same reference control signals. The third is a tri-state control module for reading and writing the SRAM data bus. Together these form the SRAM ping-pong buffer control module for the valid image pixels of PAL video, all written in VHDL.
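A behavioral sketch of the ping-pong principle (in Python rather than the VHDL used on the board; frame size and data source are illustrative):

```python
import numpy as np

class PingPongBuffer:
    """Two frame buffers used alternately: each frame period, one buffer is
    read (previous frame) while the other is written (current frame), then
    the roles swap."""
    def __init__(self, height: int, width: int):
        self.bufs = [np.zeros((height, width), np.uint16) for _ in range(2)]
        self.write_idx = 0                           # buffer currently being written

    def frame_cycle(self, current_frame: np.ndarray) -> np.ndarray:
        prev = self.bufs[1 - self.write_idx].copy()  # read side: frame stored last cycle
        self.bufs[self.write_idx][:] = current_frame # write side: store this frame
        self.write_idx = 1 - self.write_idx          # swap roles for the next frame
        return prev
```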

(b) Adaptive recursive FPGA design.

According to the mathematical model of the adaptive recursive filtering algorithm, this functional module must contain a sub-module implementing the recursive filter and a sub-module controlling the mapping vector table. Because the algorithm operates across frames, implementing time-domain recursive filtering requires using two off-chip SRAMs as frame memories through the SRAM ping-pong buffer control module: SRAM A stores the frame $Y_{n-1}$ produced by the $(n-1)$th recursive pass, and SRAM B stores the frame $Y_n$ produced by the $n$th pass. In the current frame period, $Y_{n-1}$ is read from SRAM A under ping-pong control and combined with the current input frame $X_n$; the result $Y_n$ is written to the same addresses in SRAM B for use in the data operations of the next frame period. In the next frame period, the pixel data are read from SRAM B and the new result is written back to the corresponding locations in SRAM A. In this way the two SRAMs are used alternately, ping-pong fashion, to buffer the data and perform the periodic recursive operation. The multiplications and additions in the filter equation are implemented in the logic design with constant multipliers and adder/subtractor operators. This completes one basic time-domain recursive unit; since we are implementing the adaptive algorithm, the recursive filtering sub-module must contain several such recursive units with different values of the coefficient K.

The mapping vector table module establishes the mapping from the image change rate ε to the filter coefficient K, adaptively selecting K from the measured ε to control the operation of the recursive filtering sub-module. First, a statistics circuit computes the pixel change rate ε between the two most recent frames. Since this requires the difference between the current frame and the adjacent previous frame, access to the two image frames in off-chip memory is again handled by the SRAM ping-pong buffer control module, with read/write cycling entirely analogous to that of the recursive operation above. The statistics period equals the frame period: for each frame, one value of ε is obtained to represent the speed of target motion in the current image, and the mapping vector table selects a suitable K to control the strength of recursive filtering for the next frame. This periodic control adjustment together with real-time processing accomplishes the adaptive recursive noise reduction.

The DM642 is a dedicated video DSP that integrates digital video ports and supports multi-threaded processing. The DSP receives digital video from the thermal imager and from the FPGA, then performs image registration and image fusion. The fusion algorithm is a color-transfer algorithm in YCbCr space, and the registration uses bilinear interpolation based on an affine transformation.

(1) Image registration.

Because infrared thermal imaging and low-light television have different imaging mechanisms, imperfectly parallel optical axes, and different imaging resolutions, the positions of targets differ between the images or are distorted, e.g., by translation, rotation, scale change, and noise-induced distortion. Since image fusion requires pixel-level registration, the system uses a fast registration method based on an affine transformation. The specific procedure takes the low-light image as the reference and aligns the infrared image to it pixel by pixel, determining the correspondence between matching points in the two images so as to eliminate or reduce the positional differences of targets.

Dual-source images of the same scene are assumed to satisfy an affine transformation model:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = R\begin{bmatrix} x \\ y \end{bmatrix} + T, \qquad R = s\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad T = \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}$$

where R is the rotation matrix and T is the translation vector; $(x, y)$ and $(x', y')$ are the coordinates of corresponding points in the two images, and the four parameters are the row and column translations $\Delta x$ and $\Delta y$, the rotation angle $\theta$, and the scale factor $s$.

For the same imaged scene, since the infrared image has fewer pixels than the low-light image, each infrared pixel corresponds to information from up to four adjacent low-light pixels. In the affine transformation from the infrared image to the low-light image, a weighted average over several low-light pixel positions is therefore needed to compute the corresponding gray value, i.e., a bilinear interpolation.

From this we obtain, for each pixel of the registered infrared image, the addresses of its four corresponding source pixels and their weights, and when the registration module runs on the DSP the gray value of each registered infrared pixel can be computed by this algorithm. To reduce the computation further, we build an affine-transformation lookup table for every pixel of the registered infrared image: when the registration module executes, the affine addresses and weights are fetched by direct table lookup, the gray value is computed, and fusion then proceeds. This greatly increases system speed and achieves real-time registration while preserving pixel-level registration accuracy.
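A compact sketch of the lookup-table registration (Python/NumPy for clarity; it assumes the four-parameter affine model above, with parameter values supplied by the caller):

```python
import numpy as np

def build_affine_lut(out_shape, s, theta, dx, dy):
    """Precompute, for each output (registered IR) pixel, the four source
    addresses and bilinear weights of the inverse affine mapping."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Inverse mapping: where each output pixel comes from in the IR image.
    c, si = np.cos(theta) / s, np.sin(theta) / s
    u = c * (xs - dx) + si * (ys - dy)
    v = -si * (xs - dx) + c * (ys - dy)
    x0, y0 = np.floor(u).astype(np.int32), np.floor(v).astype(np.int32)
    fx, fy = u - x0, v - y0
    weights = np.stack([(1 - fx) * (1 - fy), fx * (1 - fy),
                        (1 - fx) * fy,       fx * fy])
    return x0, y0, weights

def apply_lut(ir, x0, y0, weights):
    """Bilinear resampling of the IR image using the precomputed table."""
    x0c = np.clip(x0, 0, ir.shape[1] - 2)
    y0c = np.clip(y0, 0, ir.shape[0] - 2)
    out = (weights[0] * ir[y0c,     x0c] + weights[1] * ir[y0c,     x0c + 1] +
           weights[2] * ir[y0c + 1, x0c] + weights[3] * ir[y0c + 1, x0c + 1])
    return out.astype(ir.dtype)
```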

(2) Implementation of image registration and fusion algorithms.

The system software is developed under TI's CCS integrated development environment. CCS includes the real-time operating system DSP/BIOS, which is designed mainly for multi-task real-time scheduling and synchronization, host/target communication, and real-time monitoring. DSP/BIOS provides many real-time operating system facilities, such as task scheduling, inter-task synchronization and communication, memory management, real-time clock management, interrupt service management, and peripheral driver management. The DSP/BIOS tools help developers control the DSP's hardware resources more easily, coordinate the execution of software modules more flexibly, and speed up software development and debugging.

Since video capture, display, image registration, and image fusion all run concurrently, the system software uses a multitask design: five tasks with the same priority are created with the DSP/BIOS tools. Tasks have priority over the idle loop and are below software and hardware interrupts. A DSP/BIOS task object is a thread managed by the task (TSK) module. Before entering the DSP/BIOS scheduler, the program initializes the modules to be used, including: (1) DM642 and system board initialization; (2) RF-5 module initialization; (3) creation of the capture and display channels: two capture channels and one display channel are created and started. After initialization, the system runs the five task modules under the DSP/BIOS scheduler; they exchange messages through the SCOM module of RF-5. Their specific functions are:

Capture task module: obtains the latest frame of video from the TVP5150 using the FVID exchange call provided by the driver, sends an SCOM message containing the frame pointer to the fusion task, and then waits for the message to return before continuing.

Display task module: plays back the image on the display device using the FVID exchange call provided by the output driver, then waits for the next message from the fusion task before continuing.

Fusion task module: contains the image registration unit and the image fusion unit. After receiving the two images, the registration unit registers them using the affine-transformation lookup table; the registered images are then passed to the fusion unit, which fuses them according to the selected fusion mode. After fusion, the module waits for its previous SCOM message to the display task to return, then sends a new message notifying the display task that new output data are ready.

UART task module: mainly detects and responds to data transmitted by the PC.
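As a rough behavioral analogue of this task structure (Python threads and queues standing in for DSP/BIOS tasks and SCOM messages; purely illustrative, not the DSP implementation):

```python
import queue
import threading

ir_q, llt_q, out_q = (queue.Queue(maxsize=2) for _ in range(3))

def capture_task(grab, q):
    while True:
        q.put(grab())                       # like an SCOM message carrying a frame pointer

def fusion_task(register, fuse):
    while True:
        ir, llt = ir_q.get(), llt_q.get()   # one frame from each source channel
        out_q.put(fuse(register(ir), llt))  # register the IR frame, then fuse

def display_task(show):
    while True:
        show(out_q.get())                   # play back the fused frame

# threading.Thread(target=capture_task, args=(grab_ir, ir_q), daemon=True).start()  # etc.
```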