Advantages and disadvantages of synthetic aperture radar. Laser ranging, Doppler imaging and aperture synthesis. Analysis of the technical task

Aperture synthesis (SA) is a signal-processing method that substantially improves the transverse (cross-range) linear resolution of a radar relative to the direction of the antenna beam, and hence the detail of radar images of terrain. SA is used to obtain radar maps (in mapping), for ice reconnaissance, and in other situations. In quality and detail such maps are comparable to aerial photographs but, unlike the latter, can be obtained in the absence of optical visibility of the earth's surface (when flying above clouds, or at night).

14.1. Principle of operation and structure of a radar with SA

The detail of a radar image depends on the radar's linear resolution. When polar coordinates are used, the range (radial) resolution is determined by the parameters of the probing signal, while the transverse (tangential) resolution is determined by the width of the antenna beam and the distance to the target (Fig. 14.1). The smaller the resolution element, the finer the detail of the radar image of the terrain.

Fig. 14.1. Parameters characterizing the detail of the radar image

The task of reducing the radial resolution cell is solved by using probing signals of short pulse duration or by going over to complex signals (frequency-modulated or phase-coded). Reducing the tangential cell requires a narrow beam, since the beamwidth is proportional to λ/dA (λ is the wavelength, dA the antenna length), and dA cannot exceed the longitudinal size (length) of the aircraft. The main way to increase the tangential resolution is therefore to synthesize the antenna aperture using the motion of the aircraft. Most often SA is used in so-called side-looking radars (Fig. 14.2).
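To make the limitation concrete, here is a minimal numeric sketch; the radar parameters are hypothetical, chosen only for illustration:

```python
def tangential_resolution_real(wavelength, antenna_len, distance):
    # Real-aperture cross-range resolution: delta = theta * D = lambda * D / d
    return wavelength * distance / antenna_len

# Hypothetical airborne radar: 3 cm wavelength, 5 m fuselage antenna, 50 km range.
delta = tangential_resolution_real(0.03, 5.0, 50e3)
print(delta)  # 300.0 m: far too coarse for detailed imaging
```

Even with an antenna the length of the fuselage, the real-aperture cell is hundreds of metres at typical ranges, which is why aperture synthesis is needed.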

In radars whose antenna is placed along the fuselage, the resolution is higher the greater the longitudinal size of the fuselage. Since the fuselage structurally limits the size of the onboard antenna, the image detail of radars with along-fuselage antennas is improved only to a degree, and the dependence of the resolution on range remains.

A more radical path leads to synthetic aperture radar (SAR), which exploits the forward motion of the aircraft.

Fig. 14.2. Antenna patterns of a side-looking radar

The principle of aperture synthesis. Let a linear phased array (aperture) (Fig. 14.3, a) consist of N emitters. Summing the signals received by the emitters yields the array's directional pattern at each instant. The same aperture can be synthesized by moving a single emitter (antenna) sequentially along it at some speed V, receiving the signals reflected from the target, and then processing them jointly (Fig. 14.3, b). This synthesizes the aperture of a linear antenna with an effective size Ls and a beam of width θs ≈ λ/(2Ls). However, time is spent on the synthesis and the radar equipment becomes more complex.

Let the aircraft move at a certain height with a constant velocity V, in a straight line parallel to the earth's surface (Fig. 14.4).

Fig. 14.3. Phased antenna array (a) and an aperture synthesis scheme with a moving emitter (b)

The antenna, which has a beam of width θ and is turned 90° to the flight path, successively passes through a number of positions in which it receives signals reflected from a target at point M on the earth's surface. At different antenna positions the signals from the same point travel different distances, which changes the phase shifts of these signals caused by the path difference. Since the signal travels the path twice (to the target and back), two signals received at adjacent antenna positions differ in phase by Δφ = (4π/λ)ΔR, where ΔR is the one-way path difference.

Depending on whether or not the phase advances accumulated along the path sections are compensated during the processing of the received signals, one distinguishes focused and unfocused SAR. In the first case the processing reduces to moving the antenna, memorizing the signals, compensating the phase advances and summing the signals (see Fig. 14.3, b); in the second, to the same operations but without compensation of the phase advances.

Fig. 14.4. The appearance of phase shifts during rectilinear motion of the aircraft in the course of aperture synthesis

Tangential resolution of a SAR. Unfocused processing still ensures an effectively in-phase addition of the signals if the phase difference between the signals from the extreme and central elements of the aperture does not exceed π/2. Taking this maximum value, it follows from Fig. 14.4 that the synthesized aperture may not be longer than Ls ≈ √(λD).

Thus, when the signals are summed over a trajectory section equal to Ls, the width of the synthesized beam will be θs ≈ λ/(2Ls).

The tangential resolution is then δt = θs·D ≈ 0.5√(λD), i.e. it grows only as the square root of the distance to the target (Fig. 14.5).

Fig. 14.5. Dependence of the tangential resolution on range in a conventional radar (1), in an unfocused SAR (2) and in a focused SAR (3)
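The three curves of Fig. 14.5 can be reproduced numerically. A small sketch, assuming hypothetical parameters (3 cm wavelength, 5 m real antenna):

```python
import math

def delta_real(lam, d, D):
    # Conventional radar (curve 1): resolution grows linearly with range
    return lam * D / d

def delta_unfocused(lam, D):
    # Unfocused SA (curve 2): grows as the square root of range
    return 0.5 * math.sqrt(lam * D)

def delta_focused(d):
    # Focused SA (curve 3): d/2, independent of range
    return d / 2.0

lam, d = 0.03, 5.0
for D in (10e3, 50e3, 100e3):
    print(D, delta_real(lam, d, D), round(delta_unfocused(lam, D), 2), delta_focused(d))
```

At 50 km the three cases give roughly 300 m, 19 m and 2.5 m, which is the qualitative picture of Fig. 14.5.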

With focused processing, the signals are summed over a section of the trajectory equal to the width of the footprint of the real antenna installed on the aircraft, i.e. of the region that irradiates the target point: Ls = θ·D = λD/dA.

In this case the width of the synthesized beam is θs = λ/(2Ls) = dA/(2D),

and the tangential resolution is δt = θs·D = dA/2, i.e. it does not depend on range and improves as the real antenna is made shorter.

SAR structural scheme. The basis of a SAR is a coherent-pulse radar built according to a scheme with internal coherence (Fig. 14.6).

The coherent oscillator (CO) is used, together with a radio-frequency generator (RFG), to form the probing signal in a single-sideband modulator. The probing signal is gated by the pulse train from the modulator; the power amplifier (PA) is the output stage of the transmitter. Signal processing (memorizing, phase compensation, summation) is usually performed by complex digital filters at low frequency, so the scheme includes quadrature channels, each beginning with its own phase detector. The source of the reference voltage for the phase detectors is the coherent oscillator. The signals of the quadrature channels (which preserve the phase information) are fed either to a recording device or to a real-time digital processing device. With analog processing, the information from the outputs of the quadrature phase detectors goes to a special recording device, for example an optical device that records on photographic film the brightness-modulated glowing spot of a cathode-ray tube screen.

Fig. 14.6. Structural diagram of a synthetic aperture radar

Processing and playback of the information take place later, after the film has been developed, i.e. with a time delay (not in real time).

With digital processing of the signals, the resulting information is obtained immediately, in real time.
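The role of the quadrature channels can be illustrated with a toy model (all numbers hypothetical): two phase detectors multiply the received oscillation by in-phase and quadrature references from the coherent oscillator and low-pass the products, and the resulting pair (I, Q) preserves the signal phase.

```python
import numpy as np

fs, f0, phi = 1e6, 50e3, 0.7          # sample rate, IF frequency, target phase (rad)
t = np.arange(2000) / fs
s = np.cos(2 * np.pi * f0 * t + phi)  # received signal (unit amplitude)

# Phase detectors: multiply by quadrature references, low-pass (here: average)
i = np.mean(2 * s * np.cos(2 * np.pi * f0 * t))   # in-phase channel
q = np.mean(-2 * s * np.sin(2 * np.pi * f0 * t))  # quadrature channel

print(round(float(np.arctan2(q, i)), 3))  # recovered phase ~ 0.7
```

A single (amplitude-only) channel would lose the sign and value of the phase; the pair recovers it, which is exactly what the SA summation needs.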

Principles of signal processing in a SAR. With any form of processing it is necessary to memorize a frame of information about the target signals.

The dimensions of the frame are set in azimuth by the effective size of the synthesized aperture and in range by the viewed swath (Fig. 14.7, a).

Since at each antenna position the signals from the viewed range interval arrive at the receiver input sequentially in time, they are also recorded sequentially in each of the azimuth channels, as shown conditionally by the arrows in Fig. 14.7, b. In this way a frame of the image of the corresponding terrain area is formed. Information about the angular position of a target, i.e. about its azimuth coordinate, can be obtained during aperture synthesis only by analyzing the signals reflected from that target and recorded over the synthesis interval; therefore the information from the recording device is read out sequentially in each of the range channels (Fig. 14.7, c).

Fig. 14.7. Memorized frame of the terrain (a); diagrams of recording (b) and reading (c) of the signals

The signal processed in a SAR. Let the radar operate in pulsed mode. Then during one repetition period Tr the antenna shifts by the segment ΔL = V·Tr.

To avoid losing the target with such an antenna displacement, the shift per period must be small enough, typically no more than half the size of the real antenna (Fig. 14.8). Suppose now that the radar is stationary and the target

Fig. 14.8. Kinematics of the mutual displacement of the radar and a point target

moves relative to it with the same speed V (Fig. 14.9, a). Counting the time from the moment the target (point M) passes the middle of the aperture, we have r(t) = √(D² + V²t²) ≈ D + V²t²/(2D).

As the target passes through the antenna pattern, the Doppler frequency shift (Fig. 14.9, b) and the phase (Fig. 14.9, c) of the signal change according to the laws Fd(t) = -2V²t/(λD), φ(t) = -(4π/λ)(D + V²t²/(2D)).

Note that the coefficients in these laws, which are constant in flight for given V and λ, depend on the range D; consequently, the processing of the signals must be multichannel in range.

The complex amplitude of the reflected signal during aperture synthesis can be represented as

S(t) = S0 exp[-j(4π/λ)(D + V²t²/(2D))].    (14.3)

Fig. 14.9. Formation of the radial velocity component (a); the change of the Doppler frequency (b) and of the phase (c) of the signal as the target passes

In a pulsed radar the signal arrives at discrete instants tn = nTr, so

S(nTr) = S0 exp[-j(4π/λ)(D + V²(nTr)²/(2D))].    (14.4)

The discrete samples (14.4) must be memorized over the time interval Ts = Ls/V, where Ls is the size of the synthesized aperture.

Signal processing algorithms in a SAR. For optimal processing of signal (14.4) a filter is required whose impulse response is matched to the phase law (14.3), i.e. is its time-reversed complex conjugate.
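A minimal sketch of this processing chain, with numpy in place of the radar hardware and hypothetical parameters: the sampled azimuth signal (14.4) is a chirp, and correlating it with the matched replica compresses all the energy into one azimuth cell.

```python
import numpy as np

lam, V, D, PRF = 0.03, 200.0, 10e3, 1000.0  # wavelength, speed, range, rep. frequency
Ts = 0.5                                    # synthesis time, s
n = np.arange(-int(Ts * PRF / 2), int(Ts * PRF / 2) + 1)
t = n / PRF

# Sampled azimuth chirp with the phase law of (14.4); the constant term is dropped
s = np.exp(-1j * (4 * np.pi / lam) * V**2 * t**2 / (2 * D))
h = np.conj(s[::-1])        # matched-filter impulse response

y = np.convolve(s, h)       # azimuth compression
peak = int(np.argmax(np.abs(y)))
print(peak == len(s) - 1, round(abs(y[peak])))  # peak at zero lag, height = pulse count
```

The Doppler slope implied by this chirp is -2V²/(λD), about -267 Hz/s for these numbers, in agreement with the law given after (14.3); signals from other ranges have a different slope and are not compressed, which is why the filter must be retuned per range channel.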

The problem of radically increasing the resolution in the direction perpendicular to the beam axis is particularly relevant for viewing the surface beneath an aircraft or spacecraft, since along the beam axis a very high resolution is achievable by appropriately widening the radar signal spectrum. If the antenna radiation is directed perpendicular to the radar velocity vector, i.e. a side-looking view is performed, the motion of the antenna relative to the irradiated surface makes it possible, with optimal processing of the reflected signals, to obtain a very high resolution in the direction perpendicular to the beam axis as well. Thus the task of obtaining high-definition radar images is solved.

The improvement of resolution in the side-looking view can be regarded either as the result of beam compression under optimal processing (similar to the compression of a pulse with intra-pulse modulation) or as the formation of the pattern of a synthesized antenna array formed as the radar antenna moves relative to the irradiated surface.

Consider the principle of operation and the potential capabilities of an airborne side-looking radar. The station's antenna is elongated along the aircraft axis and forms a beam that is narrow in the horizontal plane and wide in the vertical plane, oriented perpendicular to the aircraft axis. Usually two identical beams are formed, one on each side of the aircraft axis; the distinction between them is immaterial here.

With wavelength λ of the radiated oscillations and longitudinal antenna size d, the beamwidth in the horizontal plane is θ ≈ λ/d. Considering for simplicity that the radiation is confined in the horizontal plane to the angle θ, we find the irradiation time of a surface point at distance D from the radar:

T = l/v = θD/v = λD/(dv),

where v is the speed of the aircraft, assumed constant, and l = θD is the linear width of the beam at distance D from the radar. The radial component of the velocity relative to a point of the irradiated surface is vr = v sin α (Fig. 18.7, a), where α is the angle between the beam axis in the horizontal plane and the direction to the point under consideration. Thus vr = 0 on the beam axis, and at the edges of the beam it reaches the maximum value vθ/2. Since a narrow beam is used in a side-looking radar, sin α ≈ α. Because of the radial velocity component, the reflected signal acquires a Doppler shift F = 2vr/λ that changes according to a linear law from +vθ/λ to -vθ/λ. Thus a frequency-modulated pulse of duration T is received (Fig. 18.7, b) with a frequency deviation ΔF = 2vθ/λ = 2v/d.

With optimal matched processing such a pulse can be compressed to a pulse whose duration τ is the reciprocal of the signal spectrum width, approximately equal to the frequency deviation: τ ≈ 1/ΔF = d/(2v). Hence the resolved distance in the direction of the vector v, perpendicular to the beam axis, is δl = vτ = d/2. Note that at the output of the compressing filter the pulse envelope has the form sin x/x, and its duration (measured at the level 0.64 of the maximum value) determines the limiting time resolution.
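The chain of estimates in this paragraph can be checked numerically (all values hypothetical):

```python
# Irradiation time T = lam*D/(d*v), deviation dF = 2*v/d,
# compressed pulse tau = 1/dF, resolved distance delta = v*tau = d/2.
lam, d, v, D = 0.03, 5.0, 200.0, 50e3

T = lam * D / (d * v)       # time the point stays in the beam, s
dF = 2 * v / d              # Doppler frequency deviation, Hz
tau = 1.0 / dF              # compressed pulse duration, s
delta = v * tau             # cross-range resolution, m

print(T, dF, tau, delta)    # delta equals d/2 = 2.5 m, independent of D
```

The range D cancels out of the final product, which is the numeric face of the "paradox" discussed next.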

Consequently, with coherent processing the resolved distance does not depend on range and is limited by the value d/2. This conclusion, at first sight paradoxical, becomes clear if the resolution of the side-looking radar is analyzed from the point of view of aperture synthesis.

If all the reflected signals are summed coherently (i.e. with allowance for their phases), one can form (synthesize) a beam of width

θs = λ/(2L),

where L is the length of the path over which the summation is performed; the coefficient 2 takes into account the phase advance over the two-way path of the signal, to the distance D and back.

The distance resolved in the direction of flight (perpendicular to the beam axis) is δl = θs·D = λD/(2L); for the maximum summation length L = θD = λD/d this gives δl = d/2. (18.27)

The segment L over which the coherent summation of the reflected signals is performed determines the size of the synthesized aperture, since such summation is similar to summing the signals of a phased antenna array with an aperture equal to L. It now becomes clear why the resolved distance decreases, i.e. the resolution improves, as the aperture of the real antenna is reduced, and why it does not depend on D: the synthesized aperture grows in direct proportion to the width of the real antenna pattern and to the distance of the point under consideration.

However, as L grows, the difficulty of ensuring coherence in the processing of the signals increases. Therefore the antennas of side-looking radars intended to obtain small δl must have a significant aperture size, which makes it possible to realize the coherent processing that provides an approach to the potential resolution of a system with a synthesized aperture defined by formula (18.27).

When going over from a continuous signal to a pulsed one with repetition period Tr, the synthesized antenna is similar to an antenna array whose elements are spaced vTr apart. Side-looking radars usually use pulsed radiation, so such radars are called stations with a synthesized antenna array.

With the radiation of each pulse the radar antenna becomes an element of the synthesized array, whose range to the surface point under consideration equals the shortest distance D (Fig. 18.7, a) only at the moment when this point lies on the beam axis. At the edges of the synthesized array the distance differs from D by ΔD ≈ (L/2)²/(2D) = L²/(8D).

This path difference corresponds to the maximum delay, and hence phase advance, of the signal. If the changing phase delays are determined in flight and taken into account during processing, the synthesized array is called focused. The signal processing system in this case is complex, so it is worth finding out what loss of resolution the "defocusing", i.e. the transition to unfocused processing that ignores the phase shifts, leads to. In the unfocused case a path difference at the ends of the synthesized aperture of no more than λ/16 is commonly allowed, which corresponds to a maximum two-way phase shift of π/4. From this condition the size of the effective aperture of the unfocused synthesized antenna can be found. From Fig. 18.7 it is seen that L²eff/(8D) ≈ λ/16, and consequently Leff ≈ √(λD/2).

Thus, in the absence of focusing the beamwidth of the synthesized aperture of size Leff is θs = λ/(2Leff), and the corresponding linear resolution is δl = θs·D = √(λD/2), i.e. it degrades as the square root of the range.

For signal processing without correction (without focusing) a conventional coherent accumulator with a delay line of one repetition period is suitable. Clearly, the names "focused" and "unfocused" systems appeared by analogy with an optical system, in which focusing of the lens is necessary at a fully open diaphragm.
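A numeric sketch of the unfocused case, assuming the common λ/16 path-difference budget at the aperture edge (the parameter values are hypothetical):

```python
import math

lam, D = 0.03, 50e3                 # wavelength and range

L_eff = math.sqrt(lam * D / 2)      # effective unfocused aperture, ~27.4 m
dD = L_eff**2 / (8 * D)             # path difference at the aperture edge
delta = lam * D / (2 * L_eff)       # linear resolution, equals sqrt(lam*D/2)

print(round(L_eff, 2), round(dD, 6), round(delta, 2))
```

With this budget the path difference at the edge comes out exactly λ/16 by construction, and the unfocused resolution numerically equals the aperture size itself, both √(λD/2).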

With strong stopping-down of the diaphragm, sufficient clarity (sharpness) is obtained without focusing, with the lens set to infinity.

Thus, with focused signal processing (a focused aperture) the maximum linear resolution d/2 in the direction perpendicular to the beam is achievable regardless of the range; with unfocused processing (an unfocused aperture) the resolution is √(λD/2); for a conventional antenna with aperture size d the resolution is θD = λD/d.

The dependence of the resolution on the range D for these cases is presented in Fig. 18.8.

Thus, for the complete realization of the potential capabilities of the synthesized antenna, signal processing with the introduction of phase corrections in accordance with the position of the point under consideration relative to the radar antenna is required. In pulsed radars the signal is repeated with the period Tr, and the corrections are introduced discretely, the time being counted from the moment of reception of the middle pulse of the train, i.e. of the pulse reflected at the instant when the point is on the traverse of the flying aircraft.

The matched filter for the signal of a point target with known range and known radar velocity relative to the target corresponds to the scheme of a coherent filter for a pulse train, in which the pulse amplitudes are multiplied by weighting coefficients and shifted in phase by the correction value. Such processing (focusing) is required for each range element, i.e. a filter is needed for each range channel (the discreteness depends on the range resolution, determined by the width of the signal spectrum), and the filter parameters must be changed when the radar velocity changes.

The requirements on the processing device are determined above all by the synthesis time, equal for focused systems to T = λD/(dv). Thus, for a given aircraft speed, a required resolution and an operating wavelength, the needed size of the synthesized aperture L = vT = λD/d follows. At a pulse repetition frequency Fr, N = TFr signals must be summed in the processing for each range element, of which there may be many in the swath. The number of quantization levels determines the word length of the processing device and, with it, the total amount of information to be processed. With quadrature channels this value doubles and is of the order of 10^8 bits. Taking into account the phase correction in every repetition period, the required processing throughput of such systems is correspondingly high.

Despite the relative complexity, digital implementation of the processing devices with a modern element base is feasible, especially with processing at video frequency. The advantage of digital processing is the possibility of obtaining an image of the terrain beneath the aircraft or satellite in real time.
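For orientation, the amount of data in one synthesis frame can be estimated with a toy calculation (all parameter values are hypothetical, chosen only to reproduce the stated order of magnitude):

```python
T = 1.5          # synthesis time, s
PRF = 1500       # pulse repetition frequency, Hz
n_range = 2000   # range elements in the swath
bits = 8         # quantization word length per sample

pulses = int(T * PRF)                  # signals summed per range element
total_bits = pulses * n_range * bits   # one channel
total_bits *= 2                        # I and Q quadrature channels
print(total_bits)                      # ~7e7, i.e. of the order of 10**8 bits
```

Doubling for the quadrature pair is what pushes the frame toward the 10^8-bit figure quoted in the text.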

If a delay in obtaining the image is permissible (for example, in mapping), it is advisable to apply optical methods of signal processing for aperture synthesis, since optical devices provide multichannel coherent signal processing simultaneously for all range elements.

The principle of the processing is as follows. The received signals are recorded on film drawn at a speed proportional to the aircraft speed v, the range rows lying across the film. The reflected signals are recorded at a distance from the beginning of each row proportional to the range D of the point under consideration; the record in the longitudinal direction (along the film) thus transmits, on the corresponding scale, the distribution of the signals along the synthesized aperture.

After development (the development time determines the delay of the processing), the film is drawn in front of the window of an optical device and illuminated by a uniform coherent light beam. The plane light wave passing through the film is modulated in amplitude and phase by the recorded signal. The sizes of the spots obtained on an optical screen, or on another film, at the output of the optical filter correspond to the width of the pattern of the synthesized antenna, which is many times smaller than the width of the real antenna pattern. By selecting the parameters of the elements (lenses) of the optical filter, coherent processing can be performed and a synthesized radar image of high clarity obtained. It was by using a side-looking radar with aperture synthesis aboard an artificial satellite of Venus that Soviet researchers managed to obtain a clear radar image of this planet, closed to optical observation.

Aperture synthesis is a method of obtaining high angular resolution by combining the results of measurements performed by a radio interferometer, consisting of two small apertures moved within a large aperture and a correlation (multiplying) receiver. The result of a measurement by the aperture synthesis method is similar to a measurement with an antenna of large aperture. In aperture synthesis a large number of measurements are performed at different positions of the elements, and the results are summed with certain weights and phases.

The aperture synthesis method was proposed in 1952 by M. Ryle, who used it to study the radio emission of galaxies. In 1974 Ryle, together with A. Hewish, was awarded the Nobel Prize "for pioneering research in radio astrophysics". Aperture synthesis has become most widespread in radio astronomy and radar. In radio astronomy aperture synthesis is used for studying the angular distribution of the intensity of radio sources with fine structure, from angular minutes down to fractions of a second of arc. Such studies require antennas with a ratio D/λ (D is the linear size of the aperture, λ the wavelength) of about 10^3 to 10^6; hence for the centimetre radio range D should be hundreds of metres and more. Naturally, conventional antennas with such an aperture cannot be built, so the aperture is "synthesized" by measuring at separate points located inside this synthesized aperture and performing the appropriate processing of the measurements. As a result a high angular resolution is achieved.

In the aperture synthesis method the large antenna is divided into N elements. The incident waves reflected from each element arrive at the focus of the antenna in phase. Therefore the high-frequency voltage V(t) at the focus can be written as the sum of the components ΔVi(t) from the individual elements:

V(t) = Σ ΔVi(t).    (1)

The power P at the output of the receiver of the large antenna is proportional to the mean square of the voltage:

P ∝ ⟨V²(t)⟩ = Σi Σk ⟨ΔVi(t)ΔVk(t)⟩.    (2)

It can be seen from formula (2) that the measurement result contains terms that depend only on the signals received by pairs of elements. Each term can be measured with two small antennas, each of the size of an aperture element, placed at positions i and k, and a correlation (multiplying) receiver. If the observed section of the sky contains no variable sources, such an interferometer can be used to measure the terms of the sum (2) one after another.
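The identity behind formula (2), namely that the full-aperture power equals the sum of all pairwise correlations, each measurable by a two-element correlation interferometer, can be checked with a toy simulation (element count, noise level and signal model are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 20000
# Element voltages: one common point-source signal plus independent receiver noise
common = rng.standard_normal(T)
dV = np.array([common + 0.3 * rng.standard_normal(T) for _ in range(N)])

P_total = np.mean(np.sum(dV, axis=0) ** 2)            # full-aperture power
P_pairs = sum(np.mean(dV[i] * dV[k]) for i in range(N) for k in range(N))
print(abs(P_total - P_pairs) < 1e-6)                  # True: identical as in (2)
```

This is why measuring the N(N-1)/2 distinct correlations sequentially with one movable pair recovers what the full dish would have measured at once.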

A segment of an East-West line on the surface of the Earth, seen from the direction of a distant source, turns through 180° in 12 hours. If all the elements of an antenna array on this segment track the source, then in 12 hours one can synthesize a circular aperture, in the plane perpendicular to the axis of the Earth's rotation, with a diameter equal to the length of the segment. The width of the synthesized beam in any direction is inversely proportional to the projection of the aperture onto that direction. The degradation of resolution in directions close to the plane of the equator is removed by using a T-shaped antenna array with segments oriented East-West and North-South (Fig.).

Modern aperture synthesis systems consist of a large number of antennas forming many simultaneously operating independent correlation interferometers, which greatly reduces the observation time. Rotating together with the Earth, each interferometer measures a large number of terms of (2). For multi-element interferometers the aperture synthesis method makes it possible to synthesize a beam of the width obtainable with an aperture of dimensions comparable to the size of the antenna array.

For a more complete extraction of information from the measurement results, a priori information about the brightness of the sky is used. Such a priori information permits the use of systems of widely separated antennas, as well as the construction of sky maps using amplitude measurements alone, when the phase information is unreliable or missing.

The first work using small movable antennas for aperture synthesis was performed in Cambridge (United Kingdom) in 1954. In Sydney (Australia) in 1956 the rotation of the Earth was used for the first time for the synthesis of a two-dimensional array. The best-known aperture synthesis system is the VLA (Very Large Array) antenna array in New Mexico (USA), completed in 1981. It consists of 27 fully steerable paraboloids 25 m in diameter each, which can move along three 21-kilometre rail tracks laid out in the form of the letter Y. The angular resolution of this system at a wavelength of 1.3 cm is 0.05".

The aperture synthesis method is also used in interferometers formed by antennas separated by hundreds and thousands of kilometres (very long baseline radio interferometers). This makes it possible to synthesize apertures comparable to the dimensions of the Earth and to obtain an angular resolution of about 0.001", far exceeding that achieved in optical astronomy. In prospect is the creation of ground-space apertures, some of whose elements will be placed on spacecraft (the RadioAstron project, Russia).


Angular resolution is the most important characteristic of any telescopic system. Optics states that this resolution is uniquely related to the wavelength at which the observation is made and to the diameter of the telescope's entrance aperture. Large diameters, as is well known, are a big problem; it is unlikely that a telescope much larger than the existing ones will be built.
One method of substantially increasing the resolution, used in radio astronomy and radar, is the synthesis of large and very large apertures. In the millimetre range the largest aperture, 14 km, is promised by the 66 antennas of the ALMA project in Chile.

The transfer of aperture synthesis methods into the optical region, where the wavelengths are several orders of magnitude smaller than in radar, is associated with the development of laser heterodyning techniques.

1. Physical foundations of image formation.

It would not be a mistake to say that the image in any optical device is formed by the diffraction of light at the entrance aperture, and by nothing else. Consider the image of an object as seen from the centre of the aperture. The angular distribution of the brightness of the image of an infinitely distant point light source (as, indeed, of any other) will be the same for a lens and for a pinhole camera of equal diameter. The difference between the lens and the pinhole is only that the lens transfers the image formed by its aperture from infinity into its focal plane. In other words, it performs a phase transformation of the incoming plane wavefront into a spherically converging one. For a distant point source and a round aperture the image is the well-known Airy pattern with rings.


The angular size of the Airy disk can in principle be reduced, seemingly increasing the resolution (by the Rayleigh criterion), if the aperture is apodized in a special way. There exists a radial transmission distribution for which the central disk can theoretically be made arbitrarily small. However, the light energy is then redistributed into the rings, and the contrast of the image of a complex object drops to zero.

From the mathematical point of view, the procedure of forming the diffraction image reduces to a two-dimensional Fourier transform of the input light field (in the scalar approximation the field is described by a complex function of coordinates and time). Any image recorded by the eye, a screen, a matrix sensor or any other detector quadratic in intensity is nothing but the two-dimensional amplitude spectrum of the aperture-limited light field emitted by the object. It is easy to obtain the same Airy pattern by taking a square matrix of identical complex numbers (imitating a plane wavefront from a distant point), "cutting" a round "aperture" out of it by zeroing the edges, and performing the Fourier transform of the whole matrix.
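That thought experiment is a few lines of numpy:

```python
import numpy as np

# A plane wave through a circular aperture, Fourier-transformed,
# gives the Airy pattern: a bright core surrounded by rings.
N, R = 256, 32
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
aperture = (x**2 + y**2 <= R**2).astype(complex)  # unit field inside the circle

field = np.fft.fftshift(np.fft.fft2(aperture))    # far-field (focal-plane) amplitude
image = np.abs(field) ** 2                        # intensity: the Airy pattern

center = image[N // 2, N // 2]
print(center == image.max())  # True: the central maximum of the Airy disk
```

Plotting `image` on a logarithmic scale shows the dark rings and sidelobes; the zero-frequency (central) sample is the brightest because the aperture transmission is everywhere nonnegative.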

In short, if the field is somehow recorded (the aperture synthesized) over a sufficiently large area without loss of amplitude and phase information, one can do without the gigantic mirrors of modern telescopes and megapixel sensors, simply computing the Fourier transform of the resulting data array.

2. Locating satellites, and not only them.

Let us observe a stabilized object moving across the line of sight, illuminated by a continuous coherent laser source. The radiation reflected from it is registered by a heterodyne photodetector with a small aperture. Recording the signal for a time T is equivalent to realizing a one-dimensional aperture of length VT, where V is the tangential speed of the object. It is easy to estimate the potential resolution of this method. Consider a near-earth satellite at upper culmination, flying at an altitude of 500 km at a speed of 8 km/s. With 0.1 s of signal recording we obtain a "one-dimensional telescope" 800 metres in size, theoretically capable of resolving, in the visible range, details of the satellite a fraction of a millimetre in size. Not bad for such a distance.
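A back-of-envelope check of these numbers:

```python
lam = 0.5e-6    # visible wavelength, m
V = 8000.0      # satellite tangential speed, m/s
T = 0.1         # recording time, s
R = 500e3       # range, m

L = V * T                    # synthesized one-dimensional aperture, m
theta = lam / L              # diffraction-limited angular resolution, rad
size = theta * R             # resolvable detail on the satellite, m
print(L, size)               # 800 m aperture, ~0.3 mm detail
```

The diffraction limit of an 800 m aperture at 0.5 μm, projected to 500 km, is indeed about a third of a millimetre.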

Of course, at such distances the reflected signal is weakened by many orders of magnitude. However, heterodyne reception (coherent mixing with reference radiation) substantially compensates for this weakening. Indeed, as is well known, the output photocurrent of the receiver in this case is proportional to the product of the amplitudes of the reference radiation and of the incoming signal. By increasing the power of the reference radiation we thereby amplify the whole signal.

One can look at this from another side. The spectrum of the signal recorded from the photodetector is a set of Doppler components, each of which is the sum of contributions from all points of the object having the same radial velocity. The one-dimensional distribution of the reflecting points on the object determines the distribution of the spectral lines in frequency. The resulting spectrum is essentially a one-dimensional "image" of the object in the coordinate "Doppler shift". Two points of our satellite located 1 mm apart in the plane perpendicular to the line of sight have a difference of radial velocities of about 0.01-0.02 mm/s (the ratio of this difference to the satellite speed equals the ratio of the distance between the points to the distance to the satellite). The difference of the Doppler frequencies of these points for a visible wavelength of 0.5 μm will be (F = 2V/λ) about 100 Hz. The spectrum (Doppler image) of a whole microsatellite, say 10 cm in size, will fit into a band of about 10 kHz. A quite measurable quantity.
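The same estimates in code (reproducing the rounded figures quoted in the text):

```python
lam = 0.5e-6   # wavelength, m
V = 8000.0     # satellite speed, m/s
R = 500e3      # range, m

dx = 1e-3                      # 1 mm between two points on the object
dv = V * dx / R                # radial-velocity difference, m/s
f = 2 * dv / lam               # Doppler frequency difference, Hz
print(round(dv * 1e3, 3), round(f))   # 0.016 mm/s and 64 Hz (order of 100 Hz)

obj = 0.1                      # 10 cm object
band = 2 * (V * obj / R) / lam # full Doppler spread, Hz
print(round(band))             # 6400 Hz, i.e. of order 10 kHz
```

The exact values (64 Hz, 6.4 kHz) match the text's rounded "about 100 Hz" and "about 10 kHz".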

One can also look at it from a third side. This technology is nothing other than the recording of a hologram, i.e. of the interference pattern arising when the reference and signal fields are mixed. It contains amplitude and phase information sufficient to reconstruct the full image of the object.

Thus, having illuminated a satellite with a laser, registered the reflected signal and mixed it with a reference beam from the same laser, we obtain a photocurrent at the photodetector whose dependence on time reflects the structure of the light field along the "one-dimensional aperture", the length of which, as already mentioned, can be made large enough.

A two-dimensional aperture is, of course, much better and more informative. Let us place several photodetectors uniformly across the direction of the satellite's motion and record the reflected field over an area VT * L, where L is the distance between the outermost photodetectors and is in principle unlimited; for example, the same 800 meters. In this way we synthesize the aperture of a "two-dimensional telescope" measuring 800 * 800 meters. The resolution along one, transverse, coordinate (L) depends on the number of photodetectors and the distance between them; along the other, "temporal" coordinate (VT), it depends on the bandwidth of the laser radiation and of the signal from the photodetector.
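For a sense of scale, the diffraction limit of such an 800 m aperture can be estimated. This is only a sketch: the 1000 km range is an assumed example figure, and the factor of 2 for the round trip in reflection is my assumption, not a statement from the text.

```python
# Diffraction-limited resolution of the hypothetical 800 m synthesized
# aperture; the range and the round-trip factor 2 are assumptions.
lam = 0.5e-6          # m, visible wavelength
L = 800.0             # m, synthesized aperture size
R = 1000e3            # m, assumed range to the satellite
delta = lam * R / (2 * L)
print(delta)          # sub-millimeter resolution at 1000 km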

So, we have the light field recorded over a very large area and can do with it anything we like. For example, obtain a two-dimensional image of very small objects at a very large distance without any telescopes. Or reconstruct the three-dimensional structure of the object by digital refocusing in range.

Of course, the real three-dimensional configuration of the reflecting points on the object does not always coincide with their "Doppler" distribution over radial velocities. They coincide when the points lie in one plane. But in the general case, too, a great deal of useful information can be extracted from the "Doppler image".

3. What was done before.

Some time ago the American agency DARPA funded a program whose essence was the implementation of exactly this technology. The idea was to examine objects on the ground (tanks, for example) with ultra-high resolution from a flying aircraft, and some encouraging data were obtained. However, around 2007 the program was closed, or perhaps classified, and nothing has been heard of it since. In Russia, too, something was done; here you can see an image obtained at a wavelength of 10.6 µm.

4. Technical realization of the equipment at a wavelength of 1.5 µm.

On mature reflection, I decided not to write anything here. Too many problems.

5. Some first results.

So far we have managed to "examine", from a distance of 300 meters, a flat diffusely reflecting metal object 6 by 3 mm in size. It was a piece of some printed circuit board; here is a photo:


The object rotated around an axis perpendicular to the line of sight, and the reflected signal was registered near the maximum of the reflection (the glint). The laser spot illuminating the object was about 2 cm in size. In total, 4 photodetectors were used, spaced 0.5 meters apart. The size of the synthesized aperture is estimated at 0.5 m by 10 m.
Just in case, here are the recorded signals (left) and their spectra (right), in relative units:


From the previous photo of the object, only the illuminated and reflecting areas that we want to see have been extracted in Photoshop:


The image reconstructed by a two-dimensional Fourier transform of the 4 signals, scaled for comparison:


This picture consists of only 4 rows (and about 300 columns); the vertical resolution is accordingly about 0.5 mm, yet the dark corner and both round holes seem to be visible. The horizontal resolution is 0.2 mm, the width of the conductive tracks on the board; all five of them are visible. (An ordinary telescope would need a two-meter diameter to resolve them in the near IR.)
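The reconstruction step can be mimicked on synthetic data: with a handful of detectors and a time record per detector, each point reflector appears as a single two-dimensional frequency, and a 2-D Fourier transform of the recorded matrix turns it into an image. Everything below (sampling rate, detector count, "reflector" coordinates) is invented for the illustration, not taken from the experiment.

```python
import numpy as np

fs, nsamp, ndet = 1024.0, 256, 4          # assumed sampling and geometry
t = np.arange(nsamp) / fs
# two hypothetical point reflectors:
# (Doppler shift in Hz, spatial frequency in cycles per detector spacing)
pts = [(100.0, 0.25), (200.0, 0.5)]
rec = np.zeros((ndet, nsamp))
for m in range(ndet):
    for f, kap in pts:
        rec[m] += np.cos(2*np.pi*f*t + 2*np.pi*kap*m)  # heterodyne record
img = np.abs(np.fft.fft2(rec))            # 2-D "image": one peak per point
```

With these invented parameters the two reflectors land exactly on bins (1, 25) and (2, 50) of the 4 x 256 spectrum, plus their mirror images, since the records are real-valued.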

In truth, the resolution obtained is far from the theoretical limit, so it would be good to bring this technology to maturity. The devil, as you know, is in the details, and there are plenty of details here.
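The "theoretical limit" can be estimated from the aperture sizes quoted earlier (0.5 m by 10 m at a range of 300 m); the round-trip factor of 2 for imaging in reflection is my assumption:

```python
# Diffraction-limited resolution delta ~ lam*R/(2L) for the two aperture
# dimensions quoted above (assumed round-trip factor of 2).
lam, R = 1.5e-6, 300.0
for L in (0.5, 10.0):
    print(L, lam * R / (2 * L))
# 0.5 m aperture -> ~0.45 mm, close to the ~0.5 mm vertical figure;
# 10 m aperture  -> ~22 micrometers, far below the 0.2 mm seen horizontally
```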

Thank you for your attention.

  • 5. Optical systems performing the Fourier transform
  • 6. Optical Fourier transforms
  • 6.1. Fourier images and Wiener spectra of some functions
  • 8. Interference of light beams. The concept of spatial and temporal coherence
  • 10. Physical principles of holography
  • 10.1. Basic types of holograms
  • 10.2. Fraunhofer, Fresnel and Fourier holograms
  • 10.3. Associative properties of holograms
  • 11. Generalized functional scheme of optical signal processing
  • 12. Coherent optical analog information processing systems
  • 12.1. Coherent analog optical processor
  • 13. Synthesis of spatial operator filters
  • 14. Coherent optical signal processing using feedback
  • 15. Optoelectronic hybrid computing systems
  • 16. Operation of the acousto-optic radio-signal spectrum analyzer
  • 17. Radar stations with a synthesized antenna aperture (RSA)
  • 18. Discrete and analog control of the plane of polarization of a light beam
  • 18.1. Polarization modulation based on splitting of the light beam
  • 18.2. Discrete switching of the tilt angles of arbitrarily oriented planes of polarization of light radiation
  • 18.3. Analog control of an arbitrarily oriented plane of polarization of a light beam
  • 19. Discretization of the optical signal
  • 19.1. The Kotelnikov-Shannon sampling theorem
  • 19.2. Discrete Fourier transform
  • 17. Radar stations with a synthesized antenna aperture (RSA)

    Radars with an antenna mounted along the fuselage allow detailed radar images to be obtained only at comparatively short ranges. When the surveyed strip lies tens of kilometers from the aircraft, antennas tens or even hundreds of meters long would be required, and such antennas cannot be accommodated on an aircraft.

    To overcome this difficulty, the antenna aperture synthesis method is used: the signals reflected from the target are memorized over a section of the flight path whose length equals the required antenna length. Subsequent processing of the recorded signals in onboard or ground equipment yields a radar image of high detail.

    At present, optical processing systems have become the most widespread. They are based on a holographic method in which the radar signals recorded on film (the radio hologram) are used to form the radar image.

    In RSA the principle of holography is used both in recording the radio holograms and in the optical signal-processing devices.

    The reference wave passing through the hologram creates an image of the object exactly in the place where it was at the moment the hologram was recorded. The image of a point, however, will not be a point but somewhat blurred. The spot size Δx, which determines the detail of the image being created, can be found from the expression

    Δx = λR / X,

    where λ is the wavelength of the illuminating wave, R is the distance from the hologram to the object, and X is the linear size of the hologram.

    Let us formulate the main features of the holographic process:

    - coherent reference and signal waves must be present;

    - in the process of holographic recording, the amplitude-phase distribution of the signal wave field is converted into an amplitude distribution, and this signal is registered in the form of a hologram (an interference pattern);

    - to reconstruct the image, the hologram must be illuminated by the reference wave.

    Holograms have a number of interesting properties. One of them is the possibility of changing the scale of the image: if the linear size of the hologram and the wavelength of the reconstructing beam are changed by the same factor, the scale of the image changes by the corresponding factor. If the changes of wavelength and hologram scale are not proportional, an image will still be formed, but with scale distortions. In many practical applications these distortions do not play a significant role.

    This property makes it possible to record holograms at one wavelength, for example in the radio band, and then to reconstruct the wavefront and observe the image at a different wavelength, in the optical range.
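    The scale bookkeeping implied here is simple. A minimal sketch, with assumed example wavelengths (3 cm radio recording, visible-light reconstruction): to avoid scale distortion the film record must be reduced by the wavelength ratio.

```python
# Wavelength/scale bookkeeping for reconstructing a radio-band hologram
# optically; both wavelengths are illustrative assumptions.
lam_radar = 0.03      # m, recording wavelength (3 cm radio band)
lam_opt = 0.6e-6      # m, reconstructing wavelength (visible light)
demag = lam_radar / lam_opt
print(demag)          # the film record must be reduced ~50 000 times
```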

    Consider a side-looking radar system installed on board an aircraft, as shown in Fig. 17.1. Suppose that a sequence of pulsed radar signals is directed at the terrain from the radar on the aircraft, and that the reflected signals, which depend on the reflectivity of the terrain, are received from a strip alongside the aircraft's track. Let us call the radar-image coordinate transverse to the flight direction the "range", and the coordinate along the flight direction the "azimuth". It is also convenient to call the line connecting the radar on the aircraft with a target the "slant range". If a radar system of the usual type is used, the azimuth resolution will be of the order of λR1/D, where λ is the wavelength of the radar signal, R1 is the slant range, and D is the antenna aperture size along the flight path. However, the radar wavelength is several orders of magnitude longer than an optical wavelength, and therefore, to obtain an angular resolution comparable to that of a photographic reconnaissance system, a very large antenna aperture is required. The necessary antenna length can be tens or even hundreds of meters, which is obviously difficult to realize on an aircraft.
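    A quick numerical illustration of the λR1/D estimate; the wavelength, range and target resolution below are assumed example values, chosen to match the "tens and hundreds of meters" conclusion:

```python
# Required real-aperture length D = lam*R1/delta for a given azimuth
# resolution; all numbers are illustrative assumptions.
lam = 0.03        # m, radar wavelength (3 cm)
R1 = 30e3         # m, slant range
delta = 3.0       # m, desired azimuth resolution
D = lam * R1 / delta
print(D)          # -> 300 m: far too long for a real aircraft antenna
```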

    However, this difficulty can be overcome by applying the synthesized-aperture method. The basic principle of aperture synthesis is that the various elements of the array do not have to exist in space simultaneously. Suppose that the aircraft carries a small side-looking antenna and that its relatively wide radar beam scans the terrain as the aircraft moves. The positions of the aircraft at which the radar pulses are emitted can be regarded as the elements of a linear antenna array. The signal received at each of these positions is recorded coherently as a function of time, since a reference signal is fed to the radar receiver, which makes it possible to register both amplitude and phase information simultaneously. The various recorded complex waveforms are then processed appropriately to synthesize the effective aperture.

    To examine in more detail how this method of antenna synthesis is realized, let us first consider the problem with a single point target and then extend the results by superposition to the more complicated case. Suppose that the point target is located at the point x1.

    The radar pulse is formed by periodic rectangular modulation of a sinusoidal signal with angular frequency ω. The signal reflected from the point target can then be written as

    S(t) = A1 exp(i[ωt − 2kR]),

    where A1 is the corresponding complex constant. The complex quantity A1 includes such factors as the radiated power, the reflectivity of the target, the phase shift, and the propagation law (the received power is inversely proportional to the fourth power of the range). Using the paraxial approximation, the range R can be written as

    R ≈ R1 + (x − x1)² / (2R1),

    so that

    S(t) = A1 exp(i[ωt − 2kR1 − k(x − x1)²/R1]), (17.3)

    where k = 2π/λ. The expression (17.3) depends on t and x, and the spatial and temporal variables are connected by the relation

    x = vt, (17.4)

    where v is the speed of the aircraft. If we now suppose that the terrain at distance R1 consists of a set of N point targets, then by superposition the full reflected signal can be written in the form

    S(t) = Σ(n=1..N) An(xn, R1) exp(i[ωt − 2kR1 − k(vt − xn)²/R1]). (17.5)
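    The model (17.5) is easy to evaluate numerically. A minimal sketch for two point targets, with the carrier term ωt dropped (i.e. working with the complex envelope); wavelength, range, speed and target positions are all illustrative assumptions:

```python
import numpy as np

# Numerical sketch of the received-signal model (17.5); every parameter
# below is an assumed example value, and the carrier omega*t is omitted.
lam = 0.03                      # m, radar wavelength
k = 2 * np.pi / lam
R1 = 10e3                       # m, slant range
v = 200.0                       # m/s, aircraft speed
targets = [(-50.0, 1.0), (80.0, 0.7)]   # (x_n in meters, amplitude A_n)
t = np.linspace(-1.0, 1.0, 4001)
s = sum(A * np.exp(1j * (-2*k*R1 - k*(v*t - xn)**2 / R1))
        for xn, A in targets)   # each target contributes an azimuth chirp
```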

    If the reflected radar signal described by (17.5) is demodulated using a synchronous detector, the demodulated signal can be written as follows:

    S(t) = Σ(n=1..N) An(xn, R1) cos[ωc t − 2kR1 − k(vt − xn)²/R1 + φn], (17.6)

    where ωc is an arbitrary carrier frequency and φn is an arbitrary phase angle. To store the reflected radar signal, a cathode-ray tube is used. The demodulated signal fed to it modulates the intensity of the electron beam, which is swept in the vertical direction synchronously with the reflected radar pulses. If the image of the signal on the tube screen is projected onto a photographic film moving horizontally at constant speed, a sequence of range traces is recorded, forming a two-dimensional record (Fig. 17.2). The vertical lines carry the range information, while the azimuth positions are laid out horizontally. Thus the recorded image is a set of samples of the signal S(t). The sampling is performed in such a way that, by the time the recording on film is finished, the record is essentially indistinguishable from the original signal. With such recording, the time variable is evidently converted into a spatial variable measured along the recording direction. With proper exposure, the amplitude transmittance of the developed film represents the variation of the reflected radar signal in azimuth. Thus, considering only the data recorded along the line y = y1, the amplitude transmittance can be represented as

    t(x, y1) = k1 + k2 Σ(n=1..N) An(xn, R1) cos[ωx x − 2kR1 − (k/R1)(v/vf)²(x − xn vf/v)² + φn], (17.7)

    where k1 and k2 are the bias and proportionality coefficients, x = vf t is the coordinate along the film, vf is the speed of the film, and ωx = ωc/vf.

    (Fig. 17.2. The range coordinate (y) runs along the beam sweep, the azimuth coordinate (x) along the film motion; the trace modulates the brightness of the electron beam.)

    Since the cosine can be represented as the sum of two complex-conjugate exponentials, the sum in (17.7) can be written as two sums, T1 and T2:

    T1(x, y1) = (k2/2) Σ(n=1..N) An exp(i[ωx x − 2kR1 − (k/R1)(v/vf)²(x − xn vf/v)² + φn]), (17.8)

    T2(x, y1) = (k2/2) Σ(n=1..N) An exp(−i[ωx x − 2kR1 − (k/R1)(v/vf)²(x − xn vf/v)² + φn]). (17.9)

    For simplicity, let us limit ourselves to the problem with a single target. Then for n = j equation (17.8) takes the form

    T1(x, y1) = C exp(iωx x) exp[−i(k/R1)(v/vf)²(x − xj vf/v)²],

    where C is the corresponding complex constant. The first exponential describes a linear phase function, i.e. simply a tilt of the diffracted wave. The angle of inclination to the plane of the film is determined by the expression

    sin θ = ωx / k0,

    where k0 = 2π/λ0 and λ0 is the wavelength of the light used in reconstruction. Thus, apart from the linear phase factor, (17.8) is a superposition of N positive cylindrical lenses with focal lengths

    f = (λ/2λ0)(vf/v)² R1, (17.13)

    centered at the points determined by the expression

    x = vf xn / v, n = 1, 2, ..., N. (17.14)

    Similarly, (17.9) contains the linear phase factor of opposite sign and describes a superposition of N negative cylindrical lenses with centers defined by (17.14) and focal lengths described by (17.13).

    To reconstruct the image, the transparency corresponding to (17.7) is illuminated by a monochromatic plane wave, as shown in Fig. 17.3. It can then be shown, by applying the Fresnel-Kirchhoff theory or Huygens' principle, that the real images created by T1(x, y1) and the virtual images created by T2(x, y1) are reconstructed in the back and front focal planes of the film, respectively. The relative positions of the images of the point scatterers are distributed along the focal line, since the centers of the lens-like structures on the film are determined by the positions of the point scatterers. However, the reconstructed image is smeared in the y direction: the film realizes essentially a one-dimensional function along y = y1 and therefore produces no focusing action in that direction.
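    The focusing performed optically by the lens-like structure of the film can also be sketched digitally: correlating the recorded azimuth chirp with a reference chirp for x = 0 collapses each target's quadratic phase into a peak at its azimuth position. This is only an illustrative digital analogue with assumed parameters, not the optical processor described in the text.

```python
import numpy as np

# Digital azimuth compression of the chirp from one point target;
# wavelength, range, speed and target position are assumed examples.
lam, R1, v = 0.03, 10e3, 200.0
k = 2 * np.pi / lam
t = np.linspace(-1.0, 1.0, 4001)
xn = 120.0                                    # assumed target azimuth, m
sig = np.exp(-1j * k * (v*t - xn)**2 / R1)    # azimuth chirp of the target
ref = np.exp(-1j * k * (v*t)**2 / R1)         # reference chirp for x = 0
c = np.abs(np.correlate(sig, ref, mode='same'))
x_est = v * t[np.argmax(c)]                   # peak lag maps to target azimuth
print(x_est)                                  # -> close to 120.0
```

The cross-correlation plays the role of the cylindrical lens: it removes the quadratic phase, leaving a sharp response at the target's azimuth coordinate.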

    Since our goal is to reconstruct the image not only in the azimuth direction but also in the range direction, the y coordinate must be imaged directly onto the focal plane of the azimuth image. To do this, recall that the focal length of the film's lens-like structure is directly proportional to the slant range R1, which in turn is directly proportional to the y coordinate of the record. Thus, to create a map of the terrain, we must image the y coordinate of the transparency onto the plane whose position is determined by the focal lengths in the azimuth direction. This is easily implemented by placing a positive conical lens directly behind the recording film, as shown in Fig. 17.4. Obviously, if the transmittance of the conical lens is equal to

    T(x) = exp(−i k0 x²/2f),

    where f is a linear function of R1, as in (17.13), then the virtual azimuth image for every range is removed to infinity, while the transmittance in the y direction is left unchanged. If, in addition, a cylindrical lens is placed at its focal length from the film transparency, the virtual image in the y direction will also be at infinity. The azimuth image and the range image (i.e., the images in the x and y directions) then coincide, but at an infinitely distant point. They can be brought back to a finite distance by means of a spherical lens. With this operation, the real image of the terrain in the azimuth and range coordinates is focused in the output plane of the system. In practice, the desired image is recorded through a slit in the output plane.

    The developed secondary film can then be viewed and interpreted.